Why Am I Getting Dumber From Using ChatGPT?
MIT scanned 54 brains writing essays with ChatGPT. Connectivity halved, recall collapsed, ownership tanked. Here is the mechanism, and the fix.
Stop configuring. Start building.
SaaS builder templates with AI orchestration.
Problem: You used to draft an email without help. Now you open ChatGPT first. The reply comes back fast and reads fine, except you cannot quote a single sentence of it five minutes later. Reading a long PDF feels like a chore. Holding an argument in your head for ten minutes feels harder than it did last year. You are not imagining the slide. Researchers have started measuring it.
Quick Win: Paste this rule above any prompt you would normally lead with:
"Do not give me an answer yet. List the three best questions I should be asking about this. I will pick one, write my own draft, and then you refine it."

That single move flips the order of engagement. You think first. The model refines second. Keep reading for what the brain-scan studies actually show, and the cheap habits that keep your edge.
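If you drive the model from a script instead of the chat window, the same rule can be enforced mechanically. A minimal sketch, assuming you build prompts as strings before sending them anywhere; the `think_first` helper is illustrative, not part of any official API:

```python
# The think-first rule from above, prepended to every prompt so the model
# must surface questions before it is allowed to answer.
RULE = (
    "Do not give me an answer yet. "
    "List the three best questions I should be asking about this. "
    "I will pick one, write my own draft, and then you refine it."
)

def think_first(prompt: str) -> str:
    """Wrap a raw prompt so the rule always arrives ahead of the task."""
    return f"{RULE}\n\n{prompt}"

wrapped = think_first("Draft a reply to this customer complaint.")
print(wrapped.startswith("Do not give me an answer yet."))  # True
```

The point of putting it in code rather than in the chat box is that you stop relying on remembering the habit in the moment.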
The Feeling Has A Name Now
You felt it before anyone gave it a label. The blank page used to fill itself once you sat with it long enough. Now the cursor sits there until you tab over to a chatbot. The work goes faster. Your grip on the work goes weaker.
MIT calls this cognitive debt. The phrase comes from a 2025 EEG study with the title Your Brain on ChatGPT. The metaphor is borrowed from finance. Every shortcut you take with the tool is a small loan. The bill arrives later, in skills you no longer have.
What 32 Electrodes On 54 Heads Picked Up
Nataliya Kosmyna and her team at the MIT Media Lab put 54 people through four months of essay writing. Three groups. One wrote with ChatGPT. One wrote with Google. One wrote with no tools at all. Each session was an SAT-style prompt with 32-channel EEG running the whole time.
The numbers are the kind that shut down a debate:
| Measure | ChatGPT group result |
|---|---|
| Brain connectivity in alpha and theta bands | About half the level of the no-tool group |
| Cognitive load during the task | Down 32% versus baseline |
| Writing speed | 60% faster |
| Could quote a passage they had just written | 17% of the time |
| Self-perceived ownership of the essay | Lowest of the three groups |
| English-teacher rating of voice | "Soulless" |
Alpha and theta waves are the bands tied to creative ideation, semantic search, and memory consolidation. When ChatGPT was in the loop, those bands quieted down. The work got finished. Almost nothing of it stuck.
Dr. Zishan Khan, a child psychiatrist commenting on the findings, put it plainly: the neural connections that help you access information, recall facts, and stay resilient under pressure all weaken with disuse.
It Is Not Just MIT
A separate team at Microsoft Research and Carnegie Mellon ran the largest survey in the field for CHI '25. Hank Lee and colleagues collected 936 real-world AI interactions from 319 knowledge workers. The headline correlation:
| Factor | Effect on critical thinking |
|---|---|
| Higher confidence in the AI | Less critical thinking (B = -0.69, p < 0.001) |
| Higher confidence in yourself | More critical thinking (B = 0.26, p = 0.026) |
| Time pressure | Less critical thinking |
| Task perceived as low stakes | Less critical thinking |
Workers reported doing critical thinking on only 60% of the AI-aided tasks they shared. The mental work shifted from doing the task to checking the AI. Verification is real cognitive labor, but it is shallower than the original analysis would have been.
The paper revives a 1983 essay by Lisanne Bainbridge called Ironies of Automation. Her point holds: when you hand the routine work to a machine and reserve only the exceptions for the human, the human stops getting reps. By the time an exception arrives, the muscle is gone.
Google Did A Smaller Version Of This In 2011
Betsy Sparrow, Jenny Liu, and Daniel Wegner ran four experiments at Columbia and called the result the Google Effect. People remember less of the content they expect to find online later. They remember the location of the file better than the file itself. The internet had become external memory, the way a colleague or a spouse can be.
Google still made you click. Skim. Compare. Decide. Synthesis was your job. ChatGPT removes that whole step. The answer arrives finished, fluent, and confident. Three or four mental reps you used to do are now gone.
Why ChatGPT Is Different From A Calculator
Other tools automate one piece of the chain. A calculator automates arithmetic. GPS automates wayfinding. Spell check automates spelling. The thinking on either side of that one step is still yours.
A chatbot automates the whole chain. From the question to the artifact. The thinking does not happen somewhere else. It does not happen at all. Cal Newport puts it the cleanest way anyone has: writing is thinking. Skip the writing and you skip the thinking it would have produced.
There is a second trap on top of that. Gerlich's 2025 study of 666 people found that even users who tried to evaluate AI answers were anchored by the first reply. Whatever the model said first set the frame for the rest of the conversation. Careful users and careless users converged on similar answers. The first frame wins.
What The Neuroscience Actually Shows
The numbers above describe a behavior. The neuroscience underneath them describes a mechanism.
Three patterns repeat across the MIT data:
| Brain band | Function | What ChatGPT did to it |
|---|---|---|
| Theta (4 to 8 Hz) | Memory encoding, semantic search | Suppressed during and after the task |
| Alpha (8 to 12 Hz) | Internal attention, idea retrieval | Suppressed during the task |
| Prefrontal connectivity | Planning, judgment, self-monitoring | Reduced versus the no-tool group |
The hippocampus does not encode a memory you did not work for. The prefrontal cortex does not strengthen circuits it did not run. This is not metaphor. It is the same use-it-or-lose-it pattern that shows up for any motor or cognitive skill that goes unpracticed for long enough.
A separate finding, reported by the American Hospital Association, is the cleanest real-world echo of this. Doctors who used AI to flag polyps during colonoscopies for three months were measurably worse at spotting polyps without the AI when researchers turned it off. Three months. Trained specialists.
The Receipts Are Not Only Academic
Search the word "atrophy" on Hacker News and you land on Addy Osmani's Avoiding Skill Atrophy in the Age of AI. The top comment reads: "I feel dumber, less confident, and less motivated now than I ever did pre-AI. I become easily frustrated, and reading docs or learning new frameworks feels like a chore." That comment has hundreds of upvotes.
The same line shows up everywhere:
| Forum | Thread |
|---|---|
| r/GithubCopilot | "I feel dumber nowadays because of AI" |
| r/GradSchool | "ChatGPT is making my students stupider" |
| r/edtech | "AI isn't a tool, it's a surrogate" |
| r/programming | "AI Is Making Us Worse Programmers" |
| r/nosurf | "Is ChatGPT making us dumber" |
YouTube has a one-million-view video titled ChatGPT Brain Rot Is Real. TikTok has an active #chatgptbrainrot discovery page. The Wall Street Journal ran a piece called How to Make Sure ChatGPT Doesn't Make You Dumber. The Atlantic coined "the age of de-skilling". Harvard Gazette ran a faculty Q&A asking if AI dulls our minds. A pattern this loud across this many surfaces is not panic. It is people noticing the same thing in their own lives.
The Riskiest Users Are The Youngest
Michael Gerlich's survey of 666 people found a steep age gradient. Critical thinking scores for the 17-to-25 cohort were roughly 45% lower than for the 46-and-up cohort. The same group reported leaning on AI tools 40 to 45% more often than their elders.
Younger users grew up with the answer machine on the desk. They never built the offline habit it would later replace. Kosmyna told TIME she put the paper out as a pre-print specifically because she feared a "GPT kindergarten" rollout before the developmental data was in.
The flip side is good news for the adults reading this. Older brains in the study held connectivity better. The reps you already have do not vanish overnight. They do erode if you stop using them.
The One Habit That Kept Brains Lit Up
Buried in the MIT paper is a result that changes the whole conversation. The fourth session of the study was a switch test. People who had used ChatGPT for three sessions had to write the next essay with no tools. People who had used no tools for three sessions got ChatGPT for the first time.
The first group was lost. They could not retrieve their own earlier arguments. Connectivity stayed flat. The second group looked different. Their brains lit up with high alpha and theta connectivity, prefrontal engagement, and active occipito-parietal regions. The MIT team called this group the Brain-to-LLM condition.
Same tool. Same task. Different order of engagement. Different result.
The rule that falls out of this is short. Think first. Prompt second. Refine third.
Habits That Make ChatGPT Augment Instead Of Atrophy
Pick three of these and run them for two weeks. The point is not purity. It is keeping the reps:
| Habit | What it protects |
|---|---|
| Write a rough draft in your own words before you open the chatbot | Prefrontal planning, semantic retrieval |
| Ask the model for raw facts, not finished conclusions | Synthesis, judgment |
| Demand a counter-argument to every answer it gives you | Anchoring resistance |
| Verify any quote, number, or citation by hand | Long-term memory encoding |
| Sit with a hard problem for ten minutes before prompting | Frustration tolerance, deep search |
| Read the source the AI cites, not just the summary | Comprehension, source evaluation |
| Close the tab before reviewing your draft one last time | Voice, ownership, recall |
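The "sit with a hard problem for ten minutes" habit is the easiest one to automate if you prompt from scripts. A toy sketch; the `PromptGate` class is hypothetical, and the ten-minute default is the article's suggestion, not a measured threshold:

```python
import time

class PromptGate:
    """Refuses to say a prompt is ready until a think-first delay has passed."""

    def __init__(self, delay_seconds: float = 600.0):
        # 600 seconds = the ten-minute sit-with-it window suggested above.
        self.delay = delay_seconds
        self.opened_at = time.monotonic()

    def ready(self) -> bool:
        """True once the waiting period since the gate opened has elapsed."""
        return time.monotonic() - self.opened_at >= self.delay

gate = PromptGate(delay_seconds=0.1)  # short window so the sketch runs fast
time.sleep(0.2)
print(gate.ready())  # True: the window has passed, prompting is allowed
```

The gate does nothing clever. It just makes the cheap habit the default instead of an act of willpower.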
Andy Clark, the philosopher who, with David Chalmers, proposed the extended-mind thesis, gives the cleanest one-liner for the new tools. Treat the model like a colleague who is sometimes brilliant and sometimes entirely off the rails. You verify a colleague. You should verify the model.
Where Build This Now Fits
Most AI products use a single shape. You ask. It answers. You ship. Your brain is never in the loop after the prompt. That is the order MIT measured falling apart.
Build This Now is a SaaS build system that runs on Claude Code. The pipeline is five commands long. The order is the point.
/discover runs first. Six research agents force you to spell out the idea, the user, the market, the pricing, and the tools before any code exists. You are deciding, not consuming.
/mvp-spec runs second. The spec for every feature lands in front of you for review. You read it, push back, edit, approve. The architecture is yours before a file gets written.
/mvp-build runs third. Eighteen specialist agents take the spec and build it. Quality gates check the work. You verify and accept each piece. The agents do the typing. You keep the judgment.
That sequence is the Brain-to-LLM order, encoded as a build pipeline. The thinking happens before the model writes. The model does not get to anchor the frame because the frame is already yours.
Skip the reps and the muscle goes. Keep the order and the tool stays a tool. Build with the brain on, not the brain off, not the brain after.