While AI boosts efficiency, it may also make humans “dumber.” A recent study from MIT’s Media Lab, posted on the preprint platform arXiv, found that people who used AI tools to write papers showed lower brain activity than those who relied solely on their own minds. Long-term dependence on AI tools can build up “cognitive debt,” weakening critical thinking, increasing susceptibility to manipulation, and diminishing creativity.
Large language models such as ChatGPT and DeepSeek are seeing ever wider use. To explore the cognitive costs of using AI for paper writing, Nataliya Kosmyna and her colleagues at MIT’s Media Lab ran an experiment. They divided participants into three groups: one using AI (ChatGPT), one using a search engine (Google), and one relying entirely on their own brains (with internet access restricted), and recorded each group’s brain-wave activity (EEG) while the participants wrote. After the papers were finished, some participants were reassigned to a different group and asked to write a new paper on the same topic.
Analysis of the EEG readings revealed markedly different neural connectivity patterns among the three groups: the brain’s connectivity weakened systematically as external support increased. The brain-only group showed the strongest and most widespread neural connections, with more active information flow from the occipital lobe (at the back of the brain) to the prefrontal cortex. The search engine group showed intermediate connectivity, with stronger activation in regions associated with visual processing and memory, reflecting their engagement with visual information while searching.
The AI group showed the weakest connectivity. Although this group also worked through a digital interface, it did not show the same level of visual-cortex activation as the search engine group, and its connectivity was especially weak in the frequency bands associated with episodic memory consolidation and semantic encoding. This suggests the participants were passively integrating AI-generated content rather than internalizing it into their own memory networks. Once the AI was taken away, they struggled to quote from the paper they had just written, and some could not quote it at all.
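To make the idea of “connectivity in a frequency band” concrete, here is a minimal sketch of one common, generic way to quantify it: magnitude-squared coherence between two EEG channels, computed with SciPy. This is illustrative only — the channel data here is synthetic, and the MIT study used its own connectivity analysis, not necessarily coherence.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic "EEG": two channels sharing a 10 Hz (alpha-band) rhythm
# plus independent noise, sampled at a typical EEG rate.
rng = np.random.default_rng(0)
fs = 256                        # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of data

shared = np.sin(2 * np.pi * 10 * t)              # shared 10 Hz rhythm
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence: 0 = unrelated, 1 = perfectly coupled,
# estimated per frequency bin via Welch's method.
f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)

# Average coherence within two bands: one containing the shared
# rhythm (alpha) and one without any shared signal (gamma).
alpha = cxy[(f >= 8) & (f <= 12)].mean()
gamma = cxy[(f >= 30) & (f <= 45)].mean()

print(f"alpha-band coherence: {alpha:.2f}")
print(f"gamma-band coherence: {gamma:.2f}")
```

Because the two channels share a rhythm only in the alpha band, the alpha-band coherence comes out clearly higher than the gamma-band value — the same kind of band-wise comparison that underlies statements like “connectivity was weakest in the bands associated with memory consolidation.”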
The authors believe these findings show that external tools such as AI not only change task performance but also reshape the underlying cognitive architecture. The brain-only group relied on widely distributed neural networks for endogenous content creation; the search engine group used a hybrid strategy of managing and regulating visual information; and the AI group followed a procedural strategy centered on integrating and polishing AI suggestions.
Behavioral data suggest that stronger neural connectivity is associated with more robust memory and more precise semantic processing. While the brain-only group carried a higher cognitive load, they achieved deeper learning and reported a stronger sense of ownership of their papers. The AI group, by contrast, gained efficiency but showed weaker memory traces and a weaker sense of ownership of their work.
In the final part of the experiment, participants switched groups. Those who had originally used AI and were moved to the brain-only condition showed lower brain activity: far from catching up with the original brain-only group, they fell behind even the search engine group. Those who had originally relied on their own brains and were moved to the AI condition showed a slight dip in brain activity but still maintained relatively high neural engagement, and the quality of their papers improved.
The authors warn of “cognitive debt.” Linguistic analysis and interviews reveal that participants who moved from the AI group to the brain-only group kept returning to a narrower range of topics. This repetitive pattern suggests that most of them had not engaged deeply with the subject matter or critically examined the content the AI supplied, resulting in more superficial writing. The phenomenon reflects the accumulation of “cognitive debt”: mental effort saved in the short term at the cost of long-term capability degradation, including weakened critical thinking, increased susceptibility to manipulation, and reduced creativity.
“At a time when the educational impact of large language models is not yet fully understood, this study reveals a grim possibility: their use may erode learning ability.” The authors argue that, as AI tools become ubiquitous, their impact on neural and cognitive development must be assessed carefully, in particular the trade-off between relying on external tools like AI and the brain’s own internalized synthesis.