🧠 Brain-Only or the LLM Group?
Exploring how AI tools reshape human cognition, creativity, and learning
What happens when we write, think, and create with – or without – artificial intelligence?
A recent wave of research comparing “brain-only,” “search engine,” and “LLM-assisted” essay writing has ignited global debate among educators and AI researchers. These studies explore not only how we use technology, but how it’s beginning to shape the very way we think, remember, and create.
Inside the Experiment: Brain vs. Search vs. LLM
In the study “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Tasks,” participants wrote essays under three conditions:
- Brain-only group: relied solely on their own knowledge.
- Search engine group: used traditional web searches.
- LLM group: used ChatGPT-style language models.
The findings were eye-opening.
🧩 The Brain-only group displayed the strongest neural engagement and the widest network activity – their brains worked harder and more cohesively across memory and reasoning regions.
🔍 The Search group showed moderate brain activity. Their focus shifted toward external visual information (scrolling, filtering results), slightly reducing internal cognitive coordination.
💬 The LLM group, however, exhibited the weakest neural coupling overall. Because the AI provided cognitive support, their brains “offloaded” much of the effort involved in generating, structuring, and refining text – a process known as cognitive offloading.
When Memory Fades: The Cost of Cognitive Offloading
The behavioral data was just as striking.
Despite producing coherent essays, 83.3% of the LLM group failed to quote anything from their own writing just minutes later – and none of them reproduced a quotation accurately.
This suggests their brains never deeply encoded the information – they selected and accepted AI text without fully internalizing it. In contrast, the Brain-only group achieved near-perfect recall by Session 2, while the Search group still remembered more than the LLM users.
The takeaway? Relying too heavily on AI may save time but could erode deep memory formation – weakening the neural pathways that support true learning.
Creativity and Ownership: When the “Soul” Goes Missing
The research uncovered another layer: essays produced with AI assistance tended to sound polished yet hollow.
Teachers described them as “technically correct but soulless.”
The linguistic patterns were homogeneous, lacking personal voice and nuance – a sign that creativity suffered when the AI carried most of the cognitive load. Even more interestingly, LLM users reported a lower sense of authorship, often assigning partial ownership to the AI. This diminished cognitive agency correlated with weaker neural activity in regions tied to self-reflection and evaluation.
When we let AI generate ideas for us, we may also surrender a piece of our creative identity.
The Illusion of Thinking – in AI Itself
But it’s not just humans who face cognitive limits. Another study, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,” examined advanced Large Reasoning Models (LRMs) – AI systems that attempt to simulate multi-step thought.
Their findings:
- Low complexity tasks: Regular LLMs often outperform LRMs. The “thinking” models tend to overthink simple problems.
- Medium complexity: LRMs shine – they handle reasoning tasks that benefit from extended, stepwise analysis.
- High complexity: Both fail completely. As difficulty increases, LRMs paradoxically reduce their reasoning effort – a built-in ceiling to their cognitive scaling.
Even when given explicit algorithms, LRMs still stumble. Their “thinking” isn’t true reasoning; it’s pattern recognition with memory, not genuine comprehension.
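To make these complexity regimes concrete: the study probes models with puzzles whose difficulty can be dialed up one notch at a time – Tower of Hanoi is one of them, since adding a disk roughly doubles the number of required moves. Below is a minimal Python sketch of that style of evaluation, not the paper’s actual harness; `ask_model` is a hypothetical stand-in for whatever reasoning model is being tested.

```python
from typing import Dict, List, Optional, Tuple

Move = Tuple[str, str]  # (from_peg, to_peg)

def hanoi_solution(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> List[Move]:
    """Ground-truth move list for n-disk Tower of Hanoi (length 2**n - 1)."""
    if n == 0:
        return []
    return (hanoi_solution(n - 1, src, dst, aux)   # move n-1 disks out of the way
            + [(src, dst)]                          # move the largest disk
            + hanoi_solution(n - 1, aux, src, dst)) # stack the n-1 disks on top

def ask_model(prompt: str) -> List[Move]:
    """Hypothetical stand-in: send the prompt to the model under test and parse its moves."""
    raise NotImplementedError("wire this up to the reasoning model being evaluated")

def accuracy_by_complexity(max_disks: int = 10) -> Dict[int, Optional[bool]]:
    """Score the same puzzle at increasing sizes and record exact-match correctness."""
    results: Dict[int, Optional[bool]] = {}
    for n in range(1, max_disks + 1):
        reference = hanoi_solution(n)  # 2**n - 1 moves, so complexity grows exponentially
        prompt = (f"Solve Tower of Hanoi with {n} disks on pegs A, B, C. "
                  f"List every move as (from_peg, to_peg).")
        try:
            results[n] = ask_model(prompt) == reference
        except NotImplementedError:
            results[n] = None  # no model attached in this sketch
    return results

if __name__ == "__main__":
    print(accuracy_by_complexity(max_disks=6))
```

With a real model wired in, the telling signal is not average accuracy but the point where correct answers collapse as n grows – and, per the study, whether the model’s reasoning effort shrinks rather than grows once it passes that point.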
A Balanced Model for Human-AI Learning
When the two studies are viewed together, a nuanced message emerges:
AI can enhance productivity but may also dull our cognitive edge if used too early or too often.
Interestingly, in a follow-up session, participants who first wrote essays without AI and later used it on familiar topics showed greater brain connectivity. This suggests that starting with self-driven work – before introducing AI – creates a neurocognitively optimal sequence for learning and creativity.
Takeaway: Partnership, Not Replacement
For educators and learners, the goal should not be to avoid AI but to integrate it wisely.
Let AI handle repetitive, mechanical tasks – formatting, grammar, summarization – while humans focus on idea generation, critical thinking, and reflection.
Our minds grow through challenge and effort. When we offload too much to machines, we risk forgetting not just what we know, but how to think.
True innovation lies in the balance between human depth and AI efficiency – a partnership where technology amplifies, rather than replaces, our intellectual agency.