So well written. You named the tradeoff so many people are already living through, but haven’t yet paused to see.
I’ve been writing about this too, how reflection, curiosity, and taste are quietly becoming the most critical layers of thinking in the AI era. The more we reward speed and convenience, the more invisible the cost becomes: not just cognitive atrophy, but identity drift.
Thank you for holding this thread with care. It’s one of the most important ones we have right now.
I love how you said “identity drift.” That captures something I’ve been feeling but hadn’t quite put words to. It’s not just about thinking less, it’s about slowly disconnecting from how we think and who we are. I look forward to reading your work!
Thank you, appreciate it. Going to read through your blog also. Have a great weekend.
Being a mindful sceptic, I love this 😉
And as the '80s girl band Bananarama proclaimed, "it ain't what you do, it's the way that you do it." I find brainstorming with AI a true learning experience because I have decades of knowing how to learn.
But the cynic in me worries that if you are an authoritarian, putting the bots in the classroom is exactly what you would do.
What would’ve made this study even more fascinating is if they’d included a fourth group where students use ChatGPT as an interactive collaborator. Not just typing a prompt and pasting the output, but actually co-creating: building an outline together, refining ideas, and editing with intention.
I’d bet that kind of engaged use would show even higher cognitive activation than going solo.
I completely agree... much more research is needed!
Yes, we are in the very beginning stages of all of this.
This MIT “AI harms brains” study really tells us nothing because it has serious methodological problems:
1. INCREDIBLY TINY SAMPLE: Only 54 people aged 18-39
2. This has NOT been peer reviewed: The researchers released their findings before any scientific review
3. The conditions aren't real-world use cases: The task was SAT essay writing, not everyday AI usage
4. This is CLEARLY advocacy over science: Lead researcher admitted releasing early to influence policy, saying she was “afraid policymakers might decide to do GPT kindergarten”
5. The big picture: The study conflates passive AI dependency (copy-pasting outputs) with skilled AI collaboration. It doesn’t examine how people actually integrate AI thoughtfully into their work.
THE BOTTOM LINE: This research tells us more about bad AI habits than AI itself. When used as a thinking partner rather than a replacement for thinking, the experience is fundamentally different.
FOLKS! Quality research takes time!
Preliminary findings released to drive policy decisions should be viewed with HEALTHY skepticism.
I totally agree; this study has numerous flaws, and I don’t see it as definitive by any means. I shared it more as a jumping-off point. The post isn’t really about the science itself, but about what happens behaviorally when we start reinforcing shortcuts over thinking.
I appreciate your breakdown!
Thanks. The issues with AI certainly have more to do with human nature than the tools themselves.
The study is deeply flawed though... https://x.com/anecdotal/status/1935075074568225095
I really like your breakdown. I definitely knew there were flaws in the study, and you laid them out so clearly. I wasn’t trying to endorse the science as much as use it to spark the bigger conversation about reinforcement and what gets repeated.
Yes that seems to be the point of the study, to confirm priors. Always a red flag.