A few weeks ago, I sat in a faculty meeting about artificial-intelligence policy for our graduate students. After three years of drafting guidelines, revising them and watching them become obsolete, we were debating whether to ban AI use for thesis proposals and dissertations, despite the technical impossibility of enforcing a ban. One colleague cited an MIT Media Lab study that had gone viral over the summer, showing reduced neural connectivity in people who wrote with ChatGPT. “Cognitive debt,” the researchers called it. The study has real limitations, but it crystallized a worry that has been building since ChatGPT’s launch. If writing is a form of thinking, if the struggle to articulate an idea is part of how you come to understand it, then tools that bypass that struggle might degrade a scientist’s capacity for the kind of thinking that matters most for actual discovery.
I’ve been thinking about AI and scientific writing for a while now, and I find myself caught between two positions I can’t fully accept. The worry about “cognitive debt,” or skill decay, or however you want to frame the general issue, feels legitimate to me. But so does the counterargument that these worries are overblown. And the more I’ve looked for evidence that might settle the question, the more I’ve come to believe that the evidence doesn’t exist, at least not for the population actually practicing science.
Anyone who writes seriously will recognize how useful writing can be in the thinking process. You’re drafting an aims page that seemed clear in your head, but it won’t come together when you try to write it down. The logic that felt sturdy in your mind keeps crumbling on the page. You rearrange, and now something else breaks. After an hour of struggle, you realize that what felt like a communication problem was actually a thinking problem, one you couldn’t see until the writing forced you to confront it. The writing-as-thinking view has many proponents, whom we might call, in this context, the “cognitive traditionalists.” As the writer Flannery O’Connor memorably put it: “I write because I don’t know what I think until I read what I say.” Or as Richard Feynman said of his notebooks: “They aren’t a record of my thinking process. They are my thinking process.”
The question is whether the loss of unassisted writing actually matters to science. Here, so-called “AI apologists” have arguments on their side. The first is that writing isn’t as precious as the traditionalists make it out to be. Any cognitive benefit might come not from writing per se but from externalization: the act of forcing internal representations into some external form where they can be inspected, challenged, revised. If that’s the mechanism, then dialogue might work as well as solitary composition. Talking through your ideas with a colleague, defending your approach at a lab meeting, explaining your project to a collaborator from another field: these are also settings where vague intuitions get stress-tested, where gaps become visible, where you discover what you actually think. Psychologists Daniel Kahneman and Amos Tversky famously took long walks together to spark the ideas that led to the behavioral-economics work for which Kahneman later won the Nobel Memorial Prize in Economic Sciences (Tversky had died by the time it was awarded). And if externalization is what matters, some might argue that even dialogue with an appropriate AI system preserves the cognitive benefit. You’re still articulating. You’re still making implicit reasoning explicit. The medium has changed, but perhaps the mechanism remains intact, implying that enough useful cognition can occur outside of writing that, for science, offloading writing won’t be as costly as the cognitive traditionalists suggest.
The AI apologists have data on their side, at least of a certain kind. A study published last month in Science found that researchers who adopted large language models started posting one-third to one-half more papers, with the largest gains among non-native English speakers. This is not nothing. The barriers that non-native speakers face are real and well documented; if AI can lower them, that’s a genuine benefit for equity in science. These data, however, may be strong evidence of greater equity in the professional markers of scientific progress, but they say nothing about the quality of the underlying science. Indeed, many scientists have raised alarms that AI will merely accelerate the production of papers while doing nothing to address the actual bottlenecks that slow genuine scientific progress.