From bench to bot: A scientist’s guide to AI-powered writing

I was initially skeptical of artificial-intelligence tools such as ChatGPT for scientific writing. But after months of using and teaching generative artificial intelligence, I have come to realize that it has a place in the scientific writer’s tool kit, even if it can’t write that grant for you from scratch.

Making AIrt: Prompted with the right palettes and ideas, artificial intelligence is also producing illustrations for this column.
Rebecca Horne / Adobe Firefly
In the “From bench to bot” series, neuroscientist and science writer Tim Requarth will explore the promises and pitfalls of artificial-intelligence tools in writing.

As a scientist, you’re a professional writer. You write grants to fund your research and papers to share your findings with the world. You work under deadline — and under pressure. You may not get paid per word or publish bestselling novels, but your livelihood depends on consistently producing quality writing on time.

That doesn’t necessarily mean writing comes easily to you. So, when generative artificial-intelligence (AI) tools such as ChatGPT burst onto the scene in late 2022, perhaps you were lured by the in silico siren call. Could AI make this critical but challenging part of your job a little less stressful? Or perhaps you looked on with skepticism. Sure, these chatbots might be great for churning out marketing copy, but for scientists they’re just a distraction, a computationally expensive way to produce semi-accurate, uninspired text.

As both a professional science writer and an instructor of scientific writing, I was curious about this new technology while also wary of its limitations and implications — the biases baked into these systems and the ethically questionable way they were built are causes for real concern. At the same time, I knew I couldn’t pretend tools such as ChatGPT don’t exist, because I’d need to be able to guide students, postdoctoral researchers and principal investigators on using them — it’s literally my job: For the past five years, I’ve worked full time at the Vilcek Institute of Graduate Biomedical Sciences at the NYU Grossman School of Medicine in New York City, developing and teaching a scientific-communication curriculum. If ChatGPT and other such innovations turned out to be powerful writing aids, I wouldn’t want people with brilliant ideas but little GPT savvy to get left behind. If, on the other hand, AI-assisted writing was mostly hype, I’d need to show people who were already using it why it wasn’t going to further their writing goals.

Since ChatGPT’s launch in late 2022, I’ve immersed myself in the wonders and woes of generative AI. I’ve advised many scientists at my home institution and given talks and workshops on this fast-evolving field.

This monthly column will distill what I’ve learned — and am still learning — about how best to incorporate these tools into your writing process. I’ll admit, I was initially skeptical of ChatGPT and similar tools for scientific writing. For intellectual work, writing struck me as too intimately linked to thinking to outsource to a chatbot. “If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them,” goes one of George Orwell’s more famous quotes. I didn’t trust ChatGPT to do the thinking for me or for those I teach.

But after months of using and teaching generative AI, I have come to realize that it has a place in the scientific writer’s tool kit, even if it can’t write that grant for you from scratch. And the technology is likely only going to get more powerful and more ubiquitous. In my view, now is not the time for professional writers — scientists included — to bury their heads in the sand. In a decade, lacking proficiency in generative AI might be akin to not knowing how to use a search engine today.

In the columns that follow, I plan to offer a series of use cases that explore how this technology can make writing better, faster or easier. I also won’t shy away from telling you when generative AI simply isn’t up to the task. For each use case, I’ll draw on my years of experience teaching writing with traditional pedagogy and also offer practical AI-assisted workflows. We’ll dive into the dark art of “prompt engineering,” a fancy term for getting AI to do what you want it to. Equally important, we’ll discuss strategies for “output curation,” or how to ensure you always retain the authority to judge whether AI responses meet standards of accuracy and rhetorical impact. In the end, I can’t promise you that AI will solve all your writing troubles — and frankly, I wouldn’t trust anyone who claims it will — but I can promise that you’ll come away with more realistic expectations about what you can ask of AI, and what will still be asked of you.

User beware
Each column will appear with this warning, so heed it now: To incorporate AI into your writing life is to navigate a minefield of possible dangers. AI can confidently produce convincing but inaccurate information (often called “hallucinations”), making it untrustworthy for factual queries, so it’s crucial to build verification checkpoints into your workflow. Even accurate AI-generated content can be biased. It is well documented, for example, that social biases, such as racism and sexism, are embedded in and exacerbated by AI systems. AI may also recapitulate bias in subtler ways, such as by steering users toward established scientific ideas, which are more likely to be represented in its training data.

Data-privacy concerns arise with standard web interfaces, as user inputs can be used to train future AI models, though certain technical workarounds offer more protection. And at least one major journal (Science) and the U.S. National Institutes of Health have banned the use of AI for some purposes. Lastly, although generative AI generally does not pose a high risk of detectable plagiarism, that risk may increase for highly specialized content that is poorly represented in the training data (less of a concern for the typical user, perhaps, but a larger one for the typical scientist). Some AI systems in development may overcome some of these problems, but none will be perfect. We’ll discuss these and other issues at length as they arise.
