Satrajit Ghosh

Director
Open Data in Neuroscience Initiative

Satrajit Ghosh is director of the Open Data in Neuroscience Initiative and a principal research scientist at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology. He is also assistant professor of otolaryngology–head and neck surgery at Harvard Medical School. He is a computer scientist and computational neuroscientist by training.

Ghosh directs the Senseable Intelligence Group, whose research spans spoken communication, brain imaging and informatics, addressing gaps in scientific knowledge in three areas: the neural basis and translational applications of human spoken communication, machine-learning approaches to precision psychiatry and medicine, and the preservation of information for reproducible research and knowledge generation. He is a principal investigator on National Institutes of Health projects supported by the BRAIN Initiative and the Common Fund, and a strong proponent of open and collaborative science.

He received his B.S. (honors) degree in computer science from the National University of Singapore and his Ph.D. in cognitive and neural systems from Boston University.
