
Steve Ramirez

Assistant professor of psychological and brain sciences
Boston University

Steve Ramirez is an assistant professor of psychological and brain sciences at Boston University and a former junior fellow at Harvard University. He received his B.A. in neuroscience from Boston University, where he began researching learning and memory in Howard Eichenbaum’s lab, and went on to earn his Ph.D. in neuroscience in Susumu Tonegawa’s lab at the Massachusetts Institute of Technology, where his work focused on artificially modulating memories in the rodent brain. Ramirez’s current work focuses on imaging and manipulating memories to restore brain health.

Both in and out of the lab, Ramirez is an outspoken advocate for making neuroscience accessible to all. He is passionate about diversifying and amplifying voices in the field through intentional mentorship, an approach for which he recently received a Chan Zuckerberg Science Diversity Leadership Award. He has also received an NIH Director’s Transformative Research Award, the Smithsonian’s American Ingenuity Award and the National Geographic Society’s Emerging Explorer Award. He has been recognized on Forbes’ 30 Under 30 list and MIT Technology Review’s 35 Innovators Under 35 list, and he has given two TED Talks.

Explore more from The Transmitter


Sharing Africa’s brain data: Q&A with Amadi Ihunwo

These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

By Lauren Schenkman
20 May 2025 | 6 min read

Cortical structures in infants linked to future language skills; and more

Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

By Jill Adams
20 May 2025 | 2 min read

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants

A competition that trains language models on relatively small text datasets, comparable in size to the number of words a child hears by age 13, seeks solutions to some of the major challenges facing today’s large language models.

By Alona Fyshe
19 May 2025 | 7 min read