Samuel Gershman

Professor in the Department of Psychology and Center for Brain Science
Harvard University

Sam Gershman is a professor in the Department of Psychology and the Center for Brain Science at Harvard University. His lab studies the computational mechanisms of learning, memory, decision-making and perception. He is also affiliated with the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard. He is the author of the 2021 book “What Makes Us Smart: The Computational Logic of Human Cognition.”

Gershman received his B.A. in neuroscience and behavior from Columbia University in 2007 and his Ph.D. in psychology and neuroscience from Princeton University in 2013. From 2013 to 2015, he was a postdoctoral fellow in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. He joined Harvard University as an assistant professor in 2015.
