Zhe Sage Chen

Associate professor of psychiatry and of neuroscience and physiology
New York University School of Medicine

Zhe Sage Chen is an associate professor of psychiatry and of neuroscience and physiology at New York University School of Medicine. He is also a faculty member in the biomedical engineering department at NYU Tandon School of Engineering. He is founding director of the Computational Neuroscience, Neuroengineering and Neuropsychiatry Laboratory and program director of the Computational Psychiatry program at NYU. He works across computational neuroscience, neural engineering, machine learning and brain-machine interfaces, studying fundamental research questions related to memory and learning, nociception and pain, and cognitive control. He has authored one book and edited three others; his latest book, “Memory and Sleep: A Computational Understanding,” is slated to be published in late 2025.

Chen earned his Ph.D. in electrical and computer engineering from McMaster University and completed his postdoctoral training at RIKEN Brain Science Institute, Harvard Medical School and the Massachusetts Institute of Technology.
