Moses V. Chao is professor of cell biology, physiology and neuroscience, and psychiatry at the New York University School of Medicine. He is the recipient of a Zenith Award from the Alzheimer’s Association, a Jacob Javits Neuroscience Investigator Award and a Guggenheim Fellowship. He is also a fellow of the American Association for the Advancement of Science and past president of the Society for Neuroscience.

Moses V. Chao
Professor of cell biology, physiology and neuroscience, and psychiatry
New York University School of Medicine
From this contributor
The question of regeneration—an excerpt from ‘Periphery: How Your Nervous System Predicts and Protects Against Disease’
In his recent book, Moses Chao makes the case that the peripheral nervous system can warn of future illnesses.
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.