Ingfei Chen is a writer and editor in Northern California who likes telling stories about medicine, science and the environment. Her articles have appeared in The New York Times, Science, KQED Mindshift, Scientific American and Smithsonian, among others.
Ingfei Chen
Freelance writer
From this contributor
What baby siblings can teach us about autism
Studies of infants at risk for autism have not yielded a test to predict who will eventually be diagnosed. But they have transformed our understanding of the condition.
The gene hunters
Criss-crossing the globe on a quest for unusual DNA, researchers have discovered a rare mutation that promises insights into both epilepsy and autism — and points to a treatment.
Wide awake: Why children with autism struggle with sleep
Half of children who have autism have trouble falling or staying asleep, which may make their symptoms worse. Scientists are just beginning to explore what goes wrong in the midnight hour.
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.
Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.
The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.