Rahul Rao is a freelance science writer, graduate of New York University’s Science, Health and Environmental Reporting Program, and “Doctor Who” fan.

Rahul Rao
From this contributor
Web app tracks pupil size in people, mice
The app relies on artificial intelligence and could help researchers standardize studies of pupil differences in autistic people and in mouse models of autism.
New library catalogs the human gut microbiome
Researchers put hundreds of gut bacteria strains through their paces to chart the compounds each creates — and to help others explore the flora's potential contribution to autism.
New unified toolbox traces, analyzes neurons
‘SNT’ helps researchers sift through microscope images to reconstruct and analyze neurons and their connections.
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.
Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.
The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.