Shafaq Zia is a science journalist and a graduate student in the Graduate Program in Science Writing at the Massachusetts Institute of Technology. Previously, she was a reporting intern at STAT, covering the COVID-19 pandemic and the latest research in health technology.

Shafaq Zia
From this contributor
Spotted around the web: COVID-19 during pregnancy, sleep problems, eugenics
Here is a roundup of news and research for the week of 6 June.
New resource tracks genetic variations in Han Chinese populations
An online database called NyuWa catalogs genetic variations among nearly 3,000 individuals and provides a comprehensive reference genome for the Han people.

Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets, comparable in size to the number of words a child hears by age 13, seeks solutions to some of the major challenges of today’s large language models.
