Jeremy Hsu is a science and technology journalist who writes for publications such as Scientific American, Discover, Wired, IEEE Spectrum and Undark. His recent focus has been on how artificial intelligence techniques such as deep learning could impact society.

Jeremy Hsu
From this contributor
How scientists secure the data driving autism research
Protecting the privacy of autistic people and their families poses new challenges in the era of big data.
Why are there so few autism specialists?
A lack of interest, training and pay may limit the supply of specialists best equipped to diagnose and treat children with autism.
Can a computer diagnose autism?
Machine learning holds promise for helping clinicians spot autism sooner, but technical and ethical obstacles remain.
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.
Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.
The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.