Joseph Gleeson is professor of neurosciences and pediatrics at the University of California, San Diego.

Joseph Gleeson
Professor
University of California, San Diego
From this contributor
Lessons from n-of-1 trials: A conversation with Joseph Gleeson
Some conditions are too rare for conventional drug trials, leading some scientists to test bespoke treatments in single participants. Gleeson discusses the merits — and limitations — of these tiny trials.

Diets may help autistic children with certain genetic profiles
No diet is likely to treat autistic people on a large scale, but diets based on a genetic profile may bring big benefits to a few.

Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.
