Reward prediction error
Recent articles
Vijay Namboodiri and Ali Mohebi on the evolving story of dopamine’s role in cognitive function
Researchers discuss the classic stories of dopamine’s role in learning, ongoing work linking it to a wide variety of cognitive functions, and recent research suggesting that dopamine may help us "look back" to discover the causes of events in the world.
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.
