Rebecca Saxe’s work addresses the human brain’s capacity for abstract thought and the origins of ‘theory of mind,’ the ability to understand the beliefs, hopes and plans of other people.
Rebecca Saxe
Professor
Massachusetts Institute of Technology
From this contributor
U.S. agency backtracks on broad interpretation of ‘clinical trial’
Autism researchers need no longer worry that their basic research will become entangled in the red tape associated with clinical trials.

1985 paper on the theory of mind
In 1985, Simon Baron-Cohen, Alan Leslie and Uta Frith were the first to report that children with autism systematically fail the false-belief task.
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.
