Morton Ann Gernsbacher is Vilas Research Professor and Sir Frederic Bartlett Professor of Psychology at the University of Wisconsin–Madison. She specializes in autism and psycholinguistics and has written and edited books for professional and lay audiences, as well as more than 100 peer-reviewed articles and book chapters, on these subjects. She currently serves as co-editor of the journal Psychological Science in the Public Interest and associate editor for Cognitive Psychology, and she previously held editorial positions at Memory & Cognition and Language and Cognitive Processes. She was president of the Association for Psychological Science in 2007.
Morton Ann Gernsbacher
Professor
University of Wisconsin–Madison
From this contributor
Book review: “Neurotribes” recovers lost history of autism
Steve Silberman’s new book, “Neurotribes,” recounts his 15-year quest to understand “the legacy of autism.”
Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.
Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.
The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears by age 13, seeks solutions to some of the major challenges of today’s large language models.