Mayada Elsabbagh

Associate Professor
McGill University

Mayada Elsabbagh is associate professor of neurology and neurosurgery at the Montreal Neurological Institute at McGill University in Canada. Her research focuses on understanding the root causes of autism and tracing its developmental pathways. Her approach combines innovative research with the mission of accelerating the translation of scientific discoveries into community impact. Elsabbagh’s contributions include the discovery of early brain-function markers for autism prior to the onset of behavioral signs. She has supported the successful launch of several collaborative research and translational networks aimed at accelerating the pace of discovery in autism, including the Transforming Autism Care Consortium, a Québec research network supported by the Fonds de recherche du Québec-Santé and several community partners. She is also active in global efforts to improve evidence-based practice in the community and capacity-building in low- and middle-income countries. The public value and social relevance of Elsabbagh’s research have been recognized through various awards, including the Neville Butler Memorial Prize and the British Psychological Society Neil O’Connor Prize.
