Yingxi Lin

Professor of psychiatry and neuroscience
University of Texas Southwestern Medical Center

Yingxi Lin is professor of psychiatry and neuroscience, and chief of the Psychiatry Neuroscience Research Division, at the University of Texas Southwestern Medical Center. Her research focuses on uncovering molecular and circuit mechanisms in neurodevelopment, memory formation and neuropsychiatric conditions. Employing a broad array of multidisciplinary experimental techniques, work in her lab spans analyses from the genomic and molecular level to the synapse, circuit and whole-animal behavioral levels.

Originally from China, Lin studied engineering physics at Tsinghua University and received her Ph.D. in biophysics from Harvard University. She conducted her postdoctoral research under Michael Greenberg at Harvard Medical School. She was an assistant professor from 2009 to 2015 and an associate professor from 2015 to 2018 at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology. Prior to her current role, she was a full professor and director of the Neuroscience Graduate Program at SUNY Upstate Medical University.
