Kari Hoffman

Associate professor of psychology
Vanderbilt University

Kari Hoffman is associate professor of psychology at Vanderbilt University, specializing in computational primate neuroethology within the Vanderbilt Brain Institute, the Data Science Institute, the Department of Biomedical Engineering and the Department of Psychology. Her research investigates how neural circuits organize and adapt to allow an organism to build and apply knowledge effectively.

Hoffman’s lab uses naturalistic, contingent tasks with primate models to understand brain function in real-world contexts, focusing on how memories are structured over time. To understand neural population organization during and after learning, her team uses high-density, wireless multisite ensemble recordings. These neural and behavioral measures are then compared with computational models of learning and generalization.

Hoffman earned her Ph.D. in systems and computational neuroscience from the University of Arizona and completed a postdoctoral fellowship in the lab of Nikos Logothetis at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. Her contributions to neuroscience have been recognized with Sloan and Whitehall fellowships, an Ontario Early Researcher Award, and designation as a Kavli Fellow.
