
Kenneth Harris

Professor of quantitative neuroscience
University College London

Kenneth Harris is professor of quantitative neuroscience at University College London. Together with Matteo Carandini, he co-directs the Cortical Processing Laboratory. The aim of the laboratory is to understand the computations performed by neuronal populations in the visual system, the underlying neural circuits and the way these computations lead to perceptual decisions. Current research efforts focus on how cortical populations integrate sensory information with information from within the brain.

Harris received a B.A. and Part III in mathematics from the University of Cambridge in 1993 and a Ph.D. in neural computation from University College London in 1998. He then moved to Rutgers University for postdoctoral work, where he later established his own laboratory studying neuronal population activity in the neocortex. He next moved to Imperial College London before joining the faculty at University College London.

Harris received an Alfred P. Sloan Research Fellowship in 2005, and a Royal Society Wolfson Research Merit Award and an EPSRC Leadership Fellowship in 2010. He was named a Wellcome Trust Investigator and a Simons Investigator in 2014.

Explore more from The Transmitter


Sharing Africa’s brain data: Q&A with Amadi Ihunwo

These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

By Lauren Schenkman
20 May 2025 | 6 min read

Cortical structures in infants linked to future language skills; and more

Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

By Jill Adams
20 May 2025 | 2 min read

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants

A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.

By Alona Fyshe
19 May 2025 | 7 min read