Lauren N. Ross

Associate professor of logic and philosophy of science
University of California, Irvine

Lauren N. Ross is an associate professor of logic and philosophy of science at the University of California, Irvine. Her research concerns causal reasoning and explanation in the life sciences, primarily neuroscience and biology. One main area of her research explores causal varieties: different types of causes, causal relationships and causal systems in the life sciences. Her work identifies the features characteristic of these causal varieties and their implications for how these systems are studied, how they figure in scientific explanations and how they behave. A second main area of her work focuses on types of explanation in neuroscience and biology, including distinct forms of causal and noncausal explanation.

Ross’s research has received a National Science Foundation CAREER award, a Humboldt Experienced Researcher Fellowship, a John Templeton Foundation grant and an Editor’s Choice Award from the British Journal for the Philosophy of Science. Recent publications include “Causation in neuroscience: Keeping mechanism meaningful,” with Dani S. Bassett, in Nature Reviews Neuroscience, and a forthcoming book, “Explanation in Biology” (Cambridge University Press, Elements series).
