Large language models

Recent articles

How artificial agents can help us understand social recognition

Neuroscience is chasing the complexity of social behavior, yet we have not answered the simplest question in the chain: How does a brain know “who is who”? Emerging multi-agent artificial intelligence may help accelerate our understanding of this fundamental computation.

By Eunji Kong
16 January 2026 | 5 min read

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants

A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.

By Alona Fyshe
19 May 2025 | 7 min read

‘Digital humans’ in a virtual world

By combining large language models with a modular cognitive-control architecture, Robert Yang and his collaborators have built agents capable of grounded reasoning at the linguistic level. Striking collective behaviors have emerged.

By Kevin Mitchell
10 February 2025 | 51 min watch

Are brains and AI converging?—an excerpt from ‘ChatGPT and the Future of AI: The Deep Language Revolution’

In his new book, to be published next week, computational neuroscience pioneer Terrence Sejnowski tackles debates about AI’s capacity to mirror cognitive processes.

By Terrence Sejnowski
21 October 2024 | 12 min read

Explore more from The Transmitter

Error equation predicts brain’s ability to generalize

Four statistical measures of neural network geometry capture how well brains and artificial networks use what they already know to solve new problems, a study suggests.

By Natalia Mesa
10 April 2026 | 5 min read

Embrace complexity to improve the translatability of basic neuroscience

Researchers must learn to view heterogeneity as an essential feature of the systems they study and a central consideration in experimental design, not a variable to control for or reduce.

By Linda Douw, Klaus Eyer, Lara Keuck
9 April 2026 | 5 min read

Romain Brette reveals fundamental flaws in commonly assumed neuroscience concepts

His new book, “The Brain, In Theory,” offers alternatives to many of the computer science frameworks currently driving theoretical neuroscience.

By Paul Middlebrooks
8 April 2026 | 131 min listen