Computational neuroscience
Recent articles
This paper changed my life: Marino Pagan recalls a decision-making study from four titans in the field
Valerio Mante and David Sussillo, along with their mentors Krishna Shenoy and Bill Newsome, revealed the complexity of neural population dynamics and the power of recurrent neural networks.

Thinking about thinking: AI offers theoretical insights into human memory
We need a new conceptual framework for understanding cognitive functions—particularly how globally distributed brain states are formed and maintained for hours.

It’s time to examine neural coding from the message’s point of view
In studying the brain, we almost always take the neuron’s perspective. But we can gain new insights by reorienting our frame of reference to that of the messages flowing over brain networks.
Gabriele Scheler reflects on the interplay between language, thought and AI
She discusses how verbal thought shapes cognition, why inner speech is foundational to human intelligence and what current artificial-intelligence models get wrong about language.
Accepting “the bitter lesson” and embracing the brain’s complexity
To gain insight into complex neural data, we must move toward a data-driven regime, training large models on vast amounts of information. We asked nine experts in computational neuroscience and neural data analysis to weigh in.

Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.
