David Barack is a philosopher and neuroscientist who studies the neural circuits of foraging behavior and the conceptual foundations of cognitive neuroscience. He is a postdoctoral researcher at the University of Pennsylvania. After earning his B.A. in consciousness studies at Pitzer College, he received his M.A. in philosophy from the University of Wisconsin-Milwaukee and his Ph.D. in philosophy from Duke University, where he also received a certificate in cognitive neuroscience. He is currently writing a book on the neurodynamical foundations of mind.

David Barack
Research associate in neuroscience and philosophy
University of Pennsylvania
From this contributor
Must a theory be falsifiable to contribute to good science?
Four researchers debate the role that non-testable theories play in neuroscience.

Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.