Emily S. Finn is assistant professor of psychological and brain sciences at Dartmouth College, where she directs the Functional Imaging and Naturalistic Neuroscience (FINN) Lab. Finn has pioneered techniques such as functional connectome fingerprinting and connectome-based predictive modeling for predicting individual behaviors from functional brain connectivity. Her current work is focused on how within- and between-individual variability in brain activity relates to appraisal of ambiguous information under naturalistic conditions such as watching movies or listening to stories.

Emily S. Finn
Assistant professor of psychological and brain sciences
Dartmouth College
From this contributor
To improve big data, we need small-scale human imaging studies
By insisting that every brain-behavior association study include hundreds or even thousands of participants, we risk stifling innovation. Smaller studies are essential for testing new scanning paradigms.

Explore more from The Transmitter
Sharing Africa’s brain data: Q&A with Amadi Ihunwo
These data are “virtually mandatory” to advance neuroscience, says Ihunwo, a co-investigator of the Brain Research International Data Governance & Exchange (BRIDGE) initiative, which seeks to develop a global framework for sharing, using and protecting neuroscience data.

Cortical structures in infants linked to future language skills; and more
Here is a roundup of autism-related news and research spotted around the web for the week of 19 May.

The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.
