Naomi Saphra is a research fellow at the Kempner Institute at Harvard University and incoming faculty at Boston University in 2026. She is interested in empirically understanding training in language models: When do models learn to encode linguistic patterns or other structure? What does that tell us about how and why they work? Can we encode useful inductive biases into the training process? Recently, she has begun collaborating with natural and social scientists to use interpretability to understand the world around us.
Saphra earned her Ph.D. at the University of Edinburgh, where she studied the training dynamics of neural language models. She has previously worked at New York University, Google, MosaicML, and Facebook, and she attended Johns Hopkins University and Carnegie Mellon University.