The basics
Welcome! My name is Nathaniel and I am a machine learning scientist focused on developing methods for AI interpretability. Currently, I am a Founding Research Scientist at Guide Labs, where we are building foundation LLMs engineered to be inherently interpretable.
My background is primarily in computational topology and secondarily in algebraic geometry, but I have been dedicated to AI interpretability work since autumn 2022. I spent two years making this pivot and honing a new skill set with the support of a Long-Term Future Fund grant and a Lightspeed grant. During this time, my goal was to use mechanistic interpretability tools to extract objective functions from complex models.
Background
I have a Ph.D. in mathematics from the University of Maryland, College Park. While I originally worked on algebraic geometry and topology, my thesis was in computational topology, where I proved an approximation theorem for persistent homology relevant to dimension reduction.
Before graduate school, I did economic consulting at LECG, LLC. Before that, I graduated with High Honors from Swarthmore College, majoring in mathematics and minoring in economics. I grew up in New York and Salt Lake City, Utah.
Site guide
On this site, you can also find my CV and some writing on AGI and interpretability.
Other stuff
I also enjoy chocolate, board games, reading, classical music, sushi, long walks, and dogs. Especially dogs.
