The basics

Welcome! My name is Nathaniel and I am an AI alignment researcher. I am intensely focused on exploring the mechanisms behind the efficacy of modern ML techniques. In addition to being theoretically fascinating, I think gaining a greater understanding of these issues is potentially existentially critical. Currently, my ambition is to use interpretability tools to extract objective functions from complex models. While my background is primarily in computational topology and secondarily in algebraic geometry, I have been dedicated to AI and alignment work since Autumn of 2022.

These days, I am supported by a Long Term Future Fund grant to pivot from mathematics into ML/AI.

Background

I have a Ph.D. in mathematics from the University of Maryland, College Park. While I originally worked on algebraic geometry and topology, my thesis was in computational topology, where I proved an approximation theorem for persistent homology with applications to dimension reduction.

Before graduate school, I did economic consulting at LECG, LLC. Before that, I graduated with High Honors from Swarthmore College, majoring in mathematics and minoring in economics. I grew up in New York and Salt Lake City, Utah.

Site guide

On this site, you can also find my CV and some writing on AGI and interpretability.

Other stuff

I also enjoy chocolate, boardgames, reading, classical music, sushi, long walks, and dogs. Especially dogs.