Preetum Nakkiran

I'm a Research Scientist at Apple (with Josh Susskind and Samy Bengio), and a Visiting Researcher at UCSD (with Misha Belkin). I'm also part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning.

My research builds conceptual tools for understanding learning systems (including deep learning), using both theory and experiment as appropriate. See the intro of my thesis for more on my motivations and methods.

I recently completed my PhD at Harvard, advised by Madhu Sudan and Boaz Barak. While there, among other things, I co-founded the ML Foundations Group.

[publications]     [CV]     [twitter]     [short-bio]

Collaborations: I am always open to specific emails about my work. Please bump your email 2x if I fail to respond (apologies in advance).

Acknowledgements: I have had the pleasure of collaborating with the following excellent students & postdocs (partial list):

News and Olds:

  • May 2022: I've joined Apple ML Research! I maintain a UCSD affiliation as a Visiting Researcher.
  • Nov 2021: New manuscript on Turing-Universal Learners with Optimal Scaling Laws. Also other papers.
  • Sept 2021: I have moved to University of California, San Diego.
  • July 2021: I defended my thesis! View the [slides], and read the [thesis]. I suggest the Introduction, which is written for a general scientific audience.
  • 1994: Born

Recent/Upcoming Invited Talks

I'm happy to speak about my work and interests. Currently, I am most likely to speak about the Deep Bootstrap Framework, Distributional Generalization, or musings on scaling, science, and theory.


I take a scientific approach to machine learning: trying to advance understanding through basic experiments and foundational theory.

See [publications] for a full list of papers.


Machine Learning Theory


Machine Learning

  • Deep Double Descent
  • Dynamics of SGD
  • Gauss's Principle of Least Action

About Me

For talks, you can use this [bio].

I did my undergrad in EECS at UC Berkeley. I'm broadly interested in theory and science. In the past, I have interned at OpenAI (with Ilya Sutskever), Google Research (with Raziel Alvarez), and Google Brain (with Behnam Neyshabur and Hanie Sedghi), and have also done research in error-correcting codes, distributed storage, and cryptography. I am grateful for past support from the NSF GRFP and the Google PhD Fellowship.

See also my old website for more. This version borrows in part from Luca Trevisan and Jon Barron.

What People are Saying

a "high-level" scientist   —colleague (ML)

makes plots and draws lines through them   —colleague (TCS)

has merits that outweigh flaws   —reviewer 2

a complainer   —my girlfriend (among others)

Selected Tweets