Preetum Nakkiran

I'm a Research Scientist at Apple, working on foundations of machine learning.

My research builds conceptual tools for understanding learning systems (including deep learning), using both theory and experiment. See the intro of my thesis for more on my motivations and methods.

I completed my PhD at Harvard, and had the unique pleasure of being advised by Madhu Sudan and Boaz Barak. While there, I co-founded the ML Foundations Group. In my postdoc I worked with Misha Belkin, as part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. Go Bears!

[publications]     [CV]     [twitter]     [short-bio]     preetum@nakkiran.org

Collaborations: I am always open to specific emails about my work. Please bump your email 2x if I fail to respond (apologies in advance).

Interns at Apple (hosted, co-hosted, or collaborated with):

  • Annabelle Carrell. PhD student, University of Cambridge.
  • Lunjia Hu. PhD student, Stanford. (hosted by Parikshit Gopalan)
  • Shivam Garg. PhD student, Stanford. (hosted by Kunal Talwar)
  • Rylee Thompson. MASc student, University of Guelph. (hosted by Shuangfei Zhai)
  • Elan Rosenfeld. PhD student, CMU. (hosted by Fartash Faghri)

Acknowledgements: I have had the pleasure of collaborating with the following excellent students & postdocs (partial list):


News and Olds:

  • Sept 2022: 3 out of 6 papers accepted to NeurIPS 2022! The resubmissions will continue until rates improve.
  • Aug 2022: I co-organized the Deep Learning Theory Workshop at the Simons Institute in Berkeley, CA.
  • May 2022: I've joined Apple ML Research! Job talk: [slides]
  • Nov 2021: New manuscript on Turing-Universal Learners with Optimal Scaling Laws (free-lunch.org). Also other papers.
  • Sept 2021: I have moved to the University of California, San Diego.
  • July 2021: I defended my thesis! View the [slides], and read the [thesis]. I suggest the Introduction, which is written for a general scientific audience.
  • 1994: Born

Recent/Upcoming Invited Talks

(I'm taking a sabbatical from preparing slides -- please ask one of my excellent co-authors 😀)

Research

I take a scientific approach to machine learning: trying to advance understanding through basic experiments and foundational theory.

See [publications] for a full list of papers.

Papers are organized by category: theses, machine learning theory, theory, and machine learning. Highlighted projects:

  • Deep Double Descent
  • Dynamics of SGD
  • Gauss's Principle of Least Action

About Me

For talks, you can use this [bio].

I did my undergrad in EECS at UC Berkeley. I'm broadly interested in theory and science. In the past, I have interned at OpenAI (with Ilya Sutskever), Google Research (with Raziel Alvarez), and Google Brain (with Behnam Neyshabur and Hanie Sedghi), and I have also done research in error-correcting codes, distributed storage, and cryptography. I am grateful for past support from the NSF GRFP and the Google PhD Fellowship.

See also my old website for more. This version borrows in part from the websites of Luca Trevisan and Jon Barron.

What People are Saying

a "high-level" scientist   —colleague (ML)

makes plots and draws lines through them   —colleague (TCS)

has merits that outweigh flaws   —reviewer 2

a complainer   —my partner (among others)


From The Archives

  • Past successful application materials (fellowships, etc.): [drive]
  • Courses I took in undergrad.