Preetum Nakkiran

I'm a postdoc with Misha Belkin at UCSD, and part of the NSF/Simons Collaboration on Deep Learning.

My research aims to build an "empirical theory" of deep learning. See the intro of my thesis for more on what, why, and how.

I recently completed my PhD in the Harvard Theory Group, advised by Madhu Sudan and Boaz Barak.

[publications]     [CV]     [twitter]     preetum@ucsd.edu


News:

  • Sept 2022: (your institution here?)
  • Sept 2021: I have moved to University of California, San Diego.
  • July 2021: I've defended my thesis! View the [slides], and read the [thesis]. I suggest the Introduction, which is written for a general scientific audience.

Recent Invited Talks

I'm happy to speak about my work and interests. At the moment, I'm most likely to speak about The Deep Bootstrap, Distributional Generalization, or musings about scaling.

Research

I take a scientific approach to machine learning: trying to advance understanding through basic experiments and foundational theory.

See [publications] for a full list of papers.

Selected works, spanning Theory and Machine Learning:

  • Deep Double Descent
  • Dynamics of SGD
  • Gauss's Principle of Least Action

About Me

I did my undergrad in EECS at UC Berkeley. I'm broadly interested in theory and science.

In the past, I have interned at OpenAI (with Ilya Sutskever), Google Research (with Raziel Alvarez), and Google Brain (with Behnam Neyshabur, Hanie Sedghi), and have also done research in error-correcting codes, distributed storage, and cryptography. I was partially supported by a Google PhD Fellowship, and I am grateful for past support from the NSF GRFP.

See also my old website for more. This version borrows in part from Luca Trevisan and Jon Barron.

What People are Saying

a "high-level" scientist   —colleague (ML)

makes plots and draws lines through them   —colleague (TCS)

has merits that outweigh flaws   —reviewer 2