- Sept 2021: I will be starting a postdoc with Misha Belkin at UCSD, as part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning.
- Summer 2020: Interned with Hanie Sedghi and Behnam Neyshabur at Google Brain.
- Spring 2020: Visited Jacob Steinhardt at UC Berkeley.
- Summer 2019: Interned with Ilya Sutskever at OpenAI.
Recent Invited Talks
I'm happy to speak about my work and interests. Currently, I'm most likely to speak about The Deep Bootstrap, Distributional Generalization, or musings on scaling.
- Apr 2021: Guest Lecture in ML Theory Course (Boaz Barak), speaking on scaling laws. [slides]
- Apr 2021: UPenn Seminar (Weijie Su group), speaking on Distributional Generalization. [slides]
- Feb 2021: Simons Collaboration Monthly Meeting, speaking on The Deep Bootstrap. [slides]
- Aug 2020: UCLA Big Data and Machine Learning Seminar, speaking on Distributional Generalization.
- Aug 2020: Max Planck+UCLA Math ML Seminar, speaking on Distributional Generalization. [video]
Research
I take a scientific approach to machine learning: trying to advance understanding through basic experiments and foundational theory.
See [publications] for a full list of papers.
Machine Learning Theory
The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
Distributional Generalization: A New Kind of Generalization
Preetum Nakkiran*, Yamini Bansal*
- Desk-Rejected from NeurIPS 2020.
- Rejected from ICLR 2021. [reviews]
- Rejected from ICML 2021. [reviews] [rebuttal] [meta]
- In submission to NeurIPS 2021.
Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
OPT2020 Workshop (Best Student Paper)
Optimal Regularization Can Mitigate Double Descent
Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
Deep Double Descent: Where Bigger Models and More Data Hurt
Preetum Nakkiran, Gal Kaplun*, Yamini Bansal*, Tristan Yang, Boaz Barak, Ilya Sutskever
More Data Can Hurt for Linear Regression: Sample-wise Double Descent
SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
NeurIPS 2019 (Spotlight).
Adversarial Examples are Just Bugs, Too
Adversarial Robustness May Be at Odds With Simplicity
(Merged version appears in COLT 2019).
The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science
Preetum Nakkiran, Jarosław Błasiok
Algorithmic Polarization for Hidden Markov Models
Venkatesan Guruswami, Preetum Nakkiran, Madhu Sudan
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, Madhu Sudan
Tracking the L2 Norm with Constant Update Time
Chi-Ning Chou, Zhixian Lei, Preetum Nakkiran
Near-Optimal UGC-hardness of Approximating Max k-CSP_R
Pasin Manurangsi, Preetum Nakkiran, Luca Trevisan
Compressing Deep Neural Networks Using a Rank-Constrained Topology
Preetum Nakkiran, Raziel Alvarez, Rohit Prabhavalkar, Carolina Parada
Automatic Gain Control and Multi-style Training for Robust Small-Footprint Keyword Spotting with Deep Neural Networks
Rohit Prabhavalkar, Raziel Alvarez, Carolina Parada, Preetum Nakkiran, Tara Sainath
About Me
I did my undergrad in EECS at UC Berkeley. I'm broadly interested in theory and science.
In the past, I have interned at OpenAI (with Ilya Sutskever), Google Research (with Raziel Alvarez), and Google Brain (with Behnam Neyshabur and Hanie Sedghi), and I have also done research in error-correcting codes, distributed storage, and cryptography. I am partially supported by a Google PhD Fellowship, and I am grateful for past support from the NSF GRFP.
What People are Saying
a "high-level" scientist —colleague (ML)
makes plots and draws lines through them
has merits that outweigh flaws —reviewer 2