- I am on the job market for faculty and industry research positions.
- See my [invited talk] on Distributional Generalization at the Max Planck+UCLA Math ML Seminar.
- Summer 2020: Interned with Hanie Sedghi and Behnam Neyshabur at Google Brain.
- Spring 2020: Visited Jacob Steinhardt at UC Berkeley.
- Summer 2019: Interned with Ilya Sutskever at OpenAI.
Research

I take a scientific approach to machine learning: trying to advance understanding through basic experiments and foundational theory. I aim to discover universal principles of learning.
See [publications] for a full list of papers.
Machine Learning Theory
The Deep Bootstrap: Good Online Learners are Good Offline Generalizers
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
In submission. 2020.
Distributional Generalization: A New Kind of Generalization
Preetum Nakkiran*, Yamini Bansal*
In submission. 2020. [talk]
Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
Optimal Regularization Can Mitigate Double Descent
Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
In submission. 2020.
Deep Double Descent: Where Bigger Models and More Data Hurt
Preetum Nakkiran, Gal Kaplun*, Yamini Bansal*, Tristan Yang, Boaz Barak, Ilya Sutskever
More Data Can Hurt for Linear Regression: Sample-wise Double Descent
SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
NeurIPS 2019 (Spotlight).
Adversarial Examples are Just Bugs, Too
Adversarial Robustness May Be at Odds With Simplicity
(Merged version appears in COLT 2019.)
The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science
Preetum Nakkiran, Jarosław Błasiok
Algorithmic Polarization for Hidden Markov Models
Venkatesan Guruswami, Preetum Nakkiran, Madhu Sudan
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, Madhu Sudan
Tracking the L2 Norm with Constant Update Time
Chi-Ning Chou, Zhixian Lei, Preetum Nakkiran
Near-Optimal UGC-hardness of Approximating Max k-CSP_R
Pasin Manurangsi, Preetum Nakkiran, Luca Trevisan
Compressing Deep Neural Networks Using a Rank-Constrained Topology
Preetum Nakkiran, Raziel Alvarez, Rohit Prabhavalkar, Carolina Parada
Automatic Gain Control and Multi-style Training for Robust Small-Footprint Keyword Spotting with Deep Neural Networks
Rohit Prabhavalkar, Raziel Alvarez, Carolina Parada, Preetum Nakkiran, and Tara Sainath
About Me

I did my undergrad in EECS at UC Berkeley. I'm broadly interested in theory and science.
In the past, I interned at OpenAI (with Ilya Sutskever) and Google Research (with Raziel Alvarez), and I have also done research in error-correcting codes, distributed storage, and cryptography. I am partially supported by a Google PhD Fellowship, and I am grateful for past support from the NSF GRFP.