Preetum Nakkiran
I'm a 5th-year PhD student at Harvard, in the Theory of Computation and ML Theory Groups, advised by Madhu Sudan and Boaz Barak. I'm currently trying to understand why deep learning works. I do theory by doing experiments.
[publications]
[CV]
[twitter]
preetum@cs.harvard.edu
News:
- See my [invited talk] on Distributional Generalization at the Max Planck+UCLA Math ML Seminar.
- Summer 2020: Interned with Hanie Sedghi and Behnam Neyshabur at Google Brain.
- Spring 2020: Visited Jacob Steinhardt at UC Berkeley.
- Summer 2019: Interned with Ilya Sutskever at OpenAI.
Research
I take a scientific approach to machine learning, trying to advance understanding through basic experiments and foundational theory. See [publications] for a full list of papers.
Machine Learning Theory
- The Deep Bootstrap: Good Online Learners are Good Offline Generalizers
  Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
  In submission, 2020.
- Distributional Generalization: A New Kind of Generalization
  Preetum Nakkiran*, Yamini Bansal*
  In submission, 2020. [talk]
- Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
  Preetum Nakkiran
  OPT2020 Workshop (Best Student Paper).
- Optimal Regularization Can Mitigate Double Descent
  Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
  In submission, 2020.
- Deep Double Descent: Where Bigger Models and More Data Hurt
  Preetum Nakkiran, Gal Kaplun*, Yamini Bansal*, Tristan Yang, Boaz Barak, Ilya Sutskever
  ICLR 2020.
- More Data Can Hurt for Linear Regression: Sample-wise Double Descent
  Preetum Nakkiran
  Manuscript, 2019.
- SGD on Neural Networks Learns Functions of Increasing Complexity
  Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
  NeurIPS 2019 (Spotlight).
- Adversarial Examples are Just Bugs, Too
  Preetum Nakkiran
  Distill, 2019.
- Adversarial Robustness May Be at Odds With Simplicity
  Preetum Nakkiran
  Merged version appears in COLT 2019.
- The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science
  Preetum Nakkiran, Jarosław Błasiok
  Manuscript, 2018.
Theory
- Algorithmic Polarization for Hidden Markov Models
  Venkatesan Guruswami, Preetum Nakkiran, Madhu Sudan
  ITCS 2019.
- General Strong Polarization
  Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, Madhu Sudan
  STOC 2018.
- Tracking the L2 Norm with Constant Update Time
  Chi-Ning Chou, Zhixian Lei, Preetum Nakkiran
  APPROX-RANDOM 2018.
- Near-Optimal UGC-hardness of Approximating Max k-CSP_R
  Pasin Manurangsi, Preetum Nakkiran, Luca Trevisan
  APPROX-RANDOM 2016.
Machine Learning
- Compressing Deep Neural Networks Using a Rank-Constrained Topology
  Preetum Nakkiran, Raziel Alvarez, Rohit Prabhavalkar, Carolina Parada
  INTERSPEECH 2015.
- Automatic Gain Control and Multi-style Training for Robust Small-Footprint Keyword Spotting with Deep Neural Networks
  Rohit Prabhavalkar, Raziel Alvarez, Carolina Parada, Preetum Nakkiran, Tara Sainath
  ICASSP 2015.



About Me
I did my undergrad in EECS at UC Berkeley. I'm broadly interested in theory and science. In the past, I have interned at OpenAI (with Ilya Sutskever) and Google Research (with Raziel Alvarez), and have also done research in error-correcting codes, distributed storage, and cryptography. I am partially supported by a Google PhD Fellowship, and I am grateful for past support from the NSF GRFP.
See also my old website for more. The design of this site is borrowed in part from Luca Trevisan and Jon Barron.
What People are Saying
a "high-level" scientist —colleague (ML)
makes plots and draws lines through them —colleague (TCS)
has merits that outweigh flaws —reviewer 2