Preetum Nakkiran

Harvard University

H-index: 18

United States

About Preetum Nakkiran

Preetum Nakkiran is a distinguished researcher at Harvard University with an h-index of 18, both overall and since 2020. He specializes in machine learning, deep learning, and generalization.

His recent articles reflect a diverse array of research interests and contributions to the field:

When Does Optimizing a Proper Loss Yield Calibration?

Loss Minimization Yields Multicalibration for Large Neural Networks

What Algorithms Can Transformers Learn? A Study in Length Generalization

Smooth ECE: Principled reliability diagrams via kernel smoothing

A unifying theory of distance from calibration

Perspectives on the State and Future of Deep Learning -- 2023

Empirical limitations of the NTK for understanding scaling laws in deep learning

LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures

Preetum Nakkiran Information

University: Harvard University
Position: ___
Citations (all): 2213
Citations (since 2020): 2043
Cited By: 669
h-index (all): 18
h-index (since 2020): 18
i10-index (all): 26
i10-index (since 2020): 25
University Profile Page: Harvard University
Google Scholar: View Google Scholar Profile

Preetum Nakkiran Skills & Research Interests

Machine Learning

Deep Learning

Generalization

Top articles of Preetum Nakkiran

When Does Optimizing a Proper Loss Yield Calibration?
Journal: arXiv preprint arXiv:2305.18764
Authors: Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran
Publication Date: 2023/5/30

Loss Minimization Yields Multicalibration for Large Neural Networks
Journal: arXiv preprint arXiv:2304.09424
Authors: Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Adam Tauman Kalai, Preetum Nakkiran
Publication Date: 2023/4/19

What Algorithms Can Transformers Learn? A Study in Length Generalization
Journal: arXiv preprint arXiv:2310.16028
Authors: Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, et al.
Publication Date: 2023/10/24

Smooth ECE: Principled reliability diagrams via kernel smoothing
Journal: arXiv preprint arXiv:2309.12236
Authors: Jarosław Błasiok, Preetum Nakkiran
Publication Date: 2023/9/21

A unifying theory of distance from calibration
Authors: Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran
Publication Date: 2023/6/2

Perspectives on the State and Future of Deep Learning -- 2023
Journal: arXiv preprint arXiv:2312.09323
Authors: Micah Goldblum, Anima Anandkumar, Richard Baraniuk, Tom Goldstein, Kyunghyun Cho, et al.
Publication Date: 2023/12/7

Empirical limitations of the NTK for understanding scaling laws in deep learning
Journal: Transactions on Machine Learning Research
Authors: Nikhil Vyas, Yamini Bansal, Preetum Nakkiran
Publication Date: 2023/3/27

LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures
Journal: arXiv preprint arXiv:2312.04000
Authors: Vimal Thilak, Chen Huang, Omid Saremi, Laurent Dinh, Hanlin Goh, et al.
Publication Date: 2023/12/7

Vanishing gradients in reinforcement finetuning of language models
Journal: arXiv preprint arXiv:2310.20703
Authors: Noam Razin, Hattie Zhou, Omid Saremi, Vimal Thilak, Arwen Bradley, et al.
Publication Date: 2023/10/31

Benign, tempered, or catastrophic: Toward a refined taxonomy of overfitting
Journal: Advances in Neural Information Processing Systems
Authors: Neil Mallinar, James Simon, Amirhesam Abedsoltan, Parthe Pandit, Misha Belkin, et al.
Publication Date: 2022/12/6

Limitations of neural collapse for understanding generalization in deep learning
Journal: arXiv preprint arXiv:2202.08384
Authors: Like Hui, Mikhail Belkin, Preetum Nakkiran
Publication Date: 2022/2/17

APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations
Journal: NeurIPS Workshop "Has it Trained Yet?"
Authors: Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, Fartash Faghri
Publication Date: 2022/10/8

Near-Optimal NP-Hardness of Approximating Max k-CSP_R
Journal: Theory of Computing
Authors: Pasin Manurangsi, Preetum Nakkiran, Luca Trevisan
Publication Date: 2022/2/14

The calibration generalization gap
Journal: arXiv preprint arXiv:2210.01964
Authors: A. Michael Carrell, Neil Mallinar, James Lucas, Preetum Nakkiran
Publication Date: 2022/10/5

Knowledge Distillation: Bad Models Can Be Good Role Models
Journal: Advances in Neural Information Processing Systems
Authors: Gal Kaplun, Eran Malach, Preetum Nakkiran, Shai Shalev-Shwartz
Publication Date: 2022/12/6

Limitations of the NTK for understanding generalization in deep learning
Journal: arXiv preprint arXiv:2206.10012
Authors: Nikhil Vyas, Yamini Bansal, Preetum Nakkiran
Publication Date: 2022/6/20

Incentivizing empirical science in machine learning: Problems and proposals
Journal: ML Evaluation Standards Workshop at ICLR
Authors: Preetum Nakkiran, Mikhail Belkin
Publication Date: 2022

General strong polarization
Journal: Journal of the ACM (JACM)
Authors: Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, Madhu Sudan
Publication Date: 2022/3/3

What you see is what you get: Principled deep learning via distributional generalization
Journal: Advances in Neural Information Processing Systems
Authors: Bogdan Kulynych, Yao-Yuan Yang, Yaodong Yu, Jarosław Błasiok, Preetum Nakkiran
Publication Date: 2022/12/6

Deconstructing distributions: A pointwise framework of learning
Authors: Gal Kaplun, Nikhil Ghosh, Saurabh Garg, Boaz Barak, Preetum Nakkiran
Publication Date: 2022/2/20


Co-Authors

Kannan Ramchandran, University of California, Berkeley (H-index: 101)
Sham M. Kakade, University of Washington (H-index: 90)
Madhu Sudan, Harvard University (H-index: 69)
Vinod Vaikuntanathan, Massachusetts Institute of Technology (H-index: 67)
Venkatesan Guruswami, Carnegie Mellon University (H-index: 60)
Tengyu Ma, Stanford University (H-index: 54)
