Niladri S. Chatterji

University of California, Berkeley

H-index: 18

North America, United States

About Niladri S. Chatterji

Niladri S. Chatterji is a researcher at the University of California, Berkeley, specializing in machine learning, optimization, and statistics. He has an h-index of 18 overall and 17 since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

On the opportunities and risks of foundation models

Deep linear networks can benignly overfit when shallow ones do

Proving test set contamination in black box language models

Random feature amplification: Feature learning and generalization in neural networks

Oracle lower bounds for stochastic gradient sampling algorithms

Holistic evaluation of language models

Is importance weighting incompatible with interpolating classifiers?

Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data

Niladri S. Chatterji Information

University: University of California, Berkeley
Position: Graduate Student, Department of Physics
Citations (all): 3776
Citations (since 2020): 3707
Cited by: 416
h-index (all): 18
h-index (since 2020): 17
i10-index (all): 21
i10-index (since 2020): 20

Niladri S. Chatterji Skills & Research Interests

Machine Learning

Optimization

Statistics

Top articles of Niladri S. Chatterji

On the opportunities and risks of foundation models. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, ... arXiv preprint arXiv:2108.07258, 2021/8/16.

Deep linear networks can benignly overfit when shallow ones do. Niladri S Chatterji, Philip M Long. Journal of Machine Learning Research, 2023.

Proving test set contamination in black box language models. Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, Tatsunori B Hashimoto. arXiv preprint arXiv:2310.17623, 2023/10/26.

Random feature amplification: Feature learning and generalization in neural networks. Spencer Frei, Niladri S Chatterji, Peter L Bartlett. Journal of Machine Learning Research, 2023.

Oracle lower bounds for stochastic gradient sampling algorithms. Niladri S Chatterji, Peter L Bartlett, Philip M Long. Bernoulli, 2022/5.

Holistic evaluation of language models. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, ... arXiv preprint arXiv:2211.09110, 2022/11/16.

Is importance weighting incompatible with interpolating classifiers? Ke Alexander Wang, Niladri S Chatterji, Saminul Haque, Tatsunori Hashimoto. arXiv preprint arXiv:2112.12986, 2021/12/24.

Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data. Spencer Frei, Niladri S Chatterji, Peter Bartlett. 2022/6/28.

Foolish crowds support benign overfitting. Niladri S. Chatterji, Philip M. Long. Journal of Machine Learning Research, 2022.

Undersampling is a minimax optimal robustness intervention in nonparametric classification. Niladri S. Chatterji, Saminul Haque, Tatsunori Hashimoto. arXiv preprint arXiv:2205.13094, 2022/5/26.

When does gradient descent with logistic loss find interpolating two-layer networks? Niladri S Chatterji, Philip M Long, Peter L Bartlett. Journal of Machine Learning Research, 2021.

The interplay between implicit bias and benign overfitting in two-layer linear networks. Niladri S Chatterji, Philip M Long, Peter L Bartlett. Journal of Machine Learning Research, 2022.

When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations? Niladri S Chatterji, Philip M Long, Peter Bartlett. 2021/7/21.

Finite-sample analysis of interpolating linear classifiers in the overparameterized regime. Niladri S. Chatterji, Philip M. Long. Journal of Machine Learning Research, 2021/5.

Is there an analog of Nesterov acceleration for gradient-based MCMC? Yi-An Ma, Niladri S Chatterji, Xiang Cheng, Nicolas Flammarion, Peter L Bartlett, ... 2021/5.

Why do gradient methods work in optimization and sampling? Niladri S. Chatterji. 2021.

On the theory of reinforcement learning with once-per-episode feedback. Niladri Chatterji, Aldo Pacchiano, Peter Bartlett, Michael Jordan. Advances in Neural Information Processing Systems, 2021/12/6.

Langevin Monte Carlo without smoothness. Niladri Chatterji, Jelena Diakonikolas, Michael I Jordan, Peter Bartlett. 2020/6/3.

OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits. Niladri Chatterji, Vidya Muthukumar, Peter Bartlett. 2020/6/3.

The intriguing role of module criticality in the generalization of deep networks. Niladri S. Chatterji, Behnam Neyshabur, Hanie Sedghi. 2020.


Co-Authors

Michael I. Jordan, University of California, Berkeley (h-index: 203)
Peter Bartlett, University of California, Berkeley (h-index: 83)
Tatsunori Hashimoto, Stanford University (h-index: 37)
Ashwin Tulapurkar, Indian Institute of Technology Bombay (h-index: 26)
Nicolas Flammarion, École Polytechnique Fédérale de Lausanne (h-index: 24)
Yi-An Ma, University of California, San Diego (h-index: 23)
