Yatin Dandi

About Yatin Dandi

Yatin Dandi is a researcher at the Indian Institute of Technology Kanpur with an h-index of 5, both overall and since 2020. He specializes in deep learning theory, statistical physics, and optimization.

His recent articles reflect a diverse array of research interests and contributions to the field:

Universality laws for Gaussian mixtures in generalized linear models

Asymptotics of feature learning in two-layer networks after one gradient-step

The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents

Learning from setbacks: the impact of adversarial initialization on generalization performance

How Two-Layer Neural Networks Learn, One (Giant) Step at a Time

A Gentle Introduction to Gradient-Based Optimization and Variational Inequalities for Machine Learning

Sampling with flows, diffusion and autoregressive neural networks: A spin-glass perspective

Learning two-layer neural networks, one (giant) step at a time

Yatin Dandi Information

University: Indian Institute of Technology Kanpur

Position: ___

Citations (all): 64

Citations (since 2020): 64

Cited by: 3

h-index (all): 5

h-index (since 2020): 5

i10-index (all): 2

i10-index (since 2020): 2

Email:

University Profile Page

Google Scholar

Yatin Dandi Skills & Research Interests

Deep Learning Theory

Statistical Physics

Optimization

Top articles of Yatin Dandi

Universality laws for Gaussian mixtures in generalized linear models

Advances in Neural Information Processing Systems

2024/2/13

Asymptotics of feature learning in two-layer networks after one gradient-step

arXiv preprint arXiv:2402.04980

2024/2/7

The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents

arXiv preprint arXiv:2402.03220

2024/2/5

Learning from setbacks: the impact of adversarial initialization on generalization performance

2023/11/7

How Two-Layer Neural Networks Learn, One (Giant) Step at a Time

2023/11/7

A Gentle Introduction to Gradient-Based Optimization and Variational Inequalities for Machine Learning

arXiv preprint arXiv:2309.04877

2023/9/9


Sampling with flows, diffusion and autoregressive neural networks: A spin-glass perspective

arXiv preprint arXiv:2308.14085

2023/8/27

With Florent Krzakala

Learning two-layer neural networks, one (giant) step at a time

arXiv preprint arXiv:2305.18270

2023/5/29

Maximally-stable local optima in random graphs and spin glasses: Phase transitions and universality

arXiv preprint arXiv:2305.03591

2023/5/5

With David Gamarnik

Implicit gradient alignment in distributed and federated learning

Proceedings of the AAAI Conference on Artificial Intelligence

2022/6/28

With Martin Jaggi

Data-heterogeneity-aware mixing for decentralized learning

arXiv preprint arXiv:2204.06477

2022/4/13

NeurInt: Learning to Interpolate through Neural ODEs

arXiv preprint arXiv:2111.04123

2021/11/7


Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis

arXiv preprint arXiv:2111.03972

2021/11/6

With Arthur Jacot

Model-Agnostic Learning to Meta-Learn

Proceedings of Machine Learning Research

2021

Generalized Adversarially Learned Inference

Proceedings of the AAAI Conference on Artificial Intelligence

2021/5/18

Jointly trained image and video generation using residual vectors

2020



Co-Authors

Florent Krzakala, David Gamarnik, Martin Jaggi, Arthur Jacot