Gal Vardi

About Gal Vardi

Gal Vardi is a distinguished researcher at the Weizmann Institute of Science, specializing in Machine Learning, Learning Theory, and Deep Learning Theory. He has an h-index of 14 overall, all of it accrued since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

Deconstructing data reconstruction: Multiclass, weight decay and general losses

Most Neural Networks Are Almost Learnable

The double-edged sword of implicit bias: Generalization vs. robustness in ReLU networks

Computational complexity of learning neural networks: Smoothness and degeneracy

Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces

Benign overfitting and grokking in ReLU networks for XOR cluster data

Noisy interpolation learning with shallow univariate ReLU networks

Benign overfitting in linear classifiers and leaky ReLU networks from KKT conditions for margin maximization

Gal Vardi Information

University: Weizmann Institute of Science

Position: ___

Citations (all): 434

Citations (since 2020): 419

Cited By: 41

h-index (all): 14

h-index (since 2020): 14

i10-index (all): 15

i10-index (since 2020): 15


Gal Vardi Skills & Research Interests

Machine Learning

Learning Theory

Deep Learning Theory

Top articles of Gal Vardi

Deconstructing data reconstruction: Multiclass, weight decay and general losses

Advances in Neural Information Processing Systems

2024/2/13

Most Neural Networks Are Almost Learnable

Advances in Neural Information Processing Systems

2024/2/13

Amit Daniely (H-Index: 16)

Gal Vardi (H-Index: 5)

The double-edged sword of implicit bias: Generalization vs. robustness in ReLU networks

Advances in Neural Information Processing Systems

2024/2/13

Computational complexity of learning neural networks: Smoothness and degeneracy

Advances in Neural Information Processing Systems

2024/2/13

Amit Daniely (H-Index: 16)

Gal Vardi (H-Index: 5)

Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces

Advances in Neural Information Processing Systems

2024/2/13

Gilad Yehudai (H-Index: 3)

Gal Vardi (H-Index: 5)

Benign overfitting and grokking in ReLU networks for XOR cluster data

arXiv preprint arXiv:2310.02541

2023/10/4

Noisy interpolation learning with shallow univariate ReLU networks

arXiv preprint arXiv:2307.15396

2023/7/28

Gal Vardi (H-Index: 5)

Nathan Srebro (H-Index: 52)

Benign overfitting in linear classifiers and leaky ReLU networks from KKT conditions for margin maximization

2023/7/12

An agnostic view on the cost of overfitting in (kernel) ridge regression

arXiv preprint arXiv:2306.13185

2023/6/22

Gal Vardi (H-Index: 5)

Nathan Srebro (H-Index: 52)


On the implicit bias in deep-learning algorithms

2023/5/24

Gal Vardi (H-Index: 5)

Reconstructing Training Data from Multiclass Neural Networks

arXiv preprint arXiv:2305.03350

2023/5/5

Implicit regularization towards rank minimization in ReLU networks

2023/2/13

Gal Vardi (H-Index: 5)

Ohad Shamir (H-Index: 41)

Reconstructing training data from trained neural networks

Advances in Neural Information Processing Systems

2022/12/6

On the effective number of linear regions in shallow univariate ReLU networks: Convergence guarantees and implicit bias

Advances in Neural Information Processing Systems

2022/12/6

Itay Safran (H-Index: 6)

Gal Vardi (H-Index: 5)

The sample complexity of one-hidden-layer neural networks

Advances in Neural Information Processing Systems

2022/12/6

Gal Vardi (H-Index: 5)

Ohad Shamir (H-Index: 41)

Gradient methods provably converge to non-robust networks

Advances in Neural Information Processing Systems

2022/12/6

On margin maximization in linear and ReLU networks

Advances in Neural Information Processing Systems

2022/12/6

Gal Vardi (H-Index: 5)

Ohad Shamir (H-Index: 41)

On convexity and linear mode connectivity in neural networks

2022/11/23

Implicit bias in leaky ReLU networks trained on high-dimensional data

arXiv preprint arXiv:2210.07082

2022/10/13
