Ohad Shamir

Weizmann Institute of Science

H-index: 58

Region: Asia (Israel)

About Ohad Shamir

Ohad Shamir is a distinguished researcher at the Weizmann Institute of Science, specializing in machine learning and learning theory. He has an exceptional h-index of 58 overall and a recent h-index of 46 (since 2020).

His recent articles reflect a diverse array of research interests and contributions to the field:

Depth Separation in Norm-Bounded Infinite-Width Neural Networks

Accelerated zeroth-order method for non-smooth stochastic convex optimization problem with infinite variance

Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks

From tempered to benign overfitting in ReLU neural networks

Generalization in kernel regression under realistic assumptions

Deterministic nonsmooth nonconvex optimization

An algorithm with optimal dimension-dependence for zero-order nonsmooth nonconvex stochastic optimization

Implicit regularization towards rank minimization in ReLU networks

Ohad Shamir Information

University: Weizmann Institute of Science
Position: ___
Citations (all): 14670
Citations (since 2020): 9828
Cited By: 9418
h-index (all): 58
h-index (since 2020): 46
i10-index (all): 114
i10-index (since 2020): 101
University Profile Page: Weizmann Institute of Science
Google Scholar: View Google Scholar Profile

Ohad Shamir Skills & Research Interests

Machine Learning

Learning Theory

Top articles of Ohad Shamir

Title: Depth Separation in Norm-Bounded Infinite-Width Neural Networks
Journal: arXiv preprint arXiv:2402.08808
Author(s): Suzanna Parkinson, Greg Ongie, Rebecca Willett, Ohad Shamir, Nathan Srebro
Publication Date: 2024/2/13

Title: Accelerated zeroth-order method for non-smooth stochastic convex optimization problem with infinite variance
Journal: Advances in Neural Information Processing Systems
Author(s): Nikita Kornilov, Ohad Shamir, Aleksandr Lobanov, Darina Dvinskikh, Alexander Gasnikov, ...
Publication Date: 2024/2/13

Title: Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks
Journal: Advances in Neural Information Processing Systems
Author(s): Roey Magen, Ohad Shamir
Publication Date: 2024/2/13

Title: From tempered to benign overfitting in ReLU neural networks
Journal: Advances in Neural Information Processing Systems
Author(s): Guy Kornowski, Gilad Yehudai, Ohad Shamir
Publication Date: 2024/2/13

Title: Generalization in kernel regression under realistic assumptions
Journal: arXiv preprint arXiv:2312.15995
Author(s): Daniel Barzilai, Ohad Shamir
Publication Date: 2023/12/26

Title: Deterministic nonsmooth nonconvex optimization
Author(s): Michael Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis
Publication Date: 2023/7/12

Title: An algorithm with optimal dimension-dependence for zero-order nonsmooth nonconvex stochastic optimization
Journal: arXiv preprint arXiv:2307.04504
Author(s): Guy Kornowski, Ohad Shamir
Publication Date: 2023/7/10

Title: Implicit regularization towards rank minimization in ReLU networks
Author(s): Nadav Timor, Gal Vardi, Ohad Shamir
Publication Date: 2023/2/13

Title: The implicit bias of benign overfitting
Journal: Journal of Machine Learning Research
Author(s): Ohad Shamir
Publication Date: 2023

Title: Reconstructing training data from trained neural networks
Journal: Advances in Neural Information Processing Systems
Author(s): Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
Publication Date: 2022/12/6

Title: Depth separations in neural networks: What is actually being separated?
Journal: Constructive Approximation
Author(s): Itay Safran, Ronen Eldan, Ohad Shamir
Publication Date: 2022/2

Title: The sample complexity of one-hidden-layer neural networks
Journal: Advances in Neural Information Processing Systems
Author(s): Gal Vardi, Ohad Shamir, Nati Srebro
Publication Date: 2022/12/6

Title: Oracle complexity in nonsmooth nonconvex optimization
Journal: Journal of Machine Learning Research
Author(s): Guy Kornowski, Ohad Shamir
Publication Date: 2022

Title: Gradient methods provably converge to non-robust networks
Journal: Advances in Neural Information Processing Systems
Author(s): Gal Vardi, Gilad Yehudai, Ohad Shamir
Publication Date: 2022/12/6

Title: Elephant in the Room: Non-Smooth Non-Convex Optimization
Author(s): Ohad Shamir
Publication Date: 2022

Title: On margin maximization in linear and ReLU networks
Journal: Advances in Neural Information Processing Systems
Author(s): Gal Vardi, Ohad Shamir, Nati Srebro
Publication Date: 2022/12/6

Title: On the complexity of finding small subgradients in nonsmooth optimization
Journal: arXiv preprint arXiv:2209.10346
Author(s): Guy Kornowski, Ohad Shamir
Publication Date: 2022/9/21

Title: Width is less important than depth in ReLU neural networks
Author(s): Gal Vardi, Gilad Yehudai, Ohad Shamir
Publication Date: 2022/6/28

Title: Gradient methods never overfit on separable data
Journal: Journal of Machine Learning Research
Author(s): Ohad Shamir
Publication Date: 2021

Title: The connection between approximation, depth separation and learnability in neural networks
Author(s): Eran Malach, Gilad Yehudai, Shai Shalev-Schwartz, Ohad Shamir
Publication Date: 2021/7/21
