Matus Telgarsky

About Matus Telgarsky

Matus Telgarsky is a distinguished researcher at the University of Illinois at Urbana-Champaign, specializing in deep learning theory and machine learning theory. He has an h-index of 22 overall, and an h-index of 19 on work since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

Transformers, parallel computation, and logarithmic depth

Representational strengths and limitations of transformers

Spectrum Extraction and Clipping for Implicitly Linear Layers

Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency

On achieving optimal adversarial test error

Stochastic linear optimization never overfits with quadratically-bounded losses on general data

Convex analysis at infinity: An introduction to astral space

Feature selection and low test error in shallow low-rotation ReLU networks

Matus Telgarsky Information

University: University of Illinois at Urbana-Champaign
Position: ___
Citations (all): 5392
Citations (since 2020): 4234
Cited By: 2861
h-index (all): 22
h-index (since 2020): 19
i10-index (all): 31
i10-index (since 2020): 29
Email
University Profile Page
Google Scholar
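The citation metrics above follow the standard Google Scholar definitions: the h-index is the largest h such that h papers each have at least h citations, and the i10-index counts papers with at least 10 citations. A minimal sketch of both computations, using made-up citation counts (not Telgarsky's actual record):

```python
def h_index(citations):
    """Largest h such that h papers each have >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Hypothetical per-paper citation counts for illustration only.
example = [10, 8, 5, 4, 3]
print(h_index(example))    # -> 4 (top 4 papers each have >= 4 citations)
print(i10_index(example))  # -> 1 (only one paper reaches 10 citations)
```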

Matus Telgarsky Skills & Research Interests

deep learning theory

machine learning theory

Top articles of Matus Telgarsky

(Each entry lists title, venue where available, author(s), and publication date.)

Transformers, parallel computation, and logarithmic depth. arXiv preprint arXiv:2402.09268. Clayton Sanford, Daniel Hsu, Matus Telgarsky. 2024/2/14.

Representational strengths and limitations of transformers. Advances in Neural Information Processing Systems. Clayton Sanford, Daniel J Hsu, Matus Telgarsky. 2024/2/13.

Spectrum Extraction and Clipping for Implicitly Linear Layers. arXiv preprint arXiv:2402.16017. Ali Ebrahimpour Boroojeny, Matus Telgarsky, Hari Sundaram. 2024/2/25.

Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency. arXiv preprint arXiv:2402.15926. Jingfeng Wu, Peter L Bartlett, Matus Telgarsky, Bin Yu. 2024/2/24.

On achieving optimal adversarial test error. arXiv preprint arXiv:2306.07544. Justin D Li, Matus Telgarsky. 2023/6/13.

Stochastic linear optimization never overfits with quadratically-bounded losses on general data. Matus Telgarsky. 2022/6/28.

Convex analysis at infinity: An introduction to astral space. arXiv preprint arXiv:2205.03260. Miroslav Dudík, Robert E Schapire, Matus Telgarsky. 2022/5/6.

Feature selection and low test error in shallow low-rotation ReLU networks. Matus Telgarsky. 2022/9/29.

Deep learning theory (DRAFT). Matus Telgarsky. 2022/9/23.

Generalization bounds via distillation. arXiv preprint arXiv:2104.05641. Daniel Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang. 2021/4/12.

Characterizing the implicit bias via a primal-dual analysis. Ziwei Ji, Matus Telgarsky. 2021/3/1.

Early-stopped neural networks are consistent. Advances in Neural Information Processing Systems. Ziwei Ji, Justin Li, Matus Telgarsky. 2021/12/6.

Deep learning theory lecture notes. Matus Telgarsky. 2021.

Actor-critic is implicitly biased towards high entropy optimal policies. arXiv preprint arXiv:2110.11280. Yuzheng Hu, Ziwei Ji, Matus Telgarsky. 2021/10/21.

Fast margin maximization via dual acceleration. Ziwei Ji, Nathan Srebro, Matus Telgarsky. 2021/7/1.

Gradient descent follows the regularization path for general losses. Ziwei Ji, Miroslav Dudík, Robert E Schapire, Matus Telgarsky. 2020/7/15.

Directional convergence and alignment in deep learning. Ziwei Ji, Matus Telgarsky. 2020.
