Daniel M. Roy

University of Toronto

H-index: 39

Canada

About Daniel M. Roy

Daniel M. Roy is a distinguished researcher at the University of Toronto, with an h-index of 39 overall and 31 since 2020. He specializes in machine learning, trustworthy AI, mathematical statistics, learning theory, and theoretical computer science.

His recent articles span a range of these interests:

Probabilistic programming interfaces for random graphs: Markov categories, graphons, and nominal sets

Simultaneous linear connectivity of neural networks modulo permutation

Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization

The shaped transformer: Attention models in the infinite depth-and-width limit

Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization

de Finetti's theorem and the existence of regular conditional distributions and strong laws on exchangeable algebras

Existence of matching priors on compact spaces

Relaxing the iid assumption: Adaptively minimax optimal regret via root-entropic regularization

Daniel M. Roy Information

University: University of Toronto
Position: Dept of Statistical Sciences; Computer Science
Citations (all): 8172
Citations (since 2020): 5377
Cited by: 4431
h-index (all): 39
h-index (since 2020): 31
i10-index (all): 72
i10-index (since 2020): 61
University Profile Page: University of Toronto
Google Scholar: View Google Scholar Profile
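The h-index and i10-index figures above follow Google Scholar's definitions: the h-index is the largest h such that h of the author's papers each have at least h citations, and the i10-index counts papers with at least 10 citations. A minimal Python sketch of both computations (the citation counts in the example are illustrative, not Roy's actual per-paper data):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:  # this paper still supports an h of `rank`
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Illustrative data: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))   # 4 papers have >= 4 citations
print(i10_index([10, 8, 5, 4, 3])) # only 1 paper has >= 10 citations
```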

Daniel M. Roy Skills & Research Interests

Machine learning

Trustworthy AI

Mathematical Statistics

Learning Theory

Theoretical CS

Top articles of Daniel M. Roy

Probabilistic programming interfaces for random graphs: Markov categories, graphons, and nominal sets
Proceedings of the ACM on Programming Languages, 2024/1/5
Nate Ackerman, Cameron E Freer, Younesse Kaddar, Jacek Karwowski, Sean Moss, ...

Simultaneous linear connectivity of neural networks modulo permutation
arXiv preprint arXiv:2404.06498, 2024/4/9
Ekansh Sharma, Devin Kwok, Tom Denton, Daniel M Roy, David Rolnick, ...

Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization
arXiv preprint arXiv:2402.09327, 2024/2/14
Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M Roy

The shaped transformer: Attention models in the infinite depth-and-width limit
Advances in Neural Information Processing Systems, 2024/2/13
Lorenzo Noci, Chuning Li, Mufan Li, Bobby He, Thomas Hofmann, ...

Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization
2023/2/13
Mahdi Haghifam, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M Roy, ...

de Finetti's theorem and the existence of regular conditional distributions and strong laws on exchangeable algebras
arXiv preprint arXiv:2312.16349, 2023/12/26
Peter Potaptchik, Daniel M Roy, David Schrittesser

Existence of matching priors on compact spaces
Biometrika, 2023/9/1
Haosui Duanmu, Daniel M Roy, Aaron Smith

Relaxing the iid assumption: Adaptively minimax optimal regret via root-entropic regularization
The Annals of Statistics, 2023/8
Blair Bilodeau, Jeffrey Negrea, Daniel M Roy

Minimax rates for conditional density estimation via empirical entropy
The Annals of Statistics, 2023/4
Blair Bilodeau, Dylan J Foster, Daniel M Roy

Statistical inference with stochastic gradient algorithms
arXiv preprint arXiv, 2022/11/14
Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M Roy, Jonathan H Huggins

Tuning Stochastic Gradient Algorithms for Statistical Inference via Large-Sample Asymptotics
arXiv preprint arXiv:2207.12395, 2022/7/25
Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M Roy, Jonathan H Huggins

Statistical minimax theorems via nonstandard analysis
arXiv preprint arXiv:2212.13250, 2022/12/26
Haosui Duanmu, Daniel M Roy, David Schrittesser

Understanding generalization via leave-one-out conditional mutual information
2022/6/26
Mahdi Haghifam, Shay Moran, Daniel M Roy, Gintare Karolina Dziugaite

Pruning’s effect on generalization through the lens of training and regularization
Advances in Neural Information Processing Systems, 2022/12/6
Tian Jin, Michael Carbin, Dan Roy, Jonathan Frankle, Gintare Karolina Dziugaite

The neural covariance SDE: Shaped infinite depth-and-width networks at initialization
Advances in Neural Information Processing Systems, 2022/12/6
Mufan Li, Mihai Nica, Dan Roy

Adaptively exploiting d-separators with causal bandits
Advances in Neural Information Processing Systems, 2022/12/6
Blair Bilodeau, Linbo Wang, Dan Roy

On the role of data in PAC-Bayes bounds
2021
Gintare Karolina Dziugaite, Kyle Hsu, Waseem Gharbieh, Gabriel Arpino, Daniel M Roy

Minimax optimal quantile and semi-adversarial regret via root-logarithmic regularizers
Advances in Neural Information Processing Systems, 2021/12/6
Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Dan Roy

NUQSGD: Provably communication-efficient data-parallel SGD via nonuniform quantization
Journal of Machine Learning Research, 2021
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, ...

The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization
Advances in Neural Information Processing Systems, 2021/12/6
Mufan Li, Mihai Nica, Dan Roy

Co-Authors

Joshua B. Tenenbaum, Massachusetts Institute of Technology (H-index: 137)
Zoubin Ghahramani, University of Cambridge (H-index: 124)
Yee Whye Teh, University of Oxford (H-index: 81)
Noah D. Goodman, Stanford University (H-index: 76)
Martin Rinard, Massachusetts Institute of Technology (H-index: 75)
Cristian Cadar, Imperial College London (H-index: 33)