Hongyi Wang

University of Wisconsin-Madison

H-index: 14

United States (North America)

About Hongyi Wang

Hongyi Wang is a researcher at the University of Wisconsin-Madison who specializes in ML Systems, with an h-index of 14 overall and 14 since 2020.

His recent articles span a range of topics across ML systems:

FedNAR: Federated Optimization with Normalized Annealing Regularization

TrustLLM: Trustworthiness in Large Language Models

LLM360: Towards Fully Transparent Open-Source LLMs

PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices

Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs

Fusing Models with Complementary Expertise

SlimPajama-DC: Understanding Data Combinations for LLM Training

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition

Hongyi Wang Information

University: University of Wisconsin-Madison
Position: ___
Citations (all): 3104
Citations (since 2020): 3090
Cited by: 432
h-index (all): 14
h-index (since 2020): 14
i10-index (all): 15
i10-index (since 2020): 15
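
For reference, these are Google Scholar-style metrics: the h-index is the largest h such that h of the author's papers have at least h citations each, and the i10-index counts papers with at least 10 citations. Below is a minimal Python sketch of both computations; the citation counts are made-up illustrative values, not Hongyi Wang's actual per-paper data.

def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def i10_index(citations):
    # Number of papers with at least 10 citations.
    return sum(1 for c in citations if c >= 10)

# Hypothetical per-paper citation counts, for illustration only.
papers = [310, 250, 120, 45, 30, 22, 15, 14, 13, 12, 11, 10, 9, 8, 2]
print(h_index(papers))    # 11
print(i10_index(papers))  # 12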


Hongyi Wang Skills & Research Interests

ML Systems

Top articles of Hongyi Wang

FedNAR: Federated Optimization with Normalized Annealing Regularization
Advances in Neural Information Processing Systems, 2024/2/13

PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices
arXiv preprint arXiv:2310.19991, 2023/10/30

Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
arXiv preprint arXiv:2310.16355, 2023/10/25

Fusing Models with Complementary Expertise
arXiv preprint arXiv:2310.01542, 2023/10/2

SlimPajama-DC: Understanding Data Combinations for LLM Training
arXiv preprint arXiv:2309.10818, 2023/9/19

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
arXiv preprint arXiv:2308.14929, 2023/8/28

Cuttlefish: Low-Rank Model Training Without All the Tuning
Proceedings of Machine Learning and Systems, 2023/3/18

Memory-Adaptive Depth-Wise Heterogeneous Federated Learning
arXiv preprint arXiv:2303.04887, 2023/3/8

Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach
arXiv preprint arXiv:2302.04228, 2023/2/8

Does Compressing Activations Help Model Parallel Training?
arXiv preprint arXiv:2301.02654, 2023/1/6

MPCFormer: Fast, Performant and Private Transformer Inference with MPC
arXiv preprint arXiv:2211.01452, 2022/11/2

AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness
NeurIPS 2022, 2022/10/13

On the Utility of Gradient Compression in Distributed Training Systems
Proceedings of Machine Learning and Systems, 2022/4/22

Efficient Federated Learning on Knowledge Graphs via Privacy-Preserving Relation Embedding Aggregation
2022/3/17

Rare Gems: Finding Lottery Tickets at Initialization
2022/11

Solon: Communication-Efficient Byzantine-Resilient Distributed Training via Redundant Gradients
arXiv preprint arXiv:2110.01595, 2021/10/4

Pufferfish: Communication-Efficient Models at No Extra Cost
Proceedings of Machine Learning and Systems, 2021/3/15

Co-Authors

Hongyi Wang (H-index: 8)

Saurabh Agarwal (H-index: 15)

See the list of professors at Hongyi Wang's university (University of Wisconsin-Madison).
