Sang Michael Xie

Stanford University

H-index: 17

North America-United States

About Sang Michael Xie

Sang Michael Xie is a PhD candidate at Stanford University with an h-index of 17 (17 since 2020). His research spans Machine Learning, NLP, Reliable Machine Learning, and Computational Sustainability.

His recent articles reflect this range of interests:

A Survey on Data Selection for Language Models

Doremi: Optimizing data mixtures speeds up language model pretraining

Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations

Reward design with language models

Data selection for language models via importance resampling

Same pre-training loss, better downstream: Implicit bias matters for language models

Holistic evaluation of language models

Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation

Sang Michael Xie Information

University: Stanford University
Position: PhD Candidate
Citations (all): 7173
Citations (since 2020): 6751
Cited By: 1305
h-index (all): 17
h-index (since 2020): 17
i10-index (all): 22
i10-index (since 2020): 22

Sang Michael Xie Skills & Research Interests

Machine Learning

NLP

Reliable Machine Learning

Computational Sustainability

Top articles of Sang Michael Xie

A Survey on Data Selection for Language Models
Author(s): Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, ...
Publication Date: 2024/2/26

Doremi: Optimizing data mixtures speeds up language model pretraining
Journal: Advances in Neural Information Processing Systems
Author(s): Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, ...
Publication Date: 2024/2/13

Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations
Journal: arXiv preprint arXiv:2402.03325
Author(s): Helen Qu, Sang Michael Xie
Publication Date: 2024/1/8

Reward design with language models
Journal: arXiv preprint arXiv:2303.00001
Author(s): Minae Kwon, Sang Michael Xie, Kalesha Bullard, Dorsa Sadigh
Publication Date: 2023/2/27

Data selection for language models via importance resampling
Journal: Advances in Neural Information Processing Systems
Author(s): Sang Michael Xie, Shibani Santurkar, Tengyu Ma, Percy S Liang
Publication Date: 2023/12/15

Same pre-training loss, better downstream: Implicit bias matters for language models
Author(s): Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma
Publication Date: 2023/7/3

Holistic evaluation of language models
Journal: arXiv preprint arXiv:2211.09110
Author(s): Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, ...
Publication Date: 2022/11/16

Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
Journal: International Conference on Machine Learning (ICML) 2022
Author(s): Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z HaoChen, ...
Publication Date: 2022/4/1

An Explanation of In-context Learning as Implicit Bayesian Inference
Journal: arXiv preprint arXiv:2111.02080
Author(s): Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma
Publication Date: 2021/11/3

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Journal: arXiv preprint arXiv:2012.04550
Author(s): Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, ...
Publication Date: 2020/12/8

Extending the WILDS benchmark for unsupervised adaptation
Journal: arXiv preprint arXiv:2112.05090
Author(s): Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, ...
Publication Date: 2021/12/9

Automated detection of skin reactions in epicutaneous patch testing using machine learning
Journal: British Journal of Dermatology
Author(s): WH Chan, R Srivastava, N Damaraju, H Do, G Burnett, ...
Publication Date: 2021/8/1

Ensembles and cocktails: Robust finetuning for natural language generation
Author(s): John Hewitt, Xiang Lisa Li, Sang Michael Xie, Benjamin Newman, Percy Liang
Publication Date: 2021/10/6

Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization
Author(s): Sang Michael Xie, Tengyu Ma, Percy Liang
Publication Date: 2021/7/1

No true state-of-the-art? OOD detection methods are inconsistent across datasets
Journal: ICML UDL Workshop
Author(s): Fahim Tajwar, Ananya Kumar*, Sang Michael Xie*, Percy Liang
Publication Date: 2021/9/12

On the opportunities and risks of foundation models
Journal: arXiv preprint arXiv:2108.07258
Author(s): Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, ...
Publication Date: 2021/8/16

WILDS: A benchmark of in-the-wild distribution shifts
Author(s): Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, ...
Publication Date: 2021/7/1

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Journal: Advances in Neural Information Processing Systems
Author(s): Colin Wei, Sang Michael Xie, Tengyu Ma
Publication Date: 2021/12/6

Understanding and mitigating the tradeoff between robustness and accuracy
Author(s): Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, ...
Publication Date: 2022/6

Weakly supervised deep learning for segmentation of remote sensing imagery
Journal: Remote Sensing
Author(s): Sherrie Wang, William Chen, Sang Michael Xie, George Azzari, David B Lobell
Publication Date: 2020/1/7
