He He

New York University

H-index: 30

North America - United States

About He He

He He is a distinguished researcher at New York University specializing in Machine Learning and Natural Language Processing, with an exceptional h-index of 30 overall and a recent h-index of 27 (since 2020).

Recent articles by He He reflect a diverse array of research interests and contributions to the field:

Testing the general deductive reasoning capacity of large language models using OOD examples

Iterative Reasoning Preference Optimization

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning

The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models

Solving olympiad geometry without human demonstrations

Foundational challenges in assuring alignment and safety of large language models

Your Co-Workers Matter: Evaluating Collaborative Capabilities of Language Models in Blocks World

Parallel Structures in Pre-training Data Yield In-Context Learning

He He Information

University: New York University
Position: ___
Citations (all): 5214
Citations (since 2020): 4483
Cited By: 2019
h-index (all): 30
h-index (since 2020): 27
i10-index (all): 45
i10-index (since 2020): 40
Email:
University Profile Page: New York University
Google Scholar: View Google Scholar Profile

He He Skills & Research Interests

Machine Learning

Natural Language Processing

Top articles of He He

Testing the general deductive reasoning capacity of large language models using OOD examples
Journal: Advances in Neural Information Processing Systems
Author(s): Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Mehran Kazemi, ...
Publication Date: 2024/2/13

Iterative Reasoning Preference Optimization
Journal: arXiv preprint arXiv:2404.19733
Author(s): Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, ...
Publication Date: 2024/4/30

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning
Journal: arXiv preprint arXiv:2401.13986
Author(s): Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, ...
Publication Date: 2024/1/25

The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
Journal: arXiv preprint arXiv:2404.16019
Author(s): Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, ...
Publication Date: 2024/4/24

Solving olympiad geometry without human demonstrations
Journal: Nature
Author(s): Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, Thang Luong
Publication Date: 2024/1

Foundational challenges in assuring alignment and safety of large language models
Journal: arXiv preprint arXiv:2404.09932
Author(s): Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, ...
Publication Date: 2024/4/15

Your Co-Workers Matter: Evaluating Collaborative Capabilities of Language Models in Blocks World
Journal: arXiv preprint arXiv:2404.00246
Author(s): Guande Wu, Chen Zhao, Claudio Silva, He He
Publication Date: 2024/3/30

Parallel Structures in Pre-training Data Yield In-Context Learning
Journal: arXiv preprint arXiv:2402.12530
Author(s): Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He
Publication Date: 2024/2/19

Show Your Work with Confidence: Confidence Bands for Tuning Curves
Journal: arXiv preprint arXiv:2311.09480
Author(s): Nicholas Lourie, Kyunghyun Cho, He He
Publication Date: 2023/11/16

Leveraging Implicit Feedback from Deployment Data in Dialogue
Journal: arXiv preprint arXiv:2307.14117
Author(s): Richard Yuanzhe Pang, Stephen Roller, Kyunghyun Cho, He He, Jason Weston
Publication Date: 2023/7/26

Personas as a way to model truthfulness in language models
Journal: arXiv preprint arXiv:2310.18168
Author(s): Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, He He
Publication Date: 2023/10/27

Efficient shapley values estimation by amortization for text classification
Journal: ACL 2023 Long Paper
Author(s): Chenghao Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma, ...
Publication Date: 2023

Does Writing with Language Models Reduce Content Diversity?
Journal: arXiv preprint arXiv:2309.05196
Author(s): Vishakh Padmakumar, He He
Publication Date: 2023/9/11

Measuring inductive biases of in-context learning with underspecified demonstrations
Journal: arXiv preprint arXiv:2305.13299
Author(s): Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, ...
Publication Date: 2023/5/22

Improving Multi-Hop Reasoning in LLMs by Learning from Rich Human Feedback
Author(s): Nitish Joshi, Koushik Kalyanaraman, Zhiting Hu, Kumar Chellapilla, He He, ...
Publication Date: 2023/12/11

Do models explain themselves? Counterfactual simulatability of natural language explanations
Journal: arXiv preprint arXiv:2307.08678
Author(s): Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, ...
Publication Date: 2023/7/17

How do decoding algorithms distribute information in dialogue responses?
Journal: arXiv preprint arXiv:2303.17006
Author(s): Saranya Venkatraman, He He, David Reitter
Publication Date: 2023/5

Pragmatic Radiology Report Generation
Author(s): Dang Nguyen, Chacha Chen, He He, Chenhao Tan
Publication Date: 2023/12/4

Extrapolative controlled sequence generation via iterative refinement
Author(s): Vishakh Padmakumar, Richard Yuanzhe Pang, He He, Ankur P Parikh
Publication Date: 2023/7/3

Creative Natural Language Generation
Author(s): Tuhin Chakrabarty, Vishakh Padmakumar, He He, Nanyun Peng
Publication Date: 2023/12


Co-Authors

Luke Zettlemoyer (University of Washington) - H-index: 100
Percy Liang (Stanford University) - H-index: 95
Yejin Choi (University of Washington) - H-index: 95
Hal Daumé III (University of Maryland, College Park) - H-index: 71
Jason Eisner (Johns Hopkins University) - H-index: 59
