Noah A. Smith

University of Washington

H-index: 104

North America-United States

About Noah A. Smith

Noah A. Smith is a distinguished researcher at the University of Washington, with an h-index of 104 overall and 83 since 2020. He specializes in natural language processing, machine learning, and computational social science.

His recent articles reflect a diverse array of research interests and contributions to the field:

Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Rewardbench: Evaluating reward models for language modeling

Set the Clock: Temporal Alignment of Pretrained Language Models

Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models

How far can camels go? Exploring the state of instruction tuning on open resources

Encode Once and Decode in Parallel: Efficient Transformer Decoding

Tuning language models by proxy

Fine-grained human feedback gives better rewards for language model training

Noah A. Smith Information

University: University of Washington

Position: Allen Institute for Artificial Intelligence

Citations (all): 48,512

Citations (since 2020): 30,400

Cited by: 27,804

h-index (all): 104

h-index (since 2020): 83

i10-index (all): 326

i10-index (since 2020): 268

Email:

University profile page: University of Washington

Noah A. Smith Skills & Research Interests

natural language processing

machine learning

computational social science

Top articles of Noah A. Smith

Each entry lists the title, followed by the journal or venue (where given), the author(s), and the publication date.

Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
arXiv preprint arXiv:2402.00159 | Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, et al. | 2024/1/31

Rewardbench: Evaluating reward models for language modeling
arXiv preprint arXiv:2403.13787 | Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, et al. | 2024/3/20

Set the Clock: Temporal Alignment of Pretrained Language Models
arXiv preprint arXiv:2402.16797 | Bowen Zhao, Zander Brumbaugh, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith | 2024/2/26

Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models
arXiv preprint arXiv:2401.10440 | Terra Blevins, Tomasz Limisiewicz, Suchin Gururangan, Margaret Li, Hila Gonen, et al. | 2024/1/19

How far can camels go? Exploring the state of instruction tuning on open resources
Advances in Neural Information Processing Systems | Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, et al. | 2024/2/13

Encode Once and Decode in Parallel: Efficient Transformer Decoding
arXiv preprint arXiv:2403.13112 | Bo-Ru Lu, Nikita Haduong, Chien-Yu Lin, Hao Cheng, Noah A Smith, et al. | 2024/3/19

Tuning language models by proxy
arXiv preprint arXiv:2401.08565 | Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, et al. | 2024/1/16

Fine-grained human feedback gives better rewards for language model training
Advances in Neural Information Processing Systems | Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, et al. | 2024/2/13

Third-Party Language Model Performance Prediction from Instruction
arXiv preprint arXiv:2403.12413 | Rahul Nadkarni, Yizhong Wang, Noah A Smith | 2024/3/19

Learning Syntax Without Planting Trees: Understanding When and Why Transformers Generalize Hierarchically
arXiv preprint arXiv:2404.16367 | Kabir Ahuja, Vidhisha Balachandran, Madhur Panwar, Tianxing He, Noah A Smith, et al. | 2024/4/25

Estimating the Causal Effect of Early ArXiving on Paper Acceptance
Yanai Elazar, Jiayao Zhang, David Wadden, Bo Zhang, Noah A Smith | 2024/3/15

RealTime QA: What's the Answer Right Now?
NeurIPS (Datasets and Benchmarks Track) | Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, et al. | 2023

BLINK: Multimodal Large Language Models Can See but Not Perceive
arXiv preprint arXiv:2404.12390 | Xingyu Fu, Yushi Hu, Bangzheng Li, Yu Feng, Haoyu Wang, et al. | 2024/4/18

OLMo: Accelerating the science of language models
arXiv preprint arXiv:2402.00838 | Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, et al. | 2024/2/1

Know Your Audience: The benefits and pitfalls of generating plain language summaries beyond the "general" audience
arXiv preprint arXiv:2403.04979 | Tal August, Kyle Lo, Noah A Smith, Katharina Reinecke | 2024/3/8

A Taxonomy of Ambiguity Types for NLP
arXiv preprint arXiv:2403.14072 | Margaret Y Li, Alisa Liu, Zhaofeng Wu, Noah A Smith | 2024/3/21

What's In My Big Data?
arXiv preprint arXiv:2310.20707 | Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, et al. | 2023/10/31

Proceedings of the Big Picture Workshop
Yanai Elazar, Allyson Ettinger, Nora Kassner, Sebastian Ruder, Noah A Smith | 2023/12

We're afraid language models aren't modeling ambiguity
arXiv preprint arXiv:2304.14399 | Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, et al. | 2023/4/27

Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering
Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, et al. | 2023

Noah A. Smith FAQs

What is Noah A. Smith's h-index at University of Washington?

Noah A. Smith has an h-index of 104 overall and 83 since 2020.
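
For context (this definition is standard and not spelled out on the profile): the h-index is the largest value h such that the author has h papers each cited at least h times, and the i10-index counts papers with at least 10 citations. The Python sketch below illustrates both calculations on hypothetical citation counts; it is an illustration, not the profile's own computation.

```python
# Minimal sketch of how h-index and i10-index are typically computed
# from per-paper citation counts. The counts used below are hypothetical.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

if __name__ == "__main__":
    example = [120, 85, 40, 12, 9, 3]   # hypothetical citation counts
    print(h_index(example))    # 5: five papers have at least 5 citations
    print(i10_index(example))  # 4: four papers have at least 10 citations
```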

What are Noah A. Smith's top articles?

Noah A. Smith's top articles at University of Washington include:

Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Rewardbench: Evaluating reward models for language modeling

Set the Clock: Temporal Alignment of Pretrained Language Models

Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models

How far can camels go? Exploring the state of instruction tuning on open resources

Encode Once and Decode in Parallel: Efficient Transformer Decoding

Tuning language models by proxy

Fine-grained human feedback gives better rewards for language model training

...

What are Noah A. Smith's research interests?

Noah A. Smith's research interests are natural language processing, machine learning, and computational social science.

What is Noah A. Smith's total number of citations?

Noah A. Smith has 48,512 citations in total.

What are the co-authors of Noah A. Smith?

Noah A. Smith's co-authors include Eric Xing, Yejin Choi, Hannaneh Hajishirzi, Kevin Gimpel, Andre Martins, and Maarten Sap.

Co-Authors

Eric Xing, Carnegie Mellon University (h-index: 114)

Yejin Choi, University of Washington (h-index: 95)

Hannaneh Hajishirzi, University of Washington (h-index: 63)

Kevin Gimpel, Toyota Technological Institute (h-index: 46)

Andre Martins, Carnegie Mellon University (h-index: 43)

Maarten Sap, University of Washington (h-index: 38)
