Graham Neubig

Carnegie Mellon University

H-index: 82

North America, United States

About Graham Neubig

Graham Neubig is a distinguished researcher at Carnegie Mellon University, with an exceptional h-index of 82 and a recent h-index of 76 (since 2020). He specializes in Natural Language Processing and Machine Learning.

His recent articles reflect a diverse array of research interests and contributions to the field:

GlossLM: Multilingual Pretraining for Low-Resource Interlinear Glossing

Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes

An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models

In-Context Learning with Long-Context Models: An In-Depth Exploration

Can Large Language Models be Trusted for Evaluation? Scalable Meta-Evaluation of LLMs as Evaluators via Agent Debate

What Is Missing in Multilingual Visual Reasoning and How to Fix It

Evaluating Text-to-Visual Generation with Image-to-Text Generation

Better Synthetic Data by Retrieving and Transforming Existing Datasets

Graham Neubig Information

University: Carnegie Mellon University

Position: Associate Professor of Computer Science

Citations (all): 28308

Citations (since 2020): 24645

Cited by: 9096

h-index (all): 82

h-index (since 2020): 76

i10-index (all): 330

i10-index (since 2020): 280

Email:

University Profile Page: Carnegie Mellon University

Google Scholar: View Google Scholar Profile

Graham Neubig Skills & Research Interests

Natural Language Processing

Machine Learning

Top articles of Graham Neubig

Title: GlossLM: Multilingual Pretraining for Low-Resource Interlinear Glossing
Journal: arXiv preprint arXiv:2403.06399
Author(s): Michael Ginn, Lindia Tjuatja, Taiqi He, Enora Rice, Graham Neubig, ...
Publication Date: 2024/3/11

Title: Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes
Journal: arXiv preprint arXiv:2402.05406
Author(s): Lucio Dery, Steven Kolawole, Jean-Francois Kagey, Virginia Smith, Graham Neubig, ...
Publication Date: 2024/2/8

Title: An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models
Journal: arXiv preprint arXiv:2404.03028
Author(s): Emmy Liu, Graham Neubig, Jacob Andreas
Publication Date: 2024/4/3

Title: In-Context Learning with Long-Context Models: An In-Depth Exploration
Journal: arXiv preprint arXiv:2405.00200
Author(s): Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R Gormley, ...
Publication Date: 2024/4/30

Title: Can Large Language Models be Trusted for Evaluation? Scalable Meta-Evaluation of LLMs as Evaluators via Agent Debate
Journal: arXiv preprint arXiv:2401.16788
Author(s): Steffi Chern, Ethan Chern, Graham Neubig, Pengfei Liu
Publication Date: 2024/1/30

Title: What Is Missing in Multilingual Visual Reasoning and How to Fix It
Journal: arXiv preprint arXiv:2403.01404
Author(s): Yueqi Song, Simran Khanuja, Graham Neubig
Publication Date: 2024/3/3

Title: Evaluating Text-to-Visual Generation with Image-to-Text Generation
Journal: arXiv preprint arXiv:2404.01291
Author(s): Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, ...
Publication Date: 2024/4/1

Title: Better Synthetic Data by Retrieving and Transforming Existing Datasets
Journal: arXiv preprint arXiv:2404.14361
Author(s): Saumya Gandhi, Ritu Gala, Vijay Viswanathan, Tongshuang Wu, Graham Neubig
Publication Date: 2024/4/22

Title: Visualwebarena: Evaluating multimodal agents on realistic visual web tasks
Journal: arXiv preprint arXiv:2401.13649
Author(s): Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, ...
Publication Date: 2024/1/24

Title: An image speaks a thousand words, but can everyone listen? On translating images for cultural relevance
Journal: arXiv preprint arXiv:2404.01247
Author(s): Simran Khanuja, Sathyanarayanan Ramamoorthy, Yueqi Song, Graham Neubig
Publication Date: 2024/4/1

Title: VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?
Journal: arXiv preprint arXiv:2404.05955
Author(s): Junpeng Liu, Yifan Song, Bill Yuchen Lin, Wai Lam, Graham Neubig, ...
Publication Date: 2024/4/9

Title: TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks
Journal: arXiv preprint arXiv:2401.12869
Author(s): Zhiruo Wang, Daniel Fried, Graham Neubig
Publication Date: 2024/1/23

Title: Repetition Improves Language Model Embeddings
Journal: arXiv preprint arXiv:2402.15449
Author(s): Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan
Publication Date: 2024/2/23

Title: What Are Tools Anyway? A Survey from the Language Model Perspective
Journal: arXiv preprint arXiv:2403.15452
Author(s): Zhiruo Wang, Zhoujun Cheng, Hao Zhu, Daniel Fried, Graham Neubig
Publication Date: 2024/3/18

Title: Large Language Models Enable Few-Shot Clustering
Journal: TACL
Author(s): Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, Graham Neubig
Publication Date: 2023/7/2

Title: Fine-grained hallucination detection and editing for language models
Journal: arXiv preprint arXiv:2401.06855
Author(s): Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, ...
Publication Date: 2024/1/12

Title: RAGGED: Towards Informed Design of Retrieval Augmented Generation Systems
Journal: arXiv preprint arXiv:2403.09040
Author(s): Jennifer Hsia, Afreen Shaikh, Zhiruo Wang, Graham Neubig
Publication Date: 2024/3/14

Title: CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models
Journal: arXiv preprint arXiv:2404.02408
Author(s): Zaid Sheikh, Antonios Anastasopoulos, Shruti Rijhwani, Lindia Tjuatja, Robbie Jimerson, ...
Publication Date: 2024/4/3

Title: Instruction-tuned Language Models are Better Knowledge Learners
Journal: arXiv preprint arXiv:2402.12847
Author(s): Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, ...
Publication Date: 2024/2/20

Title: DIRE and its data: Neural decompiled variable renamings with respect to software class
Journal: ACM Transactions on Software Engineering and Methodology
Author(s): Luke Dramko, Jeremy Lacomis, Pengcheng Yin, Ed Schwartz, Miltiadis Allamanis, ...
Publication Date: 2023/3/29

Co-Authors

Tomoki Toda, Nagoya University (H-index: 56)

Satoshi Nakamura, Nara Institute of Science and Technology (H-index: 55)

Taylor Berg-Kirkpatrick, University of California, San Diego (H-index: 37)

Sakriani Sakti, Nara Institute of Science and Technology (H-index: 36)

Pengfei Liu, Carnegie Mellon University (H-index: 36)

Antonis Anastasopoulos, George Mason University (H-index: 29)
