Xiaozhi Wang

About Xiaozhi Wang

Xiaozhi Wang is a distinguished researcher at Tsinghua University specializing in Natural Language Processing and Knowledge Graphs, with an h-index of 16 overall and 16 since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

Event-level Knowledge Editing

KoLA: Carefully Benchmarking World Knowledge of Large Language Models

The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation

Emergent Modularity in Pre-trained Transformers

Sub-character Tokenization for Chinese Pretrained Language Models

ChatLog: Recording and Analyzing ChatGPT Across Time

Parameter-efficient fine-tuning of large-scale pre-trained language models

Human Emotion Knowledge Representation Emerges in Large Language Models and Supports Discrete Emotion Inference

Xiaozhi Wang Information

University: Tsinghua University

Position: ___

Citations (all): 1664

Citations (since 2020): 1664

Cited by: 111

h-index (all): 16

h-index (since 2020): 16

i10-index (all): 17

i10-index (since 2020): 17


Xiaozhi Wang Skills & Research Interests

Natural Language Processing

Knowledge Graph

Top articles of Xiaozhi Wang

Event-level Knowledge Editing

arXiv preprint arXiv:2402.13093

2024/2/20

The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation

arXiv preprint arXiv:2306.06918

2023/6/12

Emergent Modularity in Pre-trained Transformers

arXiv preprint arXiv:2305.18390

2023/5/28

Sub-character Tokenization for Chinese Pretrained Language Models

Transactions of the Association for Computational Linguistics

2023/5/18

ChatLog: Recording and Analyzing ChatGPT Across Time

arXiv preprint arXiv:2304.14106

2023/4/27

Human Emotion Knowledge Representation Emerges in Large Language Models and Supports Discrete Emotion Inference

arXiv preprint arXiv:2302.09582

2023/2/19

READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

arXiv preprint arXiv:2302.07324

2023/2/14

OmniEvent: A comprehensive, fair, and easy-to-use toolkit for event understanding

arXiv preprint arXiv:2309.14258

2023/9/25

Benchmarking Foundation Models with Language-Model-as-an-Examiner

Advances in Neural Information Processing Systems

2024/2/13

Preserving knowledge invariance: Rethinking robustness evaluation of open information extraction

arXiv preprint arXiv:2305.13981

2023/5/23

MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation

arXiv preprint arXiv:2311.09105

2023/11/15

When Does In-Context Learning Fall Short and Why? A Study on Specification-Heavy Tasks

arXiv preprint arXiv:2311.08993

2023/11/15

GOAL: A challenging knowledge-grounded video captioning benchmark for real-time soccer commentary generation

2023/10/21

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

arXiv preprint arXiv:2211.07342

2022/11/14

Finding Skill Neurons in Pre-trained Transformer-based Language Models

arXiv preprint arXiv:2211.07349

2022/11/14

COPEN: Probing Conceptual Knowledge in Pre-trained Language Models

arXiv preprint arXiv:2211.04079

2022/11/8

CStory: A Chinese Large-scale News Storyline Dataset

2022/10/17
