Tianyi Zhou

University of Washington

H-index: 30

North America - United States

About Tianyi Zhou

Tianyi Zhou is a researcher at the University of Washington with an h-index of 30 (27 since 2020). He specializes in Machine Learning, Artificial Intelligence, and Natural Language Processing.

His recent articles reflect a diverse array of research interests and contributions to the field:

Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation

Many-Objective Multi-Solution Transport

Meta-Task Prompting Elicits Embedding from Large Language Models

Corpus-Steered Query Expansion with Large Language Models

DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers

A Survey on Knowledge Distillation of Large Language Models

MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion

Can LLMs Speak for Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements

Tianyi Zhou Information

University: University of Washington

Position: ___

Citations (all): 4955

Citations (since 2020): 3844

Cited By: 2149

h-index (all): 30

h-index (since 2020): 27

i10-index (all): 58

i10-index (since 2020): 53

Tianyi Zhou Skills & Research Interests

Machine Learning

Artificial Intelligence

Natural Language Processing

Top articles of Tianyi Zhou

Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation

arXiv preprint arXiv:2404.12754

2024/4/19

Many-Objective Multi-Solution Transport

arXiv preprint arXiv:2403.04099

2024/3/6

Meta-Task Prompting Elicits Embedding from Large Language Models

arXiv preprint arXiv:2402.18458

2024/2/28

Corpus-Steered Query Expansion with Large Language Models

arXiv preprint arXiv:2402.18031

2024/2/28

DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers

arXiv preprint arXiv:2402.16914

2024/2/25

A Survey on Knowledge Distillation of Large Language Models

arXiv preprint arXiv:2402.13116

2024/2/20

MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion

arXiv preprint arXiv:2402.12741

2024/2/20

Can LLMs Speak for Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements

arXiv preprint arXiv:2402.10614

2024/2/16

Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning

arXiv preprint arXiv:2402.10110

2024/2/15

ODIN: Disentangled Reward Mitigates Hacking in RLHF

arXiv preprint arXiv:2402.07319

2024/2/11

Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning

arXiv preprint arXiv:2402.00530

2024/2/1

A Time-Consistency Curriculum for Learning from Instance-Dependent Noisy Labels

IEEE Transactions on Pattern Analysis and Machine Intelligence

2024/2/1

Does Continual Learning Equally Forget All Parameters?

2023/4/9

Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning

arXiv preprint arXiv:2310.11716

2023/10/18

When Do You Need Chain-of-Thought Prompting for ChatGPT?

arXiv preprint arXiv:2304.03262

2023/4/6

How Many Demonstrations Do You Need for In-context Learning?

arXiv preprint arXiv:2303.08119

2023/3/14

Merging Experts into One: Improving Computational Efficiency of Mixture of Experts

arXiv preprint arXiv:2310.09832

2023/10/15

Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks

ECML/PKDD 2023, arXiv preprint arXiv:2301.11560

2023/1/27
