Yury Zemlyanskiy

About Yury Zemlyanskiy

Yury Zemlyanskiy is a researcher at the University of Southern California specializing in machine learning, artificial intelligence, and natural language processing. He has an h-index of 8, both overall and since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

MEMORY-VQ: Compression for Tractable Internet-Scale Memory

Glimmer: generalized late-interaction memory reranker

GQA: Training generalized multi-query transformer models from multi-head checkpoints

CoLT5: Faster long-range transformers with conditional computation

Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute

Arithmetic Sampling: Parallel Diverse Decoding for Large Language Models

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing

Yury Zemlyanskiy Information

University: University of Southern California

Position: ___

Citations (all): 255

Citations (since 2020): 235

Cited by: 58

h-index (all): 8

h-index (since 2020): 8

i10-index (all): 8

i10-index (since 2020): 8


Yury Zemlyanskiy Skills & Research Interests

machine learning

artificial intelligence

natural language processing

Top articles of Yury Zemlyanskiy

MEMORY-VQ: Compression for Tractable Internet-Scale Memory

arXiv preprint arXiv:2308.14903

2023/8/28

Glimmer: generalized late-interaction memory reranker

arXiv preprint arXiv:2306.10231

2023/6/17

GQA: Training generalized multi-query transformer models from multi-head checkpoints

arXiv preprint arXiv:2305.13245

2023/5/22

CoLT5: Faster long-range transformers with conditional computation

arXiv preprint arXiv:2303.09752

2023/3/17

Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute

2023/7/3

Arithmetic Sampling: Parallel Diverse Decoding for Large Language Models

2023/7/3

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

arXiv preprint arXiv:2212.08153

2022/12/15

Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing

arXiv preprint arXiv:2209.14899

2022/9/29

Mention Memory: incorporating textual knowledge into Transformers through entity mention attention

arXiv preprint arXiv:2110.06176

2021/10/12

ReadTwice: Reading Very Large Documents with Memories

2021

DOCENT: Learning Self-Supervised Entity Representations from Large Document Collections

arXiv preprint arXiv:2102.13247

2021
