Young Jin Kim
Georgia Institute of Technology
H-index: 10
North America, United States
Top articles by Young Jin Kim
Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation
arXiv preprint arXiv:2401.08417
2024/1/16
PEMA: Plug-in External Memory Adaptation for Language Models
arXiv preprint arXiv:2311.08590
2023/11/14
Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness
arXiv preprint arXiv:2310.02410
2023/10/3
A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models
arXiv preprint arXiv:2309.11674
2023/9/20
Task-Based MoE for Multitask Multilingual Machine Translation
arXiv preprint arXiv:2308.15772
2023/8/30
FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs
arXiv preprint arXiv:2308.09723
2023/8/16
How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation
arXiv preprint arXiv:2302.09210
2023/2/18
Who Says Elephants Can't Run: Bringing Large Scale MoE Models into Cloud Scale Production
arXiv preprint arXiv:2211.10017
2022/11/18
AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation
arXiv preprint arXiv:2210.07535
2022/10/14
AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers
2022/9/29
Fast Vocabulary Projection Method via Clustering for Multilingual Machine Translation on GPU
arXiv preprint arXiv:2208.06874
2022/8/14
Gating Dropout: Communication-Efficient Regularization for Sparsely Activated Transformers
2022/6/28
Taming Sparsely Activated Transformer with Stochastic Experts
arXiv preprint arXiv:2110.04260
2021/10/8
Artificial Neural Network Models for Airport Capacity Prediction
Journal of Air Transport Management
2021/10/1
Scalable and Efficient MoE Training for Multitask Multilingual Models
arXiv preprint arXiv:2109.10465
2021/9/22
FastFormers: Highly Efficient Transformer Models for Natural Language Understanding
arXiv preprint arXiv:2010.13382
2020/10/26