Sho Takase

Tokyo Institute of Technology

H-index: 14

About Sho Takase

Sho Takase is a researcher at Tokyo Institute of Technology with an h-index of 14 (13 since 2020). His research focuses on Natural Language Processing, Machine Learning, and Neural Networks.

His recent articles reflect a broad range of research interests in these areas:

Device, method and program for natural language processing

Exploring effectiveness of GPT-3 in grammatical error correction: A study on performance and controllability in prompt-based methods

Nearest neighbor non-autoregressive text generation

Spike No More: Stabilizing the Pre-training of Large Language Models

Bridging the Gap between Subword and Character Segmentation in Pretrained Language Models

Dynamic Structured Neural Topic Model with Self-Attention Mechanism

B2T connection: Serving stability and performance in deep transformers

Word-level perturbation considering word length and compositional subwords

Sho Takase Information

University: Tokyo Institute of Technology

Position: ___

Citations (all): 755

Citations (since 2020): 645

Cited by: 284

h-index (all): 14

h-index (since 2020): 13

i10-index (all): 18

i10-index (since 2020): 15

Sho Takase Skills & Research Interests

Natural Language Processing

Machine Learning

Neural Networks

Top articles of Sho Takase

Device, method and program for natural language processing
2024/1/4

Exploring effectiveness of GPT-3 in grammatical error correction: A study on performance and controllability in prompt-based methods
arXiv preprint arXiv:2305.18156
Mengsay Loem, Masahiro Kaneko, Sho Takase, Naoaki Okazaki
2023/5/29

Nearest neighbor non-autoregressive text generation
Journal of Information Processing
Ayana Niwa, Sho Takase, Naoaki Okazaki
2023

Spike No More: Stabilizing the Pre-training of Large Language Models
arXiv preprint arXiv:2312.16903
Sho Takase, Shun Kiyono, Sosuke Kobayashi, Jun Suzuki
2023/12/28

Bridging the Gap between Subword and Character Segmentation in Pretrained Language Models
Shun Kiyono, Sho Takase, Shengzhe Li, Toshinori Sato
2023/9

Dynamic Structured Neural Topic Model with Self-Attention Mechanism
Nozomu Miyamoto, Masaru Isonuma, Sho Takase, Junichiro Mori, Ichiro Sakata
2023/7

B2T connection: Serving stability and performance in deep transformers
arXiv preprint arXiv:2206.00330
Sho Takase, Shun Kiyono, Sosuke Kobayashi, Jun Suzuki
2022/6/1

Word-level perturbation considering word length and compositional subwords
Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki
2022/5

Single Model Ensemble for Subword Regularized Models in Low-Resource Machine Translation
arXiv preprint arXiv:2203.13528
Sho Takase, Tatsuya Hiraoka, Naoaki Okazaki
2022/3/25

NT5 at WMT 2022 General Translation Task
Makoto Morishita, Keito Kudo, Yui Oka, Katsuki Chousa, Shun Kiyono, ...
2022/12

Interpretability for language learners using example-based grammatical error correction
arXiv preprint arXiv:2203.07085
Masahiro Kaneko, Sho Takase, Ayana Niwa, Naoaki Okazaki
2022/3/14

Are Neighbors Enough? Multi-Head Neural n-gram can be Alternative to Self-attention
arXiv preprint arXiv:2207.13354
Mengsay Loem, Sho Takase, Masahiro Kaneko, Naoaki Okazaki
2022/7/27

ExtraPhrase: Efficient data augmentation for abstractive summarization
arXiv preprint arXiv:2201.05313
Mengsay Loem, Sho Takase, Masahiro Kaneko, Naoaki Okazaki
2022/1/14

Information processing apparatus, information learning apparatus, information processing method, information learning method and program
2022/7/21

Word coding device, analysis device, language model learning device, method, and program
2021/11/4

Recurrent neural hidden Markov model for high-order transition
Transactions on Asian and Low-Resource Language Information Processing
Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki
2021/10/31

Joint optimization of tokenization and downstream model
arXiv preprint arXiv:2105.12410
Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki
2021/5/26

Lessons on parameter sharing across layers in transformers
arXiv preprint arXiv:2104.06022
Sho Takase, Shun Kiyono
2021/4/13

Rethinking perturbations in encoder-decoders for fast training
arXiv preprint arXiv:2104.01853
Sho Takase, Shun Kiyono
2021/4/5

Optimizing word segmentation for downstream task
Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki
2020/11

Co-Authors

Kentaro Inui, Tohoku University (H-index: 45)

Jun Suzuki, Tohoku University (H-index: 42)
