Mamoru Komachi

Tokyo Metropolitan University

H-index: 24

Asia-Japan

About Mamoru Komachi

Mamoru Komachi is a distinguished researcher at Tokyo Metropolitan University, with an h-index of 24 overall and 19 since 2020. He specializes in Computational Linguistics, Natural Language Processing, Machine Learning, and Deep Learning.

His recent articles reflect a diverse array of research interests and contributions to the field:

A Comparative Study of Relation Classification Approaches for Japanese Discourse Relation Analysis

Revisiting Meta-evaluation for Grammatical Error Correction

Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction

WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia

Effectiveness of Pre-Trained Language Models for the Japanese Winograd Schema Challenge

ClozEx: A Task toward Generation of English Cloze Explanation

TMU Feedback Comment Generation System Using Pretrained Sequence-to-Sequence Language Models

Analysis of Semantic Change in Japanese Using BERT

Mamoru Komachi Information

University

Tokyo Metropolitan University

Position

Associate Professor at Tokyo Metropolitan University

Citations (all)

2747

Citations (since 2020)

1634

Cited By

1616

h-index (all)

24

h-index (since 2020)

19

i10-index (all)

74

i10-index (since 2020)

45

Email

University Profile Page

Tokyo Metropolitan University

Mamoru Komachi Skills & Research Interests

Computational Linguistics

Natural Language Processing

Machine Learning

Deep Learning

Top articles of Mamoru Komachi

A Comparative Study of Relation Classification Approaches for Japanese Discourse Relation Analysis

Authors

Keigo Takahashi,Teruaki Oka,Mamoru Komachi,Yasufumi Takama

Journal

Journal of Advanced Computational Intelligence and Intelligent Informatics

Published Date

2024/3/20

This paper presents a comparative analysis of classification approaches in the Japanese discourse relation analysis (DRA) task. In the Japanese DRA task, it is difficult to resolve implicit relations where explicit discourse phrases do not appear. To understand implicit relations further, we compared four approaches that incorporate a special token to encode the relations of the given discourses. The four approaches insert a special token at the beginning of a sentence, the end of a sentence, a conjunctive position, or a random position to classify the relation between the two discourses into one of the following categories: CAUSE/REASON, CONCESSION, CONDITION, PURPOSE, GROUND, CONTRAST, and NONE. Our experimental results revealed that special tokens can encode the relations of the given discourses more effectively than pooling-based approaches. In particular, random insertion of a special token outperforms the other approaches, including pooling-based approaches, in the most numerous CAUSE/REASON category of implicit relations and in categories with few instances. Moreover, we classified the errors in the relation analysis into three categories for further improvement: confounded phrases, ambiguous relations, and relations requiring world knowledge.
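
The four insertion strategies are easy to picture on a tokenized input. Below is a minimal, hypothetical sketch (not the authors' code): a reserved token such as [REL] is inserted at the beginning, the end, a given conjunctive position, or a random position before the sequence is passed to a sentence-pair classifier. The token name and the toy example are assumptions for illustration.

```python
import random

RELATIONS = ["CAUSE/REASON", "CONCESSION", "CONDITION",
             "PURPOSE", "GROUND", "CONTRAST", "NONE"]

def insert_special_token(tokens, strategy, special="[REL]", conj_index=None, seed=None):
    """Return a copy of `tokens` with one special token inserted.

    strategy: "begin" | "end" | "conjunctive" | "random"
    conj_index: position of the conjunctive phrase (required for "conjunctive")
    """
    tokens = list(tokens)
    if strategy == "begin":
        pos = 0
    elif strategy == "end":
        pos = len(tokens)
    elif strategy == "conjunctive":
        if conj_index is None:
            raise ValueError("conjunctive strategy needs conj_index")
        pos = conj_index
    elif strategy == "random":
        pos = random.Random(seed).randint(0, len(tokens))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    tokens.insert(pos, special)
    return tokens

# Example: two discourse segments joined at a conjunctive position.
tokens = "it rained heavily so the game was cancelled".split()
print(insert_special_token(tokens, "conjunctive", conj_index=3))
# ['it', 'rained', 'heavily', '[REL]', 'so', 'the', 'game', 'was', 'cancelled']
```

In the classifier, the hidden state of the inserted token would then serve as the classification feature in place of pooling.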

Revisiting Meta-evaluation for Grammatical Error Correction

Authors

Masamune Kobayashi,Masato Mita,Mamoru Komachi

Journal

arXiv preprint arXiv:2403.02674

Published Date

2024/3/5

Metrics are the foundation of automatic evaluation in grammatical error correction (GEC), and the evaluation of the metrics themselves (meta-evaluation) relies on their correlation with human judgments. However, conventional meta-evaluations in English GEC encounter several challenges, including biases caused by inconsistencies in evaluation granularity and an outdated setup using classical systems. These problems can lead to misinterpretation of metrics and potentially hinder the applicability of GEC techniques. To address these issues, this paper proposes SEEDA, a new dataset for GEC meta-evaluation. SEEDA consists of corrections with human ratings along two different granularities, edit-based and sentence-based, covering 12 state-of-the-art systems including large language models (LLMs) and two human corrections with different focuses. Improved correlations obtained by aligning the granularity in sentence-level meta-evaluation suggest that edit-based metrics may have been underestimated in existing studies. Furthermore, correlations of most metrics decrease when changing from classical to neural systems, indicating that traditional metrics are relatively poor at evaluating fluently corrected sentences with many edits.
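
Meta-evaluation of a GEC metric ultimately reduces to correlating its scores with human judgments. Below is a minimal sketch with made-up numbers (not SEEDA data), assuming one aggregated score per system from both the human raters and the automatic metric.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical system-level scores (not from SEEDA): one entry per GEC system.
human_scores  = [0.62, 0.55, 0.71, 0.48, 0.66]   # e.g. averaged human ratings
metric_scores = [71.2, 68.4, 74.9, 60.1, 70.3]   # e.g. an automatic metric

pearson, _ = pearsonr(human_scores, metric_scores)
spearman, _ = spearmanr(human_scores, metric_scores)
print(f"Pearson r = {pearson:.3f}, Spearman rho = {spearman:.3f}")
```

SEEDA's point is that this correlation should be computed at a granularity (edit-based or sentence-based) that matches the granularity at which the metric itself operates.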

Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction

Authors

Masamune Kobayashi,Masato Mita,Mamoru Komachi

Journal

arXiv preprint arXiv:2403.17540

Published Date

2024/3/26

Large Language Models (LLMs) have been reported to outperform existing automatic evaluation metrics in some tasks, such as text summarization and machine translation. However, there has been a lack of research on LLMs as evaluators in grammatical error correction (GEC). In this study, we investigate the performance of LLMs in GEC evaluation by employing prompts designed to incorporate various evaluation criteria inspired by previous research. Our extensive experimental results demonstrate that GPT-4 achieved a Kendall's rank correlation of 0.662 with human judgments, surpassing all existing methods. Furthermore, in recent GEC evaluations, we underscore the significance of LLM scale and particularly emphasize the importance of fluency among the evaluation criteria.
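
The evaluation recipe is essentially "prompt an LLM with criteria, parse a numeric score, and correlate with human judgments". The prompt below is a hypothetical paraphrase of that setup, not the exact prompt from the paper, and the regex-based score parser is likewise an assumption.

```python
import re

CRITERIA = "grammaticality, fluency, and meaning preservation"

def build_eval_prompt(source: str, correction: str) -> str:
    """Assemble a rubric-style evaluation prompt (hypothetical wording)."""
    return (
        f"You are evaluating a grammatical error correction system.\n"
        f"Considering {CRITERIA}, rate the corrected sentence from 1 (worst) to 5 (best).\n"
        f"Source: {source}\n"
        f"Correction: {correction}\n"
        f"Answer with a single number."
    )

def parse_score(llm_reply: str) -> int:
    """Extract the first integer 1-5 from the model's reply."""
    match = re.search(r"[1-5]", llm_reply)
    if match is None:
        raise ValueError(f"no score found in: {llm_reply!r}")
    return int(match.group())

prompt = build_eval_prompt("He go to school yesterday.", "He went to school yesterday.")
print(prompt)
print(parse_score("Score: 5"))  # -> 5
```

Given one parsed score per correction, a rank correlation such as scipy.stats.kendalltau against the human rankings yields the kind of figure (0.662) reported above.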

WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia

Authors

Kenichiro Ando,Satoshi Sekine,Mamoru Komachi

Journal

Proceedings of the AAAI Conference on Artificial Intelligence

Published Date

2024/3/24

Wikipedia can be edited by anyone and thus contains sentences of varying quality. It therefore includes some poor-quality edits, which are often marked up by other editors. While editors' reviews enhance the credibility of Wikipedia, it is hard to check all edited text. Assisting in this process is very important, but a large and comprehensive dataset for studying it does not currently exist. Here, we propose WikiSQE, the first large-scale dataset for sentence quality estimation in Wikipedia. Each sentence is extracted from the entire revision history of English Wikipedia, and the target quality labels were carefully investigated and selected. WikiSQE has about 3.4 M sentences with 153 quality labels. In experiments with automatic classification using competitive machine learning models, sentences that had problems with citation, syntax/semantics, or propositions were found to be more difficult to detect. In addition, by performing human annotation, we found that the model we developed performed better than the crowdsourced workers. WikiSQE is expected to be a valuable resource for other tasks in NLP.

Effectiveness of Pre-Trained Language Models for the Japanese Winograd Schema Challenge

Authors

Keigo Takahashi,Teruaki Oka,Mamoru Komachi

Journal

Journal of Advanced Computational Intelligence and Intelligent Informatics

Published Date

2023/5/20

This paper compares Japanese and multilingual language models (LMs) in a Japanese pronoun reference resolution task to determine the factors of LMs that contribute to Japanese pronoun resolution. Specifically, we tackle the Japanese Winograd schema challenge task (WSC task), which is a well-known pronoun reference resolution task. The Japanese WSC task requires inter-sentential analysis, which is more challenging to solve than intra-sentential analysis. A previous study evaluated pre-trained multilingual LMs in terms of training language on the target WSC task, including Japanese. However, the study did not perform pre-trained LM-wise evaluations, focusing on the training language-wise evaluations with a multilingual WSC task. Furthermore, it did not investigate the effectiveness of factors (e.g., model size, learning settings in the pre-training phase, or multilingualism) in improving performance. In our study, we compare the performance of inter-sentential analysis on the Japanese WSC task for several pre-trained LMs, including multilingual ones. Our results confirm that XLM, a pre-trained LM on multiple languages, performs the best among all considered LMs, which we attribute to the amount of data in the pre-training phase.

ClozEx: A Task toward Generation of English Cloze Explanation

Authors

Zizheng Zhang,Masato Mita,Mamoru Komachi

Published Date

2023/12/1

Providing explanations for cloze questions in language assessment (LA) has been recognized as a valuable approach to enhancing the language proficiency of learners. However, there is a noticeable absence of dedicated tasks and datasets specifically designed for generating language learner explanations. In response to this gap, this paper introduces a novel task, ClozEx, of generating explanations for cloze questions in LA, with a particular focus on English as a Second Language (ESL) learners. To support this task, we present a meticulously curated dataset comprising cloze questions paired with corresponding explanations. This dataset aims to assess language proficiency and facilitate language learning by offering informative and accurate explanations. To tackle the task, we fine-tuned various baseline models with our training data, including encoder-decoder and decoder-only architectures. We also explored whether large language models (LLMs) are able to generate good explanations without fine-tuning, using only pre-defined prompts. The evaluation results demonstrate that encoder-decoder models have the potential to deliver fluent and valid explanations when trained on our dataset.

TMU Feedback Comment Generation System Using Pretrained Sequence-to-Sequence Language Models

Authors

Naoya Ueda,Mamoru Komachi

Published Date

2023/9

In this paper, we introduce our Tokyo Metropolitan University Feedback Comment Generation system submitted to the feedback comment generation task for INLG 2023 Generation Challenge. In this task, a source sentence and offset range of preposition uses are given as the input. Then, a system generates hints or explanatory notes about preposition uses as the output. To tackle this generation task, we finetuned pretrained sequence-to-sequence language models. The models using BART and T5 showed significant improvement in BLEU score, demonstrating the effectiveness of the pretrained sequence-to-sequence language models in this task. We found that using part-of-speech tag information as an auxiliary input improves the generation quality of feedback comments. Furthermore, we adopt a simple postprocessing method that can enhance the reliability of the generation. As a result, our system achieved the F1 score of 47.4 points in BLEU-based evaluation and 60.9 points in manual evaluation, which ranked second and third on the leaderboard.
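
One of the reported gains came from adding part-of-speech information as an auxiliary input. The sketch below shows one way such an input could be serialized for a pretrained sequence-to-sequence model; the marker tokens, tag format, and offset handling are illustrative assumptions, not the system's actual preprocessing.

```python
def build_source(sentence: str, offset: tuple[int, int], pos_tags: list[str]) -> str:
    """Serialize sentence + offset-marked preposition + POS tags into one input string.

    offset marks the character span of the preposition use that the comment should target.
    """
    start, end = offset
    marked = sentence[:start] + "<p> " + sentence[start:end] + " </p>" + sentence[end:]
    return marked + " <pos> " + " ".join(pos_tags)

sentence = "I am interested on science."
offset = (16, 18)                                  # the span of "on"
pos_tags = ["PRP", "VBP", "JJ", "IN", "NN", "."]   # tags assumed to be provided upstream
print(build_source(sentence, offset, pos_tags))
# I am interested <p> on </p> science. <pos> PRP VBP JJ IN NN .
```

The serialized string would then be paired with the reference feedback comment as the target side during BART/T5 fine-tuning.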

Analysis of Semantic Change in Japanese Using BERT (BERT を用いた日本語の意味変化の分析)

Authors

小町守

Journal

Journal of Natural Language Processing (自然言語処理)

Published Date

2023

CiNii Research record (open access). Full author list: 小林千真, 相田太一, 岡照晃, 小町守 (Tokyo Metropolitan University); published in the Journal of Natural Language Processing (自然言語処理).

Cloze Quality Estimation for Language Assessment

Authors

Zizheng Zhang,Masato Mita,Mamoru Komachi

Published Date

2023/5

Cloze tests play an essential role in language assessment and help language learners improve their skills. In this paper, we propose a novel task called Cloze Quality Estimation (CQE): a zero-shot task of evaluating whether a cloze test is of sufficiently high quality for language assessment based on two important factors, reliability and validity. We have taken the first step by creating a new dataset named CELA for the CQE task, which includes English cloze tests and corresponding quality evaluations annotated by native English speakers, comprising 2,597 and 1,730 instances in the aspects of reliability and validity, respectively. We have tested baseline evaluation methods on the dataset, showing that our method could contribute to the CQE task, but the task is still challenging.

Transferring Soft Prompts to Large Language Models via In-Vocabulary Tokens (語彙内トークンを媒介とした大規模言語モデルへのソフトプロンプトの転移)

Authors

中島京太郎, 金輝燦, 平澤寅庄, 岡照晃, 小町守

Journal

IPSJ SIG Technical Report, Natural Language Processing (NL)

Published Date

2023/8/25

Prompt tuning is a method that learns prompts represented as embeddings (soft prompts) from the supervision signal of a downstream task. A soft prompt is given to the model together with the input sentence and has the effect of adding the learned downstream-task information to the input. Prompt tuning can achieve performance comparable to fine-tuning while updating only a small number of parameters. However, prompt tuning of a large language model requires a large amount of computation and time. In this study, we propose a method that replaces soft prompts learned on a small language model with tokens from the model's vocabulary and transfers them to a large language model whose parameters remain frozen. When the performance of the proposed method was compared on classification tasks, it outperformed manually crafted prompts. Compared with soft prompts, the proposed method performed somewhat worse, but it reduced GPU memory usage and the time required for convergence.
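
The core trick of transferring a learned soft prompt is to replace each prompt embedding with its nearest in-vocabulary token. Below is a minimal numpy sketch of that projection step, under the assumption that nearest neighbors are taken by cosine similarity in the small model's embedding space; the toy dimensions are illustrative only.

```python
import numpy as np

def project_to_vocab(soft_prompt: np.ndarray, embedding_matrix: np.ndarray) -> list[int]:
    """Map each soft-prompt vector to the id of its most similar vocabulary embedding.

    soft_prompt:       (prompt_len, dim) learned prompt embeddings
    embedding_matrix:  (vocab_size, dim) input embeddings of the small model
    """
    p = soft_prompt / np.linalg.norm(soft_prompt, axis=1, keepdims=True)
    e = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    sims = p @ e.T                       # (prompt_len, vocab_size) cosine similarities
    return sims.argmax(axis=1).tolist()  # one token id per prompt position

rng = np.random.default_rng(0)
soft_prompt = rng.normal(size=(5, 16))      # toy prompt of length 5
embeddings = rng.normal(size=(100, 16))     # toy vocabulary of 100 tokens
token_ids = project_to_vocab(soft_prompt, embeddings)
print(token_ids)  # these discrete tokens can be handed to a larger, frozen model
```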

Visual Prediction Improves Zero-Shot Cross-Modal Machine Translation

Authors

Tosho Hirasawa,Emanuele Bugliarello,Desmond Elliott,Mamoru Komachi

Published Date

2023/12

Multimodal machine translation (MMT) systems have been successfully developed in recent years for a few language pairs. However, training such models usually requires tuples of a source language text, target language text, and images. Obtaining these data involves expensive human annotations, making it difficult to develop models for unseen text-only language pairs. In this work, we propose the task of zero-shot cross-modal machine translation aiming to transfer multimodal knowledge from an existing multimodal parallel corpus into a new translation direction. We also introduce a novel MMT model with a visual prediction network to learn visual features grounded on multimodal parallel data and provide pseudo-features for text-only language pairs. With this training paradigm, our MMT model outperforms its text-only counterpart. In our extensive analyses, we show that (i) the selection of visual features is important, and (ii) training on image-aware translations and being grounded on a similar language pair are mandatory.
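
One way to picture the visual prediction network is as a small module that maps text-side representations into the visual feature space; it is trained on the multimodal parallel corpus and later used to supply pseudo-features for text-only pairs. The PyTorch sketch below is a toy version under assumed dimensions and an assumed MSE objective, not the paper's architecture.

```python
import torch
from torch import nn

class VisualPredictor(nn.Module):
    """Predict an image-feature vector from a pooled text representation (toy version)."""
    def __init__(self, text_dim: int = 512, image_dim: int = 2048, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, image_dim),
        )

    def forward(self, pooled_text: torch.Tensor) -> torch.Tensor:
        return self.net(pooled_text)

predictor = VisualPredictor()
pooled_text = torch.randn(8, 512)   # batch of pooled source-text encodings
real_visual = torch.randn(8, 2048)  # image features from the multimodal corpus
loss = nn.MSELoss()(predictor(pooled_text), real_visual)
loss.backward()
# At inference on a text-only language pair, predictor(pooled_text) supplies pseudo-features.
```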

Joint Learning of Word Embeddings That Capture Meaning Differences across Time Periods (異なる時期での意味の違いを捉える単語分散表現の結合学習)

Authors

相田太一, 小町守, 小木曽智信, 高村大也, 持橋大地

Journal

Journal of Natural Language Processing (自然言語処理)

Published Date

2023

Words can take on different meanings and usages depending on the period or domain, and in natural language processing such differences are detected using word embeddings. Recently, studies using BERT and other models that produce context-aware word representations have also become common, but linguists and sociologists without large computational resources find it difficult to apply such methods. In this paper, we extend an existing method that jointly learns word embeddings across documents to the task of detecting differences in word meaning between two documents. Experimental results show that our method performs as well as or better than existing methods not only in English experiments and on SemEval-2020 Task 1, but also in Japanese experiments, which had not been attempted before. A comparison of the training time each method needs to obtain word embeddings also shows that the proposed method trains faster than existing methods. Furthermore, using the proposed embedding method, we conduct a comprehensive analysis of Japanese data, including which words changed meaning and the types and tendencies of those changes.
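
Once the jointly learned embeddings for the two periods live in a shared space, semantic change can be scored by plain vector distance. Below is a minimal numpy sketch with toy vectors standing in for the learned embeddings; the words and scores are illustrative only.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings for the same words trained on two time periods in a shared space.
rng = np.random.default_rng(1)
vocab = ["やばい", "携帯", "犬", "手紙"]
period_a = {w: rng.normal(size=50) for w in vocab}
period_b = {w: rng.normal(size=50) for w in vocab}

# Rank words by how far their vectors moved between the two periods.
change = {w: cosine_distance(period_a[w], period_b[w]) for w in vocab}
for word, score in sorted(change.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}\t{score:.3f}")
```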

Does Masked Language Model Pre-training with Artificial Data Improve Low-resource Neural Machine Translation?

Authors

Hiroto Tamura,Tosho Hirasawa,Hwichan Kim,Mamoru Komachi

Published Date

2023/5

Pre-training masked language models (MLMs) with artificial data has been proven beneficial for several natural language processing tasks such as natural language understanding and summarization; however, it has been less explored for neural machine translation (NMT). A previous study revealed the benefit of transfer learning for NMT in a limited setup, which differs from MLM. In this study, we prepared two kinds of artificial data and compared the translation performance of NMT when pre-trained with MLM. In addition to the random sequences, we created artificial data mimicking token frequency information from the real world. Our results showed that pre-training the models with artificial data by MLM improves translation performance in low-resource situations. Additionally, we found that pre-training on artificial data created considering token frequency information facilitates improved performance.
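
One way to read "artificial data mimicking token frequency information" is to sample token ids from a Zipf-like distribution rather than uniformly. The sketch below contrasts the two; the exponent and the sentence construction are assumptions, not the paper's exact recipe.

```python
import numpy as np

def artificial_corpus(vocab_size: int, n_sentences: int, length: int,
                      alpha: float = 1.2, seed: int = 0) -> np.ndarray:
    """Sample sentences of token ids whose unigram frequencies follow a power law."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, vocab_size + 1)
    probs = ranks ** (-alpha)
    probs /= probs.sum()
    return rng.choice(vocab_size, size=(n_sentences, length), p=probs)

zipfian = artificial_corpus(vocab_size=8000, n_sentences=5, length=12)
uniform = np.random.default_rng(0).integers(0, 8000, size=(5, 12))  # random-sequence baseline
print(zipfian[0])  # ids skew toward small values, i.e. frequent "tokens"
print(uniform[0])  # ids spread uniformly over the vocabulary
```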

Query Generation Using GPT-3 for CLIP-Based Word Sense Disambiguation for Image Retrieval

Authors

Xiaomeng Pan,Zhousi Chen,Mamoru Komachi

Published Date

2023/7

In this study, we propose using GPT-3 as a query generator for the backend of CLIP as an implicit word sense disambiguation (WSD) component for the SemEval 2023 shared task Visual Word Sense Disambiguation (VWSD). We confirmed previous findings: human-like prompts adapted for WSD with quotes benefit both CLIP and GPT-3, whereas plain phrases or poorly templated prompts give the worst results.

Revisiting Meta-evaluation for Grammatical Error Correction (文法誤り訂正におけるメタ評価の再考)

Authors

小林正宗, 三田雅人, 小町守

Journal

IPSJ SIG Technical Report, Natural Language Processing (NL)

Published Date

2023/11/25

Evaluation metrics are the foundation of automatic evaluation in grammatical error correction, and the evaluation of the metrics themselves (meta-evaluation) is mainly based on correlation with human evaluation. However, conventional meta-evaluation in grammatical error correction faces several problems, such as biases caused by mismatches in evaluation granularity between metrics and datasets, and setups based on classical systems that diverge from the current mainstream. These problems not only lead to misinterpretation of evaluation metrics but may consequently hinder the development of grammatical error correction technology. To address them, this study proposes SEEDA (Sentence-based and Edit-based human Evaluation DAtaset for GEC), a new dataset for more reliable meta-evaluation of grammatical error correction. SEEDA consists of corrected sentences with human evaluations along two different granularities (edit-based and sentence-based), covering 12 state-of-the-art systems including large language models and two types of human corrections. Correlation analysis shows that aligning the granularity in sentence-level meta-evaluation improves correlations, suggesting that edit-based metrics may have been underestimated in existing studies. Furthermore, the correlations of most metrics drop when the systems being compared are changed from classical to neural ones, indicating that conventional metrics are ill-suited to evaluating fluent corrections with many edits.

Translating Classical Chinese with Neural Machine Translation: Examining Temporal Differences between Training and Evaluation Data (ニューラル機械翻訳を使った中国語古文の翻訳-訓練・評価時の時間的差異の検証)

Authors

段文傑, 王鴻飛, 岡照晃, 小町守, 古宮嘉那子

Journal

Proceedings of Jinmoncon 2023 (じんもんこん 2023 論文集)

Published Date

2023/12/2

As time passes, vocabulary and grammar are likely to change. We therefore hypothesized that the performance of a translation model for Classical Chinese degrades as the temporal gap between the training data and the evaluation data grows. In this paper, we trained neural machine translation models on Chinese parallel corpora from different time spans and investigated their performance in translating classical texts into modern Chinese. We also examined the effectiveness of using a pretrained model as contextual embeddings. The investigation showed that the further apart the eras of the training and evaluation data are, the worse the translation model performs. In addition, the way the pretrained model was used in this study did not improve translation performance.

NLP2023 Theme Session: Evaluation and Quality Estimation of Language (NLP2023 テーマセッション 「ことばの評価と品質推定」)

Authors

須藤克仁, 小町守, 梶原智之, 三田雅人

Journal

Journal of Natural Language Processing (自然言語処理)

Published Date

2023

The progress of computational language generation technology since the late 2010s has been remarkable. In particular, large-scale pre-trained models, which have developed rapidly since around 2020, continue to grow in model size and to improve performance on applied tasks, backed by enormous amounts of raw text data and ever-expanding computational resources. ChatGPT, released in November 2022, and GPT-4, released in March 2023, have become widely known to the general public, and computer-generated language is showing signs of spreading into everyday life. Against this background, the "productivity" of language across society as a whole is expected to rise substantially. Whereas society has so far tended to treat language produced by humans and language generated by computers as something close to a dichotomy, language that humans compose with the help of computers, or that computers produce on behalf of humans who give the instructions, will become impossible to ignore. Anticipating such an era, using language, the medium of communication, record-keeping, and thought, correctly will become increasingly important for ensuring the quality of mutual understanding through language. Natural language processing has so far addressed both the scoring of language produced by humans, such as essay scoring, and the scoring of language generated by computers, as in machine translation and automatic summarization. The former has tended to emphasize aspects such as language proficiency and composition ability, while the latter has emphasized semantic equivalence with the source or reference text. This difference stems from the different purposes of essays versus translation or summarization, but it also reflects the fact that the quality of computer-generated language used to be so low that evaluation had to focus on relatively low-level qualities, such as naturalness as language, the "accuracy" of the conveyed content, or even surface-level overlap, rather than on higher-level qualities. In light of the advances in language generation technology described above, the authors believe that for computational language generation as well, higher …

Discontinuous Combinatory Constituency Parsing

Authors

Zhousi Chen,Mamoru Komachi

Journal

Transactions of the Association for Computational Linguistics

Published Date

2023/3/22

We extend a pair of continuous combinator-based constituency parsers (one binary and one multi-branching) into a discontinuous pair. Our parsers iteratively compose constituent vectors from word embeddings without any grammar constraints. Their empirical complexities are subquadratic. Our extension includes 1) a swap action for the orientation-based binary model and 2) biaffine attention for the chunker-based multi-branching model. In tests conducted with the Discontinuous Penn Treebank and TIGER Treebank, we achieved state-of-the-art discontinuous accuracy with a significant speed advantage.
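
Biaffine attention, used here in the chunker-based multi-branching model, scores every pair of vectors with a bilinear term plus a linear term. The numpy sketch below is a generic biaffine scorer, not the parser itself; the shapes are illustrative.

```python
import numpy as np

def biaffine_scores(H1: np.ndarray, H2: np.ndarray,
                    U: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """s[i, j] = H1[i] @ U @ H2[j] + w @ concat(H1[i], H2[j]) + b  for all pairs (i, j)."""
    d = H1.shape[1]
    bilinear = H1 @ U @ H2.T                                # (n, m)
    linear = (H1 @ w[:d])[:, None] + (H2 @ w[d:])[None, :]  # (n, m) via broadcasting
    return bilinear + linear + b

rng = np.random.default_rng(0)
n, m, d = 6, 6, 32                  # e.g. candidate constituents scored against each other
H1, H2 = rng.normal(size=(n, d)), rng.normal(size=(m, d))
U, w = rng.normal(size=(d, d)), rng.normal(size=(2 * d,))
print(biaffine_scores(H1, H2, U, w, b=0.1).shape)  # (6, 6)
```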

Enhancing Few-shot Cross-lingual Transfer with Target Language Peculiar Examples

Authors

Hwichan Kim,Mamoru Komachi

Published Date

2023/7

Few-shot cross-lingual transfer, fine-tuning a Multilingual Masked Language Model (MMLM) with source language labeled data and a small amount of target language labeled data, provides excellent performance in the target language. However, if no labeled data in the target language are available, they need to be created through human annotation. In this study, we devise a metric to select annotation candidates from an unlabeled data pool that efficiently enhance accuracy for few-shot cross-lingual transfer. It is known that training a model with hard examples is important to improve the model's performance. Therefore, we first identify examples that the MMLM cannot solve in a zero-shot cross-lingual transfer setting and demonstrate that it is hard to predict peculiar examples in the target language, i.e., the examples distant from the source language examples in the cross-lingual semantic space of the MMLM. We then choose high-peculiarity examples as annotation candidates and perform few-shot cross-lingual transfer. In comprehensive experiments with 20 languages and 6 tasks, we demonstrate that the high-peculiarity examples improve the target language accuracy compared to other candidate selection methods proposed in previous studies.
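
The selection procedure can be sketched as: embed the labeled source-language examples and the unlabeled target-language pool with the same MMLM, define peculiarity as the distance to the nearest source example, and send the most peculiar examples to annotators. The distance (Euclidean here) and the top-k selection below are illustrative assumptions rather than the paper's exact metric.

```python
import numpy as np
from scipy.spatial.distance import cdist

def peculiarity(target_embs: np.ndarray, source_embs: np.ndarray) -> np.ndarray:
    """Distance from each target example to its nearest labeled source example."""
    d = cdist(target_embs, source_embs)   # (n_target, n_source) Euclidean distances
    return d.min(axis=1)

rng = np.random.default_rng(0)
source_embs = rng.normal(size=(200, 64))   # embeddings of labeled source-language data
target_embs = rng.normal(size=(50, 64))    # embeddings of the unlabeled target pool
scores = peculiarity(target_embs, source_embs)
k = 8
candidates = np.argsort(scores)[::-1][:k]  # most peculiar examples -> send to annotators
print(candidates)
```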

Call for Papers: Educational NLP for a Multilingual World: Research, Applications, and Challenges

Authors

Giora Alexandron,Beata Beigman Klebanov,Mamoru Komachi,Torsten Zesch

Published Date

2023/10

Background: Language is the key medium through which students learn and demonstrate their knowledge. Advances in Natural Language Processing (NLP), together with the unprecedented amounts of education-related language data produced within the rapidly growing digital learning environments, have led to an increasing interest in using NLP to support students and teachers in their educational journeys (Litman, 2016). Some of the tasks to which NLP techniques have been applied include automatic scoring of a variety of student-produced language data such as essays, short content-focused answers, and spontaneous or read speech (e.g., Burrows et al., 2015; Bernstein et al., 2018; Zhai et al., 2020; Zechner and Evanini, 2020; Beigman Klebanov & Madnani, 2021), automated feedback on students' writing (Burstein et al., 2018; Zhang et al., 2019), and automatic support for generation of learning and assessment …


Mamoru Komachi FAQs

What is Mamoru Komachi's h-index at Tokyo Metropolitan University?

Mamoru Komachi's h-index is 24 overall and 19 since 2020.

What are Mamoru Komachi's top articles?

The top articles of Mamoru Komachi at Tokyo Metropolitan University include:

A Comparative Study of Relation Classification Approaches for Japanese Discourse Relation Analysis

Revisiting Meta-evaluation for Grammatical Error Correction

Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction

WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia

Effectiveness of Pre-Trained Language Models for the Japanese Winograd Schema Challenge

ClozEx: A Task toward Generation of English Cloze Explanation

TMU Feedback Comment Generation System Using Pretrained Sequence-to-Sequence Language Models

Analysis of Semantic Change in Japanese Using BERT

...

What are Mamoru Komachi's research interests?

The research interests of Mamoru Komachi are Computational Linguistics, Natural Language Processing, Machine Learning, and Deep Learning.

What is Mamoru Komachi's total number of citations?

Mamoru Komachi has 2,747 citations in total.

Who are the co-authors of Mamoru Komachi?

The co-authors of Mamoru Komachi are Kentaro Inui, Chenhui Chu, Masahiro Kaneko, Tomoyuki Kajiwara, Tomoya MIZUMOTO, Aizhan Imankulova.

Co-Authors

Kentaro Inui, Tohoku University (H-index: 45)

Chenhui Chu, Kyoto University (H-index: 21)

Masahiro Kaneko, Tokyo Institute of Technology (H-index: 14)

Tomoyuki Kajiwara, Ehime University (H-index: 14)

Tomoya MIZUMOTO, Tohoku University (H-index: 12)

Aizhan Imankulova, Tokyo Metropolitan University (H-index: 8)
