Takashi Nose

Tohoku University

H-index: 21

Asia-Japan

About Takashi Nose

Takashi Nose is a distinguished researcher at Tohoku University specializing in Speech Synthesis, Speech Recognition, Spoken Dialogue Systems, Image Processing, and Music Information Processing. He has an h-index of 21 overall and a recent h-index of 12 (since 2020).

His recent articles reflect a diverse array of research interests and contributions to the field:

Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy Learning

Simultaneous Adaptation of Acoustic and Language Models for Emotional Speech Recognition Using Tweet Data

Spoken term detection from utterances of minority languages

Multimodal Expressive Embodied Conversational Agent Design

Response Sentence Modification Using a Sentence Vector for a Flexible Response Generation of Retrieval-based Dialogue Systems

Multimodal Dialogue Response Timing Estimation Using Dialogue Context Encoder

A study on high-intelligibility speech synthesis of dysarthric speakers using voice conversion from normal speech and multi-speaker vocoder

Combination of deep-learning-based audio separation and speech enhancement for noise reduction of extracted signal from polyphonic music

Takashi Nose Information

University: Tohoku University

Position: ___

Citations (all): 2468

Citations (since 2020): 687

Cited By: 1977

h-index (all): 21

h-index (since 2020): 12

i10-index (all): 47

i10-index (since 2020): 14


Takashi Nose Skills & Research Interests

Speech Synthesis

Speech Recognition

Spoken Dialogue System

Image Processing

Music Information Processing

Top articles of Takashi Nose

Title: Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy Learning
Journal: arXiv preprint arXiv:2402.00085
Author(s): Xuecheng Niu, Akinori Ito, Takashi Nose
Publication Date: 2024/1/31

Title: Simultaneous Adaptation of Acoustic and Language Models for Emotional Speech Recognition Using Tweet Data
Journal: IEICE Transactions on Information and Systems
Author(s): Tetsuo Kosaka, Kazuya Saeki, Yoshitaka Aizawa, Masaharu Kato, Takashi Nose
Publication Date: 2024/3/1

Title: Spoken term detection from utterances of minority languages
Author(s): Akinori Ito, Satoru Mizuochi, Takashi Nose
Publication Date: 2023/7/24

Title: Multimodal Expressive Embodied Conversational Agent Design
Author(s): Simon Jolibois, Akinori Ito, Takashi Nose
Publication Date: 2023/7/9

Title: Response Sentence Modification Using a Sentence Vector for a Flexible Response Generation of Retrieval-based Dialogue Systems
Author(s): Ryota Yahagi, Akinori Ito, Takashi Nose, Yuya Chiba
Publication Date: 2022/11/7

Title: Multimodal Dialogue Response Timing Estimation Using Dialogue Context Encoder
Author(s): Ryota Yahagi, Yuya Chiba, Takashi Nose, Akinori Ito
Publication Date: 2022/11/1

Title: A study on high-intelligibility speech synthesis of dysarthric speakers using voice conversion from normal speech and multi-speaker vocoder
Journal: IEICE Technical Report
Author(s): Tetsuro Takano, Takashi Nose, Aoi Kanagaki, Satoshi Watanabe
Publication Date: 2022/3/1

Title: Combination of deep-learning-based audio separation and speech enhancement for noise reduction of extracted signal from polyphonic music
Journal: Proceedings of the International Congress on Acoustics
Author(s): Soichiro Kobayashi, Takashi Nose, Akinori Ito
Publication Date: 2022

Title: Effect of Data Size and Machine Translation on the Accuracy of Automatic Personality Classification
Author(s): Yuki Fukazawa, Akinori Ito, Takashi Nose
Publication Date: 2022/12/16

Title: Spoken term detection of zero-resource language using posteriorgram of multiple languages
Author(s): Akinori Ito, Masatoshi Koizumi
Publication Date: 2018/2/26

Title: Design and Construction of Japanese Multimodal Utterance Corpus with Improved Emotion Balance and Naturalness
Author(s): Daisuke Horii, Akinori Ito, Takashi Nose
Publication Date: 2022/11/7

Title: Analysis of Feature Extraction by Convolutional Neural Network for Speech Emotion Recognition
Author(s): Daisuke Horii, Akinori Ito, Takashi Nose
Publication Date: 2021/10/12

Title: Neural Spoken-Response Generation Using Prosodic and Linguistic Context for Conversational Systems
Author(s): Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito
Publication Date: 2021/9

Title: Effect of Training Data Selection for Speech Recognition of Emotional Speech
Journal: Int. J. Mach. Learn. Comput.
Author(s): Yusuke Yamada, Yuya Chiba, Takashi Nose, Akinori Ito
Publication Date: 2021/9

Title: SMOC corpus: A large-scale Japanese spontaneous multimodal one-on-one chat-talk corpus for dialog systems
Journal: Acoustical Science and Technology
Author(s): Yoshihiro Yamazaki, Yuya Chiba, Takashi Nose, Akinori Ito
Publication Date: 2021/7/1

Title: Improvement of Automatic English Pronunciation Assessment with Small Number of Utterances Using Sentence Speakability
Author(s): Satsuki Naijo, Akinori Ito, Takashi Nose
Publication Date: 2021

Title: A study on word embedding method considering accent information for Japanese text-to-speech synthesis
Journal: Proceedings of the Meeting of the Acoustical Society of Japan (CD-ROM)
Author(s): Daisuke Fujimaki, Takashi Nose, Akinori Ito
Publication Date: 2020

Title: Language modeling in speech recognition for grammatical error detection based on neural machine translation
Journal: Acoustical Science and Technology
Author(s): Jiang Fu, Yuya Chiba, Takashi Nose, Akinori Ito
Publication Date: 2020/9/1

Title: Integration of Accent Sandhi and Prosodic Features Estimation for Japanese Text-to-Speech Synthesis
Author(s): Daisuke Fujimaki, Takashi Nose, Akinori Ito
Publication Date: 2020/10/13

Title: LJSing: Large-Scale Singing Voice Corpus of Single Japanese Singer
Author(s): Takuto Fujimura, Takashi Nose, Akinori Ito
Publication Date: 2020/10/13
