| Title |
-
en
Unsupervised Cross-Lingual Speaker Adaptation for HMM-Based Speech Synthesis
|
| Creator |
-
-
en
Oura, Keiichiro
ja
大浦, 圭一郎
ja-Kana
オオウラ, ケイイチロウ
-
-
en
Wu, Yi-Jian
ja
Wu, Yi-Jian
-
-
-
|
| Description |
-
Other
en
In the EMIME project, we are developing a mobile device that performs personalized speech-to-speech translation such that a user's spoken input in one language is used to produce spoken output in another language, while continuing to sound like the user's voice. We integrate two techniques, unsupervised adaptation for HMM-based TTS using a word-based large-vocabulary continuous speech recognizer and cross-lingual speaker adaptation for HMM-based TTS, into a single architecture. Thus, an unsupervised cross-lingual speaker adaptation system can be developed. Listening tests show very promising results, demonstrating that adapted voices sound similar to the target speaker and that differences between supervised and unsupervised cross-lingual speaker adaptation are small.
-
Other
en
14-19 March 2010, Dallas, TX, USA
|
| Publisher |
en
Institute of Electrical and Electronics Engineers
|
| Date |
|
| Language |
|
| Resource Type |
conference paper |
| Publication Type |
VoR |
| Resource Identifier |
URI
https://nitech.repo.nii.ac.jp/records/3415
|
| Source Information |
-
en
ICASSP 2010: IEEE International Conference on Acoustics, Speech and Signal Processing, 2010
-
Start page: 4594
End page: 4597
|
| File |
|
| Content Updated |
2025-03-14 |