
Please use this identifier to cite or link to this item: http://hdl.handle.net/10119/15083

Title: Non-parallel training dictionary-based voice conversion with Variational Autoencoder
Authors: Vu, Ho-Tuan
Akagi, Masato
Issue Date: 2018-03-07
Publisher: Research Institute of Signal Processing, Japan
Citation: 2018 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP2018)
Start page: 695
End page: 698
Abstract: In this paper, we present a dictionary-based voice conversion (VC) approach that does not require parallel data or linguistic labeling for the training process. Dictionary-based voice conversion is the class of methods aiming to decompose speech into separate factors for manipulation. Non-negative matrix factorization (NMF) is the most common method for decomposing the input spectrum into a weighted linear combination of a set of bases (dictionary) and weights. However, the requirement for parallel training data in this method causes several problems: 1) limited practical usability when parallel data are not available, and 2) additional errors from the alignment process that degrade output speech quality. In order to alleviate these problems, this paper presents a dictionary-based VC approach that incorporates a Variational Autoencoder (VAE) to decompose the input speech spectrum into a speaker dictionary and weights without parallel training data. According to the evaluation results, the proposed method achieved better speech naturalness while retaining the same speaker similarity as NMF-based VC, even though unaligned data were used.
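The NMF decomposition that the abstract contrasts against can be sketched as follows. This is a minimal illustration of factorizing a non-negative spectrogram V into a dictionary W (spectral bases) and weights H using the classic multiplicative-update rules; it is not the authors' pipeline, and the matrix sizes and iteration count are arbitrary choices for the example.

```python
import numpy as np

def nmf(V, n_bases=8, n_iter=200, eps=1e-9, seed=0):
    """Factorize non-negative V (freq_bins x frames) as V ~ W @ H.

    W: dictionary of spectral bases (freq_bins x n_bases)
    H: per-frame activation weights (n_bases x frames)
    Uses Lee-Seung multiplicative updates for the Frobenius objective.
    """
    rng = np.random.default_rng(seed)
    n_bins, n_frames = V.shape
    W = rng.random((n_bins, n_bases)) + eps
    H = rng.random((n_bases, n_frames)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: factorize a random non-negative "spectrogram".
V = np.random.default_rng(1).random((64, 100))
W, H = nmf(V)
error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In VC, such a dictionary learned from a source speaker is swapped for a target speaker's dictionary while the weights are kept, which is why conventional NMF-based VC needs frame-aligned parallel data; the paper's VAE-based decomposition removes that alignment requirement.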
Rights: Copyright (C) 2018 Research Institute of Signal Processing, Japan. Ho-Tuan Vu and Masato Akagi, 2018 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP2018), 2018, 695-698.
URI: http://hdl.handle.net/10119/15083
Type: publisher
Appears in Collections: b11-1. Conference Papers

Files in This Item:

File      Description  Size   Format
2754.pdf               999Kb  Adobe PDF

All items in this repository are protected by copyright.

Contact: Japan Advanced Institute of Science and Technology (JAIST), Library Information Section, Research Promotion Division