Please use this identifier to cite or link to this item: http://hdl.handle.net/10119/16673
Title: | A Two-Stage Phase-Aware Approach for Monaural Multi-Talker Speech Separation |
Authors: | Yin, Lu; Li, Junfeng; Yan, Yonghong; Akagi, Masato |
Keywords: | speech separation; phase recovery; amplitude estimation; deep learning; mask estimation |
Issue Date: | 2020-07-01 |
Publisher: | The Institute of Electronics, Information and Communication Engineers (IEICE) |
Journal name: | IEICE Transactions on Information and Systems |
Volume: | E103-D |
Number: | 7 |
Start page: | 1732 |
End page: | 1743 |
DOI: | 10.1587/transinf.2019EDP7259 |
Abstract: | Simultaneous utterances degrade both intelligibility for hearing-impaired listeners and the accuracy of automatic speech recognition systems. Deep neural networks have recently brought dramatic improvements in speech separation performance. However, most previous works estimate only the speech magnitude and reuse the mixture phase for speech reconstruction, and this reliance on the mixture phase has become a critical limitation on separation performance. This study proposes a two-stage phase-aware approach for multi-talker speech separation that recovers both the magnitude and the phase. For phase recovery, the Multiple Input Spectrogram Inversion (MISI) algorithm is adopted for its effectiveness and simplicity. The study implements the MISI algorithm on a mask basis and shows that the ideal amplitude mask (IAM) is the optimal mask for mask-based MISI phase recovery, introducing the least phase distortion. To compensate for residual phase-recovery error and minimize signal distortion, an advanced mask is proposed for the magnitude estimation. The IAM and the proposed mask are estimated at different stages to recover the phase and the magnitude, respectively. Two neural network frameworks are evaluated for magnitude estimation in the second stage, demonstrating the effectiveness and flexibility of the proposed approach. Experimental results show that the proposed approach significantly reduces distortion in the separated speech. |
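The abstract builds on iterative MISI phase recovery. As background, here is a minimal NumPy/SciPy sketch of the classic (mask-free) MISI iteration: each source is resynthesized from its estimated magnitude and current phase, the mixture-consistency error is distributed equally among the sources, and phases are re-extracted. The function name, parameters, and the equal error split are illustrative; the paper's mask-based variant and its IAM analysis are not reproduced here.

```python
import numpy as np
from scipy.signal import stft, istft

def misi(mixture, est_mags, n_iter=10, nperseg=512):
    """Sketch of Multiple Input Spectrogram Inversion (MISI).

    mixture  : 1-D time-domain mixture signal
    est_mags : list of estimated STFT magnitudes, one per source,
               computed with the same STFT parameters as below
    Returns a list of time-domain source estimates.
    """
    n_src = len(est_mags)
    # Initialize every source's phase with the mixture phase,
    # as most magnitude-only separation systems implicitly do.
    _, _, X = stft(mixture, nperseg=nperseg)
    phases = [np.angle(X)] * n_src
    sigs = []
    for _ in range(n_iter):
        # Resynthesize each source from its fixed magnitude
        # and its current phase estimate.
        sigs = []
        for mag, ph in zip(est_mags, phases):
            _, s = istft(mag * np.exp(1j * ph), nperseg=nperseg)
            sigs.append(s[: len(mixture)])
        # Distribute the mixture-consistency error equally
        # among the sources, then re-extract phases.
        err = (mixture - np.sum(sigs, axis=0)) / n_src
        phases = [np.angle(stft(s + err, nperseg=nperseg)[2]) for s in sigs]
    return sigs
```

With ideal magnitudes of two spectrally disjoint tones, a few iterations already make the resynthesized sources sum back to the mixture almost exactly, which is the consistency property MISI enforces.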
Rights: | Copyright (C) 2020 IEICE. Lu Yin, Junfeng Li, Yonghong Yan, and Masato Akagi, IEICE Transactions on Information and Systems, E103-D(7), 2020, pp. 1732-1743. https://www.ieice.org/jpn/trans_online/ |
URI: | http://hdl.handle.net/10119/16673 |
Material Type: | publisher |
Appears in Collections: | b10-1. 雑誌掲載論文 (Journal Articles) |
Files in This Item:
File | Description | Size | Format
IEICE_E103D_1732.pdf | | 1100 KB | Adobe PDF
All items in DSpace are protected by copyright, with all rights reserved.