
Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

Dynamic Feature Extraction Method of Phone Speakers Based on Deep Learning

Author(s): Hongbing Zhang*

Volume 14, Issue 8, 2021

Published on: 22 January, 2020

Pages: 2411-2419 (9 pages)

DOI: 10.2174/2666255813666200122101045

Abstract

Background: Speech recognition has become one of the key technologies for human-computer interaction. It is essentially a process of speech training and pattern recognition, which makes feature extraction particularly important: the quality of the extracted features directly determines recognition accuracy. Dynamic feature parameters can effectively improve the accuracy of speech recognition, which gives dynamic speech feature extraction considerable research value. However, traditional dynamic feature extraction methods tend to generate redundant information, resulting in low recognition accuracy.

Methods: Therefore, a new speech feature extraction method based on deep learning is proposed in the present study. First, the speech signal is preprocessed by pre-emphasis, windowing, filtering, and endpoint detection. Then, the Sliding Differential Cepstral (SDC) feature, which contains voice information from the preceding and following frames, is extracted. Finally, this feature is used as the input to a deep self-encoding neural network, which extracts dynamic features representing the deep essence of the speech information.
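
As a rough illustration of this pipeline (not the authors' implementation), the sketch below assumes MFCC-style preprocessing and a sliding/shifted delta-cepstral stacking scheme with hypothetical parameters (cepstral order n_ceps, delta spread d, block shift P, k stacked blocks); the crude log-spectrum cepstra stand in for whichever cepstral front end the paper actually uses.

import numpy as np

def pre_emphasis(signal, alpha=0.97):
    # First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def frame_and_window(signal, frame_len=400, hop=160):
    # Split the signal into overlapping frames and apply a Hamming window
    # (assumes len(signal) >= frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    return frames * np.hamming(frame_len)

def cepstral_features(frames, n_ceps=13):
    # Crude cepstra from the log magnitude spectrum (a stand-in for MFCCs)
    spec = np.abs(np.fft.rfft(frames, axis=1)) + 1e-10
    cep = np.fft.irfft(np.log(spec), axis=1)
    return cep[:, :n_ceps]

def sdc(ceps, d=1, P=3, k=7):
    # Stack k delta-cepstral blocks computed d frames apart and shifted by P
    # frames, so each output vector carries context from the surrounding frames
    T, _ = ceps.shape
    out = []
    for t in range(T):
        blocks = []
        for i in range(k):
            a = min(T - 1, t + i * P + d)
            b = max(0, t + i * P - d)
            blocks.append(ceps[a] - ceps[b])
        out.append(np.concatenate(blocks))
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(16000)                            # 1 s of synthetic audio at 16 kHz
    feats = cepstral_features(frame_and_window(pre_emphasis(x)))
    print(sdc(feats).shape)                                   # (n_frames, n_ceps * k)

A second sketch, assuming a plain stacked autoencoder in PyTorch with hypothetical layer sizes, shows how the SDC vectors could be compressed into learned dynamic features at the bottleneck, with a reconstruction loss driving the training described in the abstract.

import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    def __init__(self, in_dim, bottleneck=40):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, bottleneck),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)            # bottleneck activations = learned dynamic features
        return self.decoder(code), code

if __name__ == "__main__":
    model = DeepAutoencoder(in_dim=91)    # 91 = n_ceps * k from the SDC sketch above
    batch = torch.randn(8, 91)            # stand-in for a batch of SDC vectors
    recon, dynamic_feats = model(batch)
    loss = nn.MSELoss()(recon, batch)     # reconstruction objective
    print(dynamic_feats.shape, loss.item())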

Results: The simulation results show that the dynamic features extracted by deep learning offer better recognition performance than the original features and work well for speech recognition.

Keywords: Speech recognition, dynamic feature extraction, sliding differential cepstral feature vector, deep learning, technology, recognition.

Rights & Permissions Print Cite
© 2024 Bentham Science Publishers | Privacy Policy