Search Results (2)

Search Parameters:
Keywords = perceptual wavelet packet entropy

20 pages, 5974 KB  
Article
Voiceprint Recognition under Cross-Scenario Conditions Using Perceptual Wavelet Packet Entropy-Guided Efficient-Channel-Attention–Res2Net–Time-Delay-Neural-Network Model
by Shuqi Wang, Huajun Zhang, Xuetao Zhang, Yixin Su and Zhenghua Wang
Mathematics 2023, 11(19), 4205; https://doi.org/10.3390/math11194205 - 9 Oct 2023
Cited by 4 | Viewed by 2415
Abstract
(1) Background: Voiceprint recognition technology uses individual vocal characteristics for identity authentication and faces many challenges in cross-scenario applications. The sound environment, device characteristics, and recording conditions in different scenarios cause changes in sound features, which, in turn, affect the accuracy of voiceprint recognition. (2) Methods: Based on the latest trends in deep learning, this paper uses the perceptual wavelet packet entropy (PWPE) method to extract the basic voiceprint features of the speaker before using the efficient channel attention (ECA) block and the Res2Net block to extract deep features. The PWPE block removes the effect of environmental noise on voiceprint features, so the perceptual wavelet packet entropy-guided ECA–Res2Net–Time-Delay-Neural-Network (PWPE-ECA-Res2Net-TDNN) model shows excellent robustness. The ECA-Res2Net-TDNN block uses temporal statistical pooling with a multi-head attention mechanism to weight frame-level audio features, yielding a weighted average as the final utterance-level feature representation. The sub-center ArcFace loss function is used to enhance intra-class compactness and inter-class differences, avoiding classification by output value alone, as with the softmax loss function. Based on the aforementioned elements, the PWPE-ECA-Res2Net-TDNN model for speaker recognition is designed to extract speaker feature embeddings more efficiently in cross-scenario applications. (3) Conclusions: The experimental results demonstrate that, compared to the ECAPA-TDNN model using MFCC features, the PWPE-based ECAPA-TDNN model performs better in terms of cross-scene recognition accuracy, exhibiting stronger robustness and better noise resistance. Furthermore, the model maintains a relatively short recognition time even under the highest recognition rate conditions. Finally, a set of ablation experiments targeting each module of the proposed model is conducted. The results indicate that each module contributes to an improvement in recognition performance.
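Neither entry includes code, but the shared PWPE idea, decomposing each short frame with a wavelet packet tree and taking the Shannon entropy of each sub-band's normalized energy as a feature, can be sketched briefly. This is a minimal illustration assuming the PyWavelets library; the wavelet family (db4), tree depth, and framing parameters are illustrative assumptions, and a full uniform tree stands in for the perceptually shaped (Mel-like) tree the papers describe.

    import numpy as np
    import pywt

    def wavelet_packet_entropy(frame, wavelet="db4", level=4):
        # Shannon entropy of each terminal node of the wavelet packet tree.
        wp = pywt.WaveletPacket(frame, wavelet=wavelet, maxlevel=level)
        feats = []
        for node in wp.get_level(level, order="freq"):
            energy = node.data ** 2
            p = energy / (energy.sum() + 1e-12)           # normalized sub-band energy
            feats.append(-(p * np.log(p + 1e-12)).sum())  # entropy of that sub-band
        return np.array(feats)                            # one value per sub-band

    def pwpe_features(signal, sr=16000, win=0.025, hop=0.010):
        # Frame the utterance (25 ms windows, 10 ms hop) and stack per-frame entropies.
        w, h = int(sr * win), int(sr * hop)
        frames = [signal[i:i + w] for i in range(0, len(signal) - w + 1, h)]
        return np.stack([wavelet_packet_entropy(f) for f in frames])

Entropy summarizes the shape of each sub-band's energy distribution rather than its absolute level, which is one intuition for the noise robustness both abstracts claim.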
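The sub-center ArcFace loss mentioned in the abstract above can be sketched in the same hedged spirit. In this PyTorch version, the number of sub-centers k, scale s, and margin m are illustrative defaults, not the authors' settings.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SubCenterArcFace(nn.Module):
        def __init__(self, dim, num_classes, k=3, s=30.0, m=0.2):
            super().__init__()
            self.num_classes, self.k, self.s, self.m = num_classes, k, s, m
            # k sub-center weight vectors per class, rows ordered class-major
            self.weight = nn.Parameter(torch.empty(num_classes * k, dim))
            nn.init.xavier_uniform_(self.weight)

        def forward(self, emb, labels):
            # Cosine similarity between L2-normalized embeddings and sub-centers;
            # keep only the best-matching sub-center for each class.
            cos = F.linear(F.normalize(emb), F.normalize(self.weight))
            cos = cos.view(-1, self.num_classes, self.k).max(dim=2).values
            theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
            # Add the angular margin m only to the target-class angle
            onehot = F.one_hot(labels, self.num_classes).float()
            logits = self.s * torch.cos(theta + self.m * onehot)
            return F.cross_entropy(logits, labels)

Keeping only the best-matching sub-center per class lets atypical (e.g., cross-scenario) utterances attach to a secondary center instead of distorting the dominant one, which is the intra-class compactness argument the abstract makes.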

15 pages, 2795 KB  
Article
Identity Vector Extraction by Perceptual Wavelet Packet Entropy and Convolutional Neural Network for Voice Authentication
by Lei Lei and Kun She
Entropy 2018, 20(8), 600; https://doi.org/10.3390/e20080600 - 13 Aug 2018
Cited by 10 | Viewed by 5321
Abstract
Recently, the accuracy of voice authentication systems has increased significantly due to the successful application of the identity vector (i-vector) model. This paper proposes a new method for i-vector extraction. In the method, a perceptual wavelet packet transform (PWPT) is designed to convert speech utterances into wavelet entropy feature vectors, and a convolutional neural network (CNN) is designed to estimate the frame posteriors of the wavelet entropy feature vectors. Finally, the i-vector is extracted based on those frame posteriors. The TIMIT and VoxCeleb speech corpora are used for the experiments, and the results show that the proposed method extracts appropriate i-vectors, reducing the equal error rate (EER) and improving the accuracy of voice authentication systems in both clean and noisy environments.
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
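As a rough illustration of this pipeline's second stage, the sketch below maps per-frame wavelet entropy vectors to posteriors over C mixture components and accumulates the zeroth- and first-order Baum-Welch statistics from which an i-vector is conventionally extracted. The layer sizes, component count, and feature dimensionality are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class FramePosteriorCNN(nn.Module):
        def __init__(self, n_bands=16, n_components=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_bands, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(128, n_components, kernel_size=1),
            )

        def forward(self, feats):                  # feats: (batch, n_bands, frames)
            return self.net(feats).softmax(dim=1)  # per-frame component posteriors

    feats = torch.randn(1, 16, 300)                     # one utterance, 300 frames
    post = FramePosteriorCNN()(feats)                   # (1, 512, 300)
    N = post.sum(dim=2)                                 # zeroth-order statistics
    F_stat = torch.einsum("bct,bdt->bcd", post, feats)  # first-order statistics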