Article

Comparison of Pre-Trained CNNs for Audio Classification Using Transfer Learning

1 Department of Informatics and Computer Engineering, School of Engineering, University of West Attica, 11521 Athens, Greece
2 Department of Electrical and Electronic Engineering Educators, School of Pedagogical and Technological Education (ASPETE), 15122 Athens, Greece
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2021, 10(4), 72; https://doi.org/10.3390/jsan10040072
Submission received: 3 October 2021 / Revised: 26 November 2021 / Accepted: 6 December 2021 / Published: 10 December 2021

Abstract

The paper investigates retraining options and the performance of pre-trained Convolutional Neural Networks (CNNs) for sound classification. CNNs were initially designed for image classification and recognition and, in a second phase, were extended toward sound classification. Transfer learning is a promising paradigm in which already trained networks are retrained on different datasets. We selected three ‘Image’- and two ‘Sound’-trained CNNs, namely, GoogLeNet, SqueezeNet, ShuffleNet, VGGish, and YAMNet, and applied transfer learning. We explored the influence of key retraining parameters, including the optimizer, the mini-batch size, the learning rate, and the number of epochs, on the classification accuracy and on the processing time needed, both for sound preprocessing (the preparation of scalograms and spectrograms) and for CNN training. The UrbanSound8K, ESC-10, and Air Compressor open sound datasets were employed. Using a two-fold criterion based on classification accuracy and time needed, we selected the ‘champion’ transfer-learning parameter combinations, discussed the consistency of the classification results, and explored possible benefits from fusing the classification estimations. The Sound CNNs achieved better classification accuracy, reaching an average of 96.4% for UrbanSound8K, 91.25% for ESC-10, and 100% for the Air Compressor dataset.
Keywords: sound classification; transfer learning; CNN; VGGish; YAMNet
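The abstract notes that sound preprocessing prepares spectrograms before the CNNs are retrained. As a rough illustration of that step (not the authors' exact pipeline, which relies on the VGGish/YAMNet front-ends and MATLAB-style scalograms), a minimal log-magnitude spectrogram can be sketched with a Hann-windowed STFT in plain numpy; the window length, hop size, and test tone below are illustrative choices, not values from the paper:

```python
import numpy as np

def log_spectrogram(signal, win_len=1024, hop=512):
    """Log-magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
    return np.log(spec + 1e-10)                 # log compression for dynamic range

# 1 s of a 440 Hz tone sampled at 16 kHz, as a stand-in for a dataset clip
t = np.arange(16000) / 16000.0
S = log_spectrogram(np.sin(2 * np.pi * 440 * t))
print(S.shape)  # (30, 513): time frames x frequency bins
```

The resulting 2-D array is what lets image-trained CNNs such as GoogLeNet or SqueezeNet treat audio as a picture: after resizing and channel replication, each spectrogram is fed to the network like an ordinary input image.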

Share and Cite

MDPI and ACS Style

Tsalera, E.; Papadakis, A.; Samarakou, M. Comparison of Pre-Trained CNNs for Audio Classification Using Transfer Learning. J. Sens. Actuator Netw. 2021, 10, 72. https://doi.org/10.3390/jsan10040072
