Article

DBTMPE: Deep Bidirectional Transformers-Based Masked Predictive Encoder Approach for Music Genre Classification

Department of Multimedia Engineering, Dongguk University–Seoul, Seoul 04620, Korea
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(5), 530; https://doi.org/10.3390/math9050530
Submission received: 29 December 2020 / Revised: 25 February 2021 / Accepted: 26 February 2021 / Published: 3 March 2021
(This article belongs to the Special Issue Data Mining for Temporal Data Analysis)

Abstract

Music is a type of time-series data. As data volumes grow, building robust music genre classification systems from massive amounts of music data is challenging. Robust systems require large amounts of labeled music data, which demands time- and labor-intensive data-labeling effort and expert knowledge. This paper proposes a musical instrument digital interface (MIDI) preprocessing method, Pitch to Vector (Pitch2vec), and a deep bidirectional transformers-based masked predictive encoder (MPE) method for music genre classification. MIDI files serve as input; they are converted into vector sequences by Pitch2vec before being fed into the MPE. Through unsupervised learning, the MPE, based on deep bidirectional transformers, automatically extracts bidirectional representations that capture musicological insight. In contrast to other deep-learning models, such as recurrent neural network (RNN)-based models, the MPE method enables parallelization over time-steps, leading to faster training. To evaluate the performance of the proposed method, experiments were conducted on the Lakh MIDI dataset. Approximately 400,000 MIDI segments were used to train the MPE, whose recovery accuracy reached 97%. In the music genre classification task, the accuracy and other indicators of the proposed method exceeded 94%. The experimental results indicate that the proposed method improves classification performance compared with state-of-the-art models.
Keywords: music genre classification; MIDI; transformer model; unsupervised learning
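
The masked predictive encoding described in the abstract is BERT-style pretraining: some tokens in a MIDI-derived sequence are hidden, and a bidirectional transformer encoder learns to recover them. The following is a minimal sketch of that idea only, not the authors' implementation; it stands in for Pitch2vec output with raw MIDI pitch tokens, and every name and hyperparameter (vocabulary size, 15% mask ratio, model dimensions, the MaskedPredictiveEncoder class) is an illustrative assumption.

# Minimal sketch of a masked predictive encoder for MIDI pitch tokens.
# NOT the paper's implementation; all dimensions and names are assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 130   # assumed: 128 MIDI pitches + [PAD] + [MASK]
PAD_ID, MASK_ID = 128, 129
D_MODEL, N_HEAD, N_LAYERS, SEQ_LEN = 256, 4, 4, 64

class MaskedPredictiveEncoder(nn.Module):
    """BERT-style encoder: predict masked pitch tokens from both directions."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL, padding_idx=PAD_ID)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, D_MODEL))
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, N_LAYERS)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        x = self.embed(tokens) + self.pos[:, : tokens.size(1)]
        h = self.encoder(x, src_key_padding_mask=tokens.eq(PAD_ID))
        return self.head(h)  # logits over the pitch vocabulary per position

def mask_tokens(tokens, ratio=0.15):
    """Randomly replace `ratio` of tokens with [MASK]; loss only on those."""
    mask = (torch.rand_like(tokens, dtype=torch.float) < ratio) & tokens.ne(PAD_ID)
    corrupted = tokens.masked_fill(mask, MASK_ID)
    targets = tokens.masked_fill(~mask, -100)  # -100 is ignored by the loss
    return corrupted, targets

# Toy pretraining step on random "pitch sequences" standing in for
# Pitch2vec output; a real run would load Lakh MIDI segments instead.
model = MaskedPredictiveEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
batch = torch.randint(0, 128, (8, SEQ_LEN))          # fake MIDI pitch tokens
corrupted, targets = mask_tokens(batch)
logits = model(corrupted)
loss = nn.CrossEntropyLoss(ignore_index=-100)(
    logits.view(-1, VOCAB_SIZE), targets.view(-1))
loss.backward()
opt.step()
print(f"masked-prediction loss: {loss.item():.3f}")

Because the encoder attends over the whole sequence at once rather than stepping through it, each training pass is parallel across time-steps, which is the speed advantage over RNN-based models noted in the abstract.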

