Article

An Overview of the IberSpeech-RTVE 2022 Challenges on Speech Technologies

1 Vivolab, Aragon Institute for Engineering Research (I3A), University of Zaragoza, 50018 Zaragoza, Spain
2 Department of Electricity and Electronics, Faculty of Science and Technology, University of the Basque Country (UPV/EHU), Barrio Sarriena, 48940 Leioa, Spain
3 Institute of Technology, Universidad San Pablo-CEU, CEU Universities, Urbanización Montepríncipe, 28668 Boadilla del Monte, Spain
4 Corporación Radiotelevisión Española, 28223 Madrid, Spain
5 AUDIAS, Electronic and Communication Technology Department, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Av. Francisco Tomás y Valiente, 11, 28049 Madrid, Spain
6 Fundación Vicomtech, Basque Research and Technology Alliance (BRTA), Mikeletegi 57, 20009 Donostia-San Sebastián, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8577; https://doi.org/10.3390/app13158577
Submission received: 23 June 2023 / Revised: 19 July 2023 / Accepted: 20 July 2023 / Published: 25 July 2023

Abstract

Evaluation campaigns provide a common framework with which the progress of speech technologies can be effectively measured. The aim of this paper is to present a detailed overview of the IberSpeech-RTVE 2022 Challenges, which were organized as part of the IberSpeech 2022 conference under the ongoing series of Albayzin evaluation campaigns. In the 2022 edition, four challenges were launched: (1) speech-to-text transcription; (2) speaker diarization and identity assignment; (3) text and speech alignment; and (4) search on speech. Different databases that cover different domains (e.g., broadcast news, conference talks, parliament sessions) were released for those challenges. The submitted systems also cover a wide range of speech processing methods, which include hidden Markov model-based approaches, end-to-end neural network-based methods, hybrid approaches, etc. This paper describes the databases, the tasks and the performance metrics used in the four challenges. It also provides the most relevant features of the submitted systems and briefly presents and discusses the obtained results. Despite employing state-of-the-art technology, the relatively poor performance attained in some of the challenges reveals that there is still room for improvement. This encourages us to carry on with the Albayzin evaluation campaigns in the coming years.

1. Introduction

The Albayzin evaluations are a series of technological benchmarks and challenges open to the scientific community within different fields of the broad area of speech technologies. They have been organized every two years since 2006 and are supported by the Spanish Thematic Network on Speech Technologies (Red Temática en Tecnologías del Habla, RTTH; http://www.rtth.es (accessed on 24 July 2023)); since 2018, the evaluations have focused on the broadcast media area. Thanks to Radio Televisión Española (RTVE; http://www.rtve.es (accessed on 24 July 2023)), the Spanish public broadcast corporation, and the RTVE Chair at the University of Zaragoza (Cátedra RTVE de la Universidad de Zaragoza; http://catedrartve.unizar.es (accessed on 24 July 2023)), new and more challenging datasets are being released to support the assessment of speech technologies in broadcast media tasks. Speech technologies have been introduced in this area due to their high potential to automate processes such as subtitling, caption generation and alignment, or automatic metadata generation for audiovisual content. This requires continuous efforts to evaluate the performance of these technologies and to support new developments. In recent years, deep learning approaches have completely changed the landscape, bringing a great boost in performance in image recognition, natural language processing and speech recognition applications. This has allowed the introduction of speech technologies into the work pipeline of broadcast archives and documentation services.
Different evaluation campaigns for broadcast speech have been carried out since the first one, proposed in 1996 [1], most of them using English as the target language [2,3]. In the past few years, speech evaluations organized by the National Institute of Standards and Technology (NIST) have expanded the range of languages of interest (sometimes including low-resourced languages such as Cantonese, Pashto, Tagalog, Swahili and Tamil) and speech domains, such as telephone and public safety communications. These evaluations included automatic speech recognition (ASR), spoken term detection and keyword spotting tasks [4,5,6,7,8,9,10]. The Albayzin evaluations have always focused on Iberian languages, mainly Spanish and, to a lesser extent, Catalan and Basque. Spanish is the second language with the highest number of native speakers and the fourth most spoken language in the world (https://www.ethnologue.com/insights/ethnologue200/ (accessed on 24 July 2023)). The Albayzin evaluations aim to track the evolution of speech technologies for Iberian languages with new and increasingly challenging datasets in each evaluation campaign. In the last editions, the evaluations have focused on the Spanish language in the broadcast media sector. In the 2022 edition, four challenges were proposed:
  • Speech to Text Challenge (S2TC), organized by RTVE and Universidad de Zaragoza, which consists of automatically transcribing different types of TV shows.
  • Speaker Diarization (SDC) and Identity Assignment Challenge (SDIAC), organized by RTVE and Universidad de Zaragoza, which consists of segmenting broadcast audio documents according to different speakers, linking those segments that originate from the same speaker, and assigning identities from a closed set of speakers.
  • Text and Speech Alignment Challenge (TaSAC), which consists of two sub-challenges:
    (Task 1) TaSAC-ST: Alignment of re-speaking generated subtitles, organized by RTVE and Universidad de Zaragoza, which consists of synchronizing the broadcast subtitles of different TV shows, created by re-speaking, with the corresponding audio.
    (Task 2) TaSAC-BP: Alignment and validation of speech signals with partial and inaccurate text transcriptions, organized by the University of the Basque Country (UPV/EHU), which consists of aligning text and audio extracted from a plenary session of the Basque Parliament.
  • Search on Speech Challenge (SoSC), organized by Universidad San Pablo-CEU and AUDIAS from Universidad Autónoma de Madrid, which consists of searching audio content for a list of terms/queries.
The main novelty of the IberSpeech-RTVE 2022 Challenges compared to previous evaluations [11,12] lies in the following two features: (1) a text and speech alignment challenge has been proposed for the very first time in the Albayzin evaluation series, including two different sub-tasks for two possible applications; and (2) new and more challenging databases have been released for all the evaluation tasks, covering different domains but focusing on broadcast media content, some of them, such as the RTVE and the Basque Parliament databases, having been specifically created for these challenges.
A total of 7 teams from industry and academia registered to participate in the IberSpeech-RTVE 2022 challenges, with 20 different systems submitted in total. Compared to previous evaluations, the number of participants has significantly decreased. In the 2018 edition, 16 teams submitted 85 systems in the different challenges. In the 2020 edition, 12 teams submitted a total of 31 systems.
We present in this paper an overview of the IberSpeech-RTVE 2022 challenges, along with the data supplied by the organization to the participants and the performance metrics. The overview includes a detailed description of the systems submitted for evaluation, their corresponding results, and a comprehensive set of conclusions derived from the 2022 evaluation campaign and the previous campaigns in 2018 [11] and 2020 [12]. This paper will serve as a reference for anyone wanting to use the datasets provided in the evaluation. At the time of writing, more than 50 international research groups have requested access to the RTVE database.
The paper is organized as follows: Section 2 presents the databases used in the different challenges; Section 3 describes the four IberSpeech-RTVE 2022 challenge tasks: speech-to-text transcription, speaker diarization and identity assignment, text and speech alignment, and search on speech, along with the performance metrics used in each challenge; Section 4 provides a brief description of the submitted systems; Section 5 presents and discusses the results; and finally, a summary of the paper, conclusions and future work are outlined in Section 6.

2. IberSpeech-RTVE 2022 Evaluation Databases

2.1. RTVE Database

The RTVE database is a collection of TV shows that belong to diverse genres and were broadcast by the public Spanish Television (RTVE). The database has been built incrementally for the 2018, 2020 and 2022 Albayzin challenges resulting in RTVE2018DB, RTVE2020DB and RTVE2022DB datasets. The RTVE database contains nearly 1000 h of broadcast content and subtitles comprising 54 programs of diverse genres and topics, produced and broadcast by RTVE. These programs present various challenges for speech technologies, including the diversity of Spanish accents and slang, overlapping speech, spontaneous speech, acoustic variability, background noise, and specific vocabulary. Table 1 shows a summary of the content of the RTVE database in terms of genres, TV show names, dataset, speech and transcribed hours. The primary aim of the RTVE database is to offer challenging data for the Albayzin evaluations, allowing us to observe the progress of speech technologies when applied to audiovisual content.
For the IberSpeech-RTVE 2022 evaluation, the RTVE2018DB and RTVE2020DB datasets were provided for training and development purposes, while the RTVE2022DB dataset contains additional training, development and test partitions.
The RTVE2018DB dataset comprises complete TV shows spanning a variety of genres that were broadcast on Spain’s public National Television from 2015 to 2018. The audio collection amounts to 570 h, of which roughly 460 h come with subtitles, approximately 109 h have undergone human-revised transcription, and 38 h have been labeled with the speaker turns. Additionally, the database features a set of text files that were extracted from all the subtitles broadcast on the RTVE 24H Channel throughout 2017. The RTVE2018 dataset includes partitions with all the files needed to evaluate systems for speech-to-text and speaker and multimodal diarization. A full account of the RTVE2018 dataset content and formats can be found in the dataset description report [13], and its use in the IberSpeech 2018 challenge is described in [11].
The RTVE2020DB dataset is a compilation of complete TV shows from various genres that were broadcast on Spain’s public National Television between 2018 and 2019. It contains a total of 56 h of audio, which have been transcribed and reviewed by humans. Furthermore, a subset of 33 h has been categorized according to speaker, face, and scene descriptors. The RTVE2020 dataset includes partitions with all the files needed to evaluate systems for speech-to-text, speaker diarization and identity assignment and multimodal diarization and scene description. Further details about the RTVE2020 dataset content and formats can be found in the dataset description report [14], and information about its use in the IberSpeech 2020 challenge can be found in [12].
The RTVE2022DB dataset is a collection of diverse audio material recorded from the 1960s to the present. It covers historical recordings, popular TV shows and fictional shows. It contains a total of 335 h of audio, which have been transcribed and reviewed by humans. Up to 280 h of speech, corresponding to 260 TV programs from 9 different shows broadcast by RTVE, have been automatically aligned at the sentence level thanks to Vicomtech (https://www.vicomtech.org/ (accessed on 24 July 2023)). The RTVE2022DB dataset includes partitions with all the files needed to evaluate ASR systems, speaker diarization and identity assignment, text and speech alignment and search on speech. A small development partition is provided for the text and speech alignment challenge.
The audio files have been created by extracting the audio stream from the video files provided by RTVE without decoding/encoding. All the audio files are encoded in the AAC format. Stereo audio signals at 44,100 Hz sampling rate per channel have been encoded using the mp4-LC profile with a variable bit rate ranging from 48 to 96 kb/s.
The RTVE database is freely available subject to the terms of a license agreement with RTVE (http://catedrartve.unizar.es/rtvedatabase.html (accessed on 24 July 2023)).

RTVE2022DB Test Dataset

The test partition is made up of 21 different TV shows covering 54 h of diverse audio material (see Table 2). It has been human-transcribed and, additionally, around 25 h of audio have been labeled in terms of speaker turns and assigned an identity from a closed set of 74 speakers. For enrollment, an audio recording of at least 30 s is provided for each speaker. Table 3 shows the distribution of the dataset among the different challenges.

2.2. MAVIR Database

The MAVIR database consists of a set of Spanish talks from the MAVIR workshops (http://www.mavir.net (accessed on 24 July 2023)) held in 2006, 2007 and 2008. It contains speech samples from Spanish speakers both from Spain and Latin America.
The MAVIR Spanish dataset consists of 7 h of spontaneous speech files from different speakers. These data were then divided into training, development, and test sets. The speech data were manually annotated in an orthographic form, but timestamps were only set for phrase boundaries (http://cartago.lllf.uam.es/mavir/index.pl?m=videos (accessed on 24 July 2023)). The training data were made available to the participants including the orthographic transcription and the timestamps for phrase boundaries. For the challenge, the timestamps for the roughly 3000 occurrences of the queries used in the development and test evaluation datasets were also provided.
Initially, the speech data were recorded in several audio formats (pulse code modulation (PCM) mono and stereo, MP3, 22.05 kHz, and 48 kHz, among others). For this evaluation, all audio recordings were converted to PCM, 16 kHz, single channel, 16 bits per sample using the SoX tool (http://sox.sourceforge.net/ (accessed on 24 July 2023)). All the recordings but one were originally made with a TASCAM DAT model DA-P1 digital recorder. Different microphones were used, mainly tabletop or floor-standing microphones, but one lavalier microphone was also employed. The distance from the microphone to the speaker's mouth was not specifically controlled, but in most cases it was smaller than 50 cm. The speech recordings took place in large conference rooms with a capacity for over a hundred people. This introduces additional challenges, including background noise (particularly babble noise) and reverberation. Therefore, these realistic settings and the variety of phenomena in spontaneous speech make this database appealing and challenging enough for evaluation.
A summary of the main database features such as the training/development/test dataset division, the number of word occurrences, the duration, and the number of speakers is presented in Table 4.
The number of terms and the number of total occurrences both for STD (Spoken Term Detection) and QbE STD (Query-by-Example Spoken Term Detection) tasks for the MAVIR database are presented in Table 5.

2.3. RTVE2022-SoS Database

An excerpt of the whole RTVE database presented in Table 1 has been used as development data for the search on speech challenge. Specifically, the search on speech challenge provided two different development datasets. The dev1 dataset consists of about 60 h of speech with human-revised word transcriptions without time alignment. The dev2 dataset, which was the one actually used as development dataset for the search on speech challenge, consists of 15 h of speech data. For the challenge, the timestamps for the roughly 2500 occurrences of the queries used in the development (dev2) and test evaluation datasets were also provided.
A summary of the main database features such as the development (dev2)/test dataset division, the number of word occurrences, the duration, and the number of speakers is presented in Table 6.
The number of terms and the number of total occurrences both for STD and QbE STD tasks for the RTVE2022-SoS database are presented in Table 7.

2.4. SPARL22 Database

The SPARL22 database consists of spontaneous speech from Spanish parliament sessions held from 2016 up to now and amounts to about 2 h of speech extracted from 14 audio files. The timestamps for the roughly 1600 occurrences of the queries used as test data were also provided.
The original recordings are videos in MPEG format. The evaluation organizers extracted the audio from these videos and converted them to PCM, 16 kHz, single channel and 16 bits per sample using the ffmpeg 4 tool (https://ffmpeg.org/ (accessed on 24 July 2023)). It is worth mentioning that this database contains several noise types (e.g., laughing, applause, etc.), which makes it quite challenging.
This database only provides test data, in order to measure the generalization capability of the systems in a domain unseen during training and development.
A summary of the main database features such as the number of word occurrences, the duration, and the number of speakers is presented in Table 8. The number of terms and the number of total occurrences both for STD and QbE STD tasks for the SPARL22 database are presented in Table 9.

2.5. The Basque Parliament Dataset

For the second task of the Text-and-Speech Alignment Challenge (TaSAC-BP), the audio data were extracted from a plenary session of the Basque Parliament (BP) and include sections in two languages (Spanish and Basque), whereas the paired texts were extracted from the session minutes and include only the sections in Spanish. The task focused on Spanish because most of the research groups aiming to participate in this evaluation would have ASR technology and resources for Spanish, but few would have them available for Basque.

2.5.1. Audio Data

The Basque Parliament audio data were stored in 16 kHz, 16-bit, signed, single-channel PCM WAV files. The recordings were originally made through the BP audio system (desktop microphones), so they are generally clear, with high signal-to-noise ratios. Two different audio files (each approximately one hour long) were provided for development and testing, respectively. Both audio files were extracted from the same plenary session, which featured speech samples from several (not many) speakers, who may switch from Spanish to Basque (or vice versa) during their turns. Speaker turn changes and voting events were both managed by the president of the BP and involved a certain amount of silent or slightly noisy regions, but speaker turn overlaps were very uncommon.

2.5.2. The Paired Texts

The text to be aligned with the audio was extracted from the session minutes. The Basque Parliament session minutes are based on the audio but ignore spontaneous speech events (such as filled pauses, false starts, repeated words, etc.) and include a sizable amount of edits to preserve syntactic correctness. As a consequence, the provided text does not match the audio exactly, featuring word deletions, insertions and replacements. Sometimes, a word said in the audio is replaced in the minutes with a very similar variant of it (with a different gender or number), and even the most optimistic alignment will inevitably lead to an error, simply because acoustics and spelling do not match. Both the paired texts and the ground truth transcriptions have been normalized by removing punctuation marks, replacing accented vowels with non-accented vowels and converting all letters to lowercase. Uppercase letters have been kept only for acronyms (e.g., ADN, EH, UPyD, etc.), which could be either spelled out (the most common case) or read as words. This should be taken into account when performing the alignment.
The paired texts do not include the parts spoken in Basque, so there can be considerable time gaps between one word, at the end of a part spoken in Spanish, and the following one, at the beginning of the next part spoken in Spanish (possibly several minutes later in the audio signal). Again, this should be taken into account when performing the alignment.

2.5.3. The Ground Truth

The ground truth is based on manually generated rich text transcriptions (including spontaneous speech events). These transcriptions follow the acoustics even though the syntactical correctness is lost. The timestamps of sentences were manually added, so they are fully reliable. Word-level timestamps inside sentences were obtained automatically by forced alignment of each sentence transcription with the corresponding audio. To verify the accuracy of word-level timestamps, an informal test was carried out using several randomly chosen sentences, by manually adding word-level timestamps and comparing them with the automatically generated timestamps. It was observed that differences between manual and automatic timestamps spanned from 0 to 20 ms. Thus, the automatic segmentation was considered good enough for the purposes of this evaluation, provided that a reasonable collar time was applied.
For this evaluation, only the words appearing in the paired text are kept in the ground truth, the remaining elements of the rich text transcription being hidden. Note that the evaluation focuses on how well the paired text is aligned with the audio. Taking this into account, if a word w in the paired text is aligned with an audio segment that is not included in the ground truth, it is guaranteed that neither the word w nor any other word in the paired text appears in that segment, so the time span of that segment is counted as an error regardless of its exact transcription. Also, to be fair with the participants, if a word w of the paired text (e.g., a proper name) appears in a part of the audio spoken in Basque, the corresponding segment is included in the ground truth, just to cover the case in which the word w is aligned with that segment. Finally, to account for the uncertainty when defining the borders between words, a collar time can be established so that a certain amount of time around the borders is not evaluated at all.

3. IberSpeech-RTVE 2022 Evaluation Tasks

This section presents a brief summary of the four evaluation tasks. A more detailed description of the evaluation plans can be found on the IberSpeech2022 web page (https://iberspeech2022.ugr.es/?page_id=67 (accessed on 24 July 2023)) and the Cátedra RTVE-UZ web page (http://catedrartve.unizar.es/albayzin2022.html (accessed on 24 July 2023)).

3.1. Speech to Text Challenge

The speech-to-text transcription evaluation consists of automatically transcribing different types of TV shows. The main objective is to evaluate the state of the art in automatic speech recognition (ASR) for the Spanish language in the broadcast sector. There is no specific training partition, so participants are free to use the previous RTVE datasets (2018 and 2020) or any other data to train their systems, provided that these data are fully documented in the system description paper. For public databases, the name of the database must be provided. For private databases, a brief description of the origin of the data must be provided. Each participating team had to submit at least a primary system and could also submit up to three contrastive systems.

Performance Metric

The ASR system outputs are ranked by the word error rate (WER). All the participants have to provide as output for evaluation a free-form text file per test file, encoded using the UTF-8 charset (http://www.utf-8.com/ (accessed on 24 July 2023)), with no page, paragraph, sentence, or speaker breaks. The text is normalized as follows: all punctuation marks are removed, numbers are written with letters and all characters are lower-cased. The WER is defined as:
$$\mathrm{WER} = \frac{S + D + I}{N_r},$$
where $N_r$ is the total number of words in the reference transcription, $S$ is the number of substituted words in the automatic transcription, $D$ is the number of words from the reference deleted in the automatic transcription, and $I$ is the number of words inserted in the automatic transcription that do not appear in the reference.
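For illustration purposes, a minimal sketch of how the WER can be computed over already normalized word sequences is given below. This is a generic edit-distance implementation, not the official scoring tool used in the challenge.

```python
# Minimal WER sketch (illustrative only, not the official challenge scorer).
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via edit distance over whitespace-split, normalized tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum number of edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                               # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                               # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# WER = (S + D + I) / Nr: one substitution out of four reference words.
print(wer("el tiempo para hoy", "el tiempo de hoy"))  # 0.25
```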

3.2. Speaker Diarization and Identity Assignment Challenge

The Speaker Diarization and Identity Assignment evaluation consists of segmenting broadcast audio documents according to different speakers and linking those segments that originate from the same speaker. On top of that, for a limited number of speakers, the evaluation requires assigning the names of these speakers to the correct diarization labels. No prior knowledge is provided about the number of speakers present in the audio to be analyzed. Participants are free to use any dataset for training their diarization systems, provided that these data are fully documented in the system description paper. The organization provides all the RTVE datasets (http://catedrartve.unizar.es/rtvedatabase.html (accessed on 24 July 2023)), the Catalan broadcast news database from the 3/24 TV channel proposed for the 2010 Albayzin Audio Segmentation Evaluation [15,16] and the Corporación Aragonesa de Radio y Televisión (CARTV) database proposed for the 2016 Albayzin Speaker Diarization evaluation. For the identity assignment task, the RTVE2022 dataset provides enrolment audio files for 74 speakers (38 male, 36 female). At least 30 s of speech from each speaker to be identified are included in the dataset.

3.2.1. Diarization Scoring

As in the NIST RT Diarization evaluations (https://www.nist.gov/itl/iad/mig/rich-transcription-evaluation (accessed on 24 July 2023)), to measure the performance of the proposed systems, the diarization error rate (DER) is computed as the fraction of speaker time that is not correctly attributed to that specific speaker. This score is computed over the entire file to be processed, including regions where more than one speaker is present (overlap regions).
Given the dataset $\Omega$ to be evaluated, each document is divided into contiguous segments at all speaker change points found in both the reference and the hypothesis, and the diarization error time for each segment $n$ is defined as:
$$E(n) = T(n)\left[\max\left(N_{ref}(n), N_{sys}(n)\right) - N_{Correct}(n)\right],$$
where $T(n)$ is the duration of segment $n$, $N_{ref}(n)$ is the number of reference speakers present in segment $n$, $N_{sys}(n)$ is the number of system speakers present in segment $n$, and $N_{Correct}(n)$ is the number of reference speakers in segment $n$ correctly assigned by the diarization system. The overall DER is then computed as:
$$DER = \frac{\sum_{n \in \Omega} E(n)}{\sum_{n \in \Omega} T(n)\, N_{ref}(n)}.$$
The diarization error time includes the time that is assigned to the wrong speaker, missed speech time, and false alarm speech time:
  • Speaker error time: The speaker error time is the amount of time that has been assigned to an incorrect speaker.
  • Missed speech time: The missed speech time refers to the amount of time that speech is present but not labeled by the diarization system.
  • False alarm time: The false alarm time is the amount of time that a speaker has been labeled by the diarization system but is not present.
Consecutive speech segments of audio labelled with the same speaker identification tag and separated by a non-speech segment less than 2 s long are merged and considered a single segment. A region of 0.25 s around each segment boundary, usually known as the forgiveness collar, is considered. These regions are excluded from the computation of the diarization error in order to take into account both inconsistent human annotations and the uncertainty about when a speaker turn begins or ends.
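The following toy sketch illustrates the segment-based DER computation defined above. It assumes that reference and system speaker labels have already been mapped onto each other, and it omits the segment merging and the 0.25 s forgiveness collar applied by the official NIST-style scoring tools.

```python
# Toy DER sketch: speaker labels are assumed to be already mapped between
# reference and hypothesis; no collar and no segment merging are applied.
def active(segments, t0, t1):
    """Speakers active anywhere within [t0, t1), given (start, end, speaker) tuples."""
    return {spk for start, end, spk in segments if start < t1 and end > t0}

def der(reference, hypothesis):
    # Cut the timeline at every reference or hypothesis boundary.
    bounds = sorted({t for s, e, _ in reference + hypothesis for t in (s, e)})
    err_time, ref_time = 0.0, 0.0
    for t0, t1 in zip(bounds, bounds[1:]):
        dur = t1 - t0
        ref_spk = active(reference, t0, t1)
        sys_spk = active(hypothesis, t0, t1)
        n_correct = len(ref_spk & sys_spk)
        err_time += dur * (max(len(ref_spk), len(sys_spk)) - n_correct)
        ref_time += dur * len(ref_spk)
    return err_time / ref_time if ref_time else 0.0

ref = [(0.0, 10.0, "spk1"), (10.0, 20.0, "spk2")]
hyp = [(0.0, 12.0, "spk1"), (12.0, 20.0, "spk2")]
print(der(ref, hyp))  # 0.1: 2 s of spk2 speech attributed to spk1
```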

3.2.2. Identity Assignment Scoring

For the Identity Assignment Task, the assignment error rate (AER) is used, which is a slightly modified version of the previously described DER. This metric is defined as the amount of time incorrectly attributed to the speakers of interest divided by the total amount of time that those specific speakers are active. Mathematically, it can be expressed as:
$$AER = \frac{FA + MISS + \mathrm{SPEAKER\ ERROR}}{\mathrm{REFERENCE\ LENGTH}},$$
where:
  • FA represents the False Alarm Time, which contains the length of the silence segments or speech segments that belong to unknown speakers incorrectly attributed to a certain speaker.
  • MISS represents the Missed Speech Time, which takes into account the length of the speech segments that belong to speakers of interest not attributed to any speaker.
  • SPEAKER ERROR (Speaker Error Time) considers the length of the speech segments that belong to speakers of interest attributed to an incorrect speaker.
  • REFERENCE LENGTH is the sum of the lengths of all the speech segments uttered by the people of interest (i.e., those identities for which the participants will have audio to train their models).
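As an illustration of the AER definition, the following frame-level sketch assumes that both the reference and the system output have already been converted into sequences of fixed-length frames labeled with a speaker of interest (or None for silence and unknown speakers). It is only a sketch of the idea, not the official scoring procedure.

```python
# Illustrative frame-level AER sketch; 10 ms frames, "None" marks silence or
# speakers outside the closed set of interest. Speaker names are placeholders.
def aer(ref_frames, hyp_frames, frame_dur=0.01):
    fa = miss = spk_err = ref_len = 0.0
    for ref_spk, hyp_spk in zip(ref_frames, hyp_frames):
        if ref_spk is not None:
            ref_len += frame_dur
            if hyp_spk is None:
                miss += frame_dur      # speaker of interest not attributed at all
            elif hyp_spk != ref_spk:
                spk_err += frame_dur   # attributed to the wrong identity
        elif hyp_spk is not None:
            fa += frame_dur            # silence or unknown speaker attributed to someone
    return (fa + miss + spk_err) / ref_len if ref_len else 0.0

ref = ["spkA"] * 300 + [None] * 100 + ["spkB"] * 200
hyp = ["spkA"] * 280 + [None] * 140 + ["spkA"] * 180
print(round(aer(ref, hyp), 2))  # ~0.44: missed plus wrongly attributed time over 5 s
```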

3.3. Text and Speech Alignment Challenge

The Text and Speech Alignment Challenge (TaSAC) consists of two subchallenges:
  • Alignment of re-speaking generated subtitles (TaSAC-ST). This challenge consists of synchronizing the broadcast subtitles of different TV shows, created by re-speaking, with the corresponding audio.
  • Alignment and validation of speech signals with partial and inaccurate text transcriptions (TaSAC-BP). This challenge consists of aligning text and audio extracted from a plenary session of the Basque Parliament.

3.3.1. Alignment of Re-Speaking Generated Subtitles (TaSAC-ST)

The IberSPEECH-RTVE 2022 Text and Speech Alignment Challenge aims to evaluate text-and-speech alignment systems on the actual problem of synchronizing re-speaking subtitles with the corresponding audio. The task assesses the state of the art of offline alignment technology, the purpose being to provide subtitles without delay for a new broadcast. In this task, participants are supplied with the subtitles as they originally appeared on TV, including the start and end timestamps of each subtitle. Participants must provide an output with the exact same sequence of subtitles but with new start and end timestamps for each subtitle. It should be noted that re-speaking subtitles often differ from the actual spoken words: if the speech is too fast, the re-speaker tends to omit words (deletions) or even to paraphrase, which introduces a new level of difficulty in the alignment process. The performance is measured by computing the time differences between the aligned start and end timestamps given by the alignment systems and the reference timestamps derived from a careful manual alignment.

Performance Metric

The average program time-error metric (APTEM) is the primary metric for the Text and Speech alignment task. For each program in the test, a program time-error metric (PTEM) will be calculated and the final score will be computed by averaging the PTEM of each program in the test.
$$APTEM = \frac{1}{M} \sum_{m=1}^{M} PTEM(m),$$
where $M$ is the number of programs in the test dataset, and the PTEM is computed as follows: given the distribution of the time differences between the reference and aligned start and end timestamps of each subtitle in a program, the PTEM is the median value of that distribution.
Let us define the start-time error for the $n$-th subtitle, $TE_s(n)$, as:
$$TE_s(n) = \left| T_s^{ref}(n) - T_s^{alig}(n) \right|,$$
where $T_s^{ref}(n)$ and $T_s^{alig}(n)$ are the start timestamps of the reference and the automatic alignment for the $n$-th subtitle. Similarly, the end-time error, $TE_e(n)$, is defined as:
$$TE_e(n) = \left| T_e^{ref}(n) - T_e^{alig}(n) \right|,$$
where $T_e^{ref}(n)$ and $T_e^{alig}(n)$ are the end timestamps of the reference and the automatic alignment for the $n$-th subtitle. Then, the time error of the $n$-th subtitle, $TE(n)$, is computed as:
$$TE(n) = TE_s(n) + TE_e(n).$$
The PTEM is defined as
$$PTEM = \mathrm{med}\left(\left[TE(1), TE(2), \ldots, TE(N)\right]\right),$$
where $\mathrm{med}(\cdot)$ is the median operator and $N$ is the number of subtitles broadcast in the TV program.
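A minimal sketch of the PTEM and APTEM computations, assuming that reference and aligned subtitle timestamps are already paired one-to-one within each program, could look as follows (the timestamps below are invented examples):

```python
# Minimal PTEM/APTEM sketch following the definitions above.
from statistics import median

def ptem(reference, aligned):
    """reference / aligned: lists of (start, end) timestamps per subtitle, in seconds."""
    errors = [abs(r_s - a_s) + abs(r_e - a_e)
              for (r_s, r_e), (a_s, a_e) in zip(reference, aligned)]
    return median(errors)

def aptem(refs_per_program, aligns_per_program):
    """Average the per-program PTEM values over the whole test set."""
    values = [ptem(r, a) for r, a in zip(refs_per_program, aligns_per_program)]
    return sum(values) / len(values)

ref = [(1.0, 3.5), (4.0, 7.2), (8.0, 10.0)]
hyp = [(1.2, 3.6), (4.1, 7.0), (8.5, 10.4)]
print(round(ptem(ref, hyp), 2))  # 0.3: median of the per-subtitle errors [0.3, 0.3, 0.9]
```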

3.3.2. Alignment and Validation of Speech Signals with Partial and Inaccurate Text Transcriptions (TaSAC-BP)

In recent years, with the widespread adoption of data-intensive deep learning approaches to ASR, the semi-supervised collection of training data for ASR has gained renewed interest. The Internet is full of resources pairing speech and text. Sometimes the paired text is an accurate transcription of the spoken content, but frequently it is only a loose or partial transcription, or even a translation into some other language. Therefore, a text-and-speech alignment system able to accurately detect and extract paired speech and text segments becomes a very valuable tool. The second task of the Text-and-Speech Alignment Challenge (TaSAC-BP) was designed with that goal in mind. The alignment systems would deal with a long audio file, including sections in Spanish and Basque, but the paired texts (which could be partial or approximate transcripts of the audio) would cover only the Spanish sections. The audio parts in Basque were not expected to be paired with any text, though some words or word fragments (proper names, technical terms, etc.) may actually match (and be wrongly paired with) text in Spanish.

Task Description

The task consisted of aligning each word of the text with a segment of the audio file so that the audio content corresponds to the pronunciation of the given word. Alignments were required to be monotonic, that is, the sequence of timestamps had to be non-decreasing; it was guaranteed that an optimal monotonic alignment existed between the audio signal $X$ and the paired text $W$. Let $W = \{w_1, w_2, \ldots, w_N\}$ be the sequence of $N$ words to be aligned with an audio signal $X$, and let $S = \{s_1, s_2, \ldots, s_N\}$ be the corresponding sequence of aligned segments in $X$. Then, if a word $w_i$ is aligned to a segment $s_i = (t_{beg}(i), t_{end}(i))$ and another word $w_j$ is aligned to a segment $s_j = (t_{beg}(j), t_{end}(j))$, with $i < j$, the timestamps defining those segments must satisfy $t_{beg}(i) < t_{end}(i) \le t_{beg}(j) < t_{end}(j)$. Non-monotonic alignments were not allowed and non-monotonic submissions were not accepted.
The output of an alignment system was required to be a text file containing a line for each word in the paired text, each line including 5 columns (separated by any amount of spaces or tabs) with the following information:
  • $t_{beg}$: A real number with the time at which the segment starts.
  • $t_{end}$: A real number with the time at which the segment ends.
  • word: The word paired with the audio segment.
  • score: A real number reflecting the confidence in the alignment; the more positive the score, the higher the confidence, and the more negative the score, the lower the confidence.
  • decision: A binary value (0/1), 0 meaning Reject and 1 meaning Accept. Since only the accepted words would be evaluated, this decision should be made by applying a confidence score threshold.
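As an illustration of the required output format, the following sketch writes such a 5-column file from a list of aligned words with confidence scores. The segment times, words, scores, threshold and file name are hypothetical placeholders, not actual challenge data.

```python
# Sketch of writing the 5-column TaSAC-BP output file
# (t_beg, t_end, word, score, decision); all values below are placeholders.
aligned_words = [
    (12.34, 12.71, "señora", 0.82),
    (12.71, 13.40, "presidenta", 0.91),
    (13.40, 13.55, "eh", -0.35),   # low confidence: probably a wrong pairing
]
threshold = 0.0  # confidence threshold, to be tuned on the development set

with open("system_alignment_dev.txt", "w", encoding="utf-8") as f:
    for t_beg, t_end, word, score in aligned_words:
        decision = 1 if score >= threshold else 0   # 1 = Accept, 0 = Reject
        f.write(f"{t_beg:.2f} {t_end:.2f} {word} {score:.2f} {decision}\n")
```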
The participants could submit results for at most five (one primary + four contrastive) systems. Each system should automatically align the paired text with the audio, taking into account that some parts of the audio should not be aligned with any text and that the paired text did not reflect exactly the audio contents. It was not allowed to listen to the audio or use any kind of human intervention (e.g., crowdsourcing). Otherwise, any approach could be applied with no limit to the type or amount of resources that the participants could use to perform the task, as long as the employed methods and resources were described with enough detail and, if possible, links to papers, data and/or software repositories were provided to make it easier to reproduce their approach. For each system, two separate result files were required, for the development and test sets, respectively. Finally, participants would be ranked according to the performance obtained by their primary systems on the test set.

Performance Metric

The ground truth is preprocessed before it is used to compute the performance metric. First, the missing segments are added to the ground truth and assigned an out-of-vocabulary label (#). Then, the borders between segments are redefined by excluding from the evaluation a collar time $t_{collar}$ around them (in this evaluation, $t_{collar}$ = 20 ms): $t_{collar}/2$ is added to the starting time $t_{beg}$ of each segment and $t_{collar}/2$ is subtracted from its ending time $t_{end}$.
Since the objective of the alignment is to recover as much correctly transcribed speech as possible to train acoustic models for the development of ASR systems, the performance metric should reflect this objective, but also the negative impact of wrongly aligned segments that could seriously compromise this semi-supervised training strategy. Thus, the performance metric will be just the difference between the correct and the wrongly aligned times.
Let $S = \{s_1, s_2, \ldots, s_N\}$ be the output of the alignment system for the paired text $W = \{w_1, w_2, \ldots, w_N\}$. Only those segments accepted by the system will be evaluated, so let $S' = \{s'_1, s'_2, \ldots, s'_{N'}\}$ (with $N' \le N$) be the sequence of accepted segments. Each accepted segment is then aligned with the ground truth, which produces a sequence of sub-segments, each of them aligned either with a ground truth segment or with a collar time segment.
Sub-segments aligned with collar time are not evaluated and will not be considered hereafter. Let $C = \{c(1), c(2), \ldots, c(M)\}$ be the sequence of sub-segments obtained after aligning the accepted segments with the ground truth, excluding collar time. Each sub-segment is a 4-tuple:
$$c(i) = \left(t_{beg}(i), t_{end}(i), w_a(i), w_g(i)\right),$$
where $t_{beg}(i)$ is the starting time, $t_{end}(i)$ is the ending time, $w_a(i)$ is a word in the paired text and $w_g(i)$ is a word in the ground truth. The performance metric is defined as follows:
$$score(C) = \sum_{i=1}^{M} \delta\left(w_a(i), w_g(i)\right) \cdot \left(t_{end}(i) - t_{beg}(i)\right),$$
where:
$$\delta\left(w_a(i), w_g(i)\right) = \begin{cases} +1 & \text{if } w_a(i) = w_g(i) \\ -1 & \text{if } w_a(i) \ne w_g(i). \end{cases}$$
The participants were encouraged to take into account that the higher the number of accepted segments, the higher the potential amount of correctly aligned speech, but also the higher the risk of having a large amount of wrongly transcribed speech. To find the optimal balance between both events, a suitable confidence score threshold should be applied to make decisions. The scoring script provided by the organization explores all the possible thresholds that can be applied to make decisions, and outputs the optimal score and threshold.
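A simplified sketch of this metric is given below. It assumes that the ground truth is already collar-trimmed, with '#' marking audio that contains no word of the paired text, and it simply accumulates signed overlap time between accepted segments and ground-truth segments; the official scoring script performs a more detailed sub-segment alignment.

```python
# Simplified sketch of the TaSAC-BP metric: correctly aligned time minus
# wrongly aligned time over the accepted segments. Ground-truth segments are
# assumed collar-trimmed; "#" labels audio with no word of the paired text.
def overlap(a_beg, a_end, b_beg, b_end):
    return max(0.0, min(a_end, b_end) - max(a_beg, b_beg))

def score(accepted, ground_truth):
    total = 0.0
    for a_beg, a_end, a_word in accepted:
        for g_beg, g_end, g_word in ground_truth:
            dur = overlap(a_beg, a_end, g_beg, g_end)
            if dur > 0.0:
                total += dur if a_word == g_word else -dur
    return total

gt = [(0.00, 0.48, "buenos"), (0.50, 0.98, "dias"), (1.00, 2.48, "#")]
acc = [(0.00, 0.45, "buenos"), (0.50, 1.30, "dias")]
print(round(score(acc, gt), 2))  # 0.45 + 0.48 - 0.30 = 0.63
```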

Scoring Script

The scoring script provided to participants is a command-line application that requires a basic installation of Python 3 including the matplotlib module (used to produce a graphical analysis of system scores). The script takes the system alignment and the ground truth files as input and allows the user to specify the collar time (in this case, 0.02 s) as well as other optional arguments, such as the text and graphical output file names.
The output text includes two lines, the first one showing the performance obtained using system decisions, and the second one showing the best performance that can be obtained by applying a threshold on the provided scores to make decisions. By default, the text output is written on the console. The optional graphical output (a PNG file) presents the performance obtained by applying system decisions and the evolution of the correctly aligned time, the wrongly aligned time and the difference between them (that is, the performance metric) by using all the possible thresholds to make decisions. The optimal performance and the corresponding threshold are marked on the performance curve. The figure also includes the total time accepted and rejected by applying different thresholds. Obviously, applying the minimum threshold implies accepting all the words of the paired text, which does not usually yield the best performance, while applying the maximum threshold implies rejecting all the words, meaning a performance of 0. A reasonable criterion to make decisions on the test set would be to apply the optimal threshold found on the development set.
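The threshold sweep performed by the script can be illustrated with the following sketch, in which each aligned word is assumed to already carry its confidence score together with its correctly and wrongly aligned times; this is only the underlying idea, not the official script.

```python
# Sketch of the threshold sweep: accept only words whose confidence score
# reaches the threshold and recompute the metric (correct minus wrong time).
def best_threshold(words):
    """words: list of (score, correct_time, wrong_time) tuples, one per aligned word."""
    best_metric, best_thr = 0.0, None   # rejecting everything yields a score of 0
    for thr in sorted({s for s, _, _ in words}):
        accepted = [(c, w) for s, c, w in words if s >= thr]
        metric = sum(c - w for c, w in accepted)
        if metric > best_metric:
            best_metric, best_thr = metric, thr
    return best_metric, best_thr

words = [(0.9, 0.42, 0.00), (0.7, 0.35, 0.05), (0.1, 0.02, 0.60)]
metric, thr = best_threshold(words)
print(round(metric, 2), thr)  # 0.72 0.7: dropping the low-score word helps
```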

3.4. Search on Speech Challenge

The Search on Speech challenge involves searching audio content for a list of terms or queries, and it is suitable for groups working on speech indexing/retrieval and speech recognition. In other words, this challenge focuses on retrieving the audio files that contain any of those terms/queries, along with the corresponding timestamps.
This challenge consists of two different tasks:
  • Spoken Term Detection (STD), where the input to the system is a list of terms, but these terms are unknown when processing the audio. This task must generate a set of occurrences for each term detected in the audio files, along with their timestamps and score as output. This is the same task as in NIST STD 2006 evaluation [17] and Open Keyword Search in 2013 [4], 2014 [5], 2015 [6], and 2016 [7].
  • Query-by-Example Spoken Term Detection (QbE STD), where the input to the system is an acoustic query and hence a prior knowledge of the correct word/phone transcription corresponding to each query cannot be used. This task must generate a set of occurrences for each query detected in the audio files, along with their timestamps and score as output, as in the STD task. This QbE STD is the same task as those proposed in MediaEval 2011, 2012, and 2013 [18].
For the QbE STD task, participants are allowed to make use of the target language information (Spanish) when building their system/s (i.e., system/s can be language-dependent). Nevertheless, participants are strongly encouraged to build language-independent QbE STD systems, as in past MediaEval Search on Speech evaluations, where no information about the target language was given to participants.
This evaluation defined two different sets of terms/queries for STD and QbE STD tasks: an in-vocabulary (INV) set of terms/queries and an out-of-vocabulary (OOV) set of terms/queries from lexicon and language model perspectives. The OOV set of terms/queries will be composed of out-of-vocabulary words for the LVCSR system. This means that, in case participants employ an LVCSR system for processing the audio for any task (STD, QbE STD), these OOV terms (i.e., all the words that compose the term) must be previously removed from the system dictionary/language model and hence, other methods (e.g., phone-based systems) have to be used for searching OOV terms/queries. Participants can consider OOV words for acoustic model training if they find it suitable.
Regarding the QbE STD task, three different acoustic examples per query were provided for both development and test datasets. One example was extracted from the same dataset as the one to be searched (hence, in-domain acoustic examples). This scenario covers the case in which the user finds a term of interest within a certain speech dataset and wants to search for new occurrences of the same query. The other two examples were recorded by the evaluation organizers and represent a scenario in which the user pronounces the query to be searched (hence, out-of-domain acoustic examples). These two out-of-domain acoustic examples amount to 3 s of speech each, recorded in PCM format (16 kHz, single channel, 16 bits per sample) with the built-in microphone of an HP ProBook (Core i5, 7th Gen) and with a Sennheiser SC630 USB CTRL noise-cancelling microphone, respectively.
The queries employed for the QbE STD task were chosen from the STD queries. It should be noted that for both the STD and QbE STD tasks, a multi-word query was considered OOV in case any of the words that form the query was OOV.

Evaluation Metric

In search on speech systems (both for STD and QbE STD tasks), a hypothesized occurrence is called a detection; if the detection corresponds to an actual occurrence, it is called a hit; otherwise, it is called a false alarm. If an actual occurrence is not detected, this is called a miss. The main metric for the evaluation is the actual term weighted value (ATWV) metric proposed by NIST [17]. This metric combines the hit rate and false alarm rate of each query and averages over all the queries, as shown in Equation (13):
$$ATWV = \frac{1}{|\Delta|} \sum_{Q \in \Delta} \left( \frac{N_{hit}^{Q}}{N_{true}^{Q}} - \beta \frac{N_{FA}^{Q}}{T - N_{true}^{Q}} \right),$$
where $\Delta$ denotes the set of queries and $|\Delta|$ is the number of queries in this set. $N_{hit}^{Q}$ and $N_{FA}^{Q}$ represent the numbers of hits and false alarms of query $Q$, respectively, and $N_{true}^{Q}$ is the number of actual occurrences of query $Q$ in the audio. $T$ denotes the audio length in seconds and $\beta$ is a weight factor set to 999.9, as in the ATWV proposed by NIST [17]. This weight factor places an emphasis on recall compared to precision with a 10:1 ratio.
ATWV represents the term weighted value (TWV) for an optimal threshold given by the system (usually tuned on the development data). An additional metric, called maximum term weighted value (MTWV) [17] is also used to evaluate the upper-bound system performance regardless of the decision threshold.
Additionally, p(Miss) and p(FA), which represent the probability of miss and false alarm of the system as defined in Equations (14) and (15), respectively, are also reported:
$$p(\mathrm{Miss}) = 1 - \frac{N_{hit}}{N_{true}},$$
$$p(\mathrm{FA}) = \frac{N_{FA}}{T - N_{true}},$$
where $N_{hit}$ is the number of hits obtained by the system, $N_{true}$ is the actual number of occurrences of the queries in the audio, $N_{FA}$ is the number of false alarms produced by the system and $T$ denotes the audio length (in seconds). These values, therefore, provide a quantitative way to measure system performance in terms of misses (or equivalently, hits) and false alarms.
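For illustration, a minimal sketch of the ATWV, p(Miss) and p(FA) computations following Equations (13)-(15) is given below; the query statistics and audio length are invented placeholders.

```python
# Illustrative ATWV / p(Miss) / p(FA) sketch; all counts are placeholders.
BETA = 999.9

def twv_per_query(n_hit, n_fa, n_true, audio_len_s):
    return n_hit / n_true - BETA * n_fa / (audio_len_s - n_true)

def atwv(query_stats, audio_len_s):
    """query_stats: {query: (n_hit, n_fa, n_true)}; queries with n_true = 0 are skipped."""
    terms = [twv_per_query(h, fa, t, audio_len_s)
             for h, fa, t in query_stats.values() if t > 0]
    return sum(terms) / len(terms)

stats = {"parlamento": (8, 1, 10), "sequia": (3, 0, 5)}
T = 3600.0                                   # audio length in seconds
n_hit = sum(h for h, _, _ in stats.values())
n_fa = sum(fa for _, fa, _ in stats.values())
n_true = sum(t for _, _, t in stats.values())
p_miss = 1 - n_hit / n_true                  # Equation (14)
p_fa = n_fa / (T - n_true)                   # Equation (15)
print(round(atwv(stats, T), 3), round(p_miss, 3), round(p_fa, 6))
```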
In addition to ATWV, MTWV, p(Miss) and p(FA) figures, NIST also proposed a detection error tradeoff (DET) curve [19] that evaluates the performance of a system at various miss/FA ratios. Although DET curves were not used for the evaluation itself, they are also presented in this paper for system comparison.
The NIST STD evaluation tool [20] was employed to compute the MTWV, ATWV, p(Miss) and p(FA) figures, along with the DET curves.

4. IberSpeech-RTVE 2022 Submitted Systems

4.1. Speech to Text Challenge

A total of 13 different systems from four participating teams were submitted. The most relevant characteristics of each system are presented in terms of the recognition engine, and audio and text data used for training acoustic and language models.
  • BCN2BRNO [21]. BUT Speech@FIT research group (Brno University of Technology, Czech Republic) and Telefónica Research (Spain)
    BCN2BRNO submitted a primary system based on a word-level ROVER fusion of five individual models. Three models used an Encoder-Decoder Transformer architecture (XLS-R Conformer, XLSR-53-CTC and Whisper large model), the fourth was an RNN Transducer architecture and the fifth was a hybrid DNN-HMM model. The ASR pipeline consisted of 4 conceptual blocks: (a) voice activity detection to split audio recordings into smaller segments, (b) ASR models to produce N-best lists of hypotheses and scores, (c) RNN-LM rescoring to produce 1-best hypothesis for each ASR system, and (d) shallow fusion to give a single output. The system achieved 13.7% WER on the official evaluation dataset in the previous Albayzin 2020 Speech to Text Transcription challenge where the best result was 16.04% WER. The data used for training the ASR models contained the RTVE2018, 2020 and 2022 databases and the Spanish Common Voice dataset (400 h of read speech). Noise data augmentation was used with the Spanish Common Voice dataset to produce an augmented data partition with an SNR range from 6 dB to 20 dB and different noises (restaurant, street, home, workshop, fan, and non-speech segments longer than 2 s from the RTVE dataset). The language model used for rescoring was trained on a collection of Spanish newspapers’ texts and fine-tuned to the transcripts of the training data. Three contrastive systems were submitted: (c1) the fusion of an XLSR-128-Conformer with data augmentation and a CNN-TDNN-HMM with x-vectors; (c2) a single system based on XLSR-128-Conformer, and (c3) the Whisper large model.
  • UZ [22]. ViVoLab research group (University of Zaragoza, Spain)
    The ViVoLab system was a combination of several subsystems designed to perform a full subtitle edition process, from the raw audio to the creation of aligned, transcribed subtitle partitions. The subsystems included a phonetic recognizer, a phonetic subword recognizer, a speaker-aware subtitle partitioner, a sequence-to-sequence translation model working with orthographic tokens to produce the desired transcription, and an optional diarization step using the previously estimated segments. Additionally, the system used recurrent network-based language models to improve results in the steps that involve search algorithms, such as the subword decoder and the sequence-to-sequence model. The technologies involved include unsupervised pre-trained models such as WavLM to deal with the raw waveform, as well as convolutional, recurrent, and transformer layers. The acoustic models were trained using the following datasets: (a) Albayzin, a phonetically balanced corpus in Spanish with 12 h; (b) Speech-Dat-Car, a corpus recorded in a car under different driving conditions with 18 h; (c) Domolab, a corpus recorded in a domotic environment with 9 h; (d) TCSTAR, transcriptions of Spanish parliament sessions with 111 h; (e) Common Voice in Spanish with 400 h; and (f) RTVE 2018 train, dev and RTVE 2022 train with more than 800 h. In addition, more training material was added from a diverse scrape of online videos and social networks, with a total of 10,000 h. These data were filtered by performing forced alignment and selecting transcribed segments with sufficient quality. Data augmentation was used, including noises, music and artificially generated impulse responses to simulate different acoustic environments. The language model was trained using the Spanish Wikipedia, RTVE challenge subtitles, and text obtained from different Spanish newspapers. The system achieved 21.86% WER on the official evaluation dataset of the previous Albayzin 2020 Speech to Text Transcription challenge, where the best result was 16.04% WER.
  • TID [23]. Telefónica I + D (Spain)
    The TID system consisted of an acoustic end-to-end ASR system based on the XLSR-53 model, pre-trained with 56k hours of audio in 53 languages and fine-tuned on the Spanish Common Voice dataset. A voice activity detection (VAD) model was used to filter out non-speech segments. The transcription was obtained using simple greedy decoding over the frame-wise character posteriors given by the model. A self-supervised method was used to adapt the ASR to the TV broadcast domain by retrieving data from the RTVE2018 and 2022 datasets. A neural machine translation model was used as an external language model to correct the ASR hypotheses. The primary system used the VAD segments and corrected the output with the language model. Three additional contrastive systems were submitted: (c1) just used the VAD segments; (c2) used a 10 s segmentation window and corrected the output with the language model; and (c3) generated the output using a 10 s segmentation window.
  • VICOM-UPM [24]. Fundación Vicomtech (Basque Research and Technology Alliance) and Polytechnic University of Madrid (Spain)
    The VICOM-UPM submission consisted of a total of four different systems, which allowed testing state-of-the-art modeling approaches based on different learning techniques and types of neural networks. As the primary system, VICOM-UPM presented the pre-trained Wav2Vec2-XLS-R-1B model adapted with 300 h of in-domain data from the RTVE2018 and RTVE2020 datasets. The first contrastive system corresponded to a pruned RNN-Transducer model, composed of a Conformer encoder and a stateless prediction network using BPE word-pieces as output symbols. The second contrastive system was a Multistream-CNN acoustic model-based system with a non-pruned 3-gram model for decoding and an RNN-based language model for rescoring the initial lattices. Finally, the third contrastive system was the publicly available Large model of the Whisper engine. The acoustic corpus was composed of annotated audio content from 9 different datasets, totalling 1927 h: (a) RTVE2018; (b) the SAVAS corpus (broadcast news content in Spanish from the Basque Country's public broadcast corporation); (c) the IDAZLE corpus (TV shows from the Basque Country's public broadcast corporation); (d) RTVE Play 2020 and 2022, including programs broadcast between 2018 and 2022 by RTVE; (e) RTVE YouTube, including content from RTVE on the YouTube platform; (f) Spanish Common Voice; (g) Albayzin; and (h) Multext (a multilingual prosodic database). The text corpus consisted of transcriptions of the training datasets, RTVE2018 subtitles, RTVE Play and news subtitles, and generic news gathered from digital newspapers on the Internet. On the RTVE 2020 test dataset, the best results were obtained by the Whisper model (12.15% WER), followed by the primary system (13.77%).

4.2. Speaker Diarization and Identity Assignment Challenge

Only two teams participated in the Speaker Diarization challenge, and only one of them submitted results for Identity Assignment.
  • IRIT [25]. Université de Toulouse (France)
    The IRIT submission was based on the pyannote.audio diarization system. pyannote.audio (https://github.com/pyannote/pyannote-audio (accessed on 24 July 2023)) is an open-source toolkit written in Python for speaker diarization. Version 2.1 introduces a major overhaul of the default pyannote.audio speaker diarization pipeline, made of three main stages: speaker segmentation applied to a short sliding window, neural speaker embedding of each (local) speaker, and (global) agglomerative clustering. The IRIT submission focused on Speaker Diarization.
  • IV [26]. Intelligent Voice (UK)
    The Intelligent Voice submission was based on a Variational Bayes x-vector Voice Print Extraction system. The proposed system is capable of capturing vocal variations using multiple x-vector representations with two-stage clustering and outlier detection refinement. It implements the Deep-Encoder Convolution Autoencoder Denoiser network for denoising segments with noise or music on files identified by a signal-to-noise ratio classifier. Intelligent Voice submitted systems for both Speaker Diarization and Identity Assignment.

4.3. Text and Speech Alignment Challenge

Only two participants submitted results to this evaluation, possibly due to a lack of interest: text and speech alignment is just a preprocessing step in the development of ASR systems, and off-the-shelf tools are good enough in most cases.

4.3.1. Alignment of Re-Speaking Generated Subtitles (TaSAC-ST)

Only one team submitted results to this evaluation:
  • GTTS [27]. University of the Basque Country UPV/EHU (Spain)
    This system was originally developed as a baseline for the TaSAC-BP subtask and was adjusted to meet the conditions (output file format, etc.) of TaSAC-ST. Acoustic models were trained on cross-domain (non-RTVE) materials, with different channels, background/environment conditions, speakers, etc. The original GTTS submission consisted of two systems (primary and contrastive-1), which applied the same approach but different acoustic models (trained on two independent sets of non-RTVE data). The primary system used a 332-h training set consisting of clean read speech in Basque and Spanish extracted from generic acoustic databases (including Mozilla Common Voice), while the contrastive-1 system used a 1000-h training set consisting of clean spontaneous speech in Basque and Spanish extracted from Basque Parliament plenary sessions. Two late (post-key) systems (contrastive-2 and contrastive-3, trained on the same datasets as the primary and contrastive-1 systems, respectively) were also submitted to the evaluation, yielding improved performance thanks to a kernel modification in the dynamic programming algorithm which provided more compact alignment hypotheses.
    The four GTTS systems relied on a set of acoustic models used to perform an unrestricted phone decoding of the audio signal, without any language or phonological models to help the decoding. Given an audio file X and the corresponding STM file with the subtitles S, an off-the-shelf end-to-end neural-network-based phone decoder (built with wav2letter++, see [28]) was applied to X, which produced a recognized sequence of phone-like units p_X (with timing information attached). On the other hand, an in-house rule- and dictionary-based grapheme-to-phoneme (G2P) converter was applied to S, which produced a reference sequence of phone-like units p_S (with word and subtitle information attached). The two sequences of phone-like units were aligned under the criterion of maximizing the number of matches in the alignment path. Finally, the timing information was transferred from p_X to p_S and a new STM file was created, identical to the source STM except for the timestamps, which were obtained from the alignment.
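To make the match-maximizing alignment step more concrete, the following sketch shows a plain dynamic programming alignment of two phone-like sequences that maximizes the number of matched symbols; it is a minimal illustration rather than the GTTS implementation (which, among other things, later introduced a modified kernel to obtain more compact alignments).

```python
def align_max_matches(p_x, p_s):
    """Align a decoded phone sequence p_x with a reference phone sequence p_s
    so that the number of matching symbols is maximized (LCS-style DP).
    Returns the list of matched index pairs (i, j) along the optimal path."""
    n, m = len(p_x), len(p_s)
    # dp[i][j]: maximum number of matches aligning p_x[:i] with p_s[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best = max(dp[i - 1][j], dp[i][j - 1])
            if p_x[i - 1] == p_s[j - 1]:
                best = max(best, dp[i - 1][j - 1] + 1)
            dp[i][j] = best
    # Backtrack to recover the matched pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if p_x[i - 1] == p_s[j - 1] and dp[i][j] == dp[i - 1][j - 1] + 1:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

Once the matched pairs are available, the timestamps attached to the decoded units can be transferred to the corresponding reference units and, from there, to words and subtitles.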

4.3.2. Alignment and Validation of Speech Signals with Partial and Inaccurate Text Transcriptions (TaSAC-BP)

For this evaluation, the organizing team developed a baseline system that aimed to establish a reference mark for participants. The baseline system was based on an off-the-shelf phone decoder (built with wav2letter++, see [28]), an in-house grapheme-to-phoneme (G2P) converter, and a dynamic programming alignment algorithm maximizing the number of matches in the alignment path. Given an audio X and the paired text S, the phone decoder was applied to X to obtain a phonetic sequence p_X (with timestamps attached), and the G2P converter was applied to S to obtain a second phonetic sequence p_S (with word information attached); then, the two sequences p_X and p_S were aligned, and the timing information was passed from p_X to p_S, and from p_S to words. Word timestamps were post-processed to fill small gaps between words, and the word score was computed as the phone recognition rate in the alignment. Finally, the acceptance threshold was optimized on the development set and applied to the development and test sets.
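The word scoring and acceptance step of the baseline can be sketched as follows, under the assumption that the score of a word is the fraction of its reference phones matched in the alignment; the actual baseline may define the phone recognition rate slightly differently.

```python
def word_scores(word_spans, matched_ref_indices):
    """word_spans: (start, end) index ranges in the reference phone sequence p_S,
    one per word; matched_ref_indices: set of p_S indices matched in the alignment.
    The score of a word is the fraction of its phones that were matched."""
    scores = []
    for start, end in word_spans:
        n_phones = max(end - start, 1)
        n_matched = sum(1 for k in range(start, end) if k in matched_ref_indices)
        scores.append(n_matched / n_phones)
    return scores


def accept_words(words, scores, threshold):
    # Keep only the words whose score reaches the acceptance threshold,
    # which is tuned on the development set.
    return [w for w, s in zip(words, scores) if s >= threshold]
```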
Only one team submitted results to this evaluation:
  • BUT. BUT Speech@FIT research group (Martin Kocour and Martin Karafiát, from Brno University of Technology, Brno, Czech Republic)
    This submission leveraged the output of some of the ASR systems developed by the BCN2BRNO team for the Speech to Text Challenge [21]. It consisted of a primary system based on a fusion of three ASR systems (two of them based on an encoder-decoder transformer architecture, namely an XLS-R Conformer and the Whisper large model, and the third one based on an RNN transducer architecture) and a contrastive system based on the best single ASR system (XLS-R Conformer). The audio processing pipeline started with Voice Activity Detection (VAD), based on a simple feed-forward DNN with two outputs (speech/non-speech) trained on the whole RTVE2018 dataset, which produced segmented audio. Then, ASR was performed on this segmented audio and a 1-best hypothesis was obtained. This 1-best hypothesis was aligned with the original transcript, which produced segmented text; that is, the reference text was sequentially assigned to the different audio segments. The alignment process involved filtering out words not recognized by the ASR, except when the surrounding words (two on each side) were found, in which case the word was kept (see the sketch below). Finally, forced alignment was performed between the segmented text and the segmented audio, which produced a sequence of aligned words (words with timestamps). Forced alignment was run on each audio segment separately, with 10-ms steps, using a GMM-HMM-based model (Kaldi) trained on RTVE datasets.
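A rough sketch of the keep-if-neighbours-were-recognized filtering rule described above is given below; the exact matching and text normalization criteria used by the BUT team may differ.

```python
def filter_reference_words(ref_words, recognized_vocab, context=2):
    """Drop reference words that the ASR did not recognize, unless all of their
    neighbours (context words on each side) were recognized, in which case the
    word is kept. recognized_vocab: set of lower-cased words in the 1-best output."""
    found = [w.lower() in recognized_vocab for w in ref_words]
    kept = []
    for i, word in enumerate(ref_words):
        if found[i]:
            kept.append(word)
            continue
        left = found[max(0, i - context):i]
        right = found[i + 1:i + 1 + context]
        if left and right and all(left) and all(right):
            kept.append(word)  # unrecognized, but rescued by its context
    return kept
```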

4.4. Search on Speech Challenge

In this challenge, only two systems were evaluated:
  • Multistream CNN system [24]. Fundación Vicomtech (Basque Research and Technology Alliance)
    This is the same system as the second contrastive system submitted by Vicomtech to the Speech to Text Challenge, without the RNN language model rescoring step. Terms are detected by a simple search of each term within the ASR output, which produces the detected term list (see the sketch after this list).
  • Multistream CNN+rescoring system [24]. Fundación Vicomtech (Basque Research and Technology Alliance)
    This is the same system as the second contrastive system submitted by Vicomtech to the Speech to Text Challenge. As in the previous system, a simple search of the terms within the ASR output produces the detected term list.
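As a minimal illustration of this term-detection strategy, the sketch below performs an exact search of (possibly multi-word) query terms over a time-stamped 1-best ASR output; the data layout is an assumption, and the submitted systems may apply additional text normalization.

```python
def search_terms(asr_words, query_terms):
    """asr_words: list of (word, start_time, end_time) tuples from the 1-best
    ASR output; query_terms: list of query terms, possibly multi-word.
    Returns (term, start_time, end_time) detections from an exact-match search."""
    tokens = [w.lower() for w, _, _ in asr_words]
    detections = []
    for term in query_terms:
        query = term.lower().split()
        span = len(query)
        for i in range(len(tokens) - span + 1):
            if tokens[i:i + span] == query:
                start = asr_words[i][1]
                end = asr_words[i + span - 1][2]
                detections.append((term, start, end))
    return detections
```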

5. IberSpeech-RTVE 2022 Results and Discussion

5.1. Speech to Text Challenge

Table 10 presents the overall results for the RTVE2022DB test dataset. The most competitive systems were those submitted by the BCN2BRNO and VICOM-UPM teams. The best results were obtained by the BCN2BRNO team, whose primary system (a fusion of 5 individual models) achieves a word error rate of 14.35%. The good results of the VICOM-UPM team are also worth noting, as their first contrastive system, based on a pruned RNN-Transducer model, obtains a remarkable 14.78% WER. Two contrastive systems are based on the large model of the Whisper engine. Although the data and models are the same, there is a significant difference between the final WER obtained by the two submissions, which can be traced to the post-processing of the Whisper output. The post-processing strategy of the BCN2BRNO team consisted of using zlib to compress the segment transcripts and filtering out segments with a compression factor higher than 2 (see the sketch below). The VICOM-UPM team used a more sophisticated post-processing strategy: for transcriptions where more than 20% of the segments were text repetitions, the audio was segmented with a VAD module, non-speech segments longer than 2 s were discarded, and the decoding was redone. The decoding of the remaining speech segments was performed by resetting the preconditioning of the text decoder. In addition, text phrases repeated three times or more were filtered out and only a single occurrence was kept. The best WER obtained by a Whisper-based system was 14.87%, for the VICOM-UPM approach.
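A minimal sketch of the compression-based filter described for the BCN2BRNO post-processing is shown below; the threshold of 2 comes from the description above, while the encoding details are assumptions.

```python
import zlib


def is_repetitive(segment_text, max_compression_factor=2.0):
    """Highly repetitive (hallucinated or looping) transcripts compress very well,
    so a large raw-size / compressed-size ratio is used to flag bad segments."""
    raw = segment_text.encode("utf-8")
    if not raw:
        return False
    factor = len(raw) / len(zlib.compress(raw))
    return factor > max_compression_factor


# Example: keep only segments that do not look degenerate
# clean_segments = [s for s in segments if not is_repetitive(s)]
```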
It is interesting to study the distribution of WER across the different shows that make up the test set. Table 11 presents the best results by show, that is, for each show the system with the lowest WER has been taken. Remarkably, 12 out of 21 shows have a WER below 15%, the lowest WER (5.89%) being found for the Agrosfera (AG) show. The three serial dramas, Grasa (GR), Riders (RD) and Yreal (YR), are in the range of 18% to 25% WER. In the Grasa serial drama, most of the characters use Spanish slang typical of the drug world, which makes the 24.79% WER noteworthy. The EE show (indoor interviews) is composed of raw footage of interviews where the cameraman, the producer and other people besides the interviewee are talking, with a low signal-to-noise ratio, which makes the WER rise to 22.2% from the expected 10%. Finally, the worst WER, 47.79%, is obtained for the APB (A Pedir de Boca) show. The audio of this show comes from raw footage where some interviewees use Spanish slang and, most of the time, people behind the cameras are asking questions or talking, which also made it hard for human annotators to transcribe these materials.
Three shows have been used in previous challenges: CA (Comando Actualidad) and AT (Aquí la Tierra) in 2020, and SyG (Saber y Ganar) in 2018. Table 12 shows the comparison among the last three challenges for these three shows, as well as the WER of the best system over the whole test set. The significant improvement in WER observed both for the three shows and for the whole test set is noteworthy. In the latter case, there is a 10% improvement between the 2020 and 2022 challenges, even though the organization tried to raise the difficulty by including audio recordings from shows with only raw footage and Spanish slang. The increase in difficulty is clear if we compare the results obtained in 2020 and 2022 using the Whisper model. Using the VICOM-UPM post-processing of the Whisper output, the WER for the 2020 test set is 12.15% (a 24.25% relative improvement with regard to the best 2020 system), but for the 2022 test set it is 14.87% (a 22.3% relative increase with respect to the 2020 WER obtained with Whisper).
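For clarity, and assuming the relative figures quoted above are computed in the usual way as the WER difference normalized by the reference WER, the 24.25% relative improvement follows from the best 2020 system (16.04% WER, Table 12) and the Whisper result on the 2020 test set:

```latex
\Delta_{\mathrm{rel}}
  = \frac{\mathrm{WER}_{\mathrm{ref}} - \mathrm{WER}_{\mathrm{new}}}{\mathrm{WER}_{\mathrm{ref}}}
\qquad\Longrightarrow\qquad
\frac{16.04 - 12.15}{16.04} \approx 0.2425 .
```

The 22.3% relative increase follows analogously by taking the 2020 Whisper WER (12.15%) as the reference and the 2022 Whisper WER (14.87%) as the new value.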

5.2. Speaker Diarization and Identity Assignment Challenge

The results of the Speaker Diarization challenge are presented in Table 13. The table presents the results of the two participating teams, together with reference results obtained with the best system of the previous 2020 challenge. The low DER of 18.47% obtained by IRIT using pyannote.audio is noteworthy. This system yields a DER of 16% on the 2020 dataset, slightly higher than the 15.25% DER of the best 2020 system. However, the performance of the best 2020 speaker diarization system degrades to 34.29% DER on the new 2022 test dataset.
Table 14 presents the diarization results of the best system disaggregated by show, in terms of missed speaker error (MiSE), false alarm speaker error (FASE), speaker error (SpE) and diarization error rate (DER). Again, as in the Speech to Text challenge, the Agrosfera show gives the best results, with the lowest DER of 8.57%, mainly due to speaker errors. The worst results, as expected, are given by the serial drama shows, with the highest DER, 56.47%, for the Yreal show. This show is a thriller with live action and, compared with the other serial dramas in the test set, many overlaps of music and speech.
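For reference, these components combine into the DER following its standard definition, in which missed speech, false alarm speech and speaker confusion times are summed and normalized by the total scored speech time; this is the usual formulation and is consistent with the approximately additive figures reported in Table 14:

```latex
\mathrm{DER} \;=\;
\frac{T_{\mathrm{miss}} + T_{\mathrm{fa}} + T_{\mathrm{conf}}}{T_{\mathrm{total\ speech}}}
\;\approx\; \mathrm{MiSE} + \mathrm{FASE} + \mathrm{SpE}.
```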
With respect to the Speaker Diarization and Identity Assignment challenge, only the team from Intelligent Voice (IV) presented results. The IV team submitted several systems whose main differences were the VAD and the use of a Deep-Encoder Convolutional Autoencoder Denoiser (DE-CADE) speech enhancement model. The final system submitted by the IV team implemented an SNR classifier to decide whether or not to apply the DE-CADE speech enhancement algorithm to the input audio signals at inference time, based on an SNR threshold of 0.2, and a final AER of 28.88% was reported. Table 15 shows the results of the four submissions of the IV team.

5.3. Text and Speech Alignment Challenge

5.3.1. Alignment of Re-Speaking Generated Subtitles (TaSAC-ST)

Results for the four systems submitted by GTTS on the development and test sets of TaSAC-ST are shown in Table 16. In all cases, the acoustic models trained on generic databases (Primary and Con-2) perform better than those trained on BP materials (Con-1 and Con-3), suggesting that, despite being three times larger, the BP training dataset does not suitably match the TV broadcasts used in this evaluation. Clearly, using in-domain (RTVE) data to train the acoustic models would lead to better results. On the other hand, the originally submitted systems (Primary and Con-1) show degraded performance on the test set when compared to that obtained on the development set: the APTEM increases by 33.8% and 22.6%, respectively, and the mean of the alignment errors gets multiplied by almost 4, meaning that large alignment errors are being made. The alignment algorithm could be matching some words at the wrong end of relatively long non-speech intervals, thus introducing large alignment errors. This issue (the appearance of relatively long non-speech intervals) would be happening more frequently in the test set than in the development set and would explain the difference in performance. This latter issue gets fixed when using the modified kernel (see details in [27]): the APTEM of Con-2 and Con-3 is almost the same on both (development and test) datasets, while the mean of the alignment errors on the test set gets drastically reduced, from 4.0923 for the Primary system to 0.6053 for Con-2 and from 4.1990 for Con-1 to 0.7186 for Con-3, meaning 85% and 83% relative reductions, respectively.
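The two figures of merit used in Table 16 can be sketched as follows, assuming (consistently with how they are described in this paper) that APTEM is the average across programs of the per-program median alignment error, and that Mean-Err is the global mean of the per-subtitle alignment errors; the exact scoring script may differ in details such as error truncation.

```python
from statistics import mean, median


def aptem_and_mean_err(errors_by_program):
    """errors_by_program: dict mapping program id -> list of absolute subtitle
    alignment errors (in seconds). Returns (APTEM, global mean error)."""
    per_program_medians = [median(errs) for errs in errors_by_program.values()]
    aptem = mean(per_program_medians)
    all_errors = [e for errs in errors_by_program.values() for e in errs]
    return aptem, mean(all_errors)
```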
Table 17 shows the average performance of the best GTTS system (Con-2) on the three TV programs used in the development and test sets of TaSAC-ST, with the aim of discovering how performance varies with the type of TV program. The APTEM and the mean of alignment errors are quite similar for AG (Agrosfera) and CO (Corazón), but clearly worse for AT (Aquí la Tierra). The differences are not large in terms of APTEM but are remarkable in terms of the mean error, which may indicate that the alignment task can become hard in the case of challenging audio recordings and/or poor subtitles.

5.3.2. Alignment and Validation of Speech Signals with Partial and Inaccurate Text Transcriptions (TaSAC-BP)

The BUT team provided development results only for the contrastive system and test results only for the primary system. Performance figures for the baseline and BUT systems are shown in Table 18 (development) and Table 19 (test). The optimal performance is shown too, along with the optimal word confidence threshold. The search for the optimal performance is carried out by sorting the accepted words by confidence, then leaving aside words one by one from the bottom of the list (the words with the lowest confidence), measuring the score time in each case and keeping the best one (see the sketch below). The confidence threshold in parentheses corresponds to that of the last word accepted, but there could be other words with the same confidence score that were rejected. The BUT systems are quite competitive (they recover most of the available speech in Spanish) and are always better than the baseline. The proposed approach seems to deal well with the two languages and matches the reference transcripts (only in Spanish) with the corresponding audio sections. However, since word confidence scores are all set to 1 in the BUT submissions, using a confidence threshold to optimize performance does not make sense in this case.
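The search for the optimal operating point can be sketched as follows, under the assumption that the score time is computed as the correctly recovered speech time minus the wrongly accepted time, as suggested by the Correct, Wrong and Score columns of Tables 18 and 19.

```python
def best_operating_point(words):
    """words: list of (confidence, duration, is_correct) for the accepted words,
    where duration is the word's speech time in seconds. Sweeps the confidence
    threshold by dropping the least confident words one by one and returns the
    (threshold, score) pair maximizing score = correct_time - wrong_time."""
    words = sorted(words, key=lambda w: w[0])  # lowest confidence first
    correct = sum(d for _, d, ok in words if ok)
    wrong = sum(d for _, d, ok in words if not ok)
    best_score, best_threshold = correct - wrong, 0.0
    for conf, dur, ok in words:          # drop this word and re-score
        if ok:
            correct -= dur
        else:
            wrong -= dur
        if correct - wrong > best_score:
            best_score, best_threshold = correct - wrong, conf
    return best_threshold, best_score
```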

5.4. Search on Speech Challenge

The results of the search on speech challenge are presented in Table 20 for the RTVE2022DB test dataset, which was the only dataset that received submissions. They show that the Multistream CNN+rescoring system performs better than the Multistream CNN system. A paired t-test shows that this improvement is significant (p < 0.02), indicating that the rescoring approach of the Multistream CNN+rescoring system helps term detection. The small gap between the MTWV and ATWV results clearly indicates that the detection threshold is well calibrated. When comparing the best evaluation result (ATWV = 0.6694) with the best system submitted to the 2020 SoS challenge (ATWV = 0.2123), the performance is much better, which is due to the more robust ASR system constructed for this year's evaluation.
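As a reminder of how these figures are computed, the Term Weighted Value used in NIST-style spoken term detection evaluations averages, over the set of query terms K, a weighted combination of miss and false alarm probabilities at a given decision threshold; ATWV is the value at the actual (submitted) threshold and MTWV is the maximum over all thresholds. The cost weight beta is typically set to 999.9 in these evaluations, although the exact parameterization used here is the one defined in the evaluation plan.

```latex
\mathrm{TWV}(\theta) \;=\; 1 \;-\; \frac{1}{|K|} \sum_{k \in K}
  \Big[ P_{\mathrm{miss}}(k,\theta) + \beta \, P_{\mathrm{FA}}(k,\theta) \Big],
\qquad
\mathrm{MTWV} \;=\; \max_{\theta} \mathrm{TWV}(\theta).
```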
The DET curves of the submitted systems are presented in Figure 1. They show that the Multistream CNN+rescoring system performs about the same as the Multistream CNN system for high miss ratios and outperforms it for low miss ratios.

6. Conclusions

The Albayzin evaluation campaigns constitute a unique and self-contained framework with which to measure the progress of speech technologies in Iberian languages (especially Spanish). This paper provides a comprehensive overview of the most recent Albayzin evaluation campaign: the IberSpeech-RTVE 2022 Challenges, which featured four different evaluations focused on TV broadcast content: speech-to-text transcription, speaker diarization and identity assignment, text and speech alignment, and search on speech. This is the first time that the text and speech alignment task has been addressed in the Albayzin evaluation campaigns. Since its first release in 2018, the RTVE database provided for system development has grown to 1000 h, with half of the material transcribed by humans, 96 h labelled with speaker turns and 58 h annotated with identity labels from a closed set of around 200 characters. Besides RTVE broadcast data, conference talks and panels (MAVIR) and parliamentary sessions (SPARL22, Basque Parliament dataset) have also been used in some of the challenges in order to expand the range of domains and conditions.
In the Speech-to-Text task, four teams participated with 13 different system submissions. The evaluation was carried out over 21 different shows, with a total of 54 h, covering a broad range of acoustic conditions. The best result in terms of WER was given by a system composed of the fusion of 5 models, with a WER of 14.35%. The best single system gave a WER of 14.78%. It is remarkable that the zero-shot system, the Whisper Large model released in September 2022, obtained a WER of 14.87% when proper post-processing of the output was applied. It is also noteworthy that the 2022 test dataset is more difficult than the previous one by 22.3%, according to the results obtained with Whisper on both datasets.
With regard to the speaker diarization and identity assignment challenge, only two teams participated in the speaker diarization task, and only one of them submitted systems to the identity assignment task. The organization provided a baseline system based on the best 2020 system. The evaluation was carried out over 9 different shows, with a total of 25 h, covering a broad range of conditions in terms of acoustic background, number of speakers and amount of overlapping speech. The best system obtained a DER of 18.47%, a large gap with respect to the other submissions, including the baseline system.
There were only two participants in the text and speech alignment challenge. In the first task, focused on re-speaking generated subtitles, the best submitted system obtained an average median error per program (APTEM) of 0.29 s and a global mean error of 0.61 s, which are still too high for many applications. The system was trained on out-of-domain (non-RTVE) data, so better results could be expected when using in-domain training data. Also, it was found that some programs can be more challenging than others, with almost twice the global mean error. In the second task, focused on retrieving training materials from loosely transcribed audio involving two languages (Basque and Spanish), the submitted system leveraged state-of-the-art ASR technology to perform unrestricted text and speech alignments, obtaining very competitive scores and successfully matching most of the reference transcripts (only in Spanish) with the corresponding audio sections.
In the search on speech challenge, which employed a subset of the test dataset used in the speech-to-text transcription task, a single participant submitted two different systems. The best system obtained an ATWV of 0.6694, which shows that there is still ample room for improvement in this task.
In summary, the results show an improvement in the performance of the speech technologies assessed in the four challenges compared with previous campaigns. Promising results in both the speech-to-text transcription and speaker diarization tasks were obtained in some TV shows belonging to genres such as news, interviews, thematic magazines or daily magazines. However, there is still room for improvement when dealing with genres such as serial drama, reality shows and game shows. For text and speech alignment, more datasets are needed in order to cover all the problems associated with re-speaking, where the re-speakers often end up paraphrasing, and more in-the-wild materials involving a wide range of acoustic conditions and transcript qualities should be used to assess the performance of text and speech alignment methods for leveraging loosely or partially transcribed audio resources as training data for ASR. With regard to the search on speech tasks (STD and QbE-STD), more research is needed to address the increased complexity posed by out-of-vocabulary and/or multi-word terms when compared to in-vocabulary and single-word terms.
We are now actively preparing a new edition of the Albayzin evaluations, to be held along with the next IberSpeech conference in 2024. An extension of the RTVE database with new challenging audiovisual material will be released by April 2024, which will hopefully help to assess new developments in speech and language technologies.

Author Contributions

Conceptualization, E.L., V.B., L.J.R.-F. and J.T.; methodology, E.L., L.J.R.-F. and J.T.; software, E.L., L.J.R.-F., A.O., A.M., V.B., C.P., A.d.P., M.P., A.V., G.B., A.Á., H.A. and D.T.-T.; validation, E.L., L.J.R.-F., J.T. and D.T.-T.; formal analysis, E.L., L.J.R.-F., J.T. and D.T.-T.; investigation, E.L., L.J.R.-F., J.T. and D.T.-T.; resources, all authors; data curation, E.L., L.J.R.-F., J.T. and D.T.-T.; writing—original draft preparation, E.L., L.J.R.-F. and J.T.; writing—review and editing, all authors; visualization, all authors; supervision, E.L., L.J.R.-F. and J.T.; project administration, E.L., V.B., C.P. and A.d.P.; funding acquisition, E.L., V.B., C.P. and A.d.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by Radio Televisión Española through the RTVE Chair at the University of Zaragoza, and by the Red Temática en Tecnologías del Habla (RED2022-134270-T), funded by AEI (Ministerio de Ciencia e Innovación); it was also partially funded by the European Union’s Horizon 2020 research and innovation program under Marie Skłodowska-Curie Grant 101007666; in part by MCIN/AEI/10.13039/501100011033 and by the European Union “NextGenerationEU”/PRTR under Grants PDC2021-120846C41 and PID2021-126061OB-C44, and in part by the Government of Aragon (Grant Group T3623R); it was also partially funded by the Spanish Ministry of Science and Innovation (OPEN-SPEECH project, PID2019-106424RB-I00), by the Basque Government under the general support program to research groups (IT-1704-22), and by projects RTI2018-098091-B-I00 and PID2021-125943OB-I00 (Spanish Ministry of Science and Innovation and ERDF).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The RTVE database is freely available subject to the terms of a license agreement with RTVE (http://catedrartve.unizar.es/rtvedatabase.html (accessed on 24 July 2023)). Requirements for downloading the MAVIR database can be found in http://cartago.lllf.uam.es/mavir/index.pl?m=descargas (accessed on 24 July 2023). For details on SPARL22 database access, please contact Javier Tejedor ([email protected]). To access the Basque Parliament datasets, the ground-truth files and the evaluation script of the TaSAC-BP evaluation, please contact Luis Javier Rodriguez-Fuentes ([email protected]).

Acknowledgments

We gratefully acknowledge the support of the IberSpeech 2022 organizers and every one of the participants for their invaluable contribution to the Albayzin evaluations.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Garofolo, J.; Fiscus, J.; Fisher, W. Design and preparation of the 1996 HUB-4 broadcast news benchmark test corpora. In Proceedings of the DARPA Speech Recognition Workshop; Morgan Kaufmann Publishers: Burlington, MA, USA, 1997.
  2. Graff, D. An overview of Broadcast News corpora. Speech Commun. 2002, 37, 15–26.
  3. Bell, P.; Gales, M.J.F.; Hain, T.; Kilgour, J.; Lanchantin, P.; Liu, X.; McParland, A.; Renals, S.; Saz, O.; Wester, M.; et al. The MGB challenge: Evaluating multi-genre broadcast media recognition. In Proceedings of the 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, 13–17 December 2015; pp. 687–693.
  4. NIST. NIST Open Keyword Search 2013 Evaluation (OpenKWS13), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2013.
  5. NIST. NIST Open Keyword Search 2014 Evaluation (OpenKWS14), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2014.
  6. NIST. NIST Open Keyword Search 2015 Evaluation (OpenKWS15), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2015.
  7. NIST. NIST Open Keyword Search 2016 Evaluation (OpenKWS16), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2016.
  8. NIST. 2017 Pilot Open Speech Analytic Technologies Evaluation (2017 NIST Pilot OpenSAT), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2019.
  9. NIST. NIST Open Speech Analytic Technologies 2019 Evaluation Plan (OpenSAT19), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2019.
  10. NIST. NIST Open Speech Analytic Technologies 2020 Evaluation Plan (OpenSAT20), 1st ed.; National Institute of Standards and Technology (NIST): Washington, DC, USA, 2020.
  11. Lleida, E.; Ortega, A.; Miguel, A.; Bazán-Gil, V.; Pérez, C.; Gómez, M.; de Prada, A. Albayzin 2018 Evaluation: The IberSpeech-RTVE Challenge on Speech Technologies for Spanish Broadcast Media. Appl. Sci. 2019, 9, 5412.
  12. Lleida, E.; Ortega, A.; Miguel, A.; Bazán-Gil, V.; Pérez, C.; Gómez, M.; de Prada, A. IberSpeech 2020 Evaluation Results. Available online: http://catedrartve.unizar.es/albayzin2020results.html (accessed on 22 June 2023).
  13. Lleida, E.; Ortega, A.; Miguel, A.; Bazán, V.; Pérez, C.; Gómez, M.; de Prada, A. RTVE2018 Database Description. Available online: http://catedrartve.unizar.es/reto2018/RTVE2018DB.pdf (accessed on 22 June 2023).
  14. Lleida, E.; Ortega, A.; Miguel, A.; Bazán-Gil, V.; Pérez, C.; Gómez, M.; de Prada, A. RTVE2020 Database Description. Available online: http://catedrartve.unizar.es/reto2020/RTVE2020DB.pdf (accessed on 22 June 2023).
  15. Zelenák, M.; Schulz, H.; Hernando, J. Albayzin 2010 Evaluation campaign: Speaker diarization. In Proceedings of the Jornadas en Tecnología del Habla and Iberian SLTech Workshop, Vigo, Spain, 10–12 November 2010; p. 301.
  16. Zelenák, M.; Schulz, H.; Hernando, J. Speaker diarization of broadcast news in Albayzin 2010 evaluation campaign. EURASIP J. Audio Speech Music Process. 2012, 2012, 19.
  17. Fiscus, J.G.; Ajot, J.G.; Garofolo, J.S.; Doddington, G. Results of the 2006 Spoken Term Detection Evaluation. In Proceedings of the ACM SIGIR, Amsterdam, The Netherlands, 23–27 July 2007; pp. 1–4.
  18. Metze, F.; Anguera, X.; Barnard, E.; Davel, M.; Gravier, G. Language Independent Search in Mediaeval’s Spoken Web Search Task. Comput. Speech Lang. 2014, 28, 1066–1082.
  19. Martin, A.; Doddington, G.; Kamm, T.; Ordowski, M.; Przybocki, M. The DET Curve In Assessment Of Detection Task Performance. In Proceedings of the Eurospeech, Rhodes, Greece, 22–25 September 1997; pp. 1895–1898.
  20. NIST. Evaluation Toolkit (STDEval) Software; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 1996.
  21. Kocour, M.; Umesh, J.; Karafiat, M.; Švec, J.; López, F.; Luque, J.; Beneš, K.; Diez, M.; Szoke, I.; Veselý, K.; et al. BCN2BRNO: ASR System Fusion for Albayzin 2022 Speech to Text Challenge. In Proceedings of the IberSPEECH, Granada, Spain, 14–16 November 2022; pp. 276–280.
  22. Miguel, A.; Ortega, A.; Lleida, E. ViVoLAB System Description for the S2TC IberSPEECH-RTVE 2022 Challenge. Available online: http://catedrartve.unizar.es/reto2022/83-ViVoLAB_System_Description_for_S2TC_IberSPEECH_RTVE_2022_challenge.pdf (accessed on 23 June 2023).
  23. López, F.; Luque, J. TID Spanish ASR system for the Albayzin 2022 Speech-to-Text Transcription Challenge. In Proceedings of the IberSPEECH, Granada, Spain, 14–16 November 2022; pp. 271–275.
  24. Arzelus, H.; Torres, I.G.; Martín-Doñas, J.M.; González-Docasal, A.; Alvarez, A. The Vicomtech-UPM Speech Transcription Systems for the Albayzín-RTVE 2022 Speech to Text Transcription Challenge. In Proceedings of the IberSPEECH, Granada, Spain, 14–16 November 2022; pp. 266–270.
  25. Bredin, H. Pyannote.Audio 2.1 Speaker Diarization Pipeline: Principle, Benchmark, and Recipe. Available online: https://catedrartve.unizar.es/reto2022/PYA_report.pdf (accessed on 22 June 2023).
  26. Shrestha, R.; Glackin, C.; Wall, J.; Moniri, M.; Cannings, N. Intelligent Voice Speaker Recognition and Diarization System for IberSpeech 2022 Albayzin Evaluations Speaker Diarization and Identity Assignment Challenge. Available online: https://catedrartve.unizar.es/reto2022/82-Albayzin_IV_paper_final.pdf (accessed on 22 June 2023).
  27. Bordel, G.; Rodriguez-Fuentes, L.J.; Peñagarikano, M.; Varona, A. GTTS Systems for the Albayzin 2022 Speech and Text Alignment Challenge. In Proceedings of the IberSPEECH, Granada, Spain, 14–16 November 2022; pp. 285–289.
  28. Collobert, R.; Puhrsch, C.; Synnaeve, G. Wav2Letter: An End-to-End ConvNet-based Speech Recognition System. arXiv 2016, arXiv:1609.03193.
Figure 1. Search on speech challenge: DET curves. DET curves for the systems submitted to the search on speech challenge for the RTVE2022DB test dataset.
Table 1. RTVE database.
GenreTV Show Name201820202022Total SpeechTranscribed
Cooking showA pedir de boca X3:41:383:41:38
Cooking showHacer de comer X10:11:0010:11:00
Cooking showMasterchef X139:40:00139:40:00
Daily magazineAquí la tierra XX21:58:4621:58:46
Daily magazineCorazón X3:00:173:00:17
Daytime televisionLa mañanaX 227:47:009:35:00
Daytime televisiónLos desayunos de tve X 10:58:3410:58:34
Game show¿Juegas o qué? X35:53:0035:53:00
Game show 3 × 4 X2:58:172:58:17
Game showArranca en VerdeX 5:38:051:00:30
Game showDicho y HechoX 10:06:001:48:00
Game showEl cazador X3:48:223:48:22
Game showSaber y GanarX X33:28:387:23:21
Game showVaya crack X 5:06:005:06:00
Interviews (indoor)Ateneo X1:40:091:40:09
Interviews (indoor)Conversatorios en Casa América X1:58:441:58:44
Interviews (indoor)Entrevistas en estudio X3:54:573:54:57
Interviews (indoor)Imprescindibles X 3:12:213:12:21
Interviews (street)Encuestas con ruido ambiente X2:08:132:08:13
News show20HX 41:35:509:13:13
News showAsuntos publicosX 69:38:008:11:00
News showEspaña en comunidadX 13:02:598:09:32
News showInformativos UMATIC X0:59:490:59:49
News showLa tarde en 24H EconomiaX 4:10:540:00:00
News showLa tarde en 24H El tiempoX 2:20:120:00:00
News showLa tarde en 24H EntrevistaX 4:54:030:00:00
News showLa tarde en 24H TertuliaX 26:42:008:52:20
News showLatinoamerica en 24HX 16:19:004:06:57
News showNoticias Nacional X2:14:322:14:32
Outdoor risky sportsAl filo de lo imposibleX 11:09:574:10:03
Reality showComando actualidadXXX25:04:4115:54:13
Reality showEl paisano X15:41:0015:41:00
Reality showEspaña Directo X4:05:574:05:57
Reality showEspañoles en el mundo X28:11:0028:11:00
Reality showLa paisana X7:31:007:31:00
Serial dramaBajo la red X 0:59:010:59:01
Serial dramaBoca norte X 1:00:461:00:46
Serial dramaGrasa X1:29:361:29:36
Serial dramaNeverfilms X 0:11:410:11:41
Serial dramaRiders X1:15:001:15:00
Serial dramaSi fueras tu X 0:51:140:51:14
Serial dramaWake-up X 0:57:280:57:28
Serial dramaYreal X1:08:461:08:46
Sketch comedyComo nos reíamos X 2:51:422:51:42
Soap operaMercado central X 8:39:478:39:47
Talk showDías de Cine X14:45:0014:45:00
Talk showEse programa del que usted me habla XX22:07:3622:07:36
Talk showLa noche en 24HX 33:11:0633:11:06
Talk showMillenniumXX 21:04:469:38:55
Talk showVersión española X 2:29:122:29:12
Thematic magazineAgrosferaX X41:49:524:15:20
Thematic magazineCerámica Popular Española X1:02:351:02:35
Thematic magazineJara y Sedal X2:29:172:29:17
Thematic magazineToros X0:49:570:49:57
TOTAL HOURS 960:05:17497:31:44
Table 2. Show content and genre of the TV shows included in the 2022 test dataset.
Program | Genre | Show Content
3 × 4 | Game show | Magazine contest broadcast on TVE from 1986 to 1990; it took place live, except for certain interviews and performances by invited artists, which were recorded in advance.
A pedir de boca | Cooking show | Raw footage of the TV show. The program takes a tour of the history, habitat and processes of making quality food produced in Spain.
Agrosfera | Thematic magazine | Informative program of public service and citizen participation on the news of the primary sector, rural areas and the food industry.
Aquí la Tierra | Daily magazine | A magazine that deals with the influence of climatology and meteorology both personally and globally.
Ateneo | Thematic magazine | Cultural program on art and books broadcast in the 60s.
Cerámica Popular Española | Thematic magazine | Cultural program on ceramics in rural Spain broadcast in the 80s.
Comando Actualidad | Reality show | A show that presents a current topic through the choral gaze of several street reporters. Four journalists who travel to the place where the news occurs show them as they are and bring their personal perspective to the subject.
Conversatorios en Casa América | Interviews (indoor) | An interview program with renowned figures that seeks to delve into the richness and diversity of Latin American societies. Guest and journalist talk in the halls of Casa de América.
Corazón | Daily magazine | Show in which you can find news about the social life of celebrities, fashion, beauty and other current issues.
El cazador | Game show | Four strangers work as a team to answer general knowledge questions. They must defeat the Hunter, a ruthless quiz show genius, to win the prize money.
Encuestas con ruido ambiente | Interviews (street) | Raw footage of interviews in the street.
Entrevistas en estudio | Interviews (indoor) | Raw footage of indoor interviews.
España Directo | Reality show | A life magazine that makes a social chronicle of Spain, getaways, the weather, parties, reports and cooking recipes, as well as lots of entertainment.
Grasa | Serial drama | A Spanish dramedy streaming television series. Set in Seville, the fiction follows the life of Pedro Marrero, aka “El Grasa”, an overweight criminal with an unhealthy lifestyle who suffers from a heart attack and then decides to radically change his life in order to improve his health condition and stay alive.
Informativos UMATIC | News show | A mix of interviews and cultural magazine broadcast in the 90s in the territorial center of Aragón.
Jara y Sedal | Thematic magazine | Show dedicated to the world of hunting and fishing that takes place in Spain.
Noticias Nacional | News show | News clips broadcast since the 1960s.
Riders | Serial drama | A fiction series with a mix of thriller, comedy and elements of social drama.
Saber y Ganar | Game show | Daily contest that aims to disseminate culture in an entertaining way. Three contestants demonstrate their knowledge and mental agility through a set of general questions.
Toros | Thematic magazine | Broadcast and raw footage of a TVE show related to bulls.
Yreal | Serial drama | Action thriller television series which blends live-action footage with 2D animation.
Table 3. RTVE2022DB test dataset distribution among the different challenges. S2T: Speech to Text, SDIA: Speaker Diarization and Identity Assignment, TaSA: Text and Speech Alignment, SoS: Search on Speech.
TV Show Name | Show Code | S2T | SDIA | TaSA | SoS
3 × 4 | 3 × 4 | 2:58:17 | 2:58:17 | - | 0:59:21
A pedir de boca | APB | 3:41:38 | - | - | -
Agrosfera | AG | 4:15:20 | 4:15:20 | 6:38:04 | 0:51:27
Aquí la Tierra | AT | 2:46:44 | 2:46:44 | 2:52:35 | 0:28:51
Ateneo | ATE | 1:40:09 | - | - | -
Cerámica Popular Española | CPE | 1:02:35 | - | - | -
Comando Actualidad | CA | 3:59:29 | 3:59:29 | - | 1:00:39
Conversatorios en Casa América | CCA | 1:58:44 | - | - | -
Corazón | CO | 3:00:17 | 3:00:17 | 4:33:49 | 0:30:03
El cazador | EC | 3:48:22 | - | - | -
Encuestas con ruido ambiente | ERA | 2:08:13 | - | - | -
Entrevistas en estudio | EE | 3:54:57 | - | - | -
España Directo | ED | 4:05:57 | 4:05:57 | - | 0:59:01
Grasa | GR | 1:29:36 | 1:29:36 | - | -
Informativos UMATIC | IU | 0:59:49 | - | - | -
Jara y Sedal | JyS | 2:29:17 | - | - | -
Noticias Nacional | NN | 2:14:32 | - | - | -
Riders | RD | 1:15:00 | 1:15:00 | - | -
Saber y Ganar | SyG | 4:28:28 | - | - | -
Toros | TO | 0:49:57 | - | - | -
Yreal | YR | 1:08:46 | 1:08:46 | - | -
Total hours | | 54:16:07 | 24:59:26 | 14:04:28 | 4:49:22
Table 4. MAVIR database: number of word occurrences (#occ.), duration (dur.) in minutes (min.), and number of speakers (#spk.) for training (train), development (dev) and testing (test) datasets.
File ID | Data Type | #occ. | dur. (min) | #spk.
Mavir-02 | train | 13,432 | 74.51 | 7 (7 males)
Mavir-06 | train | 4332 | 29.15 | 3 (2 males, 1 female)
Mavir-08 | train | 3356 | 18.90 | 1 (1 male)
Mavir-09 | train | 11,179 | 70.05 | 1 (1 male)
Mavir-12 | train | 11,168 | 67.66 | 1 (1 male)
Mavir-03 | dev | 6681 | 38.18 | 2 (1 male, 1 female)
Mavir-07 | dev | 3831 | 21.78 | 2 (2 males)
Mavir-04 | test | 9310 | 57.36 | 4 (3 males, 1 female)
Mavir-11 | test | 3130 | 20.33 | 1 (1 male)
Mavir-13 | test | 7837 | 43.61 | 1 (1 male)
ALL | train | 43,467 | 260.67 | 13 (12 males, 1 female)
ALL | dev | 10,512 | 59.96 | 4 (3 males, 1 female)
ALL | test | 20,277 | 121.3 | 6 (5 males, 1 female)
Table 5. MAVIR database query information: number of terms and number of occurrences for STD and QbE STD tasks both for development and test datasets (‘dev.’ stands for development). The term length of the development query lists varies between 4 and 27 graphemes. The term length of the MAVIR test query lists varies between 4 and 28 graphemes.
Task | #dev. Terms | #dev. Occurrences | #test Terms | #test Occurrences
STD | 374 | 1014 | 223 | 2121
QbE STD | 102 | 425 | 106 | 1192
Table 6. RTVE2022-SoS database: number of word occurrences (#occ.), duration (dur.) in minutes (min.), and number of speakers (#spk.) for development (dev2) and testing (test) datasets.
File ID | Data Type | #occ. | dur. (min) | #spk.
LN24H-20151125 | dev2 | 21,049 | 123.50 | 22
LN24H-20151201 | dev2 | 19,727 | 112.43 | 16
LN24H-20160112 | dev2 | 18,617 | 110.40 | 19
LN24H-20160121 | dev2 | 18,215 | 120.33 | 18
millennium-20170522 | dev2 | 8330 | 56.50 | 9
millennium-20170529 | dev2 | 8812 | 57.95 | 10
millennium-20170626 | dev2 | 7976 | 55.68 | 14
millennium-20171009 | dev2 | 9863 | 58.78 | 12
millennium-20171106 | dev2 | 8498 | 59.57 | 16
millennium-20171204 | dev2 | 9280 | 60.25 | 10
millennium-20171211 | dev2 | 9502 | 59.70 | 12
millennium-20171218 | dev2 | 9386 | 55.55 | 15
DG00090476 | test | 9683 | 60.65 | 52
DG90266390 | test | 8127 | 59.37 | 21
DG90715506 | test | 4533 | 30.06 | 46
DG90721106 | test | 4966 | 28.87 | 22
DG90734223 | test | 9518 | 59.03 | 55
I00920573 | test | 8397 | 51.46 | 77
ALL | dev2 | 149,255 | 930.64 | 173
ALL | test | 45,224 | 289.44 | 273
Table 7. RTVE2022-SoS database query information: number of terms and number of occurrences for STD and QbE STD tasks both for development and test datasets (‘dev.’ stands for development, specifically for the dev2 dataset). The term length of the development query lists varies between 4 and 25 graphemes. The term length of the RTVE test query lists varies between 4 and 27 graphemes.
Task | #dev. Terms | #dev. Occurrences | #test Terms | #test Occurrences
STD | 398 | 1502 | 260 | 1039
QbE STD | 103 | 574 | 107 | 896
Table 8. SPARL22 database: number of word occurrences (#occ.), duration (dur.) in minutes (min.), and number of speakers (#spk.) for the testing (test) dataset.
File ID | #occ. | dur. (min) | #spk.
13_000500_003_1_19421_642906 | 875 | 5.55 | 2 (1 male, 1 female)
13_000400_007_0_19432_643097 | 563 | 3.53 | 2 (1 male, 1 female)
13_000400_005_0_19422_642932 | 718 | 3.57 | 2 (1 male, 1 female)
13_000400_005_0_19422_642923 | 1898 | 11.62 | 1 (1 female)
13_000400_005_0_19422_642922 | 1733 | 11.67 | 1 (1 female)
13_000400_004_0_19388_642448 | 1107 | 7.43 | 1 (1 male)
13_000400_003_0_19381_642399 | 1403 | 8.13 | 3 (2 males, 1 female)
13_000400_003_0_19381_642398 | 1279 | 11.45 | 3 (2 males, 1 female)
13_000400_002_1_19376_642375 | 2007 | 13.70 | 2 (1 male, 1 female)
13_000400_002_1_19376_642366 | 1720 | 10.73 | 1 (1 male)
13_000327_002_0_19437_643241 | 1405 | 8.73 | 2 (2 males)
12_000400_153_0_18748_633006 | 1331 | 8.33 | 2 (2 males)
12_000400_148_0_18727_632388 | 1012 | 5.42 | 2 (1 male, 1 female)
12_000400_003_0_16430_586456 | 1484 | 10.33 | 1 (1 male)
ALL | 18,535 | 120.19 | 25 (16 males, 9 females)
Table 9. SPARL22 database query information: number of terms and number of occurrences for STD and QbE STD tasks for the test dataset. The term length of the SPARL22 test query lists varies between 3 and 26 graphemes.
Task | #test Terms | #test Occurrences
STD | 282 | 1603
QbE STD | 108 | 969
Table 10. Speech to Text challenge results for the RTVE2022DB test dataset.
System ID | Primary | C1 | C2 | C3
BCN2BRNO | 14.35% | 15.24% | 17.22% | 18.65%
TID | 23.50% | 23.45% | 24.87% | 25.25%
VICOM-UPM | 15.30% | 14.78% | 17.29% | 14.87%
UZ | 20.32% | 26.49% | - | -
Table 11. Speech to Text challenge: Best WER (%) by show. The WER for the 3 best systems (TOTAL WER < 15%) is shown jointly with the best WER among all systems.
Show Code | BCN2BRNO(P) | VICOM-UPM(C1) | WHISPER | Best WER | Best System
3 × 4 | 13.11 | 13.37 | 14.78 | 12.60 | VICOM-UPM(P)
AG | 5.89 | 6.72 | 6.16 | 5.89 | BCN2BRNO(P)
APB | 49.85 | 60.24 | 67.05 | 47.79 | UZ(P)
AT | 11.16 | 11.65 | 9.60 | 9.60 | Whisper
ATE | 9.47 | 9.07 | 7.92 | 7.92 | Whisper
CA | 17.71 | 19.09 | 21.30 | 17.71 | BCN2BRNO(P)
CCA | 10.31 | 9.51 | 11.93 | 9.51 | VICOM-UPM(C1)
CO | 8.55 | 7.89 | 9.88 | 7.89 | VICOM-UPM(C1)
CPE | 13.87 | 13.46 | 14.57 | 13.46 | VICOM-UPM(C1)
EC | 13.61 | 14.32 | 13.45 | 13.45 | Whisper
ED | 14.38 | 14.21 | 13.36 | 13.36 | Whisper
EE | 23.55 | 25.16 | 22.20 | 22.20 | Whisper
ERA | 22.08 | 19.88 | 18.80 | 18.80 | Whisper
GR | 26.63 | 29.02 | 31.08 | 24.79 | VICOM-UPM(P)
IU | 20.36 | 19.79 | 19.50 | 19.50 | Whisper
JYS | 11.79 | 12.04 | 11.74 | 10.69 | VICOM-UPM(C2)
NN | 9.33 | 10.20 | 10.49 | 9.33 | BCN2BRNO(P)
RD | 20.60 | 23.30 | 20.84 | 18.18 | VICOM-UPM(P)
SYG | 10.05 | 10.24 | 10.07 | 10.05 | BCN2BRNO(P)
TO | 23.98 | 20.61 | 24.91 | 20.61 | VICOM-UPM(C1)
YR | 29.74 | 25.33 | 21.48 | 21.48 | Whisper
TOTAL | 14.35 | 14.78 | 14.87 | 14.35 | BCN2BRNO(P)
Table 12. WER performance across challenges (2018, 2020 and 2022) on three different shows. The Best System figures are computed on the whole test dataset of each challenge.
Show | 2018 | 2020 | 2022 | WER Improvement
AT | - | 13.93% | 9.26% | 31.1%
CA | - | 20.90% | 17.71% | 15.2%
SyG | 14.77% | - | 10.05% | 31.9%
Best System | 16.45% | 16.04% | 14.35% | 10.5%
Table 13. DER (%) performance on the Albayzin 2022 Speaker Diarization Challenge.
System ID | Primary | C1 | C2
IV | 35.59% | 45.92% | 48.67%
IRIT | 18.47% | 19.58% | -
Best System 2020 | 34.29% | - | -
Table 14. Speaker Diarization 2022 challenge, best system results in terms of the MiSE (missed speaker error), FASE (false alarm speaker error), SpE (speaker error) and DER.
Program | #spkrs | MiSE (%) | FASE (%) | SpE (%) | DER (%)
3 × 4 | 17 | 7.70 | 0.77 | 4.80 | 13.27
AG | 60 | 0.58 | 0.25 | 7.75 | 8.57
AT | 20 | 4.82 | 1.02 | 6.20 | 12.04
CA | 57 | 7.35 | 2.19 | 16.30 | 25.84
CO | 45 | 2.41 | 1.05 | 17.52 | 20.98
ED | 72 | 7.46 | 0.64 | 10.29 | 18.39
GR | 20 | 11.58 | 6.37 | 16.68 | 34.64
RD | 16 | 15.35 | 6.69 | 14.32 | 36.36
YR | 11 | 26.59 | 15.18 | 14.70 | 56.47
TOTAL | - | 5.86 | 1.57 | 11.03 | 18.47
Table 15. Speaker Diarization and Identity Assignment 2022 challenge results in terms of MiSE (missed speaker error), FASE (false alarm speaker error), SpE (speaker error) and AER.
System | MiSE (%) | FASE (%) | SpE (%) | AER (%)
IV (P) | 1.32 | 29.7 | 9.52 | 40.55
IV (C1) | 3.71 | 60.8 | 20.91 | 85.42
IV (C2) | 8.3 | 76.5 | 5.9 | 90.62
IV (C3) | 12.4 | 15.3 | 1.2 | 28.88
Table 16. Performance of GTTS primary and contrastive systems on the development (Dev) and test sets of TaSAC-ST. The Average Program Time-Error Metric (APTEM, in seconds) and the global mean of subtitle alignment errors (Mean-Err, in seconds) are shown.
System | APTEM (Dev) | Mean-Err (Dev) | APTEM (Test) | Mean-Err (Test)
Primary | 0.2950 | 1.2665 | 0.3950 | 4.0923
Con-1 | 0.3250 | 1.1493 | 0.3986 | 4.1990
Con-2 | 0.2950 | 0.8233 | 0.2927 | 0.6053
Con-3 | 0.3250 | 0.7950 | 0.3277 | 0.7186
Table 17. GTTS Con-2 system performance disaggregated per program: AG (Agrosfera, 10 programs), AT (Aquí la Tierra, 6 programs) and CO (Corazón, 10 programs).
Program | APTEM | Mean-Err
AG | 0.2850 | 0.5250
AT | 0.3100 | 0.9177
CO | 0.2910 | 0.6112
Table 18. Performance (time, in seconds) of the baseline and BUT contrastive systems on the TaSAC-BP development set. Optimal performance is shown too (with the optimal threshold in parentheses).
System | Rejected | Accepted | Correct | Wrong | Score
Base | 11.97 | 2410.33 | 2218.70 | 191.63 | 2027.07
Base-opt (0.11) | 10.80 | 2411.50 | 2219.50 | 192.00 | 2027.50
BUT-con | 0.00 | 2458.89 | 2282.26 | 176.63 | 2105.63
BUT-con-opt (1.00) | 109.21 | 2349.68 | 2252.09 | 97.59 | 2154.50
Table 19. Performance (time, in seconds) of the baseline and BUT primary systems on the TaSAC-BP test set. Optimal performance is shown too (with the optimal threshold in parentheses).
System | Rejected | Accepted | Correct | Wrong | Score
Base | 4.63 | 1794.82 | 1628.96 | 165.86 | 1463.10
Base-opt (0.25) | 5.85 | 1793.60 | 1628.51 | 165.09 | 1463.42
BUT-pri | 0.00 | 1820.98 | 1688.50 | 132.48 | 1556.02
BUT-pri-opt (1.00) | 72.13 | 1748.85 | 1668.58 | 80.27 | 1588.31
Table 20. Search on speech challenge results for the RTVE2022DB test dataset.
System ID | MTWV | ATWV | p(FA) | p(Miss)
Multistream CNN | 0.6496 | 0.6483 | 0.00000 | 0.346
Multistream CNN + rescoring | 0.6696 | 0.6694 | 0.00001 | 0.325
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
