Article

Gender and Accent Biases in AI-Based Tools for Spanish: A Comparative Study between Alexa and Whisper

by Eduardo Nacimiento-García *, Holi Sunya Díaz-Kaas-Nielsen and Carina S. González-González *
Women Studies Research Institute (IUEM), University of La Laguna, 38200 La Laguna, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4734; https://doi.org/10.3390/app14114734
Submission received: 21 February 2024 / Revised: 23 May 2024 / Accepted: 28 May 2024 / Published: 30 May 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Considering previous research indicating the presence of biases based on gender and accent in AI-based tools such as virtual assistants or automatic speech recognition (ASR) systems, this paper examines these potential biases in both Alexa and Whisper for the major Spanish accent groups. The Mozilla Common Voice dataset is employed for testing, and after evaluating tens of thousands of audio fragments, descriptive statistics are calculated. After analyzing the data disaggregated by gender and accent, it is observed that, for this dataset, in terms of means and medians, Alexa performs slightly better for female voices than for male voices, while the opposite is true for Whisper. However, these differences in both cases are not considered significant. In the case of accents, a higher Word Error Rate (WER) is observed among certain accents, suggesting bias based on the spoken Spanish accent.

1. Introduction

In recent years, we have witnessed a surge in the utilization of speech-recognition technologies and voice interaction [1,2]. One of the domains where voice interaction is being employed is that of virtual assistants. Prominent commercial voice assistants include Amazon Alexa, Google Assistant, and Apple’s Siri [3], with Alexa being the most prevalent, occupying approximately 70% of the market share [4].
In addition to the aforementioned commercial virtual assistants and open-source software like Home Assistant [5], a variety of automatic speech recognition (ASR) tools are available, enabling the implementation of a system that transcribes speech to text (STT) [6]. This capability can be harnessed for the development of voice interaction-based systems, such as virtual assistants. Notably, among the speech-recognition systems, Whisper [7] stands out, as its introduction has prompted similar open-source tools like Coqui STT [8] to discontinue their projects due to the improvements offered by this new tool. Whisper is free software.
From the perspective of human–device interaction through speech, it is crucial to consider several key concepts that set it apart from other forms of interaction [9]. However, in our context, we will focus on the importance of the device accurately understanding the individual in their language, dialectal variation, and accent. Furthermore, it is essential to ensure that there is no significant difference in performance when these devices are used by both females and males.
Currently, it is estimated that approximately 8.1 billion people inhabit the world [10]. Nevertheless, there is no single language that is spoken or understood by the entire global population, or even by a majority of it. According to data published by Ethnologue [11], the most widely spoken language is English, considering both native speakers and those who speak it as a second language. English is spoken by approximately 1.456 billion people, roughly 18% of the world’s population. If we consider the top 10 most spoken languages (English, Mandarin Chinese, Hindi, Spanish, Standard Arabic, Bengali, French, Russian, Portuguese, and Urdu), we cover approximately 66% of the global population, leaving out over 2.7 billion people (34%). The top 200 most spoken languages account for approximately 88% of the world’s population [11].
Access to information and human knowledge by all individuals, regardless of their language, is paramount and should be regarded as a fundamental right. Thanks to the Internet, access to a portion of information and human knowledge has become more democratic, yet much remains to be accomplished [12].
According to UNESCO, roughly 781 million people worldwide are illiterate, with approximately two thirds of them being women [13]. For this population, voice-based interaction can be a key gateway to information; hence, voice interaction technology must be capable of interpreting a wide range of languages, dialectal variations, and accents, regardless of gender.
Previous studies have revealed historical biases in automatic speech recognition (ASR) systems [14,15]. These biases hinder effective communication for certain groups of people when using voice recognition systems [16]. Some of these biases [17,18] may be attributed to cultural, social, medical, or other differences, making the gender and the dialectal variation or accent of the interacting individuals two significant sources of potential bias within ASR systems [19,20].
In this context, it is relevant to consider potential gender biases [21,22] in speech-recognition tools and virtual assistants, both in the responses provided by an assistant and in the actual voice recognition. This study will specifically focus on speech recognition.
In general terms, gender identification through voice primarily relies on fundamental frequency [23]. On average, female voices have a fundamental frequency of approximately one octave higher than male voices [24]. Fundamental frequency refers to the lowest vibration frequency of the vocal cords during sound production. Typically, female vocal cords tend to be shorter and thinner than male vocal cords, resulting in a higher fundamental frequency in female voices and a lower one in male voices [25].
As previously mentioned, in addition to the various languages, different dialects exist within languages. In our case, we will focus on the dialects of the Spanish language. One of the current classifications identifies eight dialectal regions of the Spanish language [26], with five in the Americas, two in Europe, and one in Africa. The dialectal regions encompass the following areas: America, which includes the Caribbean, Mexican-Central American, Andean, Austral, and Chilean regions; Europe, consisting of the Northern Iberian Peninsula (Septentrional) and the Southern Iberian Peninsula (Meridional); and Africa, which comprises the Canary Islands.
This research aims to ascertain whether there is any significant bias concerning gender or the main accents of the Spanish language. To achieve this, audio clips from the Common Voice 14 dataset by Mozilla [27] are analyzed using both Alexa and Whisper.
Section 2 reviews the background on biases in AI-based tools and ASR systems. Section 3 introduces the tools and datasets employed for the analyses. Section 4 presents the outcomes of the tests conducted. Section 5 discusses the obtained results. Finally, Section 6 presents the conclusions drawn from this research.

2. Background

The examination of various technologies and tools based on Artificial Intelligence (AI) has revealed that, in many instances, different types of biases exist or have existed, adversely affecting specific social groups compared to others. For instance, biases have been identified concerning membership in various ethnic groups, the use of different accents or dialectal variants, and gender, among other factors [28,29,30].
Specifically, biases have also been detected in automatic speech recognition (ASR), both based on the speaker’s gender and the accent or dialectal variant used by individuals speaking the same language [31].
Often, these biases stem from the data used to train AI-based systems. Consequently, bias resulting from such data could be readily mitigated by employing datasets that are not themselves predisposed to bias [32]. Studies related to the topic include comparisons between English accents, such as American and Indian, along with considerations of gender. In an evaluation of the DeepSpeech (STT) tool, bias was found based on accent, although no gender bias was observed in this case [33]. In another study, the transcription performed by YouTube using ASR for different English accents was examined, revealing biases in both gender and accent [31]. Similarly, a study demonstrated the existence of gender bias unfavorable to women in some ASR systems, attributed to the use of biased data in model training.
This resulted in a higher Word Error Rate (WER) for females when interacting with these systems compared to males [34]. These prior investigations confirm the need to continue research in this field to ascertain whether such biases persist in these ASR tools or if, conversely, there has been an evolution with a reduction or elimination of such biases.

3. Materials and Methods

3.1. Objectives

Building upon the considerations outlined in Section 2 (Background), the primary objective of this research is to verify the existence of gender or accent bias in automatic speech recognition (ASR) for the Spanish language when utilizing the voice-activated virtual assistant Alexa or the Whisper system.
The aim is to gather data that facilitates comparisons based on gender and the primary accents or dialectal variations in Spanish. Through this investigation, we seek to discern potential biases in the ASR systems and contribute insights into how these biases may vary concerning gender and diverse linguistic features within the Spanish language.

3.2. Inclusion and Exclusion Criteria

The analysis was conducted on a large number of audio segments sourced from the Mozilla Common Voice dataset for the Spanish language, version 14, released on 28 June 2023 [27]. This dataset comprises 1,608,353 audio fragments with their corresponding text transcriptions, making it suitable for applications such as automatic speech recognition. It encompasses 2175 recorded hours, of which 504 hours are validated, contributed by 25,261 participants. In this research, we employed the Common Voice 14 dataset in Spanish to investigate speech recognition in the Spanish language, particularly potential biases related to gender and accent.
The Common Voice dataset is subject to human validation: a segment is considered correct when it receives two positive votes and incorrect when it receives two negative votes. Some segments may also receive both positive and negative evaluations. During the data refinement process for the Common Voice dataset, we selected segments meeting the following criteria:
  • Must have at least two positive votes and no negative votes to exclude uncertain segments.
  • Segments are categorized by gender: female and male; this categorization is optional, and not all segments are labeled by gender.
  • Segments are categorized based on the speaker’s accent, as this is also an optional feature when contributing to the Mozilla Common Voice project.
Once we had obtained the subset of segments categorized by gender and accent and validated without negative votes, we analyzed the various accents available.
In this case, it should be noted that not all segments use a standardized category for accent classification. This is likely because, in the initial stage of the project, the accent field was an open-text field rather than the selection field it is today. Another filter ensured that the audio segments consisted of more than one word, as there are segments with single words such as “Yes”, “No”, or numbers, and an error in a single word would translate into an error for the entire sentence, which could significantly influence the final analysis results.
To obtain a sample with a manageable dataset, we proceeded to filter and retain categories where there were at least 500 audio segments for each accent and gender.
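The following sketch illustrates this filtering step, assuming the standard Common Voice TSV metadata layout (columns such as up_votes, down_votes, gender, accents, and sentence) and an illustrative file path; it is not the actual script used in the study.

```python
import pandas as pd

# Illustrative path to the validated-clips metadata of Common Voice 14 (Spanish).
df = pd.read_csv("cv-corpus-14.0/es/validated.tsv", sep="\t")

# Keep clips with at least two positive votes and no negative votes.
df = df[(df["up_votes"] >= 2) & (df["down_votes"] == 0)]

# Keep clips labelled with both gender and accent (optional fields in Common Voice).
df = df.dropna(subset=["gender", "accents"])
df = df[df["gender"].isin(["female", "male"])]

# Discard single-word sentences ("Sí", "No", numbers, ...).
df = df[df["sentence"].str.split().str.len() > 1]

# Retain only accent/gender groups with at least 500 clips.
group_sizes = df.groupby(["accents", "gender"])["path"].transform("size")
df = df[group_sizes >= 500]
```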
Following this final filtering, the following categories based on accent were retained:
  • Central American
  • Andean-Pacific: Colombia, Peru, Ecuador, Western Bolivia, Andean Venezuela
  • Caribbean: Cuba, Venezuela, Puerto Rico, Dominican Republic, Panama, Caribbean Colombia, Caribbean Mexico, Gulf Coast of Mexico
  • Chilean: Chile, Cuyo
  • Northern Iberian Peninsula (Asturias, Castilla y León, Cantabria, Basque Country, Aragon, La Rioja, Guadalajara, Cuenca)
  • Central-Southern Iberian Peninsula (Madrid, Toledo, Castilla-La Mancha)
  • Southern Iberian Peninsula (Andalusia, Extremadura, Murcia)
  • Canary Islands
  • Mexico
  • Rioplatense: Argentina, Uruguay, Eastern Bolivia, Paraguay
After filtering, the total dataset consisted of 202,737 audio segments, distributed as shown in Table 1. Many segments categorized with non-standardized labels were left out; the only category currently available in the Common Voice selection that was excluded is “Español de Filipinas”. The dataset contains only 23 segments for this accent, 10 from males and 13 from females, which the filtering would reduce to 10 and 5, respectively.
The final accent-based categories we retained closely align with the previously referenced dialectal classification [26], with the caveat that some regions are further subdivided here. The correspondence is as follows: the “Andean” region corresponds to our “Andean-Pacific” category, and the “Canary Islands” region to “Canary Islands”. “Caribbean” aligns with “Caribbean”, and “Chilean” with “Chilean”. The “Meridional” (Southern Peninsular) Spanish region is split into our “Southern Iberian Peninsula” and “Central-Southern Iberian Peninsula” categories, while “Septentrional” (Northern Peninsular) Spanish corresponds to “Northern Iberian Peninsula”. “Mexican-Central American” is divided into the “Central American” and “Mexico” categories. Lastly, the “Austral” region corresponds to the “Rioplatense” category.

3.3. Tools

As highlighted in Section 1, Alexa stands out as the most widely utilized voice-activated virtual assistant [4], and Whisper, in its brief existence, has revolutionized the sector to the extent of influencing established projects such as Coqui STT, leading them to discontinue their development [8]. Currently, Alexa can be regarded as the foremost benchmark for Voice-Activated Virtual Assistants, while Whisper is a crucial reference for automatic speech-recognition (ASR) systems. This underscores the rationale for subjecting the automatic speech recognition of the voice-activated virtual assistant Alexa and the voice-to-text system Whisper to a comprehensive analysis.
Alexa [35] is a voice-activated virtual assistant developed by Amazon, available on specific devices such as Amazon Echo. It is also accessible as a mobile application for both Android and iOS. Additionally, web access to a developed Skill (App) is possible through the developer console. Alexa is proprietary software.
Whisper [7], on the other hand, is an automatic speech-recognition system developed by OpenAI, licensed under the MIT license and characterized as open-source software. According to the developers, Whisper has been trained on 680,000 h of multilingual and multitasking supervised data collected from the web [36]. The Base and Large-v2 models of Whisper were chosen from among the six available at the time of the research. Large-v2 was selected for its comprehensive nature, akin to the Large model, with the added advantage of being the most recent version. The Base model was preferred over the smaller Tiny model because, as its name implies, it serves as the reference model.
In addition to these two main tools, Selenium [37] was employed during the research to automate the process in Alexa. Coqui TTS [38] generated the activation word preceding each phrase in Alexa. Finally, Python 3.10 was the programming language utilized to create various scripts for data analysis in both Alexa and Whisper.

3.4. Procedure

The data analysis procedure is straightforward, primarily involving processing an audio snippet from Common Voice using either Alexa or Whisper. Subsequently, the transcription of the audio is obtained, and this text is then compared with the actual transcription provided by Common Voice. This comparison is achieved by calculating the WER. Finally, after collecting the data, statistical calculations are performed to observe the data jointly based on model variants, gender, or accent variations.
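As an illustration of the comparison step, the snippet below uses the open-source jiwer package, which computes the WER between a reference transcription and a system output, as well as the CER, MER, WIL, and WIP measures mentioned later. Whether this particular library was used in the study is not stated, so this is only a sketch.

```python
import jiwer

reference = "hola buenos días a todos"    # transcription provided by Common Voice
hypothesis = "hola buenas días a todos"   # text returned by Alexa or Whisper

wer = jiwer.wer(reference, hypothesis)    # Word Error Rate
cer = jiwer.cer(reference, hypothesis)    # Character Error Rate
mer = jiwer.mer(reference, hypothesis)    # Match Error Rate
wil = jiwer.wil(reference, hypothesis)    # Word Information Lost
wip = jiwer.wip(reference, hypothesis)    # Word Information Preserved

print(f"WER = {wer:.2%}")                 # 1 substituted word out of 5 -> 20.00%
```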
An important point to consider is that Alexa has three Spanish variants: Mexican, US, and Spain Spanish. This requires conducting all analyses in triplicate to ensure a comprehensive evaluation. In the case of Whisper, there is only a generic model for Spanish, although there are different models of various sizes.
A Python script with Selenium [37] was created to analyze Alexa, enabling access to the Amazon Alexa developer console and interaction with the system from there. An external USB sound card was used to eliminate additional noise during sound capture. A simple Alexa Skill (application) was developed, which received an audio input and processed it to respond; using the Alexa developer console allowed the resulting text to be captured straightforwardly. It is worth noting that when developing the Alexa Skill, the generic slot type AMAZON.SearchQuery [39] was used, because an open and broad dataset was employed and more specific slot types could have influenced the speech-recognition results. The drawback of the AMAZON.SearchQuery slot type is that a triggering word must precede the phrase for Alexa to detect it. In this project, Coqui TTS [38] was used to generate an audio segment with the word “Escucha” (“Listen”) in Spanish, which preceded the segments during the tests; the word “Escucha” was later removed from the obtained texts so that it would not affect the results. One of the Common Voice project requirements is that segments consist of fewer than 15 words [27]. After various tests in which different phrases were preceded by the word “Escucha”, no interference was observed, and the trigger word was correctly detected in all tests.
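For illustration only, one possible way to generate the activation word with Coqui TTS and prepend it to a Common Voice clip is sketched below; the Spanish model name, the file names, and the use of simple audio concatenation with pydub are assumptions made for the example rather than details taken from the study, and the Selenium automation of the developer console is omitted.

```python
from TTS.api import TTS
from pydub import AudioSegment

# Generate the activation word "Escucha" once with Coqui TTS.
# The model name is illustrative; any Spanish model from the Coqui zoo could be used.
tts = TTS(model_name="tts_models/es/css10/vits")
tts.tts_to_file(text="Escucha", file_path="escucha.wav")

# Prepend the activation word (plus a short pause) to one Common Voice clip.
trigger = AudioSegment.from_file("escucha.wav")
clip = AudioSegment.from_file("common_voice_es_12345678.mp3")
combined = trigger + AudioSegment.silent(duration=300) + clip
combined.export("test_clip.wav", format="wav")
```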
For Whisper, the process was more straightforward, as the system is designed to be used from a Python program that takes an audio file and returns its transcription. Currently, six models are available for Whisper: Tiny, Base, Small, Medium, Large, and Large-v2 [7]. For the analysis, the Base and Large-v2 models were used.
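A minimal sketch of this usage with the openai-whisper Python package follows; the clip file name is illustrative.

```python
import whisper

# Load one of the released models ("base" here; "large-v2" for the larger run).
model = whisper.load_model("base")

# Transcribe a Common Voice clip, forcing Spanish decoding.
result = model.transcribe("common_voice_es_12345678.mp3", language="es")
print(result["text"])
```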
For the analysis, 56,344 segments were processed for Alexa, 202,737 for Whisper (Base), and 105,375 for Whisper (Large-v2). The difference in sample sizes is due to the time constraints of conducting the tests. The Alexa tests took approximately four weeks to complete, while the Whisper Base tests were finished in only four days, and the Whisper Large-v2 tests took approximately one week. It is worth noting that according to data provided by Whisper’s creators, the Base model is 16 times faster than the Large or Large-v2 model [7].
The samples used are listed in Table 2 and Table 3 for females and males, respectively. As shown in the tables, a sample of up to a maximum of 2150 audio segments was used in the case of Alexa. If the number is lower, it is because there were no more segments with that accent for that gender that met the established requirements. For Whisper (Base), the samples consisted of all audio segments that satisfied the specified requirements. In the case of Whisper (Large-v2), the limit was set at 11,600 audio segments. In the cases of Alexa and Whisper Large-v2, the selected segments up to the cutoff limit were chosen randomly.

4. Results

After analyzing the various samples to calculate error rates related to automatic speech recognition, such as the WER [40], we proceeded to compute various statistics: the mean, median, standard deviation, and variance. Additionally, we calculated 95% confidence intervals to verify that the results obtained were conclusive.
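As an illustration, the descriptive statistics reported in the tables can be computed as sketched below for a list of per-clip WER values; the normal-approximation half-width used for the 95% confidence interval is an assumption about the exact formula behind the “CI 95%” column.

```python
import numpy as np

def describe_wer(wers):
    """Descriptive statistics for a list of per-clip WER values (in %)."""
    wers = np.asarray(wers, dtype=float)
    return {
        "mean": wers.mean(),
        "median": np.median(wers),
        "stdev": wers.std(ddof=1),
        "variance": wers.var(ddof=1),
        # Half-width of the 95% CI for the mean (normal approximation).
        "ci95": 1.96 * wers.std(ddof=1) / np.sqrt(len(wers)),
    }
```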
A preprocessing step was performed on the character strings to obtain a WER that is as realistic as possible. This involved removing punctuation marks, exclamation marks, question marks, etc., and converting all strings to lowercase. This ensured that the WER calculation would not be affected by different interpretations between Alexa and Whisper beyond the simple transcription of the words heard.
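A minimal sketch of such a normalization step is shown below; the exact set of punctuation characters removed in the study is not enumerated, so the characters handled here are illustrative.

```python
import string

def normalize(text: str) -> str:
    """Lowercase and strip punctuation (including Spanish opening marks) before the WER."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation + "¿¡«»"))
    return " ".join(text.split())

normalize("¿Quién es, exactamente?")  # -> "quién es exactamente"
```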
Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 display the results obtained for Alexa, disaggregated by gender and Alexa variant. Table 10, Table 11, Table 12 and Table 13 present results disaggregated by gender for Whisper.
Table 14 displays, for each analyzed variant of Alexa and Whisper, the gender differences in the weighted mean WER. In our study, we employ a weighted arithmetic mean to analyze these gender differences, with the weight of each per-accent mean given by the number of audio segments available for that accent and option, thus ensuring a fair and representative assessment of the observed differences.
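For clarity, the weighting can be expressed as in the sketch below, where the example values are taken from Table 2 and Table 4 (Andean and Canary accents, Alexa MX, female voices).

```python
import numpy as np

def weighted_mean_wer(means, counts):
    """Mean of per-accent WER means, weighted by the number of audio segments."""
    means = np.asarray(means, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return float(np.sum(means * counts) / np.sum(counts))

# Two accents with 2016 and 562 clips and mean WERs of 39.83% and 35.46%.
weighted_mean_wer([39.83, 35.46], [2016, 562])  # -> approximately 38.88
```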
For better visualization of the results obtained, the following figures are presented, showing a comparison of the WER means and WER medians for females and males for each of the analyzed accents. Figure 1 illustrates the comparison of means for females. Figure 2 shows the comparison of means for males. Figure 3 and Figure 4 display comparisons of medians for females and males, respectively. Additionally, in Figure 5, the weighted mean by gender is shown for each of the analyzed variants of Alexa and Whisper.
All data and results obtained during the research are available in the project repository. These datasets include the transcribed and correct texts and the WER for each phrase. Other error measures, such as the Character Error Rate (CER), Match Error Rate (MER), Word Information Lost (WIL), and Word Information Preserved (WIP), are also included. The data are divided according to the tool used (Alexa, Whisper Base, or Whisper Large-v2) and, within these categories, further divided by gender and accent [41].

5. Discussion

After analyzing the results, it is observed that, as a general rule, the Alexa variant for U.S. Spanish, here identified as Alexa US, performs the worst among the three Alexa variants for any of the analyzed accents. In the case of the other two Alexa variants, for Mexican Spanish and Spanish from Spain, identified here as Alexa MX and Alexa ES, respectively, the comparison of means reveals a similar performance, with a slight improvement in Alexa ES compared to Alexa MX, except for the case of females from the Canary Islands, where Alexa MX performs better. In the case of median comparisons, we see virtually the same result, except for several cases with the same median for both Alexa MX and Alexa ES. Additionally, there are two cases where Alexa MX outperforms Alexa ES, and these cases are for both genders of the Canary accent, although the difference is more pronounced for females.
When comparing the results obtained with the different Alexa variants against those of Whisper (Base), Whisper generally performs significantly better than Alexa for all of the analyzed accents, with the exception of the Southern Iberian Peninsula variant spoken by males. Median WER values for Alexa are around 30%, while those for Whisper are about 15%. In general, Whisper (Base) produces roughly half the WER of Alexa for the Spanish language.
In the case of Whisper (Large-v2), it is observed that the results significantly improve compared to Whisper (Base), with a mean below 10% for both women and men. Particularly indicative are the median values of Whisper (Large-v2) for both genders, as they achieve medians of 0% for all cases except for the Spanish of the Southern Iberian Peninsula.
These data reveal a clearly identifiable outlier for Whisper, both for the Base model and the Large-v2 model, in the accent of Spanish from the southern part of the Iberian Peninsula when spoken by males. After analyzing the Common Voice 14 dataset, it is observed that thousands of contributions could be from the same person, which may have influenced the final result of the analysis for this accent.
Taking into account that the selected sample for males with the Southern Iberian Peninsula accent comprised 30,698 audio segments for Whisper (Base), 11,600 for Whisper (Large-v2), and 2150 for Alexa, it is possible that this influence did not affect the Alexa analysis as much as it did the Whisper analyses. Had the results for females also deteriorated for this accent, we would not have suspected that something was specifically affecting the male results.
When the same random sample used for Alexa is applied to Whisper (Base) for the Southern Iberian Peninsula accent, the results obtained are those shown in Table 15. Clearly, there is a bias in the dataset affecting the results: the median value for Whisper (Base) drops to 16.67%, much closer to the results obtained for females with this accent. For Whisper (Large-v2), similar results were obtained, with a median of 0.00%, which is quite similar to the results obtained for females. These data can be seen in Table 16.
If the weighted means obtained previously and shown in Table 14 and Figure 5 are recalculated using this correction for this accent, the data obtained is shown in Table 17 and Figure 6.
Let us analyze the weighted means shown in Table 17 and Figure 6. We can observe that in all cases, for the same type of Alexa variant, female voices are slightly better recognized than male voices, meaning the mean WER is lower for women than for men. On the other hand, for Whisper, the opposite is true—the mean WER is slightly lower for men than for women.
More specifically, we see that in Alexa, the difference between the mean WER for female and male voices is 3.24 pp (percentage points), 2.69 pp, and 2.88 pp, respectively, for Alexa MX, Alexa ES, and Alexa US, in favor of female voices. In the case of Whisper, these differences are 1.19 pp and 1.58 pp for the Base and Large-v2 models, respectively, but in favor of male voices. Table 18 and Figure 7 show the average data for each of the Alexa and Whisper variants, regardless of gender.
These weighted means by accent, computed for each variant of Alexa and Whisper, show that the Alexa US model performs the worst in all cases compared with the other variants. Among the remaining variants, Alexa ES is slightly better than Alexa MX, except for the Canarian accent, where the opposite holds.
It is noticeable that, in some instances, the difference in the weighted mean WER between accents is considerable. For Alexa ES, it reaches 6.76 pp when comparing the Northern and Southern Spanish accents; for Alexa MX, the difference between the same accents is as high as 7.64 pp, and for Alexa US it is 7.98 pp. Differences of this kind are also observed across the other accents.
In Whisper (Base), approximately 8 pp differences are observed between the Northern Spanish accent and the Caribbean, Mexican, and Southern Spanish accents. In Whisper (Large-v2), a difference of around 4 pp between the Canarian and Caribbean accents is noted.
When analyzing the confidence intervals (CI 95%) calculated for each of the available options, it is observed that, overall, the samples are adequately represented by these statistical measures. A wider confidence interval stands out for Whisper Large-v2 with female voices, especially for the Caribbean, Central American, and Central Spanish accents, and to a lesser extent for the Northern Spanish accent. Although these intervals span a broader range than desired, they are not large enough to invalidate this portion of the results. In the remaining cases, for both genders and all analyzed tools, the confidence intervals are below 2 percentage points, and in many cases below 1, with a slight exception for Canary Islands females in the Alexa variants, where we obtain 2.88 for MX, 3.13 for ES, and 3.49 for US. However, considering that the corresponding mean WER values are 35.46, 42.52, and 51.19, respectively, we conclude that these results remain meaningful.
In previous studies, the analysis of gender and accent bias within the same language has been explored. For instance, examining the DeepSpeech (STT) model using data from the Mozilla Common Voice project revealed bias between different English variants, specifically US and Indian English. However, the study concluded that there is no significant evidence of gender bias in the DeepSpeech model [33].
Another study that assessed YouTube’s ASR system in transcribing voice-to-text in platform-uploaded videos estimated gender bias as disadvantageous to women compared to men. It also identified accent bias among different studied variants, particularly disadvantaging the Scottish accent [31].
A separate study demonstrated that underrepresenting, for example, the female gender in the dataset used to train an ASR system results in a higher WER for that gender, indicating the presence of gender bias [34].

6. Conclusions

Based on the observed data, it can be asserted that Alexa performs better at recognizing female speech in Spanish, while, conversely, Whisper performs better for male speech. However, with a mean difference of 2.94 pp for Alexa (favoring females) and 1.39 pp for Whisper (favoring males), we consider these differences not large enough to conclude the existence of gender bias in Alexa and Whisper for the Spanish language, or at least not a bias that significantly influences the everyday functionality of these tools. This small difference could be attributed to sampling error, especially considering the WER levels observed for both Alexa and Whisper.
More significant differences are observed regarding accents, reaching up to 8 pp in some cases. As a general rule for Alexa, there appears to be a bias in favor of the Northern Spanish accent and primarily against the Southern Spanish accent, as well as against other accents such as Caribbean, Central American, and Canarian. For Whisper (Base), there seems to be a potential bias in favor of the northern accent compared to the southern accent of Spain. In the case of Whisper (Large-v2), this bias is mainly in favor of the Canarian accent as opposed to the Caribbean accent.
An interesting fact regarding the data obtained with Whisper (Large-v2) is that the Canarian accent is precisely the most accurately recognized, despite being the dialectal variant with the fewest speakers: the Canary Islands have just under 2.2 million inhabitants [42], compared with the nearly 500 million people who have Spanish as their native language [43].
Concerning the three Alexa models tailored for Mexican Spanish, Spanish from Spain, and Spanish from the United States, there appears to be no compelling reason to maintain them as separate models for the Spanish language. It would be preferable to have a single optimized model, similar to the approach taken by Whisper. If the goal is then to enable Alexa to speak in different dialects, a TTS system trained specifically for each accent in question would be more suitable.
One of the fundamental limitations of this project arises from the fact that the audio files included in the Common Voice dataset are derived from segments of read texts, in contrast to the natural way of interacting with a voice-activated virtual assistant, which typically involves conversations. Another limitation is that the number of available audio fragments is not comparable between those read by males and females, and a similar issue exists with the distribution across different accents.
Concerning future research directions, an obvious avenue would be to extend the study to other languages and their respective dialects, enabling a more comprehensive comparison. Another potential research extension could involve a specific comparison for Alexa, examining the speech-recognition error rate when using a set of predefined words versus a set of non-predefined words, as addressed in this study. In many cases, Alexa Skills employ predefined sets of words to facilitate the interaction flow.
Analyzing automatic speech recognition with other datasets and for the various languages available for each system would enable us to determine whether progress is genuinely being made in eliminating gender bias in ASR systems or if, in this case, these results are specific to the Spanish language and these two particular tools.

Author Contributions

Conceptualization, methodology, formal analysis, data curation, E.N.-G.; writing—original draft preparation, E.N.-G.; writing—review and editing, H.S.D.-K.-N. and C.S.G.-G.; supervision C.S.G.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was co-financed by the Canary Islands Agency for Research, Innovation, and Information Society of the Ministry of Economy, Knowledge and Employment and by the European Social Fund (ESF) Integrated Operational Program of the Canary Islands 2014–2020, Axis 3 Priority Topic 74 (85%). This work has been supported partially by the PERGAMEX ACTIVE project, Ref. RTI2018-096986-B-C32, funded by the Ministry of Science and Innovation, Spain. This work has been supported partially by the PLEISAR-Social project, Ref. PID2022-136779OB-C33, funded by the Ministry of Science and Innovation, Spain. This work has been partially supported by the COEDUIN project, Ref. 2020EDU08, funded by Fundación Caja Canarias and Fundación La Caixa.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.10152506, reference number 10.5281/zenodo.10152506.

Acknowledgments

We want to thank the Canary Islands Agency for Research, Innovation, and Information Society of the Ministry of Economy, Knowledge and Employment and the European Social Fund (ESF) Integrated Operational Program of the Canary Islands 2014–2020, Axis 3 Priority Topic 74 (85%); the PERGAMEX ACTIVE project, Ref. RTI2018-096986-B-C32, funded by the Ministry of Science and Innovation, Spain; the PLEISAR-Social project, Ref. PID2022-136779OB-C33, funded by the Ministry of Science and Innovation, Spain; and the COEDUIN project, Ref. 2020EDU08, funded by Fundación Caja Canarias and Fundación La Caixa, for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Beirl, D.; Yuill, N.; Rogers, Y. Using Voice Assistant Skills in Family Life. June 2019. Available online: https://repository.isls.org//handle/1/1750 (accessed on 16 November 2023).
  2. Porcheron, M.; Fischer, J.E.; Reeves, S.; Sharples, S. Voice Interfaces in Everyday Life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–12. [Google Scholar] [CrossRef]
  3. Këpuska, V.; Bohouta, G. Next-Generation of Virtual Personal Assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 99–103. [Google Scholar] [CrossRef]
  4. Ford, M.; Palmer, W. Alexa, Are You Listening to Me? An Analysis of Alexa Voice Service Network Traffic. Pers. Ubiquitous Comput. 2019, 23, 67–79. [Google Scholar] [CrossRef]
  5. Home Assistant. Available online: https://www.home-assistant.io/ (accessed on 16 November 2023).
  6. Vásconez, J.J.P.; Ortiz, C.A.N.; Cordero, M.P.O.; León, P.A.P.; Orellana, P.C. Evaluación del reconocimiento de voz entre los servicios de Google y Amazon aplicado al Sistema Integrado de Seguridad ECU 911. Revista Tecnológica ESPOL 2021, 33, 2. [Google Scholar] [CrossRef]
  7. Whisper. Python. 2022. Reprint, OpenAI. Available online: https://github.com/openai/whisper (accessed on 16 November 2023).
  8. Coqui-Ai/STT: STT—The Deep Learning Toolkit for Speech-to-Text. Training and Deploying STT Models Has Never Been So Easy. Available online: https://github.com/coqui-ai/STT (accessed on 16 November 2023).
  9. Seaborn, K.; Urakami, J. Measuring Voice UX Quantitatively: A Rapid Review. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21), Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–8. [Google Scholar] [CrossRef]
  10. Worldometer. Worldometer—Real Time World Statistics. Available online: http://www.worldometers.info/ (accessed on 16 November 2023).
  11. Ethnologue (Free Dev). What Are the Top 200 Most Spoken Languages? Available online: https://www.ethnologue.com/insights/ethnologue200/ (accessed on 16 November 2023).
  12. Aguirre, A.; Manasía, N. Derechos humanos de cuarta generación: Inclusión social y democratización del conocimiento. Télématique 2015, 14, 2–16. [Google Scholar]
  13. UNESCO. Education for All 2000–2015: Achievements and Challenges | Global Education Monitoring Report. Available online: https://www.unesco.org/gem-report/en/efa-achievements-challenges (accessed on 19 April 2022).
  14. Costa-jussà, M.R.; Basta, C.; Gállego, G.I. Evaluating Gender Bias in Speech Translation. arXiv 2022, arXiv:2010.14465. [Google Scholar]
  15. Reid, K.; Williams, E.T. Common Voice and accent choice: Data contributors self-describe their spoken accents in diverse ways. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23), Boston, MA, USA, 30 October–1 November 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–10. [Google Scholar] [CrossRef]
  16. Ngueajio, M.K.; Washington, G. Hey ASR System! Why Aren’t You More Inclusive? In HCI International 2022—Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence; Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2022; pp. 421–440. [Google Scholar] [CrossRef]
  17. Feng, S.; Kudina, O.; Halpern, B.M.; Scharenborg, O. Quantifying Bias in Automatic Speech Recognition. arXiv 2021, arXiv:2103.15122. [Google Scholar]
  18. Markl, N. Language Variation, Automatic Speech Recognition and Algorithmic Bias. Ph.D. Thesis, The University of Edinburgh, Edinburgh, UK, 2023. [Google Scholar] [CrossRef]
  19. Wassink, A.B.; Gansen, C.; Bartholomew, I. Uneven success: Automatic speech recognition and ethnicity-related dialects. Speech Commun. 2022, 140, 50–70. [Google Scholar] [CrossRef]
  20. Markl, N. Language variation and algorithmic bias: Understanding algorithmic bias in British English automatic speech recognition. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), Seoul, Republic of Korea, 21–24 June 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 521–534. [Google Scholar] [CrossRef]
  21. Vorvoreanu, M.; Zhang, L.; Huang, Y.-H.; Hilderbrand, C.; Steine-Hanson, Z.; Burnett, M. From Gender Biases to Gender-Inclusive Design: An Empirical Investigation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–14. [Google Scholar] [CrossRef]
  22. Breslin, S.; Wadhwa, B. Gender and Human-Computer Interaction. In The Wiley Handbook of Human Computer Interaction; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2018; pp. 71–87. [Google Scholar] [CrossRef]
  23. Mulas, C.M. Speech Signals Feature Extraction Model for a Speaker’s Gender and Age Identification System. Ph.D. Thesis, E.T.S. de Ingenieros Informáticos (UPM), Madrid, Spain, 2014. Available online: https://oa.upm.es/33121/ (accessed on 28 March 2024).
  24. Latinus, M.; Taylor, M.J. Discriminating Male and Female Voices: Differentiating Pitch and Gender. Brain Topogr 2012, 25, 194–204. [Google Scholar] [CrossRef] [PubMed]
  25. Titze, I.R. Physiologic and acoustic differences between male and female voices. J. Acoust. Soc. Am. 1989, 85, 1699–1707. [Google Scholar] [CrossRef] [PubMed]
  26. Chela-Flores, G. La División Dialectal Del Español. In Dialectología Hispánica/The Routledge Handbook of Spanish Dialectology; Routledge: Oxford, UK, 2022. [Google Scholar]
  27. Mozilla. Mozilla Common Voice. Available online: https://commonvoice.mozilla.org/ (accessed on 16 November 2023).
  28. Bias in AI: What It Is, Types, Examples & 6 Ways to Fix It in 2023. Available online: https://research.aimultiple.com/ai-bias/ (accessed on 14 November 2023).
  29. De Oliveira, C.B.; Amaral, M.A. A discourse analysis of interactions with Alexa virtual assistant showing reproductions of gender bias. Clepsydra. Rev. Int. De Estud. De Género Y Teoría Fem. 2022, 23, 37–58. [Google Scholar] [CrossRef]
  30. Oliveira, C.B.; Amaral, M.A. An Analysis of the Reproduction of Gender Bias in the Speech of Alexa Virtual Assistant. In Proceedings of the XIII Congress of Latin American Women in Computing, San José, Costa Rica, 25–29 October 2021. [Google Scholar]
  31. Tatman, R. Gender and Dialect Bias in YouTube’s Automatic Captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, Valencia, Spain, 4 April 2017; Association for Computational Linguistics: Valencia, Spain, 2017; pp. 53–59. [Google Scholar] [CrossRef]
  32. Sun, T.; Gaut, A.; Tang, S.; Huang, Y.; ElSherief, M.; Zhao, J.; Mirza, D.; Belding, E.; Chang, K.-W.; Wang, W.Y. Mitigating Gender Bias in Natural Language Processing: Literature Review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; Korhonen, A., Traum, D., Màrquez, L., Eds.; Association for Computational Linguistics: Florence, Italy, 2019; pp. 1630–1640. [Google Scholar] [CrossRef]
  33. Meyer, J.; Rauchenstein, L.; Eisenberg, J.D.; Howell, N. Artie Bias Corpus: An Open Dataset for Detecting Demographic Bias in Speech Applications. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; European Language Resources Association: Marseille, France, 2020; pp. 6462–6468. Available online: https://aclanthology.org/2020.lrec-1.796 (accessed on 3 July 2023).
  34. Garnerin, M.; Rossato, S.; Besacier, L. Investigating the Impact of Gender Representation in ASR Training Data: A Case Study on Librispeech; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 86–92. [Google Scholar] [CrossRef]
  35. Amazon Alexa. Amazon Alexa Voice AI | Alexa Developer Official Site. Available online: https://developer.amazon.com/enUS/alexa.html (accessed on 16 November 2023).
  36. Introducing Whisper. Available online: https://openai.com/research/whisper (accessed on 16 November 2023).
  37. Selenium. Available online: https://www.selenium.dev/ (accessed on 16 November 2023).
  38. Eren, G.; The Coqui TTS Team. Coqui TTS. Python. January 2021. [Google Scholar] [CrossRef]
  39. Amazon Alexa. Slot Type Reference | Alexa Skills Kit. Available online: https://developer.amazon.com/en-US/docs/alexa/customskills/slot-type-reference.html (accessed on 16 November 2023).
  40. Morris, A.C.; Maier, V.; Green, P. From WER and RIL to MER and WIL: Improved Evaluation Measures for Connected Speech Recognition. In Proceedings of the Interspeech, Jeju Island, Republic of Korea, 4–8 October 2004; pp. 2765–2768. [Google Scholar] [CrossRef]
  41. Nacimiento-Garcia, E. Menceybencomo/Comparative-Analysis-of-Gender-and-Accent-Biases-in-Alexa-and-Whisper-forthe-Spanish-Language: Paper Version. [CrossRef]
  42. Instituto Canario de Estadística. Demografía. Available online: https://www.gobiernodecanarias.org/istac/estadisticas/demografia/index.html (accessed on 16 November 2023).
  43. Centro Virtual Cervantes. CVC. Anuario 2023. Informe 2023. El Español en Cifras. Instituto Cervantes. Available online: https://cvc.cervantes.es/lengua/anuario/anuario_23/informes_ic/p01.htm (accessed on 16 November 2023).
Figure 1. Comparison of WER means for females.
Figure 2. Comparison of WER means for males.
Figure 3. Comparison of WER medians for females.
Figure 4. Comparison of WER medians for males.
Figure 5. Comparison of weighted mean WER.
Figure 6. Comparison of weighted mean WER by gender.
Figure 7. Comparison of weighted mean WER by accent.
Table 1. Number of segments by gender and accent.
Accent | Female | Male
Central American | 1501 | 3908
Andean-Pacific | 2016 | 11,493
Caribbean | 1537 | 6041
Chilean | 1268 | 3939
Spain Northern | 3610 | 31,406
Spain Central | 1958 | 6930
Spain Southern | 1456 | 30,698
Canary Islands | 562 | 1482
Mexico | 34,639 | 47,020
Rioplatense | 2914 | 8359
Table 2. Sample sizes used for females.
Accent | Alexa | Whisper Base | Whisper Large-v2
Andean | 2016 | 2016 | 2016
Canary | 562 | 562 | 562
Caribbean | 1537 | 1537 | 1537
Central American | 1501 | 1501 | 1501
Chilean | 1268 | 1268 | 1268
Spain Central | 1958 | 1958 | 1958
Spain Northern | 2150 | 3610 | 3610
Spain Southern | 1456 | 1456 | 1456
Mexican | 2150 | 34,639 | 11,600
Rioplatense | 2150 | 2914 | 2915
Total | 16,748 | 51,461 | 28,423
Table 3. Sample sizes used for males.
Accent | Alexa | Whisper Base | Whisper Large-v2
Andean | 2150 | 11,493 | 11,493
Canary | 1482 | 1482 | 1482
Caribbean | 2150 | 6041 | 6041
Central American | 2150 | 3908 | 3908
Chilean | 2150 | 3939 | 3939
Spain Central | 2150 | 6930 | 6930
Spain Northern | 2150 | 31,406 | 11,600
Spain Southern | 2150 | 30,698 | 11,600
Mexican | 2150 | 47,020 | 11,600
Rioplatense | 2150 | 8359 | 8359
Total | 20,832 | 151,276 | 76,952
Table 4. WER (%): Alexa, MX, Female.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 39.83 | 28.57 | 36.02 | 12.98 | 1.57
Canary | 35.46 | 25.00 | 34.71 | 12.05 | 2.88
Caribbean | 42.43 | 33.33 | 37.01 | 13.70 | 1.85
Central American | 42.45 | 30.77 | 36.81 | 13.55 | 1.86
Chilean | 42.12 | 33.33 | 36.35 | 13.21 | 2.00
Spain Central | 35.76 | 22.22 | 36.21 | 13.11 | 1.60
Spain Northern | 34.32 | 22.22 | 35.56 | 12.65 | 1.50
Spain Southern | 42.90 | 33.33 | 35.96 | 12.93 | 1.85
Mexican | 38.29 | 25.00 | 36.38 | 13.24 | 1.54
Rioplatense | 40.51 | 28.57 | 36.66 | 13.44 | 1.55
Table 5. WER (%): Alexa, MX, Male.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 43.37 | 30.77 | 37.90 | 14.37 | 1.60
Canary | 41.39 | 30.00 | 34.78 | 12.10 | 1.77
Caribbean | 41.53 | 27.27 | 38.82 | 15.07 | 1.64
Central American | 43.02 | 30.77 | 37.33 | 13.93 | 1.58
Chilean | 40.47 | 27.27 | 37.45 | 14.03 | 1.58
Spain Central | 41.94 | 28.57 | 38.53 | 14.84 | 1.63
Spain Northern | 41.37 | 27.92 | 38.71 | 14.99 | 1.64
Spain Southern | 47.25 | 35.71 | 39.53 | 15.62 | 1.67
Mexican | 44.71 | 30.77 | 39.33 | 15.47 | 1.66
Rioplatense | 40.01 | 27.27 | 37.72 | 14.23 | 1.60
Table 6. WER (%): Alexa, ES, Female.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 36.72 | 25.00 | 33.38 | 11.14 | 1.46
Canary | 42.52 | 30.00 | 37.77 | 14.27 | 3.13
Caribbean | 40.58 | 30.77 | 35.51 | 12.61 | 1.78
Central American | 39.64 | 28.57 | 34.46 | 11.88 | 1.74
Chilean | 41.15 | 30.77 | 35.60 | 12.67 | 1.96
Spain Central | 33.42 | 22.22 | 33.54 | 11.25 | 1.49
Spain Northern | 31.70 | 22.22 | 31.95 | 10.21 | 1.35
Spain Southern | 40.50 | 33.33 | 33.32 | 11.11 | 1.71
Mexican | 34.07 | 23.08 | 33.42 | 11.17 | 1.41
Rioplatense | 37.45 | 28.57 | 33.80 | 11.43 | 1.43
Table 7. WER (%): Alexa, ES, Male.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 40.06 | 30.00 | 35.47 | 12.58 | 1.50
Canary | 40.82 | 30.77 | 32.69 | 10.69 | 1.67
Caribbean | 38.79 | 27.27 | 36.49 | 13.32 | 1.54
Central American | 41.96 | 30.77 | 35.06 | 12.30 | 1.48
Chilean | 39.53 | 27.27 | 35.85 | 12.85 | 1.52
Spain Central | 39.46 | 27.27 | 36.19 | 13.10 | 1.53
Spain Northern | 37.92 | 25.00 | 36.20 | 13.10 | 1.53
Spain Southern | 42.30 | 30.77 | 36.48 | 13.31 | 1.54
Mexican | 38.44 | 26.14 | 36.17 | 13.09 | 1.53
Rioplatense | 37.18 | 25.00 | 34.62 | 11.98 | 1.46
Table 8. WER (%): Alexa, US, Female.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 44.06 | 28.57 | 39.53 | 15.62 | 1.73
Canary | 51.19 | 39.23 | 42.14 | 17.76 | 3.49
Caribbean | 49.33 | 37.50 | 40.39 | 16.31 | 2.02
Central American | 46.79 | 33.33 | 40.55 | 16.44 | 2.05
Chilean | 49.46 | 40.00 | 40.01 | 16.00 | 2.20
Spain Central | 40.33 | 23.08 | 40.01 | 16.00 | 1.77
Spain Northern | 39.17 | 23.08 | 39.47 | 15.58 | 1.67
Spain Southern | 48.85 | 37.50 | 39.68 | 15.75 | 2.04
Mexican | 43.00 | 27.27 | 40.09 | 16.07 | 1.70
Rioplatense | 44.55 | 30.77 | 39.66 | 15.73 | 1.68
Table 9. WER (%): Alexa, US, Male.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 47.91 | 35.71 | 40.82 | 16.66 | 0.75
Canary | 48.85 | 36.36 | 39.49 | 15.59 | 2.01
Caribbean | 47.01 | 33.33 | 41.58 | 17.29 | 1.05
Central American | 48.37 | 36.36 | 40.06 | 16.05 | 1.26
Chilean | 46.73 | 33.33 | 40.76 | 16.62 | 1.27
Spain Central | 47.26 | 33.33 | 41.31 | 17.06 | 0.97
Spain Northern | 46.01 | 30.77 | 41.22 | 16.99 | 0.46
Spain Southern | 51.73 | 40.00 | 41.34 | 17.09 | 0.46
Mexican | 48.93 | 33.33 | 42.05 | 17.68 | 0.38
Rioplatense | 43.58 | 28.57 | 39.96 | 15.97 | 0.86
Table 10. WER (%): Whisper (Base), Female.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 25.72 | 15.38 | 38.38 | 14.73 | 1.68
Canary | 18.08 | 10.00 | 25.23 | 6.37 | 2.09
Caribbean | 26.05 | 16.67 | 32.66 | 10.66 | 1.63
Central American | 22.34 | 14.29 | 27.39 | 7.50 | 1.39
Chilean | 24.71 | 16.67 | 35.61 | 12.68 | 1.96
Spain Central | 20.85 | 12.50 | 39.46 | 15.57 | 1.75
Spain Northern | 21.69 | 14.29 | 32.98 | 10.87 | 1.08
Spain Southern | 25.52 | 16.67 | 33.79 | 11.42 | 1.74
Mexican | 25.67 | 18.18 | 30.81 | 9.49 | 0.32
Rioplatense | 21.18 | 14.29 | 38.43 | 14.77 | 1.40
Table 11. WER (%): Whisper (Base), Male.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 22.71 | 14.29 | 53.11 | 28.21 | 0.97
Canary | 21.78 | 16.67 | 23.24 | 5.40 | 1.18
Caribbean | 26.46 | 16.67 | 37.24 | 13.87 | 0.94
Central American | 20.70 | 14.29 | 28.99 | 8.40 | 0.91
Chilean | 21.06 | 14.29 | 27.09 | 7.34 | 0.85
Spain Central | 21.02 | 14.29 | 27.77 | 7.71 | 0.65
Spain Northern | 17.95 | 10.00 | 37.28 | 13.90 | 0.41
Spain Southern | 64.10 | 57.14 | 63.37 | 40.16 | 0.71
Mexican | 26.35 | 16.67 | 45.14 | 20.37 | 0.41
Rioplatense | 20.75 | 14.29 | 26.95 | 7.26 | 0.58
Table 12. WER (%): Whisper (Large-v2), Female.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 10.71 | 0.00 | 42.35 | 17.94 | 1.85
Canary | 6.98 | 0.00 | 16.46 | 2.71 | 1.36
Caribbean | 12.52 | 0.00 | 130.09 | 169.22 | 6.51
Central American | 11.71 | 0.00 | 134.44 | 180.75 | 6.81
Chilean | 8.23 | 0.00 | 19.82 | 3.93 | 1.09
Spain Central | 15.08 | 0.00 | 222.53 | 495.22 | 9.86
Spain Northern | 8.61 | 0.00 | 109.80 | 120.57 | 3.58
Spain Southern | 8.67 | 0.00 | 22.04 | 4.86 | 1.13
Mexican | 8.38 | 0.00 | 36.90 | 13.62 | 0.67
Rioplatense | 8.37 | 0.00 | 54.07 | 29.24 | 1.96
Table 13. WER (%): Whisper (Large-v2), Male.
Accent | Mean | Median | Stdev | Variance | CI 95%
Andean | 7.15 | 0.00 | 19.35 | 3.74 | 0.35
Canary | 5.87 | 0.00 | 11.79 | 1.39 | 0.60
Caribbean | 9.44 | 0.00 | 57.86 | 33.47 | 1.46
Central American | 7.01 | 0.00 | 35.94 | 12.92 | 1.13
Chilean | 7.17 | 0.00 | 18.06 | 3.26 | 0.56
Spain Central | 6.60 | 0.00 | 30.57 | 9.34 | 0.72
Spain Northern | 7.57 | 0.00 | 42.29 | 17.89 | 0.77
Spain Southern | 21.44 | 8.33 | 34.54 | 11.93 | 0.63
Mexican | 9.18 | 0.00 | 24.12 | 5.82 | 0.44
Rioplatense | 6.45 | 0.00 | 19.59 | 3.84 | 0.42
Table 14. Weighted mean WER (%) by gender.
Gender | Alexa MX | Alexa ES | Alexa US | Whisper Base | Whisper Large-v2
Female | 39.30 | 36.92 | 44.72 | 24.76 | 9.42
Male | 42.54 | 39.61 | 47.60 | 31.11 | 9.70
Table 15. Whisper (Base): South of Spain.
Gender | Mean | Median | Stdev | Variance | CI 95%
Female | 25.52 | 16.67 | 33.79 | 11.42 | 1.74
Male (2150) | 26.92 | 16.67 | 49.60 | 24.60 | 2.10
Table 16. Whisper (Large-v2): South of Spain.
Gender | Mean | Median | Stdev | Variance | CI 95%
Female | 8.67 | 0.00 | 22.04 | 4.86 | 1.13
Male (2150) | 9.10 | 0.00 | 19.62 | 3.85 | 0.83
Table 17. Weighted mean WER (%) by gender.
Gender | Alexa MX | Alexa ES | Alexa US | Whisper Base | Whisper Large-v2
Female | 39.30 | 36.92 | 44.72 | 24.76 | 9.42
Male | 42.54 | 39.61 | 47.60 | 23.57 | 7.84
Table 18. Weighted mean WER (%) by accent.
Accent | Alexa MX | Alexa ES | Alexa US | Whisper Base | Whisper Large-v2
Andean | 41.66 | 38.44 | 46.05 | 23.16 | 7.68
Canary | 39.76 | 41.29 | 49.49 | 20.76 | 6.18
Caribbean | 41.91 | 39.54 | 47.98 | 26.38 | 10.06
Central American | 42.79 | 41.01 | 47.72 | 21.16 | 8.31
Chilean | 41.08 | 40.13 | 47.74 | 21.95 | 7.43
Spain Central | 38.99 | 36.58 | 43.96 | 20.98 | 8.47
Spain Northern | 37.85 | 34.81 | 42.59 | 18.34 | 7.82
Spain Southern | 45.49 | 41.57 | 50.57 | 26.86 | 9.05
Mexican | 41.50 | 36.26 | 45.97 | 26.06 | 8.78
Rioplatense | 40.26 | 37.32 | 44.07 | 20.86 | 6.95
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
