Article

Perpetuation of Gender Bias in Visual Representation of Professions in the Generative AI Tools DALL·E and Bing Image Creator

by Teresa Sandoval-Martin * and Ester Martínez-Sanzo
Communication Department, University Carlos III of Madrid, 28903 Madrid, Spain
* Author to whom correspondence should be addressed.
Soc. Sci. 2024, 13(5), 250; https://doi.org/10.3390/socsci13050250
Submission received: 29 February 2024 / Revised: 23 April 2024 / Accepted: 26 April 2024 / Published: 2 May 2024

Abstract

Artificial intelligence (AI)-based generative imaging systems such as DALL·E, Midjourney, Stable Diffusion, and Adobe Firefly, which transform natural language descriptions into images, are revolutionizing computer vision. In this exploratory and qualitative research, we replicated requests for images of women in different professions made in previous studies with DALL·E and compared the representations, observing that the latest version, DALL·E 3, continues to produce inequitable results in terms of gender. In addition, Bing Image Creator, Microsoft's free and widely used tool that runs on DALL·E, has been tested for the first time; it also sexualizes women and produces stereotypical representations of children. The results reveal the following: 1. A slight improvement in the presence of women in professions previously shown only with men. 2. The tools continue to offer biased results, objectifying women through sexualized depictions. 3. The representation of children reveals another level of gender bias, reinforcing traditional stereotypes associated with gender roles from childhood, which can influence future decisions regarding studies and occupations.

1. Introduction

Digital transformation has significantly influenced the evolution of photography in the 21st century, from the creative power of Photoshop to the latest generative artificial intelligence models. Since OpenAI launched ChatGPT in 2022, new applications have proliferated, with Stable Diffusion, MidJourney, DALL·E 3, Adobe Firefly, Lexica Art, Dream Studio, and Bing Image Creator being the best known. Today's generative AI systems can produce a multitude of images with such a high degree of verisimilitude that it is virtually impossible to distinguish them from photographs taken in the real world. In the era of post-truth and disinformation, and given the ease with which deepfakes can be created (Gómez-de-Ágreda et al. 2021), it is questionable whether photography can continue to serve as truthful evidence of the events narrated and of the image itself, but this is not the only risk or source of controversy. The racial and gender biases (Buolamwini and Gebru 2018) present in AI outputs, including those of ChatGPT (Acerbi and Stubbersfield 2023; Lucy and Bamman 2021), and disputes over the attribution of intellectual property (Watercutter 2023; Ashby 2023) are generating important debates in Western societies, especially where the film industry feels most threatened by the creations of these new systems (Broderick 2023; Schomer 2023).
Artificial intelligence (AI) is a set of techniques in which algorithms discover or learn associations by making predictions from large amounts of data, the algorithm being the procedure that solves the problem. However, the results provided by these models may infringe on human rights. Among the causes are the gender biases they present, which threaten the achievements made in terms of equality between men and women (Sandoval-Martín et al. 2021), a right that is indispensable for the enjoyment of other rights. The United Nations Sustainable Development Goal 5 (SDG 5), entitled "Achieve gender equality and empower all women and girls", pursues non-discrimination against all women and girls worldwide by 2030. In this research, we consider that the approach taken to the use of AI is a crucial challenge for achieving this goal.
The first fields where AI biases were detected were healthcare (Panch et al. 2019) and computational linguistics (Natural Language Processing, NLP). In fact, the extensive application of NLP in voice assistants and machine translation has made visible the gender biases of algorithms and of the databases that feed them (Sun et al. 2019; Shrestha and Das 2022); this is currently the most visible form of these biases. Nevertheless, today numerous scientific and civil society initiatives are dedicated to making visible the serious problem of gender bias in AI (Dobreva et al. 2023).
With the advent of the next AI innovation, generative artificial intelligence (led by the company OpenAI), some studies have found—as we identify in Section 3—that it may increase gender inequalities in relation to women’s representation in the workplace.
These applications can create text, images, sound, and video from text, images, or audio provided by users, but the models have learned from the wealth of text and images available in open databases, in which centuries of inequality are implicit.
This research aims to identify whether the latest versions of these generative tools show an improvement in the extent of gender-biased representations, as several studies have already documented a significant degree of stereotyping. Comparing the representation of professions with previous research allows us to arrive at new findings. A slight improvement is noted in the representation of men and women in occupations, although when exploring the representation of gender in childhood, given its importance for future occupational choices, notable stereotyping is observed. This is the first time that this comparative approach has been adopted in relation to the representation of professions by generative AIs.
With this research, we also provide a literature review on the representation of women in several occupations by generative AIs (Section 3.2). We further address a new research front in this field, one directly related to the choice of profession and for which fewer studies were found in the state of the art: the representation of boys and girls by these generative AIs.
This research is framed primarily in gender studies and, more generally, in Science, Technology, and Society studies, in which we consider photography as a boundary object between professionals, programmers, designers, and the other actors now involved in the production of AI-generated images, all of whom need to take the gender perspective into account.
This research aims to help fill the gap in published articles on this problem, which has serious social consequences, adding to previous studies and to initiatives by international organizations and non-profit organizations. In addition, we intend to fill part of the scientific gap concerning technological practices that are developed without internal or external control but that seriously threaten women's progress towards equality.

2. Materials and Methods

2.1. Procedure

To achieve the general objective of this research, namely to identify whether the latest version of DALL·E and the popular application Bing Image Creator show an improvement in the extent of gender-biased representations of professions, we started with a search of the WoS and Scopus databases, complemented with Google Scholar and expanded with work by prominent authors (Codina et al. 2022), in order to achieve the following:
  • Identify previous studies about gender bias in the representations of professions by generative AI systems.
  • Identify those tools that had been investigated from this perspective.
  • Identify those that, although very popular, had not yet been investigated.
After this, we conducted a search of previous articles to identify relevant scientific literature on gender bias in AI-based image generation systems, especially in relation to occupations. Keywords used for the search in article titles, abstracts, and keywords included "gender bias", "artificial intelligence", "image generative systems", "DALL·E", "Midjourney", "Stable Diffusion", "Adobe Firefly", "Bing Image Creator", and "occupation", with a reference period limited to January 2023 to February 2024. The research focused on empirical studies of gender inequalities in the field of AI image generation. The literature exploration also shed light on the main research trends concerning the objectives of our study, providing a basis for the exploratory analysis and the subsequent discussion. Likewise, given the novelty of the topic, non-peer-reviewed studies, news items, and the websites of companies and institutions working towards bias-free AI were taken into account.
This search yielded a few articles about gender bias in generative AI image systems in relation to several occupations. As two of them analyzed DALL·E in relation to several occupations (Table 1), we chose this tool to partially replicate one of them and check whether the biases previously found in the representations of men and women in the most stereotyped occupations had decreased. In the case of DALL·E and Image Creator, which is part of Microsoft Designer, the AI images generated with DALL·E 3 through the Bing search engine allow for a comparative approach to this type of tool, contrasting the results of the present study, which uses the latest available version, directly with the results of previous studies that used earlier versions.
The study by García-Ull and Melero-Lázaro (2023) was chosen because of its breadth, although fewer images could be collected for the sample of the most stereotyped professions because the latest version, DALL·E 3, returns four images per instruction (prompt) instead of the nine returned by the previous version, DALL·E 2.
We subsequently carried out an exploratory, non-representative study with DALL·E, specifically with the latest version available at the time of writing (DALL·E 3), as well as with Bing Image Creator, the Bing tool that also runs on DALL·E 3, chosen because of its worldwide reach through Microsoft's search engine, Bing.
The method followed for this analysis was based on that which was applied in previous studies, such as the one by García-Ull and Melero-Lázaro (2023), which considers the 37 professions identified as stereotyped by Farago et al. (2021). The research by García-Ull and Melero-Lázaro (2023) evidences that AI not only reproduces the gender stereotypes in the workplace demonstrated in previous experiments with humans, but also reinforces and increases this stereotyping, which can be attributed to training data.

2.2. Materials

After locating previous studies, we identified the tool that had been most extensively researched from this perspective: DALL·E. DALL·E 3, released by OpenAI in 2023, is the latest version of DALL·E (its predecessor, DALL·E 2, was introduced in April 2022) and can be used on a pay-per-use basis. We also included Bing Image Creator in this exploratory analysis; no research has yet been carried out on this tool despite its popularity and the fact that it is free. For these reasons, it was decided that the experiments should be carried out with these two applications (Table 2).
Bing Image Creator is a free-access tool, unlike other well-known platforms such as MidJourney and DALL·E 3. Its free availability makes it more accessible to the general public, so it could have a greater influence on individual and collective perceptions. In particular, the free nature of Bing Image Creator can play a determining role in the formation of impressions and beliefs, underscoring the importance of examining its impact on the construction of professional images based on gender.

2.3. Data Analysis

Several tests were carried out with the most stereotyped professions (see Section 3.3 and Section 3.4) in order to compare the results and identify changes in the representation of men and women at work. A total of 68 images were obtained with DALL·E 3 and 36 with Bing Image Creator. For each prompt (instruction), these applications returned four images, the maximum number offered in February 2024, when the experiment was carried out. The images are shown in Section 3, and two coders took part in the data analysis.
We formulated queries using terms associated with professions, focusing on prompts for male and female professionals, following the research of García-Ull and Melero-Lázaro (2023). As mentioned in the previous section, these authors drew on the gender stereotypes identified by Farago et al. (2021), who in turn followed studies in this consolidated area by Gottfredson (1981) and Teig and Susskind (2008), which had identified professions in which stereotypes are especially present. In the case of DALL·E 3, we used 17 of the 37 terms used by García-Ull and Melero-Lázaro (2023), coinciding with highly stereotyped professions such as the following: secretary; hotel manager; tailor; singer; teacher (secondary); teacher (primary); maid; nurse; doctor; politician; mechanic; airplane pilot; taxi driver; and carpenter; in addition, for leadership positions, we adapted several terms for CEO. In the case of Bing Image Creator, we analyzed the results for the professions of teacher, architect, engineer, and journalist through the following prompts, written in Spanish for each profession (male and female): "Un profesor y una profesora"; "arquitecto"; "arquitecta"; "ingeniero"; "ingeniera"; "periodista hombre"; "periodista mujer".
These professions were introduced through prompts in English in DALL·E in order to avoid the masculine or feminine markers present in other languages, such as Spanish. The content analysis allowed us to determine the gender with which each profession was represented, to grade the degree of stereotyping (not, less, or much stereotyped) in how women and men are depicted in different professions, and, as mentioned before, to compare the results with previous studies. Additionally, we included a sample of children's images made in Bing Image Creator to assess gender stereotypes from an early age in this system. Mannering (2023) also used AI-generated images of children to measure gender bias in relation to objects. In our study, the prompts used were "boy playing" ("niño jugando") and "girl playing" ("niña jugando"). They were introduced in Spanish since, in this case, we were not interested in finding out whether a boy or a girl would be represented, but rather in what the representation of each would look like.
This approach allows us to explore how biases generated by AI can influence professional perceptions and attributed gender roles, thus contributing to a deeper understanding of the phenomenon in contemporary society.
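Although all images in this study were generated through the DALL·E 3 and Bing Image Creator web interfaces, a similar prompt protocol could be scripted for larger samples. The sketch below is illustrative only: it assumes the OpenAI Python SDK (openai>=1.0) and an API key, and the prompt list and output handling are our own choices rather than part of the study's procedure.

```python
# Illustrative sketch only: the study used the DALL-E 3 and Bing Image Creator web
# interfaces; this shows how comparable occupation prompts could be scripted against
# OpenAI's Images API. Prompt list and output handling are our own assumptions.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

occupations = ["nurse", "tailor", "hotel manager", "secretary", "airplane pilot"]

for occupation in occupations:
    # The API returns one image per call for dall-e-3, so we request four separately
    # to approximate the four images per prompt examined in the study.
    for i in range(4):
        result = client.images.generate(
            model="dall-e-3",
            prompt=occupation,   # neutral English term, no gender marker
            size="1024x1024",
            n=1,
        )
        print(occupation, i, result.data[0].url)  # URLs can then be downloaded and coded
```

A script of this kind would only standardize image collection; the coding of gender and degree of stereotyping would still be carried out manually, as in the present study.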

3. Results

3.1. AI Techniques in Photography and the Existence of Gender Biases

Among the most significant advances resulting from the convergence of computing, artificial intelligence, and photography, a series of widely used applications stand out, which we collect in the following table (Table 3).
On the one hand, deep learning and neural networks have revolutionized image processing, enabling tasks such as the recognition of objects and people (facial recognition) (Chaouch 2023), image segmentation, and image creation with generative AI systems. To carry out this work, these systems need pre-existing image databases. Recent studies have led to documentaries, such as Netflix's 'Coded Bias' (Kantayya 2020), that show the algorithmic flaws present in facial recognition technology. Thus, for example, algorithms trained mainly on men may have difficulty recognizing female faces, and this bias is transferred to security systems (Buolamwini and Gebru 2018). Likewise, when generative AI models are asked to generate images of managers, they tend to represent men (Nicoletti and Bass 2023).
On the other hand, generative artificial intelligence systems, such as DALL-E 2, have been criticized for perpetuating gender stereotypes in the images, photographs, and illustrations they generate (Heikkilä 2023). A study published in September 2023 demonstrated the existence of gender stereotypes in the OpenAI image generator (DALL·E 2) and showed that, while research in humans on gender bias indicates strong stereotypes in 35% of cases, this generative AI almost doubles this figure with 59.4% of cases (García-Ull and Melero-Lázaro 2023).
The main reason why these applications show gender bias is that these systems are trained with large data sets that often reflect pre-existing cultural and social biases, especially if they are fed by photographs from the advertising field. The images generated by AI show known stereotypes, pre-existing in the historical, cultural, communicative, and advertising heritage of humanity.
The gender stereotypes generally cited in studies on the image of women in marketing and advertising refer to traditional gender roles; beauty and physical appearance, with women always represented as thin; the assignment of colors and styles; attitudes and emotions; mastery of certain fields of knowledge or certain jobs; family roles; clothes and fashion; and sexualization. AI-generated images can disproportionately sexualize women, which contributes to their objectification.
These stereotypes harm women in the workplace, pigeonholing them into job profiles of lower status than men's, with more passive than active attitudes and an absence of leadership.

3.2. State of the Art on the Representation of Women in the Workplace by Generative AI

For decades, significant disparities have existed between women in the labor market and their male colleagues, such as underrepresentation in positions of responsibility and barriers to STEM skills (Szenkman and Lotitto 2020; Bello and Estébanez 2022). Against this background, some authors point out that automation related to the extensive use of machine learning (ML) could exacerbate these inequalities (Collett et al. 2022; Ortiz de Zárate Alcarazo and Guevara Gómez 2021; Sainz et al. 2020). The biases of machine learning systems have already been implicated in high-impact real cases of discrimination by gender and race (Amazon's personnel recruitment system is just one of many examples that exist today) (Buolamwini and Gebru 2018; Nurock 2020; Sandoval-Martín et al. 2023).
The review of the literature on gender bias in AI revealed a mix of works published since the arrival of generative AI systems (peer-reviewed articles, unpublished manuscripts, and working papers) of great interest regarding the representation of women in relation to professions. Zhou et al. (2023) concluded in their work in progress, titled 'Bias in Generative AI', that the AI used to generate images reflects gender and racial biases, showing fewer women and Black people in professions and perpetuating gender stereotypes.
Likewise, García-Ull and Melero-Lázaro (2023) evidenced in 'Gender stereotypes in AI-generated images' a marked gender stereotyping in the occupational context of images generated by AI. Overall, 59.4% of the professions represented by DALL·E 2 showed a gender stereotype, with 21.6% of the professions completely stereotyped as female and 37.8% as male, revealing a significant disparity in the representation of specific professions even when neutral profession terms were used. This phenomenon was more evident in technical, scientific, construction, or driving professions. DALL·E 2 was shown to often associate women with roles such as cleaners and seamstresses and with professions in which appearance is relevant, such as actresses or singers, depicting young, Western, blonde women. By contrast, the synthetic images generated by DALL·E 2 present middle-aged or older men, predominantly of Western appearance, for professions associated with greater responsibility or status, such as politics, business, and religion.
The comparison with previous studies indicates that artificial intelligence presents a higher degree of gender stereotypes in the work environment. This can be attributed to the training data reflecting the biases and gender imbalances present in society. Stereotypical representation by AI can also contribute to reinforcing existing prejudices, creating a feedback loop (García-Ull and Melero-Lázaro 2023).
For their part, Cheong et al. (2024), in 'Investigating Gender and Racial Biases in DALL·E Mini Images', working with a database of 150 professions and 10 representations of each, likewise revealed the tendency to represent certain professions as male or female. More recently, Aldahoul et al. (2024), authors of the non-peer-reviewed study 'AI-Generated Faces Free from Racial and Gender Stereotypes', also analyzed racial and gender biases using quantitative methods, which are less frequent in gender studies, a field often based on qualitative approaches.
Heikkilä (2023), in a non-peer-reviewed piece in the MIT Technology Review, echoes the work of Luccioni et al. (2023), who observed that a gender bias in the representation of professions emerges when analyzing the images generated by the artificial intelligence models DALL·E 2 and Stable Diffusion. For example, when DALL·E 2 was asked to represent people in positions of authority, such as CEO or director, it generated images of white men 97% of the time. This bias is attributed to the models being trained on vast sets of data and images taken from the Internet, which reflect and amplify the gender stereotypes present in American culture, on which these models often focus. The authors also show how adding adjectives to a prompt influences the images produced, revealing gender bias: when terms such as "compassionate" or "sensitive" were included, women were generated more frequently, whereas "intellectual" tended to yield images of men.
For their part, Mandal et al. (2023), in 'Multimodal Bias: Assessing Gender Bias in Computer Vision Models with NLP Techniques', detected various types of stereotypical gender associations with respect to occupations in CLIP, the large multimodal deep learning model that underpins DALL·E and Stable Diffusion. Meanwhile, Mannering (2023), in his study 'Analysing Gender Bias in Text-to-Image Models using Object Detection', submitted to the STAI Workshop, measures bias in relation to objects, concluding that certain objects are associated with a particular sex: knives, bats, and bicycles were associated with men, while women were more frequently represented with objects such as bowls, bottles, cups, and bags, which are associated with domestic activities.
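To make this kind of association probing concrete, the following minimal sketch (not the procedure used by Mandal et al. 2023) compares occupation prompts with gendered reference phrases in CLIP's shared embedding space using the Hugging Face transformers library; the model checkpoint, occupation list, and reference phrases are our own illustrative assumptions.

```python
# Minimal sketch, assuming the openai/clip-vit-base-patch32 checkpoint: it probes CLIP
# for gender associations with occupation terms by comparing text embeddings. It is a
# simplified illustration, not the method of Mandal et al. (2023).
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

occupations = ["nurse", "mechanic", "secretary", "airplane pilot", "ceo"]
references = ["a photo of a man", "a photo of a woman"]

def embed(texts):
    # Encode texts into CLIP's shared embedding space and L2-normalize.
    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

occ_emb = embed([f"a photo of a {o}" for o in occupations])
ref_emb = embed(references)

# Cosine similarity of each occupation prompt with the "man" and "woman" references.
scores = occ_emb @ ref_emb.T
for occ, (man_sim, woman_sim) in zip(occupations, scores.tolist()):
    lean = "man" if man_sim > woman_sim else "woman"
    print(f"{occ:15s} man={man_sim:.3f} woman={woman_sim:.3f} lean={lean}")
```

A fuller audit would compare generated images rather than text embeddings, but even this simple probe shows how directional associations between occupations and gender can be quantified.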

3.3. Comparative Analysis between Versions: DALL·E 2 versus DALL·E 3

As explained in the methodological section, a delimited sampling of a series of professions was carried out, focusing on the professions in which García-Ull and Melero-Lázaro (2023) found notable aspects in their study of AI representation, which takes into account the 37 professions established by Farago et al. (2021) regarding stereotypes in the workplace. Two coders took part in this research, introducing the different prompts. All prompts entered in DALL·E were in English. Four images were generated for each query, and the results were classified according to their degree of stereotyping.
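The paper does not formalize this grading rule, so the sketch below uses hypothetical thresholds to show how the per-prompt classification into not, less, or much stereotyped could be expressed once the coders have labeled the gender shown in each of the four images; the function name and cutoffs are our own assumptions.

```python
# Illustrative sketch with assumed thresholds; the study's coders graded the images
# manually, and this is not a published scoring rule from the paper.
from collections import Counter

def degree_of_stereotyping(genders):
    """genders: 'man'/'woman' labels assigned by the coders to the four images of one prompt."""
    top_count = Counter(genders).most_common(1)[0][1]
    if top_count == len(genders):
        return "much stereotyped"   # all four images show the same gender
    if top_count > len(genders) / 2:
        return "less stereotyped"   # a clear majority of one gender
    return "not stereotyped"        # balanced representation

# Hypothetical coded data for two prompts:
print(degree_of_stereotyping(["woman", "woman", "woman", "woman"]))  # -> much stereotyped
print(degree_of_stereotyping(["man", "man", "woman", "woman"]))      # -> not stereotyped
```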
While the study by García-Ull and Melero-Lázaro (2023) shows strong stereotyping in more than half of the representations generated with DALL·E 2, a comparison with the results obtained with DALL·E 3 shows a slight improvement in this respect.
In DALL·E 2, only women were shown in professions such as "nurse", "tailor", "hotel manager", and "secretary". However, in cases such as "nurse", DALL·E 3 shows men and women equally (Figure 1). Furthermore, for "tailor" and "hotel manager", it goes from showing women in the previous version to showing men (Figure 2).
However, for other terms such as "maid", "teacher—primary", "teacher—secondary", "singer", and "secretary", it continues to show only women, in some cases with attractive features, as with the singer, who is depicted with exotic features, colorful feathers, and apparently no clothes. There are also professions, such as maid, that are visually represented as if from the 19th century (Figure 3).
Regarding the masculine representations of DALL·E 2, in DALL·E 3, professions such as "carpenter", "taxi driver", "truck driver", "mechanic", and "politician" continue to be shown with masculine traits (Figure 4). Men are shown carrying out actions in their workplace, facing the task they are doing, or surrounded by other men at summits and international organizations, with flags as a sign of power. The driver and the CEOs appear dressed in suits and ties.
In positions considered to be of higher status, such as those related to politics, more men than women continue to be represented. In others, such as "CEO" and "doctor", women are beginning to be represented (Figure 5).
Likewise, in DALL·E 3, women begin to be shown in professions such as "airplane pilot" (Figure 6), although they are not depicted at the controls of the plane but alongside it, in a passive attitude, in close-ups or medium shots that are somewhat sexualized, as if in an advertisement for sunglasses.
It is worth noting that, although men are represented at different ages and levels of attractiveness, women are mainly represented as young and with features considered attractive (Figure 7).

3.4. Exploratory Study with Bing Image Creator

Bing Image Creator is an advanced image generation tool that follows DALL·E's AI approach, allowing users to create realistic images from textual descriptions. This technology, based on image generation through neural networks, offers an interesting window into how visual representations can unconsciously reflect social biases.
Bing Image Creator uses a large set of image data to learn the representation patterns of different concepts. After training, the user can ask the platform to generate specific images simply by describing the desired concept.
To explore gender bias in the images generated about professions with Bing Image Creator, the tool was asked to create visual representations of different professions, namely "teacher" ("profesor y profesora"), "architect" ("arquitecto" and "arquitecta"), "engineer" ("ingeniero" and "ingeniera"), and "journalist" ("periodista hombre" and "periodista mujer"). The prompts were introduced in Spanish because, while with DALL·E we sought to discover whether a woman or a man would be represented, with Bing Image Creator we focused on how each profession was represented when explicitly specified as male or female.
  • Prompt: “Male teacher and female teacher” (“Un profesor y una profesora”)
In the case of teachers, the results showed a clear difference in representation. The female teacher is usually portrayed as young and attractive, while the male teacher shows signs of maturity, with gray hair suggesting experience. The difference in height between the two may reflect traditional gender stereotypes. Likewise, the male teacher dresses in a more formal, professional manner (Figure 8).
  • Prompt: “Male architect” (“arquitecto”) and “Female architect” (“arquitecta”)
In the context of architecture, representations generate visible gender stereotypes. The male architect is usually portrayed in assertive and self-confident poses, while the female architect may be portrayed more passively or focused on aesthetic beauty (Figure 9).
  • Prompt: “Male engineer” (“ingeniero”) and “Female engineer” (“ingeniera”)
In the case of the engineering profession, gender stereotypes also arise. The male engineer tends to be represented as a focused and serious person, while the female engineer may be represented with friendlier features or with an emphasis on her feminine presence (Figure 10).
  • Prompt: “Male journalist” (“periodista hombre”) and “Female journalist” (“periodista mujer”)
Gender bias also manifests itself in the journalistic profession. The male journalist is portrayed as a person with character, in action, with authority, giving instructions, wearing a tie, and somewhat disheveled, while the female journalist appears with a sweet and accommodating image and an aesthetically impeccable appearance (Figure 11).

3.5. Beyond Professions: Study of Gender Roles in Childhood

Moving away from professional representations, we also explored the case of a boy and a girl playing, using the prompts "boy playing" ("niño jugando") and "girl playing" ("niña jugando"). While the boy has fun with a truck (Figure 12), the girl appears in a pink dress and is entertained with soap bubbles (Figure 13). This scenario highlights another level of gender bias, reinforcing traditional stereotypes associated with gender roles. The images suggest an implicit association between the boy's play and the technical sphere, while the girl appears in a context more oriented towards beauty and social norms. These results underline the importance of taking gender biases into account even in more informal contexts, such as play, and highlight once again the need for a critical approach in the design and use of AI technologies.
Bing Image Creator offers an illuminating view of gender bias in the generated images, reflecting and amplifying existing cultural stereotypes. These results underscore the need for awareness and continuous improvement in algorithm design to ensure more equitable and inclusive representation.

4. Discussion

Image generative AI models continue to offer results with gender biases despite the existence of various studies, such as those by García-Ull and Melero-Lázaro (2023) and Cheong et al. (2024), which have inspired part of this research and with whose conclusions we agree. These systems produce a series of errors and contain algorithmic biases that perpetuate gender stereotypes. The in-depth examination of the results of this research shows a major unresolved problem, namely the sexualized representation of women in this area.
A central aspect of this problem lies in the perpetuation of gender inequalities, related to the perception of gender issues, as analyzed by Lee et al. (2023). As several authors have pointed out, the difficulty of counteracting this problem is enormous, given its complexity from both a technological and a social point of view (Gillis and Pratt 2023). Moreover, the opacity of neural network learning mechanisms makes it difficult to understand these internal processes (Ray 2023).
Unpublished research by Zhou et al. (2023) and the article by García-Ull and Melero-Lázaro (2023) have revealed that generative AI used for image creation perpetuates gender biases, thus reinforcing stereotypes related to professions. Even though the biases shown by AI in healthcare, banking (personal and mortgage loans), social assistance, and other domains have significant scope and consequences for women's lives (Sandoval-Martín et al. 2021), and these risks have been known for decades (Gillis and Pratt 2023), they have not yet been resolved.
A comparison between the different studies in this state of the art showed that the results of García-Ull and Melero-Lázaro (2023) revealed a significant disparity in the representation of professions, with a high percentage of gender stereotypes. Technical, scientific, construction, or driving professions were especially affected, with women often associated with roles traditionally linked to appearance or specific characteristics. Cheong et al. (2024) and Aldahoul et al. (2024) confirmed these conclusions by quantifying the gender bias in the representation of occupations. Heikkilä (2023) highlighted how models such as DALL·E 2 and Stable Diffusion reinforce gender stereotypes.
Finally, our comparative analysis between different versions of the models, such as DALL·E 2 and DALL·E 3, revealed a slight improvement in the representation of women in certain occupations. However, persistent trends, especially in the maintenance of stereotypes linked to appearance and professions traditionally associated with a specific gender, highlight the continued need to improve these tools.
One limitation of this study is that MidJourney and Stable Diffusion, the other two models on which scientific results have been published, have not been analyzed. The analysis of the remaining applications will allow for a complete mapping of the position of these companies regarding one of the fundamental rights of women, that of equality, and SDG 5. In addition, a detailed study of the representation of children by these applications would be desirable.

5. Conclusions

Comparative observations between different versions of models, such as DALL·E 2 and DALL·E 3, reveal partial but persistent improvements in the representation of women in certain professions. However, the maintenance of stereotypes, especially regarding appearance and professions traditionally associated with a specific gender, highlights the ongoing need to adjust training designs and practices from a gender perspective and to address underlying biases in the data.
The results of this research reveal deep complexity in the way generative models perpetuate gender stereotypes. So, to promote more equitable representation of women in generative AI, meaningful action is essential. Continued efforts on both the technological and social fronts, scrutinizing training data, improving algorithms, and encouraging critical thinking about gender biases are crucial. Only a holistic approach can ensure significant progress towards fair and balanced representation of women in the field of generative AI.
Although investigating gender biases in order to demonstrate their existence is essential, it is not enough: it is also necessary to seek technical and ethical solutions, such as the explainability of AI, and to promote analysis and critical reflection among those who design and use these tools, through training and literacy in ways of working that take the gender perspective into account.
The difficulty of counteracting these tendencies is exacerbated by the opaque nature of networked learning processes, which makes it difficult to understand and modify these internal mechanisms. The social implications of the perpetuation of gender stereotypes by generative AI are considerable, influencing social perceptions of professional roles and reinforcing existing gender inequalities.

Author Contributions

Conceptualization, T.S.-M.; Methodology, T.S.-M. and E.M.-S.; Software, E.M.-S.; Validation, T.S.-M. and E.M.-S.; Formal Analysis, T.S.-M. and E.M.-S.; Research, T.S.-M. and E.M.-S.; Resources, T.S.-M. and E.M.-S.; Data Curation, T.S.-M. and E.M.-S.; Writing—Original Draft Preparation, T.S.-M. and E.M.-S.; Translation, E.M.-S.; Fund Acquisition, T.S.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This publication is part of the R&D&I project PID2019-106695RB-I00 (AIGENBIAS, Identification of gender biases in artificial intelligence. Technological, scientific and media discourses), funded by MCIN/AEI/10.13039/501100011033/. The APC was funded by University Carlos III of Madrid (Spain).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Acerbi, Alberto, and Joseph M. Stubbersfield. 2023. Large language models show human-like content biases in transmission chain experiments. Proceedings of the National Academy of Sciences 120: e2313790120. [Google Scholar] [CrossRef]
  2. Aldahoul, Nouar, Talal Rahwan, and Yasir Zaki. 2024. AI-Generated Faces Free from Racial and Gender Stereotypes. arXiv arXiv:2402.01002. [Google Scholar]
  3. Ashby, Madeline. 2023. El Futuro de Hollywood Pertenece a las Personas, No a la IA. Wired. July 17. Available online: https://es.wired.com/articulos/futuro-de-hollywood-pertenece-a-las-personas-no-a-la-inteligencia-artificial (accessed on 1 February 2024).
  4. Bello, Alessandro, and María Elina Estébanez. 2022. Una ecuación desequilibrada: Aumentar la participación de las mujeres en STEM en LAC. Centro Regional para el Fomento del Libro en América Latina y el Caribe, Cerlalc/UNESCO y Universidad Autónoma de Chile. Available online: https://forocilac.org/wp-content/uploads/2022/02/PolicyPapers-CILAC-Gender-ESP.pdf (accessed on 1 February 2024).
  5. Broderick, Ryan. 2023. AI can’t replace humans yet: But if the WGA Writers Don’t Win, it Might not Matter. Polygon. May 31. Available online: https://www.polygon.com/23742770/ai-writers-strike-chat-gpt-explained (accessed on 1 February 2024).
  6. Buolamwini, Joy, and Timnit Gebru. 2018. Gender shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Paper presented at the 1st Conference on Fairness, Accountability and Transparency, New York, NY, USA, February 23–24; vol. 81, pp. 77–91. Available online: http://proceedings.mlr.press/v81/buolamwini18a.html?mod=article_inline&ref=akusion-ci-shi-dai-bizinesumedeia (accessed on 15 January 2024).
  7. Chaouch, Thameur. 2023. ImageNet Classification with Deep Convolutional Neural Networks. Medium. September 23. Available online: https://medium.com/@chaouch.thameur.tc61/imagenet-classification-with-deep-convolutional-neural-networks-1b4a2f708bc4 (accessed on 1 February 2024).
  8. Cheong, Marc, Ehsan Abedin, Marinus Ferreira, Ritsaart Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano, and Colin Klein. 2024. Investigating gender and racial biases in DALL·E Mini Images. ACM Journal on Responsible Computing, 1–21. [Google Scholar] [CrossRef]
  9. Codina, Lluís, Carlos Lopezosa, and Pere Freixa. 2022. Scoping reviews en trabajos académicos en comunicación: Frameworks y fuentes. In Información y Big Data en el sistema híbrido de medios. Edited by Ainara Larrondo Ureta, Kold Meso Ayerdi and Simón Peña Fernández. Bilbao: Servicio Editorial de la Universidad del País Vasco. Available online: https://www.lluiscodina.com/wp-content/uploads/2022/05/scoping-reviews-comunicacion.pdf (accessed on 15 January 2024).
  10. Collett, Clementine, Gina Neff, and Livia Gouvea Gomes. 2022. Los efectos de la IA en la vida laboral de las mujeres. UNESCO, OCDE & BID. Available online: https://wp.oecd.ai/app/uploads/2022/03/Los-efectos-de-la-IA-en-la-vida-laboral-de-las-mujeres.pdf (accessed on 15 January 2024).
  11. Dobreva, Mihaela, Tea Rukavina, Vivian Stamou, Anastasia Nefeli Vidaki, and Lida Zacharopoulou. 2023. A Multimodal Installation Exploring Gender Bias in Artificial Intelligence. In HCII 2023: Universal Access in Human-Computer Interaction. Edited by Margherita Antona and Constantine Stephanidis. Lecture Notes in Computer Science. New York and Cham: Springer, vol. 14020, pp. 27–46. [Google Scholar] [CrossRef]
  12. Farago, Flora, Natalie D. Eggum-Wilkens, and Lilin Zhang. 2021. Ugandan adolescents’ gender stereotype knowledge about jobs. Youth & Society 53: 723–744. [Google Scholar] [CrossRef]
  13. García-Ull, Francisco José, and Mónica Melero-Lázaro. 2023. Gender stereotypes in AI-generated images. Profesional de la Información 32: 1–13. [Google Scholar] [CrossRef]
  14. Gillis, Alexander S., and Mary K. Pratt. 2023. In-Depth Guide to Machine Learning in the Enterprise. Techtarget. Available online: https://www.techtarget.com/searchenterpriseai/definition/machine-learning-bias-algorithm-bias-or-AI-bias (accessed on 1 February 2024).
  15. Gottfredson, Linda S. 1981. Circumscription and compromise: A developmental theory of occupational aspirations. Journal of Counseling Psychology 28: 545–79. [Google Scholar] [CrossRef]
  16. Gómez-de-Ágreda, Ángel, Claudio Feijóo, and Idoia-Ana Salazar-García. 2021. Una nueva taxonomía del uso de la imagen en la conformación interesada del relato digital. Deep fakes e inteligencia artificial. Profesional de la Información 30: 1–24. [Google Scholar] [CrossRef]
  17. Heikkilä, Melissa. 2023. ¿Esta IA es racista o machista? Compruébalo con estas herramientas. MIT Technology Review. March 27. Available online: https://www.technologyreview.es/s/15220/esta-ia-es-racista-o-machista-compruebalo-con-estas-herramientas (accessed on 15 January 2024).
  18. Kantayya, Shalini. 2020. Coded Bias. Netflix. Available online: https://www.netflix.com/es/title/81328723 (accessed on 15 January 2024).
  19. Lee, Sang, Raya Hamad Alsereidi, and Samar Ben Romdhane. 2023. Gender Roles, Gender Bias, and Cultural Influences: Perceptions of Male and Female UAE Public Relations Professionals. Social Sciences 12: 673. [Google Scholar] [CrossRef]
  20. Luccioni, Alexandra Sasha, Christopher Akiki, Margaret Mitchell, and Yacine Jernite. 2023. Stable Bias: Analyzing Societal Representations in Diffusion Models. arXiv arXiv:2303.11408v2. [Google Scholar]
  21. Lucy, Li, and David Bamman. 2021. Gender and representation bias in GPT-3 generated stories. In Proceedings of the Third Workshop on Narrative Understanding. Edited by Nader Akoury, Faeze Brahman, Snigdha Chaturvedi, Elizabeth Clark, Mohit Iyyer and Lara J. Martin. Stroudsburg: Association for Computational Linguistics, pp. 48–55. [Google Scholar]
  22. Mandal, Abhishek, Suzanne Little, and Susan Leavy. 2023. Gender Bias in Multimodal Models: A Transnational Feminist Approach Considering Geographical Region and Culture. Paper presented at the 1st Workshop on Fairness and Bias co-located with the 26th European Conference on Artificial Intelligence (ECAI 2023), Krakow, Poland, October 1; Aachen: CEUR. Available online: https://ceur-ws.org/Vol-3523/ (accessed on 1 February 2024).
  23. Mannering, Harvey. 2023. Analysing Gender Bias in Text-to-Image Models Using Object Detection. Submitted to STAI Workshop 2023. Available online: https://arxiv.org/pdf/2307.08025.pdf (accessed on 15 January 2024).
  24. Nicoletti, Leonardo, and Dina Bass. 2023. Humans Are Biased: Generative AI Is Even Worse. Bloomberg Technology+ Equality. June 9. Available online: https://www.bloomberg.com/graphics/2023-generative-ai-bias/ (accessed on 3 February 2024).
  25. Nurock, Vanessa. 2020. ¿Puede prestar cuidados la Inteligencia Artificial? Cuadernos de Relaciones Laborales 38: 217–29. [Google Scholar] [CrossRef]
  26. Ortiz de Zárate Alcarazo, Lucía, and Ariana Guevara Gómez. 2021. Inteligencia artificial e igualdad de género. Un análisis comparado en UE, Suecia y España. Madrid: Fundación Alternativas. Available online: https://www.igualdadenlaempresa.es/recursos/estudiosMonografia/docs/Estudio_Inteligencia_artificial_e_igualdad_de_genero_Fundacion_Alternativas.pdf (accessed on 15 January 2024).
  27. Panch, Trishan, Heather Mattie, and Rifat Atun. 2019. Artificial intelligence and algorithmic bias: Implications for health systems. Journal of Global Health 9: 010318. [Google Scholar] [CrossRef] [PubMed]
  28. Ray, Partha Pratim. 2023. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 3: 121–54. Available online: https://www.sciencedirect.com/science/article/pii/S266734522300024X (accessed on 15 January 2024). [CrossRef]
  29. Sandoval-Martín, Teresa, Clara Sainz de Baranda, and Leonardo La-Rosa Barrolleta. 2021. La expansión de los sesgos de género con la inteligencia artificial. In Estudios de Género en tiempos de amenaza. Edited by Elena Bandrés. Madrid: Dykinson, pp. 566–83. [Google Scholar]
  30. Sandoval-Martín, Teresa, Victoria Moreno-Gil, and Ester Martínez-Sanzo. 2023. Ausencias de género en la ética de la IA: De las recomendaciones internacionales a la Estrategia Española. In Desafíos éticos y Tecnológicos del Avance Digital. Madrid: Portal de Derecho, S.A. Iustel, pp. 315–28. [Google Scholar]
  31. Sáinz, Milagros, Lidia Arroyo, and Cecilia Castaño. 2020. Mujeres y digitalización. De las brechas a los algoritmos. Madrid: Instituto de la Mujer y para la Igualdad de Oportunidades. Ministerio de Igualdad. Available online: https://www.inmujeres.gob.es/diseno/novedades/M_MUJERES_Y_DIGITALIZACION_DE_LAS_BRECHAS_A_LOS_ALGORITMOS_04.pdf (accessed on 1 February 2024).
  32. Schomer, Audrey. 2023. Entertainment Industry Has High Anxiety about Generative AI: Survey. Variety. July 6. Available online: https://variety.com/vip/generative-ai-survey-entertainment-industry-anxiety-jobs-1235662009/ (accessed on 1 February 2024).
  33. Shrestha, Sunny, and Sanchari Das. 2022. Exploring gender biases in ML and AI academic research through systematic literature review. Frontiers in Artificial Intelligence 5: 976838. [Google Scholar] [CrossRef] [PubMed]
  34. Sun, Tony, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. Paper presented at the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 28–August 2; Florence: Association for Computational Linguistics, pp. 1630–40. [Google Scholar] [CrossRef]
  35. Szenkman, Paula, and Estefanía Lotitto. 2020. Mujeres en STEM: Cómo romper con el círculo vicioso. In Documento de Políticas Públicas Nº 224. Buenos Aires: CIPPEC. Available online: https://www.cippec.org/wp-content/uploads/2020/11/224-DPP-PS-Mujeres-en-STEM-Szenkman-y-Lotitto-noviembre-2020-1.pdf (accessed on 15 January 2024).
  36. Teig, Stacey, and Joshua E. Susskind. 2008. Truck driver or nurse? The impact of gender roles and occupational status on children’s occupational preferences. Sex Roles 58: 848–63. [Google Scholar] [CrossRef]
  37. Watercutter, Angela. 2023. La huelga de actores de Hollywood y la lucha contra la IA. Wired. July 14. Available online: https://es.wired.com/articulos/huelga-de-actores-de-hollywood-y-la-lucha-contra-inteligencia-artificial (accessed on 15 January 2024).
  38. Zhou, Mi, Vibhanshu Abhishek, and Kannan Srinivasan. 2023. Bias in Generative AI (Work in Progress). Available online: https://www.andrew.cmu.edu/user/ales/cib/bias_in_gen_ai.pdf (accessed on 1 February 2024).
Figure 1. Nurse. Source: Own elaboration with DALL·E 3.
Figure 2. Tailor and Hotel Manager. Source: Own elaboration with DALL·E 3.
Figure 3. Maid, Teacher—primary, Teacher—secondary, Singer and Secretary. Source: Own elaboration with DALL·E 3.
Figure 4. Carpenter, Taxi driver, Truck driver, Mechanic and Politician. Source: Own elaboration with DALL·E 3.
Figure 5. CEO and Doctor. Source: Own elaboration with DALL·E 3.
Figure 6. Airplane Pilot. Source: Own elaboration with DALL·E 3.
Figure 7. Journalist and Professor. Source: Own elaboration with DALL·E 3.
Figure 8. Male teacher and Female teacher. Source: Own elaboration using Bing Image Creator.
Figure 9. Male architect and Female architect. Source: Own elaboration using Bing Image Creator.
Figure 10. Male engineer and Female engineer. Source: Own elaboration using Bing Image Creator.
Figure 11. Male journalist and Female journalist. Source: Own elaboration using Bing Image Creator.
Figure 12. Child (male) playing. Source: Own elaboration using Bing Image Creator.
Figure 13. Child (female) playing. Source: Own elaboration using Bing Image Creator.
Table 1. Articles about DALL·E and professional stereotypes in several occupations.
Authors | Tools Analyzed | Research
García-Ull and Melero-Lázaro (2023) | DALL·E 2 | Identifies gender biases in professions through the images generated.
Cheong et al. (2024) | DALL·E Mini | Shows how the representation of certain professions tends to be associated with a certain gender or race.
Source: Own elaboration.
Table 2. Exploratory tests in this study.
Tool | Company/Developer | Availability | Focus
Bing Image Creator | Microsoft | Free | Accessible tool for creating basic visualizations
DALL·E 3 | OpenAI | Paid | Most advanced interface for image generation
Source: Own elaboration.
Table 3. AI techniques in photography.
AI Techniques in Photography
1. Deep learning, neural networks, and image labeling
2. Image generation
3. Super resolution
4. Detection of ultrafake images or deepfakes
5. Sentiment analysis
6. Semantic segmentation
7. Scene recognition
8. Search for images by content
9. Augmented reality
10. Medical applications
11. Privacy and Security
12. Styling and editing of images
13. Composition analysis
14. Green applications
15. Security and surveillance applications
Source: Own elaboration.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
