Article

Ethical AI in Social Sciences Research: Are We Gatekeepers or Revolutionaries?

1 Centre of Research Development and Innovation in Psychology, Faculty of Educational Sciences, Aurel Vlaicu University of Arad, 310032 Arad, Romania
2 Centre for Economic Research and Consultancy, Faculty of Economics, Aurel Vlaicu University of Arad, 310032 Arad, Romania
* Author to whom correspondence should be addressed.
Societies 2025, 15(3), 62; https://doi.org/10.3390/soc15030062
Submission received: 7 February 2025 / Revised: 23 February 2025 / Accepted: 4 March 2025 / Published: 6 March 2025

Abstract:
The rapid expansion of artificial intelligence (AI) in social sciences research introduces both transformative potential and critical ethical dilemmas. This study examines the role of researchers as either ethical gatekeepers or pioneers of AI-driven change. Through a bibliometric analysis of 464 records from the Web of Science Core Collection, we identify key themes in ethical AI discourse using VOSviewer Version 1.6.20. The findings highlight dominant ethical concerns, including governance, bias, transparency, and fairness, emphasizing the need for interdisciplinary collaborations and responsible AI frameworks. While AI offers efficiency and scalability in research, unresolved issues related to algorithmic bias, governance, and public trust persist. The overlay visualization underscores emerging trends such as generative AI, policy-driven governance, and ethical accountability frameworks. This study calls for a shift from passive oversight to proactive ethical stewardship in AI-driven social science research.

1. Introduction

Artificial intelligence (AI) has become an essential tool in social sciences research, offering new opportunities for data analysis, predictive modeling, and interdisciplinary collaboration. However, as AI systems are increasingly embedded in academic inquiry and decision-making, concerns about fairness, transparency, bias, and governance have become central to ethical debates [1,2]. Researchers are faced with a dual responsibility: to harness AI’s transformative potential while ensuring its ethical and responsible implementation. This tension raises a fundamental question: are social scientists merely gatekeepers enforcing existing ethical constraints, or are they active participants in shaping AI’s role in research and policy?
While existing scholarship has extensively explored AI ethics frameworks, algorithmic fairness, and regulatory principles, much of this work remains theoretical or fragmented, lacking an empirical, data-driven overview of how ethical AI discourse is evolving within the social sciences [3,4,5]. The current literature highlights conceptual frameworks for responsible AI but offers limited insights into how these principles translate into practice across different research domains. Furthermore, previous studies tend to focus on specific case studies or sectoral applications, without a comprehensive mapping of emerging trends, conceptual interconnections, and governance challenges [6,7]. This study addresses these gaps by employing bibliometric analysis to systematically examine the thematic landscape of ethical AI in social sciences research, identifying dominant research trends and key governance debates.
Using 464 records from the Web of Science Core Collection, we identify 43 recurring concepts related to AI ethics. The findings reveal persistent gaps between AI ethics principles and their practical implementation, particularly in areas such as fairness-aware AI, algorithmic transparency, and regulatory oversight [8,9]. This study also examines AI’s implications for higher education, interdisciplinary policymaking, and the governance of emerging AI technologies, shedding light on how social scientists navigate the ethical dilemmas posed by AI-driven methodologies.
One of the core challenges in AI ethics is the difficulty of transitioning from theoretical principles to practical applications. While ethical guidelines for AI have been widely discussed [10], scholars argue that the lack of standardized implementation methods has led to concerns about “ethics washing”, where AI ethics frameworks are adopted in rhetoric but lack meaningful enforcement mechanisms [11]. AI-driven decision-making in social sciences raises further concerns regarding algorithmic bias, fairness-aware models, and the exclusion of marginalized voices in knowledge production [12]. Additionally, some researchers caution that overly rigid ethical regulations may inadvertently stifle AI-driven innovation in academic and policy settings [13,14].
One proposed strategy for addressing these concerns is the “ethics by design” approach, which advocates for integrating ethical considerations into every stage of AI development [15,16]. However, studies suggest that such frameworks remain aspirational rather than operational, as ethical AI guidelines frequently fail to accommodate the complexities of real-world implementation [17]. This disconnect has prompted critiques of AI’s role in shaping exclusionary pedagogies, where dominant perspectives in AI ethics may marginalize alternative or non-Western viewpoints [18].
Beyond academia, ethical AI research intersects with broader legal, social, and economic concerns, requiring interdisciplinary governance approaches that balance technological advancement with ethical responsibility [19]. As AI increasingly influences decision-making in various domains, scholars emphasize the need for reflexivity in AI ethics research, ensuring that ethical discussions remain context-sensitive and adaptable to technological advancements [20].
This study contributes to the growing discourse on ethical AI governance by critically assessing the role of social scientists as both regulators and enablers of AI ethics. The findings call for a shift from passive oversight to active ethical engagement, underscoring the importance of interdisciplinary collaboration, reflexivity in AI ethics, and context-driven governance frameworks. The study highlights the need for dynamic, inclusive, and adaptive ethical frameworks that evolve alongside AI’s expanding role in social sciences research. By providing a structured, empirical mapping of ethical AI discourse, this study offers a data-driven foundation for future discussions on AI governance, policy frameworks, and responsible AI integration across academic and professional fields.

2. Methodology

This study employs bibliometric analysis to systematically examine the thematic landscape of ethical AI in social sciences research. Bibliometric methods are widely used to map research trends, identify conceptual linkages, and assess the intellectual structure of a given field. In this study, we utilized the Web of Science Core Collection as the primary data source due to its extensive coverage of high-impact, peer-reviewed publications.
Data were retrieved using the search query “ethical AI AND social sciences research” and applied across all fields. This search yielded 464 records, including full metadata and cited references, which were exported in text file format for further analysis. To ensure the dataset was relevant to the research focus, articles were filtered based on their titles, abstracts, and keywords, prioritizing studies explicitly addressing AI ethics within social sciences contexts.
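The relevance filtering described above can be sketched in code. This is an illustrative example, not the authors' actual pipeline: the field tags (TI, AB, DE, ER) follow the Web of Science plain-text export convention, while the term lists and parsing details are simplifying assumptions.

```python
import re

# Keep only records whose title (TI), abstract (AB), or author keywords (DE)
# mention both an AI/ethics term and a social-science context term.
AI_TERMS = re.compile(r"\b(ethic\w*|artificial intelligence|ai)\b", re.I)
CONTEXT_TERMS = re.compile(r"\b(social scienc\w*|governance|fairness|bias|transparen\w*)\b", re.I)

def parse_wos_records(text: str) -> list[dict]:
    """Split a WoS plain-text export into records keyed by two-letter field tag."""
    records = []
    for chunk in text.split("\nER\n"):  # "ER" marks the end of each record
        fields: dict[str, str] = {}
        tag = None
        for line in chunk.splitlines():
            if len(line) > 3 and line[:2].strip() and line[2] == " ":
                tag = line[:2]
                fields[tag] = line[3:]
            elif tag and line.startswith("   "):
                fields[tag] += " " + line.strip()  # continuation line
        if fields:
            records.append(fields)
    return records

def is_relevant(rec: dict) -> bool:
    text = " ".join(rec.get(tag, "") for tag in ("TI", "AB", "DE"))
    return bool(AI_TERMS.search(text) and CONTEXT_TERMS.search(text))
```

A record about "Ethical AI in social science governance" would pass this filter, while one about protein folding would not.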
To account for potential regional variations in ethical AI discourse, we conducted an additional country-based analysis of the retrieved records. Using the metadata available in the Web of Science Core Collection, we classified publications based on the primary country affiliation of their corresponding authors. This allowed us to identify national trends in AI ethics research and assess whether governance frameworks and ethical debates varied across geopolitical contexts. However, the scope of this study remains limited to bibliometric mapping rather than in-depth legal and policy analysis at the national level.
The selection of literature sources was guided by the objective of capturing the most influential and widely cited works in ethical AI research. The bibliometric dataset drawn from the Web of Science Core Collection was cross-referenced with key theoretical and empirical contributions to ensure that the analyzed records aligned with foundational texts in AI ethics. Additionally, sources were selected based on their impact factor, citation count, and relevance to AI governance, bias mitigation, and interdisciplinary AI research. By comparing the bibliometric dataset with existing literature reviews and policy documents, this study ensures that both academic and policy-oriented perspectives are incorporated into the analysis. This methodological triangulation enhances the reliability of the thematic clusters identified and strengthens the study’s contribution to ongoing ethical AI debates.
To identify core research themes, an initial keyword extraction was conducted, resulting in 312 unique terms. These terms were refined by applying a minimum frequency threshold of two occurrences, meaning only terms appearing at least twice in the dataset were included in the final analysis. This filtering process led to 43 key terms, ensuring that only recurrent and significant concepts were considered.
To enhance the accuracy and coherence of keyword analysis, a thesaurus file was implemented to standardize terminology and reduce redundancy. For example, “AI” and “artificial-intelligence” were merged into “artificial intelligence”, and “bibliometrics” was standardized as “bibliometric analysis”. This process improved conceptual clarity and clustering accuracy, facilitating a more precise thematic classification.
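The two refinement steps above, thesaurus-based standardization followed by a minimum-frequency threshold, can be expressed as a short function. The thesaurus entries mirror the examples given in the text; any further entries would be added the same way.

```python
from collections import Counter

# Thesaurus mapping variant spellings to a canonical term,
# as described in the text.
THESAURUS = {
    "ai": "artificial intelligence",
    "artificial-intelligence": "artificial intelligence",
    "bibliometrics": "bibliometric analysis",
}

def refine_terms(raw_keywords: list[str], min_occurrences: int = 2) -> dict[str, int]:
    """Standardize keywords via the thesaurus, then drop terms below the threshold."""
    canonical = [THESAURUS.get(k.lower(), k.lower()) for k in raw_keywords]
    counts = Counter(canonical)
    return {term: n for term, n in counts.items() if n >= min_occurrences}
```

Applying this to a raw keyword list merges "AI" and "artificial-intelligence" into a single count and removes singletons, which is how 312 unique terms can reduce to a smaller set of recurrent concepts.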
To explore conceptual relationships and thematic structures, we used VOSviewer, a specialized software for bibliometric mapping. The network visualization revealed co-occurrence patterns between key terms, highlighting interconnections among ethical concerns, governance frameworks, fairness-aware AI, and transparency. To identify temporal trends, we applied an overlay visualization, which provided a chronological perspective on the evolution of ethical AI discourse.
A minimum cluster size of 10 items was set to ensure meaningful groupings of related concepts. This allowed for the identification of distinct research trajectories within ethical AI scholarship. The clustering process revealed major thematic categories, including:
  • Governance and policy regulation;
  • Bias, fairness, and ethical concerns;
  • Transparency and trust in AI systems;
  • Methodological and interdisciplinary considerations.
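The co-occurrence counting that underlies a VOSviewer term map can be sketched as follows. Each record contributes one link per unordered pair of its keywords, and a term's total link strength is the sum of its link weights. This is a minimal illustration; VOSviewer's actual normalization and modularity-based clustering are not reproduced here.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_links(records: list[list[str]]) -> Counter:
    """Count keyword co-occurrences: one link per unordered pair per record."""
    links: Counter = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            links[(a, b)] += 1
    return links

def total_link_strength(links: Counter, term: str) -> int:
    """Sum the weights of all links that involve the given term."""
    return sum(w for (a, b), w in links.items() if term in (a, b))
```

A central term such as "artificial intelligence" accumulates high total link strength because it co-occurs with many other keywords across many records.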
This methodological approach ensures a systematic, replicable, and data-driven investigation of ethical AI in social sciences. By leveraging bibliometric techniques and visualization tools, this study provides a structured overview of emerging research trends, conceptual intersections, and governance challenges in the evolving discourse on ethical AI.

3. Results

The bibliometric analysis of ethical AI in social sciences research identified key research trends, dominant themes, and emerging areas of interest. A total of 464 records were retrieved from the Web of Science Core Collection using the search query “ethical AI AND social sciences research”. These records, including full metadata and cited references, were exported in text format for further analysis. To identify conceptual relationships and evolving research directions, a keyword co-occurrence analysis was performed using VOSviewer.
An initial extraction of 312 unique keywords provided a broad thematic overview of ethical AI research. To ensure conceptual clarity and avoid redundancy, a thesaurus file was applied to merge synonymous and variant terms. For instance, “AI” and “artificial-intelligence” were standardized as “artificial intelligence”, “bibliometrics” was consolidated under “bibliometric analysis”, and “AI ethics” was refined to “ethical artificial intelligence”. To focus on the most relevant terms, a minimum frequency threshold of two occurrences was applied, resulting in a final dataset of 43 key terms that formed the basis for further thematic analysis.
To help frame these results, we also consulted the Oxford Insights AI Readiness Index, which ranks countries' readiness for artificial intelligence adoption based on governance, infrastructure, and ethical compliance policies. The index offers a macro-level view of how countries incorporate artificial intelligence into their legislative agendas. The correspondence between high AI readiness scores and the volume of AI ethics scholarship suggests a correlation: countries with advanced AI strategies, such as the United States, Canada, and the member states of the European Union, tend to generate more research on governance, accountability, and fairness-aware AI. Conversely, countries still developing AI governance models may prioritize pragmatic regulatory issues over theoretical AI ethics debates. Incorporating these indicators into AI ethics research can provide a more detailed picture of regional AI policy implementation, especially in developing countries where AI governance structures are still taking shape.
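The correlation suggested above between readiness scores and publication counts could be tested with a rank correlation such as Spearman's rho, which suits the ordinal nature of index rankings. The numbers below are entirely made up for illustration; this simple implementation also assumes no tied values.

```python
def spearman_rho(xs: list[float], ys: list[float]) -> float:
    """Spearman rank correlation for untied data: 1 - 6*sum(d^2)/(n(n^2-1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Perfectly monotone data yields rho = 1.0; in practice, a library routine that handles ties (e.g., `scipy.stats.spearmanr`) would be preferable.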
The network visualization generated through VOSviewer provided a structured representation of thematic relationships in ethical AI research, as seen in Figure 1.
The visualization revealed a highly interconnected conceptual framework, with artificial intelligence serving as the central node linking multiple subdomains of AI ethics. The analysis identified three major clusters, each representing a distinct thematic concentration in the ethical AI discourse.
The first cluster focused on ethics, fairness, and machine learning, emphasizing ongoing discussions on fairness-aware AI, algorithmic bias, and responsible AI development. Key terms in this cluster included “ethical artificial intelligence”, “fairness-aware”, “responsible AI”, and “bias”. The strong association between “machine learning” and fairness discussions indicates that scholars are particularly concerned with algorithmic transparency and bias mitigation in automated decision-making systems.
The second cluster centered on methodological and research-oriented themes, highlighting AI’s dual role as both an analytical tool and an ethical subject. This cluster contained terms such as “bibliometric analysis”, “education”, “innovation”, and “research ethics”, demonstrating the increasing use of AI-driven research tools to assess scientific impact and ethical concerns. The inclusion of “ChatGPT” in this cluster suggests a growing scholarly focus on generative AI models and their implications for academic integrity, authorship, and knowledge dissemination.
The third major cluster revolved around privacy, trust, and governance, encapsulating broader concerns about AI’s role in society and policymaking. Key terms such as “privacy”, “trust”, “ethical concerns”, and “information quality” were closely linked, indicating that issues of transparency and public confidence in AI systems remain central to ethical debates. The presence of “alexa” as a connected term suggests increasing awareness of the ethical dimensions of AI-powered voice assistants and personal data security.
Beyond these individual clusters, the network structure itself demonstrated the deeply intertwined nature of AI ethics research. The frequent overlaps between governance, bias, and fairness underscore the ongoing challenge of developing regulatory frameworks that balance technological innovation with ethical responsibility. Additionally, the connections between methodological advances, educational applications, and AI ethics suggest that scholars are not only analyzing AI from an ethical standpoint but also employing AI-based tools to examine its societal implications.
The overlay visualization (Figure 2) provides a temporal lens through which the evolution of ethical AI research in social sciences can be examined. The color-coded timeline, ranging from blue (2021) to yellow (2024), reflects the emergence, persistence, and transformation of key ethical concerns in AI discourse over time.
At the center of the visualization, “artificial intelligence” and “ethics” maintain strong interconnections, appearing in shades of green, indicating that they have remained consistently relevant over multiple years. These terms serve as anchor points around which discussions on bias, governance, fairness, and AI responsibility have evolved [21]. The longstanding presence of “bias” and “fairness”, appearing in blue and green tones, suggests that concerns about algorithmic discrimination and fairness-aware AI systems have been persistent since at least 2021 [22]. However, while these topics continue to be investigated, new ethical challenges have emerged in recent years, particularly with the rise of generative AI models.
A particularly notable development is the rapid emergence of generative AI research, as indicated by the bright yellow shading of terms such as “ChatGPT” (2024) and “generative AI” (2024). This signals a recent surge in scholarly interest, with researchers increasingly exploring the ethical implications of AI-generated content, misinformation, and its impact on academic integrity [23,24]. The connection between “ChatGPT” and “bibliometric analysis” suggests that scholars are employing quantitative methods to assess how generative AI is influencing knowledge production, publication practices, and ethical considerations in research [25].
The overlay visualization also reveals an emerging focus on governance and policy. Terms such as “governance” (2023), “risk” (2024), and “information quality” (2024) reflect a growing academic engagement with AI regulation, transparency, and trustworthiness [26]. The relatively recent inclusion of “trust” and “privacy” in lighter green and yellow tones indicates a shift toward interdisciplinary approaches, where legal scholars, ethicists, and social scientists collaborate on AI accountability frameworks [27,28].
The relationship between AI and healthcare ethics is visible through the blue and green connections between “health”, “health-care”, and “responsible AI”. The presence of these terms in earlier timeframes (2021–2023) suggests that initial AI ethics discussions were heavily focused on medical applications, patient data privacy, and the ethical dilemmas of AI-driven decision-making in clinical settings [29,30]. While this topic remains relevant, it appears that the ethical discourse in AI has since expanded to broader societal applications, including generative AI, governance, and fairness-aware systems.
The overlay visualization highlights the dynamic and expanding nature of ethical AI research. While bias, fairness, and governance remain central themes, recent years have witnessed a significant shift toward examining generative AI, interdisciplinary governance frameworks, and policy-driven approaches to AI accountability. This shift underscores the need for adaptive ethical frameworks that can balance innovation with responsibility, ensuring that AI remains aligned with evolving societal values and expectations [31].
Table 1 presents a structured overview of the key thematic terms in ethical AI research, their frequency of occurrence, link strength, and temporal trends. By analyzing the coefficients, we can identify patterns in the evolution of ethical AI discourse, its central debates, and emerging concerns.
The most frequently occurring term, artificial intelligence (26 occurrences, total link strength of 77), serves as the conceptual anchor of this research field. Its high connectivity with other key terms, such as ethics (10 occurrences, link strength of 29), bias (2 occurrences, link strength of 8), and responsible AI (3 occurrences, link strength of 5), highlights the centrality of ethical considerations in AI deployment. The relatively high citation score of ethical issues (63 citations, normalized score 2.2183) further emphasizes the continued academic focus on addressing AI-related moral dilemmas.
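A normalized citation score like the 2.2183 quoted above is conventionally computed as a document's citations divided by the average citations of all documents published in the same year, which corrects for older papers having had more time to accumulate citations. The sketch below assumes that convention (the one used by VOSviewer's "norm. citations" field) and uses illustrative numbers.

```python
def normalized_citations(docs: list[dict]) -> list[float]:
    """Citations divided by the mean citations of same-year documents."""
    by_year: dict[int, list[int]] = {}
    for d in docs:
        by_year.setdefault(d["year"], []).append(d["citations"])
    year_mean = {y: sum(c) / len(c) for y, c in by_year.items()}
    return [d["citations"] / year_mean[d["year"]] if year_mean[d["year"]] else 0.0
            for d in docs]
```

A score above 1.0 thus indicates a document cited more often than the average publication of its year.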
Recent trends suggest a growing scholarly interest in applied and interdisciplinary AI ethics. The term ChatGPT (5 occurrences, link strength of 17), with an average publication year of 2024.2, indicates that generative AI has become a major topic of debate only recently. Similarly, generative AI (3 occurrences, link strength of 8) has an average publication year of 2024.7, reinforcing the idea that discussions on the ethical implications of AI-generated content are relatively new but rapidly gaining traction.
Interdisciplinary collaborations (2 occurrences, link strength of 11, publication year 2024) suggest a shift toward cross-sectoral approaches in addressing AI ethics. The growing association of AI with fields like education (2 occurrences, link strength of 7) and social sciences (3 occurrences, link strength of 10) reflects an increasing interest in how AI ethics is integrated into academic research and policymaking.
Terms related to governance (2 occurrences, link strength of 8) and privacy (3 occurrences, link strength of 16) signal a sustained interest in regulatory frameworks and data protection. The relatively recent emergence of trust (3 occurrences, link strength of 11, publication year 2023.3) aligns with ongoing discussions on the need for transparent AI systems that build user confidence. Meanwhile, transparency (2 occurrences, link strength of 6) exhibits strong linkages with machine learning (7 occurrences, link strength of 16) and fairness-aware AI (2 occurrences, link strength of 4), reinforcing the importance of ethical algorithmic decision-making.
The term bibliometric analysis (4 occurrences, link strength of 11) suggests a methodological shift in AI ethics research, where scholars increasingly rely on data-driven assessments to map ethical concerns. Similarly, big data (5 occurrences, link strength of 19) continues to be a critical concept, linking AI’s ethical discourse to data governance and predictive analytics. Normative ethics (2 occurrences, link strength of 4, publication year 2024) indicates a persistent but lower-frequency theoretical focus. This suggests that while foundational ethical debates remain relevant, there is an increasing shift toward applied ethical AI discussions rather than purely philosophical explorations.
The overall trends point to key challenges that require further research. First, the relatively lower frequency of risk (2 occurrences, link strength of 3) compared to bias and trust suggests that while AI risks are recognized, their operationalization in ethical frameworks may still be underdeveloped. Additionally, the link strength of fairness (4 occurrences, link strength of 4) remains moderate, indicating that while fairness in AI is widely discussed, its practical integration into AI governance structures is still evolving.

4. Discussion

The integration of artificial intelligence (AI) into social sciences research has ignited both enthusiasm and deep ethical scrutiny, positioning scholars at a critical juncture between regulation and transformation. AI offers unprecedented opportunities for enhancing research methodologies, data analysis, and predictive modeling, yet it also raises concerns about fairness, transparency, bias, and governance. This study highlights the evolving discourse on ethical AI, revealing that AI ethics is no longer a static theoretical construct but an adaptive and dynamic field, shaped by technological advancements, interdisciplinary engagement, and global governance challenges [32].
While the study highlights emerging trends in AI ethics, a key limitation is the lack of direct engagement with international ethical standards set by organizations such as UNESCO, the OECD, the World Bank, and the Red Cross, which have developed governance frameworks for AI regulation. For instance, UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021) emphasizes transparency, accountability, and human rights-based approaches, whereas the OECD AI Principles focus on responsible AI innovation and fair regulatory mechanisms. The OECD AI Observatory also provides valuable insights into how countries implement these principles, offering a quantitative perspective on AI governance structures worldwide. Additionally, national regulations significantly impact ethical AI debates. For example, China’s regulatory measures on generative AI have influenced global discussions on misinformation and content moderation, while the EU AI Act has sought to establish a risk-based classification system for AI applications. The country-specific analysis of policy trends reveals that legal enforcement mechanisms vary widely, leading to different interpretations of fairness, bias mitigation, and AI transparency.
One of the core tensions in AI ethics research lies in the discrepancy between ethical principles and their practical implementation. While ethical guidelines exist, their enforcement remains inconsistent, leading to concerns about “ethics washing”, where organizations publicly commit to ethical AI principles without substantive action [33]. Scholars argue that key principles such as explainability, transparency, and fairness-aware AI systems often remain aspirational rather than enforceable, largely due to the lack of standardized methodologies for integrating ethics into AI design, deployment, and evaluation [34]. These challenges underscore the need to move beyond broad ethical frameworks toward context-sensitive, interdisciplinary approaches that account for real-world complexities [35].
A particularly urgent concern is the growing skepticism toward AI ethics as a regulatory tool. Some scholars suggest that, rather than safeguarding responsible AI use, AI ethics may be strategically used to delay or soften necessary regulatory interventions [36]. This concern has been exacerbated by the rise of generative AI models, such as ChatGPT, which introduce new risks related to AI-generated misinformation, biases in training data, and the automation of knowledge production [37]. The overlay visualization confirms that generative AI and interdisciplinary governance have emerged as dominant research themes in 2023–2024, signaling a shift toward applied AI ethics rather than purely theoretical discussions [38]. This transition reflects an increasing emphasis on regulatory and policy-driven approaches, as ethical AI discourse moves beyond traditional concerns of bias and fairness toward more pressing questions surrounding accountability, authorship, and the epistemological implications of AI-driven research.
The ethical landscape of AI in social sciences research has also become increasingly politicized. The debate over algorithmic fairness, bias, and accountability has moved beyond technical concerns to engage with broader socio-political issues, particularly regarding AI’s role in reinforcing systemic inequalities [39]. The network visualization highlights this shift, as privacy, governance, and policy-related terms form strong conceptual connections, suggesting that ethical AI is now recognized as a central policy issue rather than just an academic debate [40]. Some scholars advocate for a more explicitly political approach to AI ethics, arguing that discussions must incorporate perspectives from historically marginalized communities to counteract technological determinism and corporate-driven narratives about AI’s neutrality [41]. Without inclusive governance mechanisms, AI ethics risks becoming a performative exercise, reinforcing existing power imbalances rather than addressing them.
One of the most pressing concerns identified in this study is AI’s expanding role in higher education and scholarly knowledge production. The overlay visualization highlights a convergence between AI ethics, bibliometric analysis, and education, suggesting that AI is not only shaping research methodologies but also transforming academic integrity, authorship norms, and the credibility of scholarly outputs [42]. As generative AI tools become more prevalent, concerns arise regarding the automation of knowledge production and the epistemological validity of AI-generated content [43].
A key issue is that AI models are trained on datasets that may reflect historical biases, exclusionary epistemologies, or Western-centric perspectives. This raises critical questions about whose knowledge is prioritized, who has access to AI-generated insights, and whether AI is reinforcing dominant academic paradigms rather than challenging them. Addressing these concerns requires a more profound engagement with AI’s role in shaping research itself, rather than focusing solely on governance and fairness-aware AI systems [43].
Despite these challenges, this study also identifies emerging frameworks and solutions that could foster responsible AI governance. The network analysis reveals strong interconnections between “trust”, “responsible AI”, and “transparency”, suggesting that scholars are actively engaged in developing mechanisms for AI accountability and oversight [44]. Ethical AI is increasingly shifting toward a practice-oriented approach, where solutions such as embedded ethics, participatory AI design, and co-regulation with policymakers are gaining traction [45].
Rather than serving as passive gatekeepers of AI ethics, scholars are positioning themselves as active participants in shaping AI ecosystems that balance innovation with social responsibility. This transition reflects a broader recognition that ethics cannot be separated from AI design and implementation; instead, it must be embedded into AI development processes from the outset. Ethical considerations must be proactive and structural, rather than reactive and externally imposed.
In conclusion, this study emphasizes that ethical AI in social sciences research must move beyond compliance-driven models toward a more reflexive, interdisciplinary, and policy-oriented approach. The findings highlight the urgent need for contextual AI ethics, emphasizing the role of social scientists as both critics and co-creators of ethical AI systems. The increasing prominence of generative AI, interdisciplinary governance, and epistemological debates signals a growing demand to rethink AI ethics beyond static principles and toward dynamic, socially responsive frameworks [46].
Ultimately, the future of ethical AI in social sciences depends on whether scholars choose to act as gatekeepers, reinforcing existing ethical constraints, or as revolutionaries, pioneering new paradigms that redefine how AI interacts with society, knowledge production, and policymaking. The findings suggest that the evolving discourse on AI ethics must remain critical, inclusive, and adaptable, ensuring that AI serves as a catalyst for ethical progress rather than an amplifier of existing biases and inequalities [47].

5. Conclusions

The rapid integration of artificial intelligence (AI) in social sciences research has necessitated a critical examination of its ethical dimensions, as AI-driven methodologies become increasingly central to academic inquiry, policy development, and societal decision-making. This study has provided a comprehensive bibliometric analysis of ethical AI discourse, identifying governance, fairness, algorithmic bias, transparency, and generative AI as dominant research themes. The findings highlight a persistent tension between regulatory oversight and AI-driven innovation, positioning scholars at the intersection of ethical responsibility and technological transformation. While AI offers unprecedented opportunities for enhancing research methodologies, decision-making, and knowledge production, its implementation raises profound ethical questions that extend beyond theoretical discussions to practical applications in academia, governance, and public policy.
A key takeaway from this study is that AI ethics is not a static discipline but an evolving field shaped by interdisciplinary engagement, technological advancements, and global governance frameworks. The network visualization confirms that ethical AI discourse increasingly focuses on applied concerns, such as trust, privacy, transparency, and governance, rather than purely theoretical debates. The overlay visualization further illustrates a shift toward emerging ethical risks in generative AI, interdisciplinary policy coordination, and accountability structures in AI development and deployment. These findings suggest that social scientists are not merely gatekeepers enforcing ethical standards but also active participants shaping AI’s role in research and policy.
Despite these insights, this study has certain limitations that must be acknowledged. While bibliometric analysis is effective in mapping research trends, it does not account for contextual variations in ethical AI applications across diverse cultural, political, and institutional settings. Future research should adopt qualitative methodologies, such as in-depth interviews, case studies, and ethnographic research, to explore how policymakers, researchers, and practitioners navigate AI ethics in practice. Additionally, reliance on the Web of Science Core Collection may have excluded non-indexed and non-English publications, potentially limiting the diversity of perspectives captured in this analysis. Expanding future bibliometric studies to include regional, multilingual, and non-Western AI ethics discourses could provide a more comprehensive understanding of global AI governance challenges.
Moving forward, research on ethical AI in the social sciences must embrace a more interdisciplinary and reflexive approach. Future studies should examine how AI ethics evolves in response to emerging technologies, such as multimodal AI, federated learning, and neuro-symbolic AI models, which challenge existing ethical and regulatory frameworks. Additionally, as AI becomes increasingly embedded in knowledge production, authorship, and academic integrity, there is a growing need to critically assess the epistemological consequences of AI-generated content in scholarly research. Key areas that warrant further investigation include the following:
  • The governance of AI-generated content: How should academic institutions and policymakers regulate AI-generated publications, knowledge synthesis, and authorship attribution?
  • Bias and fairness-aware AI: How can AI ethics frameworks be designed to mitigate algorithmic bias, address historical inequalities, and promote inclusivity?
  • Regulatory challenges in AI policymaking: What policy mechanisms can balance AI innovation with ethical responsibility, particularly in the context of social sciences research and public decision-making?
This study highlights the urgent need for participatory AI ethics, ensuring that ethical frameworks reflect diverse stakeholder perspectives, particularly those of historically marginalized groups. AI ethics must be dynamic and adaptable, rather than constrained by static compliance models. As AI continues to influence decision-making structures, ethical guidelines must evolve alongside technological advancements, ensuring that AI serves societal interests rather than reinforcing existing power imbalances. The social sciences play a crucial role in bridging the gap between technological innovation and human-centered ethical considerations, ensuring that AI remains a tool for societal progress rather than an instrument of exclusion or control.
Building on the findings of this study, it is essential to translate ethical AI discourse into actionable strategies that inform both regulatory frameworks and future research directions, leading to well-grounded recommendations for policy and research:
  • Strengthening AI policy integration: Governments should align their national AI policies with international ethical guidelines (e.g., UNESCO, OECD, EU AI Act) to ensure harmonized regulatory oversight that mitigates risks associated with AI bias, fairness, and accountability;
  • Enhancing AI ethics education: Universities and research institutions should incorporate mandatory AI ethics training in social sciences and interdisciplinary fields, equipping scholars with the necessary skills to evaluate AI’s ethical implications in real-world contexts;
  • Developing participatory AI governance models: Policymakers should engage diverse stakeholders, including civil society organizations, legal experts, and marginalized communities, in AI ethics discussions to create more inclusive governance frameworks that reflect societal concerns;
  • Expanding AI risk assessment frameworks: AI regulatory bodies should move beyond compliance-based ethics audits to continuous monitoring systems that assess long-term societal impacts of AI deployment. This could involve cross-national comparisons of AI enforcement measures and tracking AI-driven biases in social sciences research;
  • Bridging the global AI ethics divide: Future research should prioritize regional case studies to document disparities in AI ethics implementation across different legal and institutional contexts. Expanding bibliometric studies to include non-English and non-indexed sources can enhance the inclusivity of global AI governance discussions.
These final reflections underscore the need for a paradigm shift in AI ethics from passive regulatory compliance to active ethical engagement. Rather than treating AI ethics as a checklist of principles, scholars must adopt flexible, context-driven ethical models that evolve in tandem with AI advancements. Ethical AI is not just a governance issue; it is a socio-political and epistemic concern that shapes the distribution of power, knowledge, and agency in an increasingly AI-driven world.
The trajectory of AI ethics in the social sciences will ultimately depend on how scholars choose to engage with these challenges. Will they act as gatekeepers, reinforcing existing constraints, or as revolutionaries who reshape the discourse, fostering AI’s potential for responsible and inclusive innovation? Either way, that engagement must remain critical, inclusive, and adaptable, ensuring that AI catalyzes ethical progress rather than amplifies existing biases and inequalities.
AI is no longer merely a supporting tool in social sciences research; it is actively transforming the way knowledge is produced, interpreted, and applied. As AI-driven methodologies become more deeply embedded in research and decision-making, scholars must critically examine AI’s ethical, social, and political implications. Moving forward, it is essential to ask the following:
  • Who defines AI ethics?
  • Whose interests does AI serve?
  • How can AI be leveraged to promote social justice, equity, and meaningful human-AI collaboration?
In this rapidly evolving landscape, social scientists must go beyond acting as ethical overseers and instead take an active role in shaping AI’s trajectory. AI’s ethical governance must reflect democratic values, foster inclusivity, and align with the broader goals of societal well-being. Only by embracing a critical, reflexive, and participatory approach can scholars ensure that AI remains a force for ethical progress rather than a tool for reinforcing existing inequities.

Author Contributions

Conceptualization, R.R., V.H. and O.T.; methodology, D.R., G.C., M.G.-A. and L.D.C.; validation, V.H., A.C. and T.D.; formal analysis, R.R., M.G.-A., T.D. and G.C.; investigation, O.T., V.H., D.R. and L.D.C.; resources, D.R., A.C. and T.D.; data curation, R.R., G.C. and L.D.C.; writing—original draft preparation, R.R., D.R. and O.T.; writing—review and editing, A.C., T.D. and V.H.; visualization, D.R., O.T. and M.G.-A.; supervision, V.H., A.C. and T.D.; project administration, R.R., G.C. and D.R.; funding acquisition, G.C., L.D.C. and D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Centre of Research Development and Innovation in Psychology of Aurel Vlaicu University of Arad (protocol code 76/09.12.2024).

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Whittlestone, J.; Clarke, S. AI Challenges for Society and Ethics. In The Oxford Handbook of AI Governance; Oxford University Press: Oxford, UK, 2022.
  2. Morley, J.; Floridi, L.; Kinsey, L.; Elhalal, A. From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics 2020, 26, 2141–2168.
  3. Raji, I.D.; Scheuerman, M.K.; Amironesei, R. You can't sit with us: Exclusionary pedagogy in AI ethics education. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Toronto, ON, Canada, 3–10 March 2021; pp. 515–525.
  4. Iphofen, R.; Kritikos, M. Regulating artificial intelligence and robotics: Ethics by design in a digital society. Contemp. Soc. Sci. 2021, 16, 170–184.
  5. d’Aquin, M.; Troullinou, P.; O’Connor, N.E.; Cullen, A.; Faller, G.; Holden, L. Towards an “ethics by design” methodology for AI research projects. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 2–3 February 2018; pp. 54–59.
  6. Farina, M.; Zhdanov, P.; Karimov, A.; Lavazza, A. AI and society: A virtue ethics approach. AI Soc. 2024, 39, 1127–1140.
  7. Hallamaa, J.; Kalliokoski, T. AI ethics as applied ethics. Front. Comput. Sci. 2022, 4, 776837.
  8. Benefo, E.O.; Tingler, A.; White, M.; Cover, J.; Torres, L.; Broussard, C.; Shirmohammadi, A.; Pradhan, A.K.; Patra, D. Ethical, legal, social, and economic (ELSE) implications of artificial intelligence at a global level: A scientometrics approach. AI Ethics 2022, 2, 667–682.
  9. Stahl, B.C. Concepts of ethics and their application to AI. In Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies; Springer: Cham, Switzerland, 2021; pp. 19–33.
  10. Dubber, M.D.; Pasquale, F.; Das, S. The Oxford Handbook of Ethics of AI; Oxford University Press: Oxford, UK, 2020.
  11. Lauer, D. You cannot have AI ethics without ethics. AI Ethics 2021, 1, 21–25.
  12. Georgieva, I.; Lazo, C.; Timan, T.; van Veenstra, A.F. From AI ethics principles to data science practice: A reflection and a gap analysis based on recent frameworks and practical experience. AI Ethics 2022, 2, 697–711.
  13. Bakiner, O. What do academics say about artificial intelligence ethics? An overview of the scholarship. AI Ethics 2023, 3, 513–525.
  14. Berendt, B.; Büchler, M.; Rockwell, G. Is it research or is it spying? Thinking-through ethics in Big Data AI and other knowledge sciences. KI-Künstl. Intell. 2015, 29, 223–232.
  15. Schiff, D.; Biddle, J.; Borenstein, J.; Laas, K. What's next for AI ethics, policy, and governance? A global overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–9 February 2020; pp. 153–158.
  16. Reinhardt, K. Trust and trustworthiness in AI ethics. AI Ethics 2023, 3, 735–744.
  17. Astobiza, A.M.; Toboso, M.; Aparicio, M.; López, D. AI ethics for sustainable development goals. IEEE Technol. Soc. Mag. 2021, 40, 66–71.
  18. Al-Zahrani, A.M.; Alasmari, T.M. Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanit. Soc. Sci. Commun. 2024, 11, 912.
  19. Hosseini, M.; Wieczorek, M.; Gordijn, B. Ethical issues in social science research employing big data. Sci. Eng. Ethics 2022, 28, 29.
  20. Balmer, A. A sociological conversation with ChatGPT about AI ethics, affect and reflexivity. Sociology 2023, 57, 1249–1258.
  21. Willem, T.; Fritzsche, M.C.; Zimmermann, B.M.; Sierawska, A.; Breuer, S.; Braun, M.; Buyx, A. Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research. Sci. Eng. Ethics 2025, 31, 3.
  22. Castelfranchi, C. For a Science-Oriented, Socially Responsible, and Self-Aware AI: Beyond Ethical Issues. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; Volume 1, pp. 1–4.
  23. Peterson, C.; Broersen, J. Understanding the Limits of Explainable Ethical AI. Int. J. Artif. Intell. Tools 2024, 33, 2460001.
  24. Ferreyra, N.E.D.; Aïmeur, E.; Hage, H.; Heisel, M.; van Hoogstraten, C.G. Persuasion Meets AI: Ethical Considerations for the Design of Social Engineering Countermeasures. arXiv 2020, arXiv:2009.12853.
  25. Usher, M.; Barak, M. Unpacking the Role of AI Ethics in Online Education for Science and Engineering Students. Int. J. STEM Educ. 2024, 11, 35.
  26. Borger, J.G.; Ng, A.P.; Anderton, H.; Ashdown, G.W.; Auld, M.; Blewitt, M.E.; Naik, S.H. Artificial Intelligence Takes Center Stage: Exploring the Capabilities and Implications of ChatGPT and Other AI-Assisted Technologies in Scientific Research and Education. Immunol. Cell Biol. 2023, 101, 923–935.
  27. Hawkins, W.; Mittelstadt, B. The Ethical Ambiguity of AI Data Enrichment: Measuring Gaps in Research Ethics Norms and Practices. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 261–270.
  28. Yuan, Q.; Chen, T.; Yang, Q.; Zhang, Z. Unveiling Ethical AI: An In-Depth Bibliometric and Visual Exploration. Indian J. Pharm. Educ. Res. 2024, 58, 1084–1101.
  29. Sinclair, D.; Dowdeswell, T.; Goltz, N. Artificially Intelligent Sex Bots and Female Slavery: Social Science and Jewish Legal and Ethical Perspectives. Inf. Commun. Technol. Law 2023, 32, 328–355.
  30. Hiratsuka, V.Y.; Beans, J.A.; Reedy, J.; Yracheta, J.M.; Peercy, M.T.; Saunkeah, B.; Spicer, P.G. Fostering Ethical, Legal, and Social Implications Research in Tribal Communities: The Center for the Ethics of Indigenous Genomic Research. J. Empir. Res. Hum. Res. Ethics 2020, 15, 271–278.
  31. Bojić, L.; Cinelli, M.; Ćulibrk, D.; Delibašić, B. CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment. Eur. J. Futures Res. 2024, 12, 15.
  32. Hauer, T. Importance and Limitations of AI Ethics in Contemporary Society. Humanit. Soc. Sci. Commun. 2022, 9, 1–8.
  33. van Maanen, G. AI Ethics, Ethics Washing, and the Need to Politicize Data Ethics. Digit. Soc. 2022, 1, 9.
  34. Bryson, J.J. The Artificial Intelligence of the Ethics of Artificial Intelligence. In The Oxford Handbook of Ethics of AI; Oxford University Press: Oxford, UK, 2020; p. 25.
  35. Bleher, H.; Braun, M. Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Sci. Eng. Ethics 2023, 29, 21.
  36. Munn, L. The Uselessness of AI Ethics. AI Ethics 2023, 3, 869–877.
  37. Brown, B.A.; Heitner, K.L. Ethical Considerations for Generative AI in Social Science Research. In Generative AI and Implications for Ethics, Security, and Data Management; IGI Global: Hershey, PA, USA, 2024; pp. 155–207.
  38. Loi, D.; Wolf, C.T.; Blomberg, J.L.; Arar, R.; Brereton, M. Co-Designing AI Futures: Integrating AI Ethics, Social Computing, and Design. In Proceedings of the Companion Publication of the 2019 on Designing Interactive Systems Conference, San Diego, CA, USA, 23–28 June 2019; pp. 381–384.
  39. Saheb, T. Ethically Contentious Aspects of Artificial Intelligence Surveillance: A Social Science Perspective. AI Ethics 2023, 3, 369–379.
  40. Leslie, D. The Ethics of Computational Social Science. In Handbook of Computational Social Science for Policy; Springer International Publishing: Cham, Switzerland, 2023; pp. 57–104.
  41. Baum, S.D. Social Choice Ethics in Artificial Intelligence. AI Soc. 2020, 35, 165–176.
  42. Kazim, E.; Koshiyama, A.S. A High-Level Overview of AI Ethics. Patterns 2021, 2, 9.
  43. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120.
  44. Hagerty, A.; Rubinov, I. Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence. arXiv 2019, arXiv:1907.07892.
  45. Powers, T.M.; Ganascia, J.G. The Ethics of the Ethics of AI. In The Oxford Handbook of Ethics of AI; Oxford University Press: Oxford, UK, 2020; pp. 25–51.
  46. Gavrila-Ardelean, M.; Gavrila-Ardelean, L. Technology and the Future. In Actes du 4e Colloque International COMSYMBOL IARSIC-ESSACHESS; Tudor, M.A., Bratosin, S., Eds.; Éditions IARSIC: Les Arcs, France, 2018; Volume 1, pp. 76–83. Available online: https://catalogue.bnf.fr/ark:/12148/cb456267062 (accessed on 6 February 2025).
  47. Rekow-Fond, L.; Gavrila-Ardelean, M.; Gavrila-Ardelean, L.; Fond-Harmant, L. For Scientific Ethics: Interview with People Whose Opinions Are Not Usually Taken into Consideration: Crossed Viewpoints. Agora Psycho-Pragmatica 2019, 13, 135–146.
Figure 1. Network visualization.
Figure 2. Overlay visualization.
Table 1. Key concepts and their network characteristics in ethical AI research.
| ID | Label | x | y | Cluster | Links | Total Link Strength | Occurrences | Avg. Pub. Year | Avg. Citations | Avg. Norm. Citations |
|----|-------|---|---|---------|-------|---------------------|-------------|----------------|----------------|----------------------|
| 2 | acceptance | 0.2874 | 0.2243 | 3 | 7 | 7 | 2 | 2022 | 19 | 5.4826 |
| 6 | age | −0.5764 | 0.0198 | 2 | 10 | 11 | 2 | 2023 | 4.5 | 0.4922 |
| 7 | agency | 0.2872 | 0.8636 | 3 | 6 | 6 | 2 | 2022.5 | 4 | 0.3994 |
| 21 | alexa | 0.2707 | 0.9711 | 3 | 6 | 7 | 2 | 2023.5 | 2.5 | 0.2353 |
| 27 | artificial intelligence | −0.0692 | −0.1954 | 1 | 34 | 77 | 26 | 2022.6154 | 14.6154 | 0.775 |
| 36 | bias | 0.6161 | −0.365 | 1 | 7 | 8 | 2 | 2022.5 | 17.5 | 1.8912 |
| 37 | bibliometric analysis | −0.5172 | 0.4988 | 2 | 6 | 11 | 4 | 2023.5 | 34.25 | 1.247 |
| 38 | big data | 0.1849 | −0.3011 | 1 | 2 | 19 | 5 | 2022.2 | 10.2 | 0.7778 |
| 48 | chatgpt | −0.8109 | 0.2747 | 2 | 10 | 17 | 5 | 2024.2 | 8.6 | 2.3354 |
| 89 | education | −0.8536 | 0.0332 | 2 | 6 | 7 | 2 | 2023.5 | 8 | 0.7529 |
| 93 | elsi | −0.384 | −0.1262 | 2 | 10 | 11 | 2 | 2021 | 9.5 | 0.6682 |
| 96 | ethical artificial intelligence | 0.8957 | −0.2273 | 1 | 6 | 9 | 5 | 2023 | 1.4 | 0.3077 |
| 99 | ethical concerns | 0.4465 | 0.7413 | 3 | 8 | 8 | 2 | 2023.5 | 3 | 0.3866 |
| 102 | ethical implications | −0.7876 | 0.5054 | 2 | 5 | 8 | 2 | 2024.5 | 2.5 | 1.2566 |
| 105 | ethical issues | 0.7099 | −0.6288 | 1 | 3 | 3 | 2 | 2020 | 63 | 2.2183 |
| 108 | ethics | −0.1335 | −0.3228 | 1 | 18 | 29 | 10 | 2022.4 | 7.6 | 0.8418 |
| 118 | fairness | 0.9162 | −0.5404 | 1 | 4 | 4 | 2 | 2023.5 | 2 | 0.2925 |
| 119 | fairness-aware | 1.007 | −0.1336 | 1 | 2 | 4 | 2 | 2022.5 | 0 | 0 |
| 127 | gender | 0.3505 | 0.982 | 3 | 6 | 7 | 2 | 2023.5 | 2.5 | 0.2353 |
| 129 | generative ai | −0.9194 | 0.285 | 2 | 7 | 8 | 3 | 2024.6667 | 0.3333 | 0.7675 |
| 135 | governance | −0.3895 | −0.8535 | 1 | 7 | 8 | 2 | 2023 | 8 | 0.9716 |
| 139 | health | −0.2247 | −0.2097 | 1 | 16 | 20 | 3 | 2022 | 3.6667 | 0.3516 |
| 140 | health-care | 0.3892 | −0.3911 | 1 | 9 | 10 | 3 | 2020.6667 | 56 | 2.7629 |
| 149 | information quality | 0.4181 | 0.496 | 3 | 5 | 5 | 2 | 2024 | 18.5 | 5.5987 |
| 150 | innovation | −0.3849 | 0.2022 | 2 | 15 | 16 | 3 | 2023.6667 | 4.3333 | 1.065 |
| 153 | intelligence | 0.175 | 0.4398 | 3 | 4 | 4 | 2 | 2024 | 18 | 5.4474 |
| 156 | interdisciplinary collaborations | −0.3224 | −0.7037 | 1 | 9 | 11 | 2 | 2024 | 0.5 | 0.1513 |
| 177 | machine learning | 0.7345 | −0.0468 | 1 | 11 | 16 | 7 | 2022 | 6.5714 | 0.8502 |
| 191 | nanotechnology | −0.4258 | −0.4784 | 1 | 14 | 18 | 3 | 2023.3333 | 3.6667 | 0.5299 |
| 198 | normative ethics | 1.0011 | −0.5323 | 1 | 4 | 4 | 2 | 2024 | 0.5 | 0.1513 |
| 218 | politics | −0.2595 | −0.8662 | 1 | 7 | 10 | 3 | 2024 | 0.6667 | 0.2018 |
| 222 | privacy | −0.0295 | 0.5481 | 3 | 15 | 16 | 3 | 2021.6667 | 6 | 0.5319 |
| 224 | program | −0.3332 | −0.0275 | 2 | 10 | 11 | 2 | 2021 | 9.5 | 0.6682 |
| 225 | protection | −0.7409 | −0.1259 | 2 | 11 | 12 | 2 | 2021.5 | 10 | 0.5953 |
| 235 | research | −1 | −0.0543 | 2 | 9 | 11 | 3 | 2023 | 9 | 0.904 |
| 236 | research ethics | −1.0684 | −0.2271 | 2 | 3 | 4 | 2 | 2022 | 7 | 0.2443 |
| 239 | responsible ai | 0.8728 | −0.607 | 1 | 5 | 5 | 3 | 2020.6667 | 0.6667 | 0.4342 |
| 244 | risk | 0.54 | 0.5632 | 3 | 3 | 3 | 2 | 2024 | 1 | 0.3026 |
| 250 | science | −0.1621 | 0.0452 | 2 | 1 | 30 | 7 | 2023 | 13.5714 | 2.0845 |
| 270 | social science | −0.6657 | −0.3054 | 1 | 9 | 10 | 3 | 2024 | 0.6667 | 0.3961 |
| 293 | technology | −0.0024 | −0.5571 | 1 | 12 | 18 | 5 | 2022.8 | 1.2 | 0.1402 |
| 300 | transparency | 0.4232 | 0.875 | 3 | 6 | 6 | 2 | 2023 | 22 | 2.0706 |
| 301 | trust | 0.5348 | 0.2581 | 3 | 10 | 11 | 3 | 2023.3333 | 11 | 1.2675 |
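Reading Table 1 programmatically makes the overlay trend concrete. The sketch below sorts a handful of rows transcribed by hand from the table (label, occurrences, average publication year, average normalized citations) by average publication year; the latest averages mark the emerging themes discussed in the text. This is an illustrative excerpt, not a full machine-readable export of the table.

```python
# Selected rows from Table 1: (label, occurrences, avg. pub. year,
# avg. norm. citations). Later average years indicate emerging themes
# in the overlay visualization.
rows = [
    ("ethical issues", 2, 2020.0, 2.2183),
    ("ethics", 10, 2022.4, 0.8418),
    ("artificial intelligence", 26, 2022.6154, 0.775),
    ("governance", 2, 2023.0, 0.9716),
    ("chatgpt", 5, 2024.2, 2.3354),
    ("generative ai", 3, 2024.6667, 0.7675),
]

# Sort newest-first: the top entries are the emerging research fronts.
emerging = sorted(rows, key=lambda r: r[2], reverse=True)
for label, occ, year, norm_cit in emerging[:3]:
    print(f"{label:<24} avg. year {year:.2f}  norm. citations {norm_cit:.2f}")
```

Sorting these rows places generative AI and ChatGPT ahead of older, citation-heavy terms such as "ethical issues", which is exactly the gatekeeper-versus-revolutionary tension the overlay visualization depicts.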

Share and Cite

MDPI and ACS Style

Runcan, R.; Hațegan, V.; Toderici, O.; Croitoru, G.; Gavrila-Ardelean, M.; Cuc, L.D.; Rad, D.; Costin, A.; Dughi, T. Ethical AI in Social Sciences Research: Are We Gatekeepers or Revolutionaries? Societies 2025, 15, 62. https://doi.org/10.3390/soc15030062

