1. Introduction
In recent years, artificial intelligence has grown to influence numerous aspects of daily life and professional domains, with OpenAI’s ChatGPT standing out as one of the most impactful tools. First launched in November 2022, ChatGPT uses advanced generative pre-trained transformer (GPT) models to deliver human-like conversational responses, enabling diverse applications. With over 200 million weekly users worldwide, its rapid adoption underscores the need for a critical examination of its implications and potential. ChatGPT’s versatility allows it to produce essays, solve mathematical problems, compose music, generate code, and much more, rendering it invaluable across education, healthcare, business, and creative industries. However, its widespread use also raises ethical, environmental, and social concerns that demand careful evaluation.
This paper aims to reflect on the following guiding question: “How is ChatGPT reshaping various sectors and disciplines, and what are the associated benefits, challenges, and future implications of its widespread adoption?” It also aims to systematically review the growing body of literature on ChatGPT, focusing on its applications, limitations, and future potential. To provide a structured analysis, research was categorised into six key themes: sustainability, health, education, work, social media, and energy. These themes were derived through a combination of internal brainstorming, systematic review consultations, and leveraging ChatGPT itself to identify major trends in its own academic discourse.
For each theme, the paper reviews the most relevant literature, prioritising either all available studies or the 20 most cited articles to ensure a focus on influential research. The analysis examines consensus areas, controversies, and recommendations for future research or policy initiatives. Through this comprehensive approach, the paper seeks to offer insights into how ChatGPT is reshaping various domains and to contribute to the ongoing dialogue about its responsible development and use.
This paper provides a comprehensive thematic synthesis of ChatGPT’s multifaceted impacts, offering a critical evaluation of its applications, limitations, and societal implications. By categorising existing research into six key themes—sustainability, health, education, work, social media, and energy—this study fills a significant gap in the literature. Unlike prior studies that focus on isolated applications, this paper integrates findings across domains to provide a holistic understanding of ChatGPT’s role in reshaping modern society.
One of the paper’s key contributions is its emphasis on actionable solutions for addressing the challenges associated with ChatGPT’s adoption. For instance, it identifies specific strategies for mitigating biases, improving accuracy, and reducing energy consumption, providing a roadmap for more ethical and sustainable deployment. Additionally, the paper highlights emerging trends, such as the rise of AI-driven decision-making and the democratisation of creative processes, situating ChatGPT within broader societal transformations.
The paper begins with an overview of ChatGPT, detailing its capabilities, development, and rapid adoption. It then delves into the methodology used for this review, describing how relevant studies were identified and categorised. The subsequent sections analyse ChatGPT’s role in each of the six identified themes, highlighting its benefits and drawbacks in each domain. Finally, the paper concludes by synthesising the findings and offering recommendations for future research, ethical frameworks, and policy measures to ensure ChatGPT’s sustainable and equitable integration into society.
Background on ChatGPT
ChatGPT is a large language model which interacts with users in a conversational way, launched by OpenAI on 30 November 2022 [
1]. Trained on a diverse data set drawn from online samples of human writing and code, the programme can produce outputs that mimic the quality of actual human responses when given prompts by users. The growing popularity of the service is highlighted by it having reached over 100 million users by January 2023, making it the fastest-growing consumer application to date [
2].
ChatGPT uses a series of ‘generative pre-trained transformer (GPT)’ models and is developed for conversational applications using human feedback [
3]. The developers used a process of ‘fine-tuning’, an approach to transfer learning in which pre-trained models are trained on new data. For example, when using RLHF (reinforcement learning from human feedback), the developers ranked responses the chatbot had made in conversations in order to fine-tune the model [
4].
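To make the fine-tuning step more concrete, the sketch below shows the pairwise reward-model objective commonly described for RLHF, in which human rankings of two candidate responses become a training signal. This is a generic, illustrative formulation and not OpenAI’s actual code or model.

```python
# Minimal sketch of the pairwise reward-model objective used in RLHF.
# Illustrative only: OpenAI's training code and models are not public.
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push the reward of the human-preferred
    response above the reward of the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Example with dummy scalar rewards for a batch of ranked response pairs.
chosen = torch.tensor([1.2, 0.4, 0.9])     # rewards for preferred responses
rejected = torch.tensor([0.3, 0.5, -0.1])  # rewards for rejected responses
print(float(reward_model_loss(chosen, rejected)))
# The loss falls as the reward model learns to score preferred responses higher.
```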
Since its initial development, the developers have released new versions of the chatbot [
The first version was ‘Legacy ChatGPT-3.5’, released in November 2022. In 2023, this version was updated to the default GPT-3.5, which offered better response accuracy. In March 2023, GPT-4 was introduced alongside a paid subscription, with enhanced accuracy and detail. Most recently, in May 2024, GPT-4o was released, capable of accepting various forms of input such as images, audio, and video, with quicker response times and greater conversational fluidity.
The chatbot is extremely versatile when it comes to mimicking human output. For example, it can be used for producing essays, composing music, writing poetry, translating, solving mathematical problems, and much more [
6,
7]. These wide-ranging abilities mean that it can be integrated into a diverse set of roles and functions within firms and government. It has so far been used as a digital helper on consumer shopping websites, as a medical education tool, and as an aid to management consultants [8]. As models become more specialised and institutions adjust, it will likely be adopted in increasingly varied settings.
2. Materials and Methods
To conduct a comprehensive analysis of the current research on ChatGPT across various themes, we employed a systematic database search using the Web of Science. This database was chosen due to its extensive coverage of over 34,000 journals, offering a broad range of citation index databases that encompass interdisciplinary international studies.
Our systematic review involved searching for the term “ChatGPT” within the abstracts of international peer-reviewed articles. We focused on abstracts as we assumed that if “ChatGPT” was present in the title or listed as a keyword, it would necessarily appear in the article’s abstract as well. This approach allowed us to compile a relevant and focused list of articles for further analysis.
As presented in
Figure 1 and in
Table 1 below, after retrieving the initial list of articles containing “ChatGPT” in their abstracts, we performed sub-searches using a set of thematic keywords. These keywords were specifically selected to explore the intersection of ChatGPT with six identified themes: sustainability, health, education, work, social media, and energy. The thematic keywords were developed through a combination of internal brainstorming, consulting ChatGPT for common themes in ChatGPT-related literature, and reviewing existing systematic reviews on similar topics.
For each theme, we searched for combinations of “ChatGPT” with the corresponding keyword(s) (e.g., “ChatGPT + sustainability”). In cases where a keyword search yielded more than 20 articles, we further refined our selection by filtering to the 20 most cited articles, ensuring that we focused on the most influential and widely recognised research. These articles were then meticulously analysed to extract and evaluate arguments concerning the advantages and drawbacks of ChatGPT in each thematic area.
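As an illustration of this selection logic, the sketch below filters a set of retrieved records to the 20 most cited per theme. The record structure, keyword lists, and matching rule are our own simplified assumptions; they approximate, rather than reproduce, the Web of Science interface and export format.

```python
# Illustrative sketch of the article-selection step: for each theme, combine
# "ChatGPT" with thematic keywords and keep the 20 most cited records.
# Record fields and query strings are assumptions, not Web of Science output.
from typing import Dict, List

THEMES = {
    "sustainability": ["sustainability"],
    "health": ["health"],
    "education": ["education"],
    "work": ["work"],
    "social media": ["social media"],
    "energy": ["energy"],
}

def select_articles(records: List[Dict], keywords: List[str], top_n: int = 20) -> List[Dict]:
    """Keep records whose abstract mentions ChatGPT and any thematic keyword,
    then return the top_n most cited."""
    hits = [
        r for r in records
        if "chatgpt" in r["abstract"].lower()
        and any(k in r["abstract"].lower() for k in keywords)
    ]
    hits.sort(key=lambda r: r.get("citations", 0), reverse=True)
    return hits[:top_n]

# Usage with a dummy record:
records = [{"title": "Example", "abstract": "ChatGPT and education ...", "citations": 12}]
per_theme = {theme: select_articles(records, kws) for theme, kws in THEMES.items()}
```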
Following the thematic analysis, we summarised the key findings from these articles, highlighting the predominant perspectives on ChatGPT’s impact within each domain. This synthesis provided a structured overview of the current academic discourse surrounding ChatGPT, offering insights into both its potential benefits and limitations across different contexts.
We did not limit our search by geographic region, as part of our objective was to map the global distribution of research on ChatGPT. To achieve this, we utilised the ‘Countries/Regions’ search tool within the Web of Science, which provided us with a breakdown of the number of articles published per country or region. The location data was based on the addresses of the authors listed in the articles, meaning that a single article could be associated with multiple countries or regions if it had co-authors from different locations.
Finally, our search and study took place in summer 2024 and were then updated in January 2025. As shown in
Figure 2 and
Table 2 below, there has been a growing number of publications on the topic over time, especially in 2024. Our final search was restricted to studies published between 1 January 2022 and 31 December 2024, aligning with the period surrounding the launch of ChatGPT by OpenAI in late 2022. This timeframe reflects the rapid evolution and adoption of ChatGPT since its launch and ensured that the review captured the most up-to-date research and discussions relevant to its ongoing development and application. The six themes were selected based on their prominence in existing discourse and their significance in understanding ChatGPT’s multifaceted impacts across diverse sectors and disciplines.
While this systematic review provides a comprehensive analysis of ChatGPT-related literature, several limitations must be acknowledged. The reliance on the Web of Science database may exclude relevant studies published in non-indexed journals or grey literature, potentially narrowing the scope of the findings. Additionally, the emphasis on highly cited articles could introduce bias, as these studies may not fully capture emerging or underexplored perspectives.
3. Results
3.1. Sustainability
The relationship between ChatGPT and environmental sustainability is multifaceted, presenting both opportunities and challenges. On one hand, ChatGPT holds significant potential to advance sustainability efforts through predictive analytics, education, and process optimisation. On the other hand, its operational demands, including high energy consumption and greenhouse gas emissions, raise concerns about its environmental footprint. This duality underscores the need for a balanced approach to leveraging ChatGPT’s capabilities while mitigating its ecological impact.
ChatGPT’s ability to analyse large datasets and generate insights makes it a valuable tool for sustainability initiatives. For instance, GPT models have demonstrated effectiveness in predicting energy price trends, which can enhance energy security and support climate goals. Medina and Heredia [
9] found that GPT-based models accurately forecasted electricity price fluctuations in Spain, enabling more efficient energy market operations and better alignment with renewable energy integration. Such predictive capabilities can empower policymakers and businesses to make data-driven decisions that reduce carbon emissions and promote sustainable energy practices.
Moreover, ChatGPT can optimise supply chain operations, a critical area for reducing environmental degradation. By generating sustainability reports, predicting demand, and streamlining communication, ChatGPT can help manufacturers and suppliers minimise waste and improve resource efficiency [
10]. For example, AI-driven demand forecasting can reduce overproduction and excess inventory, which are major contributors to environmental waste in global supply chains. Additionally, ChatGPT’s ability to automate project planning and documentation can reduce the energy consumption of data centres, which are traditionally energy-intensive [
11].
Beyond operational efficiency, ChatGPT can play a pivotal role in raising awareness and educating the public about environmental issues. Its conversational interface makes it an accessible tool for disseminating information on climate change, sustainable practices, and conservation efforts. For instance, ChatGPT can be integrated into educational platforms to provide personalized learning experiences on sustainability topics, fostering greater environmental literacy [
12].
Despite its potential benefits, the environmental costs of deploying ChatGPT and similar AI models cannot be overlooked. The training and operation of large language models (LLMs) require substantial computational resources, leading to significant energy consumption and carbon emissions. Recent studies estimate that training a single GPT-3 model consumes approximately 1,287,000 kWh of energy, equivalent to 552 tonnes of CO2 emissions [11]. Furthermore, the energy required to generate a single AI-generated image is comparable to charging a smartphone, highlighting the resource intensity of AI systems [13].
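The training figures cited above imply a particular grid carbon intensity, which can be verified with simple arithmetic; the short calculation below reproduces it (the per-image and smartphone-charging comparisons are not recomputed here).

```python
# Back-of-the-envelope check of the GPT-3 training-emissions figures cited above [11].
energy_kwh = 1_287_000      # estimated energy to train GPT-3
emissions_tonnes = 552      # estimated associated CO2 emissions

intensity_kg_per_kwh = emissions_tonnes * 1000 / energy_kwh
print(f"Implied grid carbon intensity: {intensity_kg_per_kwh:.3f} kg CO2/kWh")
# ~0.429 kg CO2/kWh, i.e. roughly the intensity of a fossil-heavy electricity mix.
```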
The environmental impact of AI is expected to grow as adoption increases. By 2027, the total energy consumption of AI systems could rival that of entire countries, such as Argentina or the Netherlands [
14]. This raises critical questions about the sustainability of AI technologies, particularly as their use becomes more widespread across industries. The cooling requirements for data centres housing these models further exacerbate their environmental footprint, as they demand significant water and energy resources [
15].
To harness ChatGPT’s potential for sustainability while addressing its environmental drawbacks, a multi-pronged approach is necessary. First, regulatory frameworks must be established to ensure that AI development prioritises energy efficiency and carbon neutrality. For example, guidelines could mandate the use of renewable energy sources for training and operating AI models, as well as the adoption of energy-efficient algorithms [
14]. Second, advancements in AI, such as the development of low-power processors and optimised training techniques, can reduce the energy demands of LLMs [
16].
Additionally, interdisciplinary collaboration between AI researchers, environmental scientists, and policymakers is essential to develop sustainable AI practices. For instance, the integration of AI with green technologies, such as smart grids and energy-efficient data centres, can mitigate its environmental impact. Public and private sector initiatives, such as the Partnership on AI’s Climate Change Working Group, are already exploring ways to align AI development with global sustainability goals.
3.2. Social Media
As the use of social media has risen sharply, especially since the pandemic in 2020, whether ChatGPT is beneficial for social media has been debated. While there seem to be many positive uses, such as content analysis and the detection of toxic behaviour, the chatbot can also be argued to be problematic for social media. From harmful biases to inaccuracies, the use of ChatGPT could create more problems than it solves on social media.
ChatGPT has many uses for those on social media platforms. Firstly, it can be used for content analysis, which is important for brands and individuals looking to track consumer behaviours and sentiment towards them. For example, a content creator can use ChatGPT to analyse whether their followers prefer one kind of content over another, allowing them to tailor their output to their audience more accurately. Content analysis has traditionally been performed by human labour or supervised machine learning techniques, which are often argued to be expensive, laborious, and inefficient; language models like ChatGPT, by contrast, are argued to analyse large volumes of data with ‘higher accuracy and depth’.
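A minimal sketch of what such LLM-based content analysis might look like in practice is given below. The prompt wording, label set, and the `call_llm` placeholder are our own illustrative assumptions rather than a documented workflow.

```python
# Illustrative sketch of LLM-assisted content analysis of social media posts.
# `call_llm` is a placeholder for whichever chat-completion API is used.
from collections import Counter
from typing import Callable, List

LABELS = ("positive", "negative", "neutral")

def classify_sentiment(post: str, call_llm: Callable[[str], str]) -> str:
    prompt = (
        "Classify the sentiment of the following social media post as "
        "positive, negative, or neutral. Reply with one word.\n\n" + post
    )
    answer = call_llm(prompt).strip().lower()
    return answer if answer in LABELS else "neutral"

def summarise_feedback(posts: List[str], call_llm: Callable[[str], str]) -> Counter:
    """Aggregate sentiment labels so a creator can see what their audience prefers."""
    return Counter(classify_sentiment(p, call_llm) for p in posts)

# Usage with a stubbed model for testing:
fake_llm = lambda prompt: "positive"
print(summarise_feedback(["Loved this video!", "Great edit"], fake_llm))
```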
The chatbot can also be used to detect toxic and harmful behaviour on and with social media, which has been rising [
17,
18]. The first issue it can address is the rise in emotional and mental health problems arising from problematic social media use. Users can share their confusion and anxieties about social media with the chatbot, which can provide emotional support and suggest solutions [
19]. Furthermore, ChatGPT can monitor social media usage by analysing posts, content, frequency, and time periods spent on platforms [
19]. By doing so, the chatbot is able to recognise and address problematic content on social media. Although costly and time-consuming to develop, this should reduce the amount of harmful content which is polarising communities, inciting hate, and negatively impacting individuals [
20].
While ChatGPT could provide such benefits to the world of social media, the chatbot may not be able to fully address concerns such as harmful content and may itself lead to adverse implications. For example, it may flag content as ‘toxic’ without accurately understanding context (sarcasm, humour, cultural subtleties, etc.) [
20]. This could cause concern as it may infringe upon freedom of speech or cause social media participation to decline. Additionally, there may also be concerns with how AI is used for social media and its conflict with privacy, especially if personal content is being monitored and scanned [
20].
3.3. Education
In education, there is broad consensus that ChatGPT will be transformative in teaching, learning, and academic research [
21,
22]. Discussion broadly centres on how ChatGPT will be incorporated into educational processes, as well as how inaccuracies and the disadvantages of ChatGPT can be averted [
23].
ChatGPT holds potential for education for numerous reasons. Its human-like responses render it well suited as a personalised training tool, able to answer queries, generate learning material at different difficulty levels to suit the user, and provide feedback. ChatGPT also has generative capacities: it can produce course materials for educators, thereby decreasing teacher workload. Further, it has potential to facilitate international education through its fluency in various languages, as well as language teaching directly [
12].
The AI assistant can also change the world of research. If researchers do not have to spend as long writing up notes, translating papers, or proofreading, their capacity increases, leaving more opportunity to produce valuable research output [
24].
Concerns about the implications of ChatGPT in education fall into three areas. Firstly, there is the risk of informational inaccuracies, which degrade the quality of the learning tool or the piece of research and erode trust in the tool. This risk is exacerbated by ChatGPT’s tendency to use confident language regardless of accuracy. In research in particular, there is potential for trust to be undermined, which might be especially acute if actors intentionally produce misleading or malicious research [
25].
Secondly, ChatGPT creates new scope for plagiarism in education, which entails a host of legal, ethical, and copyright issues, alongside the corruption of student learning. As the quality of the output increases, there is a danger that students will pass it off as their own work. There are two downsides to such usage. The first is that student work will no longer be a realistic measure of student ability as the output of these LLMs becomes more sophisticated; there is already evidence that ChatGPT can pass law courses at universities [26]. Such advances risk the validity of qualifications being called into question and might lead to more in-person assessments. Secondly, it might lead to students taking “shortcuts”. If ChatGPT could produce a better essay than the average student, then the average student might simply not bother, and so would not receive the practice and refinement that such work entails. The result is that the quality of students’ skills might decline. This might be especially true of soft skills such as determination, which cannot be tested through in-person exams.
Finally, there is a risk of bias and discrimination in the educational content ChatGPT produces, owing to its training processes and the disparity in the usage of AI tools between high- and low-income countries.
Ultimately, there is a broad consensus that a code of ethics, institutionalised through regulatory frameworks, is required to ensure ChatGPT’s effective usage in education and research.
3.4. Work
Large language models such as ChatGPT-4 will change the world of work primarily through their ability to automate several commonplace tasks. Given the right prompt, ChatGPT can write emails, provide feedback on writing, proofread, produce code, create images, take minutes, and translate between different languages. As workers are no longer required to do these tasks or are required to do them less frequently, the world of work will change. The shifting time and focus of workers will change the kind of work they undertake, the quality of the work, and ultimately the workers themselves.
The most notable effect will be increased productivity. Put simply, workers need to spend less time on tasks, with one study estimating that ChatGPT could reduce the time taken by 40% [27]. The freeing up of extra capacity will also allow workers to spend more time on more complicated tasks, offloading some cognitive work onto the AI. For a smaller group, it might create job losses. Large language models might reduce the number of secretaries needed to take minutes, the number of translators needed on call, and the number of customer support assistants. Jobs which might once have required a team of people might be accomplished with far fewer employees [28]. This raises the potential for structural unemployment.
Secondly, the use of ChatGPT will change the quality of a worker’s output, as they are no longer solely producing it. To a degree it might increase the quality of work, particularly for those with the lowest ability in a given activity, as the AI can produce mostly competent output [27]. If a human needs to spend less time on a given task, they may also spend more time refining it and increasing its quality. However, over-reliance on ChatGPT can cause problems. Because it is trained on a dataset, it will carry biases, so using its outputs unchecked by a human may produce subpar results. In addition, ChatGPT often has limited knowledge of up-to-date events and news and can frequently ‘hallucinate’, producing information where there was none [29]. The quality of output can suffer considerably in such cases.
Lastly, it will likely change the workers themselves. As mentioned elsewhere in this article, ChatGPT has the potential to be a great teacher: it holds large amounts of knowledge, can provide personal feedback, and can produce unique teaching resources, all at very low cost. The potential for upskilling workers is enormous. There might also be other, less tangible changes. Workers’ alienation may be reduced when they are liberated from the endless repetitive tasks that ChatGPT could do for them. Outside of the workplace, ChatGPT’s potential to reduce the cost of therapy and mental health care might add to workers’ well-being and productivity [29].
The downside to using ChatGPT is that there can be cognitive atrophying as workers lose the skills that they do not practise. Rather than using it as a tool to help them produce output, it becomes the only way that they can, leading to a reduction in the skills available [
27]. In the long run, this might reduce the skills and abilities of workers.
In sum, the benefits are that workers will be able to produce more and higher-quality work and gain skills they would not otherwise have acquired. The risks are that fewer workers will be needed, that the work produced may no longer be reliable, and that workers may lose the skills which previously allowed them to produce it.
3.5. Health
In the search conducted, we retrieved 1571 articles concerning ChatGPT and health. There is a wide variety of contexts in which ChatGPT is discussed in the literature, including clinical practice, healthcare access, medical education, medical research, and public health [
30]. Across these disparate contexts, however, there is some consistency in the kinds of concerns and limitations of ChatGPT which are cited, including ethical concerns, legal concerns, inaccuracy in ChatGPT’s content, and the risk of infodemics [
31,
32].
The majority of the 20 most cited articles engage with broad questions evaluating the applicability of ChatGPT’s use in medical contexts. The identified advantages of ChatGPT’s use have been that it is able to respond to ‘free text’ queries without being specifically trained for the task given by the query, it is capable of mimicking the nuances of human language such that its responses are appropriate and contextually relevant, and it is extremely efficient at analysing and organising extensive data sets in short periods of time [
33,
34]. All of this renders ChatGPT well positioned for medical use, as indicated by a recent randomised controlled trial in which ChatGPT’s responses to patient queries were, overall, preferred to physician responses, and by Liu et al.’s [35] study, which found that AI-generated suggestions on clinical decisions were unique and highly relevant [36,37,38]. Alongside this, ChatGPT is well suited to a more basic function: producing medical documentation, including patient clinical letters, medical notes, and radiology/pathology reports, which could vastly improve the efficiency and time management of medical professionals [39].
The key limitations of ChatGPT’s use in medical contexts are a major focus of the literature. In general, there is a problem of inaccuracy in the information it provides. ChatGPT has been found to perform better on basic knowledge, lifestyle, and treatment questions than in the domains of diagnosis and preventive medicine, answering 76.9% of questions correctly, with errors attributed to its limited understanding of treatment durations and regional guideline variations. Inaccuracies in ChatGPT’s responses were a consistent limitation across the majority of the studies. A key risk of such inaccuracies is that of ‘infodemics’: the ability of large language models like ChatGPT to rapidly produce vast amounts of information risks spreading inaccurate public health information on an unprecedented scale.
There are also legal and ethical concerns with the use of ChatGPT in the medical field. These include issues of copyright, which in turn raise issues of accountability and transparency when it is unclear from which sources ChatGPT’s suggestions or recommendations originate. In general, several articles emphatically conclude that ChatGPT cannot supplant human judgement and the role of medical professionals in medical settings; there is thus a push for a code of ethics to regulate ChatGPT’s use in medical settings, involving fact checking and the contextualisation and comparison of ChatGPT’s insights alongside those provided by medical professionals.
3.6. Energy
The intersection of ChatGPT and energy has garnered significant attention in recent research, with nine articles identified in this review. Six of these articles focus on the substantial energy consumption associated with ChatGPT and other large language models (LLMs), two explore the use of ChatGPT for energy price forecasting, and one examines its potential energy-saving applications, such as in lens recycling. This section critically evaluates these findings, highlighting both the environmental challenges posed by ChatGPT and its potential contributions to energy efficiency.
The energy demands of ChatGPT and similar LLMs are a pressing concern, particularly given the rapid adoption of these technologies across industries. Training and operating these models require significant computational resources, leading to high energy consumption and carbon emissions. Recent studies estimate that training a single GPT-3 model consumes approximately 1,287,000 kWh of energy, equivalent to 552 tonnes of CO2 emissions [11]. This energy-intensive process is further exacerbated by the need for continuous model updates and fine-tuning, which can increase the cumulative environmental footprint over time [13].
The financial costs associated with ChatGPT’s energy consumption are equally prohibitive, particularly for small businesses. Estimates suggest that operating ChatGPT can cost over $21,000 per month, primarily due to the energy required for hardware manufacturing and model operation [14]. These costs are driven by the need for advanced GPUs and other high-performance computing infrastructure, which are both energy-intensive and expensive to maintain. As a result, the accessibility of ChatGPT is limited, particularly for organisations in low-resource settings, raising concerns about equitable access to AI technologies.
Another critical environmental challenge is the cooling requirements for data centres that house LLMs. The heat generated by these facilities necessitates extensive cooling systems, which consume additional energy and water resources. For example, data centres supporting large-scale AI operations can require millions of gallons of water annually for cooling, further straining local water supplies and contributing to environmental degradation [
15]. This dual burden of energy and water consumption underscores the need for more sustainable AI infrastructure.
Despite these challenges, ChatGPT also offers promising applications in energy efficiency and sustainability. Two studies in this review highlight its potential for energy price forecasting, a critical tool for optimizing energy markets and supporting renewable energy integration. For instance, Medina and Heredia [
9] demonstrated that GPT-based models could accurately predict electricity price fluctuations in the Spanish market, enabling more efficient energy trading and better alignment with climate goals. Such predictive capabilities can empower policymakers and businesses to make data-driven decisions that reduce carbon emissions and promote sustainable energy practices.
Additionally, one study explored ChatGPT’s potential to contribute to energy-saving initiatives, such as in lens recycling. By optimizing supply chain operations and reducing waste, ChatGPT can help industries minimise their environmental footprint. For example, AI-driven demand forecasting can reduce overproduction and excess inventory, which are major contributors to environmental waste in global supply chains. These applications highlight the dual role of ChatGPT as both a consumer of energy and a facilitator of energy efficiency.
Addressing the energy consumption of ChatGPT and similar LLMs requires a multifaceted approach. First, advancements in AI hardware and software are essential to reduce the energy demands of these models. For example, the development of low-power processors and optimised training techniques, such as “sparse training”, can significantly decrease energy consumption during model training and operation [
16]. Additionally, the use of renewable energy sources for powering data centres can mitigate the carbon footprint of AI operations [
14].
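As one concrete illustration of this family of efficiency techniques, the sketch below applies magnitude pruning to a toy layer using PyTorch’s built-in pruning utilities. It is a simplified stand-in for the “sparse training” methods cited, not a reproduction of them.

```python
# Simplified illustration of sparsification for efficiency: magnitude pruning
# of a toy linear layer with PyTorch's pruning utilities. Real sparse-training
# methods are more involved; this only shows the basic idea of zeroing weights.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)
prune.l1_unstructured(layer, name="weight", amount=0.8)  # zero the 80% smallest weights
prune.remove(layer, "weight")                            # make the pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"Weight sparsity: {sparsity:.0%}")
# With sparse-aware kernels or hardware, such zeros translate into fewer
# multiply-accumulate operations and hence lower energy per inference.
```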
Second, regulatory frameworks and industry standards must be established to promote energy-efficient AI practices. For instance, guidelines could mandate the use of energy-efficient algorithms and require transparency in reporting the environmental impact of AI models [
40]. Such measures would encourage developers to prioritise sustainability in AI design and deployment.
Finally, interdisciplinary collaboration between AI researchers, environmental scientists, and policymakers is crucial to developing sustainable AI solutions. Initiatives such as the Partnership on AI’s Climate Change Working Group are already exploring ways to align AI development with global sustainability goals [
40]. By fostering collaboration across sectors, we can ensure that the benefits of ChatGPT are realised without compromising environmental sustainability.
4. Discussion
4.1. Critical Evaluation
Across the articles identifying the huge energy consumption of ChatGPT, there is an emphasis on the need to reduce the energy consumption of LLMs like ChatGPT, which requires efforts from researchers, academia, and industry. One example of such efforts is ‘reservoir computing’, an algorithm used to analyse time-series data, which can make the LLM training process faster and more efficient. One article, on forecasting Spanish energy prices, emphasises how accurate energy price prediction is key to improving the ability to meet climate goals and to reducing the volatility of the Spanish energy market.
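Reservoir computing is easiest to see in a minimal echo state network, sketched below with NumPy. This is a generic, textbook-style illustration of the approach applied to one-step-ahead prediction of a toy signal, not the specific algorithm used in the cited work.

```python
# Minimal echo state network (reservoir computing) for one-step-ahead
# prediction of a time series. Generic illustration only: a fixed random
# reservoir is driven by the input and only the linear readout is trained.
import numpy as np

rng = np.random.default_rng(0)
N, leak, ridge = 200, 0.3, 1e-6

W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

series = np.sin(np.arange(400) * 0.1)             # toy "price" signal
states, x = [], np.zeros(N)
for u in series[:-1]:
    x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u + W @ x)
    states.append(x.copy())

X, y = np.array(states), series[1:]               # predict the next value
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)  # ridge readout
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```

Because only the readout is fitted, training amounts to a single linear solve rather than backpropagation through time, which is why the approach is cited as a route to faster, more energy-efficient training on time-series data.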
While ChatGPT holds promise across multiple domains, significant limitations must be addressed to harness its full potential responsibly. One critical concern is the inherent biases present in its outputs. As a product of training on large datasets scraped from the internet, ChatGPT often reflects societal biases, including those related to gender, race, and culture. For example, studies have shown that ChatGPT can produce stereotypical or offensive responses, which could perpetuate inequality if deployed in sensitive fields such as hiring or content moderation. Mitigating these biases requires a multi-pronged approach: training the model on curated datasets that emphasise diversity and inclusivity, implementing post-training corrections through reinforcement learning from human feedback, and instituting robust testing protocols to identify and rectify biased responses before deployment.
In addition to bias, inaccuracies in ChatGPT’s outputs remain a pressing issue. Its tendency to “hallucinate”—confidently generating information that is factually incorrect—poses risks in high-stakes applications such as healthcare or legal advice. A solution lies in developing hybrid systems that combine ChatGPT’s generative abilities with verified databases or decision support systems. For instance, integrating ChatGPT with real-time access to verified medical databases could reduce the likelihood of providing erroneous advice. Further, implementing transparent disclaimers and encouraging user feedback loops can foster accountability and allow the continuous improvement of the model’s reliability.
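One way to picture such hybrid systems is the retrieval-grounded pattern sketched below, in which the model is only asked to answer from passages drawn from a verified source and the sources are surfaced to the user. The `search_verified_db` and `call_llm` functions are hypothetical placeholders, since no specific database or API is prescribed in the literature reviewed.

```python
# Sketch of a retrieval-grounded ("hybrid") answer pipeline: the model answers
# only from passages retrieved from a verified source, and those sources are
# reported back to the user. Both callables are hypothetical placeholders.
from typing import Callable, List, Tuple

def grounded_answer(question: str,
                    search_verified_db: Callable[[str], List[Tuple[str, str]]],
                    call_llm: Callable[[str], str]) -> str:
    passages = search_verified_db(question)          # [(source_id, text), ...]
    context = "\n".join(f"[{sid}] {text}" for sid, text in passages)
    prompt = (
        "Answer the question using ONLY the sources below. Cite source ids, "
        "and say 'insufficient evidence' if they do not cover it.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)
    return answer + "\n\nSources consulted: " + ", ".join(sid for sid, _ in passages)
```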
Ethical concerns also arise from the lack of transparency in ChatGPT’s decision-making processes. Users and developers often struggle to understand why the model generates specific outputs. Addressing this challenge involves developing explainable AI (XAI) techniques that allow end-users to trace and interpret the logic behind ChatGPT’s responses. This could be achieved by embedding metadata into outputs, which outlines key data sources and reasoning pathways. Lastly, creating an international regulatory framework for AI governance, focusing on ethical standards and accountability mechanisms, will help mitigate these risks while promoting trust in ChatGPT’s applications.
4.2. Emerging Societal and Technological Shifts
ChatGPT’s rapid adoption signifies its influence in shaping broader societal and technological trends. A notable shift is the growing reliance on AI for decision-making in fields like healthcare and education. In healthcare, ChatGPT is part of a broader movement toward AI-assisted diagnostics and personalised medicine. By processing large datasets, ChatGPT-like systems can support clinicians in generating insights and providing treatment recommendations tailored to individual patients. This reflects a transition from traditional medical practices to a data-driven, precision-focused model of care. However, this trend also raises concerns about the potential marginalisation of human judgment and the ethical implications of delegating life-critical decisions to algorithms.
In education, ChatGPT is accelerating the move toward personalised learning environments. AI-powered virtual tutors can adapt to individual students’ needs, creating opportunities for tailored educational experiences. This trend aligns with a broader societal push for equity in education, as ChatGPT could bridge learning gaps for students in under-resourced regions. Yet, it also exacerbates existing digital divides, as access to such advanced tools is often limited by infrastructure and socioeconomic constraints.
Another transformative trend is the democratisation of creative processes. ChatGPT empowers individuals to produce high-quality content with minimal expertise, revolutionising industries such as publishing, marketing, and design. This trend reflects a larger shift toward user-generated content and decentralised production models, where AI becomes a collaborative partner rather than a tool exclusively for specialists. However, the ease of content creation also raises challenges, including intellectual property concerns and the proliferation of misinformation.
Finally, ChatGPT’s integration into professional workflows exemplifies the shift toward automation in white-collar jobs. Industries ranging from customer service to software development are leveraging ChatGPT to streamline operations, reduce costs, and increase productivity. While this trend highlights AI’s potential to augment human labour, it also introduces uncertainties about job displacement and workforce adaptation. The societal implications of these shifts underscore the need for proactive policymaking to ensure that the benefits of ChatGPT and similar tools are equitably distributed.
5. Conclusions
The review reveals that ChatGPT has become a significant tool in diverse fields, demonstrating its transformative potential while also highlighting key challenges that must be addressed. In sustainability, ChatGPT shows promise in forecasting trends and identifying efficient processes, which can support climate goals and resource optimisation. However, its high energy consumption and contribution to greenhouse gas emissions pose a paradox, necessitating urgent innovations in energy-efficient AI technologies.
In education and higher education, ChatGPT’s ability to provide personalised learning experiences, create teaching materials, and support independent study positions it as a valuable resource for students and educators. Nonetheless, its propensity for inaccuracies, coupled with ethical concerns about plagiarism and the erosion of critical thinking, underscores the need for clear usage guidelines and robust academic integrity policies.
Similarly, in professional and healthcare settings, ChatGPT improves efficiency by automating administrative tasks, providing clinical decision support, and reducing workload pressures. Yet, the limitations of AI-generated outputs, including inaccuracies and “hallucinations”, highlight the need for human oversight, particularly in high-stakes sectors like medicine [
41,
42]. Moreover, the potential for cognitive atrophy and over-reliance on ChatGPT in the workplace could undermine the long-term development of essential skills.
The paper also underscores ChatGPT’s dual impact on social media, where it aids in content analysis and toxicity detection but risks infringing on privacy and freedom of expression. In the energy sector, while ChatGPT can assist in energy price forecasting, its own energy demands necessitate urgent measures to minimise its environmental footprint.
Across these domains, a recurring theme emerges: ChatGPT excels at enhancing productivity and efficiency, yet its limitations demand responsible integration. Ethical guidelines, regulatory frameworks, and technological advancements are essential to mitigate risks, including misinformation, environmental degradation, and inequitable access. Furthermore, interdisciplinary collaboration between researchers, policymakers, and developers is crucial to ensuring that ChatGPT’s adoption aligns with societal needs and values.
By synthesising insights across disciplines, this paper offers practical recommendations for policymakers, educators, and industry leaders. It advocates for robust regulatory frameworks, interdisciplinary collaboration, and equitable access to ensure that the benefits of ChatGPT are maximized while its risks are mitigated. Ultimately, this study not only advances academic discourse but also provides a foundation for informed decision-making in the deployment of ChatGPT and similar technologies.
Future Directions and Research Agenda
Future research should prioritise enhancing ChatGPT’s reliability, transparency, and inclusivity while addressing its energy consumption challenges. As large language models continue to evolve, it is vital to balance their transformative potential with ethical, social, and environmental considerations to ensure sustainable progress. This includes the development of robust regulatory frameworks, sustainable AI practices, and mechanisms to address disparities in access. Interdisciplinary collaboration among technologists, policymakers, and researchers will be crucial to harnessing ChatGPT’s potential responsibly and inclusively.
Beyond these priorities, underexplored areas merit significant attention. For instance, ChatGPT’s integration into global labour markets raises important questions about employment dynamics, skill development, and income distribution. Industries incorporating ChatGPT must navigate the dual challenges of automating tasks while reskilling workers to transition into complementary roles. Research should explore policies and practices to mitigate job displacement and support equitable workforce adaptation.
Cross-cultural adoption of ChatGPT presents another fertile area for investigation. Examining how ChatGPT is received and adapted across diverse cultural and socioeconomic contexts can illuminate variations in user behaviour, ethical expectations, and accessibility challenges. Such research would provide critical insights for bridging the digital divide and ensuring the equitable distribution of ChatGPT’s benefits, particularly in regions with limited technological infrastructure.
Technological advancements in ChatGPT itself also warrant further exploration. Efforts to improve the model’s interpretability are essential for making its decision-making processes transparent and understandable to users. Similarly, advancing energy-efficient training and deployment methods, such as sustainable computing architectures, should remain a top priority. Collaborative efforts among computer scientists, environmentalists, and policymakers will be instrumental in driving these innovations.
Effectively addressing the challenges and opportunities presented by ChatGPT requires interdisciplinary collaboration that brings together expertise from diverse fields. AI researchers must work alongside ethicists, policymakers, and domain-specific experts—such as educators, healthcare professionals, and sustainability scientists—to develop frameworks that ensure responsible AI deployment. For example, integrating AI ethics with legal studies can help create robust regulatory mechanisms that prevent biases and misinformation, while collaborations between environmental scientists and computer engineers can drive advancements in energy-efficient AI architectures. Additionally, insights from the social sciences are crucial for understanding how AI adoption impacts labour markets, knowledge production, and societal trust in technology. By fostering interdisciplinary dialogue, we can develop AI governance models that balance innovation with ethical responsibility, ensuring ChatGPT’s integration aligns with broader societal needs.
As ChatGPT and similar AI systems become more embedded in decision-making processes worldwide, issues of data sovereignty and equitable AI governance must be addressed. Many AI models are developed and controlled by a small number of corporations based in the Global North, raising concerns about data ownership, privacy, and the potential reinforcement of geopolitical asymmetries. Countries in the Global South, which often lack the infrastructure to develop their own large-scale AI models, risk becoming dependent on foreign technologies that do not reflect their linguistic, cultural, or regulatory contexts. Ensuring equitable AI development requires policies that promote open-source alternatives, regional AI initiatives, and localised datasets that respect diverse sociopolitical realities. Moreover, international collaborations should prioritise fair data-sharing agreements, enabling countries to retain control over their digital assets while benefiting from AI-driven advancements. It is important to address these challenges to prevent AI from exacerbating existing global inequalities.
Finally, future studies must address ChatGPT’s role in shaping societal norms and ethical frameworks. As AI becomes increasingly embedded in decision-making, its impact on human values, privacy, and autonomy must be critically examined. By engaging with these philosophical and ethical questions, research can guide ChatGPT’s development toward aligning with societal well-being and equity.
Author Contributions
H.H. conceived and supervised the project; all authors contributed to the different phases of the research project to different extents: Conceptualization, H.H.; methodology, H.H., M.G., C.H., R.F. and S.W.; validation, H.H., M.G., C.H., R.F. and S.W.; formal analysis, H.H., M.G., C.H., R.F. and S.W.; investigation, H.H., M.G., C.H., R.F. and S.W.; data curation, H.H., M.G., C.H., R.F. and S.W.; writing—original draft preparation, H.H., M.G., C.H., R.F. and S.W.; writing—review and editing, H.H., M.G., C.H., R.F. and S.W.; visualization, R.F.; supervision, H.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available in Web of Science.
Acknowledgments
This paper is the result of collaboration within a research internship programme supported by Somerville College, Oxford. The authors are grateful to Claire Cockcroft, Director of the Margaret Thatcher Scholarship Programme, and to the academic and careers ‘Skills Hub’ at Somerville, for their kind support in setting up the internship.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- OpenAI. Introducing ChatGPT; OpenAI: San Francisco, CA, USA, 2022. [Google Scholar]
- Milmo, D. ChatGPT reaches 100 million users two months after launch. The Guardian, 2023. [Google Scholar]
- OpenAI. OpenAI API; OpenAI: San Francisco, CA, USA, 2023. [Google Scholar]
- OpenAI. ChatGPT: Optimising Language Models for Dialogue; OpenAI: San Francisco, CA, USA, 2022. [Google Scholar]
- OpenAI. ChatGPT Release Notes; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
- Patil, D. ChatGPT and Similar Generative Artificial Intelligence in Art, Music, and Literature Industries: Applications and Ethical Challenges. 2024. [Google Scholar]
- Saurini, E. Creativity in art and academia: Analyzing the effects of AI technology through the lens of ChatGPT. Regis Univ. Stud. Publ. 2023, 1102. [Google Scholar]
- Lin, B. PwC Set to Become OpenAI’s Largest ChatGPT Enterprise Customer. Wall Str. J. 2024. [Google Scholar]
- Medina, M.A.; Heredia, A.J.A. Using Generative Pre-Trained Transformers (GPT) for Electricity Price Trend Forecasting in the Spanish Market. Energies 2024, 17, 2338. [Google Scholar] [CrossRef]
- Haddud, A. ChatGPT in supply chains: Exploring potential applications, benefits and challenges. J. Manuf. Technol. Manag. 2024, 35, 1293–1312. [Google Scholar] [CrossRef]
- Song, A.; Chen, D.; Zong, Z. Unveiling the truth: An analysis of the energy and carbon footprint of training an OPT model using DeepSpeed on the H100 GPU. In Proceedings of the 14th International Green and Sustainable Computing Conference, Toronto, ON, Canada, 28–29 October 2023. [Google Scholar] [CrossRef]
- Kohnke, L.; Moorhouse, B.L.; Zou, D. ChatGPT for language teaching and learning. Relc J. 2023, 54, 537–550. [Google Scholar] [CrossRef]
- Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for modern deep learning research. In Proceedings of the AAAI conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13693–13696. [Google Scholar]
- Hacker, P. Sustainable AI regulation. Common Mark. Law Rev. 2024, 61, 345–386. [Google Scholar] [CrossRef]
- Patterson, D.; Gonzalez, J.; Le, Q.; Liang, C.; Munguia, L.-M.; Rothchild, D.; So, D.; Texier, M.; Dean, J. Carbon emissions and large neural network training. arXiv 2021, arXiv:2104.10350. [Google Scholar]
- Thompson, N.C.; Greenewald, K.; Lee, K.; Manso, G.F. The computational limits of deep learning. arXiv 2020, arXiv:2007.05558. [Google Scholar]
- Lin, Z.; Wang, Z.; Tong, Y.; Wang, Y.; Guo, Y.; Wang, Y.; Shang, J. Toxicchat: Unveiling hidden challenges of toxicity detection in real-world user-ai conversation. arXiv 2023, arXiv:2310.17389. [Google Scholar]
- Si, W.M.; Backes, M.; Blackburn, J.; De Cristofaro, E.; Stringhini, G.; Zannettou, S.; Zhang, Y. Why so toxic? measuring and triggering toxic behavior in open-domain chatbots. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, CA, USA, 7–11 November 2022; pp. 2659–2673. [Google Scholar]
- Liu, J.; Wang, C.; Liu, S. Utility of ChatGPT in clinical practice. J. Med. Internet Res. 2024. preprint. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Fan, L.; Atreja, S.; Hemphill, L. “HOT” ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media. ACM Trans. Web 2024, 18, 1–36. [Google Scholar] [CrossRef]
- Tlili, A.; Shehata, B.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
- Lo, C.K. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
- Farrokhnia, M.; Banihashem, S.K.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2024, 61, 460–474. [Google Scholar] [CrossRef]
- Lund, B.D.; Wang, T. Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Libr. Hi Tech News 2023, 40, 26–29. [Google Scholar] [CrossRef]
- Alkaissi, H.; McFarlane, S.I. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 2023, 15, e35179. [Google Scholar] [CrossRef]
- Chow, K.; Tang, Y.; Lyu, Z.; Rajput, A.; Ban, K. Performance optimization in the LLM world 2024. In Proceedings of the ICPE ‘24 Companion: Companion of the 15th ACM/SPEC International Conference on Performance Engineering, London, UK, 7–11 May 2024. [Google Scholar] [CrossRef]
- Noy, S.; Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 2023, 381, 187–192. [Google Scholar] [CrossRef]
- Richter, S.; Richter, A. Human-AI Collaboration in the Metaverse—How to Research the Future of Work? ECIS 2024 Proc. 2024, 4. [Google Scholar]
- Budhwar, P.; Chowdhury, S.; Wood, G.; Aguinis, H.; Bamber, G.J.; Beltran, J.R.; Boselie, P.; Cooke, F.L.; Decker, S.; DeNisi, A.; et al. Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT. Hum. Resour. Manag. J. 2023, 33, 606–659. [Google Scholar] [CrossRef]
- Vaishya, R.; Misra, A.; Vaish, A. ChatGPT: Is this version good for healthcare and research? Diabetes Metab. Syndr. Clin. Res. Rev. 2023, 17, 102744. [Google Scholar] [CrossRef] [PubMed]
- Yang, R.; Tan, T.F.; Lu, W.; Thirunavukarasu, A.J.; Ting, D.S.W.; Liu, N. Large language models in health care: Development, applications, and challenges. Health Care Sci. 2023, 2, 255–263. [Google Scholar] [CrossRef]
- Thirunavukarasu, A.J.; Ting, D.S.J.; Elangovan, K.; Gutierrez, L.; Tan, T.F.; Ting, D.S.W. Large language models in medicine. Nat. Med. 2023, 29, 1930–1940. [Google Scholar] [CrossRef]
- Antaki, F.; Touma, S.; Milad, D.; El-Khoury, J.; Duval, R. Evaluating the performance of ChatGPT in ophthalmology: An analysis of its successes and shortcomings. Ophthalmol. Sci. 2023, 3, 100324. [Google Scholar] [CrossRef]
- Mihalache, A.; Popovic, M.M.; Muni, R.H. Performance of an artificial intelligence chatbot in ophthalmic knowledge assessment. JAMA Ophthalmol. 2023, 141, 589–597. [Google Scholar] [CrossRef]
- Liu, S.; Wright, A.P.; Patterson, B.L.; Wanderer, J.P.; Turer, R.W.; Nelson, S.D.; Wright, A. Using AI-generated suggestions from ChatGPT to optimize clinical decision support. J. Am. Med. Inform. Assoc. 2023, 30, 1237–1245. [Google Scholar] [CrossRef]
- De Angelis, L.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; Rizzo, C. ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Front. Public Health 2023, 11, 1166120. [Google Scholar] [CrossRef]
- Lee, H. The rise of ChatGPT: Exploring its potential in medical education. Anat. Sci. Educ. 2024, 17, 926–931. [Google Scholar] [CrossRef]
- Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef]
- Khan, R.A.; Jawaid, M.; Sajjad, M. ChatGPT—Reshaping medical education and clinical management. Pak. J. Med. Sci. 2023, 39, 605. [Google Scholar] [CrossRef] [PubMed]
- Partnership on AI. 2023. UNFCCC. Available online: https://unfccc.int/ (accessed on 20 December 2024).
- Dave, T.; Athaluri, S.A.; Singh, S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front. Artif. Intell. 2023, 6, 1169595. [Google Scholar] [CrossRef] [PubMed]
- Clusmann, J.; Kolbinger, F.R.; Muti, H.S.; Carrero, Z.I.; Eckardt, J.-N.; Laleh, N.G.; Kather, J.N. The future landscape of large language models in medicine. Commun. Med. 2023, 3, 141. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).