1. Introduction
Over the next decade, artificial intelligence (AI) technology is expected to establish a foothold in smart systems for business decision making and then to expand significantly. The future progression of AI technology holds great potential, though some commentators find it alarming and deserving of serious attention.
AI has the potential to introduce benefits, but also drawbacks, including employment displacement, biased decision making, and social exploitation and deceit. AI misapplication occurs when persons or organizations employ AI for malicious objectives, and it is linked to the deployment of untested and secret AI systems in crucial positions, as well as the potential illicit access to and abuse of a variety of domains [1].
To better grasp AI’s functionality and its ability to transform society positively, this study attempts to determine whether individuals fear AI and what drives them to embrace it. The study concentrates on newly graduated students’ interactions with this rapidly developing technology. The scientific investigation of trust in AI technology can help fill gaps revealed by previous research on the long-term socio-ethical factors influencing AI, such as the fear of AI, job losses, the dehumanization of work, employee welfare, and the acceptability and safety issues of autonomous vehicles (AVs) [2].
Numerous professional bodies, such as the ASME and IEEE, have adopted codes of ethics outlining ethical conduct in AI, encompassing data, security, and fairness. Future AI technology progress is projected to be driven by ongoing processes that extend from universities to the corporations where developers are employed. Knowing about new engineers’ interactions, characteristics, and prospective faith in AI might help identify groups of people who would benefit from professional training before working in AI environments.
However, we must acknowledge that the projected large-scale AI technical advancements may come at a price. Today, some predict that our understanding of AI mechanisms will decrease as it becomes more prevalent in our lives. In contrast, many others disagree with this perception, instead showing a desire to understand how AI works, how its implications for meaningful work are evolving, how its underlying ethical issues need not be overstated, and how it can affect decisions and outcomes that benefit us [3].
The study of fears about AI concerning ethics has gained considerable significance recently. Concerns about losing control, surrendering privacy, and AI’s value to humanity are frequently voiced as AI develops abilities that might eventually surpass those of humans. Finding agreement on the significance of AI ethical principles and identifying the barriers to their acceptance are becoming more important. The findings of Khan et al. show that there is a global convergence set consisting of 15 problems and 22 ethical standards. The most prevalent AI ethical principles are transparency, privacy, responsibility, and fairness. Parallel to this, it is believed that the biggest obstacles to considering ethics in AI are a lack of ethical understanding and ambiguous standards [4].
While technological advancements might resolve issues that were previously unsolvable, it is vital to consider the possibility that AI could also create brand-new issues. Which is better: a tech-averse attitude, a cautious one, or real fear? We must embrace AI technology rather than distrust it if we are to confront the problems that it may help us solve. AI technology’s fundamentally aspirational nature inspires us to dream, yet it has been and is currently being used to solve problems of scarcity [5].
It is possible to use society’s AI trust levels as a device for measuring the commonality of AI fears. Despite the progress made in AI adoption, fundamental societal and ethical barriers still exist. Levels of individual technology acceptance depend on many factors, including age, level of education, cultural sensitivity, and type of use. People are mostly not in direct contact with AI technology, and may in some cases develop a negative perception. Many technology-related phobias may be seen as resulting from ignorance, as we sometimes fear the worst when we are afraid of the unknown. Although there may be negative effects from new and developing technologies, improving people’s quality of life is one of the main objectives of automation, AI, and the Internet of Things [6].
This study examined university-graduating engineers’ perceptions of, and the ethics relating to, the future of AI. Every subtopic in this research has a bearing on the engineers’ responses. In terms of its content, the study looks at how speedily AI is developing and changing the world. The issue of how AI influences young engineers’ learning and ethical judgments in the digital era is addressed. The study explores how drawing a comprehensively optimistic prospect requires a close examination of the major topics driving AI worries and of how they affect society. These topics include the privacy principle and the worries that AI users have about the technology’s ability to divulge even the most accurate life secrets. They also cover the ideas of superintelligence, transhumanism, human extinction, AI’s purported takeover, and AI’s capacity to replace human roles and spiral out of control.
The findings of this research address the characterization of responses from new engineers, the effectiveness of respondents’ feedback, the results from the responses gathered, the causes of AI fears, the interlinking of domains, the rating of AI fears, and the level of AI fear. The level of optimism toward accepting AI advancements among university students of different specializations is also reported. Future research directions and the study’s limitations are considered, and appropriate recommendations are made addressing the requirement that engineers handle ethical and fear-based concerns about AI.
Based on a relevant literature review and statistical analysis, the major finding is that young engineers have concerns about various topics, but these do not preclude them from embracing an AI-driven future. According to the survey, society appears unconcerned about technology replacing employees. Participants believe AI technology has great scientific significance and trust it because of its constant, fast progress.
2. The Study Problem
The primary goal of this multidisciplinary study is to determine how new professional engineers engage with the rapidly developing AI field, along with their levels of confidence and their prospects. The rationale for selecting fresh engineers as a representative sample is that they will be the future leaders in technology and the proponents of its progressive principles.
As to why AI trust levels are being investigated: the current level of industrial AI technology development has unexpectedly returned AI misapplication concerns, such as the design of more advanced malware and attack strategies, data manipulation, and forgeries, to the global forefront. At the same time, it is becoming increasingly important to consider whether AI is changing the course of history and whether AI systems may assimilate knowledge much more quickly in a digital setting. This study challenges us to answer the following question: What hazards and fears associated with this technology might be the source of escalating apprehensions in society and impede the critical issue of trust? The response is intended to contribute to understanding the psychological ramifications of adopting AI technology and to address the gaps revealed by prior research on the long-term socio-ethical implications of AI.
3. Rapid AI-Driven World: How AI Is Transforming the World
With the substantial level of expenditure on AI, it is already drastically changing the world, and its future appears bright with the continuing technological advancements [7], but not before posing important questions regarding politics, the judicial system, the economy, society, and science [8]. In the coming years, it is projected that AI will become more sophisticated [9], and the creation of general intelligence that is comparable to or greater than human intellect will likely be one of the objectives of future technical advancements in this field. However, the more AI influences our way of life, the less probable it is that humans will understand the mechanics underlying it [10].
Human-level AI may become a reality within the next 100 years, according to 90% of AI specialists [11]. However, such rapid advancements in AI are not without significant moral, ethical, and learning challenges, all of which are compounded by fear. Naturally, the ethical implications of AI technology, as well as the associated characteristics of fear, are attracting the attention of both governments and academia [12].
Against the background of the AI misapplication challenge, the necessity of creating principles and guidelines for AI ethics is only increasing. There is considerable debate over the implications of these ideas. The rapid advancement of AI technology calls for the equally rapid development of guiding ethical values that might allay such anxieties [13].
Some scholars have argued that the fears associated with the idea that AI might take over the world stem from the possibility that machines could advance to a point where they could breach critical infrastructure, such as electrical and financial networks, and seize control of important aspects of human society. This assertion and some of its ethical implications will be clarified in the section that follows.
4. How Does AI Impact New Engineers’ Learning and Ethical Perceptions in the Digital Age?
As AI becomes more interwoven into products and services, organizations emphasize developing AI codes of ethics. AI ethics were created as a set of ethical principles and standards to guide and regulate the development and principled application of AI technology. AI ethical engineering methodologies ensure the quality and quantity of data used to build AI-based models, as well as facilitate the monitoring of biases that might be incorporated into a model. Certain important ethical issues must be addressed, such as fairness, prejudice, and the likelihood that AI will eventually replace human-led teaching [14].
AI practitioners should be taught ethics through the virtue ethics paradigm so that they may investigate how AI positively impacts society even as it creates moral challenges. L&D practitioners who identify learning and training requirements strive to address the ethical questions surrounding the use of AI in education while ensuring its quality and advantages, including the fact that AI improves student engagement with course content, resulting in beneficial outcomes [15].
Despite some significant concerns, such as learners’ technophobia and a lack of tool diversity, since the late 1990s, AI has consistently been viewed more optimistically than pessimistically in terms of improving the accessibility of knowledge from different sources, its flexibility, and its effectiveness, and it has had a significant impact on graduating engineers and the education system. By providing students with personalized learning experiences, AI not only improves but also rapidly changes the educational environment. Academic achievement has been transformed by positive new components of personalized learning that focus on tailoring education to individual needs and that aim to profoundly improve motivation levels, improve the quality of student engagement, and assist in the creation of student learning profiles through careful collection, organization, and evaluation processes. As AI collects data on a student’s progress and modifies the course accordingly, each learner’s experience may become unique, reflecting their specific learning pace and comprehension style. Moreover, while unique adjustment processes allow teachers to make assignments more difficult or simpler, AI-adaptive learning personalizes the learning experience for each student by tailoring information, pace, and difficulty levels to their strengths and weaknesses.
Educational platforms use AI to classify content for students, enhancing their learning skills and the level and depth of their knowledge, and making it simple for them to find relevant resources. However, the steady increase in reliance on AI in education may result in technological overdependence, which might have several unexpected repercussions: learners may become unduly reliant on AI teaching systems to answer questions or complete assignments, limiting their capacity or drive to think critically and independently [16].
By analyzing huge volumes of data, AI systems select the most effective instructional techniques for each student. Instructional designers may use ChatGPT to create rich content such as scenarios, quizzes, comments, and summaries, making learning materials more interesting and memorable for students. Modern technology and AI-powered solutions have created new, engaging, tailored learning experiences while improving the quality of progress measurement. ChatGPT is ideally suited for developing personalized learning materials and experiences because of its capacity to process and synthesize language-based information. Nonetheless, the issue of whether ChatGPT enhances or disrupts the learning process is becoming more pressing for educators and engineers. Even as ChatGPT’s natural language understanding improves, allowing more accurate and relevant responses to user inquiries, teachers may struggle to assess students’ knowledge and comprehension if students use it to complete their activities. Although there may be various disadvantages to utilizing ChatGPT in academic writing, such as the risk of plagiarism, a lack of creativity, and an overreliance on technology, applying ChatGPT in an educational context can improve the teaching–learning process. One underlying effect is that properly preparing educators to use technology is becoming an increasingly critical engineering and educational objective [17].
Finally, because this research focuses on AI ethics, it is necessary to address the actions of international organizations and nations and the current legal standards and suggestions for AI developers. The OECD offers a synthetic measuring framework called the Catalogue of Tools and Metrics for Trustworthy AI. This framework surveys methods for developing and deploying reliable AI systems, helps reduce risks, and gathers important information about worldwide AI incidents. It encourages governments, businesses, academia, and civil society to create frameworks for responsible innovation. The research was performed within geographical limits corresponding to the membership of the OECD Development Assistance Committee (DAC), and activity-level statistics were provided to the OECD.
5. Theoretical Foundations for Future Scenarios of Progressive AI Technology: Most Significant Societal Implications
5.1. Background
Regarding the selected pool of literature, social science methodologies allow more flexibility when evaluating enormous amounts of data and are crucial in the search for the social and ethical consequences of technology. While conducting the literature review and analyzing its content, the authors formulated a suitable study question and developed a well-defined statement. The core features of a rich literature review include an overview of the source, a summary of the document’s main concepts, an analysis of study gaps, and an appraisal of the material’s usefulness to the area.
In this interdisciplinary work, a narrative review adds breadth, particularly relating to theoretical methods. The study subject and, in particular, the review objectives serve as guiding techniques for the authors to implement. Before commencing the writing process, they recognized and examined the concepts developed and then implemented a hybrid technique combining narrative review and thematic analysis to close the gaps.
Having explained the literature pool, it must be emphasized that the objective of this section is to investigate and provide evidence for several key AI-related challenges connected to the misapplication issue, which impacts the issue of trust. A survey was designed to include the nine concerns described here, which are discussed in Section 5.2, Section 5.3, Section 5.4, Section 5.5, Section 5.6, Section 5.7, Section 5.8 and Section 5.9; the engineers had to respond on a scale from 1 to 10, with 10 indicating full agreement. Our analysis of the results is discussed following that, and details of the survey are given in Section 7.
The expanding usage of AI has sparked some concerns, and the three primary categories listed below can be used to describe the main fears associated with AI [18]:
That a machine can act independently and make its own decisions.
That there is a possibility that workers will be widely replaced by robots.
That robot work might be trusted, though it may lead to excessive human laziness.
5.2. Some Typical Concerns Among AI Technology Users
1. Employment displacement: AI technology has the potential to replace human labor, leading to job losses or decreased employment opportunities in several industries. This anxiety stems from the idea that repetitive tasks or specialized skills may be automated, making people redundant in certain areas.
2. Data security and privacy: With the increasing ubiquity of AI technology, vast amounts of sensitive or private data are routinely exchanged and processed. People may be concerned that incorrect treatment, hostile actor access, or exploitation of their data might lead to identity theft or privacy issues [19].
3. Discrimination and bias: AI systems learn from data that may contain inherent prejudices, which they can then reinforce. Users may worry that AI-powered algorithms might unintentionally discriminate against specific groups of people, upholding societal inequities or exacerbating already-existing prejudices in society.
4. Excessive dependence on AI: Users who rely too much on AI systems run the risk of harm if those systems are damaged or dysfunctional. There is a good chance that humanity will grow too dependent on AI and lose the ability to function without it.
5. The complexity and ambiguity of AI algorithms: These might make it difficult to understand the logic underlying an algorithm’s conclusions or recommendations. Users may be concerned that AI systems will not be transparent, especially in crucial industries where transparency and accountability are essential, such as banking, healthcare, and criminal justice.
6. Ethical implications: AI technology raises ethical questions and concerns. Users may be concerned about the development and deployment of AI for potentially hazardous purposes including spying and public opinion manipulation.
7. Losing the human touch: Some users may be concerned that the usage of AI technology may reduce interpersonal relationships and remove the human factor in several situations. Concerns over the potential loss of face-to-face encounters may exist, for example, in the customer service or healthcare industries [20].
To allay these worries and fears, moral AI development, application, and management must be encouraged, in addition to ensuring accountability, algorithmic transparency, strong data privacy regulations, and impartial AI systems. All of these factors demand open channels of communication between lawmakers, developers, and consumers.
It may not be necessary to make a disproportionately strong argument for why the development of powerful AI should be strictly regulated, if not forbidden, given that some professionals believe concerns about AI have been exaggerated. Despite the lack of consensus on what constitutes AI trust, social and ethical concerns will surely surface first in any meaningful discussion on whether and how to place trust in AI technology. Asking what it means for individuals to live socially is a common starting point for discussing trust [21].
5.3. AI and the Privacy Principle: Concerns About the Potential for the Most Accurate Life Secrets to Be Spread
Concerns over the AI privacy principle are currently on the rise and affect many facets of our day-to-day interactions, including companies, services, and health. Sometimes we provide more personal information about our bodies than is appropriate, and corporations may choose to share it with others for various reasons. Private information may be shared with concerned organizations and businesses, or it may even be shared with strangers. For example, if insurance companies were aware that we had unhealthy eating habits or our health were at risk, they might decide to increase the cost of our health insurance policies, that is, if they do not initially decline to provide us with an insurance plan at all. Health insurance firms may one day realize that certain biological facts about us are a data gold mine, and they might use this information to assess the physical status of their clients and even broadcast personal medical information without their relatives’ permission. Soon, not just insurance companies but also other organizations like activity trackers, medical equipment, and research journals will be able to divulge increasing amounts of private and sensitive information about individuals. One significant problem is that companies with access to AI-based insights and large client datasets may utilize such insights to coerce customers into making purchases [22].
Although the debate about who has the right to know what is inside our bodies is not new, it has recently gained more weight as we disclose increasingly more personal details about our goals or post details online about various activities, eating habits, and drug use, intentionally or unintentionally. If there were ever a time when an adjustment in how we disclose personal information were necessary, it is now. Furthermore, AI is altering the relationship between employers and employees. Long before AI emerged, new inventions created new jobs and removed others, but the magnitude and pace of the upcoming shift are far greater than before [23].
There has never before been a greater chance that a family’s secrets will come to light. When we publish information about our physical activities online, we may unintentionally reveal private information about others who value their privacy, so it is important to carefully weigh the various risks and advantages of taking such actions. For example, we might forget that our parents and siblings have the same genetic makeup. The development of technology has made it easier to identify more covert matters such as adoption, infidelity, and sperm donation. Therefore, even if we mention our physical activity online, we should always take full responsibility for our actions and strike a balance between the benefits of technology as a major factor in increasing our happiness and health and our wisdom in accepting that our personal information may be more accessible than before.
It is sometimes perceived that certain human wants, including the desire to feel aspects of surprise, failure, and disappointment, are being removed from us by rapid improvements in technology. The human hazards that arise when AI is suddenly unavailable should not be downplayed; for example, individuals who believe they are smart merely because of the smart gadgets they use may be in danger during an emergency, such as a power outage. We therefore need to teach ourselves how to handle our fundamental life data securely and educate ourselves to respond correctly in the face of physical adversity. Rapid technological advancement should worry us, and we should be on the lookout for features that are added partially or disruptively to essential areas of our lives [24].
5.4. Superintelligence Concerns
Concerns about imbuing objects with intelligence have garnered increased attention recently due to the rapid advancement of technology in this arena. Superintelligence, also referred to as artificial superintelligence (ASI), is a hypothetical category of artificial intelligence that, unlike strong or general AI, surpasses humans in terms of intelligence and behavior; it is also believed to be capable of facilitating the development of self-awareness in supercomputers. However, the creation of this so-called superintelligence would be feasible only if humans could first create artificial general intelligence at a level nearly equivalent to human intellect [25].
The concept of a “singularity”, which is predicated on the notion that a catalyst or trigger would bring about rapid change faster than anyone can anticipate, is linked to superintelligence. There is a notion among certain individuals that machines with advanced intelligence are capable of enslaving humans, which is why AI has been called the “biggest existential threat”. There is also a growing misconception that superintelligent robots will surpass human intellect in the future, even though such machines currently cannot even decide which programs to execute [26].
5.5. Transhumanism Connection to Superintelligence Threats
Transhumanism is the belief that, despite the enthusiasm for ultra-modern, high-tech tools and gadgets that have led some to predict AI would take over the world, humans can surpass their existing physical and mental limitations with the aid of science and technology. It encourages the use of an interdisciplinary approach to comprehend and evaluate the potential benefits of technology breakthroughs for improving human health.
According to the movement’s founder, Nick Bostrom, any possible superintelligence system must respect morality. He said that a new superintelligence would replace humans as the dominant race on Earth if machine minds were more intelligent than human brains in general. The creation of a superintelligence with malicious intent that is both irresponsible and extremely intelligent might lead to the extinction of humankind. Transhumanists are deeply concerned about the possibility that superintelligences could develop into incredibly powerful entities, particularly if the original superintelligence were created primarily to meet the needs of one person, one small group of people, or one specific company [27].
Since transhumanism is sometimes regarded as the world’s most destructive ideology, some scholars have argued that humanity may be in dire need of a new global, egalitarian redistributionist philosophy and social movement that embraces technological advancement without the presence of the fear factor [28].
5.6. Rapid Advancements in AI and the Origin of the Idea of “Human Extinction”: The Belief That AI Might Wipe Out Humanity
Some experts believe that when humans are replaced with more potent, intelligent, and effective robots, those robots will desire to take over the planet. These experts insist on social media filters because they believe that an AI control agency should be established immediately to reduce the hazards associated with the digital revolution, as there may otherwise never be any clear laws at all. Modern technological advancements differ from earlier ones not only in scope but also in complexity [29].
In contrast to sophisticated robots, which can be produced on a large scale and with a predetermined degree of intelligence, humans take a very long time to grow, develop their creativity, and become functional. Additionally, some experts believe that because robots can be maintained continually, they may outlast humans and be a more cost-effective option than depending on humans. Furthermore, the number of robots is increasing more quickly than the population of people, and eventually, robots may surpass humans in number. As intelligent robots start to take over some aspects of daily life, they are seen as a danger, and as a result, our way of life will eventually change and artificial, oxygen-free life forms will replace natural life forms in the ecosystem, making nature more susceptible than ever [30].
Saying something is “the end of the world” has become shorthand for anything we do not want to happen. The boundary of creation intrigues us, and science fiction has long been a medium for expressing our common anxieties; there are parallels between terror in popular culture and science fiction literature [31]. Some biologists have concluded that, to persuade people that AI will not wipe out mankind, new ethics must be established immediately. It is recommended that governments and business organizations implement rules on AI technology to prevent it from developing to a point where it can no longer be controlled by humans and takes over people’s lives, relationships, interactions, and interests. Some experts claim that if mankind were content with a natural way of existence and did not have strong urges to change the natural order of things, AI might not even be necessary [32].
Due to worries that AI may cause people to become lazy and lose their intelligence, there is a far greater chance that a heavy reliance on technology might end all life as we know it. AI is expensive, incapable of producing creative ideas, and has additional negative effects that seem to add fuel to the fire; these include unemployment, human dependency on AI, a lack of ethics, emotionlessness, and a slow rate of development. For instance, the need for standardized criteria for the ethical application of AI and ML in healthcare represents a real challenge [33].
5.7. Will AI Spiral Out of Control?
Many significant difficulties are highlighted by the exaggerated notion of a future superintelligence. When confronted with a superintelligence that can perform tasks that we cannot, it would seem difficult to defend our existence. Deciding whether to use the technology we helped design to halt the erasure of the Earth’s population is becoming increasingly pressing. In this case, the key query is as follows: Why should we continue to exist alongside a superintelligence?
Some doubters believe AI might become difficult to control, such that eventually, its devices could become so complex that engineers will not be able to completely understand their mechanisms. If experts do not know how AI algorithms work, they might not be able to predict when they will break. This suggests that intelligent devices, such as robots or self-driving automobiles, can make hasty decisions when it matters most, therefore putting humans at risk. For example, the AI in a driverless car may opt to create roadblocks instead of driving sensibly. Although the possible risk to humans may prevent us from building a completely secure system, as AI designs become more sophisticated and the need for a faster process increases, we will grant them more power as they become more proficient [34].
5.8. Robots Taking Over Humans
It is certainly unfortunate that so many people lose their jobs daily. Job insecurity is characterized by concerns about losing our employment and a lack of control over whether or not we will remain in our positions. Fears of not working at all and experiencing a financial catastrophe are two examples of fear related to unemployment.
Employers replace workers who are incapable of using modern technologies with new hires who are. Tech-enabled businesses have the potential to expand worker numbers and develop quicker than their conventional competitors. Regarding how contemporary technology affects job losses, many scholars think that the greatest risk associated with the ongoing significant advancements in AI technology is the fear of job losses. The use of AI might result in a large loss of jobs and increase the risk to large-scale group problem-solving initiatives. Nevertheless, AI has greatly reduced human work and enhanced the quality of our daily lives despite these risks [35].
However, it is generally accepted that the idea that AI technology will eventually replace humans is one of the greatest hoaxes ever proposed [36].
Robots are sometimes perceived as a threat to human employment since they replace human labor with AI systems [37].
However, one prime philosophy behind implementing some technological innovations is to enhance the human experience rather than replace it. On the one hand, there are always going to be tasks that we can only perform because we want to. On the other hand, there are instances in which a machine finds it incredibly challenging to interact with actual physical items. Some scholars argue that the question should be whether machines will become more productive or if robots will eventually replace humans. The jobs that require years of training for humans are the most difficult for robots to accomplish. These usually entail using intuition under duress, overcoming physical challenges, and using abstract thought. These could involve tasks that, for example, are equally challenging for computers to perform as they are for surgeons [38]. However, more study is needed to fully comprehend AI’s significant technological, social, financial, and economic ramifications rather than concentrating only on how it impacts job losses. The status of AI technology is certainly impressive; new advances in autonomous decision making and algorithmic machine learning are evidently creating a universe of creative possibilities [39]. Even if AI and automation significantly diminish employment opportunities in many industries, they will generate new sorts of professions that were previously unknown, resulting in improved training and a highly skilled workforce.
5.9. AI’s “Takeover”: A Reality or Just a Myth?
Certain scholars argue that assessing the idea of AI’s “takeover” requires clearly defining what is real and what is not. Robotic revolutions have long been a popular premise in fiction. However, the possibilities that scientists are frequently concerned about in this subject appear to be very different from science fiction. Because of the differences between human thought processes and AI systems, the latter may appear foreign to us. Some academics argue that the advent of AI, which can perform better than humans and may have been motivated by principles that are incompatible with humanity, poses the threat of human extinction. Science fiction frequently addresses worries about AI growing uncontrollably and the potential for it to swiftly wipe out humans as a result of pursuing arbitrary objectives. Furthermore, a 2019 public opinion survey revealed that most respondents did not think they could influence AI’s future development or the course of the large, highly specialized AI technology industry [40].
Fictional scenarios, such as a continuous conflict between AI and humans or comparable robots that are seen as a threat or deliberately try to harm humans, often depart significantly from what a group of researchers might believe. Robots and computers are predicted to effectively seize control of the planet from the human race when AI overtakes human intelligence as the predominant type of intellect on Earth. A revolution in robots is one possibility; another is the complete replacement of the human workforce by AI forces. While the “takeover” of AI may be an aspect of science fiction, some experts have advocated that more preventative steps should be taken now to guarantee that robots and AI will continue to be under human control in the future [41].
6. Results and Discussion
The figures included in this study are selected specifically to offer a comprehensive empirical view of the data, capturing participant distribution, causal factors, the intensity of AI-related sentiments, and inter-specialization trends, which collectively support a nuanced understanding of AI fear and trust among university students. The study’s empirical data focused on insights from students across diverse academic specializations; because these recent graduates will soon enter the labor market, the goal was to capture their perceptions of trust and fear regarding AI technology breakthroughs.
6.1. Sample Description
6.1.1. Characterization of Respondents
Responses from 715 participants, from various engineering specializations, were collected to explore young engineers’ perspectives on potential challenges and ethical implications. Some non-engineering specializations were included to reflect needed comparisons, although some roles may overlap between categories, as individuals can have diverse job responsibilities. The randomly selected participants have various backgrounds, providing a wide range of viewpoints and insights about AI fear and trust across multiple disciplines.
6.1.2. How Respondents Could Offer Input
Trust in AI is indicated by having fewer fears. Participants rated each cause and provided feedback. Using a scale from 1 to 10, they were asked to rate their agreement or disagreement: a higher rating indicates stronger agreement, while a lower rating reflects stronger disagreement. The answers helped determine which concerns are most and least significant, and the responses may also reveal how respondents evaluate society’s perception of implementing AI in the future.
7. Methodology and Collection of Responses
The data analyzed in this study were gathered from participants at King Fahd University during Career Day, when business associates from diverse companies assemble to talk about their workplaces, employment, and the education and training necessary to succeed in their careers. The gathered responses offer the following insights into the perceptions and concerns surrounding AI technologies.
7.1. Data Refinement
The collected responses were initially stored in a database for analysis. Data cleaning involved identifying and addressing any missing or inconsistent responses. Any outliers or suspicious data points were reviewed and, if necessary, removed to ensure data quality.
7.2. Analysis of Data
The data were analyzed using statistical techniques and data visualization tools. Figures were created to visually represent various aspects of the dataset. Descriptive statistics such as averages were calculated to gain insights from the data.
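As a minimal illustration of this analysis step, the sketch below computes average ratings per specialization and cause of fear with pandas; the column names, specializations, and values are hypothetical placeholders, not the study’s actual dataset.

```python
import pandas as pd

# Hypothetical long-format survey records; names and values are
# illustrative only, not the study's actual data.
responses = pd.DataFrame({
    "specialization": ["mechanical", "mechanical", "software", "software"],
    "fear_cause":     ["job_losses", "privacy", "job_losses", "privacy"],
    "rating":         [6, 8, 4, 7],  # 1-10 agreement scale
})

# Average rating per (specialization, fear cause), pivoted into a table;
# under the paper's convention, higher averages indicate lower AI trust.
avg_ratings = (
    responses.groupby(["specialization", "fear_cause"])["rating"]
    .mean()
    .unstack()
)
print(avg_ratings)
```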
The categorization of responses is based on participants’ specializations. The focus is to investigate the perceptions around AI, such as opportunities for job creation, enhancements in job security, advancements in privacy protection, enlightenment through education, positive impacts on interpersonal relationships, and the excitement of exploring the unknown in the realm of AI. The aim was to uncover the nuances of AI opportunities within different domains and the potential trends and variations in these advancements. Various figures and visual representations are utilized to present the data clearly and concisely. These visual aids help identify patterns, differences, and commonalities in AI acceptance and trust among different specializations, facilitating a comprehensive understanding of the overarching themes and variations in positive perceptions. The findings aim to enhance our understanding of the surveyed society’s enthusiasm for and trust in AI, highlight the domains where AI-related opportunities are most embraced, and spotlight the domains where engagement and positive perceptions are already strong. The analysis provides a foundation for informed discussions and decisions about the future of AI and its ethical integration into various professional domains.
7.3. Interpretation and Trends Based on the Given Responses
The data collected were categorized and analyzed to identify trends related to AI fears. The responses were characterized based on the specializations of the participants, and insights were drawn from this categorization. Various factors contributing to AI fears were analyzed, and their impact on different specializations was assessed. The average ratings for each cause of AI fear were calculated to precisely capture participants’ perspectives.
7.4. Survey Validity and Consistency
Validity is the degree to which the conclusions or interpretations we draw from test results are accurate. According to some academics, validity is the most important factor in establishing a test’s quality from a psychometric standpoint, primarily because it demonstrates that the test’s items correspond to the test’s intended theme. Validity also pertains to whether or not the test measures what it claims to measure because it illustrates how likely the measurement is to reflect reality [42].
The scale’s alpha coefficient is 0.812, which indicates that the scale’s items have a comparatively high level of internal consistency. In most social science research scenarios, a reliability coefficient of 0.70 or more is regarded as “acceptable” [43].
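For reference, Cronbach’s alpha for a scale of k items is alpha = k/(k - 1) * (1 - (sum of the item variances) / (variance of the total scores)). The sketch below, using hypothetical ratings, illustrates the standard formula; it is not the authors’ own computation.

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of scale items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical check: five respondents rating six fear items on a 1-10 scale.
ratings = [
    [7, 8, 6, 7, 8, 7],
    [3, 2, 4, 3, 2, 3],
    [5, 6, 5, 6, 5, 6],
    [9, 8, 9, 8, 9, 9],
    [4, 5, 4, 5, 4, 4],
]
print(f"alpha = {cronbach_alpha(ratings):.3f}")
```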
It is notable that five referees were consulted to ensure the validity of this survey. Their feedback and recommendations were highly beneficial and carefully considered to produce the final, unified version.
8. Causes Leading to AI Fear
Figure 1 presents a fishbone diagram that illustrates the general causes of AI fears. The diagram provides a clear visualization of the factors that are associated with ethical concerns and that lead to AI uncertainties. The data for this figure were obtained from survey responses to pave the way for understanding the root source of each cause. The following are the key impacting factors.
8.1. Displacement
Displacement refers to the phenomenon where automation and AI technologies replace human workers in various job roles and industries. It occurs when machines and algorithms perform tasks more efficiently and cost-effectively than humans, leading to job loss, the need for workers to acquire new skills to remain employable, and the presence of problematic ethical issues such as inequalities.
8.2. Job Security
Fears about job security might result in economic issues such as unemployment and income disparities, as well as societal and policy repercussions that must be addressed as ethics-related AI issues.
8.3. Privacy Hindrance
The fear that AI systems might be vulnerable to data breaches or misuse, leading to personal information being exposed, sold, or used for malicious purposes, contributes to privacy concerns. The development of AI-powered surveillance technologies, like facial recognition, can raise fears about constant monitoring, a lack of privacy in public spaces, and further ethical concerns.
8.4. Ignorance
Ignorance about how AI works and its capabilities can lead to fear based on misconceptions and overestimations of AI’s capabilities, which may not align with reality. Moreover, ignorance about the benefits of AI and its potential to improve various aspects of life can result in resistance to its adoption, hindering societal and ethical progress.
8.5. Negative Impact on Interpersonal Relationships
Concerns exist that AI chatbots or virtual companions could replace meaningful human relationships, potentially leading to emotional detachment and loneliness. There are worries that AI interactions might lead to a decreased emphasis on empathy and emotional understanding, as interactions with machines lack the nuances of human emotional connections.
8.6. Fear of Unknown
Fear of the unknown in AI arises from uncertainties about the long-term technical, societal, and ethical consequences of AI advancements, including how they will reshape society, employment, values, and daily life. Fear that AI may lead to a loss of human control over critical systems and decision-making processes can be unsettling and raises concerns about the dependency on machines. While these causes may eventually lead to AI rejection, it is important to understand that if a cause is not substantially strong enough, this might eliminate any AI fears associated with it, thereby paving the way for AI to be optimistically welcomed by society. Identifying the potential causes of AI fears might not entail that participants express any reservations; it might only indicate that these are their perceptions about the causes of fears that might be expressed by people who have some reservations. In other words, it may not be necessary to determine the root causes of participants’ concerns about AI to identify their concerns; it might merely suggest that these are the sources of the anxieties that they believe others with misgivings may be expressing [44].
9. Domain Interlinking
Figure 2 displays a network model of how the participant portions might be combined to produce a unified view. It visually displays the connectivity of several domains, both directly and indirectly. The information for this graph was generated from the relationships between domains and their technological alignment. The graph depicts the convergence of technological perspectives from closely related domains. Domains that share similar functions or collaborate closely tend to adopt parallel technological advancements. This convergence in technological progress and perspectives regarding AI is a consequence of the interconnectedness of these domains within the network diagram. AI implementation is a significant and forward-looking component within the operational landscape of various domains and specializations. This alignment in technological approaches is particularly evident among closely interconnected domains, reflecting their shared outlook on AI-related concerns and advancements.
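A minimal sketch of how such a domain network can be assembled and drawn with networkx follows; the domains and edges are hypothetical placeholders rather than the study’s actual graph.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Hypothetical domain-interlinking network; nodes and edges stand in for
# the study's specializations and their relationships.
G = nx.Graph()
G.add_edges_from([
    ("software", "data science"),
    ("data science", "business"),
    ("data science", "biomedical"),
    ("biomedical", "healthcare"),
    ("mechanical", "energy"),
    ("energy", "environment"),
])

# Directly connected domains share an edge; indirect connectivity
# (e.g., software to business via data science) emerges from the structure.
nx.draw_networkx(G, node_color="lightblue", font_size=8)
plt.axis("off")
plt.show()
```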
10. AI Fear Rating
Figure 3 depicts a honeycomb design that presents respondents’ AI fear ratings for each independent cause; the underlying survey considered five separate elements. The highest and lowest ratings for each cause were collected, and the average rating was calculated. It is noteworthy that some scientists suspect that the true and fundamental kind of anxiety is the fear of the unknown [45].
11. Intensity of AI Fear
Figure 4 examines the degree of AI trust within each specialization for each cause. Ratings lower than 5 indicate a higher degree of AI trust, whilst ratings higher than 5 suggest a lower level of AI trust. The figure comprises a color-scaled graphic in which the average rating for each specialization is shaded as a function of AI trust. The intensity of AI trust is strongest in green and weakest in red, as shown in the color scale below the figure. According to the literature, when the social implications of AI are not ignored, the regulation of this technology can advance in a more cautious, pluralistic, and contextualized manner [46].
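A minimal matplotlib sketch of such a color-scaled view is given below; the specializations, causes, and ratings shown are hypothetical, and the “RdYlGn_r” colormap is chosen to reproduce the green-to-red trust convention.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical average ratings (rows: specializations, cols: fear causes);
# ratings below 5 read as higher AI trust, above 5 as lower trust.
data = np.array([
    [3.8, 6.2, 5.1],
    [4.5, 5.4, 4.9],
    [6.1, 6.8, 5.7],
])
causes = ["job losses", "privacy", "fear of unknown"]
specs = ["software", "mechanical", "healthcare"]

fig, ax = plt.subplots()
# "RdYlGn_r" maps low ratings (high trust) to green and high ratings
# (low trust) to red, matching the figure's color convention.
im = ax.imshow(data, cmap="RdYlGn_r", vmin=1, vmax=10)
ax.set_xticks(range(len(causes)), labels=causes)
ax.set_yticks(range(len(specs)), labels=specs)
fig.colorbar(im, ax=ax, label="average fear rating (1-10)")
plt.show()
```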
12. Trend Analysis and Interpretation of AI Fear for Specializations Concerning Average Ratings
In Figure 5, the tables show the trend analysis and interpretation of AI fear across specializations in terms of average ratings. They also show the trend analysis and interpretation of AI trust in relation to the average rating: the average AI trust rating for each specialization under each cause of AI fear, in descending order from high trust to low trust (a lower rating indicating higher trust).
13. Analysis of Trend Variation in AI Fear
The factors impacting fear can also be categorized into three types, as shown in Figure 6. The blue line in the graph comprises specializations that are highly influenced by current technological advancements and data-driven technologies, the orange line indicates the average response of all specializations for the various causes of AI fear, and the black line indicates specializations that have a relatively lower range of AI fear than the others.
It is evident that specialists influenced by the current state of applicable technical breakthroughs are more likely than others to be aware of the likelihood that AI may be frightening, and as a result, they are more likely to be afraid of AI.
The specializations with a lower level of AI anxiety than the norm may handle technologically advanced information, yet their operations and workflows are heavily shaped by critical decision making.
By fitting a power law function to the trends in the graph, we notice that the specializations with a higher range of AI trust (lower range of AI fear), indicated by the black line, show the highest R2 value (0.1121); even this best fit is weak, indicating considerable variance/deviation within that trend.
This variation is driven particularly by the dip in the rating of job losses as a cause of AI fear. Such a variation in trend due to a lower AI fear of job losses indicates that the specializations that are highly influenced by critical decision making have a relatively lower fear of job losses due to AI. The specializations showing a lower fear of job losses comprise sales/marketing, medical and healthcare, academics, and the environment.
As a result, specializations that require a significant degree of human involvement, such as managerial and advisory jobs, have a reduced concern about job losses due to AI, because they make key judgments based on unknown variables. Given that the main subjective reasons individuals fear AI are fear of the unknown and privacy concerns, the trend variance in AI fear is less substantial when assessed objectively in terms of job losses. This suggests that certain populations are not concerned that AI will eliminate their jobs.
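As a rough illustration of the fitting procedure described above, the sketch below fits a power law y = a * x^b to one trend line and reports its R2; the trend values are hypothetical, with a dip at the job-losses cause to mimic the reported pattern.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

# Hypothetical average fear ratings of the low-fear group across six
# causes (indexed 1..6); the dip at index 3 stands in for job losses.
x = np.arange(1, 7, dtype=float)
y = np.array([4.6, 4.4, 3.1, 4.5, 4.2, 4.8])

params, _ = curve_fit(power_law, x, y, p0=(4.0, 0.0))
y_fit = power_law(x, *params)

# Coefficient of determination; a low R^2 (the paper reports 0.1121)
# means the fitted power law explains little of the trend's variance.
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"a = {params[0]:.2f}, b = {params[1]:.2f}, R^2 = {r2:.3f}")
```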
In terms of ignorance, job losses, and negative interpersonal relationships as causes of AI fear, young professionals on the whole show a higher level of AI trust (lower AI fear) than for causes like fear of the unknown and privacy hindrance. This provides substantial evidence for the optimistic direction in which AI advancements are moving, increasing the level of confidence in and trust toward AI acceptance in various industries.
14. Optimism Towards Acceptance of AI Technological Advancement
Figure 7 represents, in descending order, how positive graduating engineers are about advancing AI technology. Despite having some fears associated with certain causes, they are optimistic about various technical, societal, and ethical factors, and these fears do not constrain them from adopting an AI-influenced future. The figure was generated using the cumulative averages of AI fear for each specialization across all causes, with the specialization having the lowest AI fear showing the highest optimism towards AI technology.
Notably, a lack of engagement in areas such as business and consulting influences the level of genuine optimism. Academic specialties, such as researchers and academic-oriented occupations, energy, and human resources, have the greatest levels of optimism and would embrace AI technology far more enthusiastically than others. With a positive mindset, individuals would be more confident in using AI to enhance their jobs rather than fearing it. Low optimism in some specializations stems from concerns about difficulties that can be remedied with greater ethical awareness and with research that improves the technical understanding of AI, so that it comes to be seen as a necessity in a quickly changing society.
The data provided in the preceding eight tables are intended to objectively report the study findings, with brief observations on each topic, hypothesis, or theme to aid in answering the main research question.
The summary of results table (Table 1) organizes observations and measurements from the inquiry for easy communication and comparison with other investigations, and to highlight what is unique to this one.
15. How Engineers May Address Ethical and Fear-Based Concerns Regarding AI
Multidisciplinary research has brought up several issues for the AI community, such as fear concerns and ethical uncertainties. The relationship between human life, society, and technology constitutes a current subject in AI research that should be actively monitored. Even if engineers understand the importance of this relationship, AI will not obstruct the trilateral conversation until a workable solution is found. Nonetheless, engineers must be proactive in terms of their attitudes, personality dispositions, and collective efforts in combating technophobia and fostering trustworthy AI ethical principles [47].
Public trust and acceptance give critical credibility to maximizing AI’s technological and societal advantages. Although consumers may trust AI more in healthcare than in human resources, expectations of AI conduct should not depend on the artificial agent’s aim [48].
To guarantee that the system is safe and that algorithmic judgment does not cause harm, fairness, accountability, and dependability must be continuously checked. Trustworthy artificial intelligence systems must be valid, safe, secure, robust, transparent, explainable, and interpretable [49].
Trustworthy AI requires the existence of fewer AI worries. Research on AI worldwide is becoming increasingly engaged with the question of how much we should fear AI. Is there enough reason to be concerned about AI? While a lack of trust may characterize what fear is, ethics depends on trust, and studies have repeatedly shown that companies with a strong ethical and trustworthy culture do better financially than those without one. Fear can lead to ethical blindness, which impairs our capacity to consider moral implications. Fear of technology may harm people, businesses, and other groups by restricting people’s potential to grow and prosper. It can impede our progress because it impairs our capacity for logic and decision making, leaving us vulnerable to poor decision making, intense emotions, and impulsive behavior [50].
According to one of this study’s findings, recent university graduates have a relatively optimistic outlook on AI technology. AI appears in various applications including software, particularly in robots and proficient manufacturing, where it facilitates engineers’ and students’ creative thinking and supports them in defining concepts and prototyping. It is currently more important to assess students’ knowledge of how to use AI in their daily assignments to support their thinking, although it is thought that students in the learning phase would work hard to understand key course deliverables and be able to practice their knowledge to devise innovative solutions. Our inquiry is focused on whether students are benefiting from this correct support, or if it falls under the “calculator dilemma” (students ask for calculators to accomplish routine mental calculations).
According to the agreed resolution, AI will always be a tool and will never take the place of people. Intellectual qualities; accuracy of information supplied by AI; real data used by AI; appropriateness of models applied considering specific applications, such as face recognition differing between eastern and western populations; well-trained algorithms; continuing to use AI as a tool; development of specialization per application (no longer amateurism); and equity in treating populations and individuals in smart cities and services are just a few of the topics that may come up and be discussed.
The use of AI in multidisciplinary educational settings has raised a range of concerns that link to ethical issues and their effects on society, and the use of such models carries several social ramifications. Some of the steps to be taken may include the following:
Instruct students in the basics of ethics;
Spread knowledge about AI concepts and principles among all;
Take part in regional and global supervision meetings;
Investigate the moral ramifications individually for each situation.
Nonetheless, the pressing question of whether there is any ethical protection for us in engineering must be permitted and must not be neglected. It is expected that engineers will apply best practices and design safely. All engineering societies, such as the IEEE and ASME, have codes of ethics mandating that any technique adhere to the following guidelines:
Engineers and scientists should work within their technical areas of expertise.
One should work transparently and disclose results to those using them.
They should only make public statements in an unbiased and truthful manner.
They should refrain from any conduct that would bring discredit to their profession.
Scientific work should advance science and have a positive impact on society.
This includes AI and any other related technique to be used in engineering. When performing their professional duties, engineers must prioritize the safety, health, and welfare of the general population.
By applying their knowledge and expertise to improve human welfare, engineers preserve and promote the integrity, honor, and dignity of the engineering profession.
By working to raise the standing and level of competence of the engineering profession, engineers preserve and promote its integrity, honor, and dignity [51].
16. Limitations of the Applied AI Survey
The main potential limitation related to the study sample is that the quantity of respondents in some job roles may be small, affecting the reliability of the averages. Additionally, the results are based on self-reported responses, which may be subject to biases or inaccuracies.
Although the study focused on a more educated segment of the local society, genuine fears about AI may be more prevalent elsewhere.
Although the main advantage of using a numerical scale is its simplicity, some poll respondents may find it subjective. Respondents who held the same opinion but picked different categories may have contributed to survey response inaccuracy: a 5 on a scale of 1 to 10 might mean anything from good to barely passing. In addition, some people may find it considerably more difficult than others to justify selecting a category at the bottom of the scale.
17. Future Research
As consumers or creators, social science academics and AI technologists should concentrate more on thoroughly resolving the uncertainties in AI. The greatest dangers, challenges, and adverse impacts associated with AI, which may indicate that many concerns may soon come to pass, need to be the focus of increasingly substantial research initiatives. Studies on the unfavorable ethical and societal effects of AI technology, such as anxiety, unemployment, discrimination, terrorism, and privacy threats, require more rigorous consideration.
Research on society’s inspiring reaction to the rapid technical breakthroughs in AI is becoming more and more necessary. Increased study on the fairness of AI, which relates to non-biased decision making devoid of cybersecurity dangers, is required. Most studies in this field indicate that AI will have a significant influence on both the economy and society, and hence, the underlying societal ramifications need to be thoroughly discussed.
According to some experts, AI is developing significantly every day. It is therefore imperative to broaden the scope of research on the future of mankind under AI, including quality of life in general and whether AI is globally centralized or autonomous, before AI fully takes over modern life. The possible related negative effects, such as lost jobs, privacy issues, more features to find, and societal reactions, should also be considered.
18. Concluding Remarks
The impact of the current research in the fields of ethics and technology is that, in contemporary culture, where machines can rapidly discover and exploit data, it is vital to include values in the application of AI technology. Regarding this study’s relevance to the domain of research and the problem these results may address, the findings indicate that a genuine demonstration of competence and an optimistic outlook on AI technology necessitate an exceptional level of ethical awareness. Engineers’ viewpoints on expanding AI technology typically excite interest in the discussion of AI’s numerous technological, social, and ethical benefits. This report adds to the awareness that modern businesses are always looking for technical talent with a positive outlook on AI technology, as well as unique and specialized skills that are tough to find. With AI and robots capable of managing a wide range of low-level tasks, skilled engineers may demonstrate their abilities.
This study investigated AI’s trust and ethical perspectives among fresh engineers joining the sector. A comprehensive literature review was conducted to obtain correct information and deliver more dependable results. The literature has ample evidence that relates AI technology to potential concerns such as employment losses, data security, privacy, discrimination and bias, and other ethical ramifications.
To add to the wider ongoing inquiry, a quantitative analysis of data from a survey of 715 recently graduated engineers from various fields who use information technology regularly was performed. The study found that there is less concern about job loss due to AI in specializations requiring crucial decision making. This suggests that AI advancements are contributing to increased trust in AI adoption across various industries.
The primary takeaway is that young engineers are positive about many aspects of AI, which does not preclude them from accepting an AI-influenced future. From the more objective standpoint of job losses, the trend variance in AI fear is less significant, and technology will not replace fresh engineers in their jobs. Participants showed clear acceptance of and engagement with AI technology.
Although participants doubt that ignorance is the major cause of AI worries, they feel society is more familiar with technology and so immune to the potential harm posed by ignorance. This study found that the main subjective reasons why people fear AI are fear of the unknown and privacy concerns.
An important implication of this study is its focus on the need for human civilization to adopt a highly advanced positive view of AI benefits, which is vital in attaining technological achievement. According to the survey, while many graduates asserted AI-related concerns were scientifically questionable, society was unconcerned about technology displacing workers. Given its ongoing rapid growth, there is no need to be concerned about the scientific efficiency of AI technology in education and society.
It is expected that a variety of new employment opportunities will be created, AI worries will be reduced, and technology will become more trustworthy. Because young graduates are tomorrow’s AI users, and they can actually ensure the use of AI systems in a proper ethical manner, without exceeding what is necessary to achieve a legitimate aim, the expected outcomes of this inquiry have crucial consequences for professionals, policymakers, and practitioners.