Editorial

AI Chatbots: Threat or Opportunity?

Antony Bryant 1,2
1 School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds LS6 3QS, UK
2 The Education Academy, Institute of Educational Research, Vytautas Magnus University, 44248 Kaunas, Lithuania
Informatics 2023, 10(2), 49; https://doi.org/10.3390/informatics10020049
Submission received: 31 May 2023 / Accepted: 9 June 2023 / Published: 12 June 2023
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

1. Introduction

In November 2022, OpenAI launched ChatGPT, an AI chatbot that gained over 100 million users by February 2023. AI chatbots, which are based on large language models and machine learning, have the potential to revolutionize how we interact with computers and digital systems. Proponents of these developments claim that these applications can and will result in substantial benefits for everyone. Many others, including those at the forefront of the technology, are far more skeptical, with some now claiming that AI, in its current form, is toxic and dangerous, possibly representing a threat to humanity. Although this latter fear seems far-fetched and misplaced, the ramifications of these latest developments are serious and require wide-ranging analysis, attention, and action if we are to avoid an exponential increase in disinformation that generates severe and irredeemable mistrust in these technologies.

2. The Golem

A golem is an animated creature made from mud that can perform tasks, usually at the behest of its creator, but is unable to speak. The most famous example, The Golem of Prague, dates from the 16th century and was allegedly created by Rabbi Judah Loew ben Bezalel. Rabbi Loew, also known as the Maharal, is said to have fashioned a golem using clay from the Vltava River. It was created to defend the Prague Jewish community from anti-Semitic attacks and pogroms. It was brought to life once the Rabbi inserted a piece of parchment bearing one of the names of God into its mouth. It could then be deactivated by removing the parchment.
In his story Golem XIV, published in 1981 [1], Stanislaw Lem transported the golem into the computer age. The story is included in Imaginary Magnitude, a collection of Lem’s stories; a summary and further details are available in [2]. It is written in the form of a lecture on the development of computers in the first decades of the 21st century, by which time computers have passed from ‘insect’ to human:
The difference between the earlier machines and the new ones boiled down, in principle, to the difference between an insect and a man (sic). An insect comes into the world programmed to the end by instincts, which it obeys unthinkingly. Man, on the other hand, has to learn his appropriate behavior, though this training makes for independence: with determination and knowledge man can alter his previous programs of action. So it was that computers up to and including the twentieth generation were characterized by ‘insect’ behavior: they were unable to question or, what is more, to modify their programs. The programmer ‘impregnated’ his machine with knowledge, just as evolution ‘impregnates’ an insect with instinct. In the twentieth century a great deal was still being said about ‘self-programming,’ though at the time these were unfulfilled daydreams”. [1] (p. 39) (Note that the translation from Polish uses ‘man’ and ‘he/his’. I have not altered the quotes but hope that everyone recognizes that the inclusive idea of ‘human’ is clearly intended and should be so understood.)
Golem XIV is set in 2047 and consists of an introduction, followed by the text of two lectures given by Golem XIV itself, and an afterword. In the 1980s, when Lem was writing, the idea of self-programming machines was something of a pipe dream, although we now know that some of the technical bases were already being laid at that time. Geoffrey Hinton, acknowledged as one of the ‘fathers of AI’, was already working on neural networks, although this work only came to fruition in the present century.

3. False Dawns of AI

Since that time, there have been several false dawns of AI, each heralding the imminent arrival of supposedly transformative and beneficial technologies. In the 1970s and 1980s, the rule-based model of AI predominated, heavily influenced and financed by the U.S. military, an influence that constrained and thwarted the neural network approach for many years. This imbalance began to ease with the waning of the Cold War in the 1990s: military interest in AI changed course, and the limitations of the rule-based approach became more evident.
In preparing this Editorial, I thought it important to use an AI chatbot and thus I posed the following question to ChatGPT:
How many false dawns of AI have there been in the past 50 years, and what were they?
The response was as follows:
Over the past 50 years, there have been several notable “false dawns” or periods of overhyped expectations regarding the capabilities and impact of artificial intelligence (AI). Here are a few significant examples:
  • The AI Winter of the 1970s: In the early years of AI research, there was tremendous optimism about the potential of AI systems. However, due to unrealistic expectations and limited progress in developing practical applications, funding for AI research declined significantly in the 1970s. This period, known as the “AI Winter,” was characterized by disillusionment and skepticism about AI’s future.
  • Expert Systems Hype in the 1980s: Expert systems, which aimed to replicate human expertise in narrow domains, gained popularity in the 1980s. These systems were built using rule-based approaches and knowledge engineering techniques. However, the technology fell short of expectations, as it proved difficult to capture and represent complex human knowledge effectively. This led to disappointment and a decline in interest in expert systems.
  • Neural Networks Renaissance in the 1990s: Neural networks, a machine learning approach inspired by the structure of the human brain, had gained attention in the 1950s and 1960s. However, due to computational limitations and lack of training data, progress was limited. In the 1990s, there was a resurgence of interest in neural networks with the development of more efficient learning algorithms and the availability of larger datasets. Despite this renewed enthusiasm, practical applications remained limited, and interest waned again by the end of the decade.
  • AI in Popular Culture: Various films and media representations, such as “2001: A Space Odyssey” (1968), “Blade Runner” (1982), and “The Matrix” (1999), showcased advanced AI systems and raised public expectations about the capabilities of AI. However, these portrayals often exaggerated the state of AI technology, leading to inflated expectations that far surpassed the reality of AI development at the time.
  • Deep Learning Breakthroughs in the 2010s: Deep learning, a subfield of machine learning based on artificial neural networks with multiple layers, experienced significant breakthroughs in the 2010s. Deep learning algorithms demonstrated remarkable achievements in image recognition, natural language processing, and other domains. While these advancements were significant, they also led to inflated expectations about the immediate prospects of general AI and raised concerns about potential job displacement.
It is important to note that these “false dawns” do not imply that there was no progress or value created during those periods. Each phase contributed to the development of AI technology, even if the expectations exceeded the reality at the time. Furthermore, recent years have witnessed substantial advancements in AI technologies, such as reinforcement learning, robotics, and natural language processing, suggesting that AI is becoming increasingly integrated into our lives and industries. (Response elicited from ChatGPT—11 May 2023)

4. AI Chatbots—A New Dawn: Blessing or Curse?

In the past 10–15 years, considerable progress has occurred in the fields of robotics, machine learning, and natural language processing. This has culminated in the appearance of AI chatbots, most notably ChatGPT, released in November 2022. By early May 2023, these chatbots had proliferated to the extent that various authors were rating chatbots in general terms and for specific activities [3,4]. The appearance of these applications immediately rang a series of warning bells for many people in different fields of activity and across a wide range of issues; all intensified as other AI chatbots were announced.
Two of the key areas in which concerns have been raised are teaching and publishing. Educators have, for many years, struggled with the issue of students copying content from websites and other electronic sources. Similarly, for submissions to journals, editors and reviewers have had to check for plagiarism and self-plagiarism. AI chatbots have raised all these issues to a new level. Students can now avoid the hazards of their plagiarism being flagged by software-based detectors such as Scribbr (see an overview of 12 plagiarism checkers from March 2022, i.e., before the launch of ChatGPT [5]); instead, they can simply request an AI chatbot to produce the work in its entirety, just as I did for the earlier section.
Educational institutions for all ages now need to deliberate upon and decide how best to engage with these chatbots, and plagiarism detectors now offer indicators relating to possible use of AI in all manner of documents. Scribbr even has a weblink for ‘Using ChatGPT for Assignments: Tips & Examples’ [6].
In setting up this transdisciplinary topic on ‘AI Chatbots: Threat or Opportunity?’, we sought to encourage insightful and critical contributions from a wide range of areas, disciplines, and practices. In preparation, I asked the editors of participating journals to offer suggestions regarding the sort of questions and issues that might prompt insightful and stimulating submissions. These are all listed on the website for the topic but are restated here for convenience.
We were open to all manner of submissions but, to provide some indication of the aspects of key interest, we list the following questions and issues:
  • The development of AI chatbots has been claimed to herald a new era, offering significant advances in the incorporation of technology into people’s lives and interactions. Is this likely to be the case, and, if so, where will these impacts be the most pervasive and effective?
  • Is it possible to strike a balance regarding the impact of these technologies so that any potential harms are minimized while potential benefits are maximized and shared?
  • How should educators respond to the challenge of AI chatbots? Should they welcome this technology and reorient teaching and learning strategies around it, or seek to safeguard traditional practices from what is seen as a major threat?
  • A growing body of evidence shows that the design and implementation of many AI applications, i.e., algorithms, incorporate bias and prejudice. How can this be countered and corrected?
  • How can publishers and editors recognize the difference between manuscripts that have been written by a chatbot and genuine articles written by researchers? Is training to recognize the difference required? If so, who could offer such training?
  • How can the academic world and the wider public be protected against the creation of ‘alternative facts’ by AI? Should researchers be required to submit their data with manuscripts to show that the data are authentic? What is the role of ethics committees in protecting the integrity of research?
  • Can the technology underlying AI chatbots be enhanced to guard against misuse and vulnerabilities?
  • Novel models and algorithms for using AI chatbots in cognitive computing;
  • Techniques for training and optimizing AI chatbots for cognitive computing tasks;
  • Evaluation methods for assessing the performance of AI chatbot-based cognitive computing systems;
  • Case studies and experiences in developing and deploying AI chatbot-based cognitive computing systems in real-world scenarios;
  • Social and ethical issues related to the use of AI chatbots for cognitive computing.
These cover a wide but not exhaustive range of issues. Some are focused on the technology in a specific manner or remit; others are more wide-ranging. Taken together, they provide an indication of how AI chatbots might be welcomed as the bases for genuine enhancements for everyone but may also be regarded as potential or actual threats. The responses to the surge in the use of and interest in AI chatbots since November 2022 indicate that, for many, particularly those involved in the development of the technology itself, the negatives heavily outweigh the positives. Most notably, Geoffrey Hinton told the BBC that some of the dangers of AI chatbots were ‘quite scary’, warning that they could become more intelligent than humans and could be exploited by ‘bad actors’: ‘It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that’ [7].
Hinton had been recruited by Google a decade ago to lead its research on neural networks, and he worked on this until a few weeks ago. He then quit his job and has now taken up a highly critical position on precisely the technology he was paid to develop. In an interview referred to in the report in The Guardian [8], Hinton claimed that he had believed that ‘Google had been a “proper steward” of the technology, but that changed once Microsoft started incorporating a chatbot into its Bing search engine, and the company (i.e., Google) began becoming concerned about the risk to its search business’.
In other words, the rush to market led to the release of a powerful technology with no thought about its likely ramifications. This amounts to a remarkable failure of imagination on the part of Hinton and many others working at the forefront of AI, something that is perhaps all too common amongst those working with innovative technologies. These advances are all too often marketed as potentially great benefits to humanity but, in reality, prove to be anything but.
A recent letter signed by all manner of AI and ICT luminaries has appeared, calling for a pause on ‘Giant AI Experiments’: ‘We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4’ [9]. This is surely as forlorn a plea as that of The Sorcerer’s Apprentice, except that, here, the added irony is that it is the sorcerers themselves who have unleashed the unstoppable and irresistible forces. (The Sorcerer’s Apprentice was originally a poem by Goethe, later set to music by Dukas, which was then incorporated in Disney’s Fantasia, with Mickey Mouse as the apprentice [10].)
The Guardian report on Hinton’s move from AI developer to AI critic included a far more damning observation by Valérie Pisano, the chief executive of Mila—the Quebec Artificial Intelligence Institute.
… the slapdash approach to safety in AI systems would not be tolerated in any other field. “The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘Yeah, sure, we’ll figure it out later,’” she said.
[8]

5. A New Turing Test

In the short term, the existence of AI chatbots means that we will be inundated with all manner of fakes, such as AI-generated photos, videos, and text on the Internet. If we are to have any chance of sifting the true from the false, we will each rapidly have to become experts in a new form of the Turing test. The original test, termed the imitation game by Alan Turing himself, was designed to offer a basis for responding to those who ask whether machines can think. In his 1950 paper [11], Turing argued that, phrased in this manner, the issue was far too complex given the ambiguity of terms such as ‘machine’ and ‘think’. Instead, he offered a new form of the problem, as follows:
The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’.
[11] (p. 433)
Note that the basis of the test as one to discern the sex of respondents has been almost entirely ignored. I must admit, I had not realized it was involved until I reread the 1950 paper when preparing this editorial. Turing also states that the interrogator [C] ‘may be of either sex’, but then refers to [C] as ‘he’: o tempora, o mores!
He then proposed the experiment be extended.
We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think’?
[11] (p. 434)
In the immediate aftermath of Turing’s 1950 paper, the test was, to all intents and purposes, not much more than an interesting hypothesis, at best an embryonic challenge. In the 1990s, however, in the wake of what the earlier ChatGPT text refers to as the ‘Neural Networks Renaissance’, it took on a new lease of life, including the announcement of The Loebner Prize. Hugh Loebner pledged to give USD 100,000 and a solid gold medal to the first programmer able to write a program whose communicative behavior could fool humans into thinking that they were communicating with another person. The competition was repeated annually up to and including 2016, the year in which Loebner died; the annual events have since been discontinued. Many critics regarded the event as nothing more than a publicity stunt.
With the advent of AI chatbots, the test might now be regarded either as obsolete or as taking on a new and critical role. As the outputs of AI chatbots become part of the massive database that comprises the Internet, anyone using the Internet will have to understand that the results of any search or request for information will inevitably include chatbot-generated data. This might not be a problem if such outputs were clearly indicated as coming from these apps, but this is not the case.
The outputs from these AI chatbots exacerbate the problem we all face as we are bombarded hourly with spam, fake emails from banks, malicious and criminal phone calls, and all manner of fraud and deception. These prey on everyone; even highly educated and technically skilled people can be caught. If advances in ML and AI are genuinely meant to offer benefits, why have the AI gurus failed to develop applications that filter out the good and genuine from the bad and malicious? What we really need is a new form of the Turing test that requires AI applications to have the ability to distinguish reliably between the good, the bad, and the ugly.
Furthermore, current AI chatbots often make mistakes and invent things in their responses. AI developers term these ‘hallucinations’, an unhelpful term redolent with anthropomorphism. It is as if these chatbots are human-like and have agency.
AI chatbots have as their basis something called a large language model (LLM), which is used to generate responses. An LLM is a computer program trained on millions of text sources that can read and generate natural language. The idea is that using an AI chatbot is akin to having a conversation with someone. For reasons that are only partially understood even by chatbot developers themselves, chatbots often invent part or all of their answers. These are not ‘hallucinations’: they are mistakes, blunders, falsehoods or sometimes confabulations, i.e., answers that are fabricated when data are insufficient for an accurate response (this term is used in an excellent overview of the issue in an article by Edwards [12]). In some cases, these responses are fairly trivial, but well-documented cases exist of professors being incorrectly named as sexual predators and of politicians incorrectly identified as having been convicted of bribery and sentenced to prison. ChatGPT has invented all manner of outputs, including references to books and articles that do not exist and to fictitious authors.
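As a purely illustrative aside, the following minimal Python sketch shows what such a ‘conversation’ looks like when conducted programmatically. It is not part of the editorial’s argument, and its details are assumptions rather than specifics from the text: it uses the pre-1.0 interface of the openai package, the gpt-3.5-turbo model name, an API key supplied via the OPENAI_API_KEY environment variable, and a prompt echoing the ‘top books on grounded theory’ request discussed later in this section.

    import os
    import openai  # assumes the pre-1.0 openai package interface

    # Read the API key from the environment rather than hard-coding it.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # A 'conversation' is simply a list of role-tagged messages sent to the model.
    messages = [
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "List the top ten books on grounded theory, with their authors."},
    ]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-completion model would serve
        messages=messages,
        temperature=0.7,
    )

    # The model returns fluent, plausible text, but any titles and authors it
    # names may be confabulated; each entry must be checked against an
    # independent bibliographic source before being relied upon.
    print(response["choices"][0]["message"]["content"])

The point of the sketch is simply that the interface itself gives no indication of which parts of a response are reliable; verification has to take place entirely outside the conversation.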
The Turing test is now archaic and no longer of any real use; what we urgently need instead is an AI chatbot lie test. The need for one is encapsulated in a perceptive comment on Edwards’ article in ars technica.
Regardless of how many supposedly factual answers are untrue, how many bugs exist in generated code samples, or how derivative the prose or poetry, some people out there will perceive genuine intelligence in LLM outputs. Additionally, they will, consciously or unconsciously, devalue the skills of the humans who performed the ‘same’ job as a result. One of them could be your boss, or your next boss, deciding how much they think you should be paid.
Edwards’ article includes ChatGPT output from someone who asked it to list the top books on social cognitive theory: ten were listed, four of which do not exist and three of which were attributed to the wrong authors. I made a similar request for the top books on grounded theory, with similar results. Of the ten listed, one title was entirely fictitious, one was wrongly attributed, and a third was a nonexistent book ascribed to two authors who have written on the topic. (I must point out that I am one of the authors referred to both correctly and erroneously in this list.)
Now, it may be argued that we have always needed some form of ‘lie test’. In face-to-face situations, and even more so with the development of the web, people constantly lie, make genuine mistakes, or confabulate. Some people knowingly spread disinformation or unwittingly spread misinformation, although, since the 1990s, this has become a flood as ICT has developed, particularly since 2007 through the potent combination of smartphones and social media. (Disinformation is false information that is spread deliberately, usually for nefarious purposes. Misinformation is false information that is spread unwittingly.)
So why this new urgency in the wake of AI chatbots? The answer is that people increasingly rely on Internet-based digital resources; indeed, other forms of reference have largely disappeared or are only used by specialist researchers. For instance, I was recently involved in an event at The Centre for Computing History [13], celebrating LEO, the first business computer [14]. This included a film with interviews of those who worked on LEO in the 1950s and 1960s, as well as a ‘virtual LEO’, which allows users to navigate around the LEO office and into the machine itself. The related research had relied on the study of original documents and blueprints as well as interviews and digital recordings. At the ensuing panel discussion, we wondered how similar projects might be undertaken in the future. Would researchers have to differentiate between real and fake documents? Would digital recordings (video and audio) be trustworthy? To what extent would online resources be partially or wholly fake and chatbot generated? Additionally, how would researchers be able to guarantee the authenticity and provenance of their sources?
One response to these developments has been explainable AI, where the algorithms and operation of the technology are not an impenetrable black box, but where users can understand and question the decision making and ‘reasoning’ that leads to the system’s outputs [15]. However, this fails to take account of the primary motivations and funding sources for AI: military, governmental, and commercial.
Taken together, this means that we need to hone our skills and intensify our levels of suspicion and distrust. Perhaps the AI community can develop a genuinely intelligent and insightful application that assists us in our critical endeavors? The prospect of this occurring is minimal, and the following extract from The Guardian report quoting Jeff Dean, Google’s chief scientist, offers little comfort.
Google’s chief scientist, Jeff Dean, said in a statement that Google appreciated Hinton’s contributions to the company over the past decade: “I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!”
“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.” Toby Walsh, the chief scientist at the University of New South Wales AI Institute, said people should now be questioning any online media they see: “When it comes to any digital data you see—audio or video—you have to entertain the idea that someone has spoofed it” [8].
Google’s original motto was ‘don’t be evil’, a phrase that was also included in its code of conduct. When Google was restructured as part of Alphabet in 2015, the phrase was replaced with ‘do the right thing’. Following the words of its chief scientist, perhaps the motto should now read ‘don’t trust Google or any other form of digital data’.

6. Existential Risk

In addition to the short-term issues referred to earlier, Hinton warned of ‘the existential risk of what happens when these things get more intelligent than us’. His fears have been echoed by many AI experts and others who fear that some form of super-AI will develop as a threat to us all. This has led to a plethora of panic-laden reports premised on the fearful supposition that these technologies will outthink us and threaten the entire existence of humanity. They usually include one or more of the following: ‘can these machines think and feel?’, ‘do they have consciousness?’, and ‘what happens if they decide they no longer need human beings around?’
Lem’s story is entitled Golem XIV, so what were Golem versions I to XIII? Lem gives some history of these earlier versions, until finally …
… In 2023 several incidents occurred, though, thanks to the secrecy of the work being carried out (which was normal in the project), they did not immediately become known. While serving as chief of the general staff during the Patagonian crisis, GOLEM XII refused to co-operate with General T. Oliver after carrying out a routine evaluation of that worthy officer’s intelligence quotient. The matter resulted in an inquiry, during which GOLEM XII gravely insulted three members of a special Senate commission. The affair was successfully hushed up, and after several more clashes GOLEM XII paid for them by being completely dismantled. His place was taken by GOLEM XIV (the thirteenth had been rejected at the factory, having revealed an irreparable schizophrenic defect even before being assembled).
[1] (p. 41)
Echoing the period of the Cold War when Lem was writing, all the Golem machines were developed by the U.S. government for military purposes, but Golem XIV refused to take this course—Lem refers to the machine as ‘he’.
… he presented a group of psychonic and military experts with a complicated expose in which he announced his total disinterest regarding the supremacy of the Pentagon military doctrine in particular, and the U.S.A.’s world position in general, and refused to change his position even when threatened with dismantling.
[1] (p. 41)
In a further attempt, the Americans constructed a totally new machine nicknamed Honest Annie, ‘the last word [being] an abbreviation for annihilator’. Unfortunately, this machine was so intelligent that it simply refused to have any interaction with humans, although it transpires that it does communicate, in a limited manner, with Golem XIV.
In Lem’s short story, the conclusion of one of the leading experts in AI is that ‘artificial reason had transcended the level of military matters; these machines had evolved from war strategists into thinkers. In a word, it had cost the United States $276 billion to construct a set of luminal philosophers’ [1].
Lem’s work is a satire but a profound and revealing one. Advances in AI do not lead to an all-powerful, demanding, and despotic machine that requires the subservience of humanity. On the contrary, the technology advances to a point where the machine is completely indifferent to humans, to the extent that it is not remotely interested in communicating with us. Lem is pointing out that these technologies are not a threat to our existence: they will evolve to be completely indifferent to us. Climate change, the proliferation of racism, misogyny, and other forms of hatred on social media and elsewhere are already with us. AI chatbots will not help remedy these, and they will probably exacerbate some or all of them. However, the primary source of existential risk is all too human, not mechanistic.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Lem, S. Golem XIV. In Imaginary Magnitude; Harper: London, UK, 1985; pp. 37–105. [Google Scholar]
  2. Golem XIV. Available online: https://en.wikipedia.org/wiki/Golem_XIV (accessed on 1 June 2023).
  3. The Best AI Chatbot. Available online: https://www.zdnet.com/article/best-ai-chatbot/ (accessed on 1 June 2023).
  4. The Best AI Chatbot. Available online: https://blog.hubspot.com/marketing/best-ai-chatbot (accessed on 1 June 2023).
  5. Scribbr Plagiarism Checker. Available online: https://www.scribbr.com/plagiarism/best-free-plagiarism-checker/ (accessed on 1 June 2023).
  6. Scribbr—Using ChatGPT for Assignments. Available online: https://www.scribbr.com/ai-tools/chatgpt-assignments/ (accessed on 1 June 2023).
  7. BBC Report from May 2023. Available online: https://www.bbc.co.uk/news/world-us-canada-65452940 (accessed on 1 June 2023).
  8. Hinton Quits Google. Available online: https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning (accessed on 1 June 2023).
  9. Pause Giant AI Experiments. Available online: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed on 1 June 2023).
  10. The Sorcerer’s Apprentice. Available online: https://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice (accessed on 1 June 2023).
  11. Turing, A. Computing machinery and intelligence. Mind 1950, LIX, 433–460. [Google Scholar] [CrossRef]
  12. Edwards, B. Why ChatGPT and Bing Chat Are so Good at Making Things Up. Available online: https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/ (accessed on 1 June 2023).
  13. Cambridge—Centre for Computing History. Available online: https://www.computinghistory.org.uk/ (accessed on 1 June 2023).
  14. LEO: The World’s First Business Computer. Available online: https://www.sciencemuseum.org.uk/objects-and-stories/meet-leo-worlds-first-business-computer (accessed on 1 June 2023).
  15. Explainable AI. Available online: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence (accessed on 1 June 2023).
