AI Chatbots: Threat or Opportunity?
1. Introduction
2. The Golem
The difference between the earlier machines and the new ones boiled down, in principle, to the difference between an insect and a man (sic). An insect comes into the world programmed to the end by instincts, which it obeys unthinkingly. Man, on the other hand, has to learn his appropriate behavior, though this training makes for independence: with determination and knowledge man can alter his previous programs of action. So it was that computers up to and including the twentieth generation were characterized by ‘insect’ behavior: they were unable to question or, what is more, to modify their programs. The programmer ‘impregnated’ his machine with knowledge, just as evolution ‘impregnates’ an insect with instinct. In the twentieth century a great deal was still being said about ‘self-programming,’ though at the time these were unfulfilled daydreams. [1] (p. 39) (Note that the translation from Polish uses ‘man’ and ‘he/his’. I have not altered the quotes but hope that everyone recognizes that the inclusive idea of ‘human’ is clearly intended and should be so understood.)
3. False Dawns of AI
How many false dawns of AI have there been in the past 50 years, and what were they?
- The AI Winter of the 1970s: In the early years of AI research, there was tremendous optimism about the potential of AI systems. However, due to unrealistic expectations and limited progress in developing practical applications, funding for AI research declined significantly in the 1970s. This period, known as the “AI Winter,” was characterized by disillusionment and skepticism about AI’s future.
- Expert Systems Hype in the 1980s: Expert systems, which aimed to replicate human expertise in narrow domains, gained popularity in the 1980s. These systems were built using rule-based approaches and knowledge engineering techniques. However, the technology fell short of expectations, as it proved difficult to capture and represent complex human knowledge effectively. This led to disappointment and a decline in interest in expert systems.
- Neural Networks Renaissance in the 1990s: Neural networks, a machine learning approach inspired by the structure of the human brain, had gained attention in the 1950s and 1960s. However, due to computational limitations and lack of training data, progress was limited. In the 1990s, there was a resurgence of interest in neural networks with the development of more efficient learning algorithms and the availability of larger datasets. Despite this renewed enthusiasm, practical applications remained limited, and interest waned again by the end of the decade.
- AI in Popular Culture: Various films and media representations, such as “2001: A Space Odyssey” (1968), “Blade Runner” (1982), and “The Matrix” (1999), showcased advanced AI systems and raised public expectations about the capabilities of AI. However, these portrayals often exaggerated the state of AI technology, leading to inflated expectations that far surpassed the reality of AI development at the time.
- Deep Learning Breakthroughs in the 2010s: Deep learning, a subfield of machine learning based on artificial neural networks with multiple layers, experienced significant breakthroughs in the 2010s. Deep learning algorithms demonstrated remarkable achievements in image recognition, natural language processing, and other domains. While these advancements were significant, they also led to inflated expectations about the immediate prospects of general AI and raised concerns about potential job displacement.
4. AI Chatbots—A New Dawn: Blessing or Curse?
- The development of AI chatbots has been claimed to herald a new era, offering significant advances in the incorporation of technology into people’s lives and interactions. Is this likely to be the case, and, if so, where will these impacts be the most pervasive and effective?
- Is it possible to strike a balance regarding the impact of these technologies so that any potential harms are minimized while potential benefits are maximized and shared?
- How should educators respond to the challenge of AI chatbots? Should they welcome this technology and reorient teaching and learning strategies around it, or seek to safeguard traditional practices from what is seen as a major threat?
- A growing body of evidence shows that the design and implementation of many AI applications, i.e., algorithms, incorporate bias and prejudice. How can this be countered and corrected?
- How can publishers and editors recognize the difference between manuscripts that have been written by a chatbot and genuine articles written by researchers? Is training to recognize the difference required? If so, who could offer such training?
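One family of signals that such training might cover can be illustrated with a deliberately crude stylometric heuristic. This is a hypothetical toy, not a real detector: it measures only the variance of sentence lengths (sometimes called ‘burstiness’, one of several proxies suggested for distinguishing machine-generated prose) and is easily fooled. The sample texts are invented for illustration.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Human prose tends to mix short and long sentences; machine-generated
    text is often more uniform. This is a crude proxy only, not a
    reliable classifier.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Invented sample texts: one with uniform sentence lengths, one varied.
uniform = "The cat sat down. The dog ran off. The sun came up."
varied = ("Yes. The committee met for several hours to discuss the "
          "findings in detail. No one objected.")
```

The point of the sketch is not that this measure works well (it does not), but that any single stylometric signal is trivially defeated by prompting a chatbot to vary its style, which is one reason the detection problem posed above remains open.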
- How can the academic world and the wider public be protected against the creation of ‘alternative facts’ by AI? Should researchers be required to submit their data with manuscripts to show that the data are authentic? What is the role of ethics committees in protecting the integrity of research?
- Can the technology underlying AI chatbots be enhanced to guard against misuse and vulnerabilities?
- Novel models and algorithms for using AI chatbots in cognitive computing;
- Techniques for training and optimizing AI chatbots for cognitive computing tasks;
- Evaluation methods for assessing the performance of AI chatbot-based cognitive computing systems;
- Case studies and experiences in developing and deploying AI chatbot-based cognitive computing systems in real-world scenarios;
- Social and ethical issues related to the use of AI chatbots for cognitive computing.
… the slapdash approach to safety in AI systems would not be tolerated in any other field. “The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘Yeah, sure, we’ll figure it out later,’” she said.[8]
5. A New Turing Test
The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’.[11] (p. 433)
We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’[11] (p. 434)
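The structure of the imitation game Turing describes can be sketched as a toy simulation. Everything below is a hypothetical stand-in: the respondents are canned scripts, not real players, and the interrogator relies on a single give-away phrase. The sketch shows only the shape of the protocol: two respondents behind labels X and Y, assigned at random, and an interrogator who must decide which is the machine.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for the human player: answers plainly.
    canned = {"Are you a machine?": "No, I am not."}
    return canned.get(question, "I'm not sure how to answer that.")

def machine_respondent(question: str) -> str:
    # Stand-in for the machine taking part A: tries to imitate a human,
    # but (for illustration) has a scripted verbal tic.
    canned = {"Are you a machine?": "No, of course not."}
    return canned.get(question, "Could you rephrase that?")

def imitation_game(interrogator, rounds: int = 100, seed: int = 0) -> float:
    """Play repeated rounds; return the fraction in which the
    interrogator correctly identifies which label hides the machine."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        # Randomly assign the machine and the human to labels X and Y.
        players = [machine_respondent, human_respondent]
        rng.shuffle(players)
        guess = interrogator(players[0], players[1])  # 0 means 'X is the machine'
        truth = 0 if players[0] is machine_respondent else 1
        if guess == truth:
            correct += 1
    return correct / rounds

def naive_interrogator(x, y) -> int:
    # Asks one question and keys on the machine's scripted tic;
    # guesses at random if neither answer shows it.
    ax, ay = x("Are you a machine?"), y("Are you a machine?")
    if "course" in ax:
        return 0
    if "course" in ay:
        return 1
    return random.randrange(2)

accuracy = imitation_game(naive_interrogator)
```

Turing's question is precisely whether a real machine can avoid such tells: the game is passed when the interrogator's accuracy falls to what it would be against two humans.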
6. Existential Risk
… In 2023 several incidents occurred, though, thanks to the secrecy of the work being carried out (which was normal in the project), they did not immediately become known. While serving as chief of the general staff during the Patagonian crisis, GOLEM XII refused to co-operate with General T. Oliver after carrying out a routine evaluation of that worthy officer’s intelligence quotient. The matter resulted in an inquiry, during which GOLEM XII gravely insulted three members of a special Senate commission. The affair was successfully hushed up, and after several more clashes GOLEM XII paid for them by being completely dismantled. His place was taken by GOLEM XIV (the thirteenth had been rejected at the factory, having revealed an irreparable schizophrenic defect even before being assembled).[1] (p. 41)
… he presented a group of psychonic and military experts with a complicated exposé in which he announced his total disinterest regarding the supremacy of the Pentagon military doctrine in particular, and the U.S.A.’s world position in general, and refused to change his position even when threatened with dismantling.[1] (p. 41)
References
- Lem, S. Golem XIV. In Imaginary Magnitude; Harper: London, UK, 1985; pp. 37–105.
- Golem XIV. Available online: https://en.wikipedia.org/wiki/Golem_XIV (accessed on 1 June 2023).
- The Best AI Chatbot. Available online: https://www.zdnet.com/article/best-ai-chatbot/ (accessed on 1 June 2023).
- The Best AI Chatbot. Available online: https://blog.hubspot.com/marketing/best-ai-chatbot (accessed on 1 June 2023).
- Scribbr Plagiarism Checker. Available online: https://www.scribbr.com/plagiarism/best-free-plagiarism-checker/ (accessed on 1 June 2023).
- Scribbr—Using ChatGPT for Assignments. Available online: https://www.scribbr.com/ai-tools/chatgpt-assignments/ (accessed on 1 June 2023).
- BBC Report from May 2023. Available online: https://www.bbc.co.uk/news/world-us-canada-65452940 (accessed on 1 June 2023).
- Hinton Quits Google. Available online: https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning (accessed on 1 June 2023).
- Pause Giant AI Experiments. Available online: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed on 1 June 2023).
- The Sorcerer’s Apprentice. Available online: https://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice (accessed on 1 June 2023).
- Turing, A. Computing machinery and intelligence. Mind 1950, LIX, 433–460.
- Edwards, B. Why ChatGPT and Bing Chat Are so Good at Making Things Up. Available online: https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/ (accessed on 1 June 2023).
- Cambridge—Centre for Computing History. Available online: https://www.computinghistory.org.uk/ (accessed on 1 June 2023).
- LEO: The World’s First Business Computer. Available online: https://www.sciencemuseum.org.uk/objects-and-stories/meet-leo-worlds-first-business-computer (accessed on 1 June 2023).
- Explainable AI. Available online: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence (accessed on 1 June 2023).
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bryant, A. AI Chatbots: Threat or Opportunity? Informatics 2023, 10, 49. https://doi.org/10.3390/informatics10020049