Article

Generative Artificial Intelligence, Human Agency and the Future of Cultural Heritage

by
Dirk H. R. Spennemann
Gulbali Institute, Charles Sturt University, P.O. Box 789, Albury, NSW 2640, Australia
Heritage 2024, 7(7), 3597-3609; https://doi.org/10.3390/heritage7070170
Submission received: 15 May 2024 / Revised: 1 July 2024 / Accepted: 5 July 2024 / Published: 9 July 2024
(This article belongs to the Special Issue AI and the Future of Cultural Heritage)

Abstract

The first half of 2023 was dominated by a public discussion of the nature and implications of generative artificial intelligence (genAI) models, which are poised to become the most significant cross-cultural global disruptor since the invention of the World Wide Web. It can be predicted that genAI will affect how cultural heritage is managed and practiced, primarily by providing analysis and decision-making tools, but also through genAI-generated texts and images, in particular reconstructions of objects and sites. The more speculative interpretations of contexts and alternative interpretations generated by genAI models may constitute manifestations of cultural heritage in their own right. But do these constitute human cultural heritage, or are they AI cultural heritage? This paper is a deliberation on the realities and future(s) of cultural heritage in a genAI and post-genAI world.

1. Introduction

The public release of ChatGPT, a generative artificial intelligence (genAI) language model, in November 2022 generated widespread public interest in the abilities of genAI tools, but also concern about the implications of the application for academia, depending on whether it was deemed benevolent (e.g., supporting analysis or education) [1,2,3,4] or malevolent (e.g., assignment writing and academic misconduct) [5,6,7]. During the past few months, a plethora of papers has been written looking at the role genAI tools may have in a wide range of professions, such as agriculture [8], architecture [9], chemistry [10], computer programming [11], diabetes education [12], medicine [13], nursing education [14], and radiology [15], as well as cultural heritage management and museum studies (see below). GenAI tools are poised to become the most significant cross-cultural global disruptor since the invention of the World Wide Web thirty years ago.
Prior to the public release of ChatGPT, public perceptions of artificial intelligence were largely based on representations in sci-fi literature and films, which are known to generate expectations and markets but also fears and distrust [16,17,18,19]. Since its release, there has been widespread media attention to its opportunities and its potential impact on “the world as we know it”. In the area of cultural heritage, genAI will affect how heritage is managed and practiced, primarily by offering analytical and decision-making tools that can not only rapidly aggregate and synthesize large amounts of data but also generate reconstructions of objects and sites, along with speculative interpretations of contexts. The question arises whether the latter constitute manifestations of cultural heritage in their own right, and, if so, whether they are expressions of human cultural heritage or whether they form part of an incipient cultural heritage of AI.
Following a brief background on the nature of genAI, this paper will examine the relationship between genAI and human agency as well as aspects of genAI and authorship before discussing genAI as it is currently utilized in the cultural heritage field. These sections lay the foundation for a deliberation of the future(s) of cultural heritage in a genAI and post-genAI world. Given that this paper is a deliberation, it does not follow the standard IMRAD (Introduction, Methodology, Results and Discussion) format of papers.

2. Background to Generative Artificial Intelligence Tools

Generative artificial intelligence (genAI) tools generate a textual, visual, or auditory output based on plain-language instructions (‘prompts’) provided by a user via an interface. GenAI large language models, such as GoogleBard or those based on the GPT (Generative Pre-trained Transformer) model (ChatGPT, GPT-4, BingChat, DeepAI), are deep learning models that use a transformer architecture to detect statistical connections and patterns in textual data. From these connections they can generate coherent, contextually relevant, human-like responses based on the input they receive via prompts [20,21]. A large and diverse body of textual materials, such as books (both fiction and non-fiction), articles, and webpages, provides a reference dataset that is used to pre-train such models. This pre-training, which is carried out with human interaction and guidance, teaches such models to anticipate the following word in a text string by moderating statistical patterns with linguistic patterns and semantic fields. As the outputs of genAI language models are fundamentally merely complex predictions stemming from statistical relationships and patterns, they can be subject to inverted logic phenomena [22]. The depth and complexity of the responses a genAI large language model is capable of are correlated with the size of the training dataset and the nature of the textual resources incorporated into that dataset [23,24]. The currency of the responses depends on the cut-off date for the data included in the training dataset and whether the language model has the ability to search the World Wide Web in real time (as BingChat or GoogleBard do). While genAI applications are designed to respond within ethical boundaries and provide answers that do not cause harm, their ethical valence can be inverted with a judicious use of prompts [25].
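To make the prediction mechanism described above concrete, the following minimal Python sketch illustrates the principle of next-word prediction from learned statistics. The probability table, vocabulary, and function names are invented purely for illustration; real large language models operate over tens of thousands of tokens and billions of learned parameters rather than a hand-written lookup table.

```python
import random

# Hypothetical, hand-written next-word statistics (illustration only); a real model
# learns such distributions over an enormous vocabulary during pre-training.
NEXT_WORD_PROBS = {
    "cultural": {"heritage": 0.62, "landscape": 0.21, "values": 0.17},
    "heritage": {"management": 0.48, "values": 0.33, "sites": 0.19},
}

def predict_next(previous_word: str) -> str:
    """Sample the next word from the 'learned' distribution for the preceding word."""
    dist = NEXT_WORD_PROBS.get(previous_word, {"[end]": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt: str, max_words: int = 5) -> str:
    """Extend a prompt one word at a time, as a (vastly simplified) language model does."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt == "[end]":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("cultural"))  # e.g., "cultural heritage management"
```

The point of the sketch is that the output is a chain of statistically plausible continuations, not the product of intent or understanding; the same limitation underlies the inverted logic phenomena noted above.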
GenAI image generators, such as DALL-E, Midjourney, or DeepAI’s Image Generator, are deep learning models that use an autoencoder architecture to encode the text of a prompt into a representation space, map this to a corresponding image encoding that captures the semantic information of the prompt, and then stochastically generate an image that is a visual manifestation of this semantic information. As is the case with large language models, the size and composition of the image dataset define the complexity and accuracy of the output. DALL-E, for example, draws on OpenAI’s CLIP (Contrastive Language-Image Pre-training) model, which was trained on hundreds of millions of images and their associated captions [26,27,28,29]. As with genAI language models, the outputs of genAI image models are fundamentally just sophisticated predictions stemming from statistical relationships and patterns found in the tagged image data in their training sets. Depending on the nature and depth of the dataset and the nature of the human prompts, genAI image models will generate images ranging from the surreal (Figure 3) to images such as “The Next Rembrandt”, a painting generated with 3D printing that is indistinguishable in style from a real Rembrandt [30].
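The encode, map, and decode flow described above can be sketched schematically. The Python fragment below is purely illustrative and relies on invented stand-in functions (encode_text, text_to_image_embedding, decode_image); it is not the actual DALL-E, CLIP, or diffusion code, but it shows how a prompt is turned into a semantic embedding, mapped into an image representation space, and then stochastically rendered.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(prompt: str) -> np.ndarray:
    """Stand-in for a CLIP-style text encoder: prompt -> normalized semantic embedding."""
    vec = np.zeros(512)
    for word in prompt.lower().split():
        vec[hash(word) % 512] += 1.0          # crude hashed bag-of-words, illustration only
    return vec / (np.linalg.norm(vec) + 1e-9)

def text_to_image_embedding(text_emb: np.ndarray) -> np.ndarray:
    """Stand-in for the 'prior' that maps a text embedding to an image embedding."""
    projection = rng.standard_normal((512, 512)) / np.sqrt(512)  # untrained, illustrative
    return projection @ text_emb

def decode_image(image_emb: np.ndarray, noise_scale: float = 0.1) -> np.ndarray:
    """Stand-in for the stochastic decoder: embedding -> 64x64 grayscale 'image'."""
    canvas = rng.standard_normal((64, 64)) * noise_scale          # stochastic component
    signal = image_emb[:64].reshape(64, 1) @ image_emb[64:128].reshape(1, 64)
    return canvas + signal                                        # semantics plus random detail

prompt = "a palm-studded tropical island with the Apollo 11 command module on the beach"
image = decode_image(text_to_image_embedding(encode_text(prompt)))
print(image.shape)  # (64, 64): a purely schematic stand-in for a generated image
```

The stochastic component in the decoder is why the same prompt yields a different image on every run, while the mapped embedding keeps the output anchored to the semantic content of the prompt.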

3. GenAI and Human Agency

People’s perspectives and standpoints are shaped by the familial, communal, educational, socio-political, and historical contexts in which they are enculturated and by the subsequent life experiences they have gained [31,32,33]. In a world without genAI, these prompt intent and then inform interpersonal communication, shape a person’s research as they draw on and interpret publications (which themselves were generated by authors with their own enculturation and perspectives), and influence a person’s creativity (Figure 1).
In a world with genAI, a person’s enculturation and perspective continue to shape the intent of their communications, research, and creativity, and their interaction with genAI is circumscribed by complex sets of instructions (prompts) which can be modified to influence the output. The resulting product is generated partially or in its entirety by genAI language models or image generators (Figure 2). Unlike ‘traditional’ human interaction and creativity, where an outcome or creation is generated incrementally and is thus modifiable at each step along the way to the final product, a genAI-created output is complete, and any modification will again produce a new, fully complete product. Irrespective of this, human intent and creativity (via prompts) are critical to the eventual outcome.
GenAI language and image models are tools, guided by human input. Unlike other tools of the digital age, such as word processors, spreadsheets, image manipulators (such as Photoshop), or statistics packages, where the process is incremental and the programs merely automate otherwise ‘manual’ step-by-step processes in a transparent fashion, genAI functions as a ‘black box’ where the processes that lead to an output/result are not transparent. This creates a sense of distance and dissonance, which results in an overestimation of both the capabilities and the dangers inherent in genAI.
Fundamentally, genAI lacks the capacity for intent and autonomous creative thinking. As noted, the outputs of genAI language models (image generators) are fundamentally just sophisticated text (image) predictions stemming from statistical relationships and patterns found in the text (pictorial) data in their training sets. Any perceived creativity from genAI models is purely based on an interpretation by the individual who is interacting with that model. Whether a genAI-written poem, for instance, is considered creative and ‘fit for purpose’ or simply dismissed as bad poetry, is determined by the reader’s personal experiences, expectations, and interpretation of the output. The same applies to images generated by genAI (Figure 3).

4. GenAI and Authorship

It is a universally accepted principle that the authorship of a literary, dramatic, musical, or artistic work rests with its creator (singular or plural), who is expected to have displayed intent and creativity, to have determined the work’s form and content, and to have invested skill and effort in creating the work [34]. The legal interpretation of copyright law under U.S. law, for example, holds that authorship can only be vested in humans and that works created by animals or by machines without human intervention cannot be copyrighted [35,36]. The boundaries of authorship were tested in the case of the ‘monkey selfie’, where a Sulawesi crested macaque (Macaca nigra), who had been given access to a tripod-mounted camera, accidentally took a ‘selfie’ when exploring the object. The courts ruled that, as non-human beings, monkeys neither had standing in court nor were entitled to copyright protection over their ‘works’, which, in essence, lacked evidence of intent, creativity, and the determination of the work’s form and content [37,38,39]. The same argument currently extends to works created by non-sentient genAI [36,37,40].
Fundamental to authorship is that the author initiates and conceptualizes an original piece of writing, or in the case of academic authorship, initiates and conceptualizes an original piece of research and then proceeds to report on framing, methodology, process, results, and implications. Also fundamental to authorship is that authors must be accountable for the content expressed in their published work and be able to manage copyright and license agreements. In the case of academic authorship, authors must be accountable for the academic integrity of the design and conduct of their research, the originality of the written text, and for declaring any conflicts of interest. By their nature, genAI models are, at least at this point in their development, incapable of the initiation and conceptualization of creative or research ideas without human activation through prompting and prompt manipulation.
Furthermore, as non-sentient constructs, genAI models are unable to be held accountable for their output and thus cannot be regarded as authors as defined by the Committee on Publication Ethics [41,42] or the World Association of Medical Editors [43]. Numerous academic publishers and organizations have followed suit [44,45,46,47,48,49,50].

5. GenAI and Cultural Heritage

Any given community’s cultural heritage is comprised of a range of tangible and intangible manifestations of people’s cultural, spiritual, and economic lives and their interactions with their natural environment [51,52]. Intangible cultural heritage, the result of people’s interactions with each other, finds its expression, inter alia, in language, music, customs, and skills [53,54,55,56,57]. Tangible heritage is derived from people’s interaction with the physical environment and manifests itself as the built environment, refuse sites and cultural landscapes, but also in the form of moveable items and artifacts [52,54,58,59].
Artificial intelligence applications have been widely used in archaeology and other sub-disciplines of cultural heritage, ranging from the transcription of archival materials [60] to pattern recognition and reconstruction of decorations [61], pottery fragments [62,63,64], or torn paper documents [65], as well as ancient coin classification [66] and automated detection of sites utilizing LiDAR data [67]. In these cases, the genAI tools have not only been trained on well-circumscribed datasets with known and well-curated content, but they have also been used for specific research questions commensurate with the datasets.
The recent popularity of general, less specific genAI language models such as ChatGPT or GoogleBard has also led to experimentation in the cultural heritage field, exploring their use and understanding in areas such as remote sensing in archaeology and artifact analysis [68], approaches to memorialization [69], the development of visitor query systems in museums [70,71], the conceptualization of entire exhibitions [72], and the understanding of heritage values [22], as well as exploring how genAI language models ‘predict’ the ways generative AI might affect the management of cultural heritage in the future [73]. Some papers have explored the social perception of the use of AI in the cultural heritage field [60,74].
Not only will genAI affect how cultural heritage is managed and practiced, primarily by providing analysis and decision-making tools, but it will also be used as a ‘go-to’ source of information by the public. In many cases, genAI was used as an exploratory tool, drawing on publicly accessible models such as ChatGPT or GoogleBard, without any user input into the dataset or its pre-training. Such uses are likely to become more common, with the associated ethical and conceptual shortcomings. At present, the responses provided by genAI models have limited depth and occasionally suffer from inverted logic, making them unsuitable for heritage research and problematic when used as a public education tool [22,75]. Given the proprietary nature of the genAI models, the extent and specific nature of their datasets remain confidential. Empirical studies have identified some of the sources used by ChatGPT [76], for example, with a study of the sources of ChatGPT’s knowledge of archaeology noting that it seems to rely (almost?) solely on Wikipedia [24].
Given current trends and general popularity, it can be safely posited that genAI applications, both in their specific and general forms, will play an increased role in cultural heritage, its documentation, and management.

6. Culture, Heritage and the Future of Cultural Heritage in the Age of GenAI

When considering the future of cultural heritage in the age of genAI, however, we need to move beyond considerations of what benefits genAI can bring to the discipline as a tool and consider to what extent creations facilitated by genAI are or can become cultural heritage items in their own right. Beyond that, we also need to consider whether or when genAI can be considered to have a heritage of its own.
There can be little doubt that creations facilitated by genAI are or can become cultural heritage items. Creations facilitated by genAI, as well as the genAI application itself, form part of virtual heritage, the emerging third domain of cultural heritage (in addition to the well-established domains of tangible and intangible heritage). Virtual heritage is comprised of hardware heritage (i.e., computers, keyboards, storage media, printers); digital artifacts (i.e., items of material culture generated by computers, such as paper printouts or 3D products); virtual artifacts (i.e., virtual, computer-generated content that is visually or auditorily perceivable by humans via computer screens or speakers); latent digital signatures (i.e., volatile data written on ferromagnetic or optical media surfaces); and digital ephemera (i.e., programs, interactions, and content performed and generated by computers without a tangible output) [77]. Whether any of the components of virtual heritage are deemed significant will depend on the cultural values ascribed to them and the role they played in cultural and historic trajectories.
Using principles of strategic foresight [78,79], we can safely predict that some forms of genAI will become a cross-cultural socioeconomic disruptor on a global scale, in the same ‘league’ as the inventions of the printing press and the World Wide Web. Using the methodology of futurist hindsight [80], we can also predict that ChatGPT, which belongs to the class of digital ephemera, will be culturally significant as it popularized genAI, led to mass experimentation by both the general public and academia, and resulted in broad-scale uptake by sections of the general public. Likewise, it can be safely predicted that the 3D-printed output generated by “The Next Rembrandt” project [30], which constitutes a good example of a genAI-generated digital artifact, will become an item of cultural heritage in its own right as an early, tangible demonstration of the capabilities of genAI image generators. This paper, on the other hand, which with all its text and figures constitutes a virtual artifact until such time as it is printed out (when it becomes a digital artifact with physical form), is unlikely to become a cultural heritage item in its own right and will, like the vast bulk of the academic literature, only form part of the foundations and body of human knowledge. The same applies to the various essays on cultural heritage values written by ChatGPT (which are also virtual artifacts) that are discussed in an earlier paper [22].
When considering whether or when genAI can be considered to have a heritage of its own, we need to move away from preconceived constructs of heritage and, initially, return to basics and first principles.
On the most fundamental level, culture is the intergenerational transmission of learned behavior and skills among conspecifics that varies in circumscribed geographic areas and that cannot be explained either environmentally or genetically [81,82]. Culture has been observed among numerous animal species, from carrion crows (Corvus corone) [83], to bottlenose dolphins [84], chimpanzees [85], and orangutans [86]. Humans are merely another animal species, but one where the intergenerational learning is more complex [87]. Definitions of human culture vary, biased by the ideological standpoint of the interpreter. UNESCO, in its ‘Universal Declaration on Cultural Diversity’, holds that “culture should be regarded as the set of distinctive spiritual, material, intellectual and emotional features of society or a social group, and that it encompasses, in addition to art and the literature, lifestyles, ways of living together, value systems, traditions and beliefs” [88].
Heritage is commonly defined as “our legacy from the past, what we live with today, and what we pass on to future generations” [89]. Central to heritage as a concept is a temporal dimension, the understanding that cultural practices or productions (sites, objects) of the past have at least some modicum of relevance for the present generation [90] and that there is a desire to ensure their perpetuation into the future [91]. To evaluate the significance of heritage assets [92], the profession relies on hindsight, requiring some passage of time to ensure that trends in values emerge [80,92]. The understanding of these values ranges from emotive responses of nostalgia and solastalgia [93,94] to rule- and criteria-based assessments [91,95,96,97]. An understanding of heritage values also needs to take into account their inter- and intra-generational mutability [98] as well as the epistemological foundations of their assessors who are enculturated in familial, communal, educational, socio-political, and historical contexts.
It is implied in the foregoing that this heritage is a human-generated and human-centered concept [94]. Prior work by the author considered a future of thinking and self-aware robots, and argued that it is conceivable that these could have their own heritage, both in terms of evidence of ‘genealogical lineage’ and possibly also in terms of their own value sets [82,99]. At the time, that work ascribed the potential of ‘culture’ and possible values to future self-aware robots but did not provide suggestions on how that culture might evolve into concepts of ‘heritage’. At present, while genAI and other AI systems are not sentient, lack consciousness [100], and are incapable of initiation and conceptualization of creative ideas [40], various design concepts are being floated to effect this [101] (with some conceptual limitations [102]). This has raised numerous ethical concerns as it may impact not only human life as we know it, but also human and other existence as a whole [103,104,105].
Given that the development of genAI has progressed at an ever-faster pace over the past half-decade, we need to revisit the question of robotic and AI cultural heritage. We can conceptualize a multi-part test (restated schematically in the sketch following the list below) to assess whether a lifeform (carbon-based, silicon-based, or otherwise) has an understanding of ‘heritage’:
  • The individual must be sentient, i.e., possess the ability to consciously experience sensations, emotions, and feelings; and possess consciousness, i.e., have awareness of internal (self) and external existence;
  • The individual demonstrates traces of culture at least at a basic level, i.e., intergenerationally transmitted behavior or skills learned from ‘conspecifics’ that cannot be explained either environmentally or genetically (or in the case of artificial intelligence are not due to human-designed algorithms);
  • As heritage is about the relevance of past cultural manifestations to the present, the individual must not only have an understanding of time, including the concepts of past and future, but also an understanding of “being” in the present;
  • The individual must have a basic sense of foresight, i.e., that any action or inaction may have consequences in the immediate to near future;
  • Beyond basic self-awareness, the individual must be aware of their identity as an individual who is enculturated in familial, communal, educational, socio-political, and historical contexts;
  • The individual must be able to understand the concept of ‘values’ either implicitly by experiencing feelings of nostalgia or solastalgia, or conceptually by recognizing values as non-static, relative, and conditional constructs that are projected on behaviors and actions, or on tangible elements of the natural or created environment;
  • Finally, ideally (but not as a conditio sine qua non), individuals should also be capable of initiation and conceptualization of creative or research ideas.
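For illustration, the criteria above can be restated as a simple checklist. The Python sketch below is a schematic restatement with invented field names and a hypothetical entity being assessed; it is not a formal or operational instrument, since most of the criteria (sentience, consciousness, a sense of time) resist reduction to Boolean flags.

```python
from dataclasses import dataclass, fields

@dataclass
class HeritageCapacityTest:
    """Schematic restatement of the multi-part test; field names are invented labels."""
    sentient_and_conscious: bool            # criterion 1
    shows_non_programmed_culture: bool      # criterion 2
    understands_time_and_present: bool      # criterion 3
    has_basic_foresight: bool               # criterion 4
    aware_of_enculturated_identity: bool    # criterion 5
    understands_values: bool                # criterion 6
    initiates_creative_ideas: bool = False  # criterion 7 (desirable, not a conditio sine qua non)

    def passes(self) -> bool:
        """All required criteria must hold; criterion 7 is optional."""
        required = [f.name for f in fields(self) if f.name != "initiates_creative_ideas"]
        return all(getattr(self, name) for name in required)

# Current genAI models fail at the first hurdle (they are not sentient), so the test
# fails regardless of how the remaining criteria are scored.
current_genai = HeritageCapacityTest(False, False, False, False, False, False)
print(current_genai.passes())  # False
```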
Using this test, genAI models, at least at this point in their development, already fail at the first hurdle, as by their nature they are not sentient. Given current trends and investments in innovation, it can be posited, however, that this is only a matter of time and that sentient AI systems will emerge. It is self-evident that, if this future were to become reality, humanity will have to face a reality where cultural heritage is not a uniquely human concept, but where multiple cultural heritages, human and AI, will coexist in parallel as well as overlap. Humanity will have to consider a reality wherein sentient AI systems may well not conceptualize the constructs that we perceive as ‘culture’ and ‘heritage’ in a similar way to us, or even conceptualize them at all. If this future were to become reality, the above-stated multi-part test with its inherent anthropocentric framing is a moot proposition. Finally, given the long history of value conflicts over tangible and intangible manifestations of cultural heritage [106,107,108], it can be posited that in a future where multiple cultural heritages, human and AI, coexist, value conflicts between these two heritages are bound to emerge. How will humanity respond to this if and when these futures eventuate?

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jeon, J.; Lee, S. Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT. Educ. Inf. Technol. 2023, 28, 15873–15892. [Google Scholar] [CrossRef]
  2. Zhu, Y.; Han, D.; Chen, S.; Zeng, F.; Wang, C. How Can ChatGPT Benefit Pharmacy: A Case Report on Review Writing. Preprints.org 2023. [Google Scholar] [CrossRef]
  3. Sok, S.; Heng, K. ChatGPT for education and research: A review of benefits and risks. Cambodian J. Educ. Res. 2023, 3, 110–121. [Google Scholar] [CrossRef]
  4. Rao, A.S.; Pang, M.; Kim, J.; Kamineni, M.; Lie, W.; Prasad, A.K.; Landman, A.; Dryer, K.; Succi, M.D. Assessing the utility of ChatGPT throughout the entire clinical workflow. medRxiv 2023. [Google Scholar] [CrossRef]
  5. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 242–263. [Google Scholar]
  6. King, M.R.; chatGPT. A Conversation on Artificial Intelligence, Chatbots, and Plagiarism in Higher Education. Cell. Mol. Bioeng. 2023, 16, 1–2. [Google Scholar] [CrossRef] [PubMed]
  7. Spennemann, D.H.R.; Biles, J.; Brown, L.; Ireland, M.F.; Longmore, L.; Singh, C.J.; Wallis, A.; Ward, C. ChatGPT giving advice on how to cheat in university assignments: How workable are its suggestions? Res. Sq. 2023. [Google Scholar] [CrossRef]
  8. Biswas, S. Importance of Chat GPT in Agriculture: According to Chat GPT. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4405391 (accessed on 28 June 2023).
  9. Neves, P.S. Chat GPT AIS “Interview” 1, December 2022. AIS-Archit. Image Stud. 2022, 3, 58–67. [Google Scholar]
  10. Castro Nascimento, C.M.; Pimentel, A.S. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT. J. Chem. Inf. Model. 2023, 63, 1649–1655. [Google Scholar] [CrossRef]
  11. Surameery, N.M.S.; Shakor, M.Y. Use chat gpt to solve programming bugs. Int. J. Inf. Technol. Comput. Eng. (IJITC) 2023, 3, 17–22. [Google Scholar] [CrossRef]
  12. Sng, G.G.R.; Tung, J.Y.M.; Lim, D.Y.Z.; Bee, Y.M. Potential and pitfalls of ChatGPT and natural-language artificial intelligence models for diabetes education. Diabetes Care 2023, 46, e103–e105. [Google Scholar] [CrossRef]
  13. Grünebaum, A.; Chervenak, J.; Pollet, S.L.; Katz, A.; Chervenak, F.A. The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 2023, 228, 696–705. [Google Scholar] [CrossRef]
  14. Qi, X.; Zhu, Z.; Wu, B. The promise and peril of ChatGPT in geriatric nursing education: What We know and do not know. Aging Health Res. 2023, 3, 100136. [Google Scholar] [CrossRef]
  15. Currie, G.; Singh, C.; Nelson, T.; Nabasenja, C.; Al-Hayek, Y.; Spuur, K. ChatGPT in medical imaging higher education. Radiography 2023, 29, 792–799. [Google Scholar] [CrossRef] [PubMed]
  16. Spennemann, R.; Orthia, L. Creating a Market for Technology through Film: Diegetic Prototypes in the Iron Man Trilogy. Arb. Aus Angl. Am. 2023, 47, 225–242. [Google Scholar] [CrossRef]
  17. Kelley, P.G.; Yang, Y.; Heldreth, C.; Moessner, C.; Sedley, A.; Kramm, A.; Newman, D.T.; Woodruff, A. Exciting, useful, worrying, futuristic: Public perception of artificial intelligence in 8 countries. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, 19–21 May 2021; pp. 627–637. [Google Scholar]
  18. Beets, B.; Newman, T.P.; Howell, E.L.; Bao, L.; Yang, S. Surveying Public Perceptions of Artificial Intelligence in Health Care in the United States: Systematic Review. J. Med. Internet Res. 2023, 25, e40337. [Google Scholar] [CrossRef]
  19. Subaveerapandiyan, A.; Sunanthini, C.; Amees, M. A study on the knowledge and perception of artificial intelligence. IFLA J. 2023, 49, 503–513. [Google Scholar] [CrossRef]
  20. Markov, T.; Zhang, C.; Agarwal, S.; Eloundou, T.; Lee, T.; Adler, S.; Jiang, A.; Weng, L. New and Improved Content Moderation Tooling. Available online: https://web.archive.org/web/20230130233845mp_/https://openai.com/blog/new-and-improved-content-moderation-tooling/ [via Wayback Machine] (accessed on 28 June 2023).
  21. Collins, E.; Ghahramani, Z. LaMDA: Our Breakthrough Conversation Technology. Available online: https://blog.google/technology/ai/lamda/ (accessed on 1 September 2023).
  22. Spennemann, D.H.R. ChatGPT and the generation of digitally born “knowledge”: How does a generative AI language model interpret cultural heritage values? Knowledge 2023, 3, 480–512. [Google Scholar] [CrossRef]
  23. Wu, T.; He, S.; Liu, J.; Sun, S.; Liu, K.; Han, Q.L.; Tang, Y. A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development. IEEE/CAA J. Autom. Sin. 2023, 10, 1122–1136. [Google Scholar] [CrossRef]
  24. Spennemann, D.H.R. What has ChatGPT read? References and referencing of archaeological literature by a generative artificial intelligence application. arXiv 2023, arXiv:2308.03301. [Google Scholar]
  25. Spennemann, D.H.R. Exploring ethical boundaries: Can ChatGPT be prompted to give advice on how to cheat in university assignments? Preprints.org 2023, 202308.1271.v1. [Google Scholar] [CrossRef]
  26. O’Connor, R. How DALL-E 2 Actually Works. Available online: https://www.assemblyai.com/blog/how-dall-e-2-actually-works/ (accessed on 10 September 2023).
  27. Marcus, G.; Davis, E.; Aaronson, S. A very preliminary analysis of DALL-E 2. arXiv 2022, arXiv:2204.13807. [Google Scholar]
  28. Borji, A. Generated faces in the wild: Quantitative comparison of stable diffusion, midjourney and dall-e 2. arXiv 2022, arXiv:2210.00586. [Google Scholar]
  29. Ruskov, M. Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales. arXiv 2023, arXiv:2302.08961. [Google Scholar]
  30. Korsten, B.; Haanstra, B. The Next Rembrandt. Available online: www.nextrembrandt.com (accessed on 1 September 2023).
  31. Kim, B.S. Acculturation and enculturation. Handb. Asian Am. Psychol. 2007, 2, 141–158. [Google Scholar]
  32. Alcántara-Pilar, J.M.; Armenski, T.; Blanco-Encomienda, F.J.; Del Barrio-García, S. Effects of cultural difference on users’ online experience with a destination website: A structural equation modelling approach. J. Destin. Mark. Manag. 2018, 8, 301–311. [Google Scholar] [CrossRef]
  33. Hekman, S. Truth and method: Feminist standpoint theory revisited. Signs J. Women Cult. Soc. 1997, 22, 341–365. [Google Scholar] [CrossRef]
  34. Ginsburg, J.C. The concept of authorship in comparative copyright law. DePaul Law Rev. 2002, 52, 1063–1091. [Google Scholar] [CrossRef]
  35. U.S. Copyright Office. Compendium of U.S. Copyright Office Practices, 3rd ed.; U.S. Copyright Office: Washington, DC, USA, 2021.
  36. U.S. Copyright Office. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence; U.S. Copyright Office: Washington, DC, USA, 2023.
  37. Nguyen, P. The monkey selfie, artificial intelligence and authorship in copyright: The limits of human rights. Pub. Int. LJNZ 2019, 6, 121. [Google Scholar]
  38. Ncube, C.B.; Oriakhogba, D.O. Monkey selfie and authorship in copyright law: The Nigerian and South African perspectives. Potchefstroom Electron. Law J. 2018, 21, 2–35. [Google Scholar] [CrossRef]
  39. Rosati, E. The Monkey Selfie case and the concept of authorship: An EU perspective. J. Intellect. Prop. Law Pract. 2017, 12, 973–977. [Google Scholar] [CrossRef]
  40. Chatterjee, A. Art in an age of artificial intelligence. Front. Psychol. 2022, 13, 1024449. [Google Scholar] [CrossRef]
  41. Committee on Publication Ethics. Authorship and AI Tools. Available online: https://publicationethics.org/cope-position-statements/ai-author (accessed on 15 September 2023).
  42. Levene, A. Artificial Intelligence and Authorship. Available online: https://publicationethics.org/news/artificial-intelligence-and-authorship (accessed on 15 September 2023).
  43. Zielinski, C.; Winker, M.A.; Aggarwal, R.; Ferris, L.E.; Heinemann, M.; Lapeña, J.; Florencio, J.; Pai, S.A.; Ing, E.; Citrome, L.; et al. Chatbots, Generative AI, and Scholarly Manuscripts. WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. WAME. 31 May 2023. Available online: https://wame.org/page3.php?id=106 (accessed on 15 September 2023).
  44. Flanagin, A.; Bibbins-Domingo, K.; Berkwits, M.; Christiansen, S.L. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA 2023, 329, 637–639. [Google Scholar] [CrossRef]
  45. Wiley. Best Practice Guidelines on Research Integrity and Publishing Ethics. Available online: https://authorservices.wiley.com/ethics-guidelines/index.html (accessed on 15 September 2023).
  46. Sage. ChatGPT and Generative AI. Available online: https://us.sagepub.com/en-us/nam/chatgpt-and-generative-ai (accessed on 15 September 2023).
  47. Emerald Publishing. Emerald Publishing’s Stance on AI Tools and Authorship. Available online: https://www.emeraldgrouppublishing.com/news-and-press-releases/emerald-publishings-stance-ai-tools-and-authorship (accessed on 15 September 2023).
  48. Elsevier. The Use of AI and AI-Assisted Writing Technologies in Scientific Writing. Available online: https://www.elsevier.com/about/policies/publishing-ethics/the-use-of-ai-and-ai-assisted-writing-technologies-in-scientific-writing (accessed on 15 September 2023).
  49. Elsevier. Publishing Ethics. Available online: https://beta.elsevier.com/about/policies-and-standards/publishing-ethics (accessed on 15 September 2023).
  50. Taylor & Francis. Taylor & Francis Clarifies the Responsible Use of AI Tools in Academic Content Creation. Available online: https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/ (accessed on 15 September 2023).
  51. Vecco, M. A definition of cultural heritage: From the tangible to the intangible. J. Cult. Herit. 2010, 11, 321–324. [Google Scholar] [CrossRef]
  52. Munjeri, D. Tangible and intangible heritage: From difference to convergence. Mus. Int. 2004, 56, 12–20. [Google Scholar] [CrossRef]
  53. Parker, M.; Spennemann, D.H.R. Classifying sound: A tool to enrich intangible heritage management. Acoust. Aust. 2021, 50, 23–39. [Google Scholar] [CrossRef]
  54. Smith, L. Uses of Heritage; Routledge: Abingdon, UK, 2006. [Google Scholar]
  55. UNESCO. Basic Texts of the 2003 Convention for the Safeguarding of Intangible Cultural Heritage for Its Protection and Promotion; UNESCO: Paris, France, 2020. [Google Scholar]
  56. Howard, K. Music as Intangible Cultural Heritage: Policy, Ideology, and Practice in the Preservation of East Asian Traditions; Routledge: Abingdon, UK, 2016. [Google Scholar]
  57. Lenzerini, F. Intangible cultural heritage: The living culture of peoples. Eur. J. Int. Law 2011, 22, 101–120. [Google Scholar] [CrossRef]
  58. Spennemann, D.H.R.; Clemens, J.; Kozlowski, J. Scars on the Tundra: The cultural landscape of the Kiska Battlefield, Aleutians. Alsk. Park Sci. 2011, 10, 16–21. [Google Scholar]
  59. Wells, J.C.; Stiefel, B.L. Human-Centered Built Environment Heritage Preservation: Theory and Evidence-Based Practice; Routledge: Abingdon, UK, 2018. [Google Scholar]
  60. Griffin, G.; Wennerström, E.; Foka, A. AI and Swedish Heritage Organisations: Challenges and opportunities. AI Soc. 2023, 8, 301–311. [Google Scholar] [CrossRef]
  61. Romanengo, C.; Biasotti, S.; Falcidieno, B. Recognising decorations in archaeological finds through the analysis of characteristic curves on 3D models. Pattern Recognit. Lett. 2020, 131, 405–412. [Google Scholar] [CrossRef]
  62. Ostertag, C.; Beurton-Aimar, M. Matching ostraca fragments using a siamese neural network. Pattern Recognit. Lett. 2020, 131, 336–340. [Google Scholar] [CrossRef]
  63. Marie, I.; Qasrawi, H. Virtual assembly of pottery fragments using moiré surface profile measurements. J. Archaeol. Sci. 2005, 32, 1527–1533. [Google Scholar] [CrossRef]
  64. Cardarelli, L. A deep variational convolutional Autoencoder for unsupervised features extraction of ceramic profiles. A case study from central Italy. J. Archaeol. Sci. 2022, 144, 105640. [Google Scholar] [CrossRef]
  65. De Smet, P. Reconstruction of ripped-up documents using fragment stack analysis procedures. Forensic Sci. Int. 2008, 176, 124–136. [Google Scholar] [CrossRef]
  66. Aslan, S.; Vascon, S.; Pelillo, M. Two sides of the same coin: Improved ancient coin classification using Graph Transduction Games. Pattern Recognit. Lett. 2020, 131, 158–165. [Google Scholar] [CrossRef]
  67. Verschoof-Van der Vaart, W.B.; Lambers, K. Learning to look at LiDAR: The use of R-CNN in the automated detection of archaeological objects in LiDAR data from the Netherlands. J. Comput. Appl. Archaeol. 2019, 2, 31–40. [Google Scholar] [CrossRef]
  68. Frąckiewicz, M. ChatGPT-4 for Digital Archaeology: AI-Powered Artifact Discovery and Analysis. Available online: https://ts2.space/en/chatgpt-4-for-digital-archaeology-ai-powered-artifact-discovery-and-analysis/ (accessed on 29 June 2023).
  69. Makhortykh, M.; Zucker, E.M.; Simon, D.J.; Bultmann, D.; Ulloa, R. Shall androids dream of genocides? How generative AI can change the future of memorialization of mass atrocities. Discov. Artif. Intell. 2023, 3, 28. [Google Scholar] [CrossRef]
  70. Trichopoulos, G.; Konstantakis, M.; Alexandridis, G.; Caridakis, G. Large Language Models as Recommendation Systems in Museums. Electronics 2023, 12, 3829. [Google Scholar] [CrossRef]
  71. Trichopoulos, G.; Konstantakis, M.; Caridakis, G.; Katifori, A.; Koukouli, M. Crafting a Museum Guide Using GPT4. Big Data Cogn. Comput. 2023, 7, 148. [Google Scholar] [CrossRef]
  72. Spennemann, D.H.R. Exhibiting the Heritage of Covid-19—A Conversation with ChatGPT. Heritage 2023, 6, 5732–5749. [Google Scholar] [CrossRef]
  73. Tenzer, M.; Pistilli, G.; Brandsen, A.; Shenfield, A. Debating AI in Archaeology: Applications, Implications, and Ethical Considerations. SocArXiv Prepr. 2023. Available online: https://osf.io/preprints/socarxiv/r2j7h (accessed on 28 June 2023).
  74. Leshkevich, T.; Motozhanets, A. Social Perception of Artificial Intelligence and Digitization of Cultural Heritage: Russian Context. Appl. Sci. 2022, 12, 2712. [Google Scholar] [CrossRef]
  75. Cobb, P.J. Large Language Models and Generative AI, Oh My!: Archaeology in the Time of ChatGPT, Midjourney, and Beyond. Adv. Archaeol. Pract. 2023, 11, 363–369. [Google Scholar] [CrossRef]
  76. Chang, K.K.; Cramer, M.; Soni, S.; Bamman, D. Speak, memory: An archaeology of books known to chatgpt/gpt-4. arXiv 2023, arXiv:2305.00118. [Google Scholar]
  77. Spennemann, D.H.R. The Digital Heritage of the battle to contain COVID-19 in Australia and its implications for Heritage Studies. Heritage 2023, 6, 3864–3884. [Google Scholar] [CrossRef]
  78. Hines, A.; Bishop, P.J.; Slaughter, R.A. Thinking about the Future: Guidelines for Strategic Foresight; Social Technologies: Washington, DC, USA, 2006. [Google Scholar]
  79. van Duijne, F.; Bishop, P. Introduction to Strategic Foresight; Future Motions, Dutch Futures Society: Den Haag, The Netherlands, 2018; Volume 1, p. 67. [Google Scholar]
  80. Spennemann, D.H.R. Conceptualizing a Methodology for Cultural Heritage Futures: Using Futurist Hindsight to make ‘Known Unknowns’ Knowable. Heritage 2023, 6, 548–566. [Google Scholar] [CrossRef]
  81. Boesch, C.; Tomasello, M. Chimpanzee and human cultures. Curr. Anthropol. 1998, 39, 591–614. [Google Scholar] [CrossRef]
  82. Spennemann, D.H.R. Of Great Apes and Robots: Considering the Future(s) of Cultural Heritage. Futures–J. Policy Plan. Futures Stud. 2007, 39, 861–877. [Google Scholar] [CrossRef]
  83. Nihei, Y.; Higuchi, H. When and where did crows learn to use automobiles as nutcrackers. Tohoku Psychol. Folia 2001, 60, 93–97. [Google Scholar]
  84. Krützen, M.; Mann, J.; Heithaus, M.R.; Connor, R.C.; Bejder, L.; Sherwin, W.B. Cultural transmission of tool use in bottlenose dolphins. Proc. Natl. Acad. Sci. USA 2005, 102, 8939–8943. [Google Scholar] [CrossRef]
  85. Whiten, A.; Goodall, J.; McGrew, W.C.; Nishida, T.; Reynolds, V.; Sugiyama, Y.; Tutin, C.E.; Wrangham, R.W.; Boesch, C. Cultures in chimpanzees. Nature 1999, 399, 682–685. [Google Scholar] [CrossRef]
  86. van Schaik, C.P.; Ancrenaz, M.; Djojoasmoro, R.; Knott, C.D.; Morrogh-Bernard, H.C.; Nuzuar, O.K.; Atmoko, S.S.U.; Van Noordwijk, M.A. Orangutan cultures revisited. In Orangutans: Geographic Variation in Behavioral Ecology and Conservation; Wich, S.A., Atmoko, S.S.U., Setia, T.M., van Schaik, C.P., Eds.; Oxford University Press: Oxford, UK, 2009; pp. 299–309. [Google Scholar]
  87. Boyd, R.; Richerson, P.J. Why culture is common, but cultural evolution is rare. In Proceedings-British Academy; Oxford University Press Inc.: Oxford, UK, 1996; pp. 77–94. [Google Scholar]
  88. UNESCO. UNESCO Universal Declaration on Cultural Diversity 2 November 2001. In Records of the General Conference, 31st Session Paris, France, October 15–November 3, 2001; United Nations Educational, Scientific and Cultural Organization: Paris, France, 2001; Volume 1 Resolutions, pp. 62–64. [Google Scholar]
  89. Spennemann, D.H.R. Beyond “Preserving the Past for the Future”: Contemporary Relevance and Historic Preservation. CRM J. Herit. Steward. 2011, 8, 7–22. [Google Scholar]
  90. Spennemann, D.H.R. Futurist rhetoric in U.S. historic preservation: A review of current practice. Int. Rev. Public Nonprofit Mark. 2007, 4, 91–99. [Google Scholar] [CrossRef]
  91. ICOMOS Australia. The Burra Charter: The Australia ICOMOS Charter for Places of Cultural Significance 2013; Australia ICOMOS Inc. International Council of Monuments and Sites: Burwood, Australia, 2013. [Google Scholar]
  92. Murtagh, W.J. Keeping Time: The History and Theory of Preservation in America; John Wiley and Sons: New York, NY, USA, 1997. [Google Scholar]
  93. Bickford, A. The patina of nostalgia. Aust. Archaeol. 1981, 13, 1–7. [Google Scholar] [CrossRef]
  94. Lowenthal, D. The Past Is a Foreign Country; Cambridge University Press: Cambridge, UK, 1985. [Google Scholar]
  95. Fredheim, L.H.; Khalaf, M. The significance of values: Heritage value typologies re-examined. Int. J. Herit. Stud. 2016, 22, 466–481. [Google Scholar] [CrossRef]
  96. Smith, G.S.; Messenger, P.M.; Soderland, H.A. Heritage Values in Contemporary Society; Routledge: Abingdon, UK, 2017. [Google Scholar]
  97. Díaz-Andreu, M. Heritage values and the public. J. Community Archaeol. Herit. 2017, 4, 2–6. [Google Scholar] [CrossRef]
  98. Spennemann, D.H.R. The Shifting Baseline Syndrome and Generational Amnesia in Heritage Studies. Heritage 2022, 5, 2007–2027. [Google Scholar] [CrossRef]
  99. Spennemann, D.H.R. On the Cultural Heritage of Robots. Int. J. Herit. Stud. 2007, 13, 4–21. [Google Scholar] [CrossRef]
  100. Schwitzgebel, E. AI systems must not confuse users about their sentience or moral status. Patterns 2023, 4, 100818. [Google Scholar] [CrossRef] [PubMed]
  101. Bronfman, Z.; Ginsburg, S.; Jablonka, E. When will robots be sentient? J. Artif. Intell. Conscious. 2021, 8, 183–203. [Google Scholar] [CrossRef]
  102. Walter, Y.; Zbinden, L. The problem with AI consciousness: A neurogenetic case against synthetic sentience. arXiv 2022, arXiv:2301.05397. [Google Scholar]
  103. Shah, C. Sentient AI—Is That What We Really Want? Inf. Matters 2022, 2, 1–3. [Google Scholar] [CrossRef]
  104. Coghlan, S.; Parker, C. Harm to Nonhuman Animals from AI: A Systematic Account and Framework. Philos. Technol. 2023, 36, 25. [Google Scholar] [CrossRef]
  105. Gibert, M.; Martin, D. In search of the moral status of AI: Why sentience is a strong argument. AI Soc. 2022, 37, 319–330. [Google Scholar] [CrossRef]
  106. Silverman, H. Contested Cultural Heritage: Religion, Nationalism, Erasure, and Exclusion in a Global World; Springer: New York, NY, USA, 2010. [Google Scholar]
  107. Rose, D.V. Conflict and the deliberate destruction of cultural heritage. In Conflicts and Tensions; The Cultures and Globalization Series; Anheier, H., Isar, Y.R., Eds.; Sage: London, UK, 2007; pp. 102–116. [Google Scholar]
  108. Tunbridge, J.; Ashworth, G. Dissonant Heritage: The Management of the Past as a Resource in Conflict; Wiley: New York, NY, USA, 1996. [Google Scholar]
Figure 1. Enculturation, the acquisition of knowledge, and creativity. (A): Human to Human; (B): Human to academic sources; (C): Human creativity.
Figure 2. Enculturation, the acquisition of knowledge, and creativity when using genAI. (A): single query; (B): Conversation with genAI; (C): Creativity with genAI as a tool.
Figure 3. Images generated by the author with DeepAI, using the prompt “show a palm-studded tropical island with the Apollo 11 command module sitting on the beach”. (A) Renaissance Painting Generator; (B) DeepAI Abstract Painting Generator; (C) Fantasy World Generator; (D) Impressionism Painting Generator.
