Article

The Distributed Authorship of Art in the Age of AI

Faculty of Design, Informatics and Business, Abertay University, Dundee DD1 1HG, UK
Arts 2024, 13(5), 149; https://doi.org/10.3390/arts13050149
Submission received: 14 July 2024 / Revised: 20 September 2024 / Accepted: 20 September 2024 / Published: 30 September 2024
(This article belongs to the Special Issue Artificial Intelligence and the Arts)

Abstract
The distribution of authorship in the age of machine learning or artificial intelligence (AI) suggests a taxonomic system that places art objects along a spectrum of authorship: from pure human creation, which draws directly from the interior world of affect, emotions and ideas, through co-evolved works created with tools and collective production, to works that are largely devoid of human involvement. Human and machine production can be distinguished in terms of motivation, with human production being driven by consciousness and the processing of subjective experience and machinic production being driven by algorithms and the processing of data. However, the expansion of AI entangles the artist in ever more complex webs of production and dissemination, whereby the boundaries between the work of the artist and the work of the networked technologies are increasingly distributed and obscured. From this perspective, AI-generated works are not solely the products of an independent machinic agency but operate in the middle of the spectrum of authorship between human and machine, as they are the consequences of a highly distributed model of production that sits across the algorithms, the underlying information systems and data that support them, and the artists who both contribute and extract value. This highly distributed state further transforms the role of the artist from the creator of objects containing aesthetic and conceptual potential to the translator and curator of such objects.

1. Introduction

This paper considers the authorship of the contemporary artwork, which operates within a culture that is increasingly mediated by information, information systems, algorithms and artificial intelligence (AI), and the nature of the things that may have the appearance of art or human authorship but are in fact generated by AI and wider information networks. AI is a broad and complex field focused on machines capable of performing tasks that require learning and problem-solving, and it includes machine learning and neural networks. Two applications of AI have attracted attention in the art world: the creation or manipulation of texts and the creation or manipulation of images. Both areas will be referred to in this paper in non-technical and abstract terms, as the focus is not on the functioning of AI tools but on how we think about authorship in relation to AI systems. The primary AI tools for text production and manipulation are ‘Large Language Models’, or LLMs, which specialise in text generation, translation and summarisation, where the interaction takes place using natural language or natural language processing (NLP) as opposed to computer interaction via a programming language (Dhamani and Engler 2024, pp. 11–19). Although NLP methods have been available for decades, increases in the availability of computational processing power and data, aligned with refinements in architecture design, famously articulated in the paper Attention Is All You Need, have exponentially increased their use and our concerns in relation to authorship (Vaswani et al. 2017).1 An example would be OpenAI’s ChatGPT.
The second area of interest and concern for artists is the production or manipulation of images using AI tools, which, on a superficial level, strikes at the foundations of visual art. Mainstream media began discussing the production of AI-augmented images with the arrival of ‘deep fake’ images, including a widely circulated video clip of artificial images and audio purportedly showing Mark Zuckerberg saying, ‘whoever controls the data, controls the future’ (Cole 2019). This is a reference to George Orwell’s famous line from the novel 1984 (1949), ‘Who controls the past controls the future. Who controls the present controls the past’, and alludes to the dialectical relationship between power and historical narrative (Orwell 2021, p. 192). Equally, it could be a reference to Frank Herbert’s Dune (1965) and the statement made in the first film adaptation, ‘He who controls the spice controls the universe’ (Lynch 1984). Spice, or mélange, is a life- and mind-extending substance that was mined, traded and fought over. Thus, whether the reference is comparing the control and circulation of data and information with the power to revise history or the power to control society, a valid observation was being made through fiction or artificially generated means. There are several approaches to generating these artificial images. Diffusion-based systems, such as OpenAI’s Dall-E and Stable Diffusion, incorporate attention-based approaches similar to the LLMs focused on text generation, whilst Generative Adversarial Networks (GANs) consist of two neural networks, one generating data and the other evaluating or ‘discriminating’ by comparing it against a reference or training dataset (Goodfellow et al. 2014).
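To make the adversarial arrangement concrete, the following is a minimal sketch of one GAN training step in PyTorch. The architectures, dimensions and hyperparameters are invented for illustration and are not drawn from any system discussed in this paper; the point is simply the two-network structure, with a generator producing data from random noise and a discriminator scoring it against the training set.

```python
# A minimal, illustrative GAN training step (assumed toy architecture).
import torch
import torch.nn as nn

latent_dim = 64
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial step; real_images: (batch, 784), scaled to [-1, 1]."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: score training data as real, generated data as fake.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce data the discriminator accepts as real.
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = bce(discriminator(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```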
This paper does not deal with the technical detail concerning the different forms of AI systems currently in operation, nor does it deal with the broader emotional, psychological, social, economic, political or ethical implications of AI use within art or society, which will be significant and impossible to anticipate fully. An emerging body of work directly addresses the social, ethical and political considerations of generative AI and its impact on art, artists and broader culture, including Zeilinger (2021), Vyas (2022), Jiang et al. (2023) and Piskopani et al. (2023). Zeilinger (2021), for example, maps out the issues surrounding AI and copyright infringement and ways in which the artist can tactically engage with these tools to retain creative agency. Given the rapid evolution of this field and its literature, this paper does not directly draw from the contemporary texts on AI and art but steps back and considers the issue of authorship in the age of AI from a systems thinking perspective, which underpins systems-based art (including AI-based work) and the broader information culture that shapes our contemporary lifeworld. The paper is, therefore, narrowly focused on the nature of authorship in the age of AI from a systems perspective and on how the exponential expansion of potentially meaningful information due to AI tools forces us to acknowledge more fully the distributed nature of art production, dissemination and reception.
The concept of distribution underpins many aspects of contemporary culture. All networked information technologies are distributed on some level. Blockchain, for example, the decentralised authentication technology, distributes transaction records across vast networks of computers as a way of ensuring transparency and security. Within an art context, the idea of distributed authorship was discussed within systems art to describe both the networks of information and information technologies and a mode of working interactively and remotely. In Art and Telematics: Towards A Network Consciousness (1984), for example, Roy Ascott highlights the necessarily collaborative nature of working with information technologies and makes the distinction between the Postmodern readings of distribution, which he suggests advocate isolated deconstruction, and the ‘telematic’ or networked reading of distribution, which is morphologically connected, non-linear and collaborative (Ascott 2008, pp. 184–200). He states that ‘In telematic discourse, meanings are not asserted and consumed in one way linearity, but negotiated, distributed, transformed and layered in multiple exchanges where the authorial role is decentralised and scattered in space and time’ (Ascott 2008, p. 195). Central to this discussion of distribution and authorship is the distinction between things generated by AI systems, which may be employed within or as artworks, and artworks narrowly defined as such within the contemporary art system. To fully appreciate this distinction between artworks and their wider environment of things, we need to define contemporary art.

2. The Contemporary Artwork

Contemporary art is concerned with the communication of conceptual ideas that have an aesthetic dimension and are distributed in terms of their authorship and circulation. Supporting this cultural communication are the art objects: texts, pictures, images, audio, objects and experiences, which contain affective, aesthetic and conceptual information. Underpinning the contemporary understanding of art are the developments within conceptual art that took place in the 1960s and 1970s. These included conceptually driven movements such as Minimalism and its focus on structure and ideas, Fluxus and its emphasis on social interaction, Pop Art and its appropriation of mass media, and Systems Art or Cybernetic Art and their focus on systems and the circulation of information. The systems artist, curator and writer Jack Burnham described this move from the art object to information in the catalogue for the influential exhibition Software—Information Technology: Its New Meaning for Art at the Jewish Museum (New York, 1970), stating the following:
…in the past few years, the movement away from art objects has been precipitated by concerns with natural and man-made systems, processes, ecological relationships, and the philosophical-linguistic involvement of Conceptual Art. All of these interests deal with art which is transactional; they deal with the underlying structures of communications or energy exchanges instead of abstract appearances.
This sense that Systems Art was anticipating the systemic and informational future is also evident in Burnham’s important texts Systems Esthetics (1968) and Real-time Systems (1969), originally published in Artforum and republished in his collected writings (Burnham et al. 2015).
Such technologies have since expanded exponentially, and information is the primary medium of the twenty-first century, infiltrating every aspect of the artist’s practice, the work they produce and the art system that supports them. However, information as a material remains largely abstract and challenging to pin down, and its influence is rarely acknowledged. Timothy Morton developed the concept of the hyperobject to describe the climate and the climate crisis as being so immense and all-consuming as to be invisible (Morton 2013). Information and the infosphere, the more comprehensive informational system, can also be considered hyperobjects, invisibly shaping art, the art discourse and broader culture. AI is exponentially extending this informational state and destabilising our understanding of authorship.
Both the climate and the infosphere can be considered from a systems perspective, with the climate exchanging energy and the infosphere exchanging information as they maintain systems balance. The idea of the ‘open system’, which has porous boundaries to exchange information with its environment, can also be applied to the art world, as it is largely self-sustaining and uses feedback to control its internal processes and maintain stability, seen in the complex interactions among artists, galleries, critics, collectors and institutions, which create dynamic behaviour, self-regulation and feedback. Broader cultural, political and technical developments such as globalisation, decolonisation and computation, respectively, introduce additional complexity and feedback loops that implicate and enmesh the artist within a system that is beyond their ability to fully appreciate or control. The art world was described as a system by Niklas Luhmann in his thorough sociological text Art as a Social System (Luhmann 2000), a position developed further by Francis Halsall in Systems of Art (2008), where he considered the artwork and art history from a systems theoretical perspective.2
Underpinning the art system are the communication and exchange of information; however, the term ‘information’ can be ambiguous and becomes entangled with related concepts such as data, knowledge and meaning. From a biological and evolutionary perspective, Gregory Bateson, in Steps to an Ecology of Mind (1972), famously defined information as the ‘difference that makes a difference’ (Bateson [1972] 2000, p. 453) and later stated, ‘information is necessarily the receipt of news of difference’ (Bateson [1979] 2002, p. 29). Bateson proposes that meaning is derived from differences, the contrasting of one thing with another, and this aligns with Jacques Derrida and his concepts of the trace and différance, which will be discussed shortly but can be summarised as meanings being endlessly relational and subject to change as information is placed within different contexts (Derrida [1967] 1977).
The information scientist Marcia Bates has written widely on the nature of information3, but she succinctly states that information is ‘the pattern of organization of matter and energy’ and that living or sentient ‘beings process, organize and ascribe meaning to information’ (Bates 2016, p. 28). From this perspective, the artist embeds information within an artwork, but it is the audience experiencing the text or artwork who assigns meaning, a product of this new information comingling with the audience’s wider experiences as they calibrate the information, the differences, against other information, memories and experiences.
The art historian will bring a broader and deeper range of knowledge and experience to the consideration of an artwork, for example, Martin Creed’s Work No. 88, A sheet of A4 paper crumpled into a ball (1995), which is constructed from a sheet of white paper crumpled into a ball, than a casual audience member who is unaware of Conceptual Art, Minimalism or the use of irony within Postmodern art. Thus, the audience, given their unique physical, mental, emotional and social ‘field of experience’, a concept developed by Pierre Bourdieu, may extract meanings different from or contradictory to those originally intended by the artist (Bourdieu 1993). From a systems perspective, the audience and their fields of experience become part of the art system they are observing, adding to or disrupting the information in circulation. This is described as a second-order cybernetic system whereby there is co-evolution between the observer and the system (Hayles 1999, pp. 131–59).
To understand the informational nature of contemporary art, we need to acknowledge the technical and social systems that generate and circulate information. Technological information systems, such as AI systems and the Internet, create and circulate information, and social systems, such as political systems and the art system, manage the flow of art communication. These systems are complex, and four terms, each with the prefix ‘post’, express an aspect of this complexity. The first term, the ‘post-systems condition’, acknowledges that we are subsumed within natural and technological systems, both of which exceed our understanding or ability to control. The post-systems condition expresses a wistfulness, or saudade, for a time before system apprehension, before we understood that the environmental, social and political systems were destabilised and before we were contained within technological systems (Goodfellow 2019a). The second term, the ‘post-medium condition’, refers to the shift away from medium specificity within art and onto the idea and the transfer of information. As noted, from the late 1960s, attention shifted away from specific mediums: painting, sculpture and photography, and onto the underlying conceptual ideas and relationships. Rosalind Krauss described this as the ‘post-medium condition’ in her text A Voyage on the North Sea: Art in the Age of the Post-Medium Condition (2000), in which she charted conceptual art’s increasing engagement with work in textual terms, a position theorised through poststructuralist philosophy and a turn away from the increasingly technological materials that remained a central concern for Media Art (Krauss 2000). The third term, the ‘postconceptual artwork’, a concept developed by Peter Osborne in Anywhere or Not at All: The Philosophy of Contemporary Art (Osborne 2013), refers to the fundamentally conceptual nature of contemporary art. The final term, ‘postproduction’, will be introduced in the conclusions to describe how art has moved from the production of aesthetic and conceptual objects to their curation and circulation within culture.
The ‘post-medium condition’ mirrored the wider cultural moves from Modernism to Postmodernism, from a focus on the material substrate to a focus on the ideas and information being communicated, and Postmodernism as an era within art is generally considered to span from the late 1960s to the early 2000s. Whereas Systems Art and Media Art remained focused on the underlying technological systems, Postmodern Art focused on the status of the ‘text’ and its role within culture. This historical bifurcation is described by Edward Shanken in Contemporary Art and New Media—Digital Divide or Hybrid Discourse? (Paul 2016). Postmodern Art’s focus on the artwork as an analysable text was anticipated, articulated and supported in Umberto Eco’s The Open Work (Eco and Robey [1962] 1989), Jacques Derrida’s Of Grammatology (Derrida [1967] 1977) and Roland Barthes’ The Death of the Author (Barthes [1967] 1987), each describing the changing relationship between the author and reader and between the reader and the text. Each theorist suggested that works should be considered as ‘open’, reconfigurable and essentially independent of the author. This position destabilised the authorship of the work and granted the audience a relationship with the work commensurate with that of the original author, allowing them to deconstruct and reconstruct the text freely. Michel Foucault, in What Is an Author?, originally presented as a lecture in 1969, suggests that the author ‘has disappeared’, and instead ‘we must locate the space left empty by the author’s disappearance, follow the distribution [emphasis added] of gaps and breaches, and watch for the openings this disappearance uncovers’ (Foucault 1997, p. 209). This core idea, that the work operates independently from the author, was further developed by the literary theorist Stanley Fish during the 1980s, articulated in Is There a Text in This Class? (Fish 1982), which argued that the reader essentially constructs the work through their critical engagement with the text.
By the early 2000s, Postmodern Art had evolved into Contemporary Art, with its outward concerns with globalisation due to the expansion of Late Capital, politics due in part to the post-9/11 landscape, technology due to the fragmenting of culture via the Internet and ecology due to the realities of the climate crisis. Outwardly, there was a newfound engagement with the issues that affected social, technological and environmental systems. However, this has not played out as a return to the activism of the late 1960s, but as the post-systems condition, with expressions of enfoldment within systems and a nostalgia for more ‘innocent’ times seen in the perpetual re-mix of pre-internet culture, what Fredric Jameson described, with reference to cinema, as the ‘nostalgia mode’ (Jameson 2009). However, if we peel away the contemporary and outwardly social and ecological concerns of globalisation and the climate crisis, persistent Postmodernist ideas continue to shape the culture of art and the art system, with appropriation, recontextualisation and the questioning of authorship remaining central. Further, it can be argued that Contemporary Art’s engagement with our material and social conditions, such as the climate crisis destabilising planet Earth and technological changes destabilising both society and the self, operates at the level of the text and does not engage with the deeper operations of these system complexities.
Despite these shifts within culture and a general move towards textuality and information, the artwork remains central to Contemporary Art, the common ground on which ideas and value are exchanged. Osborne describes the contemporary artwork as ‘postconceptual’, with the three integral qualities of conceptuality, aesthetics and distribution. Firstly, the postconceptual artwork is inherently conceptual, and this distinguishes it from non-art objects. AI-generated text and images, for example, may have the appearance of art, but if they do not carry conceptual ideas embedded by the artist and are not presented as art objects within the art system, they cannot be considered as art. This is not to suggest that AI-generated materials cannot be employed within a contemporary art setting and considered as art objects or part of a wider art experience, such as an installation, screening or performance, but this repurposing needs to be facilitated by an artist, curator or art institution, such as a gallery or museum, to be considered as an artwork (Osborne 2013, p. 48). Secondly, the postconceptual artwork has an ‘ineliminable aesthetic dimension’, meaning that the work must have some form of experienceable materialisation, which is located in space and time. Following this criterion, AI-generated text and images must be located in the appropriate cultural context to be experienced as art, either as material works such as prints or as spatio-temporal experiences such as a film screening within a gallery or other art institution (Osborne 2013, p. 48). Finally, the postconceptual artwork is ‘radically distributed’ and ‘irreducibly relational’, meaning that the work is not located or conceptually sustained within a single material object (Osborne 2013, pp. 46–49). Osborne illustrates the idea of the distributed artwork with two examples. The first, Robert Smithson’s earthwork Spiral Jetty (1970), was distributed as an artwork across the earthwork and the supporting documentary film, maps and drawings presented in galleries; as Spiral Jetty was, for many years, submerged underwater in the Great Salt Lake, Utah, it was conceptually and practically sustained as an artwork through the documentation (Osborne 2013, p. 110). Spiral Jetty’s distributed nature was also discussed by Halsall from a systems perspective (Halsall 2008, pp. 146–51). Osborne’s second example concerns photography as a distributed work, both on the level of the photographic image and its reproducibility and, more broadly, as a field of activity (Osborne 2013, pp. 120–23). The argument for distribution is even stronger with AI-generated images due to the break in the indexical relationship between the photographed, sampled or described subject and the generated images. AI images cannot, however, be considered artworks unless they meet the criteria of the postconceptual artwork and explicitly operate within the art system.

3. Distributed Authorship

Osborne described photography as an ‘imagined unity’ that is distributed across a range of optical, mechanical and chemical processes and social functions, including documentation, advertising and surveillance. He states, ‘This imagined unity is anchored in, or condensed into, the famous meaning-effect of “the real”’ (Osborne 2013, pp. 123–24). The ‘reality effect’ is a concept developed by Barthes and rooted in literary theory to describe how textual details in literary works create the illusion of coherence and authenticity (Barthes 1989, pp. 140–48). Osborne employs the term to highlight photography’s feeling of reality despite, or because of, its distributed nature. This leads us to consider the nature of authorship, whether the AI-generated image is co-produced, like the photograph, with the underlying technical processes, and the degree to which meaning can be transported through the highly distributed AI system from the artist to the audience.
Authorship can be considered as a spectrum running from pure individual creation, through forms of co-production or co-creation with other actors or tools, to pure machine creation, where the authorship of the artwork is so radically distributed, or so far beyond the understanding of a human author or wider culture, that it can be thought of as alien. Pure creation is a theoretical and unrealisable model, in much the same way that the isolated system is an idealised system which operates in isolation from the wider world. Pure creation would consist of the production of the artwork directly from the interior world of the artist: their bodily functions, affective experiences, emotions and ideas. However, the artist is not an isolated or even a closed system but operates as an open system, absorbing and exchanging materials, energy and information, such as the eating of food, the warmth of the sun and the registering of external information.
We are constantly absorbing information at different levels of consciousness, and these will all contribute to the workings of the self or artist as a system. At the most primal level, sensory receptors in our skin can detect a prick from a rose’s thorn, and this information is sent through the central nervous system and experienced as pain. We can also experience the rose on an aesthetic level—its visual or sensory appeal—through the comingling of external stimuli with our subconscious mind, as it seeks patterns and triangulates these sensations with stored memories. We also absorb and process conceptual information, such as the taxonomic knowledge of the plant family or stories which employ the rose symbolically, such as Oscar Wilde’s The Nightingale and the Rose (Wilde [1888] 2001). The artist cannot, therefore, operate as an isolated or closed system as they are entangled with their environment.
No art, therefore, is produced in isolation from the artist and their interaction with the world. Likewise, no art experience is completed except in relation to the audience, whether that audience is the artist, another person interacting with the work, or the institutions of art and wider society that sustain it. Art is, therefore, produced through the interaction of the artist with the world and the audience with the artwork, meaning that an artwork cannot be created without an author. This was demonstrated by Marcel Duchamp in 1917 when he exhibited a commercially produced urinal as the art object Fountain (1917), which highlighted both the conceptual nature of the urinal as an artwork and the aesthetic qualities of the ceramic object (Foster et al. 2016, pp. 127–29). This commercially produced sanitary ware was not created as an art object but redeployed by Duchamp as art. This transmogrification is perceptual but is also facilitated by material, social and conceptual systems. In social and material terms, the production of the urinal is the product of a complex system or set of processes, including the mining industries and the extraction and refinement of silica, feldspar and clay; the ceramic manufacturing processes, including the design, moulding, firing and glazing of the clay form; and the commercial processes of transaction and distribution. Thus, whilst the authorship of the artwork is centred on Duchamp, on the broader material level, we can acknowledge the systems of extraction, production and dissemination that created the glazed ceramic object. On a conceptual level, the authorship of the artwork must be drawn around Duchamp as the originator of the work and, to some degree, the audience that imagines and accepts the urinal as an artwork and the institutions that legitimise and sustain the work as art.
From an art historical and art theory perspective, such appropriation is unproblematic and well documented, with Duchamp being one of the first, alongside Pablo Picasso and Georges Braque and their newspaper collages, to redeploy found objects as art. This continued during the 20th century, as seen in the work of Jasper Johns and Robert Rauschenberg in the 1950s, Andy Warhol and Pop Art in the 1960s, and the theorised Postmodern Art of the 1980s, including Jeff Koons. Notably, Sherrie Levine made a bronze cast of Duchamp’s urinal, Fountain (After Marcel Duchamp: AP) (1991), which both underlined the historical importance of the original work and operated as a critical and philosophical conclusion to the debate concerning appropriation. The reuse of found material has, of course, expanded in the 21st century, particularly in relation to the remix culture of fashion, music and the Internet, and AI image and text generation is a highly distributed form of remix and appropriation.
From this perspective, the text and images generated from AI models may have the appearance of art, but they are not in themselves art until they are deployed by an artist within the art system as an art object: the art system being the artist, the artwork and the wider network of art, including the audience and the institutions that support art. An image generated from a prompt using an AI system is fundamentally not art unless it operates within the art system. The AI image may prompt an aesthetic experience or conceptual idea in the mind of the audience, but that has been constructed by the audience in relation to the image and the distributed system that produced it: the image being a product of a complex system or set of processes equivalent to the industrial production of the urinal employed by Duchamp.
Like the urinal, the authorship and production of a film are highly distributed, albeit with a distinct and clearly articulated ‘hierarchy of credits’: a film is primarily associated with the writers and directors and, more broadly, with the producers and production teams who facilitated its inception and production. More broadly still, the people, cities and cultures who have impacted the film on both a practical and cultural level are acknowledged as an influence. As Karen Pearlman and John Sutton state, film is a ‘complexly layered form of artistic production […] a deeply interactive process, socially, culturally, and technologically’ (Pearlman and Sutton 2022, p. 86). Pearlman brilliantly demonstrates the distributed nature of filmmaking in the video essay Distributed Authorship: an et al. proposal of creative practice, cognition and feminist film histories (2021) and the associated paper (2023), in which she discusses the Soviet avant-garde filmmaker Esther Shub, who used montage or collage techniques to create ground-breaking documentaries in the 1930s. Pearlman explores two important points in relation to distributed authorship. Firstly, the authorship of film, as noted above, is distributed across the contributors, and equating the director with the author should be understood metaphorically. Secondly, from a feminist film theory perspective, contributors to the authorship of films have been and continue to be marginalised or elided. Pearlman proposes that to better understand how and who contributes to authorship, we need to have a deeper embodied understanding of how films come into being. She also suggests that the term ‘et al.’, meaning ‘and others’, could be applied to films to denote the collective effort required in filmmaking. Likewise, AI-generated or augmented works, including images redeployed within art or text redeployed within other texts or contexts, could carry an equivalent postfix to denote this distribution, such as a label to denote that the work was co-produced not only with AI tools but also with the artists, writers and programmers who have created or trained the system.
Similarly, a video game can be, and more readily is, understood as having distributed authorship. These are very complex cultural and technical objects, which simultaneously operate as navigable worlds, narratives and pictures, and their production and, to some degree, their consumption are complex and multi-layered. As Stephanie Jennings notes in Co-Creation and the Distributed Authorship of Video Games (2016), it is not only the ‘developers, the designers and artists and writers who create its content, rules, and form’ but also the technologies and wider culture which support it (Jennings 2016, p. 123). Jennings also makes an important observation that distributed and co-creation models of production are not intrinsically cooperative and conflict-free but can accommodate competition, and the final works will be a product of the different goals of the participants. From a systems perspective, the final game ‘emerges’ from this complexity.
Likewise, the production of an AI image is highly distributed, but we have yet to fully appreciate its provenance, in part due to the enduring myth of the artist as the sole author. An AI image, for example, is generated by a prompt written by the artist or author, but this is only part of the complex system which generates the work. The prompt is a set of instructions provided to an AI system containing a clear description of the desired textual or visual output. In simple terms, the prompt is tokenised, meaning that a prompt such as ‘a urinal presented on a plinth’ will be converted into the tokens [“a”, “urinal”, “presented”, “on”, “a”, “plinth”], whilst retaining a meaningful or recognisable semantic relationship between the terms (Dhamani and Engler 2024, pp. 9–10). When a prompt is input into an AI system to generate a text-based response, the system responds to the prompt in relation to the mapping of relationships built on extensive datasets. The model analyses the input and predicts the most likely sequence of words to follow, based on patterns learned by training the system on masses of data, or constructs an image based on trained patterns extracted from the image dataset (Lai et al. 2023).
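As a minimal illustration of this first step, the following Python sketch tokenises the example prompt. It splits on whitespace for clarity; production systems such as those described by Dhamani and Engler use learned subword tokenisers, so the actual tokens and the vocabulary IDs below are invented for illustration.

```python
# A toy tokeniser: whitespace splitting stands in for the learned subword
# tokenisation used by real LLMs and image generators.
prompt = "a urinal presented on a plinth"
tokens = prompt.split()
print(tokens)  # ['a', 'urinal', 'presented', 'on', 'a', 'plinth']

# Each token is then mapped to an integer ID in the model's vocabulary.
# These IDs are invented for illustration.
vocab = {"a": 64, "urinal": 30251, "presented": 5446, "on": 319, "plinth": 48217}
token_ids = [vocab[token] for token in tokens]
print(token_ids)  # [64, 30251, 5446, 319, 64, 48217]
```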
Thus, text is input into the system and the system outputs text, and this allows for slippages of meaning to emerge as the programmed and inferred semantic relationships between words in the prompts and the encoded and emergent relationships within the AI system must be understood as both provisional and subject-specific. Derrida argued, and it has generally been agreed, that language is an open system operating within the world it describes and is consequently dynamic and subject to constant relational evolution. Derrida explained this through two flexible concepts: trace and différance. Building on the work of Ferdinand de Saussure, Derrida argued that the sign is constructed in relation to and in opposition with other signs, stating the following:
The play of differences involves syntheses and referrals that prevent there from being at any moment or in any way a simple element that is present in and of itself and refers only to itself. Whether in written or spoken discourse, no element can function as a sign without relating to another element, which itself is not simply present. This linkage means that each ‘element’ […] is constituted with reference to the trace in it of the other elements of the sequence or system. This linkage, this weaving, is the text, which is produced only through the transformation of another text. Nothing, either in the elements or in the system, is anywhere simply present or absent. There are only, everywhere, differences and traces of traces.
The terms ‘trace’ and ‘différance’ are closely related but focus on different aspects of how meaning is constructed and understood within language. Trace refers to the residual mark or shadow left by the absence of something: meaning always carries some record of what is not there. For example, when it is dark, we are reminded of the absence of light, its trace. The related term, différance, combines the concepts of difference and deferral, focusing on the process by which meaning is generated by contrasting two concepts and on the unfixedness or deferred nature of meaning. For example, there is not a single Cy Twombly painting that can definitively represent his complex practice, and until you have seen all of his paintings, the concept of a ‘Twombly’ is deferred. However, an informed audience can differentiate between his work, with its mix of energised paint marks and jittery calligraphy in pencil and wax crayon, and the work of other abstract artists without having seen all of his paintings. The term ‘Twombly’ differentiates these works without being fully realised as a concept. Taken together, trace and différance demonstrate the dynamic, relational and complex nature of signs, and this appreciation can be applied to the three primary engagements with AI systems: during their development, as we interact with them and as we respond to their outputs.
At the system development stage, a ‘semantic index’ needs to be developed for the AI system to map the relationships between words, concepts and entities to reflect their meanings and contexts through a combination of human effort and automated AI techniques. Humans input specialist knowledge in fields such as medicine or art, curate taxonomies and define the initial relationships between words and concepts. In addition to this, AI systems can apply natural language processing techniques such as entity recognition (the identifying and classifying of text into defined categories), semantic embedding generation (creating numerical representations of words, phrases, sentences or other pieces of text that capture their meanings mathematically) and query expansion (expanding the context of prompts with additional related terms) (Dhamani and Engler 2024).
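A toy sketch of semantic embedding may help the non-technical reader: words become vectors, and geometric proximity stands in for semantic relatedness. The three-dimensional vectors below are invented for illustration; production systems use learned embeddings with hundreds or thousands of dimensions.

```python
# Toy word embeddings: relatedness is measured as the cosine of the angle
# between vectors, echoing the relational view of meaning discussed above.
import numpy as np

embeddings = {
    "urinal":   np.array([0.9, 0.1, 0.2]),
    "fountain": np.array([0.8, 0.2, 0.3]),
    "sonnet":   np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In this invented space, 'urinal' sits far closer to 'fountain' than to 'sonnet':
print(cosine_similarity(embeddings["urinal"], embeddings["fountain"]))  # ~0.98
print(cosine_similarity(embeddings["urinal"], embeddings["sonnet"]))    # ~0.34
```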
This is a vast technical subject area, but a productive overview is given in Introduction to Generative AI: An Ethical, Societal, and Legal Overview (Dhamani and Engler 2024), and several papers are included here to give a sense not only of the complexity and richness of the subject but also of its morphological similarity with literary theory. In the 1990s, George Landow drew a comparison between hypertext, the system of connecting and linking text, and literary theory. Landow stated the following:
When designers of computer software examine the pages of ‘Glas’ or ‘Of Grammatology’, they encounter a digitalized, hypertextual Derrida; and when literary theorists examine ‘Literary Machines’, they encounter a deconstructionist or poststructuralist Nelson. These shocks of recognition can occur because over the past several decades literary theory and computer hypertext, apparently unconnected areas of inquiry, have increasingly converged.
This analogy between semantic indexing and literary theory, how things are connected and differentiated, remains productive for the non-technical reader today. Further reading concerning the building and mapping of relationships between words, concepts and entities, or semantic indexing, includes, but is not limited to, the following: Attention Is All You Need (Vaswani et al. 2017), Distributed Representations of Sentences and Documents (Le and Mikolov 2014) and Efficient Estimation of Word Representations in Vector Space (Mikolov et al. 2013).
There are three bodies working on the production of signs at the system development stage: the subject specialists, the programmers of the system and the system itself. The subject specialists contribute their knowledge and understanding to the system. The system is built on the knowledge and understanding of the programmers who have written the underlying algorithms of the AI systems, who in turn build on the knowledge of the earlier programmers who have written the programming languages, the libraries of reusable code and the code that underpins the operating system of the computer and its communication. At each layer, terms have been defined, and structures have been implemented that shape how the user can interact with the AI system and how the system will handle requests, navigate the semantic indexes and the collected data and generate results. Underpinning the AI system are immense datasets that have been generated by other actors: humans, organisations, algorithms and other technological systems. On an individual level, data can include the oeuvre of an artist or writer; at an institutional level, it can include financial or medical datasets; on an algorithmic level, it can include synthetic data, such as simulations of real-world or hypothetical systems; and at the technological level, it can include data derived from sensing devices such as satellite images and CCTV cameras. This sensed information is not a direct index of reality but is subject to the technological limits of the sensor and programmed methods of sampling, which stratify the world into a limited range of pixel or data values.
Thus, the authorship of the AI system is distributed across the programmers of the AI software and the technologies that support the AI system and its underlying data. More broadly, it can be argued that it is supported by the social, political and market systems that sustain it as a feasible and desirable model for information generation. The authorship of the data that is required for the AI system to generate potentially meaningful results is massively distributed across the humans, organisations, algorithms and technological systems that have contributed data voluntarily or unwittingly via product and service agreements, or unknowingly through data scraping (the process of extracting information from the Internet) and sensed data from sensing technologies. More broadly, these data are shaped by technological, social, political and disciplinary methods, which determine what information is collected and how it is framed. Once the AI system has sufficient data, the user can query the system to generate text or images using prompts. As noted earlier, a prompt is a short statement that can be in the form of a question, such as “Who was Jack Burnham?” or an instruction, such as “Describe Jack Burnham’s influence on the development of Systems Art.” The AI system breaks the prompt down into a string of words or tokens and then, drawing from its semantic index and vast training data, generates a response. This response is then reconfigured into natural language that is comprehensible to the person who generated the prompt.
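The generation step can be illustrated with a deliberately tiny model. Real LLMs predict the next token with neural networks trained on vast corpora; the bigram counts below are invented for illustration and stand in for that training data.

```python
# A toy next-token predictor: given the previous token, choose the most
# frequent continuation seen in 'training'. The counts are invented.
bigram_counts = {
    ("jack", "burnham"): 12,
    ("jack", "london"): 3,
    ("systems", "art"): 9,
    ("systems", "theory"): 7,
}

def predict_next(previous_token: str) -> str:
    candidates = {nxt: count for (prev, nxt), count in bigram_counts.items()
                  if prev == previous_token}
    return max(candidates, key=candidates.get)  # greedy: most likely continuation

print(predict_next("jack"))     # 'burnham'
print(predict_next("systems"))  # 'art'
```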
As Derrida has demonstrated, the meaning of words is not fixed either at an individual or cultural level. Dictionaries and the semantic indexes of the AI systems offer some stability, but often as a point of différance, meaning that we calibrate our understanding in relation to and in opposition with these temporally fixed meanings. Thus, the author brings their own understanding and use of language and their fields of experience, and this is reflected in the prompt and may not correspond directly with the meanings fixed in the index. Consequently, there is a slippage in meaning, and the results they receive may not have the desired conceptual content, tone or voice, and the resultant text may feel alien. The author will consequently revise the framing of their query or instructions to nudge the system to produce a result that more closely resembles what they have in mind or the voice or style they are seeking for the work.
This process of evolving the prompts to achieve an imagined result may feel dialogical and dialectical, but the AI system is not directly responding to the intended meanings that sit beneath the words of the author, as these meanings are not fixed for the author, are possibly unknown to the author on a conscious level and are subject to change; nor is the LLM evolving its understanding based on its interaction with the author. The AI system is fixed (other than updates by the developers) and cannot autonomously evolve in response to the prompts in a way equivalent to biological organisms, which can respond to their environment or mutate generationally. It may give the uncanny appearance of responsiveness and evolution, as LLM models are trained to predict the next word or token in a string, and this process is autonomous in the sense that it is self-supervised (Dhamani and Engler 2024, p. 11). The author of the prompt is not, however, having a dialogue with a sentient machine but is scoping out the terms by which they can achieve their desired result. There is an inherent and complex dialogue taking place, but this is between the author and herself via the unknowable index and the infinitude of data that underpin the system. This has the potential to create a para-solipsistic psychological state whereby the author thinks they are controlling the narrative but is, in fact, slowly evolving their ideas, language and tastes to fit with the outputs of the AI system. Langdon Winner has defined this propensity to adjust our desires to fit with technological constraints as ‘reverse adaptation’ (1977). Howard Veregin describes it as follows:
Reverse adaptation refers to the transformation of existing goals to accommodate a new technical means. Goals are in effect rearranged in accordance with the demands of the technological order. In extreme cases, the broader social context ceases to be relevant as long as technological demands are satisfied and maintained.
At this level of system interaction, the authorship is distributed across the author of the prompt and the wider AI system, but it is also refracted through the feedback loops taking place within the interaction, whereby the author adjusts their language, expectations and desires as they interact with the unknowably complex system and the pool of information it draws from. More broadly, the author’s employment of language and concepts, translated into prompts, is built on their own knowledge, be it in art, literature or science, which further extends the authorship of the AI-generated work to these fields of experience. This alludes to the idea, attributed to Johann Wolfgang von Goethe, that we can only see what we know; that is, to actively engage with the AI system, we need to operate within our fields of experience so that we can productively shape queries and instructions and have the knowledge and critical faculties to calibrate the resultant information against our internalised knowledge and external bodies of knowledge, be it the canon of an artist or the principles and equations underpinning quantum mechanics.
The prompt author can, of course, interact with the AI system without the specialist knowledge required to fully appreciate the materials they are submitting. The prompter can input the work of a poet or painter and ask the system to produce another work in the style of the submitted work without fully appreciating the underlying linguistic and conceptual acuity present in the sonnet or the painterly decisions of colour and composition present in the image of the painting. Such a prompt redeploys a found object within the AI system, and the underlying knowledge and experience required for the original authorship cannot be fully appreciated. In this scenario, the contribution of the prompt author to the authorship of the new work, relative to the original poet or artist, the AI system and the many artists whose work has contributed to the training of the AI, is limited.
Three scenarios of interaction with an AI system can be briefly considered here to illustrate the degree to which the prompter is shaping the production of the images. As described, the production of images is the result of the author defining a prompt that interacts with the AI system, which is trained on data. In the first scenario, someone writes a prompt, a descriptive statement, for an image generator such as OpenAI’s Dall-E 3 to create an image. The statement would contain nouns, such as ‘image’; adjective + noun pairs, such as ‘glass sculpture’; and prepositional phrases, such as a description of how the light is refracted through the crystal in a white gallery space. The resultant images are the product of the prompt combined with the AI system and the data it is trained on, which may include images of glass, sculptures, gallery spaces and the play of light. The resultant images would have a distributed authorship between the prompt writer, the wider culture that supports the production of language, the AI system and, crucially, the originators of the training data, which would include the artists and photographers who produced the original works and their photographic image-based record.
This could be further understood in terms of signs. Charles Sanders Peirce articulated the three forms of sign as the index, icon and symbol, with the icon being the resemblance, the index being a physical connection and the symbol being a shared convention such as language or mathematical notation (Soderman 2007, pp. 156–57). From this perspective, AI images can be compared to other image forms, such as photographs and paintings. Firstly, a photograph, whether chemical or digital, has an indexical and iconic relationship with the photographed subject, whilst the digital photograph’s indexicality is complicated and extended by the camera’s sensor and the algorithms it employs to process the data. Likewise, AI-generated images will have an indexical relationship with both the AI algorithms and the data they were trained on. Such algorithmic images do not present us with pictures of the world but ‘operate as reconfigurable outputs of a simulation of the world’ (Goodfellow 2019b, p. 5).
The second scenario can be illustrated with an example of an artist interacting with an AI system. The artist Peter the Roman employs Stable Diffusion’s Image2Image (Img2Img) technique to create work that is trained on both his own images and a more expansive training dataset. There is, therefore, a triangulation between his prompts, his artwork and the more extensive training datasets that draw from other objects, images and works (Ai and Sheng 2023). See Figure 1. This approach allows for the creation of original images based on the prompt and an input image or set of images (all created by the artist) by employing a diffusion model to iteratively refine random pixel values, or noise, into a coherent image in conjunction with the AI system’s data. Consequently, the authorship of the work is shared between the artist and the AI system and its data, and the resultant images will have an indexical relationship with both the artist’s images and the larger training data. This potentially shifts the status of the resultant works from images to pictures, as they are more directly the product of an author who is directing their production.
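For readers who want to see what such an Img2Img workflow can look like in practice, the following is a minimal sketch using the open-source Hugging Face diffusers library. It is not Peter the Roman's actual pipeline; the model name, file names and parameter values are illustrative assumptions.

```python
# A hedged sketch of an Img2Img call: the artist's own image is partially
# noised and then denoised under the guidance of a text prompt, so the result
# is indexed to the source image, the prompt and the model's training data.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("artists_source_image.png").convert("RGB")  # the artist's work

result = pipe(
    prompt="biomorphic glass form in a white gallery space",  # illustrative prompt
    image=init_image,
    strength=0.6,        # 0 stays close to the source image; 1 defers to noise/model
    guidance_scale=7.5,  # how strongly the prompt steers the denoising
).images[0]
result.save("generated.png")
```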
Charles Baudelaire, in The Painter of Modern Life (1863), made it clear that a picture (or tableau) is not a window or view of the world, but it is constructed from the experience of being in the world (Baudelaire 2010). Whereas the photograph is a record of a physical event, and the software-generated image is a record of an algorithmic event, the picture is a record of a psychological event, as the artist has embedded conceptual, aesthetic and affective information into the process of production at either a conscious or subconscious level. Such embedded information will exist in work solely trained on third-party data, but the authorship of the images and the experiences and ideas they represent are so highly distributed that they are not only post-photographic in the sense that they are constructed algorithmically but also post-pictorial, as they are created without the singular experience of being in the world. An image may allow us to see something, but only a picture contains the experience of the artist.
The images created in the first and second scenarios are the products of an open system, as information is exchanged with the wider environment of the AI system and its data. It is also a complex system, as the system taken in totality (the AI system, training data and artist or prompt writer) is beyond the full comprehension of the user, and the system exhibits unexpected results, or emergence. The casual user, who writes the prompts and has no understanding of the underlying processes or data, may even experience the AI system or the wider computational networks as hyperobjects, being incomprehensibly extensive and complex. This feeling of technological enfoldment is increasingly narrativised within contemporary culture by pointing to the idea that we are living in a simulation. This idea was established within mainstream culture with the film The Matrix (Wachowski and Wachowski 1999) and given academic authority as a speculative concept several years later with Nick Bostrom’s seminal paper, Are You Living in a Computer Simulation? (Bostrom 2003), in which he calculated the likelihood that we are living in a simulation. More recently, the idea that we are living in a technologically mediated simulation is proffered as an explanation for the weirdness of contemporary culture, what Mark Fisher (2016, p. 61) describes as the ‘presence of that which does not belong’.
The uncomfortable presence that we sense in contemporary culture is not (necessarily) evidence of a meta-simulation that encompasses our experienced reality but of the technological excesses which, along with pandemics and climate crises, engulf our personal lived experience, making the world ‘increasingly unthinkable’, as Eugene Thacker described it (Thacker 2011, p. 1). One way of containing the excesses and unknowability of the world is by acting locally in an embodied way and engaging with systems that are understood and controlled by the artist. The artist who uses their body in performance or their hands to shape clay or draw with charcoal engages with the world directly, in ways unmediated by complex tools, technologies or systems. Bates describes such mark-making as ‘embedded information’, describing it as the ‘pattern of organization of the enduring effects of […] presence’, a record of being in the world (Bates 2006, p. 1036). Beyond these direct records of matter and energy exchanges, artists employ tools to explore their affective, emotional and conceptual experiences, and it is the degree to which they control and understand these tools, and the provenance of the outputs of these artist–tool relationships, that is central to the discussion of authorship in the age of AI.
The third and final scenario of engagement with AI systems concerns the artist who trains the AI system solely on their own work, as opposed to a vast and unknown dataset, thus creating a dialogue between themselves, the AI system and their own practice. This can be illustrated with Peter the Roman’s Blob series of works, which were created using a GAN model directly trained on a dataset containing only his artwork, thus constructing a dialectical feedback loop whereby he evolves his work in response to the generated images, which could, in turn, be fed back into the dataset. See Figure 2.
The artists who refine their prompts and train the AI on their own data, such as Peter the Roman, are operating similarly to earlier systems artists such as the computational artist Manfred Mohr or the conceptual artist Sol LeWitt, both of whom used rules to produce work. LeWitt famously stated in the Artforum article Paragraphs on Conceptual Art (1967), ‘In conceptual art the idea or concept is the most important aspect of the work’ and ‘The idea becomes a machine that makes the art’ (LeWitt 1967). He demonstrated this in his wall drawings, which he developed as sets of instructions that were then executed by others as drawings or wall paintings. The comparison with the contemporary artist using AI is productive: although LeWitt’s instructions were clear, they left room for interpretation in their execution at both an intentional level, such as the scale of the drawing, and an unintentional level, in terms of the physical execution. A similar level of slippage between what is expected and what happens takes place for the writer of the prompt whose system is trained on their own data, due to the difference between the intention of the writer, how this is transcoded into words, the prompt, and how this is dealt with in the AI system. Once the artist or prompt writer reviews the results, they can then refine the prompt or expand the training data to achieve results that are more aligned with their expectations and desires. This process of evolution through feedback and refinement has the potential to accelerate the artist’s understanding of their underlying rules of creation, things that may be difficult to appreciate without this feedback mechanism, allowing them to locate the pockets of novel indeterminacy that remain within their practice and achieve emergent complexity in their work.
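As a playful, non-authoritative analogy for this rule-based slippage, the following sketch executes a LeWitt-like instruction (‘draw ten straight lines between random points on the wall’) in Python. The instruction is fixed by the author; the realisation varies with the executor, here modelled as a random seed. This is an illustration of the principle, not a reconstruction of any actual LeWitt work.

```python
# A fixed instruction, variably executed: the rule belongs to the author,
# but each executor produces a different drawing from it.
import random

def execute_instruction(n_lines: int, wall_width: float, wall_height: float,
                        executor_seed: int):
    """Return n_lines line segments, each a pair of (x, y) endpoints."""
    rng = random.Random(executor_seed)  # a different executor, a different drawing
    return [
        ((rng.uniform(0, wall_width), rng.uniform(0, wall_height)),
         (rng.uniform(0, wall_width), rng.uniform(0, wall_height)))
        for _ in range(n_lines)
    ]

# The same instruction realised by two different 'executors':
drawing_one = execute_instruction(10, 500.0, 300.0, executor_seed=1)
drawing_two = execute_instruction(10, 500.0, 300.0, executor_seed=2)
print(drawing_one[0] != drawing_two[0])  # True: same rule, distinct realisations
```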
In contrast, the artist who trains the AI system solely on the work of others can be understood as continuing the tradition of appropriation, albeit in vastly distributed terms. The artist who writes the prompt but does not use their own training data operates with less control, as they do not know the underlying algorithms or data shaping the work. There is thus a level of ‘chance’ or ‘randomness’ from the perspective of the artist, as they cannot fully anticipate the results of the prompt. There is, of course, a long history of employing randomness and chance within art, which can be seen within Systems Art, Minimalism and Fluxus in the 1960s and, further back, in the Dadaism of the early 20th century. Duchamp, for example, created the work 3 stoppages-étalon (1913–1914) by dropping threads onto a canvas and fixing where they landed (Judovitz 1998). Duchamp felt the employment of such methods was liberating, stating that it ‘was a way to escape from those traditional methods of expression long associated with art’ (Judovitz 1998, p. 35). The way the threads were released from Duchamp’s hand may exhibit a level of chance and randomness, as this action may be beyond his conscious decision-making. However, the placement of the threads, once they begin their descent, is not strictly chance but something unpredictable and beyond the control of the artist. Although complex, their movement could be described with physics: the threads’ mass and density and the resistance of the air determine their final placement. Likewise, the origin of an AI image, like the release of the thread, has a random dimension and is beyond the control of the artist who is engaging with the AI system. However, the artist may not be distinguishing between the aspects of the process that are designed to be mathematically random (the origin of the AI-generated image is random noise, or a ‘noise map’) and the broader experience of engaging with the AI system, which may feel like it involves randomness and chance due to its complexity. AI-generated images and texts are, therefore, not random in the strict sense but the products of a complex system (the initial noise map, plus the prompt, the AI algorithms and the underlying data), in much the same way that 3 stoppages-étalon, the art object, is the product of Duchamp’s idea, the production of the thread, its release and the physics that affects its placement.
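The distinction drawn here between mathematical randomness and felt unpredictability can be demonstrated directly. The following minimal sketch, assuming NumPy, shows that the ‘noise map’ from which an image generator starts is pseudorandom: fixing the seed replays the ‘chance’ element exactly.

import numpy as np

# Two independent generators initialised with the same seed...
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)

# ...produce identical 64x64 'noise maps'.
noise_a = rng_a.standard_normal((64, 64))
noise_b = rng_b.standard_normal((64, 64))

print(np.array_equal(noise_a, noise_b))  # True: the randomness is replayable

In this sense the machinic ‘chance’ differs from Duchamp’s falling thread: once the seed, prompt, model and data are fixed, the outcome is deterministic, and it only feels like chance because the system is too complex for the artist to anticipate.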

4. Conclusions

This ambivalent position concerning the provenance of ‘found’ material is the condition of 21st-century culture, where there is an excess of material and information. This is demonstrated in the remix culture of fashion, music and contemporary art, with its knowing re-use of signs from popular culture, the Internet and politics. The reality of this material and informational infinitude shifts the contemporary artist from producer to consumer. This transition is underpinned by theory, such as Barthes’ ‘death of the author’ and Stanley Fish’s reader-response criticism, whereby the reader becomes the author. When an artist writes a prompt for an AI system, are they the author, part of a distributed authorship or merely a reader of the resultant outputs? When the artist redeploys the materials from the AI system, are they acting as an artist, a curator or merely a consumer?
Artists are increasingly operating like curators, as their primary concern is no longer the production of material but what to do with existing information: its ordering, archiving and selective presentation. Artists working with AI systems may further accelerate this archival and curatorial response within contemporary art, a condition articulated by Nicolas Bourriaud in Postproduction: Culture as Screenplay: How Art Reprograms the World (Bourriaud [2002] 2005). Bourriaud discusses how contemporary artists have made this shift from producers to remixers of culture, taking found objects and ideas and recontextualising them for consideration in ways that resemble and directly draw from curatorial practices.4 As Bourriaud asks:
…how can we produce singularity and meaning from this chaotic mass of objects, names, and references that constitutes our daily life? Artists today program forms more than they compose them: rather than transfigure a raw element (blank canvas, clay, etc.), they remix available forms and make use of data.
Bourriaud continues that contemporary culture is ‘stockpiled’ with ‘data to manipulate and present’, and AI systems are exponentially adding to this infinitude. As noted at the beginning of this discussion, we are operating within the post-systems condition, with algorithmic and cybernetic systems shaping our lifeworld, and we consequently need to understand the information-driven systems shaping society. It was also noted that the postconceptual artwork is simultaneously conceptual and aesthetic and distributed in both production and dissemination, and that the post-medium condition untethers, to some degree, the message from the medium.
However, Bourriaud employs the prefix ‘post’ differently, referring to the activity that takes place after production—once we have access to unlimited goods, services and information. AI systems will generate limitless text and images, first trained on things created by humans, then on things created by algorithms and finally on things they created themselves—pure simulacrum, in Baudrillard’s terms—as they become self-referential at a fundamental level (Baudrillard [1981] 1994). AI systems will become self-sufficient autopoietic systems that generate exponentially more information, further diluting the role of the programmers and artists who interact with them: pure machine production, or alien authorship. In this situation, the role of the artist, as articulated by Bourriaud, is the invention of ‘protocols’: ways of engaging with the emerging modes of representation and new formal structures (Bourriaud [2002] 2005, p. 24).
The artist and the audience for their works are increasingly entangled within materials generated by AI tools, which displaces and distributes authorship. In a mythological and idealised past, the artist was the sole author of their work. Postmodernism extended authorship to include the audience, and systems thinking articulated this in terms of the flow of information and fields of experience and their influence on both author and audience. Authorship is now further extended to include the technological systems that generate new works through the augmentation of production, the sampling of earlier works and generation from algorithms independent of an external referent. Bourriaud argues that the role of the artist in this state of postproduction is to curate conceptual and aesthetic materials in a way that is meaningful for human culture. The processes of collection, synthesis and representation by an artist (or writer) may soon seem profoundly inefficient compared with the relational and generative powers of AI, which can instantaneously iterate something new from mountains of text and image data, something that would take a single artist many lifetimes to replicate. Nevertheless, the work of the artist, although limited when compared to the processing power of AI tools, is fundamentally an embodied human activity directed at other humans and will, therefore, remain essential to other humans and differentiated from machinic production.
With this humane perspective in mind, many texts and artists have been important in shaping the author’s opinions and the assembly and writing of this essay. Many authors are consciously referenced, and others have been included in the notes as they contribute to this discussion in more distributed terms. Others will have influenced the writing in ways not consciously appreciated by the author, as they have become so embedded within his thinking as to be ‘second nature’. In that sense, Pearlman’s suggestion for cinema, the postfix ‘et al.’, could be applied more broadly to human activity and its extensions into tools such as writing, art and AI, without erasing the role of the writer, artist or programmer. Many works, including this paper, could carry the postfix ‘et al.’ to denote the distributed nature of production. As Pearlman states: ‘The “et al.” system is not intended to diminish the work of directors. Rather, it aims to provide a more accurate understanding of how the director’s leadership, artistry and individual abilities are entangled with those of the creative team’ (Pearlman 2023, p. 98). The paper Massively Distributed Authorship of Academic Papers (Tomlinson et al. 2012), for example, is attributed to thirty authors, demonstrating in its very form the distributed nature of the writing process. To employ Bourriaud’s concept of postproduction, the primary role of the author in a networked culture is the channelling, synthesising and presentation of ideas drawn from different fields of information.
The designation ‘artist’ is an accurate description of the person who communicates affective, emotional, aesthetic and conceptual ideas via artworks or art experiences within the context of the art system. However, if the production of the things they create and circulate extends beyond their own bodies and minds, if they employ the tools and labour of others, and if this labour meaningfully contributes to the final work in terms of its communicatory content, then some acknowledgement of this contingency would offer transparency and would be socially and culturally productive, as it expresses our entangled state with each other, with technologies and with the wider environment.
Anxieties regarding authorship and AI will inevitably persist despite an understanding that the authorship of cultural artefacts is distributed, as AI tools make manifest our interconnected state within technological systems, in which we are simultaneously atomised, as single nodes or actors, and distributed, spread across our data and data interactions. This cognitive dissonance strikes at the heart of what it is to be human: both a discrete embodied self and part of the broader ecological and technological systems in operation. As Martin Heidegger explores in Being and Time (1927), we cannot understand ourselves except in relation to the wider world, and this paradox is a foundational cause of human anxiety. As he observes, ‘That about which Angst is anxious reveals itself as that for which it is anxious: being-in-the-world’ (Heidegger et al. 1996, p. 176). Heidegger describes how we are ‘thrown’ into circumstances we did not choose, revealing the uncanniness of life or, in the case of AI, the anticipation of an uncanny future in which we may be increasingly decentred within our lifeworld by a culture that prioritises efficient and distributed information processing over lived experience (Heidegger et al. 1996, p. 315).
In closing, several texts that remain important to the author are returned to briefly. Rereading Jean Baudrillard’s Simulacra and Simulation is productive for this discussion, as it demonstrates that culture has been moving towards pure simulacrum for many decades (Baudrillard [1981] 1994). However, the acceleration of AI turns the simulacrum from a poetic into a literal description of alienation. Rereading the work of Jack Burnham reminds us that Systems Art, and Burnham’s texts on it, demonstrate that art needs to be more than the reconfiguration of signs and must instead proceed from an engaged understanding of the flow of information and the technological systems that underpin both art and society. This systemic description of art is most fully articulated in Niklas Luhmann’s Art as a Social System (2000), which differentiates art in terms of its communicatory power in the age of systems enfoldment. More broadly, Capra and Luisi’s (2014) The Systems View of Life: A Unifying Vision offers a comprehensive systems description of the physical world, demonstrating the fundamental interconnectedness of things and how we need to think holistically about the planet and its environmental and biological systems and, more speculatively, how we might apply this to culture, including the production of art, as a collective and distributed activity. Thinking through systems from a cultural perspective, Paul Cilliers’s Complexity and Postmodernism: Understanding Complex Systems (1998) synthesises Postmodernism and systems thinking, which, as discussed, are often dealt with in mutually exclusive terms. Finally, rereading Bourriaud’s Postproduction, we can start to think through what the role of the artist may be in the era of informational infinitude, where the artist is primarily concerned not with the authorship of new materials but with their containment.

Funding

This research received no external funding.

Data Availability Statement

No new data were created in this research. Data sharing is not applicable.

Acknowledgments

The author wishes to thank Christopher Dorsett for the many discussions on the ontological nature of art, Jagdeep Ahluwalia for conversations on some of the more technical aspects of AI and Peter the Roman for the supportive correspondence and the generous permission to reproduce the images of his artwork. I would also like to thank the journal’s reviewers for their valuable feedback.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1
The field of NLP models and generative AI research and literature is vast and complex. However, moving beyond Vaswani et al. (2017), several other influential papers that have evolved the field can be highlighted here. Combining Labeled and Unlabeled Data with Co-Training (Blum and Mitchell 1998) addresses how to improve algorithms through the use of extensive uncategorised or unlabelled data when only a small set of known or labelled data is available. An Introduction to Conditional Random Fields (Sutton and McCallum 2010) considers a probabilistic method to predict the relationships between words. In Beyond Accuracy: Behavioral Testing of NLP Models with CheckList (Ribeiro et al. 2020), the authors propose an approach to test NLP models using a matrix of linguistic capabilities. From a generative AI perspective, the foundational paper Generative Adversarial Nets (Goodfellow et al. 2014) lays out the standard framework for GANs, whereby two neural networks operate together: one to generate material and the other to discriminate.
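For reference, the adversarial framework in that paper can be summarised by its minimax objective, reproduced here in the paper’s own notation, where $G$ is the generator, $D$ the discriminator, $x$ the real data and $z$ the input noise:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

The discriminator is trained to assign high probability to real data and low probability to generated data, while the generator is trained to make $D(G(z))$ approach 1, that is, to produce material the discriminator cannot tell apart from the training corpus.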
2
Information in art
The broader history of information is well documented, and exhaustive historical texts include The Information: A History, a Theory, a Flood (Gleick 2011) and Beautiful Data (Halpern 2015). It is, therefore, sufficient for this discussion to highlight several edited volumes that have covered art’s relationship with information technology since the 1960s. These include Information, edited by Sarah Cook (Cook 2016), Systems, edited by Edward Shanken (Shanken 2015), Networks, edited by Lars Bang Larsen (Larsen 2014), Art and Electronic Media (2009), A Companion to Digital Art (Paul 2016) and Information Arts: Intersections of Art, Science, and Technology (Wilson 2002). From a systems perspective, Chronophobia: On Time in the Art of the 1960s (Lee 2006) and Art, Time and Technology (Gere 2006) consider the temporal dimension of cybernetic and systems art. Eve Meltzer’s Systems We Have Loved (Meltzer 2013) offers an anti-humanist reappraisal of systems-based art and the role of affect within cybernetic and conceptual art. Although published over 25 years ago, Nicolas Bourriaud’s Relational Aesthetics remains a seminal text describing how contemporary art engages with systems (Bourriaud 1998). A recent collection of writing on the early systems artist Hans Haacke, Hans Haacke: Volume 18, demonstrates the conceptual rigour present within the work of early systems artists (Churner 2015). Likewise, as noted in the essay, Jack Burnham’s collected essays, Dissolve into Comprehension: Writings and Interviews, 1964–2004, demonstrate a prescience in relation to information and systems within art (Burnham et al. 2015). Also noted in the essay are Niklas Luhmann’s sociological text Art as a Social System (Luhmann 2000) and Francis Halsall’s Systems of Art (2008), which deconstruct the art world and the artwork systemically. More recently, Jason A. Hoelscher, in Art as Information Ecology (Hoelscher 2021), has explored American art of the 1960s from a systems perspective.
3
Marcia Bates and Information
Marcia Bates has produced several important texts, including Information and Knowledge: An Evolutionary Framework for Information Science (2005), which maps the biological, anthropological and psychological roots of information thinking. A companion paper, Fundamental Forms of Information (2006), categorises information into distinct information flows and types of information. A third text, Information (2010), offers a comprehensive literature review of work that defines information through several conceptual frameworks, including Semiotics, Structuralism and Deconstructionism. These important texts have been collected into Information and the Information Professions (Bates 2016), the first of three volumes of texts from Bates on information science, demonstrating not only the complexity and breadth of the discussion but also the indeterminacy at the heart of the information concept.
4
The artist as curator and archivist
There are several important texts that reposition the artist from the producer of conceptual and aesthetic objects to the curator of objects and experiences. These include The Artist as Curator (Jeffery 2015), The Artist as Curator: An Anthology (Filipovic 2017) and When Artists Curate: Contemporary Art and the Exhibition as Medium (Green 2018). There is also a significant strand of critical writing that draws parallels between the rise of information and the ‘archival impulse’ within contemporary art. These include The Archive (Merewether 2006), Performing the Archive (Osthoff 2009), Staging the Archive (van Alphen 2014), The Archive as a Productive Space of Conflict (Miessen and Chateigné 2016) and The Big Archive: Art from Bureaucracy (Spieker 2017). In Bad New Days (Foster 2015), Hal Foster describes this archival response in terms of both critical complicity with technology, as in Pierre Huyghe and Philippe Parreno’s collaborative project No Ghost Just a Shell (1999–2002), and material opposition, demonstrated in the use of celluloid film instead of digital production in the work of Tacita Dean (Foster 2015, pp. 31–60).

References

  1. Ai, Hao, and Lu Sheng. 2023. Stable Diffusion Reference Only: Image Prompt and Blueprint Jointly Guided Multi-Condition Diffusion Model for Secondary Painting. arXiv arXiv:2311.02343. [Google Scholar]
  2. Ascott, Roy. 2008. Telematic Embrace: Visionary Theories of Art, Technology, and Consciousness, 1st ed. Berkeley: University of California Press. [Google Scholar]
  3. Barthes, Roland. 1987. Image Music Text, New ed. London: Fontana Press. First published 1967. [Google Scholar]
  4. Barthes, Roland. 1989. The Rustle of Language. Berkeley: University of California Press. [Google Scholar]
  5. Bates, Marcia J. 2006. Fundamental Forms of Information: Research Articles. J. Am. Soc. Inf. Sci. Technol. 57: 1033–45. [Google Scholar] [CrossRef]
  6. Bates, Marcia J. 2016. Information and the Information Professions: Selected Works of Marcia J. Bates, Volume I. Berkeley: Ketchikan Press. [Google Scholar]
  7. Bateson, Gregory. 2000. Steps to an Ecology of Mind. Chicago: University of Chicago Press. First published 1972. [Google Scholar]
  8. Bateson, Gregory. 2002. Mind and Nature: A Necessary Unity. Advances in Systems Theory, Complexity, and the Human Sciences. Cresskill: Hampton Press. First published 1979. [Google Scholar]
  9. Baudelaire, Charles-Pierre. 2010. The Painter of Modern Life. London: Penguin Classics. [Google Scholar]
  10. Baudrillard, Jean. 1994. Simulacra and Simulation. Translated by Sheila Glaser. Ann Arbor: University of Michigan Press. First published 1981. [Google Scholar]
  11. Blum, Avrim, and Tom Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-Training. Paper presented at Annual ACM Conference on Computational Learning Theory, Madison, WI, USA, July 24–26. [Google Scholar]
  12. Bostrom, Nick. 2003. Are we living in a computer simulation? Philosophical Quarterly 53: 243–55. [Google Scholar] [CrossRef]
  13. Bourdieu, Pierre. 1993. The Field of Cultural Production: Essays on Art and Literature, 1st ed. Cambridge: Polity Press. [Google Scholar]
  14. Bourriaud, Nicolas. 1998. Relational Aesthetics. Dijon: Les Presse Du Reel. [Google Scholar]
  15. Bourriaud, Nicolas. 2005. Postproduction: Culture as Screenplay: How Art Reprograms the World, 2nd ed. New York: Lukas & Sternberg. First published 2002. [Google Scholar]
  16. Burnham, Jack, Melissa Ragain, and Hans Haacke. 2015. Dissolve into Comprehension: Writings and Interviews, 1964–2004. Cambridge: MIT Press. [Google Scholar]
  17. Capra, Fritjof, and Pier Luigi Luisi. 2014. The Systems View of Life: A Unifying Vision, Reprint ed. Cambridge: Cambridge University Press. [Google Scholar]
  18. Churner, Rachel. 2015. Hans Haacke. Cambridge: MIT Press. [Google Scholar]
  19. Cilliers, Paul. 1998. Complexity and Postmodernism: Understanding Complex Systems, 1st ed. London and New York: Routledge. [Google Scholar]
  20. Cole, Samantha. 2019. Deepfake of Mark Zuckerberg Highlights Facebook’s Fake Video Policy. Vice. June 11. Available online: https://www.vice.com/en/article/ywyxex/deepfake-of-mark-zuckerberg-facebook-fake-video-policy (accessed on 4 July 2024).
  21. Cook, Sarah. 2016. Information (Whitechapel: Documents of Contemporary Art). London: Whitechapel Gallery. Cambridge: MIT Press. [Google Scholar]
  22. Derrida, Jacques. 1977. Of Grammatology, 1st American ed. Translated by Gayatri Chakravorty Spivak. Baltimore: The Johns Hopkins University Press. First published 1967. [Google Scholar]
  23. Dhamani, Numa, and Maggie Engler. 2024. Introduction to Generative AI. Shelter Island, NY: Manning. [Google Scholar]
  24. Eco, Umberto, and David Robey. 1989. The Open Work. Translated by Anna Cancogni. Cambridge: Harvard University Press. First published 1962. [Google Scholar]
  25. Filipovic, Elena. 2017. The Artist as Curator: An Anthology. Milan: Mousse Publishing. [Google Scholar]
  26. Fish, Stanley. 1982. Is There a Text in This Class?: The Authority of Interpretive Communities. Cambridge: Harvard University Press. [Google Scholar]
  27. Fisher, Mark. 2016. The Weird and the Eerie. London: Watkins Media. [Google Scholar]
  28. Foster, Hal. 2015. Bad New Days: Art, Criticism, Emergency. London and New York: Verso Books. [Google Scholar]
29. Foster, Hal, Rosalind Krauss, Yve-Alain Bois, and Benjamin H. D. Buchloh. 2016. Art Since 1900: Modernism, Antimodernism, Postmodernism. London: Thames & Hudson. [Google Scholar]
  30. Foucault, Michel. 1997. Aesthetics, Method, and Epistemology: Essential Works of Foucault 1954–1984. London: Penguin Books Limited. [Google Scholar]
  31. Gere, Charlie. 2006. Art, Time and Technology, English ed. Oxford and New York: Berg Publishers. [Google Scholar]
  32. Gleick, James. 2011. The Information: A History, a Theory, a Flood. London and New York: HarperCollins Publishers. [Google Scholar]
  33. Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Networks. arXiv arXiv:1406.2661. [Google Scholar] [CrossRef]
  34. Goodfellow, Paul. 2019a. Eerie Systems and Saudade for a Lost Nature. Arts 8: 124. [Google Scholar] [CrossRef]
  35. Goodfellow, Paul. 2019b. Reframing the Horizon within the Algorithmic Landscape of Northern Britain. Arts 8: 114. [Google Scholar] [CrossRef]
  36. Green, Alison. 2018. When Artists Curate: Contemporary Art and the Exhibition as Medium. Art since the ’80s. London: Reaktion Books. [Google Scholar]
  37. Halpern, Orit. 2015. Beautiful Data: A History of Vision and Reason since 1945. Experimental Futures. Durham: Duke University Press. [Google Scholar]
  38. Halsall, Francis. 2008. Systems of Art: Art, History and Systems Theory, 1st New ed. Bern and Oxford: Verlag Peter Lang. [Google Scholar]
  39. Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, 74th ed. Chicago: University of Chicago Press. [Google Scholar]
  40. Heidegger, Martin, Joan Stambaugh, and P. J. Stambaugh. 1996. Being and Time: A Translation of Sein Und Zeit. SUNY Series in Contemporary Continental Philosophy. New York: State University of New York Press. [Google Scholar]
  41. Hoelscher, Jason A. 2021. Art as Information Ecology: Artworks, Artworlds, and Complex Systems Aesthetics. Thought in the Act. Durham: Duke University Press. [Google Scholar]
  42. Jameson, Fredric. 2009. The Cultural Turn: Selected Writings on the Postmodern, 1983–1998. London and New York: Verso Books. [Google Scholar]
  43. Jeffery, Celina, ed. 2015. The Artist as Curator. Bristol: Intellect. [Google Scholar]
  44. Jennings, Stephanie C. 2016. Co-Creation and the Distributed Authorship of Video Games. In Examining the Evolution of Gaming and Its Impact on Social, Cultural, and Political Perspectives. Edited by Keri Duncan Valentine and Lucas John Jensen. Hershey: IGI Global, pp. 123–46. [Google Scholar] [CrossRef]
  45. Jiang, Harry H., Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. ‘AI Art and Its Impact on Artists’. Paper presented at the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’23, Montreal, QC, Canada, August 8–10; pp. 363–74. [Google Scholar] [CrossRef]
  46. Judovitz, Dalia. 1998. Unpacking Duchamp: Art in Transit. Oakland: University of California Press. [Google Scholar]
47. Krauss, Rosalind E. 2000. ‘A Voyage on the North Sea’: Art in the Age of the Post-Medium Condition. Walter Neurath Memorial Lectures. London: Thames & Hudson. [Google Scholar]
  48. Lai, Zeqiang, Xizhou Zhu, Jifeng Dai, Yu Qiao, and Wenhai Wang. 2023. Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models. arXiv arXiv:2310.07653. [Google Scholar]
49. Landow, George P. 1991. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Parallax: Revisions of Culture and Society. Baltimore: The Johns Hopkins University Press. [Google Scholar]
  50. Larsen, Lars Bang. 2014. Networks (Whitechapel: Documents of Contemporary Art). London: Whitechapel Gallery. Cambridge: MIT Press. [Google Scholar]
  51. Le, Quoc V., and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. arXiv arXiv:1405.4053. [Google Scholar]
  52. Lee, Pamela M. 2006. Chronophobia: On Time in the Art of the 1960s. Cambridge and London: MIT Press. [Google Scholar]
  53. LeWitt, Sol. 1967. Paragraphs on Conceptual Art. Artforum. Summer 1967. Available online: https://www.artforum.com/print/196706/paragraphs-on-conceptual-art-36719 (accessed on 4 July 2024).
  54. Luhmann, Niklas. 2000. Art as a Social System. Translated by Eva M. Knodt. Stanford: Stanford University Press. [Google Scholar]
  55. Lynch, David, dir. 1984. Dune. Universal City: Universal Pictures. [Google Scholar]
  56. Meltzer, Eve. 2013. Systems We Have Loved: Conceptual Art, Affect, and the Antihumanist Turn. Chicago and London: University of Chicago Press. [Google Scholar]
  57. Merewether, Charles. 2006. The Archive (Whitechapel: Documents of Contemporary Art Series). London: Whitechapel Gallery. Cambridge: MIT Press. [Google Scholar]
  58. Miessen, Markus, and Yann Chateigné. 2016. The Archive as a Productive Space of Conflict. London: Sternberg Press. [Google Scholar]
  59. Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv arXiv:1301.3781. [Google Scholar]
60. Morton, Timothy. 2013. Hyperobjects: Philosophy and Ecology after the End of the World. Minneapolis: University of Minnesota Press. [Google Scholar]
  61. Orwell, George. 2021. Nineteen Eighty-Four. Oxford World’s Classics Series. Oxford: Oxford University Press. [Google Scholar]
  62. Osborne, Peter. 2013. Anywhere or Not at All: Philosophy of Contemporary Art, 1st ed. London and New York: Verso Books. [Google Scholar]
  63. Osthoff, Simone. 2009. Performing the Archive: The Transformation of the Archive in Contemporary Art from Repository of Documents to Art Medium. Think Media: Egs Media Philosophy. New York and Dresden: Atropos Press. [Google Scholar]
  64. Paul, Christiane. 2016. A Companion to Digital Art. Hoboken: John Wiley & Sons. [Google Scholar]
  65. Pearlman, Karen, and John Sutton. 2022. Reframing the Director. In A Companion to Motion Pictures and Public Value. Hoboken: John Wiley & Sons, Ltd., pp. 86–105. [Google Scholar] [CrossRef]
  66. Pearlman, Karen. 2023. Distributed Authorship: An et al. Proposal of Creative Practice, Cognition, and Feminist Film Histories. Feminist Media Histories 9: 87–100. [Google Scholar] [CrossRef]
  67. Pickles, John, ed. 1995. Ground Truth: The Social Implications of Geographic Information Systems, 1st ed. New York: Guilford Press. [Google Scholar]
  68. Piskopani, Anna Maria, Alan Chamberlain, and Carolyn Ten Holter. 2023. ‘Responsible AI and the Arts: The Ethical and Legal Implications of AI in the Arts and Creative Industries’. Paper presented at the First International Symposium on Trustworthy Autonomous Systems, TAS ’23, Edinburgh, UK, July 11–12. [Google Scholar]
  69. Ribeiro, Marco Tulio, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. Paper presented at 58th Annual Meeting of the Association for Computational Linguistics, Online, July 10; pp. 4902–12. [Google Scholar]
  70. Shanken, Edward A. 2015. Systems. Cambridge: MIT Press. [Google Scholar]
  71. Soderman, Braxton. 2007. The Index and the Algorithm. Differences 18: 153–86. [Google Scholar] [CrossRef]
  72. Spieker, Sven. 2017. The Big Archive: Art from Bureaucracy. Cambridge: MIT Press. [Google Scholar]
  73. Sutton, Charles, and Andrew McCallum. 2010. An Introduction to Conditional Random Fields. arXiv arXiv:1011.4088. [Google Scholar]
74. Thacker, Eugene. 2011. In the Dust of This Planet: Horror of Philosophy. Alresford: John Hunt Publishing, vol. 1. [Google Scholar]
75. Tomlinson, Bill, Joel Ross, Paul André, Eric Baumer, Donald Patterson, Joseph Corneli, Martin Mahaux, Syavash Nobarany, Marco Lazzari, Birgit Penzenstadler, and et al. 2012. Massively Distributed Authorship of Academic Papers. Paper presented at the CHI EA ’12: CHI ’12 Extended Abstracts on Human Factors in Computing Systems, Austin, TX, USA, May 5–10; pp. 11–20. [Google Scholar] [CrossRef]
  76. van Alphen, Ernst. 2014. Staging the Archive: Art and Photography in the Age of New Media. London: Reaktion Books. [Google Scholar]
  77. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv arXiv:1706.03762. [Google Scholar]
  78. Vyas, Bhuman. 2022. Ethical Implications of Generative AI in Art and the Media. International Journal For Multidisciplinary Research 4. [Google Scholar] [CrossRef]
  79. Wachowski, Lana, and Lilly Wachowski. 1999. The Matrix. Burbank: Warner Bros. [Google Scholar]
  80. Wilde, Oscar. 2001. The Nightingale and the Rose. London: Electric Book Company. First published 1888. [Google Scholar]
81. Wilson, Stephen. 2002. Information Arts: Intersections of Art, Science, and Technology. Leonardo. Cambridge: MIT Press. [Google Scholar]
  82. Zeilinger, Martin. 2021. Tactical Entanglements: AI Art, Creative Agency, and the Limits of Intellectual Property. London: Meson Press. [Google Scholar]
Figure 1. Unearthly Bloom 00 (BETA). Peter the Roman (2024). The authorship of the image is distributed between the prompt, the AI system, the artist’s input images and the wider data it is trained on. However, the artwork is authored by the artist Peter the Roman.
Figure 2. Blob 5_50_0016. Peter the Roman (2024). The authorship of the image is distributed between the AI system and the artist’s work used as training data. The artwork is authored by the artist Peter the Roman.