One of the core essays in
The Linguistic Turn, Richard Rorty’s important 1967 anthology, underscores the growing role language seems to play within 20th-century western philosophy, noting in particular how linguistics has come to model the very act of thinking itself (
Rorty [1967] 1992). Philosophers, we see,
use language in two ways, in its ordinary sense and in one that is puzzling to say the least. To decide whether what they say as philosophers is true one must, therefore, first discover what they say, that is, precisely what that peculiar sense is. The inquiry is linguistic. It starts from common sense, for what else is there to start from. (
Bergmann [1949] 1992, p. 65)
This essay, “Logical Positivism, Language, and the Reconstruction of Metaphysics” (
Bergmann [1949] 1992, reprinted in 1967, pp. 63–71), written by Gustav Bergmann, first appeared in English in 1953; it remains today primarily associated with Rorty’s timely collection, not least for providing the anthology’s title. To be sure, Bergmann’s contribution holds that the current “linguistic turn” in philosophy should be considered a major methodological advance in how we formally and systematically interpret our faculties of reason. When “everybody who speaks uses language as a means or tool”, a much more critically reflective understanding of how language functions in objective analysis and reasoning can emerge. The result is a fresh array of philosophical movements with new methods and, most importantly, new questions (
Bergmann [1949] 1992, p. 63).
Key to contemporary reasoning, Bergmann argues, is our ability to recognise and respond to a linguistic level in all philosophical discourse. To engage in philosophy is, at a fundamental level, to engage, both reflectively and critically, with language itself. Early reviews of the anthology were less convinced. Edward A. Maziarz, for example, complained that linguistic approaches to philosophy, while enriching many current epistemologies, posed a corresponding threat to the reasoning process when considered broadly as their own formal movement or “turn” in western thinking. We have to be careful, he warned, not to substitute “language games” for actual rational analysis (
Maziarz 1968, p. 296). So drawn, that path in philosophy is laced with hubris and inevitably risks producing nothing more than a ceaseless, sophistical diversion from argument.
Both Bergmann and Rorty, reflecting upon the many changes in cultural attitudes and new media technologies that were by the late 1960s just beginning to make their mark, clearly envisioned a much more substantial alternative to traditional methods of logical analysis, one grounded firmly in the structures and formats through which such reasoning is actually carried out. Indeed, since then, philosophers have, for the most part, increasingly accepted language’s essential role in the reasoning process, even if they are hardly any closer to pinpointing what that role might be. In his own work, Bergmann felt that “the relation between language and philosophy is closer than, as well as essentially different from, that between language and any other discipline” (
Bergmann [1949] 1992, pp. 64–65). Early on in these studies, then, we find a linguistic dimension operating in all analytical thinking—one that goes beyond even the intellectual parameters historically assigned to symbolic logic, as conceived from Leibniz to Frege. Language, while functioning descriptively and syntactically in an argument, must now, Bergmann held, assume “the task of elucidating common sense, not either proving or disproving it”; that is, it cannot simply “invent and investigate these schemata for their own sake, as mathematical logicians often do, but with an eye upon suitability for serving upon interpretation, as the ideal language” (
Bergmann [1949] 1992, p. 66). Symbolic logic, in Bergmann’s view, had never specifically assigned language this capacity, seeing it instead as a very useful, if somewhat limited, means for determining numerical proofs. In fact, to extend the domain of such instruments into the realm of “common sense” in philosophy might very easily justify the charge of “language games” over reason.
The search for an “ideal language” that Bergmann attributes to the more current and complex linguistic approaches to reason invokes a very different, far more interdependent relationship between language, grammatical form, and sense itself. The first major work to approach this level of interactivity with language for Bergmann was, of course, Bertrand Russell and Alfred North Whitehead’s
Principia Mathematica (1910–1913), where both grammar and modes of reasoning appeared to come together in Russell’s then-unique attempt to solve “several philosophical questions...by means of a symbolism” (
Bergmann [1949] 1992, p. 65). The questions themselves varied in content, some being “about arithmetic, some about just such entities as Cerberus”. Bergmann did not explicitly reference algorithms or their role in reasoning in his essay. Still, we might usefully reconsider Russell’s innovations in symbolic reasoning in terms of adapting algorithmic modelling to the task of building ever more complex and functional procedural chains in reasoning. To imagine better grammars, along with improved terminologies and symbolic forms for processing routines or patterns in reasoning, does indeed bring to mind something akin to the search for an ideal language, or at least a more capable linguistic tool that should be able to do more and more of the actual thinking for us. Such questions are hardly controversial or even novel with respect to coding, wherein better programming languages and tools are continually being developed in order to improve our collective social relationship to software. Much might be said about this relationship itself, as software and programming become embedded in more and more facets of our daily routines. Are we not in some ways carrying forward the work of Bergmann and the many other linguistic philosophers featured in Rorty’s impressive collection when we, too, begin to imagine better languages and more efficient ways to communicate with our cars and kitchen appliances?
The ultimate parallel between Russell’s work and coding had already emerged, of course, a mere decade before Rorty’s anthology, when, in 1956, the early software developers Allen Newell, Cliff Shaw and Herbert A. Simon applied their newly built program “Logic Theorist” to over fifty of Russell’s original theorems. Logic Theorist, as many AI enthusiasts continue to remind us, was able to prove 38 of them. In the words of computation historian Pamela McCorduck, the program was “proof positive that a machine could perform tasks heretofore considered intelligent, creative and uniquely human” (
McCorduck 2004, p. 167). In short, Bergmann’s linguistic turn in philosophy now seems clearly in play when we consider Newell, Shaw and Simon’s successful experiment in programming. Given the right procedural framework, as Logic Theorist demonstrated with its impressive number of interlaced search trees and a fully functional grammatical template for organising its proofs, computation could advance from simple binary proofs to the full development of a working model of reasoning. In this paradigm, evidence of something akin to a type of artificial intelligence begins to appear.
A more contemporary example of this use of computation with grammar templates can be glimpsed in the Swiss project LARA, the Lab for Automated Reasoning and Analysis. LARA, based at the École Polytechnique Fédérale de Lausanne (EPFL), or Lausanne’s Polytechnic School, develops precise automated reasoning techniques by combining algorithms with linguistic tools, enabling people to program logic verification systems based upon the pattern and structure of language. It would be inaccurate, of course, to align this mode of computation with the act of reading or interpreting language, and yet, to an extent, these programmers imply that any human ability to read a text constitutes at some level a type of “verification” procedure. We read to make sense of a text, and this very activity, it seems, can be algorithmically programmed.
In the four-plus decades following Rorty’s anthology, much philosophical attention to language as a verification system has continued to inform perspectives and ideas throughout western art, ranging across a host of different genres and formats. In fact, wherever language takes on the role less of a medium and more of a kind of reasoning tool, it turns up as a primary descriptive term in the actual title of the art movement itself, supporting such important mid-20th-century experiments as Language Art, Language Sound and, of course, L=A=N=G=U=A=G=E Poetry. All of these schools, developing between the late 1960s and early 1980s, seem to follow directly from Rorty’s philosophical collection. In each of these movements, as with the essays in The Linguistic Turn, language retains a certain preferred status as a distinctly human ability, and yet many of the conflicts and points of enquiry that begin to develop here commonly emphasize its instrumental capacity as a self-verifying system of reason.
As we’ll see in this collection, coding, both in theory and in practice, suggests many similar alignments between language and reason. Certainly on one level, its fifty-year history as a developing tool provides many clear models of how language can be used to verify and subsequently execute rational analysis, outperforming centuries of western philosophy engaged in a similar aim. In coding, compared to even the newer linguistic “turns” of Rorty’s compilation, we see both a direct and explicit alignment between data processing and cognition. Language, computation makes clear, affords us a highly functional tool for procedural reasoning, especially when we consider cognition itself in terms of data verification. Once language becomes fundamentally computational, moreover, we have at our disposal a new linguistic paradigm in which the human brain itself increasingly resembles nothing less than a kind of organic epistemological apparatus—one innately designed to build better and better information structures.
Working openly from this implied parallel between language and reasoning, many of the essays gathered in this collection remain acutely focused on how computation continues to challenge our concepts of knowledge, rationality, and cognitive agency. The consistent aim within the field to develop better programming languages and more robust interfaces can, for these writers, be linked to a wide variety of related problems concerning how best to re-imagine language in relation to techniques, if not explicit instruments, of reasoning. Several essays, in fact, emphasize what might be called language’s ongoing failure as a human-based programming tool for building larger and more inclusive databases, noting instead its better capacity to stimulate constant enquiry and deep states of uncertainty. To reimagine thinking and cognition as something other than organizing information can, as we will see, prove to be especially challenging in the current era of computation. Language, when recruited as code, performs exceedingly well as a reasoning tool: it can search, compare and verify via grammar structures infinitely more quickly than the human mind. Further, much as pictures moving across our eyes at twenty-four frames per second can create the illusion of movement, linguistic forms appearing nearly instantaneously on a screen can easily be mistaken for cognition.
This particular dilemma of language’s growing capacity to program, while adapting itself more holistically and accurately to human communication, is central to John Cayley’s contribution “Reconfiguration: Symbolic Image and Language Art” (
Cayley 2017). Here Cayley offers an acute, at times even poignant response to recent technological advances in natural language processing and the latest generation of automatic speech recognition and synthesis programs. As our everyday engagement with screens and basic automation increasingly depends upon oral, speech-based modes of input, a much more complex, certainly invasive paradigm of human-computer relationships seems inevitable.
In Cayley’s view, language’s most basic functions as a “medium” of communication need to be reconsidered in view of recent advances in linguistic technologies. Such developments prompt him to introduce the term “transactive synthetic language”. Operating somewhere on the cusp of code and conversation, transactive synthetic languages allow us to engage with our many daily devices and appliances, whatever forms they take, in a much more intimate manner. When we adjust the room temperature in our apartment by speaking openly to audio inputs feeding into thermostat controls, we are doing more than simply using our tools more efficiently, Cayley argues; we are forming new kinds of relationships, first with the devices themselves, and second, even more significantly, with the corporate entities controlling them. Cayley finds the term and concept of the “pharmakon”, as it appears in Plato’s well-known dialogue Phaedrus as well as, of course, in Derrida’s essay “Plato’s Pharmacy” (collected in Dissemination, 1981), especially insightful with respect to our increasingly complex relationships with all media technologies.
In Phaedrus, Plato introduces the term “pharmakon” to denote the medicinal use of toxins as remedies—usually by simply adjusting their dosage. Plato’s main point is to remind us of the inherent nature of pharmaceuticals as poisons in order to prompt a more cautious, if not reflective, attitude when dispensing them. Any substance used regularly for pleasure, perhaps even in the service of health, remains at a core level a mode of self-poisoning. Derrida considered writing to function somewhat similarly, much as Plato implies in the same dialogue, but emphasized the apparent duality in play as a point of philosophical value, not crisis. Writing’s social significance, as a tool and practice, is unquestionable, grammatically “remedying” all human intercourse in the form of law, record, and argument. At the same time, writing signifies something akin to an alien toxin within the social body, constantly being used to question, if not disrupt, modes of interpretation and points of reference. At an even more basic level, one cannot avoid its continuous interference with our memories and physical sense of time and space. Hence, while typical interpretations of Plato’s critique of writing in Phaedrus tend to focus on its more toxic qualities, Derrida argues that these same attributes, like any pharmakon, are simultaneously enabling. Toxins remedy by invading the very body they are meant to aid.
A similar conflict appears in Cayley’s work with transactive synthetic languages. To engage in a new kind of oral discourse with our machines and cultural environments, we must transform and advance both our cognitive and physical relationships to them. To this degree, we are able to experience more intimate, immediate and even creative forms of interaction. At the same time, such tools and technologies inevitably demand more control of these same personal environments. In this sense, they too seem quite toxic, invading, while monitoring, our psychological and social worlds as much as our physical one. No doubt the potential for these languages to shape our sense of self and provide a new reflective surface against our very consciousness is profound. Yet the network they feed into cannot be considered wholly benign, much less sympathetic or even helpful. In his words, “I can make my own transactive linguistic artifacts within and across this network of actors and affordances. Any such work that I make will, necessarily, be implicated with all the other—many unforeseen—consequences of the underlying systems and networks” (
Cayley 2017). The dilemma invoked by transactive synthetic languages thus corresponds closely with Derrida’s earlier argument, which aligned our capacity to engage linguistically with our technologies and environments with a corresponding loss of social, psychological and political autonomy. Cayley accordingly underscores our ongoing interaction with our devices with a firm call to action: “Our practices share an urgent need to respond to these circumstances, because they present us with crisis, catastrophe, pharmakon, existential challenge” (
Cayley 2017). As with Derrida’s original critique of Plato, simply refusing to use these tools (perhaps in an act of ethically inspired abstinence) does not respond adequately to the complex array of issues in play. Cayley observes, “[w]e are the artists whose media—by which I mean the plural of medium—are being reconfigured before their eyes, within their hearing, under their feet, in their very hands” (
Cayley 2017). Any attempt at renunciation does not automatically reclaim earlier, less invasive models of writing; in fact, it risks an even more politically problematic silence as a form of implied consent.
Concerns similar to Cayley’s warnings can be found in virtually all of the essays featured in this issue, despite the wide variety of topics they cover. At a primary level, the profound reconfiguration of media Cayley attributes to transactive synthetic languages depends upon our own use of communication technologies becoming less and less distinct in terms of how such apparatuses actually function as tools and methodologies. Once we begin to confuse our attempts to use a so-called “synthetic language” to program or code our environment with personal conversations or even cognition itself, we don’t so much escape the technical boundaries of writing as merely lose sight of precisely where and how they are functioning. More significantly, this loss occurs first and foremost as a loss of language. In its place, we appear to have chosen to work with some strange “echo” of what it used to be like to converse or discuss with others our thoughts and ideas as social beings. Language, Cayley reminds us, was always toxic, but its capacity to remedy our social situation is not to be found in its “synthetic” replacement. Faced with such tools, Cayley warns, we must consider an even more serious disruption in social agency, never mind the threat of our very environments becoming new spaces of surveillance and suspicion.
Cayley’s efforts to reconstruct this disruption, to bring back mediation as an opportunity for argument, for disturbance and actual cognition rather than simply better programming, also appear in Christopher Funkhouser’s enquiries into electronic poetry and the concept of “limitation”. His contribution provides an engaging reading of Emmett Williams’s early literary experiments with concrete proceduralism. A poet engaging both creatively and critically with programmable structures necessarily incurs what Funkhouser terms “the high price of coding” (“IBM Poetry: Exploring Restriction in Computer Poems,” (
Funkhouser 2017)). In computational poetry’s still relatively short history, the most common critical approaches inevitably emphasize a fixed humanist vigilance against any easy cultural merging of technology with supposedly genuine individual creativity. The model for this argument, Funkhouser shows us, can easily be found in postwar literary criticism’s early attempts to diminish the creative potential of poetry-generating software by relegating it to the status of pure novelty or simply a kind of prank. Funkhouser’s primary example of this type of response emerges in the first press reviews of the 1962 “Autobeatnik” tool, an early computer program for building free verse poems from readymade vocabularies and grammatical patterns. The popular press, Funkhouser argues, tended to regard such programs primarily as more proof that human minds were superior to “synthetic” ones. Yet, by merely re-emphasizing a stricter hierarchy between the human agent and the tools at her disposal, especially as they become more complex and challenging, vital changes in the writing process risk being ignored. Funkhouser’s more sophisticated approach in many ways follows Cayley’s reasoning, offering instead a broader, more open enquiry into humanity’s constantly changing relationship, creative or otherwise, to its linguistic technologies. Such questions, for Funkhouser, are especially clear in Williams’s own attempts to adapt computation to poetic experimentation, as we see in both the procedure and work entitled “IBM” (1965).
At one level, the poem teaches literary artists to use programmable language tools aesthetically and creatively by experimenting with pattern, number and above all “limitation”. Here, in Funkhouser’s view, the engineer and the artist literally come together to work with language critically, yet also to a great literary effect. As Funkhouser notes, “by drastically cutting down its choice of words—so that the incidence of a subject word reappearing is greatly increased—engineers can make the machine seem to keep to one topic” (
Funkhouser 2017). Such creative cooperation shows the machine itself as a new kind of palette, where algorithms appear alongside our own vocabularies and appreciation of linguistic rhythm and structure. Williams’s play with method focuses first on the poem’s procedural limits, and subsequently encourages us as readers to move beyond prescriptive patterns to embrace previously unseen possibilities in form and content.
This critical interest in highlighting media as a participatory space in digital writing and programming appears also in Brian Kim Stefans’s important contribution. However, while Stefans understands the importance of language as a mediating tool, he offers us a different authorial relationship to it, based less upon its structural limits as a programmable device and more directly upon its capacity, if used effectively, to construct aesthetic alternatives far beyond most typical human-centred understandings of semantic meaning and linguistic form. To be sure, any critique of instrumentalism and semiotic proceduralism in the literary arts tends mistakenly to maintain poetry as a special class of objects. One chief example of this approach occurs in L=A=N=G=U=A=G=E poetry, an experimental poetry movement that was especially influential in American literature from 1970 to the end of the 20th century. In Stefans’s words, L=A=N=G=U=A=G=E “not only remained unsullied by ephemeral, goal-oriented activities but...refreshed the reader’s relationship to the real, reconfiguring this relationship from the closed ‘self’ and ‘commodity’ binary to, perhaps (they generally avoided capital-letter abstractions), that of an intersubjective ‘Thought’ and/or a multiple ‘Being’” (“The New Commodity: Technicity and Poetic Form,” (
Stefans 2017)). Stefans offers instead an alternative discourse on what we might call linguistic technologies and their formal capacity to construct semantic meaning in terms of structure and method. The key phrase in Stefans’s piece identifies the poem itself as a “technical object”, which is to say an apparatus situated somewhere between texts or media, on the one hand, and machines on the other. The spectrum between these two points is subsequently measured in degrees of “technicity”, a concept first theorised by the French philosopher Gilbert Simondon. As Stefans shows us, Simondon’s paradigm presents a much more nuanced relationship between culture and instrumentalism, one in which any supposed conflict between the two modes of production does not automatically lead us to abandon one for the other. Stefans thus subsumes modernity’s traditional intellectual opposition between tools and works of culture within what he describes as an enduring “line of battle” between the technical object and a more or less unquestioned acceptance of all cultural practice as the unique product of the human mind—a concept comparable to Descartes’s idea of thought as substance, or
res cogitans. Such “activity”, for Simondon, of course, is entirely illusory, and ends up being, in his words, “the most powerful cause of alienation in the contemporary world” (Simondon, “On the Mode of Existence of Technical Objects,” qtd. in (
Stefans 2017)). He explains that our “failure to understand the machine...is not caused by the machine but by the non-understanding of its nature and essence, by its absence from the world of meanings, and by its omission from the table of values that are part of culture” (
Stefans 2017). Stefans follows Simondon in calling for a more integrated understanding of both human and computational modes of reasoning. We are subsequently treated to an intriguing reading of a work by the poet Ben Lerner, taken from his first book
The Lichtenberg Figures (
Lerner 2004). Lerner’s collection gives us a sequence of sonnet-like poems that depend upon a semantic as well as aesthetic use of limitation similar to the technique Funkhouser applies to Emmett Williams’s
IBM. On one level, Lerner presents a poem, yet at the same time, we see in the piece an ongoing, somewhat ghostly trace of a more formal machination that Stefans is able to link further to a fundamental materiality in the work. For Stefans, this formal and thus material element refers ultimately to the poem’s technicity, evoking again Simondon’s earlier concept of the “technical object”, with its inherently mixed sense of cultural and instrumental value and corresponding resistance to having one quality pared from the other.
Stefans’s interpretation of technicity once again recalls Derrida’s original theory of writing as pharmakon by reminding us that language, as a medium, cannot remedy our social, technical world without a constant risk of toxifying it. Occupying a unique, interstitial stage between object and tool, caught forever amid the limits of representation and instrumentality, technical objects offer a strikingly similar multiplicity, where toxicity must be continually balanced against its more therapeutic qualities. This concept of technicity, Stefans reminds us, outlines another central question, one common to most if not all contributions in this issue: namely, just how a reader must adjust her own cognitive role when attempting to engage a text as a technical object. In other words, what is it to read “writing” that is both readable and computational? How do we read texts that just as easily read us?
Here, Sandy Baldwin’s essay is particularly insightful, as he investigates the digital reader as a kind of eyewitness to computation’s inevitable and ongoing transformation of writing into something far more alien and terrifying than we were ever able to imagine language could produce. Baldwin is quick to characterize today’s digital viewers as inherently hysterical, unable to engage fully with texts that not only are unreadable but, to an extent, possibly are imaginary. In Baldwin’s essay, we are treated to a fascinating glimpse of what is commonly called the “dark web”—a collection of mystifying, often nonsensical online channels that continue to defy even the most hardened attempts to access them. For Baldwin, our efforts to engage the very limits of legibility easily bring to mind several core gestures of aesthetics and aesthetic theory. In his words, “the solution and destination of these channels is inseparable from the role of the art object as an example for appearances and the origins of what appears. [Yet the] anomaly and the singular that appears on the net can only be exemplified and understood in terms of the art object and the tradition of aesthetics” (“How to Pronounce Meme. Three YouTube Channels,” (
Baldwin 2017)). These practices, along with the channels that support them, interest Baldwin not because they foster communication, but because they sabotage the message as its own type of limit; they are one-sided, echoing a branch of digital and computational culture that refuses to be integrated. As Baldwin points out, like any computation, “[t]hese messages proffer readability and suggest decoding, but they are more like crumbs or crumbling fossils. Think of all those tweets and Facebook posts: they suggest readings but deliver nothing. The light sheds and fades from the screen and in the end there is nothing to read but the dregs of mediation itself, nothing but the faint message that mediation was at work and has now departed, faded, slipped away” (
Baldwin 2017).
Once again—as we see with Baldwin, as well as the other works mentioned here—we find ourselves confronting new media works that continue to resist the reading gesture itself. They function instead as puzzles to be solved or, better, as actual errors and glitches in our communication channels that resist resolution. It is hardly surprising that Baldwin understands our role as readers in this new digital culture to be essentially “hysterical”; other contributors likewise identify a profound anxiety and sense of uncertainty in contemporary reading practices, where language itself seems increasingly to foreshadow an indeterminate, even hostile, relationship to human communication.
When considering this relationship to the screen, one particular storyline comes to mind, straight from the annals of classic vaudeville: the comedy sketch popularly titled “The Broken Mirror”. The routine’s setting can be adapted to any number of different scenarios and plots; in most cases, however, the core narrative is always the same. One character, usually the villain of the piece, breaks into another character’s domestic space and accidentally shatters a full-length mirror inconveniently set up in one of the rooms he is exploring. To save himself and avoid capture, the villain sees no choice but to disguise himself as the mirror’s owner, replacing the shattered mirror in its frame with his own physical body and gestures. The comedy subsequently unfolds, and we are treated to an expert pantomime, where the intruder must somehow anticipate and mimic any physical gestures the protagonist decides to throw at him. What proceeds, of course, is a clever variation of a “cat and mouse” chase, with each character hoping to outsmart the other as to what is real and what is a copy. The irony of the situation drives much of the piece’s overall humor. Casting himself as a reflection, the intruder is most authentic—which is to say most “real”—when he is most “unreal”. Having to convince the other that he is, in fact, not in the room with him, he transforms the strange sense of being intruded upon in the dead of night, of sensing another, into a moment of self-reflection—the preferred outcome for the original occupant, too.
Mirrors, wherever we place them in our domestic spaces, are particularly handy for confirming our own solitude. Reflected back on us in all their supposed wholeness, our bodies offer a distinct singularity within a mirror’s frame. Even the “selfies” we continue to mass-produce across every nook of the online universe all too often sport distorted or partial views of the agent snapping them. Mirrors bear an authenticity, well beloved by filmmakers and psychologists alike, that seems fundamental to modern ontology. Despite this comic routine’s obvious absurdity, a somewhat darker, more terrifying theme is never too far below the surface of the narrative. (Of course, this dynamic seems central to all humor.) Eventually, we fear, the mirror in front of us will fail at reflecting, confirming our distress that the “self” we see is not really our own. The reflection is inconsistent and, worst of all, there are other selves, actual intruders into our very psyches. They look exactly like us, pretending to be us.
Such terror, lurking on the edges of nearly every mirror-glimpse we make, inevitably precipitates our own complicity in the illusion. Even Lacan considered this curious level of compromise we make with our own reflections to be central to the development of every human psyche. The “broken mirror” routine is particularly effective at underscoring this point. In just about every incarnation of the performance, sooner or later the charade begins to fall apart; the intruder misses a cue, and the lead character becomes increasingly suspicious that another “self” is standing across from him, staring back. However, instead of rushing to the denouement we the audience have been expecting, this same character more typically remains curiously faithful to the illusion. He begins to cooperate with the intruder by deliberately slowing his gestures, simplifying each movement, and even allowing failed attempts at imitation to be repeated. In short, he provides the other “self” with as much opportunity as possible to continue the mimicry, perhaps indefinitely.
This more cooperative stage of the pantomime is especially well captured in the Marx Brothers classic
Duck Soup (
McCarey 1933), where Harpo Marx, as the intruder or false “self”, is pitted against his brother Groucho, mustache and all, after breaking into the latter’s bedroom. The more certain Groucho becomes that his mirror has been shattered, the more eager he is to help Harpo keep the illusion working. At one point, Groucho even cheekily hands Harpo back his hat after it has fallen through the mirror’s supposed surface. Towards the end of the sequence, the two characters actually pass through the original mirror frame, exchanging sides—with Groucho never in doubt that some semblance of a functioning mirror can be salvaged, even as both characters compete for the status of the real self (see the “Duck Soup: Mirror” YouTube clip here:
https://youtu.be/rdQ9jh5GvQ8).
The absurdist nature of such comedy only serves to emphasize the ontological complexities of the situation at hand; at the same time, a peculiar parallel seems to emerge with our own relationship to media and mediation in the age of computing. In this era, as many of the essays collected here demonstrate, media, too, becomes something more than a tool of mimicry or even simple representation. Some loss in personal reflection seems to occur in just about every digital creation we find ourselves gazing upon (never mind interacting with), since many of these works seem just as actively to gaze back at us, intruding into our spaces—both physical and psychical—in ways we did not anticipate. Thus, we must critically reconcile new modes of reading and interaction with the many broken mirrors that suddenly seem to surround us. As part of this compromise, we must also decide just how willing we are to maintain any illusion that we are still reflecting ourselves with these tools, solely casting our own thoughts into the media formats before us. How badly do we need to convince ourselves that we are still safely alone in our homes, driving our own cars, reading our own books, using our own languages? Of course, we can always remind ourselves of a potential freedom in finally being able to step through the mirror’s frame to access what spaces may lie beyond it—physically as well as cognitively. Each of the essays you see in our issue represents exactly this kind of walk through our broken mirrors. Indeed, the voices and arguments presented herein have been chosen precisely for their critical intrusiveness: intruders in the night encouraging us to do more than simply reflect back upon our cultural situation. We also must engage it actively, building new practices and ultimately new modes of agency.