*Article* **On Hybrid Creativity**

#### **Andy Lomas**

Department of Computing, Goldsmiths College, University of London, London SE14 6NW, UK; andylomas@gmail.com

Received: 2 May 2018; Accepted: 5 July 2018; Published: 9 July 2018

**Abstract:** This article reviews the development of the author's computational art practice, in which the computer is used both as a device that provides the medium for the generation of art ('computer as art') and as an active assistant in the process of creating art ('computer as artist's assistant'), helping to explore the space of possibilities afforded by generative systems. Drawing analogies with Kasparov's Advanced Chess and the deliberate development of unstable aircraft using fly-by-wire technology, the article argues for a collaborative relationship with the computer that can free the artist to engage more fearlessly with the challenges of working with emergent systems that exhibit complex, unpredictable behavior. The article also describes 'Species Explorer', the system the author has created in response to these challenges to assist exploration of the possibilities afforded by parametrically driven generative systems. This system provides a framework that allows the user to apply a number of different techniques, including genetic algorithms and machine learning methods, to explore new parameter combinations. As the system learns the artist's preferences, the relationship with the computer can be considered to change from one of assistance to one of collaboration.

**Keywords:** art; computer; evolutionary design; machine learning; computationally assisted design

#### **1. Introduction**

How are we to work creatively with generative systems that computationally create results? In particular, how should we work with systems deliberately designed to encourage emergence: complex systems where results are intrinsically difficult to predict?

There is a strong analogy with plant breeding, where we are working with a medium that is naturally rich. Through experimentation and experience we can develop insights into what is possible and how to influence plants to develop in ways that give desired properties. We need to discover the potentialities of the system we are working with, as well as the limits of its capabilities. Which features can be independently influenced, and which are co-dependent? Whether in art, design or architecture, this involves changing our relationship with the computer. Traditional top-down design methods are no longer appropriate. We need to be open to a process of exploration: participating in a search for interesting behavior, selecting and influencing rather than dictating results.

Generative systems are typically based on algorithmic processes that are controlled by a number of parameters. Given a set of parameter values the process is run to create an output. Classic examples include Conway's Game of Life (Conway 1970) and reaction diffusion equations (Turing 1952). Generative systems have been used by a number of artists, from pioneering early work by Algorists such as Manfred Mohr (Mohr and Rosen 2014), Frieder Nake (Nake 2005), Ernest Edmonds (Franco 2017), and Paul Brown (Digital Art Museum 2009a) to more recent work by artists such as William Latham (Todd and Latham 1992), Yoichiro Kawaguchi (Digital Art Museum 2009b), Casey Reas (Reas 2018), and Ryoji Ikeda (Ikeda 2018).

The most interesting systems are generally those that create emergent results: genuinely unexpectedly rich behavior that cannot be simply predicted from the constituent parts. For these systems the relationship between the input parameters and the output is generally complex and non-linear, with effects such as sensitive dependence on initial conditions. This makes working with such systems particularly challenging: both fascinating and potentially frustrating.

With a small number of dimensions, such as up to three parameters, the space of results can be relatively easily explored by simply varying individual parameter values and plotting the effects of different combinations. One common technique is to create charts in which all the parameters are sampled independently at regularly spaced values and the results are plotted: what scientists would call a phase space plot, and what the animation and visual effects industry commonly calls a wedge sheet. This method of parameter exploration can be effective, and was used by the author for earlier work such as his 'Aggregation' (Lomas 2005) and 'Flow' (Lomas 2007) series.

Figure 1 shows one of the charts the author created when working on his 'Aggregation' series, exploring the effect that two different parameters had on the forms created. In this example the author was taking 8 samples in each dimension. With two parameters, only 64 samples were needed to complete this chart. However, as the number of parameters goes up, the number of samples needed to explore different sets of combinations using this method increases rapidly. Three parameters would require 512 samples. Four parameters would need 4096 samples. If we had a system with 10 parameters then just over a billion samples would be needed. This problem is commonly called the 'Curse of Dimensionality' (Bellman 1961; Donoho 2000): the number of samples that need to be taken increases exponentially with the number of parameters. Even if enough samples can be taken, visualizing and understanding the space becomes a significant problem, and concepts such as finding the nearest neighbors to any point in the parameter space become increasingly meaningless (Marimont and Shapiro 1979).
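
The sampling arithmetic above can be made concrete with a short sketch of a wedge-sheet sampler. The function and parameter names here are illustrative, not taken from the author's actual tools:

```python
from itertools import product

def grid_samples(param_ranges, steps=8):
    """Sample each parameter independently at `steps` regularly
    spaced values and yield every combination: the 'wedge sheet'
    approach described above. `param_ranges` maps parameter names
    to (low, high) tuples."""
    axes = {
        name: [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
        for name, (lo, hi) in param_ranges.items()
    }
    names = list(axes)
    for combo in product(*(axes[n] for n in names)):
        yield dict(zip(names, combo))

# The cost is `steps` raised to the number of parameters:
# 8 samples in each of 2 dimensions gives 64 runs, while 10
# parameters would need 8**10 = 1,073,741,824 runs.
ranges = {'growth_rate': (0.0, 1.0), 'stickiness': (0.0, 1.0)}
print(len(list(grid_samples(ranges))))  # 64
```

The exponential blow-up is visible directly in the `product` call: each added parameter multiplies the total number of combinations by `steps`.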

**Figure 1.** Phase space plot from the Aggregation series (Lomas 2005).

One approach is simply to limit the number of parameters, but this can come at the expense of overly limiting the type of system that we are willing to work with. If we are working with a richly emergent system, these problems are often further compounded. A direct consequence of emergence is that the parameters often work in difficult-to-comprehend, unintuitive ways. Effects are typically non-linear, often with sudden tipping points as the system goes from one type of behavior to another. Indeed, the most interesting results, such as those in the right-hand columns of Figure 1 above, are often near regions of instability or at transitions between behaviors. In particular, in many systems the most interesting emergent behavior occurs close to the boundary between regularity and chaos (Kauffman 1996). The shape of this type of boundary can be extremely complex, a classic example being the ornately fractal boundary between regularity and chaos in the Mandelbrot set (Douady and Hubbard 1984).

This raises the idea of working with the machine not merely as the medium but as an active collaborator in the process of exploration and discovery. Can computational methods be used to allow exploration of generative systems like these in ways that would not be otherwise possible? The computer becomes an active part of the process of discovery, not just as the medium used to create artefacts.

One analogy worth exploring is that of Advanced Chess: a form of the game introduced by Garry Kasparov where each human player can use a computer to assist them to explore possible moves (Kasparov 2017). In particular, computer chess programs are generally very good at quickly detecting whether a proposed move will have catastrophic results. The effect of allowing a human player to test potential moves with a computer assistant is to make the game blunder-free. By removing the stress of making easily punished mistakes the human in the collaboration is freed to approach the game in a much more actively experimental way.

Another potentially rich analogy is with fly-by-wire systems in aircraft (Sutherland 1968). These allow designs of aircraft to be created which are inherently unstable but can perform complex maneuvers beyond the performance envelope of conventional aircraft (Stein 2003). These include designs that would be difficult or even impossible for a human pilot to directly control. Through the use of digital fly-by-wire technology, where the pilot uses their controls to indicate their intent but all the data is passed through a computer before being fed to actuators on the control surfaces, such aircraft can be flown safely.

How are we to express an artistic opinion while working with a complex space of possibilities? The space of possibilities may be rich, but it can be a challenging territory to explore. How should we explore the space of possibilities to find the most interesting results when dealing with more and more parameters? There is a danger of becoming overwhelmed by too many controls. After we have tried out some initial parameter values, which values should we try next? This problem, of how to choose new parameter values based on the limited number of parameter combinations we have sampled so far, is one that a computational algorithm may be able to actively help with.

We can think about the possibility of different relationships with the computer. Can using the computer actively in the creative process allow more fearless engagement with difficult 'unruly' generative systems, exploring spaces with more parameters, and more complex inter-dependence between them, than would otherwise be possible? To try to address these questions, the author has created his own system, called 'Species Explorer', which provides a range of different methods to help the user select parameter combinations to try when working with generative systems.

#### **2. Working with a Complex Space of Possibility**

Most of the author's recent work involves exploration of morphogenesis: creating forms generatively through simulation of growth. The aim is to create systems that have the potential for a rich variety of different types of three-dimensional structure and form. All these come emergently from low level rules such as forces between cells, how cells split and connect to their neighbors, and how food needed for growth is created and shared between cells. The work can be considered as explorations of artificial life: inspired by, rather than trying to copy, biology. Actively exploring whether different rules create familiar or deeply alien structures.

The aim is to create systems that have the potential for a rich range of possibilities. Typically, each system has a quite large number of parameters, any of which could influence the development of the forms in potentially interesting ways. The simplest system that the author has created as part of his Cellular Forms series (Figure 2) has 12 parameters, with more recent variations, such as Hybrid Forms (Lomas 2015) and Vase Forms (Lomas 2017), having 30–40 parameters.

A big question is how to explore a landscape with this number of parameters with a creative intent. As was previously discussed, structured sampling taking a fixed number of samples independently in each dimension would result in an overwhelmingly large number of samples to deal with. Even if we could take the required number of samples, how are we to visualize or otherwise make sense of the results? On the other hand, if we have a smaller number of less structured samples it is very difficult to make sense of what the data means and decide on new parameter combinations to try.

This is an area where computational methods, such as genetic algorithms or machine learning techniques, may be useful. In effect: allow the computer to help us make the best use of the data that has been acquired so far to decide where to next sample.

A number of authors have proposed using evolutionary methods to allow artists and designers to explore systems with large numbers of parameters. Examples include Dawkins' Biomorphs (Dawkins 1986) and Mutator (Todd and Latham 1992). A number of systems that use evolutionary selection for design are described in (Bentley 1999).

**Figure 2.** Examples of a range of morphologies from the author's Cellular Forms (Lomas 2014).

As demonstrated by natural processes, evolutionary methods can be effective even with extremely large numbers of parameters. One problem, though, is that these methods generally lead to exploring a small number of paths within the space of available possibilities. The nature of these methods is to bias the search towards the most successful areas of the parameter space that have already been highly sampled. New samples are taken by mutation or cross-breeding of the gene codes from previous samples that are deemed fittest according to a specified fitness function. This means that previously highly sampled areas are likely to be even more highly sampled in the future as long as they contain 'fit' individuals. This is a good strategy for exploiting the best results that have been previously found, but a bad one for actively finding novel solutions, which may lie in areas of the landscape that have had very few samples so far.
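
The bias just described follows directly from the structure of a basic genetic algorithm. A minimal sketch (illustrative only, not the implementation used in any particular system) makes it visible: parents are drawn with probability proportional to fitness, so already-successful regions attract ever more samples.

```python
import random

def breed(population, fitness, mutation_rate=0.05):
    """One generation of a minimal genetic search over real-valued
    parameter vectors. `population` is a list of parameter lists;
    `fitness` maps a parameter list to a non-negative score."""
    scores = [fitness(ind) for ind in population]
    children = []
    for _ in range(len(population)):
        # Fitness-proportional ('roulette wheel') parent selection:
        # fit, well-sampled regions are sampled again and again.
        a, b = random.choices(population, weights=scores, k=2)
        # Uniform crossover of the two parents' parameter values.
        child = [random.choice(pair) for pair in zip(a, b)]
        # Gaussian mutation perturbs each parameter slightly.
        child = [v + random.gauss(0.0, mutation_rate) for v in child]
        children.append(child)
    return children
```

Because every child is bred from previously fit individuals, regions of the landscape that have never produced a fit sample are effectively never revisited, which is exactly the weakness for novelty search noted above.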

Another issue worth considering is that for creative work there is often a need for different phases of exploration. Initially we may be actively experimenting: trying to get a feel for the capabilities of the medium we are working with. Once we have done some initial experiments we may want to continue to explore broadly, but with a general focus on regions that seem to have promise. Once we have found some particularly interesting results we may wish to further refine them into presentable artefacts for exhibition, or want to switch to actively looking for novel results that are significantly different to those we have found so far. In other words, the intent of a process of exploration changes over time. If a computer is assisting us we may want it to work in different ways depending what type of result we are currently searching for.

One analogy is with journeying into an unexplored landscape looking for the particularly interesting locations. When we first go into the landscape we may go to a few random places, exploring along a single path. Once we have got an idea of the different types of terrain, we may want to go back and explore some of the most interesting locations we have found in greater detail. If we get bored with the places we have seen so far, we may want to see if we can find new distinctive types of terrain different to any of those we have seen previously.

There are at least four different phases of exploration worth considering separately: Initial Exploration, Secondary Exploration, Refined Focus, and Looking for Novelty.

#### *2.1. Initial Exploration*

Initial exploration is where the user is trying to get a basic understanding of how a system works and what sort of results may be possible. At this stage they typically have little or no idea what places in the parameter space may be interesting, and are effectively just randomly trying out parameter values to test whether the system works as expected. They start to get a flavor for what types of behavior may be achieved, and are interested in hints of what may be possible rather than detailed exploration.

#### *2.2. Secondary Exploration*

At this stage the user has done some initial exploration. They want to use the information gathered so far to help guide the search: to be steered into broad areas of the parameter space that appear to be potentially fruitful, and away from regions that have been found to yield invalid results or are otherwise undesirable. However, the search should still be a broad one, avoiding what would be considered in optimization as becoming trapped in a local maximum by over-refining too early and in the process missing potentially even more interesting results.

#### *2.3. Refined Focus*

When the user has found some results that appear particularly interesting they may want to focus on those for further refinement. This is effectively a focused exploration within a small range of parameter values, such as to create final artistic exhibitable artefacts.

#### *2.4. Looking for Novelty*

However, once the user has found and refined some particularly interesting results, they may want to switch to looking for novelty. Are there additional rich seams in the landscape that have not yet been discovered? Can the space be searched for results unlike those seen previously, which may in turn take the exploration in fruitful new directions?

At this stage it is sensible to want to make good use of all the data that has been acquired so far, but without being overly biased towards regions that have been explored in detail. The user is willing to get results that are less 'fit' if they hold the promise for something genuinely new.

#### **3. Evolutionary Methods and Machine Learning**

As previously discussed, evolutionary methods are a commonly suggested approach for creatively working with generative systems. For the different phases of exploration, the author considers that they are particularly effective when refining previously found promising solutions, are also useful for secondary exploration, but because of their strong bias towards areas that have already been highly sampled are generally very poor for looking for novelty. With evolutionary techniques the general approach to find novel solutions is to dramatically increase mutation rates, or to simply start with a completely new population with the hope that a fresh search may explore a new path and find novel solutions. However, this is at the expense of throwing away hard-earned data about what has already been discovered in exploring the landscape of possibilities.

In more recent years a number of authors have proposed using machine learning techniques to assist human designers. In general these are for domain specific applications, such as for architectural space frame structures (Hanna 2007), structurally valid furniture (Umetani et al. 2012) or aircraft designs (Oberhauser et al. 2015). In these systems machine learning is typically used to learn about specific properties of the system. This is then used to provide interactive feedback for the user about whether an object designed by them is likely to have desired properties, such as being structurally feasible, without having to do computationally prohibitive tasks such as evaluation of structural strength using finite element analysis.

Machine learning is a potentially interesting technique when looking for novelty. In particular, it should be possible to use machine learning as a method to predict fitness at arbitrarily chosen new points in the parameter space based on all the data that has been collected so far, and conduct a search explicitly biased in favor of new samples that are a long distance in parameter space from previous samples.
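
One way this could work is sketched below, under the assumption of a simple inverse-distance-weighted lazy learner; this is an illustration of the idea, not necessarily the specific method used by any particular system.

```python
import math

def predicted_score(point, samples):
    """Lazy (instance-based) estimate of fitness at an untried
    point: an inverse-distance-weighted average over the scored
    samples gathered so far. `samples` is a list of
    (parameter_vector, score) pairs."""
    weight_sum, total = 0.0, 0.0
    for vec, score in samples:
        d = math.dist(point, vec)
        if d == 0.0:
            return score  # exact repeat of a known sample
        w = 1.0 / d
        weight_sum += w
        total += w * score
    return total / weight_sum

def novelty_score(point, samples, bias=1.0):
    """Predicted fitness plus a bonus proportional to the distance
    from the nearest existing sample, explicitly biasing the search
    towards sparsely explored regions of parameter space."""
    nearest = min(math.dist(point, vec) for vec, _ in samples)
    return predicted_score(point, samples) + bias * nearest
```

The `bias` term is the novelty control: with `bias=0` the search purely exploits predicted fitness, while larger values increasingly reward points far from anything already tried.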

#### **4. Species Explorer**

In response to these issues, the author has developed a program called Species Explorer to assist the process of generating parameter values to be used with generative systems (Figure 3). Developed out of necessity, the specific need for such a system came from the number of parameters that the author found he needed when he was developing the simulation engine for his Cellular Forms work (Lomas 2014). This program provides a framework for various methods to be used to assist exploring the landscape of possibilities.

The software is designed to be used on computers running Windows and Linux, but everything has been written in an operating system agnostic manner that should facilitate support for other operating systems. It is implemented in Python together with Qt, using the PySide Qt bindings (The Qt Company, Oslo, Norway). This has allowed rapid development and experimentation.

The software provides an interface for the user to rate and categorize the results from running a generative system with different parameter values. Various methods, including random sampling, evolutionary search techniques and machine learning, can be used to generate new parameter values to try out. Once a set of parameter values has been chosen the system writes out a 'creation script' (Linux shell script, Windows batch file or Python script) that can be executed on the computer to run the generative system with the specified values. The user can then rate and categorize the results of these new samples, and the computer in turn suggests parameter values for new places in the landscape of possibilities to try.

**Figure 3.** Species Explorer user interface.

The software allows the user to select from a number of different 'creation methods', each of which use different techniques for selection of new parameter values based on the data gathered so far. This provides the flexibility to allow the user to explore the space of possibilities in different ways depending on their intent (such as focused refinement based on some previous samples, or an active exploration for potentially novel results). The software also provides a framework for plugins to implement new 'creation methods', so the user can specify their own custom ways for how samples are chosen.

Several creation methods are currently provided; for more technical detail see (Lomas 2016).


These provide a flexible palette of different methods for generating new parameter values. In the author's experience of using the system, different techniques are appropriate depending on his current intent. For the initial stages of exploration the author generally uses simple random parameter selection. Once he has evaluated some results, scoring them using rating values in a range from 0 to 10, he then uses evolutionary methods such as cross-breeding, or scores estimated using machine learning, to do 'secondary exploration' as described in Section 2.2 above. When he finds results that he considers particularly interesting and wants to refine further, adaptive mutation (where mutation rates are varied depending on the distance to the closest neighbors in parameter space) and cross-breeding confined to near neighbors both work well.
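
The adaptive mutation idea, with the mutation rate tied to the distance to the closest neighbor in parameter space, can be sketched as follows. The Gaussian formulation and the `scale` factor are assumptions for illustration; the actual scheme in Species Explorer may differ.

```python
import math
import random

def adaptive_mutate(parent, population, scale=0.5, rng=random):
    """Mutate a parameter vector with a step size that adapts to
    local sampling density: the Gaussian sigma is proportional to
    the distance from the parent to its closest neighbour, so
    mutations stay small for refinement in densely sampled regions
    and grow larger in sparsely explored ones."""
    nearest = min(math.dist(parent, other)
                  for other in population if other is not parent)
    sigma = scale * nearest
    return [v + rng.gauss(0.0, sigma) for v in parent]
```

In a dense cluster of refined samples `nearest` is tiny, so the child stays close to the parent; an isolated sample gets proportionally bolder mutations.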

However, when searching for novelty the author prefers machine learning techniques over evolutionary ones. These methods use lazy machine learning to estimate scores at new positions in the parameter space. New individuals are chosen based on these estimated values using a Monte Carlo method that estimates the score at a number of candidate points and chooses one of the candidates with a probability proportional to the estimated values. This means that parameter combinations that are expected to have high score values will be preferentially selected.
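
The Monte Carlo step described above can be sketched in a few lines. Here `estimate` stands in for any predictor learned from the samples rated so far (such as a lazy nearest-neighbor model), and the unit-cube candidate generation is an assumption made for the sake of a self-contained example.

```python
import random

def choose_next(estimate, n_params, n_candidates=100, rng=random):
    """Scatter random candidate points in the unit parameter cube,
    estimate a score for each, and pick one candidate with
    probability proportional to its estimated score, so promising
    combinations are preferred without the search becoming
    deterministic."""
    candidates = [[rng.random() for _ in range(n_params)]
                  for _ in range(n_candidates)]
    scores = [estimate(c) for c in candidates]
    return rng.choices(candidates, weights=scores, k=1)[0]
```

Because selection is probabilistic rather than greedy, low-but-non-zero estimates still occasionally win, keeping some breadth in the search.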

The user can also provide custom 'score expressions' to be used when selecting parents for evolutionary techniques or for the value to be estimated using machine learning. For instance a score expression can be used to raise a score to a power to bias the selection of parents even more towards those with higher score values, to combine the effects of different score values (such as if the user has rated results both on an overall 'score' and on a rating for how 'hairy' structures appear), or to deliberately bias the selection of new samples to less explored regions of parameter space by including the distance to the closest existing sample point in the score expression.
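
A composite score expression of the kind described might look like the following sketch. All field names, weights and the particular combination are hypothetical, chosen only to show the three ingredients mentioned: sharpening by a power, blending a secondary rating, and a distance term favoring unexplored regions.

```python
import math

def score_expression(sample, all_samples, power=2.0, hair_weight=0.5):
    """Illustrative composite 'score expression': raise the overall
    rating to a power to bias parent selection further towards high
    scorers, blend in a secondary 'hairy' rating, and add the
    distance to the nearest other sample so sparsely explored
    regions are rewarded."""
    base = sample['score'] ** power
    hairy = hair_weight * sample['hairy']
    nearest = min(math.dist(sample['params'], other['params'])
                  for other in all_samples if other is not sample)
    return base + hairy + nearest
```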

As well as providing numeric ratings, the user can group samples into categories, such as forms that look like 'brains', 'broccoli', or 'corals'. Using score expressions, the user can either restrict breeding to sets of parents in specific categories, or use the machine learning creation methods to find new parameter values that are predicted to have a high probability of being in a given category.

The author has used Species Explorer for all the work he has created in recent years, including his Cellular Forms and more recent related series (Figure 4). The key intent with the system is for the computer to act as an active assistant, helping guide users as they explore a system to discover its potential capabilities and making the best use of all the input the user has made. The user should be able to steer the search with a creative intent, refining particularly interesting results, with the computer assisting them in exploring the space for novel rich behavior.

**Figure 4.** Morphogenetic Creation exhibition at Watermans, 2016. All works in the exhibition were created using Species Explorer to discover parameter values for the forms presented.

This also raises the question of how the relationship between the user and the computer can change over time. The use of genetic algorithms and machine learning methods means that the system adapts through use, effectively learning the user's preferences. The author believes that a consequence of this is that the relationship with the computer can also change over time: from one where the computer is purely an assistant and all creative choice is explicitly controlled by the user, to one where the computer can be seen as more of a collaborator, suggesting parameter combinations that have a high likelihood of producing interesting possibilities.

Currently all aesthetic judgements when rating or categorizing results are explicitly made by the user, but one potentially interesting direction for future enhancement would be to use the system as a platform to provide training data for machine learning to model how the user is rating and categorizing. This could allow a system of interactive training, with the user rating and categorizing some initial samples, the machine making predictions about score and categories for new samples, and the user correcting the predictions if they are wrong. Using such a system we could explore whether we can reach a point where the computer can successfully predict how the user will rate and categorize.

#### **5. Conclusions**

Working with generative systems gives the potential for rich possibilities, but also presents many challenges, particularly when trying to work with a creative intent. It is natural to look towards working collaboratively with a computer as an inherent part of the process, but what is the relationship that we want to develop with the computer? Is there a way that humans and computers can work together to their mutual strengths? In the author's experience the computer can become an active assistant in the process of discovery as well as a medium to work with, enabling creative exploration of systems that the author previously found overwhelming (because they had large numbers of parameters, any of which could affect the results in difficult-to-predict but potentially interesting ways). The process changes to one that feels like a productive, active collaboration, with the computer freeing the author from anxiety when creating new systems and adding parameters that he believes may have the potential to generate unexpectedly interesting results.

In particular, as an artist it can be important to have a relationship that gives at least the plausible illusion of being relevant and provides for interaction that feels rewarding, where decisions by the artist are steering the work in ways that match their intent. The nature of using a system that adaptively changes based on the user's input is that the relationship with the computer can change over time, from one where the computer acts purely as a technical assistant to one where the computer can be seen as a collaborator in the process of creation.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**

Bellman, Richard. 1961. *Adaptive Control Processes: A Guided Tour*. Princeton: Princeton University Press.

Bentley, Peter J. 1999. *Evolutionary Design by Computers*. San Francisco: Morgan Kaufmann, ISBN 978-155860605X.

Conway, John H. 1970. The game of life. *Scientific American* 223: 4.


Dawkins, Richard. 1986. *The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design*. New York: WW Norton & Company, ISBN 978-0141026169.


Ikeda, Ryoji. 2018. Ryoji Ikeda. Available online: http://www.ryojiikeda.com/ (accessed on 20 June 2018).


Lomas, Andy. 2007. Flow. Available online: http://www.andylomas.com/flow.html (accessed on 10 March 2018).


Reas, Casey. 2018. Home Page of Casey REAS. Available online: http://caesuras.net/ (accessed on 20 June 2018).

Stein, Gunter. 2003. Respect the unstable. *IEEE Control Systems* 23: 12–25. [CrossRef]


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Editorial* **Robot Art: An Interview with Leonel Moura**

#### **Leonel Moura**

Artist at Robotarium/Rua Rodrigues Faria, 103 Lisbon, Portugal; arte@leonelmoura.com Received: 16 July 2018; Accepted: 16 July 2018; Published: 18 July 2018

**Abstract:** In the wake of his inclusion in the landmark 2018 "Artists and Robots" show at the Grand Palais in Paris, Leonel Moura reflects herein on his own work and its place within the broad spectrum of techno-art. Of particular current interest is his reliance as an artist on emergent phenomena, i.e., the ability of relatively simple systems to exhibit relatively complex and unexpected capabilities, which have recently come back into focus with the spectacular ability of the "deep learning" family of computer algorithms to perform pattern recognition tasks unthinkable only a few years ago.

**Keywords:** art; technology; robots; techno-art; robot art; emergent phenomenon; emergence

#### **1. Introduction**

#### *Arts:*

As you know, Leonel, the title of our special issue is "The Machine as Artist (in the 20th Century)". Can you please give our readers an overview of how you will be approaching the subject?

#### *LM:*

Can a Machine make Art? This question, bizarre back in 2001 when I started working with artbots, is today recurrent. Why? Because robots are invading our world, artificial intelligence is a reality, and art itself, on the path of Marcel Duchamp, accepts almost anything. However, the main issues remain the same since mid-20th century pioneers started using computers and algorithms to produce a new kind of art. Are machines really creative? Or are they just another tool in the hands (and minds) of human artists? I will try to answer based on my own work.

#### **2. Robots**

#### *Arts:*

For those of us who were not lucky enough to see your installation at the Grand Palais, or who are otherwise unfamiliar with your work, could you please give us a description?

#### *LM:*

My artbots are quite simple (Moura and Pereira 2004). They are autonomous, have an onboard microchip, sensors to avoid obstacles and detect colours, and a device to actuate a colour marker pen (Figure 1). They move in a haphazard way inside an arena (Figure 2), each sensing the colour over which it is passing and reacting by raising or lowering its pen when a certain threshold is sensed, that is, when a certain amount of colour is present. This reaction to the marks left by other robots is an indirect form of communication known as *stigmergy*, as originally described by Pierre-Paul Grassé (Grassé 1959). The process is emergent (Whitelaw 2004): from a random start, randomness is soon replaced by a reactive mode generating patterns and clusters of colour.

**Figure 1.** Three artbots at work. Photo ©2001 Robotarium and used by permission.

The work is finalized by a kind of negative feedback: the robots stop reacting once a certain density of colour is achieved.

The general behaviour of my robot swarm is inspired by ants. These insects communicate among themselves through chemical messages, pheromones, with which they produce certain patterns of collective behaviour, like following a trail. I have replaced pheromones with colour. In this way, the swarm of robots creates unique paintings, impossible to anticipate, in which an abstract composition with different levels of colour concentration can clearly be recognized by the human viewer (Figure 3).
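
The mechanism described here (random marks, threshold-triggered reinforcement, and saturation as negative feedback) can be caricatured in a toy grid model. All thresholds, probabilities and names below are invented for illustration and are not taken from the robots' actual firmware.

```python
import random

def step(grid, robots, threshold=2, saturation=8, rng=random):
    """One update of a toy stigmergy model: each robot takes a
    haphazard step on a grid of colour intensities, lowers its pen
    when it senses enough colour (reinforcing marks left by other
    robots), occasionally makes a random seeding mark, and stops
    reacting once the colour beneath it saturates."""
    h, w = len(grid), len(grid[0])
    for r in robots:
        r['x'] = (r['x'] + rng.choice([-1, 0, 1])) % w
        r['y'] = (r['y'] + rng.choice([-1, 0, 1])) % h
        here = grid[r['y']][r['x']]
        if here >= saturation:
            r['pen'] = False      # negative feedback: the work finalizes
        elif here >= threshold:
            r['pen'] = True       # stigmergic reinforcement
        else:
            r['pen'] = rng.random() < 0.05  # occasional random mark
        if r['pen']:
            grid[r['y']][r['x']] = here + 1
```

Run repeatedly, early random marks attract further deposits and grow into clusters, while saturated cells stop attracting any, so the image settles by itself rather than by an external stopping rule.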

**Figure 3.** *RAP (Robot Action Painting)*, 2007, ink on canvas, 150 cm × 170 cm. ©2007 Robotarium and used by permission.

#### **3. Art**

#### *Arts:*

Thank you, Leonel, for this quite precise description of how your "artbots" create their output; but we come now to the inevitable question—is it Art?

#### *LM:*

Purists regarding human uniqueness will say "no": only humans can make art. This, however, is an outdated concept. It has been understood since at least the birth of abstraction that the main issue in art is neither its production nor the individual artistic sensibility by which it is guided. The main issue of art is art itself: its history, evolution, and innovative contributions. Anything can be considered art if validated by one of the several art world mechanisms including museums, galleries, specialized media, critics, curators, and/or collectors. Only in this way has the Duchampian ready-made and most of the art produced since been accepted and integrated into the formal art realm.

Whether a work of art is made directly by a human artist or is the product of any other type of process is nowadays of no relevance. Recent art history shows many examples of art works based on random procedures, fortuitous explorations, *objets trouvés*, and arbitrary constructions. Surrealism, for example, even tried to take human consciousness out of the loop. More decisive is whether or not a new art form expands the field of art. Since the advent of modernism, innovation has become a more important criterion in evaluating artistic projects than personal ability.

Art made by robots also raises other kinds of issues. For the moment, robots and their algorithms remain human creations. In this sense, it can be said that their artistic production originates in the will and skill of the human artist. But since robots like those I use are able to generate novelty, it must also be recognized that they have at least some degree of creativity. Essential information in creating their composition, such as the detection of colour and small shapes, is gathered directly by the robots. Moreover, the emergent process implies that the resulting art works cannot be predetermined even by the person who initiates the process. Hence, the painting as a composition is the product of machines without decisive human intervention.

The algorithm and the basic rules it introduces via the robot's microchip are, furthermore, not so very different from education. No one will claim that a given novel is the product of the author's school teacher. To the extent that the author, human or machine, incorporates new information, the art work becomes not only unique but also the result of the author's own creativity. In short, I teach the robots how to paint, but afterward, it is not my doing.

If we accept that intelligent machines can already perform many human tasks, why not accept that they can make art? Will I myself keep making paintings if robots can do it so well? Is this a menace to human creativity? No. We have plenty of other things to do.

#### **4. Future**

#### *Arts:*

And finally, Leonel, where is all of this headed? Please share with us your vision of the future.

#### *LM:*

Robots and artificial intelligence still depend on human enterprise. Soon enough, however, machines will be able to undertake their own evolution. And this is not a question of belief but rather of necessity. The autonomy of machines is essential to the best interests of humanity, as in cases such as multiple task performance, big data management, and space exploration. These developments imply the ability of the machine to solve problems, make decisions, and evolve as needed. And the result must be the capacity of machines to build new machines following their own purposes.

Does this pose a risk for mankind? Maybe. The possibility of human/machine confrontation in the future is real inasmuch as humans don't seem to be able to think and behave rationally. One example is the intense development by the military of unmanned robots with lethal capacity.

One way to avoid such an outcome is co-evolution: in a symbiotic manner, machines and humans will continue to depend on each other. In such a context, art can have an important role, teaching humans and machines how to share common goals (Figure 4).

**Figure 4.** *Bebot*, 2017, ink on canvas, 300 cm × 470 cm. ©2017 Robotarium and used by permission.

Art-making machines are also important beyond the creation of beauty or emotional stimulation, as is typically the case in human culture, and here I refer to the fundamental process of fabricating knowledge. No knowledge, be it biological or artificial, can evolve and be perfected without exploration, experimentation, and random creativity. In fact, natural evolution is generally based on such mechanisms. Trial and error evolution can therefore be seen as an equivalent to art, since art, as opposed to science, is non-objective and non-linear. Hence, I would say that the future of robots and artificial intelligence will be artistic, or we may otherwise find ourselves in serious trouble.

**Conflicts of Interest:** The author declares no conflicts of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Self-Improving Robotic Brushstroke Replication**

#### **Jörg Marvin Gülzow \*, Liat Grayver and Oliver Deussen**

Fachbereich Informatik und Informationswissenschaft, Universität Konstanz, 78464 Konstanz, Germany; liat.gra01@gmail.com (L.G.); oliver.deussen@uni-konstanz.de (O.D.)

**\*** Correspondence: marvin.guelzow@uni-konstanz.de

Received: 28 September 2018; Accepted: 10 November 2018; Published: 21 November 2018

**Abstract:** Painting robots, like e-David, are currently unable to create precise strokes in their paintings. We present a method to analyse given brushstrokes and extract their trajectory and width using a brush behaviour model and photographs of strokes painted by humans. Within the process, the robot experiments autonomously with different brush trajectories to improve the reproduction results, which are precise to within a few millimetres for strokes up to 100 millimetres in length. The method can be generalised to other robotic tasks with imprecise tools and visible results, like polishing or milling.

**Keywords:** robotics; painting; art; generative method; brush

#### **1. Introduction**

E-David is an automatic painting system that uses an industrial robotic arm and a visual feedback system to create paintings. It can easily be adapted to varying painting styles, tools, surfaces and paints. Based on an input photo, the system creates a set of brush strokes and executes them using the robot arm. E-David aims to approximate the human painting process (Deussen et al. 2012). A variety of painterly rendering algorithms have been developed to work within the feedback-loop, in which the robot periodically takes a photograph of its progress and determines which actions to take based on the difference between the canvas and input image Lindemeier et al. (2013). These capabilities make e-David much more than just a printer capable of reproducing flat images, as it creates unique works through the application of paint strokes that are irreproducible in terms of colour blending and materiality of their layering. The possibility of visual feedback opens up many interesting questions within the contemporary discourse on deep learning, artificial intelligence and robotic creativity. One of them is the optimization of the painting process by eliminating unnecessary strokes in order to mimic a human's efficiency when painting Lindemeier et al. (2015).

Currently, the system uses a fairly static approach to painterly rendering. Strokes are painted with a brush held at a constant distance to the canvas throughout the entire process. The application pressure is never changed and the stroke width can only be varied by switching between brushes of different sizes. Furthermore, strokes are fully computer-generated for each image. Artists working with e-David have often expressed the desire to introduce their own strokes, which the system would then replicate. Knowledge about how a certain brush movement produces a stroke is also not retained. These limitations of the robot's painting technique decreased both the artistic and scientific usefulness of the machine, because much of the finesse required for detailed painting was lacking. The new methods presented in this paper allow for a much more controlled placement of strokes and thus more detailed and varied paintings.

In order to enable e-David to precisely draw strokes, we have developed several new methods designed to mimic the human process of painting. The first method involves measuring the width of a stroke produced by a brush at a certain application pressure. This allows the creation of pressure profiles that map the distance of the brush from the canvas to the width of the stroke. We then generalised this technology to measure the width of non-overlapping strokes of nearly any shape and size, as well as the movement used to paint them. Using knowledge acquired by these two methods, a reproduction step was developed that recreates a stroke created by a human as closely as possible. Finally, we implemented a process that automatically improves the reproduction result by correcting deviations between the target stroke and the result. Each attempt is stored in a stroke database for later use in machine learning projects or as a repertoire for future painting.

On the artistic side of the project, Liat Grayver has been collaborating with the e-David team during the past three years in order to investigate methods to redefine one of the primitive forms of art—painting—in our current technology-based era. Specifically, this includes investigating new methods for the application of paint on canvas and for using computer-assisted generation of physical images. More broadly, the work aspires to harness computers and machines to establish new and innovative avenues in contemporary artistic practices. Grayver states:

"One of the aspects of artistic practice in general, and painting in particular, is the attempt to manifest one's own personal or intimate perspective through materials into the public and social discourse. This is not only about the form or the finished object, but also about the process, the perspective and perception of a structure—all of which is defined by our dynamic surroundings and contemplated through the tools, mediums and technology of the present time and local place."

One could claim that the history of art and culture is aligned with the history of technological innovation. The creation of painting machines is an attempt to explore and create new methods of human expressiveness; making the machine, in a way, more compatible with human playfulness and creativity. A painting robot seeks to achieve a result that can be experienced and felt as similar to the human way of painting. In other words, something aligned with the duality one can find in a work of art: a level of randomness balanced with precision, and expressivity merged with a complex mathematical architecture.

In the following, we provide an overview of painting machines, from the earliest works in the 1760s up to contemporary devices, followed by a brief history of e-David and works produced by the machine. Finally, aspects of brush control and single stroke reproduction are discussed from both a technical and an artistic point of view.

#### **2. A Brief Overview of the History of Painting Machines: From Jaquet-Droz to Other Contemporary Practices**

The history of automata reaches back to antiquity, with mechanical computers like the Antikythera mechanism from around 150 BCE de Solla Price (1974) and musical automata built by Ismail al-Jazari around 1200 CE Hill (1991). The first known surviving complex painting machines appeared during the 18th century, as people in the Western world began to develop an increased interest in mechanical devices Maillardet (2017).

#### *2.1. 18th and 19th Century*

The earliest fully automated painting machine currently known is the "Draughtsman" or "Artist"—one of the three automata built by the Jaquet-Droz family between 1768 and 1774 Bedini (1964). These automata are small dolls driven by internal clockwork-like mechanisms that coordinate their movement. A central control wheel holds many plates shaped in such a way that they can act as a cam<sup>1</sup>. Followers are shifted to a selected wheel, reading information from the wheel as it performs one revolution. The followers then transfer the motion through amplification mechanisms to the end effectors Droz (2014).

<sup>1</sup> A cam is a rotating disk, on which a follower rests. The changing radius of the cam moves the follower up and down, thus translating rotational to linear motion. This is for example used in combustion engines, where a camshaft opens and closes the fuel valve.

*Arts* **2018**, *7*, 84

The "Musician" is a machine that plays a functional miniaturised organ by pushing the instrument's keys. The "Writer" produces a 40 letter text by reading each letter from a reconfigurable wheel and using an ink-quill to write on a sheet of paper Mahn (2013), Schaffer et al. (2013). This automaton is interesting as it represents one of the early programmable machines and introduces the concept of encoding information in shaped metal plates—a process that was later used to store sound information in vinyl records.

The "Artist" is capable of producing four different paintings: a portrait of King Louis XV; the couple Marie Antoinette and Louis XVI; a dog with "Mon toutou" written next to it; and a scene of a butterfly pulling a chariot Mahn (2013). Each scene consists of many lines drawn with a pencil that the automaton holds in its hand. These sketches are literally hard-coded into the metal cams inside the machine and cannot be reconfigured easily. The automaton and two of its stored drawings can be seen in Figure 1. Furthermore, the painter periodically blows away the graphite dust left behind by its pencil using an integrated air pump. This feature is not found in any contemporary painting robot, despite being potentially useful for selectively drying paint Schaffer et al. (2013).

**Figure 1.** The "Artist" painting automaton by Jaquet-Droz (**Left**, Rama (2005)) along with two of the four images that it can draw: a portrait of Louis XV (**Top Right**) and a drawing of a dog with "Mon toutou" written next to it (**Bottom Right**) Droz (ca. 1770).

After Jaquet-Droz's early work, Henri Maillardet constructed the "Juvenile Artist" around 1800 (Figure 2). This machine is another automaton capable of writing and drawing using a quill or a pencil and was shown at exhibitions from 1807 to 1837. The device was delivered in 1928 to the Franklin Institute in Philadelphia in a disassembled state. Upon being restored, the automaton produced four drawings and wrote three poems stored in its mechanism, which also revealed the forgotten name of its original creator. Alongside the final poem, it wrote "Ecrit par L'Automate de Maillardet". Like Jaquet-Droz's machine, the "Juvenile Artist" also uses brass cams to store movement information that is transferred to the arm using followers Bedini (1964); Maillardet (2017).

**Figure 2.** Henri Maillardet's reconstructed automaton shown here without the original shell (Left, Maillardet (2011)) and the poem that identified the original builder of the machine Maillardet (ca. 1800).

#### *2.2. From Modern Time to Contemporary Painting Machines*

From their introduction at the beginning of the 20th century up to the present, automated devices have become widespread and range from simple appliances to industrial robots. The most common machines that handle paint are industrial paint spraying robots or simple plotters International Federation of Robotics (2016). These machines, however, are different from actual *painting* robots, of which only a handful exist. A painting robot is typically a machine built to replicate or create works similar to human art. It uses multiple paints or other pigments that it deposits on a painting surface with general-purpose brushes instead of specialized tools like paint nozzles. This class contains both XY-plotter based robots as well as robotic arms, both of which can operate with brushes.

Towards the end of the 20th century, as computer use became widespread, several artists created computer programs to autonomously generate art and explore the potential of creativity in machines. While their main output medium was a printer and not an actual painting robot, their introduction of artificial creativity into the artist community was significant. The main actor in this was Harold Cohen, who in 1973 built AARON, a computer program designed to create images intended to be both artistic and original Cohen (2016). The program is able to generate objects with stylistic consistency, which Cohen has transferred to physical paper or canvas with "turtle robots" and printers. He states that AARON is neither creative nor a thinking machine, which raises the question of whether its output is art or not Cohen (1995).

Hertzmann, in a recent publication, states that artificial intelligence systems are not intelligent and that "artificial creativity" is merely the result of algorithms that generate output based on rules or by combining preexisting work Hertzmann (2018). He concludes that computers cannot generate art for now, as they lack the social component that human artists possess. He does, however, state that technology is a useful tool for artists to create better artwork.

Actual painting robots mostly arose in the 21st century, as the required technology became more mature and available to a wider audience. The number of painting robots currently in existence is unknown, as it is possible for hobbyists to create a functional machine from cheap hardware. Furthermore, many artists who use automata may not publish their work outside of exhibitions, thus making it difficult to estimate the use of painting robots globally. A selection of contemporary well-known robots and their creators follows:

*CloudPainter*, built by Pindar van Arman, is a software system that controls XY-Plotters and small robotic arms, which are able to use brushes to paint on a canvas Arman (2017). Van Arman presented his machine as a "creative artificial intelligence" in a TED Talk from April 2016 Arman (2016). This mainly refers to a TensorFlow Abadi et al. (2015) based style transfer algorithm.

*TAIDA* is a painting robot at the Taiwan NTU International Center of Excellence on Intelligent Robotics and Automation Research Luo and Hong (2016). It is a custom-built arm with seven degrees of freedom (DoF) that can dip a brush into small paint containers and paint on a flat canvas in front of the arm.

A specialised group of Chinese calligraphy robots exists; they are not used to create general artwork, but rather use specialised and often custom-built hardware to produce Chinese calligraphy. Examples include *Callibot* Sun and Xu (2013) or the *CCC* (Chinese character calligraphy) robot Yao and Shao (2006). Other unnamed calligraphy machines focus on brush mechanics Kwok et al. (2006); Lo et al. (2006); Zhang and Su (2005): using measurements of brush deformation, footprints and other mechanics, they optimise the painting of calligraphic elements, which allows them to create human-like writing.

#### **3. E-David**

E-David (an acronym for "Electronic Drawing Apparatus for Vivid Image Display") is the robotic painting system that has been under development at the University of Konstanz since 2008.

It has several features that distinguish it from other painting machines. An optical feedback system integrates information about the current canvas state into the painting process. E-David also has the ability to handle a large number of paints, to switch between brushes and to utilise sophisticated brush-cleaning methods for a cleaner result. Recent results can be seen in Figure 3. E-David is not a single machine, but rather a complex system consisting of hardware and software components.

**Figure 3.** Recent paintings created with e-David.

#### *3.1. Hardware*

E-David consists of a robotic arm mounted in front of a canvas. A brush is held by the arm and serves as the end effector. An accessory table holds a palette of colours that the robot dips the brush into, as well as mechanisms to clean and dry the brush before switching to another paint. A camera is positioned behind the setup such that it has a full view of the canvas, so feedback photos can be taken. These are used by the painting software described in the next section.

The main robot used by the project is a Reis RV-20 6, which is intended for welding applications and machine tending. See Figure 4 for a photo of this device. The robot is permanently bolted to the laboratory floor due to its weight and thus cannot be transported for events such as exhibitions. A mobile version, called "e-David Mini", was built using a small KUKA YouBot and was able to demonstrate the painting process in various locations. This setup was demonstrated in Luzern in 2014 at the Swiss ICT award, and an upgraded version was shown in Leipzig in 2016 at an art exhibition Halle 14 (2017). A photo of the setup can be seen in Figure 4b. The YouBot turned out to be unsuitable for further work, as it lacks one degree of freedom, has no control software and developed mechanical defects after some use. Hence a new robot, the ABB IRB 1200 (see Figure 4c), has been acquired. An in-depth description of the technical aspects of the robots used can be found in Section 9.

**Figure 4.** The robots used for the e-David project. (**a**) Reis RV-20 6: The main robot (active); (**b**) KUKA YouBot: Mobile demonstrator (retired); (**c**) ABB IRB 1200: Mobile demonstrator (active).

#### *3.2. Painting Software*

After being given a digital image, e-David is capable of reproducing this target picture on a canvas using any paint that can be applied using a brush or a similar tool.

This painting approach is based on a *visual feedback loop*. The camera placed behind the robot provides a photograph of the current canvas state. By analysing differences between the current canvas state and the target picture, new strokes are computed and the robot is instructed to perform them. After a batch of strokes has been applied, a new feedback photo is taken and a new round of strokes is computed. This process repeats until the canvas looks similar enough to the input image. In each round, strokes are computed using a method similar to the Hertzmann algorithm Hertzmann (1998). The robot starts out with a large brush and performs several iterations. After each one, it switches to the next smaller brush, thus first generating a background followed by details layered on top.
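The feedback loop just described can be sketched as follows. This is a deliberately toy version: images are nested lists of grayscale values, "photographing" simply reads the canvas array, and a planned stroke sets a single cell perfectly, whereas the real planner computes painterly strokes and real paint behaves far less predictably. All names here are illustrative, not e-David's API.

```python
def mean_abs_diff(a, b):
    """Average absolute per-pixel difference between two grayscale images."""
    diffs = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(diffs) / len(diffs)

def plan_strokes(target, canvas, limit=10):
    """Pick the worst-matching cells: a crude stand-in for the
    Hertzmann-style stroke planner used by the real system."""
    cells = [(abs(target[i][j] - canvas[i][j]), i, j)
             for i in range(len(target)) for j in range(len(target[0]))]
    cells.sort(reverse=True)
    return [(i, j) for d, i, j in cells[:limit] if d > 0]

def painting_loop(target, canvas, n_brushes=2, eps=1.0, max_rounds=100):
    """Toy feedback loop: photograph, compare, paint a batch, repeat."""
    for _brush in range(n_brushes):           # large brush first, then details
        for _ in range(max_rounds):
            if mean_abs_diff(target, canvas) < eps:
                return canvas                 # close enough to the input image
            for i, j in plan_strokes(target, canvas):
                canvas[i][j] = target[i][j]   # an idealised, perfect stroke
    return canvas
```

The essential structure survives the simplification: each round acts only on the *difference* between canvas and target, which is also what makes error correction (Section 3.2) fall out of the loop for free.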

A further benefit of the feedback loop is that it allows for error correction during the painting process. When the robot paints, inaccuracies in the painting occur due to deforming brushes, dripping paint and other hard-to-predict behaviours of the painting implements. Human painters circumvent these issues by avoiding certain behaviours based on experience, such as not keeping the brush too wet to prevent dripping, and by detecting mistakes visually after they have occurred and correcting them. While an effort is made to avoid defects in the painting through appropriate hardware design, some will inevitably occur. For example, the feedback mechanism detects a drop of paint as a colour mismatch and will draw over it using the correct colour.

The system is designed in such a way that any robot capable of applying paint to a canvas and providing feedback pictures can be used as an e-David robot. Only a driver, which translates between the e-David software's painting commands and the robot, needs to be implemented for each machine. This is why no major redesigns were required to accommodate a new robot. Today, the same software (save for the driver) operates both the RV-20 and the IRB 1200.

#### **4. Methods**

The general principle behind the current painting approach is to layer many strokes of thin paint upon each other. This allows the process to slowly converge onto the goal image. Strokes are placed according to a Voronoi-based optimization method, described in Lindemeier et al. (2015), which allows strokes to be arranged in a natural pattern and to adapt to their neighbouring strokes. The process is further extended through semi-automatic decomposition into layers for sharp corners and back-to-front painting Lindemeier et al. (2016).

The current process has several disadvantages. While up to five brushes of different sizes can be used by switching between them, the application pressure at which they are used is always constant. However, by varying the pressure, it is possible to achieve a continuous range of stroke widths and even to vary width within a stroke. This is often used by human artists for detailing. Furthermore, the system does not take brush dynamics into account, so brush deformation and other effects lead to the actual stroke deviating by several millimetres from the intended location. This effect becomes less visible when multiple layers of paint are applied, but makes detailing difficult.

In order to improve upon these issues, we explored new techniques for the precise placement of single strokes. The goal of the techniques described in the following sections is to develop a better robotic handling of difficult tools like brushes, and to include knowledge about their behaviour in order to achieve more precise results.

#### *4.1. Physical Properties of Brushes and Stroke Width*

When a brush is used to apply colour to a canvas, it deforms and changes its shape. This determines how colour is applied to the canvas. A human painter naturally varies the pressure of the brush hairs on the canvas in order to adjust the resulting stroke properties.

The e-David system has so far used a constant pressure for most created paintings, i.e., the brush is always kept at a constant distance from the canvas. Hence, stroke width is only dependent on the brush type and there is no variation within a stroke. A preprogrammed pressure ramp is used at the beginning and end of a stroke in order to make it look more natural, but the width remains constant for the main body of the stroke.
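The constant-pressure trajectory with end ramps described above can be sketched as a list of z-offsets along the stroke. The depth and ramp-length values here are illustrative, not the system's actual parameters.

```python
def pressure_profile(n_points, base_depth=1.5, ramp=3):
    """Depths (in mm behind the canvas plane) for each trajectory point:
    constant in the stroke body, with short linear ramps at both ends so
    the stroke starts and finishes more naturally."""
    depths = []
    for k in range(n_points):
        t = min(k, n_points - 1 - k)              # distance to nearer stroke end
        scale = min(1.0, (t + 1) / (ramp + 1))    # ramp up over `ramp` points
        depths.append(base_depth * scale)
    return depths
```

For a 10-point stroke this yields shallow depths at both ends and the full `base_depth` throughout the middle, matching the "constant width in the main body" behaviour described in the text.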

While brushes of various kinds are essential in many production environments, very little academic research has taken place concerning the exact prediction of brush deformation Przyklenk (2013). Many virtual brush models exist, however, with the most prominent being introduced by W. Baxter in 2004 Baxter and Lin (2004). This model treats a brush as a small set of polygonal strips that are simulated kinematically as several external forces act upon them. The resulting simulation is very useful in a virtual painting environment, but the simulated brush does not correspond to a real brush in a way that would allow predictions about the behaviour of the real brush. A realistic simulation of a brush that predicts its behaviour precisely enough for use with the robot would need to account for many parameters, for example bristle material, previous deformation, paint viscosity and so on.

Using a brush with today's industrial robots is problematic. The kinematic models used to coordinate the movement of robotic arms all require that the robot's tool has a so-called *tool centre point* (TCP). The TCP is assumed to be a static point solidly attached to the robot's end effector, which holds true for common devices like welding guns, drills or grippers—but not for brushes. Due to deformable bristles, the tip of a brush may vary in position after every stroke and the entire body of the bristles can be employed for transferring paint. Because brushes violate the solid-TCP assumption of industrial robots, we have developed several compensation methods for e-David that account for variations in tool location. The first method accounts for stroke width as a function of pressure, while the second corrects for brush hairs dragging along the paper. The second method is described in Section 4.5.

#### *4.2. Visual Feedback and Stroke Processing*

The feedback camera is calibrated before use in order to obtain usable images for stroke analysis. The calibration process accounts for lens distortion, external lights and colour variations between cameras.

Lens distortion is corrected through a separate calibration process, during which a calibration panel of known size is placed in several locations. Using 25 or more calibration pictures, a reprojection matrix is computed and the image can be rectified. This is necessary in order to obtain a canvas image in which distances between points are consistent, independent of their location in the frame.

Afterwards the canvas is calibrated for lighting by placing a blank sheet of paper of uniform colour onto it. Differences in brightness are measured and a light map is generated, which is used to brighten dark areas in feedback images. This does not correct for glare, which can occur on wet paint; in general, a soft, consistent light is still required for the feedback process to work.
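A minimal version of such a light map, assuming 8-bit grayscale images stored as nested lists: each pixel's gain is the ratio between the brightest value observed on the blank sheet and the value at that pixel, so darker corners get proportionally brightened. This is a sketch of the idea, not the project's calibration code.

```python
def build_light_map(white_photo):
    """Per-pixel gain map from a photo of a blank white sheet (values 0-255)."""
    peak = max(max(row) for row in white_photo)
    return [[peak / max(v, 1) for v in row] for row in white_photo]

def apply_light_map(photo, gain):
    """Brighten a feedback photo using the precomputed gain map."""
    return [[min(255, round(p * g)) for p, g in zip(pr, gr)]
            for pr, gr in zip(photo, gain)]
```

With uniform lighting the gain map is all ones and photos pass through unchanged; under uneven lighting, regions that appeared dark on the blank sheet are lifted to match the brightest region.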

The final calibration step is a colour correction. A calibration target of known colour is placed upon the canvas and a picture is taken. The resulting image is then compared to the known values and a colour transformation matrix is computed which can be used for subsequent feedback images.

Given the enhanced canvas feedback photo, a certain canvas region is specified as the input area. This subimage is thresholded in order to separate the stroke from the background. Otsu's method is used for thresholding. It automatically finds an adaptive threshold value by searching for an optimal partition of pixels in the image, based on the intensity distribution in the histogram Otsu (1979). While Otsu's method does have a certain bias and may not produce the optimum threshold Xu et al. (2011), it is sufficient for the black-on-white brushstrokes used here and even works with coloured strokes. The method has proven to be much more robust than a fixed thresholding value.
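Otsu's method is compact enough to sketch in full. The following is a standard textbook implementation operating on a flat list of 8-bit grayscale values, not e-David's actual code: it exhaustively tries each threshold and keeps the one maximising the between-class variance.

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the threshold t that maximises the
    between-class variance of background (<= t) vs. foreground (> t)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0                      # background weight and intensity sum
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue                       # no background pixels yet
        w_fg = total - w_bg
        if w_fg == 0:
            break                          # no foreground pixels left
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For the black-on-white strokes used here the histogram is strongly bimodal, which is exactly the case where Otsu's adaptive threshold is most reliable.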

Afterwards, internal holes of the stroke are filled, as these are assumed to be defects caused by the brush running out of paint locally or even splitting.

Gaussian blurring is then applied in order to reduce the influence of certain stroke features on the result. These are small gaps within the stroke or frays caused by some brush hairs separating from the main brush body. However, the blurring may not be too strong, as this can remove thin parts of a stroke, which should be preserved. Hence the kernel size of the blurring algorithm is chosen to be 0.025 times the maximum dimension of the image. This value has been determined experimentally<sup>2</sup> and works well for many kinds of strokes.

Using the thinning algorithm described in Zhang and Suen (1984), the strokes are reduced to single-pixel lines. Thinning a pixel image removes outer layers of an area until it is reduced to a topologically equivalent skeleton with a width of one pixel Baruch (1988); Hilitch (1969). The implementation of this technique has been taken from Nash (2013).

After having obtained the thinned image (see Figure 5), it is much easier to identify the beginning and end points of a stroke: these are simply white pixels with only one neighbour. While some cases exist where a start or end pixel has two neighbours, this approach is robust enough for a first decomposition method. Beginning from a starting pixel, the line is followed by moving from the current pixel to a neighbouring one that has not been visited yet, which avoids getting stuck in a loop. The walk terminates when an end pixel is reached. If a pixel without unvisited neighbours is encountered while unvisited pixels still exist, the algorithm backtracks until it finds a pixel with unvisited neighbours. This corresponds to a depth-first search Tarjan (1972) and ensures that the full stroke is always explored. By doing this for every starting pixel, all possible ways to paint a stroke are found.
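The endpoint detection and depth-first walk can be sketched directly on a set of 8-connected skeleton pixels, matching the one-pixel-wide lines produced by thinning. This is an illustrative reimplementation, not the project's code.

```python
def endpoints(skeleton):
    """Start/end candidates: skeleton pixels with exactly one neighbour."""
    def degree(x, y):
        return sum((x + dx, y + dy) in skeleton
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy)
    return [p for p in skeleton if degree(*p) == 1]

def trace_stroke(skeleton, start):
    """Depth-first walk from an endpoint over the skeleton, returning the
    pixels in visit order; backtracking handles branches and loops."""
    def unvisited_neighbours(x, y, visited):
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy)
                and (x + dx, y + dy) in skeleton
                and (x + dx, y + dy) not in visited]
    visited, stack, order = {start}, [start], [start]
    while stack:
        nxt = unvisited_neighbours(*stack[-1], visited)
        if nxt:
            p = nxt[0]
            visited.add(p)
            order.append(p)
            stack.append(p)
        else:
            stack.pop()        # backtrack until an unvisited branch is found
    return order
```

Running `trace_stroke` once from each endpoint enumerates the possible painting directions, mirroring the two-arrow ambiguity shown in Figure 5d.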

An example trajectory extraction can be seen in Figure 5. The original Figure 5a contains several strokes of varying shape and size. The lighting is not perfect and the centre stroke contains several defects. The hole filling and blurring (Figure 5b) removes most of them. The thinned Figure 5c exactly matches the strokes in the image, even the very noisy dot in the bottom left. Finally, Figure 5d shows the detected trajectories as a sequence of small arrows, which represent the calculated robot movements. Note that both possible directions are drawn, as the original direction cannot be reliably inferred from a stroke. Even humans cannot reliably guess whether the top right stroke was drawn from left to right or right to left without prior knowledge. In this case it was drawn starting from the right.

For use by the robot these 2D stroke trajectories in pixel space are transformed into 3D vectors in millimetres. Since both the canvas size in millimetres and the image size in pixels are known, a pixel coordinate can be transformed to its corresponding X/Y location in millimetres with a simple linear transformation.
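
The transformation amounts to a per-axis scale; a minimal sketch (parameter names are ours, and the Z component stands in for the pressure axis set later):

```python
# Map a pixel coordinate to a 3D canvas position in millimetres.
# Assumes the photo exactly covers the canvas, so each axis is a pure scale.
def pixel_to_mm(px, py, img_w_px, img_h_px, canvas_w_mm, canvas_h_mm, z_mm=0.0):
    x_mm = px * canvas_w_mm / img_w_px
    y_mm = py * canvas_h_mm / img_h_px
    return (x_mm, y_mm, z_mm)
```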

<sup>2</sup> Smaller kernel sizes leave too much noise in the image, like frays. Larger sizes do not work well with thin strokes.


#### *4.3. Stroke Width Calibration*

In order to calibrate stroke width, we first devised a technique for measuring how a brush behaves when it is applied to a canvas with varying pressure. In this case, the stroke width generated at certain pressures is of greatest interest. In the following, the term "brush pressure" is used to describe how firmly a brush is applied to the canvas by the robot rather than the physical pressure exerted on the bristles. The application is characterised by how far the robot TCP moves behind the canvas plane in millimetres. For example, at 0 mm, the tip is just barely touching the canvas and the brush is not deformed. At 2 mm the robot moves the TCP behind the canvas plane, i.e., the brush is held even closer to the canvas. Due to this collision, the brush hair deforms and the deformation increases as the TCP is moved forwards. For brevity, this is described as the "brush pressure in millimetres".

In order to control the variation of thickness within a stroke, the relationship between applied pressure and delivered thickness must be known for the current brush. Since commonly used paint brushes are not manufactured to high precision standards<sup>3</sup>, each brush must be measured individually. To this end, a fully automatic process has been developed that determines the breadth of a stroke at a certain brush pressure.

The brush starts out 5 mm away from the canvas surface and is held perpendicular to it. The distance is reduced at a known rate while the brush is moved along a straight line at constant velocity, using the robot's linear path motion functionality. This yields a stroke on the canvas of increasing width. Within this stroke, the distance between brush and canvas is known at every point, which creates a map between pressure and resulting stroke width. By repeating this process several times, errors caused by external factors, such as clumping paint or a deformed painting surface, can be minimised. The result of this procedure can be seen in Figure 6.

After painting the calibration strokes, a photo of the canvas is taken. Note that the calibration is robust against any background features, as only the calibration area is considered. Hence even a "noisy" canvas can be used, e.g., by placing a blank sheet of paper over it. A *difference image* is created by subtracting the resulting image from a photo of the canvas before painting the calibration strokes.

<sup>3</sup> Variations of up to three millimetres in bristle length and width have been observed during experiments.

This isolates the new strokes and makes it possible to use any colour for the calibration process, as long as it is sufficiently distinct from the background colour.

After some additional processing of the strokes, as seen in Figure 6, their width in pixels is measured in each available pixel column. The actual width in millimetres can be calculated, as the scale of camera pixels to millimetres on the canvas is already known. By collecting all these values, a table mapping pressure to width is created and stored for later use to determine the necessary Z for a desired width. A plot of measured values can be seen in Figure 7a,b.
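
A minimal sketch of the lookup (our own illustration): the stored table maps pressure to measured width, and the pressure required for a desired width is found by linear interpolation between the two nearest calibration samples:

```python
# table: list of (pressure_mm, width_mm) calibration samples, with width
# increasing monotonically with pressure. Returns the interpolated pressure
# expected to produce target_w; clamps outside the calibrated range.
def width_to_pressure(table, target_w):
    pts = sorted(table, key=lambda t: t[1])
    if target_w <= pts[0][1]:
        return pts[0][0]
    if target_w >= pts[-1][1]:
        return pts[-1][0]
    for (p0, w0), (p1, w1) in zip(pts, pts[1:]):
        if w0 <= target_w <= w1:
            f = (target_w - w0) / (w1 - w0)   # fractional position in segment
            return p0 + f * (p1 - p0)
```

Clamping at the table boundaries reflects the physical limits of the brush: it cannot deliver widths outside its calibrated range.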

**Figure 6.** Pressure/Width calibration of a brush.

**Figure 7.** Measured relationships between brush pressure and stroke width.

In general, this method yields good results: the detected widths were verified by measuring the calibration strokes with a calliper, and no deviation was detected within a precision of 0.01 mm. This is in line with the camera's resolution of about 0.16 pixels per millimetre.

After a calibration run, the robot was able to draw strokes of a selected width with a precision of 0.02 mm to 1.32 mm, depending on the brush. Each brush has a range in which it performs best. For example, the DaVinci Junior 12 works well for strokes around 1 cm in width and becomes less precise for smaller widths around 5 mm, where the Junior 8 performs much better. Extremely fine strokes down to a width of 0.25 mm can be painted. The error introduced by the robot itself is negligible here, as it has a positioning repeatability of ±0.03 mm at its maximum speed of 7.9 m s<sup>−1</sup> ABB (2017b). Since the calibration is done at a TCP velocity of 100 mm s<sup>−1</sup>, the robot can be expected to be more precise than the specified maximum error.

Accurate results were also obtained with a linear model fitted to the data by regression: the largest observed error between predicted and measured width was 0.5 mm, with the largest brush. Smaller brushes, which are used to paint smaller features, show significantly lower errors. Stroke calibration is now done routinely for each new brush, and stroke precision is highly repeatable, as no errors beyond the stated bounds have been observed so far.
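
The regression itself reduces to an ordinary least-squares line fit, width = a · pressure + b; a self-contained sketch (illustrative, not the project's code):

```python
# Least-squares fit of width = a * pressure + b to calibration samples,
# given as a list of (pressure, width) pairs.
def fit_linear(samples):
    n = len(samples)
    sx = sum(p for p, _ in samples)
    sy = sum(w for _, w in samples)
    sxx = sum(p * p for p, _ in samples)
    sxy = sum(p * w for p, w in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    b = (sy - a * sx) / n                           # intercept
    return a, b
```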

A limitation of this method is the assumption that the painting surface starts out and remains flat throughout the painting process. However, materials such as paper are prone to warping, especially when water is applied to the surface along with paint. Another issue is the unpredictable flowing of paints such as ink, which quickly follows moisture in a paper canvas. For now, acrylic and gouache paint have been used for their predictable behaviour and limited warping of the canvas.

The data obtained here provides the basic information about brush behaviour that is required for stroke reproduction and self-improvement methods. For future work in robotic painting, this approach can be used to enable the machine to use a much more human-like approach to brush control.

#### *4.4. Stroke Reproduction*

The goal of the reproduction step is for the robot to recreate a stroke as precisely as possible when given only a photograph of the target stroke. This enables users to paint a stroke they would like to use in an image and have the robot store it for later use. Each reproduction also yields data about the difference between the target stroke and the reproduction result, which can later be used as a dataset for machine learning algorithms.

Through this feature, the robot gains the ability to produce a known stroke anywhere on the canvas. This is useful to create both patterns and details: repetition of a stroke can yield a surface with a certain structure. Placing a specific stroke in a precise location can be used by a painting algorithm to deliberately introduce detail or to speed up the painting process. This is in contrast to the current state of the art of placing strokes in a less directed way, which sometimes causes paintings to lack detail and sharp lines.

The reproduction of a stroke happens in four steps: the target stroke is photographed by the feedback camera; its trajectory is extracted as described in Section 4.2; the widths measured along the stroke are translated into brush pressures using the calibration data from Section 4.3; and finally the resulting trajectory is executed by the robot.

Figure 8 shows an example reproduction. The chosen example is a fairly complex stroke that was painted using a brush with 12 mm long bristles, which deform significantly. Despite these obstacles, the achieved result is quite close to the original. In Section 4.5 a method is discussed that minimises the remaining divergences. Stroke reproduction is highly repeatable, as long as the input stroke is of high enough quality. For example, a "hollow" brush stroke, where the brush has run out of paint, cannot be used, as it will cause the robot to attempt a reproduction of several strokes.

**Figure 8.** Reproduction of a stroke similar to the letter "b".


#### *4.5. Experimental Stroke Improvement*

Although it works well for simple strokes, the reproduction process still suffers from inaccuracies for more complex strokes due to unknown brush dynamics. While the reproduced strokes can already be used for writing, the remaining deviations are an issue for other tasks. For example, if a specific detail such as an eyebrow is to be painted in a portrait, even small errors can alter the overall impression. For this reason, we considered several possible ways to increase e-David's precision.

One of these was to measure brush behaviour along curves, in a similar fashion to the pressure-to-width calibration. The acquired data could be used to create a brush model that predicts the cornering behaviour of measured brushes. The measurement, however, would have to be made for a number of curve radii and pressure levels, which would have required an impractically large number of samples. Hence we decided against this simulation-based approach.

Instead of virtual simulation, we investigated a method for physical experimentation directly on paper. Physical experimentation has been used before in autonomous robot experimentation for biochemical research: for example, the ADAM robot, built by Sparkes et al., is a "robot scientist" that can autonomously formulate hypotheses about gene expression in yeast and then check them in a robotic laboratory, without human intervention Sparkes et al. (2010).

In our case physical experimentation is achieved by painting directly on paper and checking the result using the visual feedback system. The robot is given a *stroke prototype*, which has been painted onto the canvas by a human. The robot records this stroke using its feedback camera. Using the method described in Section 4.4, an initial *attempt stroke* is painted. Through observation of the difference between prototype and attempt, a new trajectory is computed. The experiment consists of painting this improved trajectory and recording the result. By repeating this process several times, the similarity of the attempt to the prototype stroke should increase.

Two strokes are compared by overlaying them. Their trajectories are computed and the deviation of the attempt from the prototype is determined in each point of the prototype. This is done by scanning along a line orthogonal to the prototype trajectory until the attempt trajectory is hit. Some special cases, like finding a correspondence to an unrelated part of the other stroke, must be considered. After discarding erroneous measurements, a new point is created for each control point in the original stroke. The new points are offset in the opposite direction of the measured deviation, which corrects for the deformed brush lagging behind the intended trajectory. For example, in areas of high curvature in the prototype stroke, the first attempt will commonly undershoot and be quite flat. In this case, the correction algorithm creates a more sweeping movement, bringing the brush to the correct location on the canvas.
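
The correction step can be sketched as follows (a minimal illustration under simplifying assumptions: the deviations are already measured as signed offsets along the local normal at each control point, and the names are ours):

```python
# prototype: list of (x, y) control points of the target stroke.
# deviations: signed offset of the attempt from the prototype, measured
# along the local normal at each control point. The corrected point is
# pushed the same distance in the opposite direction, compensating for
# the deformed brush lagging behind the intended trajectory.
def correct_trajectory(prototype, deviations):
    corrected = []
    n = len(prototype)
    for i, (x, y) in enumerate(prototype):
        # local tangent estimated from neighbouring control points
        x0, y0 = prototype[max(i - 1, 0)]
        x1, y1 = prototype[min(i + 1, n - 1)]
        tx, ty = x1 - x0, y1 - y0
        norm = (tx * tx + ty * ty) ** 0.5 or 1.0
        nx, ny = -ty / norm, tx / norm        # unit normal to the tangent
        d = deviations[i]
        corrected.append((x - d * nx, y - d * ny))
    return corrected
```

For a straight prototype whose attempt drifted sideways at the middle point, the corrected trajectory overshoots by the same amount in the opposite direction, so that the next physical attempt lands closer to the prototype.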

#### **5. Results**

The work presented thus far is a step towards improving the brush handling capabilities of e-David. Fine-tuning the painting technique at the single-stroke level is an important part of producing results that are more similar to human works. The contribution here is threefold:

First, through the measurement of brush behaviour with respect to pressure, a primary characteristic of the utilised tool is included in the painting process. This allows adaptation to the complex dynamics of a paint brush and thus greater variation of stroke geometry. Second, stroke reproduction from visual examples is a new method for providing stroke data to the robot. It allows extending the robot's capabilities via a demonstration instead of explicit programming. Third, the experimental improvement of strokes as demonstrated here is a first step towards a self-teaching painting robot that can learn craftsmanship from its own experimentation, guided by human-provided example strokes.

#### *5.1. Stroke Reproduction Results*

An application of stroke reproduction is copying human writing, as shown in Figure 9. The example presented here was performed by the robot using only its optical systems. The prototype writing was photographed, and each individual stroke was extracted and analysed with the method described in Section 4.4. Additionally, the distance between strokes was preserved. Then the robot was able to write the presented text on a separate sheet of paper. In principle, every type of writing or sketch can be reproduced with this method, as long as no self-overlapping strokes are used.

**Figure 9.** A reproduction of writing presented to the robot on the canvas without additional input. Human writing is shown on the **left** and the corresponding robotic reproduction on the **right**.

#### *5.2. Stroke Improvement Results*

Figure 10 shows an example of this approach producing an improved stroke.

The improvement method has approximated the stroke almost perfectly after one improvement iteration: Figure 10f is a difference image of the original stroke and the second attempt. Both strokes match very well in shape and size: the length is exactly the same and start and end points are located in the same position. The overall shape is also identical: the top loop, the long straight section and the bend at the bottom correspond precisely. Deviation can only be seen because of slight variations in stroke width. These occur since the width is not adapted between attempts, as this has proven to disturb trajectory approximation significantly. The general width profile is nevertheless still similar, as the middle part is thicker and width is smallest at the top loop. When comparing the original stroke and the improvement result without the difference image, both strokes are strikingly similar. Hence the method does successfully self-improve its painting technique.

All experiments have led to convergence between stroke prototype and the robot's attempt: while this is not always guaranteed, as random defects in the brush or paint could cause disruption of previously achieved progress, significant errors have not been observed during experiments so far. In general, the robot manages to reduce the average distance between prototype and attempt control points by approximately 14 mm each iteration, which has been achieved consistently in ten different experiments with varying stroke prototypes. The stroke in Figure 10a is about 110 mm tall for reference.

**Figure 10.** The steps performed for the stroke improvement method.


The main advantage of this approach is that no brush model is required to determine how to improve a stroke. The physical experiment is run with the real brush and the resulting stroke is used directly to infer an improved motion plan. As a consequence, the robot can adapt to any kind of disturbance, like a bent brush, unexpected paint behaviour, or other influencing factors. This saves a lot of effort in figuring out what parameters are relevant for the result and how they can be quantified.

The presented method has proven to work with all tested strokes: in every case improvement was observed over the initial reproduction attempt. Convergence was quickly achieved after a few attempts, because once a part of a stroke attempt matches the prototype well, that segment remains unchanged. Due to its simplicity, this model also provides a good baseline for future stroke improvement approaches.

Furthermore, as multiple experiments are run for every stroke, a lot of new strokes are gained for the stroke database. These are variations of the target stroke and can be used to analyse how a slight variation in motion changes the resulting stroke.

The main disadvantage of the experimental approach is that optimizing a stroke costs both time and material: as the robot moves slowly to avoid spilling paint, one optimization run takes at least one minute to complete and can take up to twenty. Hence time is still a major limiting factor for acquiring data about strokes. Furthermore, paper and paint are consumed and must be replenished, which as yet cannot be done automatically.

Another drawback is that only brush movement in the XY plane and pressure are considered as optimization parameters. Brush angle and twist are currently not accounted for. While most strokes can be approximated well enough using a brush being held perpendicular to the canvas, it could be necessary to include angle variation later on.

#### **6. Discussion: Technical Implications**

The feature we have developed that allows the robot to self-improve its strokes is a first step towards a painting process that can accumulate knowledge about the tools and materials being used. The inclusion of such mechanisms will allow the system to move away from the current static painting method and paves the way for more sophisticated paintings. The automatic improvement of a manufacturing process through observation of the results, as we have developed for e-David, might be transferable to industrial applications and make robots more suitable for new tasks. Current robots are able to detect internal failures Visinsky et al. (1994) or wear and tear Trendafilova and Van Brussel (2001) and some metal manufacturing machines can detect tool failure Tansel et al. (1995). These approaches all rely on motor sensor data, but in the case of tools which do not require much force to use, such a method might not be applicable. Hence visual checks of work progress can be useful.

Now that a simple method to improve strokes has been implemented, the generated data can be used in machine learning approaches. This can allow a learning system to predict brush dynamics by learning from past behaviour. Furthermore, a generalization to more complex brush movements or sequences of these can be developed in order to move from single-stroke painting to surface based approaches. Current learning approaches for style transfer Gatys et al. (2015) or artwork generation Elgammal et al. (2017) are pixel based. Moving from the pixel level to applying known discrete stroke or surface features could lead to improved results and mimic human works more closely. A prerequisite for this is to find a way to introduce the stored stroke data into such learning systems.

As a final note, because strokes have so far been handled only as "solid" objects, methods that also consider how to develop a certain internal structure can be highly relevant as well. In conclusion, improving upon these details and incorporating the techniques that create them into the painting process should make e-David paintings more detailed in the future.

#### **7. Discussion of Artistic Implications: Human-Machine Interaction as a Neutral Base for a New Artistic and Creative Practice**

In collaboration with computer engineers, neuroscientists and machine engineers, Grayver has been exploring new methods for the application of paint on canvas, as well as for computer-assisted generation of physical images, and has been using computers and machines in the service of exploring new aesthetic avenues in painting. This work aspires to constitute a novel venue for the establishment of new and innovative ground in contemporary artistic practices.

#### *7.1. Artistic Motivation: The Importance of the Individual Brushstroke*

The whole of artistic activity can be described as an instance of self-regulation. Order in painting is traditionally achieved through the self-regulation of the painter and by external intervention. It is necessary to distinguish between—and balance—those characteristics relevant to the realm of individual artistic perception and those that are external to the artist's motives, intentions and preferences.

Generated data and robotic technologies are tools used in Grayver's artistic practice to explore, retain and express visual information in relation to the digital and machine-based world we live in today. Her work with the e-David painting robot explores the different ways the body and mind perceive not only the visual objects themselves (such as painting), but also the process through which they are created—what is seen as a whole (form) and what is felt as energy (vector). Grayver states:

"During the working process, passive materials (canvas, paper, wood surfaces, etc.) react to my active manipulation of materials upon them; both the passive and active elements are equally and reciprocally important to the process as well as to the finished work. Using and mixing different media in one work creates a rich context in which I explore the tension between marks that are made with bodily gestures and those made with different degrees of technological intervention."

Since 2015 Grayver has been exploring the general contemporary situation of painting and, more specifically, her own practice as a trained painter from a European art academy. She has dedicated herself to the exploration of the technological aspects of painting, returning to the elementary questions of painting, seeking to reflect on the relationship between image and objectness of the medium within the context of our technological era. Grayver states:

"My engagement with the technical conditions of creating images—digital as much as traditional print- and paint-based—has greatly influenced my conceptual understanding of the painterly process in historical and contemporary practices, and has 'left a mark' on the evolution of my own artistic activities. Stimulated by the experience and by the exchange between informatics and the robotic world, I found myself to some degree compelled to challenge and reconceptualise the foundations of my painterly practice, starting with the bodily movement of the single brushstroke all the way to questions concerning control and loss of control in the creative process."

The practice of digital image-making represents a new manner by which images can be created whose sources are not derived from painting or photography, but rather arise through the writing of computer code, and are therefore not based on existing images of things. Such an approach makes it possible to deal with the cultural and psychological implications of our environment through symbols. This particular manner of creating images can of course encapsulate a huge amount of information, emanating from the most diverse sources—for example, fractal models from nature, physical phenomena and mathematical laws—that can then be translated into the visual domain. However, despite the widespread prevalence of digital image-making today, hardly any research has been conducted into the practice of translating images created via a computer simulation into the physical world using brushstrokes.

#### *7.2. Artistic Collaboration*

Since February 2016, Grayver has been collaborating with the e-David Project on the use of robotics as a painterly tool that can assist in the exploration and development of new creative and aesthetic approaches, and even in shaping our understanding of painting. The following describes her use of the robot in her private practice and interpretation of robotic arts. In this collaboration, the robot is used as a painting tool due to its nonhuman capabilities, such as very precise repetition of movements.

The focus of the collaboration has grown from more deterministic approaches of machine-based painting to dealing with contemporary questions regarding artificial intelligence (AI) and deep learning, and their use in the artistic domain. The interdisciplinary working platform between computer scientists and an artist can provoke a large range of questions regarding the use of robotics in the creative process of painting: How does one incorporate the use of computers and machines in the very intuitive and gestural practice of making a painting? How would we decompose the act of making a mark into a body movement (machine), logical decisions (computer) and emotional intentions (the artist)? Subsequently, Grayver established with the e-David team an official plan of collaboration in order to investigate human creativity through the interactive methods of computer-to-machine (simulated to real) and man-to-machine (artist working together with the machine) methodologies.

#### 7.2.1. Composing a Painting from Individual Strokes

When Grayver first witnessed e-David at work during a preliminary visit in January–February 2016, she was fascinated by the paths the robot chose to distribute strokes on the sheet once it began to structure a painting. To a trained painter like Grayver, the robot's stroke placement initially seemed illogical, strange, even arbitrary. But it sparked a curiosity to understand the logic behind it, and illuminated the idea that the nonhuman attributes of robotic painting could cause us to rethink the practice of painting. In other words, to paint in a way that no painter would ever consider; to engage with decisions about forming and deconstructing an image; and to instigate and explore new approaches to structuring task order in the working process.

Through the collaboration, Grayver and the e-David team explored further possibilities to exploit the painting robot creatively and reflected on ideas about the ways in which these could be implemented in the form of software and hardware. A number of questions of wider impact arose as the result of the collaboration: When and why would a semantic method of defining the object in the image be used? Is it an advantage or a disadvantage to paint semantic objects without having a pre-existing cognitive understanding of them? How could we use abstract forms, grammatical structures or mathematical models to achieve more complex surfaces? How would computer language be used to express the intentions of a composition? When and why would different painting styles be used? Further, on a technical level, we had to take into consideration how different materials would react to one another. For example, how could different colours be mixed on the canvas or on the palette? How should the size of the brush be set, and when is it necessary to add glaze? We would have to develop a range of distinct, individual brushstrokes (controlling the velocity and the z-axis) whose characteristics are analogous to those made by human painters in the "real world", in order to be able to pre-define when, in which order and for which tasks each stroke is to be used. In doing so, we are basically defining and categorising singular parameters within a library of painterly "acts" and "perceptions", in order to create a grammatical structure for the "language" of robotic painting.

All of these questions—qualitative technical aspects, creative and aesthetic value, etc.—would need to be defined by the team and saved in the visual feedback of the robot as parameters or as rules. This led us to questions of control: To what degree should the robot's actions be controllable by humans? Should the robot make autonomous decisions? If so, at what stage? How would we evaluate the output of the robot (with such binary values as good/bad, or yes/no?). And how would these evaluations be saved to its memory such that the e-David would be capable of using this information "correctly", in turn enabling it to make new decisions about its actions in the next run?

#### 7.2.2. Making Abstract Painting: Thinking in Vectors Instead of Pixels

The painting series "Just Before it Snaps" (Liat Grayver and the e-David) is an investigation into abstract thought and an experimentation with composition as energy fields formed by configurations of vectors (following Rudolf Arnheim's study of composition in the visual arts). Grayver was looking for the places or "border areas" in which the balance between coincidental and intentional brushstrokes created harmony on the visual surface.

From another point of view, these images were a stage for experimenting with the different painting materials used in the robot lab. Typically in human painting, the materials are controlled by the painter in a sort of interactive "ping-pong" situation. With the e-David robot, however, this is not the case, as all of its actions must be predetermined and given as commands. The robot does not, for example, notice if the paint is dripping or has dried. It is exactly these limitations that are fascinating from an artistic point of view, as they stand in opposition to "normal" thinking and allow for the emergence of new, uncontrolled and surprising brushstrokes.

#### 7.2.3. Grouping Singular Lines into Forms Using Nodes and Centre Points

In the early abstract works done with the e-David in June 2016 ("Just Before it Snaps", see Appendix A, Figure A1) individual painting operations were programmed such that the entire surface of the painting was treated equally (overall composition). Singular lines were used to construct the paintings, with each new line created according to given (programmed) variables. The first line was positioned according to a pre-determined starting point, and the location of each subsequent generated line was calculated in relation to the line painted before it. We had introduced into the system a strategy of dividing the painting into masks of colour areas using brushstroke patterns—sets of individual brushstrokes—in contrast to an approach using singular strokes. Masks were applied to fill in a section one colour at a time, according to pre-defined light and shade characteristics. In this series of paintings, the computer generates a set of strokes that are connected or related to each other due to their proximity of action, corresponding to the painter's bodily movement when performing similar tasks.

For the next step, we created a new set of paintings "Resisting Gravity" (Liat Grayver and e-David, Figure A2), using limited sets of zigzag and straight lines, as well as a grid pattern formed by intersecting brushstrokes. In order to give the patterns an organic and complex surface feel, and to break the precision and mechanical appearance of the repetitions, we defined the specific character for each set according to the following parameters: orientation of the set, curvature of individual lines within the set, centre point of the painted masks, angle of the meeting point of the two lines, number of strokes, and proximity between lines—all of which are subject to a degree of randomness.

This grouping of lines into blocks of paint enabled Grayver to incorporate the concept of a centre point as a parameter for the computer when generating a painting. This way, the brushstroke patterns are generated to be located either around or emanating from a pre-defined position.

In order to avoid the creation of a closed composition with poor visual tension, Grayver defined several centre points in a single painting. By experimenting with different colours and brushstroke characteristics (settings), the centre points can be made to support each other as visual nodes in the painting composition.

"Six Variations on Gestural Computer-Generated Brushstrokes" (Liat Grayver and the e-David, Figure A4), done in October–November 2016, is a series of computer-generated sets of brushstrokes that reflect the quality of spontaneous hand movement inspired by the practice of Japanese calligraphy. Using the e-David, Grayver repainted the same generated path again and again, each time on a new canvas, knowing that this kind of exact repetition of movement could never be achieved by a human hand. Each of the variations is an execution of the same path with an identical velocity. Nevertheless, the works are varied and can be distinguished from one another due to the use of different brushes and changes in the value of the colour, as well as variations in the viscosity of the paint and the number of times the robot was instructed to load the brush with new paint. Some of the variations applied the repetition using a layering method. Sometimes the paint did not have enough time to dry, and so instead of the brush applying a new layer of paint, it actually scraped some of the paint off the canvas, creating some surprising and pleasing surface effects. To distinguish the layers from each other and to give the painting some visual depth, Grayver applied different painting techniques (glaze, colour variation, viscosity variation) and juggled with the information saved on the computer—for example, stopping the robot and restarting it at different points in the process or breaking and reassembling the loop action into fragments.

#### 7.2.4. Perception of Brushstrokes Made by an Unconscious Body

Painting is a practice in which a complex architecture is constructed from separate sections that interact with each other as a whole in the form of a unified composition. While working on the e-David "Self-Portrait" (Figure A3), Grayver became aware of the need to divide the painterly process into different categories, looking into the different paths of the physical act (characteristics of individual brushstrokes) and the cognitive decisions (semantic vs. abstract recognition of geometric forms) that the painter uses in the process of decomposing and reassembling visual information and material elements into a painting. More than that, the ability to save each step in the painting process, and to compartmentalise and conglomerate information and action in different constellations, opens up a new field in the painting domain that explores the space between abstract and figurative painting. Grayver states:

"Saving information in the painting process and creating, when needed, a distance between the painter and the painting (the painter is simultaneously the viewer and the executer) are two features that computer- and robotic-based painting offers the artist. As a painter and a consumer of art I wondered if I would be able to recognise brushstrokes done by a robot in a more complex, generated work. I wanted to play with this idea by generating strokes that appear gestural but are executed in a way that only a machine is capable of doing, namely, with exact repetition."

#### 7.2.5. Traversing the Threshold of Materiality

The work "Traversing the Threshold" (see Figure A5) features a room installation of robotics-assisted calligraphic works that stretch into and expose the temporal and physical space of the artist's creative process through the medium of robotic painting. What could have been executed as one painting constructed of thousands of brushstrokes has instead been decomposed and distributed over numerous sheets of rice paper.

The individual paper works are extracted from a complex of computer-generated particles ("Simulation of a World Overview") simulated according to Newton's law of gravitation. Scaled to different sizes, each can be viewed not only as an individual work but also as part of the modular wall installation. In the creation process, Grayver cropped different sections of the master particle generator and translated the individual particles into single brushstrokes (assigning parameters such as the size, length, pressure and speed variation of the strokes) before sending them to the e-David robot for the final execution.
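A translation of this kind, from simulated particles to stroke parameters, can be sketched as follows. The `Particle` and `Stroke` fields, the scaling constants, and the mapping rules are illustrative assumptions for the sketch, not the actual parameters Grayver used:

```python
from dataclasses import dataclass

@dataclass
class Particle:
    x: float       # position in simulation space
    y: float
    speed: float   # magnitude of the particle's velocity
    mass: float

@dataclass
class Stroke:
    x: float             # start position in canvas coordinates (mm)
    y: float
    length_mm: float
    pressure: float      # normalised 0..1
    velocity_mm_s: float

def particle_to_stroke(p: Particle, scale: float = 10.0) -> Stroke:
    """Map one simulated particle to a single brushstroke.

    Illustrative rules: heavier particles press harder; faster
    particles yield longer, quicker strokes.
    """
    return Stroke(
        x=p.x * scale,
        y=p.y * scale,
        length_mm=5.0 + p.speed * scale,
        pressure=min(1.0, p.mass / 10.0),
        velocity_mm_s=20.0 + p.speed * 50.0,
    )
```

The point of the sketch is the shape of the pipeline: each particle becomes one stroke record that can be handed to the robot for execution.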

The fragility of the ink-infused rice paper works, in particular, stands in sharp contrast to the industrial robot used to create them. As in Japanese calligraphy, the brush trajectories and the ink's behaviour as it penetrates the surface are of far greater importance than the perception of the object itself.

#### **8. Conclusions**

The e-David project is currently a rare fusion of technology and art. The original design goal of robots, namely to perform repetitive tasks at high speed, presents limitations which we seek to circumvent through the new methods developed for e-David. Our novel techniques of brush calibration and self-improvement integrate tools that are imprecise by nature into a framework of robotic precision. This allows e-David to be used as more than just a remote-controlled brush and provides a base from which painting technique can be understood and automated.

A focus on single brushstrokes, or the painting of small features in general, allows the artists working with e-David to operate at a much higher level and to forgo very low-level programming of the machine. The brushstroke, in its various manifestations, is the singular tool of communication encountered in paintings and drawings throughout all epochs. Our driving motivation in this cross-disciplinary artistic research is to study painting from the perspective of its most essential act, i.e., the process of making a line, as opposed to studying the painting itself (the artistic object). Hence we have placed a special focus on single brushstrokes in our research.

e-David's new capabilities in the domain of painting technique, together with the collaboration with artists, are moving the entire project closer towards producing both a robotic painter and a mechanised assistant for the human artist.

#### *Future Work*

For the future development of the project we envision three main areas of research:

So far we have established single strokes as primitives for the painting process. In the next iterations of the software we will recombine these in certain patterns to fill surfaces with different structures. This will transfer yet more control of the painting process to the robot, thereby impacting the artwork created and how artists can use the machine.

The stroke experimentation creates a dataset of robot movements and the associated stroke. We will extend this dataset by enriching it with more information and by letting the robot collect new strokes for long periods of time. We will also explore how the robot can more efficiently perform its own experiments to streamline data acquisition.

The collected stroke data will form a basis for using machine learning (ML) techniques in the future. While the usefulness of ML for e-David must be evaluated closely, some promising and applicable approaches exist: Gordon et al. have developed a method that allows a learning agent to explore those aspects of a task about which it has very little knowledge (Gordon and Ahissar 2011, 2012). This is a natural extension of the experimentation with strokes conducted in this study, and will allow the robot to make more directed trials.

#### **9. Additional Information: Detailed Technical Description of the Painting Setup**

The current painting setup and operating principle of all e-David machines was originally designed and built by Deussen and Lindemeier, who initiated the project in 2008 (Deussen et al. 2012; Lindemeier et al. 2013). The description given here covers both their previous development efforts and recent additions to them.

#### *9.1. Painting Setup*

The schematic layout of an e-David painting machine is shown in Figure 11. We place a robotic arm in front of a canvas, which acts as the workpiece. The canvas is angled such that singularities are avoided and the working envelope of the machine is used optimally. The tool used by the robot is a brush of known length. The TCP (tool centre point) of the robot is calibrated to lie exactly at the tip of the brush. The feedback camera is placed such that it has a full view of the canvas. Because the robot stands between the camera and the canvas, the arm must move aside whenever a feedback photo is taken. We also set up a table for painting accessories next to the arm, where paints, exchangeable tools and a brush washing device are located. The washing device provides a water jet in which the robot holds the brush hairs to clean them before picking up a new paint, which avoids cross-contamination.

Paints are held in small steel containers, which also provide an edge for the robot to wipe off excess paint. Currently there are no sensors to supervise the amount of paint remaining, but the 30 mL containers are usually sufficient for painting overnight. Paints are premixed by the operators and refreshed regularly.

The painting surface is mounted on a steel frame, which in turn is bolted to the robot's base plate. This ensures rigidity and avoids the need for frequent recalibration of the workpiece location. While the machine never applies significant force to the surface, humans tend to lean on it while inspecting progress or cleaning. Previous wooden frames moved too much, and even slight deviations would cause inconsistent stroke widths.

**Figure 11.** Schematic (**Left**) and actual (**Right**) layout of e-David.

#### *9.2. Robots*

We use six-axis industrial robots for e-David, as these provide a large degree of flexibility in their use. XY-plotters have also been considered, but much more hardware effort would be required to implement all necessary motions with such a machine. For example, the robotic arm is able to wipe its brush on the paint container after dipping the brush in it, in order to avoid dripping paint. This is a complex motion in which the brush follows a trajectory through dozens of points at varying tool orientations and velocities. An equivalent XY-plotter would require at least five axes to perform such a motion. Furthermore, robotic arms are a mature technology, widely used in industry, allowing us to use common design patterns in our setup. The machines provide very good accuracy (±0.01 mm) and are very reliable.

Two robots are currently in use: The Reis RV-20 6 is a welding robot with a range of 2800 mm and a weight of 875 kg (Reis 2012). It is a traditional manufacturing robot, suitable for the production of cars or similarly sized objects. This allows the machine to work on large paintings, but makes the device unsuitable for transport to exhibitions.

The ABB IRB 1200 is a general-purpose robot with an emphasis on a small form factor and high-speed operation. Unlike classical industrial robots, it is also suitable for medical applications or food processing. We bought this device for its comparatively low weight of only 54 kg, as this makes moving it easy, given the right equipment (ABB 2017b). The robot is attached to a 200 kg steel plate, which can be split into four pieces for transport. Figure 12 shows the robot being exhibited in Zürich.

**Figure 12.** A mobile version of e-David being exhibited in Zürich.

Both machines need to be programmed in a manufacturer-specific programming language, which is RobotSTAR for the RV-20 and RAPID for the IRB 1200. We implement a network communication interface, which allows both machines to receive commands from a control computer.
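Such a command interface can be sketched as follows, assuming for illustration a simple newline-terminated ASCII wire format; the actual on-robot languages (RobotSTAR, RAPID) and e-David's real protocol are manufacturer-specific and differ from this:

```python
import socket

def encode_move(x: float, y: float, z: float) -> bytes:
    """Serialise one move command as a newline-terminated ASCII line.

    The 'MOVE x y z' format is a hypothetical example, not e-David's
    actual wire protocol.
    """
    return f"MOVE {x:.2f} {y:.2f} {z:.2f}\n".encode("ascii")

def send_command(host: str, port: int, payload: bytes) -> None:
    """Open a TCP connection to the robot controller and send one command."""
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(payload)
```

Keeping serialisation separate from transport lets the same command encoding be reused for both robots, with only the per-manufacturer receiver code differing.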

The robot uses four coordinate systems internally: The *base coordinate system* has its origin located at the lowest point in the centre of the robot's base. The *world coordinate system* serves as the basic reference point for all other coordinate systems. It can be used to define a robot working cell, but in this case its origin is set to coincide with the base coordinate system. The *tool coordinate system* has its origin at the current tool centre point, or TCP. It defines the tool's orientation and can be used for moving a tool with constant orientation. The *work object coordinate system* defines the location and orientation of a work object onto which the tool is applied; in the case of a painting robot, this is the canvas. Hence, if the work object coordinate system is defined to be in a corner of the canvas, points on the canvas can be referenced by their XY-coordinates in the corresponding work object coordinate system (ABB 2017a). Thanks to this mechanism, the painting software can use canvas coordinates that are easily transferable to the robot. An overview of all systems can be seen in Figure 13.

**Figure 13.** The coordinate systems used by e-David.
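The mapping from canvas (work object) coordinates into the robot's base frame can be sketched in two dimensions as follows. A real work-object frame is a full 3D pose; the origin and rotation angle here are illustrative assumptions for the sketch:

```python
import math

def canvas_to_base(x_c: float, y_c: float,
                   origin: tuple, angle_deg: float) -> tuple:
    """Transform a point from canvas coordinates to the robot base frame.

    Assumes (for illustration) that the canvas plane is described by its
    origin in base coordinates plus a rotation about the base Z axis.
    """
    a = math.radians(angle_deg)
    x_b = origin[0] + x_c * math.cos(a) - y_c * math.sin(a)
    y_b = origin[1] + x_c * math.sin(a) + y_c * math.cos(a)
    return (x_b, y_b)
```

This is the convenience the work object coordinate system buys: the painting software works entirely in canvas coordinates, and the controller performs the equivalent transform internally.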

The canvas coordinate system originates in the top left corner of the canvas. The X and Y axes lie in the image plane, with the X axis being the horizontal axis. The Z axis is perpendicular to the image plane. Z is zero on the canvas surface and is positive behind the canvas. The Z axis points away from the robot and is used to control the brush application pressure. A Z value of zero causes the brush tip to barely touch the canvas and increasing Z increases the application pressure. We limit Z for each brush to a known maximum pressure to avoid inadvertently breaking the tool.
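The per-brush pressure limit amounts to a simple clamp on the commanded Z value; the brush names and maximum depths below are hypothetical examples, not e-David's actual calibration data:

```python
# Hypothetical per-brush limits: how far (in mm) each brush may be
# pushed beyond first contact with the canvas (Z = 0).
BRUSH_MAX_Z = {
    "round_small": 2.0,
    "flat_wide": 4.0,
}

def clamp_pressure(brush: str, z: float) -> float:
    """Limit the commanded Z depth to the brush's safe range.

    Z = 0 means the tip barely touches the canvas; larger Z increases
    application pressure, so clamping prevents breaking the tool.
    """
    return max(0.0, min(z, BRUSH_MAX_Z[brush]))
```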

Note that neither the Reis nor the ABB robot can be used collaboratively with humans. Both machines operate at high velocities and can produce a dangerous amount of force. They cannot detect a collision with a human and are thus unable to limit the forces applied to a body part in their path. Hence the robots are kept behind light barriers which exclude humans from their working range while they operate in automatic mode. In manual mode, their speed is limited and the operator must use a dead man's switch when moving them. Collaborative robots are not planned to be included in the project, as they are expensive and, as of now, there are no safety certifications for pointed tools such as brushes.

#### *9.3. Optical Feedback System*

High-quality DSLRs are used by the e-David system to acquire information about the canvas. Currently a Canon EOS 70D with a 20-megapixel sensor and a Sony Alpha 6300 with a 24-megapixel sensor are included in the setup. We use gphoto2 to transfer the images via a USB connection to the control computer. Transfer and analysis of a photo for feedback purposes can take up to a minute. However, since this time period allows the paint to dry, there is currently no need to optimise in this area.
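A feedback capture step driven by the gphoto2 command-line tool can be scripted roughly as follows; building the argument list in its own function keeps the sketch testable without a camera attached:

```python
import subprocess

def capture_command(filename: str) -> list:
    """Build the gphoto2 invocation that triggers a capture on the
    connected camera and downloads the image to the given file."""
    return ["gphoto2", "--capture-image-and-download",
            "--filename", filename, "--force-overwrite"]

def take_feedback_photo(filename: str) -> None:
    """Run the capture; requires a supported camera on USB."""
    subprocess.run(capture_command(filename), check=True)
```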

**Author Contributions:** J.M.G. developed the new methods for brush handling and wrote this paper. O.D. initiated the e-David project and developed the painting process with Thomas Lindemeier. L.G. uses the machine for artistic purposes and wrote the section about artistic implications.

**Funding:** This research received no external funding.

**Acknowledgments:** We thank Carla Avolio for proofreading this paper. We also thank the anonymous reviewers for their feedback and Calvin Li for his assistance.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

TCP Tool Centre Point

ML Machine Learning

#### **Appendix A. High Resolution Pictures of e-David Artwork**

**Figure A1.** "Just Before it Snaps", Acrylic on canvas, 30 cm × 40 cm, 2016, © Liat Grayver.

**Figure A2.** "Resisting Gravity in Blue and Red", Acrylic on canvas, 30 cm × 40 cm, 2016, © Liat Grayver.

**Figure A3.** "e-David Self-portrait", Acrylic on canvas, 60 cm × 80 cm, 2016, © Liat Grayver.

**Figure A4.** "Six Variations on Gestural Computer-Generated Brushstroke", six robotic paintings, Acrylic on canvas, 60 cm × 80 cm each, 2016. Exhibition view: "Pinselstriche im digitalen Zeitalter: Interdisziplinäre Forschung in Malerei & Robotik" at Halle 14, Spinnerei Leipzig, February 2017. Liat Grayver, photo © Marcus Nebe.

**Figure A5.** "Simulation of a World Overview", exhibition view: "Traversing the Threshold", room installation of robotics-assisted calligraphic works and videos in collaboration with the e-David project (University of Konstanz) and video artist Marcus Nebe. Exgirlfriend gallery, Berlin, 2018. Photo © Gabrielle Fougerousse and Exgirlfriend Gallery.

#### **References**

ABB. 2017a. Operating manual: IRC5 with FlexPendant for RobotWare 6.05. In *ABB Document ID 3HAC050941*. Zürich: ABB Download Center.

ABB. 2017b. Product specification IRB 1200. In *ABB Document ID 3HAC046982*. Zürich: ABB Download Center.


Baruch, Orit. 1988. Line thinning by line following. *Pattern Recognition Letters* 8: 271–76. [CrossRef]


Hill, Donald R. 1991. Mechanical engineering in the medieval near east. *Scientific American* 264: 100–5. [CrossRef]

International Federation of Robotics. 2016. World Robotics Report 2016. Available online: https://ifr.org/ifrpress-releases/news/world-robotics-report-2016 (accessed on 15 March 2018).


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
