Article
Peer-Review Record

Region-Based Approaches in Robotic Painting

by Jörg Marvin Gülzow * and Oliver Deussen
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 4: Anonymous
Submission received: 24 June 2022 / Revised: 28 July 2022 / Accepted: 4 August 2022 / Published: 11 August 2022
(This article belongs to the Collection Review of Machine Art)

Round 1

Reviewer 1 Report

The manuscript introduces region-based approaches to robotic painting. The point of departure for this manuscript is that path-based strokes, which are typically used as the basic building blocks in painterly rendering techniques, suffer from two downsides when mapped to robot drawing systems: (i) a planned stroke path is not always realizable and depends heavily on the tool used and environmental factors, and (ii) stroke-based rendering usually works frame-to-frame and does not take painting results deviating from the plan into account (cf. p. 1-2, l. 36-41). To address these drawbacks, the authors analyze the work of four artists and propose regions, in contrast to strokes, as a basic primitive to achieve more human-like results and to make the painting process more modular.

In terms of topic, the manuscript is well suited to the Journal. In terms of methodology and presentation, the authors are advised to address the following remarks.

Remarks:

1. “The generation of an adaptive paint plan” (announced on p. 2, l. 58-59) is not explained in sufficient detail in the manuscript. Thus, constrained regions are described as general polygons which can have “concave” and “convex” regions. The authors provide an example in Fig. 8. However, it would be beneficial for the reader if the authors explained in more detail how the critical (i.e. red and blue) regions were actually generated. At least the following questions are of interest:
- 1.1. How does the system determine the number of “critical” regions?
- 1.2. How does the system determine the shapes of critical regions? E.g., in Fig. 8 all blue regions except one are rectangles; the remaining one is a parallelogram. Why?
- 1.3. How does the system determine the dimensions of critical regions? The authors state the following: “Single-side constrained regions (SSCR) are mostly rectangular areas that have one straight or slightly curved side which must be painted exactly. All other sides are considered to be unconstrained and may be painted over” (p. 10, l. 366-368). However, the other sides are not really unconstrained: e.g., their dimensions depend on the dimension of the polygon, the brush tip overshoot, etc.
- 1.4. How does the system determine the order in which critical regions are to be painted?

2. Related to the formula for a line-circle intersection (cf. p. 11-12):
- 2.1 The original contribution of the authors should be clearly distinguished from the contribution of Rhoad et al. (1991, as cited on p. 11, l. 380).
- 2.2 The authors explicitly denote the tool center point (i.e., point O) in Fig. 9. However, they define the canvas C as a line through points p and q (not a plane or surface, cf. p. 11, l. 358), which makes the interpretation of the figure (and related formulae) more difficult. E.g., the tool angle is defined relative to the canvas normal, but Fig. 9 denotes neither the canvas normal nor the tool angle (cf. p. 11, Table 1). Thus, it would be helpful if this figure denoted the canvas surface (or normal), all the parameters defined in Table 1, and points p, q, beta_1, beta_2.
- 2.3. Please explain in more detail the formulae for the calculation of p, q, beta_1, beta_2. E.g., from the given definitions for p and q, it might be concluded that p_y is equal to q_y, i.e., that d_y is equal to zero. Is this true? (If so, why do you introduce d_y?) In any case, please explain the calculation of the above parameters in more detail.
- 2.4. What do the parameters x_1/2 and y_1/2 represent?
- 2.5. What is the difference between vectors F and f?

3. Related to the safety offset and small increments:
- 3.1 For single-side and two-side constrained regions: The authors use “a safety offset as an initial starting point and then use visual feedback to approach the target line iteratively” (p. 12, l. 396-397, cf. also p. 13, l. 417). Please explain how you iteratively adapt the safety offset.
- 3.2 For two-side constrained regions: the authors state that “the stroke is moved in parallel to the edge in small increments while the distance to the tip is measured” (p. 13, l. 418-419). How do you determine the actual values of these small increments?
 
4. In general, the processing of structured and gestural regions is not explained in sufficient detail.
In addition, the notion of “predefined complex stroke movements” (p. 14, l. 469) is not clearly introduced. From the examples provided in Fig. 17-19, it might be concluded that the processing of structured and gestural regions can be conceptualized as consisting of two stages: outlining with constrained regions in the first stage, and processing the outlined region as if its edges were unconstrained in the second stage. Is this so? Please explain.

5. The following statements are not supported in the manuscript:
- 5.1. “A small set of versatile operations can be used to let robots produce a large variety of artworks” (p. 2., l. 55-56).
- 5.2. “The availability of more complex primitives also simplifies the painting process as a whole, since more types of abstraction can be built upon them” (p. 2., l. 56-58).

Author Response

1. “The generation of an adaptive paint plan” (announced on p. 2, l. 58-59) is not explained in sufficient detail in the manuscript. Thus, constrained regions are described as general polygons which can have “concave” and “convex” regions. The authors provide an example in Fig. 8. However, it would be beneficial for the reader if the authors explained in more detail how the critical (i.e. red and blue) regions were actually generated. At least the following questions are of interest:
- 1.1. How does the system determine the number of “critical” regions? 
   -> The number of regions is determined by the structure of the polygon. The new explanation makes this more clear.
- 1.2. How does the system determine the shapes of critical regions? E.g., in Fig. 8 all blue regions except one are rectangles; the remaining one is a parallelogram. Why? 
    -> We have added a detailed description of how regions are computed, based on the tool used for painting. The parallelogram was a rendering error and has been fixed.
- 1.3. How does the system determine the dimensions of critical regions? The authors state the following: “Single-side constrained regions (SSCR) are mostly rectangular areas that have one straight or slightly curved side which must be painted exactly. All other sides are considered to be unconstrained and may be painted over” (p. 10, l. 366-368). However, the other sides are not really unconstrained: e.g., their dimensions depend on the dimension of the polygon, the brush tip overshoot, etc.
    -> Added a paragraph explaining how the input polygon must have features within certain limits given by the available painting tools. The minimum achievable stroke width with the current tool dictates what is paintable and what is too fine. In the description of how regions are constructed, we added a remark about avoiding overshooting in these circumstances.
- 1.4. How does the system determine the order in which critical regions are to be painted? 
    -> Painting order is not important, since no constrained line is violated by another. Added a remark clarifying this at the end of the section on constrained regions.

2. Related to the formula for a line-circle intersection (cf. p. 11-12):
- 2.1 The original contribution of the authors should be clearly distinguished from the contribution of Rhoad et al. (1991, as cited on p. 11, l. 380).
  -> Added a footnote detailing which equations come directly from this source.
- 2.2 The authors explicitly denote the tool center point (i.e., point O) in Fig. 9. However, they define the canvas C as a line through points p and q (not a plane or surface, cf. p. 11, l. 358), which makes the interpretation of the figure (and related formulae) more difficult. E.g., the tool angle is defined relative to the canvas normal, but Fig. 9 denotes neither the canvas normal nor the tool angle (cf. p. 11, Table 1). Thus, it would be helpful if this figure denoted the canvas surface (or normal), all the parameters defined in Table 1, and points p, q, beta_1, beta_2.
  -> Added a coordinate system to figure 9 to indicate this more clearly. Added other missing parameters and split the figure into a part with and without brush deflection to declutter it. Added an angle marker between canvas normal and tool axis.
- 2.3. Please explain in more detail the formulae for the calculation of p, q, beta_1, beta_2. E.g., from the given definitions for p and q, it might be concluded that p_y is equal to q_y, i.e., that d_y is equal to zero. Is this true? (If so, why do you introduce d_y?) In any case, please explain the calculation of the above parameters in more detail.
  -> For the given horizontal canvas this is true. However, since we might want to compute the slippage on an angled canvas (e.g. useful for machines without a work object coordinate system), we leave in d_y for extensibility.
- 2.4. What do the parameters x_1/2 and y_1/2 represent?
  -> They are the coordinates of \beta_1 and \beta_2, the intersection points of the canvas with the brush arc (a generic intersection sketch follows this list). We erroneously defined \beta_2 = (x_1; y_1), which should actually be \beta_2 = (x_2; y_2).
- 2.5. What is the difference between vectors F and f? 
  -> Using \vec{f} was a typographical error, which has been addressed: f_x and f_y are the coordinates of point F, with \vec{F} now being the vector to F.
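For readers who want to reproduce the geometry, the following is a minimal sketch of a generic 2D line-circle intersection in the form cited from Rhoad et al. (1991). The names p, q, center, and r are illustrative stand-ins for the canvas points, the tool center point O, and the brush length; this is an assumption-based sketch, not the authors' actual implementation or parameterization.

```python
import math

def line_circle_intersections(p, q, center, r):
    """Intersect the line through points p and q with a circle of radius r.

    The circle stands in for the brush arc around the tool center point;
    the returned points correspond to the intersection points beta_1 and
    beta_2 of the canvas line with that arc.  Standard formula as given by
    Rhoad et al. (1991) / the usual circle-line intersection references.
    """
    # Shift coordinates so the circle center lies at the origin.
    x1, y1 = p[0] - center[0], p[1] - center[1]
    x2, y2 = q[0] - center[0], q[1] - center[1]

    dx, dy = x2 - x1, y2 - y1
    dr2 = dx * dx + dy * dy              # squared length of the line direction
    D = x1 * y2 - x2 * y1                # 2D cross product of the two shifted points

    disc = r * r * dr2 - D * D           # discriminant: negative means no intersection
    if disc < 0:
        return []

    sgn_dy = -1.0 if dy < 0 else 1.0
    root = math.sqrt(disc)
    points = []
    for sign in (1.0, -1.0):
        x = (D * dy + sign * sgn_dy * dx * root) / dr2
        y = (-D * dx + sign * abs(dy) * root) / dr2
        points.append((x + center[0], y + center[1]))  # shift back to canvas coordinates
    return points

# Example: canvas along y = 0, tool center point 30 mm above it, brush length 40 mm.
beta_1, beta_2 = line_circle_intersections((-100.0, 0.0), (100.0, 0.0), (0.0, 30.0), 40.0)
```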

3. Related to the safety offset and small increments:
- 3.1 For single-side and two-side constrained regions: The authors use “a safety offset as an initial starting point and then use visual feedback to approach the target line iteratively” (p. 12, l. 396-397, cf. also p. 13, l. 417). Please explain how you iteratively adapt the safety offset.
  -> The paragraph has been reworked and a detailed explanation of the steps taken to iterate on a line has been added (a schematic sketch of such a feedback loop follows this list). We also extended Figure 10 with an annotated difference image taken from the optical system to make clearer what is being measured.
- 3.2 For two-side constrained regions: the authors state that “the stroke is moved in parallel to the edge in small increments while the distance to the tip is measured” (p. 13, l. 418-419). How do you determine the actual values of these small increments?
  -> Replaced "small increment" with "half of the expected stroke width" which is how the increment is actually computed.
 
4. In general, the processing of structured and gestural regions is not explained in sufficient detail.
In addition, the notion of “predefined complex stroke movements” (p. 14, l. 469) is not clearly introduced. From the examples provided in Fig. 17-19, it might be concluded that the processing of structured and gestural regions can be conceptualized as consisting of two stages: outlining with constrained regions in the first stage, and processing the outlined region as if its edges were unconstrained in the second stage. Is this so? Please explain.
  -> Added a better definition of complex movements, their different use case compared to the other introduced regions, and why a distinction is required.

5. The following statements are not supported in the manuscript:
- 5.1. “A small set of versatile operations can be used to let robots produce a large variety of artworks” (p. 2., l. 55-56).
  -> Addressed this by pointing out stippling (extended to lines), where a small set of primitives is used to draw an image.
- 5.2. “The availability of more complex primitives also simplifies the painting process as a whole, since more types of abstraction can be built upon them” (p. 2., l. 56-58).
  -> Mentioned CAD/CAM as an example where complex machining operations can be collected into one larger operation which is then planned automatically, e.g. bolt circles (a toy illustration of such a macro follows below).
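Purely to illustrate the CAD/CAM analogy (none of this appears in the manuscript): a single high-level "bolt circle" operation can be expanded automatically into many simple drill positions, just as a higher-level painting primitive can be planned from simpler ones. All names below are invented for this example.

```python
import math

def bolt_circle(center, radius, count, start_angle=0.0):
    """Expand one high-level 'bolt circle' operation into individual drill
    positions, evenly spaced on a circle around the given center."""
    cx, cy = center
    return [
        (cx + radius * math.cos(start_angle + 2.0 * math.pi * i / count),
         cy + radius * math.sin(start_angle + 2.0 * math.pi * i / count))
        for i in range(count)
    ]

# One call plans six drilling operations automatically.
holes = bolt_circle(center=(0.0, 0.0), radius=25.0, count=6)
```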

We thank the reviewer for the precise and detailed feedback, which made it easy to update the manuscript :)

Reviewer 2 Report

The paper discusses a method for painting with AI using regions to mimic a typical human painting style.

The paper is well written and includes an in-depth background which builds into a methodology and testing.

One thing that is missing is a comparative analysis of different AI methods to achieve regional action planning, i.e., fuzzy logic vs. pre-programmed actions. This would be more interesting than some initial findings of one mode of achieving/testing a particular method.

It is unusual to have figures in a conclusion/discussion. In this section, you recap your findings. The figures and corresponding text belong in the results or analysis, not the discussion. 

In the abstract and conclusion, please state the novelty of this research, how it builds on the current state of the art and the key contributions to existing knowledge as these are not particularly obvious in the paper. 

An image of the robot would also be beneficial to the reader.

Author Response

One thing that is missing is a comparative analysis of different AI methods to achieve regional action planning, i.e., fuzzy logic vs. pre-programmed actions. This would be more interesting than some initial findings of one mode of achieving/testing a particular method.
  -> While working on this, we studied different approaches in this area and found few which are suitable for robotic painting. For example, some studies exist about path planning for region coverage in aerial imaging or agriculture. Ground-based robots which perform scanning or cleaning tasks have received some attention, but the methodologies used are different from painting, since, for example, walking outside of the target area or following a specified structure are not necessarily detrimental to the task's outcome. In the domain of action planning we found no incremental method which allows us to slowly move towards but not exceed a constraint. Hence, we developed a specific method which allows us to produce better results within our problem domain. Later, work on e.g. reinforcement learning agents could be done which then uses our current approach as a baseline to improve upon, e.g. by finding behaviour which is more adapted to the angle of a TSCR. Also, in order to imitate the features we derived from the observation of human-made paintings, the current approach seems sufficient. We have added a note to the Future Work section that a more general approach would be desirable.

It is unusual to have figures in a conclusion/discussion. In this section, you recap your findings. The figures and corresponding text belong in the results or analysis, not the discussion. 
  -> Moved the figures out of the discussion and made the LaTeX layout stricter.

In the abstract and conclusion, please state the novelty of this research, how it builds on the current state of the art and the key contributions to existing knowledge as these are not particularly obvious in the paper. 
-> Added a conclusion section which sums up the novel developments.

An image of the robot would also be beneficial to the reader.
-> Added images of the system to the appendix.

We thank the reviewer for the detailed feedback, which made it easy to update the manuscript :)

Reviewer 3 Report

1) All equations must be numbered.
2) All parameters used in the equations must be presented in detail.
3) Figure 10 requires more explanation.
4) Figure 11 requires more explanation.
5) Figure 12 requires more explanation.
6) Figure 13 requires more explanation.
7) Figure 14 requires more explanation.
8) Figure 15 requires more explanation.
9) Figure 16 requires more explanation.
10) Figure 17 requires more explanation.
11) Figure 18 requires more explanation.
12) Figure 19 requires more explanation.
13) The "Discussion" section needs to be expanded and developed. Here will be presented in detail all the new aspects introduced by the authors in the paper, compared to those of other existing works (from the literature). Emphasis will be placed on aspects of pictorial rendering techniques with their transfer to the machine (robot), as well as on how region-based techniques in the real world are transferred in an automatic context, the different types of region primitives being used as procedures for car-painted-David.
14) Enter a "Conclusions" section before "Future Work".


Author Response

1) All equations have now been numbered.
2) All parameters used in the equations must be presented in detail. -> Updated Figure 9 with all relevant parameters.
3) Figure 10 requires more explanation. -> Added more detailed explanation of the procedure and updated 10a to match the stroke progression and target line orientation.
4) Figure 11 requires more explanation.
5) Figure 12 requires more explanation.
6) Figure 13 requires more explanation.
7) Figure 14 requires more explanation.
8) Figure 15 requires more explanation.
9) Figure 16 requires more explanation.
10) Figure 17 requires more explanation.
11) Figure 18 requires more explanation.
12) Figure 19 requires more explanation.
-> All figures starting from Figure 10 have received expanded captions explaining what is going on and expanding on some details which did not fit into the main text.

13) The "Discussion" section needs to be expanded and developed. Here will be presented in detail all the new aspects introduced by the authors in the paper, compared to those of other existing works (from the literature). Emphasis will be placed on aspects of pictorial rendering techniques with their transfer to the machine (robot), as well as on how region-based techniques in the real world are transferred in an automatic context, the different types of region primitives being used as procedures for car-painted-David.
  -> The discussion section has been fully reworked to look at each primitive we introduced, weighing benefits and limitations. We also related them to their origins in the real world and compared them to other works.
14) Enter a "Conclusions" section before "Future Work".
    -> Added a conclusion section which summarizes all new elements developed in the paper.

We thank the reviewer for the precise and detailed feedback, which made it easy to update the manuscript :) 

Reviewer 4 Report

The manuscript entitled "Region-based Approaches in Robotic Painting" explores the problem of robotic painting based on the use of region-based basic primitives instead of strokes. Before proceeding, as a researcher in the field of Computer Science, I can only assess the clarity, readability, and structure of the manuscript, as well as the feasibility of the proposed approach from the implementation side on the robot. Apart from that, the manuscript is well written and presents an interesting solution. The proposed method seems to be feasible and to provide a solution for painting in border regions.

The manuscript contains some minor graphical errors:
1. The positions and references to the literature are not consistent and do not seem to be adapted to the template.
2. Figure 19. d) overlaps the caption of sub-figure b).

There are also some questions regarding the proposed method and manuscript:
3. Did the authors perform an adequate error analysis to determine what errors the machine makes in the examples in Figures 13-17?
4. In Figure 19. d), a tree painting is shown with a spiral gesture primitive. Why is this so? What is the connection between said painting and the manuscript in general? This should be clearer from the text.

As for future work, I would suggest using existing simulators and methods from robotics (ROS, Gazebo, etc.) for planning. For the possible solution to the problem shown in Figure 7, a gradual change in inclination and pressure could cancel out the effects of tool contact.

Author Response

The manuscript entitled "Region-based Approaches in Robotic Painting" explores the problem of robotic painting based on the use of region-based basic primitives instead of strokes. Before proceeding, as a researcher in the field of Computer Science, I can only assess the clarity, readability, and structure of the manuscript, as well as the feasibility of the proposed approach from the implementation side on the robot. Apart from that, the manuscript is well written and presents an interesting solution. The proposed method seems to be feasible and to provide a solution for painting in border regions.

The manuscript contains some minor graphical errors:
1. The positions and references to the literature are not consistent and do not seem to be adapted to the template.
  -> Fixed punctuation inconsistencies.
2. Figure 19. d) overlaps the caption of sub-figure b).
   -> Fixed

There are also some questions regarding the proposed method and manuscript:
3. Did the authors perform an adequate error analysis to determine what errors the machine makes in the examples in Figures 13-17?
   -> Added a table in the results section detailing the errors made by our method for painting constrained regions.
4. In Figure 19. d), a tree painting is shown with a spiral gesture primitive. Why is this so? What is the connection between said painting and the manuscript in general? This should be clearer from the text.
   -> Added more clarification in the image caption explaining why this was done (to show the use case of replacing a gestural motion to achieve variations of a motif without regenerating the underlying structure).

As for future work, I would suggest using existing simulators and methods from robotics (ROS, Gazebo, etc.) for planning. For the possible solution to the problem shown in Figure 7, a gradual change in inclination and pressure could cancel out the effects of tool contact.
-> Unfortunately, it is not possible to adequately simulate brush physics in ROS or Gazebo. The robot used is an ABB IRB 1660ID, which has built-in motion planning. However, brush handling is the main focus of our research and is not available in other software. Changing inclination and pressure also changes the footprint of the brush and can cause the tool to over- or undershoot significantly. While it is possible to measure and model these effects, they depend on tool, paint, angle and pressure, which makes calibration tricky. Instead, the methods described here allow us to manage these brush artifacts in a similar way to human painters.

We thank the reviewer for the precise and detailed feedback, which made it easy to update the manuscript :)

Round 2

Reviewer 1 Report

The authors have adequately addressed my remarks from the previous review round and I believe that the manuscript has been sufficiently improved to warrant publication in Arts.
