**3. Calibration**

Projection mapping systems use at least one camera and one projector, so that the camera can detect and adjust the image from the projector [5]. In order for an image projected onto the real object to be properly visualized, a series of geometric and photometric calibration algorithms are required [5]. This section shows the evolution of the calibration algorithms designed for projection mapping projects (Figure 5).

**Figure 5.** Calibration methods used in the studies shown in this review article.

The technique of Aliaga et al. began by acquiring a geometric model of the object to be restored and properly calibrating the projectors using the self-calibration method they described in [19,22]. Subsequently, the image of the object was captured and restored using an interactive energy minimization algorithm. Finally, the projection image that would be projected onto the object was calculated [19].

The result was a self-calibrated structured-light method that processes data from multiple viewpoints to obtain a 3D reconstruction of the object. For the correspondence between pixels, they exploited the duality between cameras and projectors [22].

For the development of Revealing Flashlight, Ridel et al. used the calibration method that Audet et al. had proposed in 2010 [11,23]. Audet et al. had developed an alignment algorithm between camera, object and projector [23]. Their method comprises two models, a geometric model and a colour model, which provide the information needed to predict how the projected image is formed on the camera sensor. Starting from the pinhole camera model, they obtained Equations (1) and (2), which define the projection in the image plane of a camera placed at the origin and of a calibrated projector, respectively, for a point *x*<sub>s</sub> located in the plane of the surface [23].

$$\mathbf{x}\_c = \mathbf{K}\_c(\mathbf{I}\,\mathbf{x}\_s + \mathbf{0}),\tag{1}$$

$$\mathbf{x}\_p = \mathbf{K}\_p(\mathbf{R}\_p\,\mathbf{x}\_s + \mathbf{t}\_p),\tag{2}$$

where *K* is the camera matrix, which contains the internal or intrinsic projective parameters, and *R* and *t* are the parameters that describe the orientation and position of the devices [23].
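Both equations are instances of the same pinhole projection, x = K(Rx + t), with the camera fixed at the origin (R = I, t = 0). A minimal numpy sketch of this projection; the intrinsics, pose and surface point below are invented for illustration and do not come from [23]:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X onto a pinhole device's image plane.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: 3-vector translation.
    Returns inhomogeneous pixel coordinates after the perspective divide.
    """
    x = K @ (R @ X + t)      # homogeneous image coordinates
    return x[:2] / x[2]      # perspective divide

# Illustrative values: camera at the origin (R = I, t = 0),
# projector translated 10 cm along the x-axis.
K_c = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
K_p = np.array([[1000.0, 0, 512], [0, 1000.0, 384], [0, 0, 1]])
R_p = np.eye(3)
t_p = np.array([0.1, 0.0, 0.0])
X_s = np.array([0.0, 0.0, 2.0])   # point on the surface plane, 2 m away

x_c = project(K_c, np.eye(3), np.zeros(3), X_s)   # Equation (1)
x_p = project(K_p, R_p, t_p, X_s)                  # Equation (2)
print(x_c, x_p)
```

With these numbers the surface point lands at the principal point of the camera, (320, 240), and at a shifted pixel in the projector, illustrating why a correspondence between the two images is needed.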

They developed a colour model so that the system could be easily calibrated, without camera control, with a single projector and a flat surface (Figure 6). For this, they formulated Equation (3), which predicts the colour that the camera will observe (*p*<sub>c</sub>) knowing the colour emitted by the projector (*p*<sub>p</sub>) and the reflectance of the surface (*p*<sub>s</sub>).

$$p\_c = p\_s\left[g\,X\_{3\times 3}\,p\_p + a\right] + b,\tag{3}$$

where *g* is the gain of the projector light; *X* is the colour mixing matrix; *a*, the ambient light; and *b*, the noise bias of the camera. All vectors are three-vectors in the RGB colour space [23].
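Equation (3) can be evaluated directly once the parameters are known. A small numpy sketch with invented values for *g*, *X*, *a* and *b* (in a real system these come out of the calibration, not from guesses like these):

```python
import numpy as np

# Hypothetical parameter values for illustration only.
g = 0.9                                   # projector light gain
X = np.array([[0.8, 0.1, 0.05],           # colour mixing matrix (RGB cross-talk)
              [0.1, 0.85, 0.05],
              [0.05, 0.1, 0.9]])
a = np.array([0.02, 0.02, 0.02])          # ambient light
b = np.array([0.01, 0.01, 0.01])          # camera noise bias

def predicted_camera_colour(p_p, p_s):
    """Equation (3): the colour the camera should observe, given the
    projector colour p_p and surface reflectance p_s (RGB three-vectors)."""
    return p_s * (g * X @ p_p + a) + b    # '*' is element-wise, per channel

p_p = np.array([1.0, 0.0, 0.0])           # projector emits pure red
p_s = np.array([0.5, 0.5, 0.5])           # grey surface
print(predicted_camera_colour(p_p, p_s))
```

Note how the mixing matrix *X* leaks some of the projected red into the green and blue channels, which is exactly the cross-channel dependence the model is meant to capture.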

**Figure 6.** Sketch proposed by Audet et al. [23] where the calibration method used is shown.

In order to use the models correctly, they developed a calibration algorithm to obtain the geometric parameters (*K*<sub>c</sub>, *K*<sub>p</sub>, *R*<sub>p</sub> and *t*<sub>p</sub>) and the colour parameters (*X* and *b*). To do this, using Equations (1) and (2), which define the geometric model, and a homography (*H*), they developed the warping functions, which relate a point of the camera image with a point of the projector image, and a point *x*<sub>c</sub> of the camera image with the point *x*<sub>s</sub> of the surface-plane image [23].

$$w\_p(\mathbf{x}\_c) = H\_{pc}\,\mathbf{x}\_c,\tag{4}$$

$$w\_s(\mathbf{x}\_c) = H\_{sc}\,\mathbf{x}\_c.\tag{5}$$

Finally, the warping functions are substituted into Equation (3) to obtain the colour of the pixels at the camera point *x*<sub>c</sub>, as shown in Equation (6).

$$p\_c(\mathbf{x}\_c) = p\_s(w\_s(\mathbf{x}\_c))\left[g\,X\,p\_p(w\_p(\mathbf{x}\_c)) + a\right] + b.\tag{6}$$
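The warping step in Equations (4) and (5) is simply a projective transform of pixel coordinates: the homography maps a camera pixel to the corresponding projector or surface-plane pixel, whose colours Equation (6) then samples. A sketch with made-up homographies (real ones would be estimated during calibration):

```python
import numpy as np

def warp(H, x):
    """Apply a 3x3 homography H to an inhomogeneous 2D point x,
    as in Equations (4) and (5)."""
    xh = H @ np.array([x[0], x[1], 1.0])  # lift to homogeneous coordinates
    return xh[:2] / xh[2]                 # divide out the projective scale

# Illustrative homographies: identity rotations plus pixel shifts.
H_pc = np.array([[1.0, 0, 10.0], [0, 1.0, 5.0], [0, 0, 1.0]])
H_sc = np.array([[1.0, 0, -3.0], [0, 1.0, 2.0], [0, 0, 1.0]])

x_c = np.array([100.0, 50.0])
x_p = warp(H_pc, x_c)   # projector pixel corresponding to x_c, w_p(x_c)
x_s = warp(H_sc, x_c)   # surface-plane pixel corresponding to x_c, w_s(x_c)
print(x_p, x_s)
```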

Once the calibration providing the necessary parameters of the colour and geometric models had been developed, they defined a cost function and its minimization procedure to optimize the system towards correct alignment [23].
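The source does not give the cost function itself, but the general shape of such an optimization can be sketched: predict the camera colour with the model, compare it with the observation, and minimize the squared residual over the unknown parameters. A toy example, assuming a simplified diagonal-*X* version of Equation (3) and recovering only the gain *g* (all values invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy data standing in for captured images: for a known projector colour and
# surface reflectance, the camera observed these values (invented numbers).
p_p = np.array([1.0, 0.5, 0.2])
p_s = np.array([0.6, 0.6, 0.6])
a, b = 0.02, 0.01
observed = p_s * (0.9 * p_p + a) + b      # generated with true gain g = 0.9

def cost(g):
    """Sum of squared differences between the (simplified) colour model's
    prediction and the observed camera colour."""
    predicted = p_s * (g * p_p + a) + b
    return np.sum((predicted - observed) ** 2)

res = minimize_scalar(cost, bounds=(0.0, 2.0), method="bounded")
print(res.x)   # recovers a gain close to the true value 0.9
```

The real calibration optimizes many parameters jointly (intrinsics, pose, mixing matrix), but the structure — a residual between model prediction and observation, driven to a minimum — is the same.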

Wang et al. (2010) established the alignment between the camera and the projector through a beam splitter, which divided the light signal so that the same signal was perceived by the projector and the camera regardless of the detection distance. This synchronization between camera and projector could produce a feedback effect, since the projected signal becomes part of the scene detected by the camera sensor. To avoid this phenomenon, the projector emitted in the visible range while the camera detected only IR wavelengths, thus preventing the virtual image from interfering with the projection of the following frame [10].

Stenger et al. needed a calibration system with which the compensation image projected onto Mark Rothko's murals would be displayed correctly. To do this, they first used MATLAB's control point selection tool to generate a geometric transformation and thus match the resolution of the target image and the current image of the artwork. Then, a lighting matrix with the three RGB channels at equal levels was projected onto the work in order to create a calibration curve for each channel. These calibration curves, together with a colour mixing matrix created to compensate for the interdependence of the channels, produced a suitable compensation image. In order for the compensation image to be correctly positioned relative to the artwork, a highly irregular calibration image was projected onto it and an image was captured. A Harris corner detector then matched corresponding points between the captured image and the calibration image, and from these points the corresponding geometric transformation was computed. Finally, a RANSAC algorithm eliminated outliers through an iterative non-linear fitting procedure [24].
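The RANSAC idea in the last step — fit a transformation to randomly sampled correspondences and keep the one with the most inliers — can be illustrated with a deliberately minimal case: estimating a pure 2D translation between matched points contaminated with outliers. This is a stand-in for the full transformation fitting in [24], not their actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(src, dst, iters=200, tol=1.0):
    """Minimal RANSAC sketch: estimate the 2D translation mapping src -> dst
    while rejecting outlier correspondences. A translation needs only a
    1-point minimal sample, which keeps the example short."""
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))                  # random minimal sample
        t = dst[i] - src[i]                         # candidate translation
        inliers = np.sum(np.linalg.norm(src + t - dst, axis=1) < tol)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Synthetic correspondences: a true shift of (12, -7) plus two gross mismatches.
src = rng.uniform(0, 100, size=(20, 2))
dst = src + np.array([12.0, -7.0])
dst[0] += 50.0
dst[1] -= 40.0                                      # simulated bad matches
t, n = ransac_translation(src, dst)
print(t, n)   # close to (12, -7), with the 18 good matches as inliers
```

A least-squares fit over all 20 matches would be dragged off by the two outliers; RANSAC recovers the translation from the consensus set instead, which is precisely why it follows the corner matching stage.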

Deformation Lamps aims to achieve the illusion of perceived motion in objects by projecting an optimized light pattern onto them. To do this, Kawabe et al. (2016) described an algorithm that defines a dynamic image sequence and projects it onto a static object, thus creating the perception of movement on the object.

To obtain this sequence of colour images, it was assumed that an image sequence, *I*<sub>movie</sub>, can be calculated as a linear combination of a static colour picture, *I*<sub>static</sub>, and a dynamic grayscale image sequence, *I*<sub>luminance dynamic</sub> [12].

$$I\_{\text{movie}}(x, y, t) \approx I\_{\text{static}}(x, y) + I\_{\text{luminance\,dynamic}}(x, y, t).\tag{7}$$

Finally, the dynamic luminance is calculated by means of Equation (8) and projected onto the image by means of Equation (9), where *w* is the factor that modulates the contrast of the dynamic component of the image and *B* is an arbitrary grey background that prevents the pattern from taking values below 0.

$$I\_{\text{luminance\,dynamic}}(x, y, t) = I\_{\text{luminance\,movie}}(x, y, t) - I\_{\text{luminance\,static}}(x, y),\tag{8}$$

$$P(x, y, t) = w\,I\_{\text{luminance\,dynamic}}(x, y, t) + B.\tag{9}$$
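Equations (8) and (9) amount to subtracting the static luminance from each movie frame and re-centering the residual on a grey level. A numpy sketch on a toy 4×4 "movie" (all image values invented; the static luminance is taken here as the temporal mean of the frames, one plausible reading of Equation (8)):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the real imagery: a static picture and a short movie
# whose luminance oscillates over time.
H, W, T = 4, 4, 3
I_static = rng.uniform(0.2, 0.8, size=(H, W))            # static luminance
I_luminance = np.stack([I_static + 0.1 * np.sin(2 * np.pi * t / T)
                        for t in range(T)])              # per-frame luminance

# Equation (8): dynamic component = movie luminance minus static luminance.
I_luminance_static = I_luminance.mean(axis=0)
I_dynamic = I_luminance - I_luminance_static

# Equation (9): projected pattern, with contrast factor w and grey offset B
# chosen so the pattern never drops below 0.
w, B = 1.0, 0.5
P = w * I_dynamic + B
print(P.min() >= 0.0)   # the grey background keeps the pattern non-negative
```

Because the dynamic component has zero mean over time, projecting *P* onto the static picture adds only the luminance modulation, which is what drives the motion illusion.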

For the deformation lamps method, Kawabe et al. used a manual alignment between the projector and the object, since for this method on 2D objects, no specific calibration is needed to obtain the visual effect they were looking for [12].

In Aleksić and Jovanović (2018), instead of using an algorithm that yields a real-time compensation image, a meticulous analysis of the artwork was carried out so that it could be restored digitally. This procedure was performed at the discretion of restoration professionals, and the result was then projected onto the work under controlled lighting conditions. Because the compensation images were projected at an angle, the projection acquired a certain curvature (curvilinear projection) and perspective distortion; both were determined so that the compensation image geometrically coincided with the surface of the artwork [20].

Vázquez et al. (2020) calibrated the projector to emit the appropriate lighting onto the work through a calibration algorithm and a merit function. Since the calculated spectral power distribution (SPD) of the projector is not the same as its real spectral emission (*DPK*), an algorithm was developed that relates both distributions through the *Z* factor. The purpose of the merit function is to minimize the colour difference (Δ*E*<sub>00</sub>) between the original and the restored artwork; Δ*E*<sub>00</sub> was calculated with the CIEDE2000 formula. To eliminate distortions between the camera, the projector and the printed image, a *T* transformation was developed with the MATLAB image processing toolbox [21]. Finally, the image-pair correspondence method of Vincent and Laganiere (2005) was used for the alignment between the projected image and the printed image [21,25].
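The structure of such a merit function is simple: average a perceptual colour difference over sampled patches and prefer the candidate restoration that minimizes it. The sketch below uses the Euclidean CIELAB distance (CIE76) as a deliberately simplified stand-in for the CIEDE2000 formula used in [21], with invented CIELAB patch values:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """Euclidean CIELAB difference (CIE76) - a simplified stand-in for
    the CIEDE2000 colour difference actually used by Vazquez et al."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float))

def merit(original_lab, restored_lab):
    """Merit function sketch: mean colour difference over sampled patches.
    The calibration would adjust the projector emission to minimize this."""
    return float(np.mean([delta_e_76(o, r)
                          for o, r in zip(original_lab, restored_lab)]))

# Invented CIELAB samples: three patches of the original artwork and
# two candidate restorations, one close and one clearly off.
original    = [(52.0, 10.0, -8.0), (70.0, -5.0, 15.0), (30.0, 0.0, 0.0)]
candidate_a = [(53.0, 10.5, -8.0), (70.5, -5.0, 15.5), (30.0, 0.5, 0.0)]
candidate_b = [(60.0, 14.0, -2.0), (75.0, -9.0, 20.0), (35.0, 3.0, 4.0)]
print(merit(original, candidate_a) < merit(original, candidate_b))  # True
```

CIEDE2000 adds lightness, chroma and hue weighting terms on top of this Euclidean baseline, so the stand-in preserves the shape of the optimization, not the exact perceptual numbers.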
