**6. Discussion**

The work presented includes methodologies to address each integration step, with the final aim of achieving 3D thermography while keeping the devices decoupled. Concerning the extrinsic calibration, we proposed an automatic method that exploits the object silhouette. An evaluation of its accuracy is presented in Table 2. Since the automatic method does not rely on homologous points (and thus the MRE cannot be computed), its accuracy was evaluated by comparison with the extrinsic parameters (taken as the ground truth) obtained through the manual selection of homologous points on a purposely designed test object.

A comparison can be made with the silhouette-based automatic calibration method developed by J.T. Lussier and S. Thrun [11]. The error in the *x* and *y* translations (Δ*t*1 and Δ*t*2 in Table 2) is about one order of magnitude lower in our work (the maximum obtained is around 1 mm); for the *z* translation (Δ*t*3 in Table 2), our maximum error stays within 12 mm, against the over 50 mm reported in [11]. Regarding the error on the rotations (Δα, Δβ and Δγ in Table 2), our errors are higher, with a mean of 0.58 degrees against the 0.3 degrees reported in [11].

We want to point out that this comparison is made between integration procedures that suit different integration modalities. In [11], indeed, the authors carried out a real-time integration between thermograms and depth maps (2.5D), whereas we integrate thermograms with 3D point clouds offline. This difference in the type of range data integrated entails, among other things, the following consequence: the accuracy of the method used in [11] depends, as its authors state, on the scene coverage, that is to say, the percentage of the image area covered by the object of interest with respect to the background. The higher the coverage, the higher the edge variability, and the lower the error in the extrinsic parameters. In our case, since we work not with an edge map but with a 3D point cloud, the concept of edge variability is not similarly defined, and the error does not depend on this parameter.
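The comparison against the manually obtained ground truth can be sketched as follows. This is a minimal illustration, not the evaluation code used in the paper: the pose values are invented, and the rotation error is summarised as a single angular deviation rather than the per-axis Δα, Δβ and Δγ reported in Table 2.

```python
import numpy as np

def extrinsic_errors(R_gt, t_gt, R_est, t_est):
    """Per-axis translation errors and overall rotation error (degrees)
    between a ground-truth and an estimated extrinsic pose."""
    dt = np.abs(t_est - t_gt)  # |dt1|, |dt2|, |dt3|
    # The relative rotation R_gt^T R_est; its rotation angle is the
    # total angular deviation between the two poses.
    R_rel = R_gt.T @ R_est
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    dtheta = np.degrees(np.arccos(cos_a))
    return dt, dtheta

# Illustrative values only (not the paper's data): a 0.5-degree
# rotation about z and a few millimetres of translation error.
angle = np.radians(0.5)
R_est = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])
dt, dtheta = extrinsic_errors(np.eye(3), np.zeros(3),
                              R_est, np.array([0.8, 0.5, 9.0]))
```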

Unfortunately, regarding the accuracy, a direct comparison with the decoupled method presented by A. G. Krefer et al. [22] is not possible, because the automatic calibration procedure they used was based on the automatic matching of interest points and was evaluated by the classical MRE. Apart from the calibration method, other differences from the work in [22] include the fact that they used the 3D data in the form of a polygonal mesh, whereas we keep the 3D data as a point cloud during the whole integration process. For visual purposes, if need be, the results may be converted into a coloured mesh at a later time. Concerning the handling of the points onto which several temperatures are superimposed, we computed the temperature to be assigned as a weighted average, where the weight decreases exponentially as the viewing angle increases, which is the method exploited by S. Vidas and P. Moghadam in [9]. In [22], conversely, the weight of each point temperature was computed as a function of the position of the point inside the view frustum of the camera (the weight increases the closer the point is to the optical axis or to the optical centre). A flaw of this latter approach is that it does not take the object geometry into account (i.e., the normal vector at each point), so it is prone to fail to compensate for the variation of the emissivity at high viewing angles. Figure 14 clearly shows that at high viewing angles the temperature can be underestimated (in that specific case, by up to one degree). The methodology we followed, which takes the object geometry into account, allows us to overcome this issue, provided that thermograms of the area of interest can be acquired from different orientations.
However, if this is not possible (for instance, because of the position and the limited mobility of the object, or if the acquisition time is limited), an improvement of this method could be to apply a temperature correction to each thermogram individually, exploiting, for instance, the correction formula proposed in [4], which relies on a theoretical model for the directional emissivity.
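The angle-dependent weighting described above can be sketched as follows. This is a minimal illustration assuming an exponential decay w = exp(−θ/θ₀), where θ is the viewing angle between the surface normal and the direction towards the camera; the decay constant θ₀ is an illustrative choice, not a value taken from [9].

```python
import numpy as np

def fused_temperature(temps, view_dirs, normal, theta0=np.radians(20.0)):
    """Fuse several temperature observations of one 3D point, weighting
    each by exp(-theta/theta0), so that near-grazing observations
    (where the apparent emissivity drops) contribute little."""
    normal = normal / np.linalg.norm(normal)
    v = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
    cos_t = np.clip(v @ normal, -1.0, 1.0)
    theta = np.arccos(cos_t)          # viewing angle per observation
    w = np.exp(-theta / theta0)       # exponential down-weighting
    return float(np.sum(w * temps) / np.sum(w))

# A near-normal view (20.1 C) dominates a grazing view (19.0 C),
# which would otherwise drag the fused value down:
temps = np.array([20.1, 19.0])
dirs = np.array([[0.0, 0.1, 1.0],     # almost along the normal
                 [1.0, 0.0, 0.2]])    # nearly grazing
t = fused_temperature(temps, dirs, np.array([0.0, 0.0, 1.0]))
```

With these weights, the fused temperature stays close to the near-normal observation, which is the behaviour that compensates for the underestimation visible in Figure 14.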

The whole integration methodology was first tested on a purposely designed 3D-printed object and then on a historical marble statue, and the results demonstrate the general feasibility of the approach. We are planning, however, further tests, in particular aimed at improving the robustness of the automatic extrinsic calibration method, which is affected, to a certain degree, by the object geometry (especially in terms of the level of detail of the geometrical features and the presence of symmetries). For objects which present a sufficient number of points of interest clearly identifiable both in the infrared images and in the 3D geometry (e.g., very sharp edges), the manual selection of homologous points can still be a better option to compute the extrinsic parameters, and it could be improved, for instance, by applying to the thermogram the intensity transformation proposed in [22], which highlights certain points that would otherwise not stand out clearly enough.
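As an illustration of how an intensity transformation can make points of interest easier to pick manually, the sketch below applies a generic percentile-based contrast stretch to a thermogram. This is not the specific transformation proposed in [22], and the percentile bounds are illustrative assumptions.

```python
import numpy as np

def stretch_thermogram(temp_map, p_lo=2.0, p_hi=98.0):
    """Generic percentile-based contrast stretch: maps the
    [p_lo, p_hi] temperature percentiles to [0, 1], clipping the
    tails, so small thermal features become easier to spot."""
    lo, hi = np.percentile(temp_map, [p_lo, p_hi])
    return np.clip((temp_map - lo) / (hi - lo), 0.0, 1.0)

# Synthetic thermogram (invented values): uniform background noise
# around 20 C with one small, slightly warmer feature.
rng = np.random.default_rng(0)
img = 20.0 + 0.2 * rng.standard_normal((64, 64))
img[30:34, 30:34] += 1.5
out = stretch_thermogram(img)
```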
