Article

Reflectance Transformation Imaging as a Tool for Computer-Aided Visual Inspection

Abir Zendagui, Gaëtan Le Goïc, Hermine Chatoux, Jean-Baptiste Thomas, Pierre Jochum, Stéphane Maniglier and Alamin Mansouri

1 ImViA Laboratory, University of Burgundy, EA 7535, 21000 Dijon, France
2 IRDL Laboratory, University Bretagne Sud, FRE CNRS 3744, 56100 Lorient, France
3 Department of Computer Science, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
4 Francéclat, Technical Department, 25000 Besançon, France
5 Technical Center for Mechanical Industry (CETIM), 74300 Cluses, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(13), 6610; https://doi.org/10.3390/app12136610
Submission received: 26 May 2022 / Revised: 22 June 2022 / Accepted: 26 June 2022 / Published: 29 June 2022
(This article belongs to the Special Issue Automated Product Inspection for Smart Manufacturing)

Abstract

This work investigates the use of Reflectance Transformation Imaging (RTI) rendering for visual inspection. This imaging technique is increasingly used for inspecting the visual quality of manufactured surfaces. It allows a dynamic virtual rendering of a surface to be reconstructed from the acquisition of a sequence of images in which only the illumination direction varies. We investigate, through psychometric experimentation, the influence of different essential parameters of the RTI approach, including the modeling method, the number of lighting positions, and the measurement scale. In addition, to include the dynamic aspect of perception mechanisms in the methodology, the psychometric experiments are based on a design of experiments approach and conducted on reconstructed visual rendering videos. The proposed methodology is applied to different industrial surfaces. The results show that the RTI approach can be a relevant tool for computer-aided visual inspection. The proposed methodology makes it possible to objectively quantify the influence of RTI acquisition and processing factors on the perception of visual properties, and the results obtained show that their impact in terms of visual perception can be significant.

1. Introduction

Inspecting the appearance of manufactured surfaces is essential in many industries, particularly for high-added-value products. The quality of appearance can constitute an important lever of differentiation and added value, since, with technological functions becoming more and more complex and difficult to evaluate objectively, customers often build their first and overall impressions on a visual assessment. In practice, inspection tasks are still generally carried out through visual or visuo-tactile [1,2] sensory analysis [3,4,5] directly on the objects. The human visual system is able to perform highly complex visual inspection tasks and is very flexible. In addition, various sensory analysis methodologies have been developed to optimize the repeatability and reproducibility of these processes; for example, criteria related to detection time, which make it possible to better assess the visual impact of an anomaly, or a more precise definition of the exploration process can help formalize the inspection task. Another approach to assessing the visual appearance of surfaces is to implement physical measurements of the visual attributes [6] through instrumental systems. Digitizing appearance is aligned with the approaches of Industry 4.0, since this numerical information could make it possible to implement active control of surface manufacturing and finishing processes, in the same way as is already done for other technical and technological surface features [7,8,9]. Strong synergies are also possible with other Industry 4.0 axes, such as machine learning [10] and deep learning [11], and with new manufacturing processes such as additive manufacturing [12,13] or remanufacturing [14], in a general context where more eco-efficient production methods are sought [15]. Indeed, the challenges associated with inspecting and controlling the visual quality of products can constitute important levers for the development and control of these processes [16]. For example, printing process parameters and material properties can have a significant effect on the appearance of 3D-printed parts [17].
Many technical and scientific challenges associated with the measurement of appearance remain unsolved [18,19,20,21]. One issue concerns the choice of the visual attributes to be quantified. While, for the human visual system, the appearance of a surface constitutes an inseparable whole [22,23], this information is complex and multi-physical, and it cannot be quantified or estimated entirely by a single appearance measurement system, especially since, in the industrial context, time constraints are critical for inspection processes. Implementing a physical measurement therefore requires determining which components of appearance should be quantified as a priority with respect to the problem and applications at hand. Today, as the development of appearance attribute measurement systems is still recent and these systems are not widely deployed in industry, one approach is not to replace sensory experts but to assist them, for example by providing augmented visualizations of the inspected surfaces or by automating certain aspects of the visual inspection process, such as the exploration path (i.e., the sequence of scene visualizations in terms of incident light angles or observer positions). The work presented here is part of these so-called semi-automated approaches, whose main purpose is to assist visual experts [24] and thus objectify the inspection process.
The methodology proposed in this paper is based on an imaging technique which has seen many developments for cultural heritage applications and, more recently, in the industrial field. This approach, namely the Reflectance Transformation Imaging (RTI) technique, consists of acquiring a sequence of images of a surface while varying only the direction of illumination, in a way similar to what is carried out during visual sensory analysis. Indeed, in visual inspection, the operator usually varies the configuration of the inspection scene (i.e., the geometric configuration between the observer, the lighting, and the inspected object) in order to highlight possible appearance defects. It is important to highlight that the RTI technique transcribes, in a certain way, the visual inspection practices known to the operators in charge of these tasks, allowing a better and faster appropriation of the technique during its deployment in an industrial context. The data acquired with the RTI technique enable the construction of a local experimental model of the angular reflectance at each point of the inspected surface, which characterizes the appearance [25]. It is then possible to estimate maps of local surface characteristics, which are linked to the distribution of measured luminances or to the local micro-geometry [26,27], and to reconstruct the visual rendering of the surface under virtual lighting.
The core objective of this paper is to evaluate the use of these relightings in an industrial context of inspection and assessment of the visual quality of surfaces, as a tool for computer-aided visual inspection (i.e., to assist the experts and operators of sensory control). Studies in this direction have already been conducted, especially in the field of heritage [25,28], in particular to quantify the performance of existing RTI reconstruction models [29,30]. This type of analysis evaluates performance from a quantitative, numerical point of view but does not take into account aspects related to human perception. The presented methodology aims to integrate these aspects and to provide elements of an answer concerning the relevance of this approach in this context. Moreover, in addition to what was presented in [29], the proposed methodology also integrates the dynamic aspects (which are essential in human perception mechanisms) that are not taken into account when only static RTI relighting is assessed. Thus, this paper aims to show how the RTI approach can be used and implemented for surface quality inspection tasks and how the parameters associated with the RTI process, both in terms of acquisition and of modeling or reconstruction, can influence the surface appearance assessment and therefore the analyses and decisions concerning, for example, the acceptability of a surface.
A brief overview of the Reflectance Transformation Imaging technique and of the existing methods for the perceptual assessment of image sequences is presented in Section 2. The proposed methodology is detailed in Section 3. The major findings are presented and discussed in Section 4.

2. Background on the Implemented Techniques

2.1. Reflectance Transformation Imaging

The Reflectance Transformation Imaging (RTI) technique [31,32,33] consists of acquiring a series of images from several light directions using a camera positioned orthogonally to the inspected surface. Each image of the acquired Multi-Light Image Collections (MLICs) corresponds to an illumination direction and represents a discrete measurement of the luminance in one lighting direction. The angular reflectance of each pixel is then modeled as illustrated in Figure 1 (modeling section), where the measured luminance values are represented in the normalized (Lu, Lv) space as defined in [32] and the vertical axis is associated with the sensor gray-level range ([0, 255]). These local experimental models make relighting possible (i.e., the continuous reconstruction of the visual rendering of the inspected surface for virtual illumination angles). The principle of this technique is illustrated in Figure 1, where PTM, HSH, and DMD are RTI reconstruction models and stand for Polynomial Texture Mappings, Hemispherical Harmonics, and Discrete Modal Decomposition, respectively.
Many new RTI acquisition modalities have recently been developed, including multi-spectral approaches [34,35], approaches that measure the complete luminance dynamic (HD-RTI [36,37,38]), self-adaptive approaches that determine the relevant lighting directions (NBLP-RTI [39]), and even robot-based RTI systems [40]. Within the framework of this research, we focus on the RTI acquisition parameters associated with the conventional approach. In a non-exhaustive way, these parameters are associated with the spectral content of the implemented lighting, the distribution of the lighting positions, their number or density [41,42], and the measurement scale. In terms of processing, the main parameter is the choice of the modeling method. Processing an RTI set of images allows the local angular reflectance at each point or pixel of the inspected surface to be modeled, enabling the relighting of the surface from virtually any arbitrary direction of light. The original approach is Polynomial Texture Mappings (PTM) [32,33,43], based on second-order polynomials. This method is simple, robust, and easy to implement, but it has the disadvantage of excessively smoothing the measured luminance point clouds, which alters the quality of the reconstructions when the surfaces are not Lambertian [44]. Other approaches were then developed to overcome this limit, in particular the HSH method based on Hemispherical Harmonics [45] and the Discrete Modal Decomposition (DMD) [44,46,47]. This last approach uses natural vibration modes to form the decomposition basis. More recently, local interpolation approaches have shown interesting results, such as Radial Basis Functions [48] or machine learning approaches [49].
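To make the modeling step concrete, the following sketch illustrates a per-pixel least-squares fit of the second-order PTM model and the relighting of the surface for a virtual light direction. It is a minimal NumPy illustration under our own assumptions about the data layout (an (N, H, W) image stack and light directions projected into the normalized (Lu, Lv) space); the function names are ours, not those of the original implementation used in this work.

```python
import numpy as np

def project_light(theta_deg, phi_deg):
    """Project a light direction (theta, phi) onto the normalized (Lu, Lv) plane."""
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    return np.sin(t) * np.cos(p), np.sin(t) * np.sin(p)

def ptm_fit(images, light_uv):
    """Least-squares fit of the six-term PTM polynomial at every pixel.
    images: (N, H, W) gray-level stack, one image per light direction.
    light_uv: (N, 2) projected light coordinates (lu, lv).
    Returns the (6, H, W) PTM coefficient maps."""
    lu, lv = light_uv[:, 0], light_uv[:, 1]
    # Design matrix of the six PTM basis terms, one row per light direction
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(n, -1), rcond=None)
    return coeffs.reshape(6, h, w)

def ptm_relight(coeffs, lu, lv):
    """Reconstruct the rendering for a virtual light direction (lu, lv)."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.clip(np.tensordot(basis, coeffs, axes=1), 0, 255)
```

The HSH and DMD methods follow the same fit-then-relight scheme; only the decomposition basis changes, which is why the complexity of that basis drives the quality of the reconstruction on non-Lambertian surfaces.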
In this paper, we chose to study three parameters that the authors identified as particularly influential in previous studies related to Reflectance Transformation Imaging. The methodology presented here is nevertheless generic, in the sense that it allows the effect of the RTI acquisition and processing factors on visual perception to be quantified whatever the choice of factors. The chosen parameters are the density of lighting directions during the RTI acquisition sequence, the measurement scale, and the modeling method. The modalities and levels chosen for the experimental design are detailed in Section 3. In addition to these three parameters, we applied the method to three surfaces with distinct roughnesses and materials, allowing us to characterize possible correlations between the type of surface reflectance behavior and the RTI parameters.

2.2. Perceptual Assessment

As mentioned above, surfaces are still often inspected by visual sensory analysis, and in the industrial context the results obtained by this type of approach remain the gold standard. Moreover, since human perception mechanisms are very complex, the only way to verify results obtained by instrumental measurement is to compare them with sensory measurements. The proposed method is therefore based on psychometric experiments. We detail here the main approaches for evaluating the perceived quality of images (or image sequences) reconstructed by the RTI technique.
Several subjective scaling methods can be used to measure the perceived quality of images and to relate the physical characterization to the stimulus [50]. One of the most common approaches is Absolute Category Rating (ACR), where the test sequences are evaluated on a category scale. ACR with a hidden reference (ACR-HR) is a variation of this approach that implements a hidden reference test sequence. An alternative is Degradation Category Rating (DCR) [50], where the degradation (i.e., the distance to the reference image) is rated on a five-point scale. Test sequences can also be displayed in reference–stimulus pairs. This is particularly the case for the double stimulus impairment scale method, where each pair is displayed simultaneously on the same monitor, which reduces the experiment time and helps subjects rate the stimulus in direct comparison with the reference. Another pair approach is the Pair Comparison (PC) method [51], where two images are presented and the operator has to decide which image possesses more of the investigated attribute. We chose to implement the principle of the PC method: given the choice of factors and modalities of the DoE, and because the experiment was conducted on reconstructed videos rather than static images, the experimentation time is an important criterion, and the PC approach saves time thanks to the simultaneous viewing of the analyzed videos and the absence of distance quantification. Moreover, the RTI literature has shown that the reconstruction models strongly influence the quality of the reconstruction. We therefore chose to extend the PC approach to a triple comparison, which made it possible to evaluate the three selected models simultaneously. This approach is detailed in Section 3.2.
Another important point is the implementation of a training phase prior to the psychometric evaluation. To ensure the subjects' full understanding of the requested task (familiarization with the assessment tool and instructions) and to stabilize their evaluation, a preliminary training session is recommended for each subject using at least five representative test sequences (ITU-T standard [50]). The training session allows the participants to become aware of the type of images or videos they will have to evaluate and of the scale of variation they will encounter, which can notably improve intra-observer repeatability. Two descriptors are assessed when choosing training sequences [50]: the Spatial perceptual Information (SI) and the Temporal perceptual Information (TI). SI indicates the amount of spatial detail of a sequence; this descriptor is based on the Sobel filter, and its mathematical expression is given in Equation (1). TI describes the difference between the values of the same pixels in two successive frames; its expression is given in Equation (2):
$$SI = \max_{\text{time}}\left\{ \operatorname{std}_{\text{space}}\left[ \operatorname{Sobel}(F_n) \right] \right\} \quad (1)$$

$$TI = \max_{\text{time}}\left\{ \operatorname{std}_{\text{space}}\left[ \Delta_n(i,j) \right] \right\}, \quad \text{with } \Delta_n(i,j) = F_n(i,j) - F_{n-1}(i,j) \quad (2)$$

where $\operatorname{Sobel}(F_n)$ represents the Sobel-filtered frame $F_n$ at time $n$, $\operatorname{std}_{\text{space}}$ is the standard deviation computed over the pixels, and $\max_{\text{time}}$ is the maximum value over the time series.
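As a concrete illustration of Equations (1) and (2), the following sketch computes SI and TI for a gray-level video stored as a (T, H, W) array. Combining the two Sobel gradients into a magnitude is the usual reading of the ITU-T P.910 descriptor, but it is stated here as an assumption, as is the array layout.

```python
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """SI/TI descriptors of Equations (1) and (2).
    frames: (T, H, W) array of gray-level frames."""
    frames = frames.astype(float)
    si_values, ti_values = [], []
    for n in range(len(frames)):
        gx = ndimage.sobel(frames[n], axis=1)          # horizontal gradient
        gy = ndimage.sobel(frames[n], axis=0)          # vertical gradient
        si_values.append(np.std(np.hypot(gx, gy)))     # std over space
        if n > 0:
            ti_values.append(np.std(frames[n] - frames[n - 1]))
    return max(si_values), max(ti_values)              # max over time
```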
The implementation of this training phase in our methodology for the RTI-based approach is presented below in Section 3.

3. Dynamic Perceptual Assessment for RTI-Based Visual Inspection: Methodology

3.1. Experimental Surface Samples and RTI Acquisition Set-Up

As stated before, the adjustment of the RTI acquisition and modeling parameters has to be correlated with the type of inspected surface. For example, a homogeneous surface with Lambertian reflectance does not require a high density of lighting positions: a low number of images distributed in the hemispherical space (θ, ϕ) allows the angular reflectance to be correctly estimated, whereas denser sampling is needed when the local reflectance is heterogeneous or more complex. We therefore chose three surface samples with different properties and reflectance behaviors, noted S1–3 in the following. Sample S1 was a Lambertian-type paper surface. Sample S2 was a sandblasted surface (Spectralon® SRS-05-020 Diffuse Reflectance Chemically Inert Standard) with a reflectance value of 20%. Sample S3 was a brush-finished metallic surface with an anisotropic texture. These surface samples are illustrated in Figure 2.
The RTI acquisitions were carried out using an in-house developed set-up (Figure 3). The system is based on a powerful white LED light source mounted on a motorized hoop, allowing the light to be positioned in the (θ, ϕ) space. The imaging sensor is a monochromatic 2/3″ active-pixel CMOS sensor with a resolution of 12.4 Mp (4112 × 3008). A precision modular micro-imaging system with motorized magnification and focus was implemented to adapt the field of view and the focus. The lower lens and other optical components are fully modular, allowing a very wide range of measurement scales to be covered, the main limitations being the magnification range ([×1, ×12]) and the space available to position the samples to be measured (approximately 10 × 10 cm at most). The device can be fully controlled through a user interface, allowing all the RTI acquisition parameters to be adjusted [38]. In addition, the system is modular, and a specific light wavelength for RTI acquisition on particular surfaces, such as transparent or semi-transparent (varnished) surfaces, can easily be implemented.
In particular, the acquisition parameters associated with the emitted light energy are essential in the RTI approach. Their adjustment requires special attention, especially since it also depends on the magnification level. Indeed, the illumination parameters can strongly alter the quality of the acquisitions and consequently the analyses performed on these data [52]. In practice, the exposure time is generally chosen arbitrarily by the operator in charge of the acquisition, who tries to reach a compromise between the number of saturated and underexposed pixels across all acquisition angles. For this experiment, we therefore carried out a preliminary step to optimize the illumination parameter settings for each surface sample according to the methodology presented in [52].

3.2. Design of Experiment

The parameters retained for this experiment were the measurement scale, the acquisition density (i.e., the number of lighting positions in the acquisition sequence), and the implemented reconstruction model. Regarding this last parameter, many models have been developed and applied to RTI acquisitions, with a significant effect on the quality of the rendering. The most commonly used are global models, such as the historical PTM approach [31,32], the HSH method [45], and the more recent Discrete Modal Decomposition (DMD) [44,47], which we chose to evaluate in this paper. The proposed methodology could, however, be applied to other reconstruction models, such as the Radial Basis Functions (RBFs) recently proposed in [28], which are based on local interpolations. Since the choice of these parameters has to be correlated with the type of surface to be inspected, three different industrial surface samples were retained. The chosen samples and the DoE parameters with their associated modalities are detailed in the following section.

3.2.1. Protocol, Studied Factors, and Modalities

The parameters retained for this experiment, noted P1–4 in the following, are detailed below.
  • P1, the reconstruction model (three modalities): As stated before, many models have been developed and applied to RTI acquisitions, with a significant effect on the quality of the rendering. For this experiment, we chose the Polynomial Texture Mappings (PTM) approach, the Hemispherical Harmonics (HSH) technique, and the Discrete Modal Decomposition (DMD).
  • P2, the angular density of acquisition, i.e., the number of lighting directions in the acquisition sequence (four modalities): RTI acquisitions were performed with 50, 100, 200, and 400 positions homogeneously distributed in the (θ, ϕ) angular space (see Figure 4 and the sampling sketch after this list). These values correspond to what is commonly used in existing RTI acquisition systems, where typically between 50 and 200 lighting directions are used. The value of 400 corresponds to what we consider a maximum in an industrial context, where the inspection time is a major constraint.
  • P 3 , the measurement scale (two modalities): Mechanisms related to visual perception are very sensitive to the scale. We chose here two modalities for this parameter which were associated with the zoom factor ( 40 % and 80 % ). These two zoom factors correspond to pixel sizes of approximately 5 μ m and 2 μ m, respectively.
  • P4, the surface material and roughness (three modalities): Since the choice of the preceding parameters (P1–3) has to be correlated with the type of surface inspected, three industrial surface samples were retained. These samples were described in detail in Section 3.1.
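The paper states that the lighting positions of parameter P2 are homogeneously distributed in the (θ, ϕ) angular space but does not specify the sampling scheme. The sketch below shows one common way to obtain such a quasi-uniform distribution over the hemisphere, using a golden-angle spiral; the scheme itself and the grazing-angle cut-off are our assumptions, offered only for illustration.

```python
import numpy as np

def light_positions(n, theta_max_deg=75.0):
    """Quasi-uniform spread of n light positions over the hemisphere.
    Returns (theta, phi) in degrees, theta = 0 being the zenith."""
    k = np.arange(n)
    # Golden-angle increments give quasi-uniform azimuthal coverage
    phi = np.degrees(k * np.pi * (3.0 - np.sqrt(5.0))) % 360.0
    # Uniform spacing in cos(theta) gives equal-area rings on the spherical cap
    cos_t = 1.0 - k / max(n - 1, 1) * (1.0 - np.cos(np.radians(theta_max_deg)))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return theta, phi
```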
Many other RTI parameters could influence the visual perception of surfaces, including, non-exhaustively, the light path during the inspection, the wavelength of the light source, and other general imaging parameters such as the resolution, the gamma value, or the type of sensor. However, we assumed that the proposed methodology could be transposed, if necessary, to the study of their impact and that the parameters chosen here were particularly relevant. Concerning the light path, a single path was implemented, shown as a white line in Figure 4a. To carry out this experiment, a dense RTI acquisition was performed along this chosen light path (397 acquisitions, linearly distributed) for each surface sample in order to reconstruct the reference video for the psychometric experiment. This reference video thus corresponded to the raw data and was not subject to any processing or modeling.

3.2.2. Psychometric Experiment

As mentioned before, the chosen approach is based on the principle of the Pair Comparison (PC) method [51]. To reduce the number of image sequences to be evaluated, and thus the duration of the experiment, we extended the PC approach to a triple comparison, which made it possible to evaluate the three selected models simultaneously. This approach is detailed below. In addition, to include in the proposed methodology the dynamic aspects that are of particular importance in human perception mechanisms, the psychometric experiments were conducted on rendering videos for which the reference lighting paths were defined. To compare the reference videos with the videos rendered from RTI data, the proposed extended version of the Pair Comparison method allowed us to compare the videos according to their resemblance; thanks to the simultaneous viewing of the reference–reconstruction pairs, the PC method saves time and avoids distance quantification. To build the reference videos, a dense acquisition was performed for each surface at the two measurement scales (two magnification levels), resulting in 6 reference acquisitions (3 surfaces × 2 measurement scales). The reference was then displayed simultaneously with the reconstructed videos associated with the DoE parameters and modalities on a divided screen, as illustrated in Figure 5. The reference video was always in the top left, and the other three were randomly positioned. As the reconstruction model can particularly alter the quality of the reconstruction, we chose to show simultaneously the reconstructions associated with the three retained models for each configuration of the parameters P2–4 of the experimental design. The videos were resized to fill a quarter of the screen resolution (1440 × 2560) using bicubic interpolation.
For each experimental design configuration, a video containing the four sub-videos (renderings) was generated frame by frame to ensure synchronization. The randomized positioning of the videos allowed us to avoid a potential experimental bias, which was confirmed by the obtained values presented in Table 1. The participants in the experiment then selected, among the three videos associated with the visual reconstructions obtained with the different values of the factors of the experimental design, which one seemed to them to be the closest to the reference video (systematically displayed in the top left). This preferential choice was made by clicking on the chosen video, and it was recorded by the system. The next video, corresponding to the next configuration of the experimental design, was then displayed.
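A minimal sketch of this frame-by-frame composition is given below: the reference tile is kept in the top-left quadrant, and the three model renderings are placed according to a permutation drawn once per video (not per frame, to preserve synchronization). The OpenCV-based resizing and the function layout are illustrative assumptions; the paper only specifies bicubic interpolation and quarter-screen tiles on a 1440 × 2560 display.

```python
import numpy as np
import cv2

def compose_frame(ref, ptm, hsh, dmd, order, screen_h=1440, screen_w=2560):
    """Assemble one 2x2 comparison frame.
    order: a permutation of [0, 1, 2], drawn once per video, that assigns
    the PTM/HSH/DMD renderings to the three remaining quadrants."""
    h, w = screen_h // 2, screen_w // 2
    renderings = [ptm, hsh, dmd]
    tiles = [cv2.resize(img, (w, h), interpolation=cv2.INTER_CUBIC)
             for img in [ref] + [renderings[i] for i in order]]
    top = np.hstack(tiles[:2])       # reference | first rendering
    bottom = np.hstack(tiles[2:])    # remaining two renderings
    return np.vstack([top, bottom])
```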
Each experimental configuration was displayed twice in a random order for each subject to enable the quantification of the intra-subject variation. The experiment was divided into two parts: a training session and the main session. For the training session, six sequences were chosen for their temporal and spatial perceptual information to stabilize the observers' opinion, following the methodology presented in [50]. This approach provides a representative sampling of the diversity of the sequences, as illustrated in Figure 6. It can be observed that the minimum spatial information (SI = 0.036) and temporal information (TI = 4.67) were associated with the paper surface (S1) reconstructed using 400 light positions at the high magnification level (80%), which means that the minimum distance was perceived between the original video and the reconstructions. The maximum of SI and TI jointly (SI = 0.059, TI = 24.41) was obtained with the brushed metallic surface (S3) reconstructed with 50 positions. These results were expected: the reflectance behavior of S1 is simple and therefore easier to model, while the opposite holds for the S3 sample.
The experiment was held in a dark room, and 24 volunteers with an average age of 34 years participated. The experiment took an average of 23 minutes per participant: 4 minutes for the training session and 19 minutes for the main session. Each participant thus evaluated 48 videos of approximately 13 seconds in duration, in addition to the 6 videos of the training session.
An example of the images extracted from the different videos of the experiment associated with two angular configurations (from the defined lighting path) for the three surface samples is presented in Figure 7.

4. Results and Discussion

The overall results presented in Figure 8 reflect the difference in perception between the surfaces reconstructed with the three modeling methods tested and the reference surface (raw data). With a median value of approximately 6%, PTM was rarely chosen compared with the HSH and DMD renderings. A considerable preference was observed for the DMD reconstructions, which amassed 58% of the participants' clicks, against 34% for HSH and 6% for PTM. Regarding the surface material (Figure 8b), HSH gained 50% of the participants' clicks for surface sample S2 (Spectralon®), against 48% for DMD, while it fell to 25% for S3, for which the participants chose DMD as the most accurate model to reconstruct the angular reflection of light. Thus, it appears that even for diffuse surfaces (such as S1), the HSH and DMD global modeling methods produced a lower difference in perception than the PTM method. However, we observed that the more complex the behavior in terms of angular reflectance, the more the gap between the methods increased, with a clear advantage for the DMD method when the texture was anisotropic (such as S3), which is consistent with the shapes and complexity of the decomposition bases implemented in the HSH and DMD methods.
The detailed results for the different factors and modalities of the design of experiment are presented as heatmaps associated with each surface sample (S1–3) in Figure 9. As for the overall results, we observed a clear preference for the HSH and DMD methods in the perception of the visual reconstructions. It can be noted that, particularly for the S3 surface sample and to a lesser extent the S1 sample, the performance of the PTM method increased and that of the HSH method decreased when the angular density increased. Since the S1 sample is also relatively anisotropic (the paper fibers are oriented in two main directions), this result underlines once again the difficulty of the HSH approach in rendering the appearance produced by anisotropic textures.
Concerning the magnification, it generally appears that the higher the magnification, the greater the difference between the methods, and the more often the reconstructions associated with the DMD method were chosen. This result is explained by the fact that the higher the zoom, the smaller the integration zone over which the reflectance behavior of each pixel is averaged, and thus the more complex the measured behavior. The DMD method, whose decomposition basis is of greater complexity, then made it possible to reconstruct the local visual rendering more accurately.
Finally, concerning the effect of density, we observed that the PTM technique performed relatively better, compared with the other methods, for 400 acquisition positions. This counter-intuitive result might be explained by the fact that, for high acquisition densities, the most efficient reconstruction techniques sometimes include behaviors from around the salient reconstruction angle and therefore render details not present in the raw image. This defect, which concerns high-frequency variations in angular reflectance, is linked to the very principle of reconstruction by global approximation. In terms of acquisition, there is therefore an optimal number of light positions for the RTI sequence, which is not necessarily the densest acquisition possible.
We also evaluated the repeatability of the results obtained in the experiment. The individual reliability per subject was measured from the participants' votes on the replicated sequences. The global average measured consistency was approximately 49%. The results presented in Figure 10 and Figure 11 show that this low consistency value was mainly due to hesitation between the HSH and DMD reconstructions for certain renderings of the experiment. This reflects a hesitation of the participants due, for certain tests of the experimental design, to a very strong similarity between the reconstructions obtained (the two methods performed equally well, and both were much better than the PTM approach).
However, it can be noticed that the consistency increased very significantly for S3 for the choices of the DMD method (which were also the most frequent; see Figure 8), which confirms the superior performance of this approach in terms of the perception gap with the reference videos when the local behaviors to be modeled are complex or anisotropic. For S2, the HSH method obtained a score equivalent to that of the DMD method, which confirms the model's capacity to reproduce a regular reflectance behavior. The main result extracted from the global data presented in Figure 8b for the S2 sample (equivalent performance of the HSH and DMD approaches) was also confirmed by these consistency data (no significant difference). These perceptual results are consistent with those obtained in previous studies that evaluated the performance of the PTM, HSH, and DMD reconstruction models from a numerical, quantitative point of view [28,44,47]. Concerning the other RTI factors, such as the scale and the density of the acquisition points, existing RTI devices often do not allow these parameters to be varied, and to our knowledge, although their effect is known to the users of the technique [25,41,42], their influence had not been evaluated in previous works.
Finally, we evaluated the time the participants spent making a decision. This indicator can help identify (or confirm) aberrant behaviors that could alter the quality of the global results, such as an unreliable participant characterized by low consistency combined with a very short analysis time. Moreover, in sensory analysis, the inspection time is often associated with the visual impact of a defect, as in [3]. In this experiment, this indicator can thus help quantify the difficulty of deciding. To illustrate this, we separated the consistent and non-consistent votes (Figure 12). The green box plots indicate the mean time spent for the consistent votes, and the red box plots the mean time spent for the non-consistent votes. It can be noticed that, for 50% of the participants, the mean time for the non-consistent choices was slightly higher than when they were consistent, which could indicate the difficulty of the choice and the time spent hesitating between reconstructions of similar perceived quality.
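For reference, the per-subject consistency reported above can be computed as the fraction of replicated configurations on which a subject made the same choice twice. The sketch below assumes a hypothetical storage of the two votes per DoE configuration; the data layout is ours, not the authors'.

```python
def subject_consistency(votes):
    """votes: dict mapping a DoE configuration identifier to the pair of
    choices made on its two presentations, e.g.
    {("S3", 200, 0.4): ("DMD", "DMD"), ("S1", 50, 0.8): ("HSH", "DMD")}.
    Returns the fraction of configurations with identical repeated choices."""
    agreements = sum(first == second for first, second in votes.values())
    return agreements / len(votes)
```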

5. Conclusions

In this paper, we proposed evaluating the use of dynamic RTI renderings for visual inspection tasks. We investigated, through psychometric experimentation, the influence of different essential parameters of the RTI approach, including the modeling method, the number of lighting positions in the acquisition sequence, and the measurement scale. The psychometric experiments were conducted on reconstructed visual rendering videos in order to include the dynamic aspect of perception mechanisms in the methodology. The overall results reflect the difference in perception between the surfaces reconstructed with the three modeling methods tested (the PTM, HSH, and DMD approaches). We observed that the more complex the reflectance behavior, the more the gap between the perception of the rendering videos increased, with a clear advantage for the DMD method when the texture was anisotropic, which is consistent with the shapes and complexity of the decomposition bases implemented in the HSH and DMD methods. This experiment underlines the difficulty of the HSH approach in rendering the appearance produced by anisotropic textures; it can also be noted that the performance of the PTM method increased and that of the HSH method decreased when the angular density of acquisition increased. Concerning the magnification, it generally appeared that the higher the magnification, the greater the difference between the tested methods, and the more often the reconstructions associated with the DMD method were chosen. This result is explained by the fact that the integration zone is smaller at higher magnifications, which decreases the averaging of the reflectance behavior at each pixel and allows more complex behaviors to be measured; the DMD method, whose decomposition basis is of greater complexity, then made it possible to reconstruct the local visual rendering more accurately. In terms of density, we also observed a counter-intuitive effect: for a large number of acquisition positions, the implementation of a fine global approximation method, such as the HSH or DMD methods, can affect the perceived quality. Finally, this experiment shows that it is important to take repeatability into account in this type of study, as it can be indicative of aberrant participant behavior or of the difficulty of choosing between two very similar reconstructions. The analysis of the decision time can also help quantify the facility (or lack thereof) of discriminating between two appearances. Thus, the proposed methodology and the results obtained highlight the importance of the choice of the investigated parameters when carrying out RTI-based dynamic perceptual assessments.

Author Contributions

Conceptualization, A.Z., G.L.G., H.C., J.-B.T. and P.J.; Data curation, A.Z.; Formal analysis, A.Z.; Funding acquisition, P.J. and S.M.; Investigation, A.Z.; Methodology, G.L.G., H.C. and A.M.; Resources, P.J.; Software, A.Z.; Supervision, G.L.G., H.C. and J.-B.T.; Validation, J.-B.T.; Writing—original draft, A.Z.; Writing—review & editing, G.L.G., H.C., J.-B.T. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work benefited from the funding of the French National Research Agency (ANR) through the NAPS project (https://anr.fr/Projet-ANR-17-CE10-0005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aust, J.; Mitrovic, A.; Pons, D. Comparison of Visual and Visual–Tactile Inspection of Aircraft Engine Blades. Aerospace 2021, 8, 313. [Google Scholar] [CrossRef]
  2. Aust, J.; Pons, D. Comparative Analysis of Human Operators and Advanced Technologies in the Visual Inspection of Aero Engine Blades. Appl. Sci. 2022, 12, 2250. [Google Scholar] [CrossRef]
  3. Baudet, N.; Maire, J.L.; Pillet, M. The visual inspection of product surfaces. Food Qual. Prefer. 2013, 27, 153–160. [Google Scholar] [CrossRef]
  4. Starzynska, B.; Szajkowska, K.; Diering, M.; Rocha, A.; Reis, L.P. Advances in Manufacturing; Lecture Notes in Mechanical Engineering; Springer: Cham, Switzerland, 2017; pp. 881–888. [Google Scholar]
  5. Megaw, E.D. Factors affecting visual inspection accuracy. Appl. Ergon. 1979, 10, 27–32. [Google Scholar] [CrossRef]
  6. Obein, G. Métrologie de l'apparence. Habilitation à Diriger les Recherches, LNE-CNAM, Paris, France, 2018. Available online: https://metrologie-francaise.lne.fr/sites/default/files/media/file/field_media_file/HDR (accessed on 20 May 2022).
  7. Nugroho, W.T.; Dong, Y.; Pramanik, A. Dimensional accuracy and surface finish of 3D printed polyurethane (PU) dog-bone samples optimally manufactured by fused deposition modelling (FDM). Rapid Prototyp. J. 2022; ahead-of-print. [Google Scholar] [CrossRef]
  8. Brown, C.A.; Hansen, H.N.; Jiang, X.J.; Blateyron, F.; Berglund, J.; Senin, N.; Bartkowiak, T.; Dixon, B.; Le Goic, G.; Quinsat, Y. Multiscale analyses and characterizations of surface topographies. CIRP Ann. 2018, 67, 839–862. [Google Scholar] [CrossRef]
  9. Le Goic, G.; Bigerelle, M.; Samper, S.; Favreliere, H.; Pillet, M. Multiscale roughness analysis of engineering surfaces: A comparison of methods for the investigation of functional correlations. Mech. Syst. Signal Process. 2016, 66, 437–457. [Google Scholar] [CrossRef]
  10. Xames, M.D.; Torsha, F.K.; Sarwar, F. A systematic literature review on recent trends of machine learning applications in additive manufacturing. J. Intell. Manuf. 2022, 1–27. [Google Scholar] [CrossRef]
  11. Ismail, N.; Malik, O.A. Real-time visual inspection system for grading fruits using computer vision and deep learning techniques. Inf. Process. Agric. 2022, 9, 24–37. [Google Scholar] [CrossRef]
  12. Khorasani, M.; Loy, J.; Ghasemi, A.H.; Sharabian, E.; Leary, M.; Mirafzal, H.; Cochrane, P.; Rolfe, B.; Gibson, I. A review of Industry 4.0 and additive manufacturing synergy. Rapid Prototyp. J. 2022; ahead-of-print. [Google Scholar] [CrossRef]
  13. Ali, M.H.; Issayev, G.; Shehab, E.; Sarfraz, S. A critical review of 3D printing and digital manufacturing in construction engineering. Rapid Prototyp. J. 2022; ahead-of-print. [Google Scholar] [CrossRef]
  14. Leger, A.; Le Goic, G.; Fauvet, E.; Fofi, D.; Kornalewski, R. R-CNN based automated visual inspection system for engine parts quality assessment. In Proceedings of the Fifteenth International Conference on Quality Control by Artificial Vision, Tokushima, Japan, 12–14 May 2021; Volume 11794, pp. 270–277. [Google Scholar]
  15. Kerr, W.; Ryan, C. Eco-efficiency gains from remanufacturing A case study of photocopier remanufacturing at Fuji Xerox Australia. J. Clean. Prod. 2001, 9, 75–81. [Google Scholar] [CrossRef]
  16. Youheng, F.; Guilan, W.; Haiou, Z.; Liye, L. Optimization of surface appearance for wire and arc additive manufacturing of Bainite steel. Int. J. Adv. Manuf. Technol. 2017, 91, 301–313. [Google Scholar] [CrossRef]
  17. Ngo, T.D.; Kashani, A.; Imbalzano, G.; Nguyen, K.T.; Hui, D. Additive manufacturing (3D printing): A review of materials, methods, applications and challenges. Compos. Part B Eng. 2018, 143, 172–196. [Google Scholar] [CrossRef]
  18. Iwata, M. Automated Visual Inspection Technology; SAE Technical Paper Series; 2003; Available online: https://saemobilus.sae.org/content/2003-01-2738/ (accessed on 20 May 2022).
  19. Maurya, P.; Gaikawad, C.; Salvi, S. Visual Inspection for Industries. Int. J. Adv. Res. Sci. Commun. Technol. 2022, 2, 87–89. [Google Scholar] [CrossRef]
  20. Sun, X.; Gu, J.; Tang, S.; Li, J. Research Progress of Visual Inspection Technology of Steel Products—A Review. Appl. Sci. 2018, 8, 2195. [Google Scholar] [CrossRef]
  21. Wu, Y.; Qin, Y.; Wang, Z.; Jia, L. A UAV-Based Visual Inspection Method for Rail Surface Defects. Appl. Sci. 2018, 8, 1028. [Google Scholar] [CrossRef]
  22. Hunter, R.S.; Harold, R.W. The Measurement of Appearance; John Wiley and Sons: New York, NY, USA, 1987. [Google Scholar]
  23. Rigg, B. The measurement of appearance, by Richard S Hunter and Richard W Harold. J. Soc. Dyers Colour. 1988, 104, 233. [Google Scholar] [CrossRef]
  24. Wang, S.; Zargar, S.A.; Yuan, F.G. Augmented reality for enhanced visual inspection through knowledge-based deep learning. Struct. Health Monit. 2021, 20, 426–442. [Google Scholar] [CrossRef]
  25. Pintus, R.; Dulecha, T.G.; Ciortan, I.; Gobbetti, E.; Giachetti, A. State-of-the-art in Multi-Light Image Collections for Surface Visualization and Analysis. Comput. Graph. Forum 2019, 38, 909–934. [Google Scholar] [CrossRef]
  26. Lemesle, J.; Robache, F.; Le Goïc, G.; Mansouri, A.; Brown, C.; Bigerelle, M. Surface reflectance: An optical method for multiscale curvature characterization of wear on ceramic–metal composites. Materials 2020, 13, 1024. [Google Scholar] [CrossRef]
  27. Nurit, M. Numérisation et Caractérisation de l’Apparence des Surfaces Manufacturées pour l’Inspection Visuelle. Ph.D. Thesis, Université de Bourgogne, Dijon, France, 2022. [Google Scholar]
  28. Pintus, R.; Dulecha, T.G.; Jaspe-Villanueva, A.; Giachetti, A.; Ciortan, I.; Gobbetti, E. Objective and Subjective Evaluation of Virtual Relighting from Reflectance Transformation Imaging Data. In Proceedings of the EUROGRAPHICS Workshop on Graphics and Cultural Heritage, Vienna, Austria, 12–15 November 2018. [Google Scholar]
  29. Zendagui, A.; Thomas, J.B.; Le Goïc, G.; Castro, Y.; Nurit, M.; Mansouri, A.; Pedersen, M. Quality assessment of reconstruction and relighting from RTI images: Application to manufactured surfaces. In Proceedings of the 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Sorrento, Italy, 26–29 November 2019; pp. 746–753. [Google Scholar]
  30. Ponchio, F.; Corsini, M.; Scopigno, R. RELIGHT: A compact and accurate RTI representation for the web. Graph. Model. 2019, 105, 101040. [Google Scholar] [CrossRef]
  31. Malzbender, T.; Gelb, D.; Wolters, H.; Zuckerman, B. Enhancement of Shape Perception by Surface Reflectance Transformation; Technical Report; 2000; Available online: https://www.hpl.hp.com/techreports/2000/HPL-2000-38R1.pdf (accessed on 20 May 2022).
  32. Malzbender, T.; Gelb, D.; Wolters, H. Polynomial texture maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 12–17 August 2001; pp. 519–528. [Google Scholar]
  33. Mudge, M.; Malzbender, T.; Chalmers, A.; Scopigno, R.; Davis, J.; Wang, O.; Gunawardane, P.; Ashley, M.; Doerr, M.; Proenca, A.; et al. Image-Based Empirical Information Acquisition, Scientific Reliability, and Long-Term Digital Preservation for the Natural Sciences and Cultural Heritage. In Proceedings of the Eurographics 2008—Tutorials, Crete, Greece, 14–18 April 2008. [Google Scholar]
  34. Giachetti, A.; Ciortan, I.M.; Daffara, C.; Pintus, R.; Gobbetti, E. Multispectral RTI Analysis of Heterogeneous Artworks. In Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage, Graz, Austria, 27–29 September 2017; Schreck, T., Weyrich, T., Sablatnig, R., Stular, B., Eds.; The Eurographics Association: Goslar, Germany, 2017. [Google Scholar]
  35. Kitanovski, V.; Hardeberg, J.Y. Objective evaluation of relighting models on translucent materials from multispectral RTI images. Electron. Imaging 2021, 33, 133-1–133-8. [Google Scholar] [CrossRef]
  36. Nurit, M.; Le Goïc, G.; Maniglier, S.; Jochum, P.; Chatoux, H.; Mansouri, A. Improved visual saliency estimation on manufactured surfaces using high-dynamic reflectance transformation imaging. In Proceedings of the Fifteenth International Conference on Quality Control by Artificial Vision, Tokushima, Japan, 12–14 May 2021; Volume 11794, pp. 111–121. [Google Scholar]
  37. Nurit, M.; Castro, Y.; Zendagui, A.; Le Goïc, G.; Favreliere, H.; Mansouri, A. High dynamic range reflectance transformation imaging: An adaptive multi-light approach for visual surface quality assessment. In Proceedings of the Fourteenth International Conference on Quality Control by Artificial Vision, Mulhouse, France, 15–17 May 2019; Volume 11172, p. 1117213. [Google Scholar]
  38. Nurit, M.; Le Goïc, G.; Lewis, D.; Castro, Y.; Zendagui, A.; Chatoux, H.; Favreliere, H.; Maniglier, S.; Jochum, P.; Mansouri, A. HD-RTI: An adaptive multi-light imaging approach for the quality assessment of manufactured surfaces. Comput. Ind. 2021, 132, 103500. [Google Scholar] [CrossRef]
  39. Luxman, R.; Nurit, M.; Le Goïc, G.; Marzani, F.; Mansouri, A. Next Best Light Position: A self configuring approach for the Reflectance Transformation Imaging acquisition process. Electron. Imaging 2021, 2021, 132–137. [Google Scholar] [CrossRef]
  40. Luxman, R.; Castro, Y.E.; Chatoux, H.; Nurit, M.; Siatou, A.; Le Goïc, G.; Brambilla, L.; Degrigny, C.; Marzani, F.; Mansouri, A. LightBot: A Multi-Light Position Robotic Acquisition System for Adaptive Capturing of Cultural Heritage Surfaces. J. Imaging 2022, 8, 134. [Google Scholar] [CrossRef] [PubMed]
  41. Castro, Y.; Pitard, G.; Zendagui, A.; Le Goïc, G.; Brost, V.; Boucher, A.; Mansouri, A. Light spatial distribution calibration based on local density estimation for reflectance transformation imaging. Int. Soc. Opt. Photonics 2019, 11172, 65–73. [Google Scholar]
  42. Castro, Y.; Pitard, G.; Le Goïc, G.; Brost, V.; Mansouri, A.; Pamart, A.; Vallet, J.M.; Luca, L.D. A new method for calibration of the spatial distribution of light positions in free-form RTI acquisitions. In Proceedings of the Optics for Arts, Architecture, and Archaeology VII, Munich, Germany, 24–26 June 2019; Volume 11058. [Google Scholar]
  43. Drew, M.S.; Hajari, N.; Hel-Or, Y.; Malzbender, T. Specularity and Shadow Interpolation via Robust Polynomial Texture Maps. In Proceedings of the British Machine Vision Conference, London, UK, 7–10 September 2009; pp. 114.1–114.11. [Google Scholar]
  44. Pitard, G.; Le Goïc, G.; Mansouri, A.; Favreliere, H.; Désage, S.F.; Samper, S.; Pillet, M. Discrete Modal Decomposition: A new approach for the reflectance modeling and rendering of real surfaces. Mach. Vis. Appl. 2017, 28, 607–621. [Google Scholar] [CrossRef]
  45. Gautron, P.; Krivanek, J.; Pattanaik, S.; Bouatouch, K. A Novel Hemispherical Basis for Accurate and Efficient Rendering. In Proceedings of the Eurographics Symposium on Rendering, Lyon, France, 25–27 June 2014. [Google Scholar]
  46. Le Goïc, G. Geometric Quality and Appearance of Surfaces, Local and Global Approaches. Ph.D. Thesis, Université de Grenoble, Saint-Martin-d’Hères, France, 2012. [Google Scholar]
  47. Pitard, G.; Le Goïc, G.; Favreliere, H.; Samper, S.; Désage, S.F.; Pillet, M. Discrete Modal Decomposition for surface appearance modelling and rendering. Int. Soc. Opt. Photonics 2015, 9525, 952523. [Google Scholar]
  48. Pintus, R.; Giachetti, A.; Pintore, G.; Gobbetti, E. Guided Robust Matte-Model Fitting for Accelerating Multi-light Reflectance Processing Techniques. In Proceedings of the British Machine Vision Conference, London, UK, 4–7 September 2017. [Google Scholar]
  49. Dulecha, T.G.; Fanni, F.A.; Ponchio, F.; Pellacini, F.; Giachetti, A. Neural reflectance transformation imaging. Vis. Comput. 2020, 36, 2161–2174. [Google Scholar] [CrossRef]
  50. ITU-T. P.910 (11/2021): Subjective Video Quality Assessment Methods for Multimedia Applications; International Telecommunication Union: Geneva, Switzerland, 2021. [Google Scholar]
  51. Engeldrum, P.G. Psychometric Scaling: A Toolkit for Imaging Systems Development; Imcotek Press: Winchester, MA, USA, 2000. [Google Scholar]
  52. Zendagui, A.; Le Goïc, G.; Chatoux, H.; Thomas, J.B.; Castro, Y.; Nurit, M.; Mansouri, A. Quality assessment of dynamic virtual relighting from RTI data: Application to the inspection of engineering surfaces. In Proceedings of the Fifteenth International Conference on Quality Control by Artificial Vision, Tokushima, Japan, 12–14 May 2021; Volume 11794, pp. 94–102. [Google Scholar]
Figure 1. Principle of the RTI technique, from acquisition to relighting. The RTI acquisition system captures images of the surface under varying light directions. (a–c) Modeling uses the acquisitions to model the angular reflectance behavior at each pixel. Visual relighting reconstructs, from the models, an image of the surface lit from a specific light position.
Figure 2. Surface samples: (a) S1 = industrial paper; (b) S2 = Spectralon® Diffuse Reflectance Standard (SRS-05-020); (c) S3 = brushed metallic surface.
Figure 3. (a) Custom RTI acquisition system (ImViA Laboratory). (b) The associated user interface.
Figure 4. (a) Chosen light path. (b–e) Acquisition positions associated with the parameter P2 modalities.
Figure 5. User interface for the psychometric experiment (S1 sample). The acquired raw video sequence was presented in the top-left screen zone, and the three other areas were dedicated to the rendering videos obtained with the three tested approximation models (PTM, HSH, and DMD), which were randomly positioned.
Figure 6. The six experiment sequences, numbered from 1 to 6, are displayed in the central graph according to their temporal and spatial perceptual information. These six sequences cover the extremal values contained in the whole sequence set as well as some average sequences (1, 5–6). Sequence 2 has high spatial perceptual information, while sequence 4 presents high temporal perceptual information. These six sequences were chosen for the training session.
Figure 7. Examples of images of appearance renderings with the three implemented models (random display) extracted from the DoE experimental videos (magnification: 40%; sampling density: 50). The top-left sub-image is the reference (acquisition). (a,b) Surface sample S1. (c,d) Surface sample S2. (e,f) Surface sample S3.
Figure 8. Percentage of participants’ clicks per approximation model: (a) global and (b) per sample.
Figure 9. Mean scores of the global participant voting sequences per experiment parameter P1–4 on the experiment surfaces S1–3.
Figure 10. Mean scores of the consistent voting sequences per experiment parameter P1–4 on the experiment surfaces S1–3.
Figure 11. Mean scores of the non-consistent voting sequences per experiment parameter P1–4 on the experiment surfaces S1–3.
Figure 12. Time spent for consistent and non-consistent evaluations for each participant.
Table 1. Participants’ preferred choices per reconstruction model and screen zone.

Model   | Top Right | Bottom Left | Bottom Right
PTM (%) |     4     |      8      |       6
HSH (%) |    37     |     35      |      39
DMD (%) |    59     |     56      |      55
