Article

A Novel Active Imaging Model to Design Visual Systems: A Case of Inspection System for Specular Surfaces

by Jorge Azorin-Lopez *, Andres Fuster-Guillo, Marcelo Saval-Calvo, Higinio Mora-Mora and Juan Manuel Garcia-Chamizo
Department of Computer Technology, University of Alicante, Carretera San Vicente s/n, San Vicente del Raspeig, Alicante 03690, Spain
* Author to whom correspondence should be addressed.
Sensors 2017, 17(7), 1466; https://doi.org/10.3390/s17071466
Submission received: 29 March 2017 / Revised: 7 June 2017 / Accepted: 20 June 2017 / Published: 22 June 2017
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2017)

Abstract:
The use of visual information is very well known as an input from different kinds of sensors. However, most perception problems are modeled and tackled individually. It is necessary to provide a general imaging model that allows us to parametrize different input systems as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including camera, lighting system and the object to be perceived) in order to propose solutions for automated visual systems that present perception problems. As a concrete case study, we instantiate the model in a real and still challenging application: automated visual inspection. It is one of the most widely used quality control systems for detecting defects on manufactured objects. However, it presents problems for specular products. We model these perception problems taking into account the environmental conditions and camera parameters that allow a system to properly perceive the specific object characteristics needed to determine defects on surfaces. The validation of the model has been carried out using simulations, which provide an efficient way to perform a large set of tests (different environment conditions and camera parameters) as a previous step to experimentation in real manufacturing environments, which are more complex in terms of instrumentation and more expensive. Results prove the success of the model application, adjusting scale, viewpoint and lighting conditions to detect structural and colour defects on specular surfaces.

1. Introduction

Scientific models and theories aimed at explaining the behaviour of specular reflections are sufficiently consolidated. However, the automatic processing of scenes containing specular surfaces using artificial vision techniques still presents unsolved problems. Vision systems designed to deal with specular objects must cope with the optical difficulties associated with this type of material [1]. The surfaces have a high reflection coefficient which causes undesired reflections and shine, concealing, in some cases, chromatic, morphological and topographical information about the object.
Most artificial vision techniques ignore specular reflections and focus on the diffuse component of the interaction of light with objects. Thus, these techniques, which are conceived to deal with Lambertian surfaces, produce wrong results when other surfaces are in the visual scene [2,3]. The solutions put forward in the literature adopt opposite ways of approaching the problem: developing methods to detect specularities in images, in order to take advantage of existing artificial vision techniques; and designing new vision techniques that explicitly deal with these surfaces.
The methods designed for detecting specularities in the scene differ depending on the vision level in which they are approached. They take advantage of certain phenomena and characteristics of light to separate the contribution of diffuse and specular reflection: spectral distribution of reflections [4,5,6,7], polarization [8,9,10,11], analysis of the behaviour of specularities in several images [12,13,14,15,16], and combinations of them [17,18,19]. They are used either to avoid or to remove specularities in the original image. The solutions in the last case are at the low level of vision systems because reflections and shine are considered as noise to be removed. They offer a filtered image to be processed at higher levels.
Designing new vision techniques allows specular reflections to be considered as a peculiar characteristic of the surface, which improves the performance of the vision process [20,21,22,23,24,25,26]. They use mechanisms of active vision that, in some cases, are modifications or refinements of classic techniques. They are generally focused on the extraction of the object shape [27,28,29,30,31,32,33,34,35,36,37,38].
The techniques developed to deal with specular surfaces, either for detecting specularities or for extracting the object shape, must fulfill certain requirements, making them only viable in specific applications: for example, requirements related to specific electromagnetic characteristics of objects (i.e., specific methods for metallic or dielectric objects), having previous knowledge of the geometry of the scene or of the surface reflectance, etc.
Specifically, automated visual inspection is the most used tool for testing products in industry due to its ease of use and cost [39,40,41,42]. However, there are very few vision systems that deal with the surface inspection of specular products. They share the aforementioned drawback of generic vision systems for specular objects: they are very specific, since they assume particular constraints. The lighting of the scene and the acquisition equipment are determinant factors in the proposed solutions since they help the detection of defects in images [43,44,45,46,47].
Techniques that use structured lighting appear in few systems cited in the literature. They are considered to be the most reliable and suitable for inspecting the 3D shape of products [36,48,49,50]. Moreover, they have an advantage over other techniques, including laser, time of flight or LIDAR, because the same sensor is used to determine colour information instead of acquiring colour and shape information independently. Techniques differ in the way they acquire the lighting pattern, by projecting it on the surface [51,52,53] or focusing the system on the reflection [36,48,54,55] (assuming the object is part of the optical system onto which lighting patterns are projected); in the equipment used to generate patterns [56,57,58] (screens, projectors, etc.); and in the method used to codify patterns [59,60,61,62,63].
Proposed solutions try to satisfy specific requirements by means of adapting to the application domain and to specific constraints of the products. A system designed to satisfy the quality control of a product generally cannot be applied to another system. As a consequence, the contribution of this paper is to provide an active vision model able to explain the problem of inspecting specular surfaces and able to help in designing vision systems for this purpose. Moreover, the paper proposes a method based on the model that is able to minimize the negative effects of specular surfaces in visual inspection and able to take advantage of specular reflections as a peculiar characteristic. The method is focused on controlling the acquisition conditions (e.g., lighting angles, viewpoints, chromaticity and other lighting characteristics, etc.) to maximize the likelihood of detecting defects. Particular characteristics of the inspection problem make control of the acquisition conditions possible.
In order to validate the model the use of simulation is proposed as an inspection system design methodology that could be systematically applied, studying conditions in which the inspection has to be carried out and designing solutions in a flexible way. Virtual inspection makes use of the virtual manufacturing technology to model and simulate the inspection process, and the physical and mechanical properties of the inspection equipment, to provide an environment for studying the inspection methodologies [64]. Simulation based on virtual imaging enables the rendering of realistic images, providing a very efficient way to perform tests compared to the numerous attempts of manual experiments [65]. The introduction of simulation provides a flexible and low-cost method (compared with experimentation in the laboratory) of testing original hypotheses and the benefits that can be drawn from this research.
The rest of this paper is organized as follows. Section 2 describes the active vision model. The method based on the model for inspecting specular surfaces is developed in Section 3, and Section 4 details how the method controls the environmental parameters. Section 5 and Section 6 evaluate the proposed method by controlling image acquisition conditions. Finally, Section 7 concludes the paper.

2. Active Vision Model to Deal with Specular Surfaces

We are interested in modeling the automatic process of artificial vision in order to provide solutions to the problem of inspecting specular surfaces. First of all, a model describing the image formation and the variables that take part in this process is presented.
An image I is defined as a two-dimensional representation provided by F. Let F be a function that models a visual acquisition system, VAS. It includes all equipment and scene configuration used to capture an image: lighting, positions, viewpoints, cameras, etc. Let ρ be a vector made up of scene magnitudes that contribute to the formation of I (Equation (1)). Each vector ρ is an element of a representation space P related to optical magnitudes of the visual perception phenomenon.
$I(x, y) := F(\rho), \quad \rho = (\rho_1, \rho_2, \rho_3, \ldots, \rho_n) \in P$
The components ρ i of the vector of scene magnitudes are measurable physical values involved in the process of image formation. They could be, in practice: scale, viewpoint, light intensity, frequency, saturation, etc. Each component could be modeled as a function depending on three inputs Equation (2): the subject of interest, m, in the scene (e.g., the object to be inspected), the environment, e, in which the subject is placed and, finally, the camera, c, that captures images from the scene.
$\rho_i = \rho_i(m, e, c)$
The contribution of each element (m, e and c) can be expressed as three vectors made up of magnitudes: μ i related to the object Equation (3), ϵ i related to the environment Equation (4) and γ i related to the camera Equation (5). Intensity and wavelength of light sources, medium of transmission, relative position between scene and vision device, are examples of environment variables ϵ i . Regarding the camera contribution γ i , the variables are related to the sensor characteristics, optical and electronic elements: zoom, focus, diaphragm, size of the sensor, signal converters, etc. Finally, the reflectance, colour, shape, topography of the object are examples of object variables μ i . The values of each vector establish elements of the set M for the object, E for the environment and Γ for the camera. In Diagram 1 an outline of the magnitudes can be found.
$M = \{m_0, m_1, m_2, \ldots\}, \quad m_i = (\mu_1, \mu_2, \ldots, \mu_m)$
$E = \{e_0, e_1, e_2, \ldots\}, \quad e_i = (\epsilon_1, \epsilon_2, \ldots, \epsilon_n)$
$\Gamma = \{c_0, c_1, c_2, \ldots\}, \quad c_i = (\gamma_1, \gamma_2, \ldots, \gamma_l)$
To define our model, it is worth recalling sensitivity, an important static characteristic of a sensor. The sensitivity is the slope of the calibration curve (see Figure 1). First, we can define the calibration curve as the function that maps a physical scene magnitude to its representation in the image space. Depending on the camera parameters, the calibration curve will be a different function for different scene magnitudes. For instance, a camera with a large depth of field will have a smoother calibration curve for intensity measurement (the camera will be able to distinguish values along a larger set of scene magnitudes for intensity) than a camera with a short depth of field, which will have a very abrupt calibration curve for the same scene magnitude (only a small set of intensity values will be distinguishable). With this function, sensitivity indicates the detectable output change for the minimum change of the measured magnitude. As a naive example, the detectable output could be the ability to perceive two colours that are actually different in the real scene; if the colour change is too small, the sensor will return the same intensity value. In our case, sensitivity is the detectable change in F for the minimum change of the scene magnitudes:
$\mathrm{sensitivity} = \frac{\Delta F(\rho)}{\Delta \rho}$
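As a minimal numerical sketch of this definition (not taken from the paper), the sensitivity at a working point can be estimated as a finite difference of an assumed, illustrative acquisition function F over a single scalar scene magnitude ρ:

```python
import numpy as np

def acquisition_F(rho):
    """Toy acquisition function F(rho): a saturating response curve standing in
    for the mapping from a scene magnitude to an image value."""
    return 255.0 / (1.0 + np.exp(-0.05 * (rho - 100.0)))

def sensitivity(F, rho, delta=1e-3):
    """Finite-difference estimate of Delta F(rho) / Delta rho at a working point."""
    return (F(rho + delta) - F(rho)) / delta

# The slope (sensitivity) is highest near the middle of the calibration curve
# and drops towards the saturated ends of the measurement range.
for rho_t in (20.0, 100.0, 180.0):
    print(f"rho = {rho_t:6.1f}  sensitivity = {sensitivity(acquisition_F, rho_t):.4f}")
```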
Diagram 1. Levels of magnitudes involved in the formation of the image. The image function as the most abstract level on the top, then each scene magnitude, the three elements that compose ρ, and the individual measurements of each element on the bottom of the tree.
Usually, the camera parameters are calibrated to a set of values γ_i so that the sensitivity is optimized for all variables of ρ simultaneously with respect to a single metric (not necessarily maximized for each variable ρ_i). For example, a camera could be calibrated using a specific zoom, focus and sensor size, optimizing the sensitivity of the system to perceive the colour of a subject over a wide range of object distances and viewpoints, but the system is not optimized separately for distances or viewpoints. However, a given camera c has maximum sensitivity for one value of each ρ_i. For convenience, we define the tuning point as the point ρ_s in the scene magnitude space P, for each camera of the set Γ, at which the sensitivity of the VAS is optimum (see Figure 1). The sensitivity generally decreases for values of ρ different from ρ_s.
In the same way, since the VAS depends on the contribution of the environment e, for each value of the set E, the corresponding point ρ t in the scene magnitudes will be named as the working point. The detected output of this point is related to the sensitivity curve of the VAS for each of the magnitudes ρ i because it restricts the limits where the system can work. In average conditions or in simple approaches to vision problems, its effect is usually considered to be negligible because perception takes place close to ρ s . This simplification is unacceptable in the case of adverse conditions as it occurs, for example, in dark environments, remote objects from the camera, or in the presence of specular objects that limit the capability of the VAS to perceive the scene.
Acquisition of the scene and its representation on a plane carried out by F (Equation (1)) cause situations, related to vision in adverse conditions, where it is not possible to distinguish between the different scene magnitude vectors ρ that contribute to an image. The capacity to discern elements of P is related to the measurement in the image I of the magnitudes m, e, and/or c that contribute to each of the components ρ_i. A specific application could be interested in knowing the lighting intensity of a scene from the environment e, or the focal length from the camera c, for calibration purposes. Generally, however, the contribution of the object m to the image, and therefore its magnitudes μ_i, is the aim of the measurement. For example, as we can see in Figure 2, a VAS used to perceive the colour (μ_2) of objects is able to distinguish the different objects in the scene. However, if it is used to perceive shape (μ_1) using the same camera (including all variables γ_i), neither objects m_8 and m_9 nor m_10 and m_11 could be distinguished from each other (as mentioned above, a given camera has maximum sensitivity for a specific value of each ρ_i). Hence, the capacity to discern the magnitudes that contribute to the vector ρ can be delimited to distinguishing elements of the set M in the image (e.g., the colour of a region of an object, the shape of a surface, or both). It is important to understand that the scene magnitudes are continuous, not discrete. The representation in Figure 2 shows a set of magnitudes that are distinguishable in the space of the image I = F(ρ), but it is only an example of certain magnitudes in the continuous space of magnitudes P.
An object can be distinguished from another one in the VAS when they are distinguishable in the measurement performed in F (i.e., in the vertical axis of Figure 1 and Figure 2A). Let Ω_{m_i} be the set of objects that can be distinguished from a specific object m_i, and let χ be the minimum difference perceptible by the system (sensitivity); then Ω_{m_i} can be established as:
$\Omega_{m_i} = \left\{ m_j \in M : (\exists \chi > 0) \;\; \left\| F(\rho^{m_j}) - F(\rho^{m_i}) \right\| \geq \chi \right\}$
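A toy sketch of this set (not from the paper), assuming each acquisition F(ρ^{m_j}) is summarized by a small feature vector and using an illustrative threshold χ:

```python
import numpy as np

def distinguishable_set(F_values, i, chi):
    """Omega_{m_i}: indices of objects whose acquired representation differs from
    that of object m_i by at least the perceptible threshold chi."""
    ref = F_values[i]
    return {j for j, f in enumerate(F_values)
            if j != i and np.linalg.norm(f - ref) >= chi}

# Hypothetical acquisitions F(rho^{m_j}) for four objects, each summarized here
# by two image measurements (e.g., mean colour and a shape score).
F_values = [np.array([0.60, 0.10]),   # m_0
            np.array([0.61, 0.11]),   # m_1, almost identical to m_0
            np.array([0.20, 0.80]),   # m_2
            np.array([0.62, 0.75])]   # m_3

print(distinguishable_set(F_values, 0, chi=0.05))   # {2, 3}: m_1 is not in Omega_{m_0}
```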
Figure 2B shows the measurement performed in F analysed from the point of view of the object space M, considering just colour and shape. Objects close to m_1, in the yellow area, are not distinguishable from that object in this example. However, they are distinguishable for the object m_2 (light green area). Following this example, m_10 ∈ Ω_{m_9} but m_10 ∉ Ω_{m_11}, if the measurement performed in F takes into account both colour and shape.
A VAS has to deal with different situations that are a consequence of the subsets of P delimited by the problem to be solved (e.g., delimited by the objects or by the characteristics to be analysed, by the environments, by the cameras, etc.). A minimum value of sensitivity χ can be established for which any object is distinguishable from another considered in the acquisition (see Figure 1). The values of the vector of magnitudes ρ for which the sensitivity is higher than the threshold χ form the subset S of P defined by Equation (8).
$S = \left\{ \rho_i : (\exists \chi > 0)(\forall m_k, m_j \in M) \;\; \frac{\left\| F(\rho_i) - F(\rho_{i-1}) \right\|}{\Delta \rho} \geq \chi \Rightarrow m_j \in \Omega_{m_k} \right\}$
There are no perception difficulties for situations of the subset S ⊂ P. They involve values of environment and camera, in addition to characteristics of objects, that make up magnitudes ρ of the set S (Equation (8)). In other words, it is possible to distinguish the images of two different objects (m_i and m_j in Equation (9)) from the acquisition performed by F if the VAS is working on points of S (ρ^{m_i} denotes a scene magnitude vector whose object component is m_i).
$(\forall \rho^{m_i}, \rho^{m_j} \in S)(\forall m_i, m_j \in M) \;\; (m_i \neq m_j) \Rightarrow \left\| F(\rho^{m_i}) - F(\rho^{m_j}) \right\| > 0$
Vision systems working on scene magnitudes of the complement of S, S^c (P \ S), present the aforementioned adverse conditions for distinguishing characteristics of objects from images. Hence, solutions have to be provided to achieve distinct images from different objects that the camera perceives as the same one. These solutions should be able to compensate for the low sensitivity in S^c (sensitivity less than χ in Figure 1). Among the three variables that provide values to the scene magnitudes, the object is a constant, since it is the subject of interest. However, environment conditions and camera characteristics can be modified to set up scene magnitudes ρ of the subset S. Thus, the VAS will be able to obtain different images from different objects in the scene in order to distinguish them (Equation (9)). For this purpose, we propose two complementary alternatives (Diagram 2):
  • System Calibration (calibrating the system). This alternative tries to minimize the distance between the working point ρ t and the tuning point ρ s .
  • Measurement Enhancement (conditioning the measurement). This alternative enhances the target measurements or parameters and can be considered as conditioning the measurement performed by the VAS.
System Calibration consists of shifting one of the points so that the working point is an element of the set S. The goal could be generating a new image of the object m_i by a transformation Υ_S (Equation (10)) in which the working point (ρ_k^{m_i}) is close enough to the tuning point. Do not confuse calibrating the system with calibrating the camera. The alternative presented here could be calibrating the sensor as well as changing other parameters of the environment or of the object of interest. Hence, to carry out this alternative it is necessary to adjust the environment to shift the working point, for example by moving the object closer to the camera, adjusting lighting conditions, etc. (Figure 3 shows an outline of this process). On the other hand, in order to shift the tuning point (see Figure 4), the camera could be recalibrated or new acquisition equipment can be used (traditionally this is done by replacing the camera with a more suitable one). In this case, the subset S of P changes for the new camera from S_o to S_n.
$(\exists \Upsilon_S)(\exists \rho_k^{m_i} \in S)(\forall \rho^{m_i} \in P) \;\; F(\rho_k^{m_i}) = \Upsilon_S\left(F(\rho^{m_i})\right)$
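A minimal sketch of this idea (not the paper's implementation), assuming a single scalar scene magnitude and a finite list of candidate environment adjustments; all names and values are hypothetical:

```python
def system_calibration(rho_t, rho_s, adjustments):
    """Upsilon_S as a search: choose the environment adjustment that moves the
    working point rho_t as close as possible to the tuning point rho_s."""
    best = min(adjustments, key=lambda a: abs((rho_t + a) - rho_s))
    return rho_t + best, best

# Hypothetical values: the camera is most sensitive at rho_s = 40 (e.g., an
# object distance in cm), the object currently sits at rho_t = 75, and the
# environment allows these candidate shifts (moving the object, refocusing...).
new_working_point, move = system_calibration(75.0, 40.0, [-40.0, -20.0, 0.0, 10.0])
print(new_working_point, move)   # 35.0 -40.0
```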
Diagram 2. Diagram with the different alternatives to improve the system perception.
For the second alternative, Measurement Enhancement aims to directly influence the sensitivity curve of the acquisition system. In a nutshell, it tries to highlight or enhance the parameters ρ that are to be perceived. It can be carried out by means of two further alternatives. First, the output signal of the perception system can be amplified. That is, the classic conception of measurement system amplification at the signal conditioning step. The limitations of this technique are related to increasing the amplitude of the signal in ranges of minimal sensitivity (both the minimum and maximum of the sensor range, where the contribution of the object to the output signal is insignificant and the signal-to-noise ratio is very low). In this way, acquisition system improvements are limited because they only apply in ranges of intermediate sensitivity.
The other alternative for enhancing the target measurement is increasing the differences of the values of the input magnitudes. This is to operate with large differences (η) of the input (ρ) to increase the differences of the output F for different objects (m_i and m_j) until the differences are perceptible at the output (m_j ∈ Ω_{m_i}). Figure 5 schematically shows this concept. Elements used as input magnitudes of the set P to obtain large differences of the input make up the subset A in Equation (11). The goal is to reduce the number of possibilities that the VAS has to deal with; for example, restricting camera positions, viewpoints or lighting characteristics (in this paper, a lighting pattern has been used to inspect specular surfaces).
$A = \left\{ \rho_k^{m_i} : (\exists \eta > 0)(\forall \rho_l^{m_j} \in P) \;\; \left[ (m_i \neq m_j) \Rightarrow \left\| \rho_k^{m_i} - \rho_l^{m_j} \right\| \geq \eta \right] \Rightarrow m_j \in \Omega_{m_i} \right\}$
We model the transformation able to increase the differences of the values of the input magnitudes as Υ C :
$(\exists \Upsilon_C)(\exists \rho_k^{m_i} \in A)(\forall \rho^{m_i} \in P) \;\; F(\rho_k^{m_i}) = \Upsilon_C\left(F(\rho^{m_i})\right)$
The techniques are not exclusive and can be used together for designing vision systems in which images of different objects can be distinguished. An example could be a system that needs to perceive two colliding objects separately through a sensor with a fisheye lens. First, a Υ_S transformation could be applied by calibrating the camera to reduce the distortion. Afterwards, Υ_C could consist of colorizing the objects to enlarge the perceived difference between them.

3. Method for Inspecting Specular Surfaces

In this section, we use the previous model to specify a method for inspecting specular surfaces. An automated visual inspection (AVI) system aims to determine whether a product differs from the manufacturer’s specifications. This implies that the AVI has to measure the object magnitudes in the scene in order to compare them with the values of the magnitudes established in the design step of the product (e.g., reflectance, colour, shape, topography).
Two sets of objects are considered for modelling the AVI: M_P and M_I. The first one is composed of objects that are made up of the magnitudes μ_i defined in the manufacturing specifications. The set M_I is composed of objects to be inspected looking for any deviation from objects of the set M_P. These deviations cause, depending on the magnitude, different defects: morphological, chromatic, topographic defects, etc. The union of M_P and M_I is the subset M_S = M_P ∪ M_I ⊂ M (Equation (3)) of the possible objects that the inspection system must consider.
The inspection goal can be modelled as in Equation (13). The AVI has to decide whether an object m_j of the set M_I can be distinguished (Equation (9)) from an object m_k of the set M_P if any of the magnitudes μ_i^{m_j} differs by some value η from the original μ_i^{m_k}. The object m_k is considered the object model of m_j and contains the manufacturing specifications.
$(\exists i)(\exists \eta > 0)(\forall m_j \in M_I)(\forall m_k \in M_P) \;\; \left\| \mu_i^{m_j} - \mu_i^{m_k} \right\| \geq \eta \Rightarrow m_j \in \Omega(m_k)$
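A simple illustrative sketch of this decision rule (leaving the perception step aside and assuming the object magnitudes are directly available as tuples; all values are hypothetical):

```python
def is_defective(mu_inspected, mu_model, eta):
    """Flag an inspected object as deviating from its model when any magnitude
    mu_i differs by at least eta (the perception step of Equation (13) aside)."""
    return any(abs(a - b) >= eta for a, b in zip(mu_inspected, mu_model))

# Hypothetical magnitudes: (reflectance, mean grey level, crater depth in mm).
mu_model     = (0.75, 0.60, 0.00)
mu_inspected = (0.75, 0.60, 0.12)          # topographic defect
print(is_defective(mu_inspected, mu_model, eta=0.05))   # True
```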
In order for the deviations ( η ) of the object magnitudes μ i m j from μ i m k to be detected in the image Equation (13), contributions from the environment and the camera to ρ must allow a suitable sensitivity to the AVI.
Since it was previously shown that the sensitivity of the VAS (the AVI in this specific case) is optimized simultaneously for all variables of ρ, the AVI sensitivity is not necessarily maximized for each variable ρ_i. It is not maximized for each object m and each of its variables μ_i. Moreover, an AVI is generally designed for measuring a subset of the object magnitudes μ_i in the image (e.g., colour, shape). Therefore, the sensitivity of the AVI can be very low for some of the magnitudes μ_i to be measured. That is, it is possible that the perception is suitable for measuring the surface colour or shape but not for both together. In other words, the intended measurement determines the perception capacity of the AVI. Calibration parameters or environment conditions must be adjusted to adequately perceive the magnitudes μ_i. This process requires great knowledge of the problem and accuracy in the solution.
For specular surfaces, the difficulty of perceiving in scene magnitudes of S^c is a consequence of the surface reflectance. The environment conditions produce a more important effect on the perception of the object than for other types of surfaces (e.g., Lambertian surfaces). For example, if it is considered that the calibration parameters γ_i are the same for two different images, the difficulty of perceiving under different environment conditions is given by:
  • The mirror itself: a given camera can be confused so that it cannot distinguish between the environment and the object. The spatial modulation of the environment contribution creates the illusion of the objects in a scene.
  • The lighting of the environment can cause shine on the surface and confuse the two images with different objects. For example, the image formed by a grey surface with a high reflection coefficient illuminated by white lights can be confused with the image of a lighter surface object.
The specular reflection causes the working point ρ_t to be easily located at the limit of the range of the VAS, at the maximum value of ρ that can be measured. Sensitivity is very low for the perception of objects in these conditions because specularity saturates the camera sensor.
The viable solution for compensating the lack of sensitivity produced in these ranges of the scene magnitudes ρ (in order to obtain an improved image) is related to the Measurement Enhancement performed by the perception system (see Section 2). It is necessary to increase the differences in the values of the input magnitudes, Δρ, to raise the differences of the output magnitude until they can be measured (Equation (12)).
In other words, using the resolution (another important sensor characteristic, which indicates the smallest change in the magnitude being measured that the sensor can detect, e.g., the smallest feature size of an object or the smallest change in colour that the VAS can distinguish): since the resolution in those ranges is very low, it is necessary to force large differences at the input. Thus, it is necessary to work with a subset of scene magnitudes A ⊂ P so that the perception is suitable at points around the tuning point and is facilitated in the ranges of sensor saturation. Differences in F will enable the perception among objects using the set S_A in Equation (14). Its elements do not necessarily correspond to those of the subset S (Equation (8)).
$S_A = \left\{ \rho_i \in A : (\exists \chi > 0)(\forall m_i \in M_P)(\forall m_j \in M_I) \;\; \frac{\Delta F(\rho_i)}{\Delta \rho} \geq \chi \Rightarrow m_j \in \Omega_{m_i} \right\}$
In addition, if the Measurement Enhancement is not sufficient to discern the defects in the inspection, the distance between the working point ρ t and the tuning point ρ s must be minimized using the Equation (10). In this case, minimization is performed on the input magnitudes of the set A. Therefore, the transformation Υ S operates with the values of the input magnitudes S A Equation (14). In consequence, Equation (15) models the proposed solution for inspecting specular surfaces combining the two proposed transformations Υ S and Υ C according to the objects to be inspected and the possible defects to be detected (see Figure 6).
$(\exists \Upsilon_C)(\exists \Upsilon_S)(\exists \chi, \eta > 0)(\exists \rho_k^{m_i}, \rho_l^{m_j} \in A \subset P)(\exists \rho_o^{m_i}, \rho_q^{m_j} \in S_A)(\forall \rho^{m_i}, \rho^{m_j} \in P)$
$F(\rho_k^{m_i}) = \Upsilon_C\left(F(\rho^{m_i})\right), \quad F(\rho_l^{m_j}) = \Upsilon_C\left(F(\rho^{m_j})\right)$
$F(\rho_o^{m_i}) = \Upsilon_S\left(F(\rho_k^{m_i})\right), \quad F(\rho_q^{m_j}) = \Upsilon_S\left(F(\rho_l^{m_j})\right)$
$(m_i \neq m_j) \Rightarrow \left\| \rho_k^{m_i} - \rho_l^{m_j} \right\| \geq \eta \Rightarrow \left\| F(\rho_o^{m_i}) - F(\rho_q^{m_j}) \right\| \geq \chi$

4. AVI Method Controlling Environmental Parameters

As it was previously shown, increasing the differences of the input magnitude values Δ ρ can be performed by means of the control of the environmental conditions or of the camera parameters. These are two input parameters of the components ρ i Equation (2) that contribute to the formation of the image I Equation (1) and are variables that can be affected by the system. The third input parameter, the object, is considered as a constant because it is the object to be inspected.
It is known that the camera, as a photoelectric transducer, provides a measurement related to the scene radiance (see Figure 7). It is a function of environment and object magnitudes. The contribution of interest to the radiance is the radiance coming from the inspected object. This radiance, L R , is related to the object reflectance, f R [66] and the irradiance E incident on the surface according to Equation (16).
$L_R = f_R \, E_i$
Reflectance f_R contributes the necessary information about the behaviour of the light interacting with the surface of the object. Irradiance, the second factor of Equation (16), is related to the environment variables; to be precise, to the electromagnetic radiation that reaches the surface of the object. These magnitudes affect the contribution of the object in the camera.
Controlling the luminous energy of the environment, a function of the lighting sources and the transmission modulations, is a way of working directly with the input magnitudes of the system. Without considering any other perception characteristics, irradiance is the key. Structuring the energy that reaches the object is a method of affecting the environment or the object. It enables areas of the surface of the object to be isolated, the contrast in the camera to be increased or decreased, etc. Thus, this variable allows us to force large differences at the input in order to perceive changes in the output image.
Measurement Enhancement is the task of the transformation Υ_C (Equation (15)). In this paper, we propose an instance Υ_Φ of Υ_C focused on the lighting conditions of the environment. The transformation Υ_Φ structures the lighting in order to establish regions on the object radiated by different spectral powers Φ. Areas of different radiances are formed in this way. The regions can be formed by means of spatial modulation, forming a grid. Alternatively, a sequence of lightings using temporal modulation, or a combination of both, can be used. The regions contribute with different radiances to the input of the camera, establishing independent areas in the image. The increase of the differences at the input of the system is produced in the space or time domain (Equation (12)).
In this paper, the transformation Υ_Φ is carried out by means of spatial modulation (space domain). The different areas in the image are formed with characteristics proportional to the irradiance E. The object irradiance E actively affects the perception process. The environment parameters are established so that large lighting gradients are formed on the object. The projection of the radiance L, which arrives at the optics, forms an image with regions. The gradient of the image is a function of the one generated on the object. The greater the gradient of the pattern, the greater the gradient formed on the photodetectors. Then, a larger difference among adjacent photodetectors is obtained. Therefore, each photodetector has a spectral power associated with it at each instant (which is a function of the irradiance and of the object characteristics).
Structuring the lighting enables the projection of a pattern on the surface object. This pattern is deformed by the object characteristics. It may be considered that the irradiance E is modulated by the object. Then, any other object modulates the generated pattern in a different way, and, therefore, modulates the spectral power, which is received in the space or the time by each photodetector, in a different way (in terms of inspection systems, any object with defects will modulate the generated pattern in a different way than the same object without defects). Controlling the input values of the perception system, for each photodetector, enables a reduction in the elements of the set of scene magnitudes that the system has to deal with. In addition, the pattern has to be configured so that the differences of the output are perceptible according to the magnitudes of the object (shape, colour, topology, etc.) to be perceived.
Control of the energy that reaches the surface of the object is needed in order to design a certain pattern. The task depends on the number of environment lights, the spatial distribution of lighting, the wavelengths that make up each of the sources, the time, the modulations of transmission, etc. Moreover, the pattern of spectral power Φ could be different in order to inspect a specific magnitude μ_i of the object m. For this purpose, the transformation Υ_Φ will determine the spectral power Φ ∈ P as a function of μ_i ∈ U and m ∈ M:
$\Upsilon_\Phi : M \times U \rightarrow P$
For practical considerations, it is convenient to establish the function of energy determining the spectral power Φ in terms of the field radiance L_f by considering four parameters: s ∈ S, Δ ∈ D, ξ ∈ X and t ∈ R (see Figure 8).
$L_f : S \times D \times X \times R \rightarrow P$
Hence, the transformation Υ Φ could be defined by:
$\Upsilon_\Phi(m, \mu_i) := L_f(s, \Delta, \xi, t)$
The regions of lighting R_OL on the surface of the object can be determined from the parameter s. It is a function that establishes the morphology of the regions of the pattern formed on the surface. The parameter Δ determines the set of lighting characteristics that radiates each of the established regions R_OL. The function ξ determines the spatial configuration of energy reaching the object; its task is to distribute the lighting characteristics of the set Δ over each of the regions determined by s. Finally, the function depends on time t: if the structured lighting is temporal, it is necessary to generate a sequence of patterns. Moreover, for practical reasons, we define R_OG as the regions of lighting on the source. In the same way as R_OL, R_OG describes the morphology of the regions of the pattern, in this case formed on the lighting source (see Figure 8).
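The following sketch illustrates how the parameterization of L_f could be organized in code; it is an illustration, not the simulator used in the experiments. The grid shape stands in for s, and the example ξ function, wavelengths and grid size are assumptions chosen only for the demonstration:

```python
from typing import Callable, Sequence

def field_radiance(grid_shape: tuple,
                   delta: Sequence[float],
                   xi: Callable[[int, int, Sequence[float]], float],
                   t: int = 0) -> list:
    """Sketch of L_f(s, Delta, xi, t): s is summarized by the grid shape of the
    regions R_OG, delta holds the available spectral powers (wavelengths), xi
    assigns a power to each region, and t would index a temporal sequence
    (unused here, since only spatial modulation is considered)."""
    rows, cols = grid_shape
    return [[xi(x, y, delta) for x in range(cols)] for y in range(rows)]

# Hypothetical instantiation: a 4 x 4 grid, two monochromatic sources, and a xi
# that alternates wavelengths along the x axis (a 'Linear X'-like assignment).
delta = [450.0, 650.0]                               # wavelengths in nm
xi_x = lambda x, y, d: d[x % len(d)]
for row in field_radiance((4, 4), delta, xi_x):
    print(row)
```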

5. Experiments

In this section, experiments performed by simulation to validate the model for dealing with specular surfaces are presented. Specifically, the objective is to prove whether the transformation Υ_Φ is able to compensate for the lack of sensitivity that takes place for certain values of the magnitudes of the scene. In other words, to prove whether the Measurement Enhancement transformation, by actively controlling the lighting pattern, is able to detect object characteristics (e.g., defects in visual inspection) that are not detected under other conditions. The experiments are mainly based on the scale of perception and the point of view as scene magnitudes.
We propose the use of simulation based on virtual imaging as an efficient way to perform a large battery of tests (different environment conditions and camera parameters) as a previous step to experimentation in real manufacturing environments, which is more complex in terms of instrumentation and thus more expensive.

5.1. Experimental Setup

An extensive explanation of the experimental setup is given. It covers the simulator, the subject of interest, and the environmental parameters that allow the lighting patterns to be designed.

5.1.1. Simulator

A simulator has been implemented considering the model presented in Section 2. It is able to synthesize images by F ( ρ ) Equation (1). The components of ρ i Equation (2) of the scene magnitudes (scale, viewpoint, light intensity, etc.) are modelled by the characteristics of the subject of interest, m, the environment, e, and, finally, the camera, c (see Diagram 1).
According to the characteristics μ i of the subject of interest m, the simulator is basically interested in the reflection characteristics of the surfaces. The light reflection on an object depends on the atomic nature of the surface material and on the scattering and dispersion that takes place when the light contacts it. Consequently, in order to determine the reflected flow densities, the spatial configuration of the surface μ S and its electromagnetic properties, which determine the type of material, are taken into account. These properties provide the refraction index μ n of the surface, which represents the macroscopic optical properties required.
Regarding the environment e, the problem of specifying it is limited to the specific application of inspection. Therefore, in the process of transmitting the light signal from generating sources to the image, the different lighting sources are considered together with the modulations that the signal experiences, with the exception of those referring to the object to be inspected and those of the acquisition system. As environment variables, a distinction will be made between those related to lighting sources (that make up the fundamental element in the environment magnitudes) and the relative positions of the different elements involved in the scene. In order to carry out the study of local light reflection, the Bidirectional Reflectance Distribution Function (BRDF) presented by Cook and Torrance [67] has been used because different studies show its great capacity to adapt to reflectance extracted from objects [68].
Finally, variables related to the characteristics of the perception system include both the functions that define the optics and those related to the sensor itself. The aim of this research is to study the influence of environmental conditioning on the perception of the object. Then, controlled experimentation aimed to measure the radiance coming from the subject of interest is performed. Therefore, an ideal perception system is considered (images without noise, constant sensitivity for all spectral powers, constant diaphragm, etc.). It influences the radiometric measurement only with geometric variables like the focal distance and the size of the sensor. The other variables have been set to be constant.

5.1.2. Subject of Interest

Plane objects are considered in the experiments. They permit simple control of the angle formed by the elements involved in the scene over all regions of the object. This control is necessary because one of the aims of the experimentation is to carry out a thorough test of the viewpoint.
The set M_P of objects defined in the manufacturing specifications is made up of planes of 12 mm × 12 mm. The function μ_S establishes the points in a coordinate system that is local to the object, independently of microscopic shape. The surface roughness is calculated by means of the Beckmann distribution model [69], in coherence with the BRDF model used. The root mean square is set to 0.1 (rms, or m in [67]). According to electromagnetic characteristics, there are two types of plane: dielectric and metallic. Dielectric planes have a refraction index μ_n of 1.6 and metallic ones (based on chromium) have a refraction index μ_n of 2.8 and an extinction index μ_{ni} of 3.2 using the Fresnel equations. Also, other variables of the Cook-Torrance model have been considered, such as a specularity coefficient (s in [67]) of 0.75 and a constant diffuse component (R_s) associated with an RGB of (0.6, 0.6, 0.6).
The set M_I to be inspected is made up of objects with the same characteristics as those of M_P but including defects. Three types of defects have been distributed over these planes (see Figure 9): two changes in topography (a 0.6 mm-diameter crack or crater, Figure 9a,b) and a change in colour (an area of 0.6 mm × 0.6 mm, Figure 9c). In this last case, a constant diffuse component R_s associated with an RGB of (0.4, 0.4, 0.4) is established on a surface measuring 0.6 mm × 0.6 mm. In total, for each plane in M_P, 3 planes (1 per type of defect) with 25 defects each have been included in M_I. Table 1 summarizes the data used in the experiments.

5.1.3. Environment

Environment characteristics play an important role in validating the transformation Υ Φ Equation (19). Specifically, the goal is to study the transformation that configures the lighting of the scene to establish regions on the object surface radiated by different spectral powers by means of spatial modulation. Therefore, we have considered different spatial configurations of the energy that reaches the object surface according to different gradients by considering different s, Δ and ξ .
Since the objects are planes, the regions of lighting R_OL formed by s are established on a plane. In the experiments, the parameters are established from the lighting source domain using the R_OG (see Figure 8). The set of lights forms a grid so that the whole lighting extension can be defined for different conditions. Different areas of the regions of the grid R_OG are considered (0.1 mm × 0.1 mm, 0.2 mm × 0.2 mm, 0.3 mm × 0.3 mm and 0.6 mm × 0.6 mm) to determine the influence of the size of the regions relative to the areas of the defects. Then, the ratios between the size of the region and the size of the defect in one of the dimensions (0.6 mm) are: 6 (0.6/0.1), 3 (0.6/0.2), 2 (0.6/0.3) and 1 (0.6/0.6).
The set Δ (Equation (20)) is made up of lighting sources with the same characteristics (polarization, power, etc.) except the wavelengths, δ_i, which form different spectral powers, Φ_δ. The wavelengths used are from the visible electromagnetic spectrum. Also, the lighting sources only radiate at specific wavelengths (monochromatic lights).
$\Delta = \left\{ \Phi_{\delta_0}, \Phi_{\delta_1}, \Phi_{\delta_2}, \ldots \right\}$
Finally, the function ξ determines the spatial configuration of energy emitted by the lighting. The lighting characteristics of the set Δ are distributed over each of the regions R O L determined by s establishing different gradients: spatial and amplitude. In the experiments, for practical considerations, the R O G composed of a squared grid has been used (Figure 10).
In this paper, four different configurations of lighting are considered: two for spatial gradients ( ξ x and ξ x y ) and two for amplitude gradients ( ξ L and ξ I ). Regarding the former two, the function ξ establishes spectral powers of Δ Equation (20) into two different spatial distributions. Specifically, in the experiments the spatial gradient established by ξ is organized in one direction, ξ x , and in two directions, ξ x y , (see Figure 11). Let R O G ( x , y ) be the region of column x and row y of the lighting grid and let N x , N y be the column and row of neighbouring regions of the grid. A function ξ will be defined as ξ x if it only sets up different lighting characteristics in adjacent positions of an axis of the grid (Figure 10a) and the same ones in adjacent positions of the other axis of the grid. Then, any region in the grid R O G is assigned an element of the set Δ such that:
$\xi_x\left(R_{OG}(x, y), \Delta\right) = \Phi_{\delta_i} \in \Delta : \xi\left(N_y(R_{OG}(x, y)), \Delta\right) = \Phi_{\delta_i} \;\wedge\; \xi\left(N_x(R_{OG}(x, y)), \Delta\right) \neq \Phi_{\delta_i}$
For the second spatial gradient, a function ξ will be defined as ξ_xy if it sets up different lighting characteristics in all adjacent positions of a region of the grid (Figure 10b). Any region in R_OG is assigned an element of the set Δ (Equation (20)) such that:
$\xi_{xy}\left(R_{OG}(x, y), \Delta\right) = \Phi_{\delta_i} \in \Delta : \xi\left(N_y(R_{OG}(x, y)), \Delta\right) \neq \Phi_{\delta_i} \;\wedge\; \xi\left(N_x(R_{OG}(x, y)), \Delta\right) \neq \Phi_{\delta_i}$
Regarding the amplitude gradients, two configurations are also considered for ξ: Linear, ξ_L, and Interlaced, ξ_I. A function ξ will be defined as ξ_L if it sets up lighting characteristics with close wavelengths in adjacent positions N of the regions of the lighting grid R_OG (see Figure 11a), whereas the Interlaced configuration, ξ_I, maximizes the differences among wavelengths in these positions (see Figure 11b). Then, in the case of ξ_L, any region in R_OG is assigned an element of the set Δ (Equation (20)) such that:
$\xi_L\left(R_{OG}(x, y), \Delta\right) = \operatorname*{argmin}_{\Phi_{\delta_i}} \left\| \xi\left(N(R_{OG}(x, y)), \Delta\right) - \Phi_{\delta_i} \right\| \quad \text{subject to} \quad \left\| \xi\left(N(R_{OG}(x, y)), \Delta\right) - \Phi_{\delta_i} \right\| > 0$
For any function that is considered an Interlaced function, ξ_I, the R_OG is assigned such that:
$\xi_I\left(R_{OG}(x, y), \Delta\right) = \operatorname*{argmax}_{\Phi_{\delta_i}} \left\| \xi\left(N(R_{OG}(x, y)), \Delta\right) - \Phi_{\delta_i} \right\|$
Different transformations Υ Φ are formed using combinations of functions ξ accomplishing different spatial and amplitude gradients: Linear X ( ξ L , x ), Interlaced X ( ξ I , x ), Linear XY ( ξ L , x y ), and Interlaced XY ( ξ I , x y ) to carry out the experiments (see Table 2).
The function Linear X (ξ_{L,x}) is made up of a function meeting a spatial gradient in one axis of the grid (Equation (21)) and an amplitude gradient formed by the Linear configuration (Equation (23)). This function uses an ordered sequence Δ_L of the set Δ. Let Φ_{δ_i} be an element of Δ that radiates energy with the wavelength δ_i. Let δ_i and δ_e be the minimum and maximum values and let δ_n be the number of wavelengths considered in the sequence; then Δ_L is:
$\Delta_L = \left\{ \Phi_{\delta_0}, \Phi_{\delta_1}, \Phi_{\delta_2}, \ldots, \Phi_{\delta_{n-1}} \right\} : n \geq 2, \; \delta_e \geq \delta_i, \; \delta_k = \delta_i + \mathrm{mod}(k, \delta_n) \frac{\delta_e - \delta_i}{\delta_n - 1}$
An element of the sequence Δ_L is assigned to the region (x,y) of the lighting grid R_OG by the function ξ_x:
$\xi_x\left(R_{OG}(x, y), \Delta_L\right) = \Delta_L\left(\mathrm{mod}(x, n)\right)$
The differences of wavelengths among neighbouring regions are constant in one of the axes of the grid.
The function Interlaced X (ξ_{I,x}) is made up of a function that accomplishes a spatial gradient in one axis of the grid (Equation (21)) and an amplitude gradient formed by the Interlaced configuration (Equation (24)). In this paper, this function uses an ordered sequence Δ_E of the set Δ in which a maximum difference of the wavelengths between adjacent positions is established. Let Δ_t and Δ_b be the top half and bottom half of Δ_L (Equation (25)).
$\Delta_t = \left\{ \Phi_{\delta_0}, \Phi_{\delta_1}, \ldots, \Phi_{\delta_{i-1}} \right\}, \quad \Delta_b = \left\{ \Phi_{\delta_i}, \Phi_{\delta_{i+1}}, \ldots, \Phi_{\delta_{n-1}} \right\} : i = n/2, \; \Delta_t \cup \Delta_b = \Delta_L$
Then, the sequence Δ E combines Δ t and Δ b in an interlaced manner:
$\Delta_E = \left\{ \delta_0, \delta_1, \ldots, \delta_{n-1} \right\} : \delta_i = \begin{cases} \Delta_t(i/2) & \text{if } (i \bmod 2) = 0 \\ \Delta_b(\lfloor i/2 \rfloor) & \text{if } (i \bmod 2) = 1 \end{cases}$
An element of the sequence Δ E is assigned to the region (x,y) of the grid by the function ξ x Equation (26).
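As a small illustrative sketch (not the simulator code) of how the two wavelength sequences could be generated, assuming uniformly spaced wavelengths over an interval; the helper names and the 10-element example are assumptions for the demonstration:

```python
def linear_sequence(wl_min, wl_max, n):
    """Delta_L: n wavelengths uniformly spaced between wl_min and wl_max, so
    adjacent positions differ by a small, constant step (Equation (25))."""
    step = (wl_max - wl_min) / (n - 1)
    return [wl_min + i * step for i in range(n)]

def interlaced_sequence(delta_l):
    """Delta_E: interlace the two halves of Delta_L so that adjacent positions
    have a large wavelength difference (Equations (27) and (28))."""
    half = len(delta_l) // 2
    top, bottom = delta_l[:half], delta_l[half:]
    return [top[i // 2] if i % 2 == 0 else bottom[i // 2]
            for i in range(len(delta_l))]

delta_l = linear_sequence(380.0, 780.0, 10)       # visible spectrum, 10 samples
print([round(w) for w in delta_l])                # 380, 424, ..., 780
print([round(w) for w in interlaced_sequence(delta_l)])   # 380, 602, 424, 647, ...
```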
The function ‘Linear XY’ accomplishes a spatial gradient in two axes of the grid (Equation (22)) and an amplitude gradient formed by the ‘Linear’ configuration (Equation (23)). This function uses the ordered sequence Δ_L (Equation (25)) of the set Δ. An element of the sequence Δ_L is assigned to the region (x,y) of the lighting grid by the function ξ_xy:
$\xi_{xy}\left(R_{OG}(x, y), \Delta\right) = \frac{\Delta(x \bmod n) + \Delta(y \bmod n)}{2}$
Finally, the function ‘Interlaced XY’ is defined as meeting a spatial gradient in two axes of the grid (Equation (22)) and an amplitude gradient formed by the Interlaced configuration (Equation (24)). This function uses the ordered sequence Δ_E (Equation (28)) of the set Δ. An element of the sequence Δ_E is assigned to the region (x,y) of the grid by the function ξ_xy (Equation (29)).
Also, a reference lighting configuration is defined. It permits the comparison of the improvement produced by enhancing the target parameters to measure. In this case, all the regions of the grid have the same characteristics. The set Δ is built with one element (monochromatic or polychromatic lights).
$\Delta = \{\delta\}, \quad \xi\left(R_{OG}(x, y)\right) = \delta$
The parameters used in the transformations Υ_Φ are summarized below (see Table 3). The lighting covers the whole surface (12 mm × 12 mm). The areas of the R_OLs generated by s are 0.1 mm × 12 mm, 0.2 mm × 12 mm, 0.3 mm × 12 mm and 0.6 mm × 12 mm for the functions ‘Linear X’ and ‘Interlaced X’. The ‘Linear XY’ and ‘Interlaced XY’ functions form R_OLs with areas of 0.1 mm × 0.1 mm, 0.2 mm × 0.2 mm, 0.3 mm × 0.3 mm and 0.6 mm × 0.6 mm. Regarding the lighting characteristics Δ, ‘Linear X’ and ‘Linear XY’ use the sequence Δ_L (Equation (25)) with 10 elements for all cases, whereas ‘Interlaced X’ and ‘Interlaced XY’ use the sequence Δ_E (Equation (28)) with 120, 60, 40 and 20 wavelengths uniformly distributed over the visible electromagnetic spectrum (380 nm to 780 nm). Finally, the function ξ_xy is used for ‘Linear XY’ and ‘Interlaced XY’; it establishes (120 × 120), (60 × 60), (40 × 40) and (20 × 20) R_OLs on the plane. ‘Linear X’ and ‘Interlaced X’ are made up using the function ξ_x, establishing 120, 60, 40 and 20 R_OLs according to the areas of the regions considered. Figure 12 shows the different lighting patterns used in the experiments.
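As an illustrative sketch (not the simulator configuration), the four lighting patterns can be assembled from the Δ_L and Δ_E sequences and the ξ_x / ξ_xy assignments for one region size. For simplicity the sketch reuses a single 20-element sequence for both Linear and Interlaced assignments, whereas the experiments above use a 10-element Δ_L and a Δ_E whose length matches the number of regions:

```python
def build_grid(n_regions, sequence, two_axes=False):
    """Assign a wavelength to every region R_OG(x, y) of an n_regions x n_regions
    grid. One-axis patterns (xi_x) vary only along x (Equation (26)); two-axis
    patterns (xi_xy) average the x and y assignments (Equation (29))."""
    n = len(sequence)
    return [[(sequence[x % n] + sequence[y % n]) / 2.0 if two_axes else sequence[x % n]
             for x in range(n_regions)]
            for y in range(n_regions)]

# Delta_L (uniformly spaced over 380-780 nm) and Delta_E (its interlaced
# reordering), inlined here so the sketch is self-contained.
n = 20
delta_l = [380.0 + i * (780.0 - 380.0) / (n - 1) for i in range(n)]
delta_e = [delta_l[:n // 2][i // 2] if i % 2 == 0 else delta_l[n // 2:][i // 2]
           for i in range(n)]

# Reduced setup: 20 regions of 0.6 mm over the 12 mm plane (the experiments also
# use 40, 60 and 120 regions for the smaller region sizes).
patterns = {
    "Linear X":      build_grid(20, delta_l),
    "Interlaced X":  build_grid(20, delta_e),
    "Linear XY":     build_grid(20, delta_l, two_axes=True),
    "Interlaced XY": build_grid(20, delta_e, two_axes=True),
}
print({name: [round(w) for w in grid[0][:4]] for name, grid in patterns.items()})
```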

5.2. Performance Results

In order to obtain the performance results, the components ρ i Equation (2) considered are scale ρ E , angle ρ θ and intensity lighting ρ I . They are the most influential scene variables in the image formation and in the characteristics of the visual inspection systems.
The system is tuned to a set of scale values ρ_E: 1 pixel/mm, 2 pixels/mm, 5 pixels/mm, 10 pixels/mm and 15 pixels/mm. The set of the component angle ρ_θ, the angle formed by the surface normal and the camera normal, is formed by the sequence from 0° to 90° in steps of 10° (10 different angles in total). Finally, the intensity lighting ρ_I is the basic parameter measured by the camera and is related to the solution proposed for increasing the perception capacity of the system. A large set of camera, environment and object variables take part in the formation of this magnitude. The goal of the tests is to study the intensity lighting according to environment values related to the proposed transformations Υ_Φ without influencing any other variables. However, the angle formed by the surface normal and the lighting plane normal is very important in the design of inspection systems and it affects the intensity lighting ρ_I; therefore, it is taken as the parameter for ρ_I. The values contemplated in the tests range from 0° to 90° in steps of 10° (10 different angles in total).
In order to measure the effectiveness of the transformation Υ_Φ for inspection, the performance is calculated as the number of pixels that differ between the image of an object without defects and the image of the same object with defects. Specifically, this difference divided by the estimated number of pixels containing the defects, for a specific resolution, gives the success rate. This rate is calculated for all possible combinations of the variables ρ_i described above. A total of 68,000 images have been synthesized: 17,000 images of objects without defects (M_P) and 51,000 (17,000 × 3 types of defect) images of inspection objects (M_I) with 3 types of defects (2 in topography and 1 in colour). The 17,000 images correspond to 17 lighting configurations (4 Υ_Φ functions of environment conditioning using 4 different lighting areas defined by s, plus the reference lighting configuration of Equation (30)) conditioning the measurement of 2 objects (dielectric and metallic) for 500 values of the vector ρ (5 ρ_E, 10 ρ_θ and 10 ρ_I).
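A minimal sketch of this success-rate computation (not the simulator code), assuming grayscale images and a hypothetical defect region; image sizes and values are illustrative:

```python
import numpy as np

def success_rate(img_reference, img_defective, estimated_defect_pixels):
    """Pixels that differ between the image of the defect-free object and the
    image of the same object with defects, divided by the estimated number of
    pixels covered by the defects (expressed as a percentage)."""
    differing = np.count_nonzero(img_reference.astype(int) != img_defective.astype(int))
    return 100.0 * differing / estimated_defect_pixels

# Hypothetical 120 x 120-pixel images (10 pixels/mm over the 12 mm plane) where
# the defective image differs from the reference over a 6 x 6-pixel area.
rng = np.random.default_rng(0)
reference = rng.integers(0, 255, (120, 120), dtype=np.uint8)
defective = reference.copy()
defective[10:16, 10:16] += 1               # one simulated defect region
print(success_rate(reference, defective, estimated_defect_pixels=36))   # 100.0
```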
The experimental results of the average success rates for scale ρ E , angle ρ θ , lighting ρ I and the influence of the size of lighting s regions are detailed in the next sections.

5.2.1. Scale of Perception

The study of the perception scale makes it possible to discern the influence of the size of the defects in the image.
Figure 13a shows the success rates for the dielectric object. The function ‘Interlaced XY’ offers the best success rates whereas ‘Linear X’ offers the minimum of the transformations Υ_Φ. The average difference between the best function and the reference is 4.33%. The data from the functions that use only one of the maximum proposed gradients, spatial or amplitude (‘Linear XY’ and ‘Interlaced X’), are very similar; the average difference is only 0.16%.
The differences in success rates for the metallic object are more noticeable (see Figure 13b). The function ‘Interlaced XY’ shows success rates similar to the case of dielectric material; the values are practically independent of the type of material. However, the success rates of the reference lighting decrease significantly, to values between 46% and 57.9%. This is an average of 5.85% lower than the success rate of the dielectric case. Therefore, the improvement in the capacity of perception of the system is greater using the Measurement Enhancement alternative; it differs by more than 10% using the best function (‘Interlaced XY’). Also, the results show an increase in sensitivity using spatial distributions organized in two directions (ξ_xy).
It is interesting to consider the shape of the curve in Figure 13a. The graph represents the success rate of the system as a function of the scale, that is, pixels per millimetre and not pixels per defect. The defects used in the tests have a maximum size of 0.6 mm in one of their three dimensions. Therefore, the scale values ρ_E correspond to 0.6, 1.2, 3, 6 and 9 pixels per dimension of the defect. For the minimum scale (1 pixel/mm), a crack or crater defect is projected onto 0.28 pixels² and a chromatic defect onto 0.36 pixels². This corresponds to an area of 7.065 and 9 pixels² respectively in the image of a defective plane (25 defects, see Figure 9). Then, differences of 1 pixel in the image mean that success rates vary by 11.11% or 14.15%; the ratio of success rate to pixel is very large. Conditions for the next scale (2 pixels/mm) are similar: in this case, a difference of 1 pixel in the image means the detection or not of a defect. The topographic defects are projected onto an area of 1.13 pixels² and the chromatic ones onto an area of 1.44 pixels². Differences of 1 pixel for the rest of the scale values have a lesser impact on the detection.
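The figures above can be reproduced with a short worked calculation (a sketch assuming a circular crack/crater footprint and the square colour defect described in Section 5.1.2; the paper quotes 7.065 pixels², presumably from a slightly coarser rounding of π):

```python
import math

scale = 1.0                                   # pixels per mm (smallest scale tested)
crack_area_px  = math.pi * (0.3 * scale) ** 2 # 0.6 mm-diameter crack/crater
colour_area_px = (0.6 * scale) ** 2           # 0.6 mm x 0.6 mm colour defect
n_defects = 25                                # defects per plane

total_crack  = n_defects * crack_area_px      # ~7.07 pixels^2 per defective plane
total_colour = n_defects * colour_area_px     # 9 pixels^2 per defective plane
print(round(crack_area_px, 2), round(colour_area_px, 2))         # 0.28 0.36
print(round(total_crack, 3), round(total_colour, 1))             # 7.069 9.0
print(round(100 / total_crack, 2), round(100 / total_colour, 2)) # 14.15 11.11
```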
In short, the average success rates show that the higher the scale ρ_E, the greater the capacity of perception of the defects, both for dielectric and metallic materials, avoiding short scales (1 pixel/mm and 2 pixels/mm) due to the characteristics mentioned in the previous paragraph.
Comparing the functions Υ_Φ and the reference function in terms of the capacity of perception, the data show that a greater adjustment of the scale is required using the latter lighting. For example, suppose the minimum threshold χ for determining the defect in the dielectric material (this parameter will depend on the type of application) is 55%. Using the reference lighting, approximately more than 2 pixels/mm are necessary to perceive the defects, whereas with ‘Interlaced XY’ lighting only 1 pixel/mm is needed. In the case of metals, a greater adjustment of the system is required. For the same threshold χ, more than 9 pixels/mm would be necessary using the reference lighting and only 2 pixels/mm for each of the proposed functions Υ_Φ. The best lighting, ‘Interlaced XY’, allows a system tuned to only 1 pixel/mm to detect the possible defects.

5.2.2. Angle of Perception

The success rates of the different transformations Υ Φ according to the perception angle ρ θ are shown in Figure 13c,d, for the dielectric and metallic objects respectively.
The study of the average success rates shows that the best conditions to perceive according to ρ θ lie in the interval [0°, 40°] for all lighting configurations. The function 'Linear X' offers the worst results of the transformations Υ Φ , although the differences are minimal, an average of 2.85%. The functions 'Linear XY' and 'Interlaced X' present a similar behaviour: the average difference between them is 0.77%, reaching a maximum of 1.12% at an angle of 20°.
The best results are provided by the function 'Interlaced XY'. The improvement in the capacity of perception reaches its maximum, an average difference of 11.4% with respect to the reference lighting, in the interval [10°, 40°]. In this interval, the differences are 6.84% and 16.05% for the dielectric and the metallic case, respectively, because the success rates decrease significantly, by an average of 8.99%, when moving from the inspection of dielectric materials to metallic ones. The differences in success rate between the transformations Υ Φ lighting the metallic objects are also more noticeable.
An angle of 20° between the surface normal and the camera normal yields the maximum success rate. Using the function 'Interlaced XY' at this angle, similar results are obtained regardless of the type of material: 95.05% for the dielectric material and 95.19% for the metallic one. Furthermore, for the metallic object at this angle, the difference between 'Interlaced XY' and the reference lighting is 16.53%.
The success rate, and the differences between the transformations and the reference lighting, decrease from 40° until reaching 8.2% at 90°. In the latter case, it is only possible to detect convex defects.
Comparing the functions Υ Φ with the reference function in terms of the capacity of perception, the data show that the increase in the range of scene magnitudes that the system is able to perceive is more significant than in the study of the scale. For example, if the minimum threshold χ for determining a defect in the dielectric material is 80%, the angle ρ θ must be less than 30° to perceive the defects using the reference lighting. Using the 'Interlaced XY' lighting, the angle can increase up to approximately 43°, an increase of more than 10°. Assuming a threshold of 75%, the angle between the surface normal and the camera normal must lie in the interval [10°–25°] to determine a defect in the metallic material using the reference lighting, whereas using any transformation Υ Φ the angle can be established in the interval [0°–45°], an increase of about 30°.
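The same kind of threshold query yields the admissible interval of perception angles. A minimal sketch, again with hypothetical values standing in for the curves of Figure 13c,d:

```python
def admissible_angles(success_by_angle, chi):
    """Perception angles rho_theta (degrees) whose success rate reaches the threshold chi."""
    return [angle for angle in sorted(success_by_angle)
            if success_by_angle[angle] >= chi]

# Hypothetical values, shaped like the metallic curves of Figure 13d (10 degree steps).
interlaced_xy = {0: 90, 10: 93, 20: 95.2, 30: 92, 40: 88, 50: 70, 60: 55, 70: 40, 80: 20, 90: 8.2}
reference     = {0: 72, 10: 78, 20: 78.7, 30: 74, 40: 70, 50: 60, 60: 48, 70: 35, 80: 18, 90: 8.2}

chi = 75.0
print(admissible_angles(interlaced_xy, chi))  # roughly [0, 40], as reported above
print(admissible_angles(reference, chi))      # a much narrower interval
```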

5.2.3. Intensity Lighting

Regarding the intensity lighting ρ I , defined as the angle between the surface normal and the normal of the lighting plane, the success rates of the different lighting configurations are shown in Figure 13e,f for the dielectric and metallic objects respectively. The function 'Interlaced XY' provides the maximum success rate and the greatest increase in the capacity of perception of the system.
The study of the inspection capacity considering the material type shows a limited sensitivity for perceiving metallic objects using angles ρ I close to 0° and the reference lighting. The characteristics of the lighting sources and of the metallic reflection cause sensor saturation in the angle interval [0°, 10°]. The decrease in the success rate with respect to the dielectric material is considerable, with values close to 20% (specifically 19.72% at 0° and 20.77% at 10°). The transformations using minimum spatial differences also show differences of more than 3% between metallic and dielectric materials in that interval: 'Interlaced X' 3.30% (0°) and 4.36% (10°), 'Linear X' 3.47% (0°) and 3.30% (10°). However, if the transformations with spatial distributions in two dimensions are used, sensor saturation is not important: the differences are 0.33% (0°) and 0.45% (10°) for the function 'Interlaced XY' and 0.73% (0°) and 0.37% (10°) for 'Linear XY'. For the remaining lighting angles, the success rate is independent of the material and the differences are not significant.
The success rate shows a low gradient with respect to the intensity lighting ρ I : the magnitude is almost constant in the interval [0°, 60°]. The standard deviation over all angles is lower than 2 for the transformations Υ Φ , whereas it is 4.21 for the reference lighting due to the behaviour at the initial angles for the metallic plane (for the dielectric plane the standard deviation is 1.44).

5.2.4. Size of Lighting Regions

The influence of the size of the lighting regions R O L , defined by s, on the capacity of perception according to the scale is shown on the left of Figure 14, and according to the angle on the right of Figure 14. The size of R O L is represented by the minimum dimension of the region.
Increasing the size of R O L decreases the proportion of regions lying inside a defect. This decreases the success rates when inspecting topographic defects using the transformations Υ Φ , for all perception scales and angles. The average differences vary from 4% to 5% in the scale case. With respect to the angle, the average differences are about 7% for the crack defect (Figure 14e) in the interval of maximum sensitivity [10°, 40°]. The behaviour for the crater defect (Figure 14d) is analogous, with an average difference of 5% in that interval. The differences decrease as the perception angle increases until they vanish at 90°.
The capacity to modulate the irradiance inside a defect is inversely proportional to the size of the lighting regions. In the extreme case where the size of the bands is greater than the size of the defects, the irradiance function is not able to modulate the reflectance of the object, so it behaves in the same way as the reference lighting.
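This limit can be pictured by counting how many lighting regions span one defect dimension; a rough, illustrative sketch (the function name is ours):

```python
def regions_across_defect(defect_size_mm, band_size_mm):
    """Rough count of distinct lighting regions spanning one defect dimension."""
    return max(1, int(round(defect_size_mm / band_size_mm)))

# Tested R_OL sizes plus one band wider than the 0.6 mm defects
for band_mm in (0.1, 0.2, 0.3, 0.6, 1.2):
    n = regions_across_defect(0.6, band_mm)
    print(band_mm, n, "modulation possible" if n > 1 else "behaves like the reference lighting")
```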
The analysis of the chromatic defect shows that the success rate is independent of the size of the lighting regions (see Figure 14c,f); it is practically constant. Therefore, a considerable increase in the perception capacity of the system is obtained.

6. Case Study: Automobile Logo

Simulation, as a step prior to experimentation in real manufacturing environments, allows a preliminary study that can be used to determine the best acquisition path planning or the equipment characteristics needed to inspect specular surfaces. In this section, an example of the application of the experimental results for the scale ρ E and angle ρ θ of perception is given for the acquisition of a 100 mm diameter Mercedes-Benz metallic logo. The area to be inspected is 4676.15 mm 2 . Given that the minimum threshold χ depends on the application, a value of 80% is assumed in this case.
Using an unstructured lighting (the reference lighting), the inspection conditions for a metallic object are restricted to only 5 combinations of the scene magnitudes ρ i . As can be seen in Figure 15, considering angle and scale ( ρ θ , ρ E ), these combinations are: (0°, 2 pixels/mm), (10°, 10 pixels/mm) and those with the highest considered scale ([0°, 10°, 20°], 15 pixels/mm). The best lighting configuration, Υ Φ 'Interlaced XY', extends the tuning of the scene magnitudes to 20 combinations (see Figure 15): ([20°, 40°], 1 pixel/mm), ([0°, 10°, 20°, 30°], 2 pixels/mm), ([10°, 20°, 30°, 40°], 5 pixels/mm), ([0°, 10°, 20°, 30°, 40°], [10, 15] pixels/mm).
The choice of the conditions (scale, angles, etc.) to capture the whole object for inspection is a complex problem, and the particularities of each solution must be taken into account. In this example, the scale ρ E is set to 15 pixels/mm. According to the previous scene magnitudes, the perception angle can then deviate by up to 20° using the reference lighting and by up to 40° using the function 'Interlaced XY'. In other words, the angle between the normal vector of the camera and the normal vector of the surface to be inspected must lie between 0° and 20° using the reference lighting, and between 0° and 40° if the 'Interlaced XY' lighting is used.
Consequently, in order to apply the results, any point on the surface can be viewed as a point on a plane whose normal vector is the normal of the surface at that point; Figure 16a outlines this assumption. In this way, the surface can be analysed as a set of planes (one plane per point on the surface), and the experimental results (calculated for planes) can be extrapolated to compute the perception scale and angle for surfaces of different curvature at any point. Using one plane per surface point could be computationally expensive. Hence, since the function μ S establishes the points on the surface in a coordinate system local to the object, in practice μ S is defined as a triangle mesh: a collection of triangles that defines the surface shape of a polyhedral object in 3D computer graphics. The triangle mesh allows the system to discretize the surface geometry as a reduced collection of planes. The number of planar faces is determined by the geometry of the surface and the resolution used in the experiments (in this case more than 4000 polygons, although fewer triangles would suffice; high detail is not needed because the perception angle is discretized every 10°).
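Under this plane-per-point assumption, checking whether a mesh face satisfies the perception angle for a given camera pose reduces to comparing its normal with the viewing direction. A minimal numpy sketch; the function and variable names are illustrative and not part of the original implementation:

```python
import numpy as np

def visible_faces(vertices, faces, camera_dir, max_angle_deg):
    """Indices of mesh faces whose normal is within max_angle_deg of the camera direction.

    vertices: (V, 3) array; faces: (F, 3) integer array of vertex indices;
    camera_dir: unit vector pointing from the surface towards the camera.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)                      # face normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    cos_angle = normals @ np.asarray(camera_dir, dtype=float)
    angles = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return np.where(angles <= max_angle_deg)[0]

# Thresholds from the case study: 20 degrees with the reference lighting,
# 40 degrees with the 'Interlaced XY' transformation.
```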
Since the scene magnitudes are a function of the characteristics of the object, the environment and the camera, decisions have to be made about which magnitudes of the environment or the camera to modify in order to establish the adequate values of scale and angle obtained in the experiments. The scale is determined by the number of pixels available at the sensor, setting the camera position at a given focus distance and keeping the focal length constant. The angle of perception is established by rotations about the X and Y axes of a coordinate system located at the centre of the object: the yaw and pitch movements of the camera (see Figure 16b). Because selecting these variables is a complex problem, an approximation to the optimum solution is proposed to determine the appropriate angles between the camera and the object surface, increasing the captured area of the logo and reducing the number of images to be captured. For this purpose, a search tree was designed using a branch and bound algorithm, in which the solution space of each node is limited to a maximum of five children and a limit on the processing time per level is established.
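The selection criterion behind that search can be illustrated with a simple greedy cover over candidate (yaw, pitch) poses; the branch and bound algorithm used here refines this idea by bounding each node to five children and limiting the processing time. The sketch below is only an approximation of the selection criterion (maximise the newly covered faces per capture), not the authors' implementation:

```python
def plan_captures(candidate_poses, covered_faces_by_pose, target_faces, max_captures=30):
    """Greedy selection of camera poses.

    candidate_poses: list of (yaw, pitch) tuples discretised every 10 degrees.
    covered_faces_by_pose: dict pose -> set of mesh-face indices satisfying the
        perception angle for that pose (e.g., from visible_faces above).
    target_faces: set of all face indices of the object surface.
    """
    remaining = set(target_faces)
    plan = []
    while remaining and len(plan) < max_captures:
        pose = max(candidate_poses,
                   key=lambda p: len(covered_faces_by_pose[p] & remaining))
        gained = covered_faces_by_pose[pose] & remaining
        if not gained:
            break                      # no pose adds new coverage
        plan.append(pose)
        remaining -= gained
    return plan
```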
The logo inspection requires 26 captures (see Figure 17a and Table 4) using the reference lighting and a camera of 1452 × 1452 pixels to acquire a surface area of 4619.96 mm 2 (98.79% of the logo); some captures can be made with a lower resolution, down to 510 × 510 pixels. Using the function 'Interlaced XY', only 9 images are necessary (see Figure 17b and Table 5), with camera resolutions between 826 × 826 and 1455 × 1455 pixels, covering a total surface area of 4675.05 mm 2 (99.9%). The first capture already covers 45% of the logo, whereas only 8.17% is obtained using the reference lighting. This shows how the lighting system allows perception over a wider range of scene magnitudes; specifically, more points on the surface satisfy the angle of perception ρ θ .

7. Conclusions

A novel active imaging model able to increase the perception capacity of a visual system has been presented. The model provides solutions to vision problems in which perception is difficult. It describes the parameters involved in a visual sensing system in three main aspects: the target object to perceive, the environmental conditions, and the sensor parameters. Each of them is individually parameterized: the object by its size, colour, etc.; the environment by the lighting, object-sensor orientations, etc.; and the sensor by focal length, monochrome/RGB/3D acquisition, etc. Moreover, the model describes the perception capabilities and limits of the system and presents solutions to these limits. In particular, the model is instantiated for the inspection of specular objects, as an example of a challenging situation for visual perception: specular surfaces force the device to operate in intervals of low perception related to reflections and shine. Traditionally, automated visual inspection requires a thorough analysis of the problem in which solutions cover everything from the acquisition equipment to the algorithm for recognizing possible defects. As a consequence, these vision systems are oriented to concrete applications and cannot be generalized. The model presented here addresses this limitation by providing a general representation of vision systems and solutions for their perception limitations.
The solution proposed for the problem of specular surfaces provides a normalization of the image in which different objects that would otherwise be perceived as identical can be distinguished. First, Measurement Enhancement is performed, increasing the differences between the input magnitudes. In this paper, the enhancement of the measurements is carried out using environment variables, concretely by controlling the lighting conditions: the lighting is spatially structured to set up regions on the surface of the object that are irradiated with different spectral powers, forming a grid. Finally, the system is tuned so that the magnitudes of the scene (scale, angles, intensity lighting, etc.) are properly perceived.
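As an illustration of this grid-structured irradiance, the sketch below builds an 'Interlaced XY'-style pattern: the object plane is divided into square regions of side s, and neighbouring regions, in both directions, receive spectral indices as far apart as possible. It is a schematic reconstruction based on the parameters of Tables 2 and 3, not the exact pattern generator used in the experiments:

```python
import numpy as np

def interlaced_xy_pattern(object_size_mm=12.0, band_mm=0.1, n_levels=120):
    """Grid of spectral indices (0..n_levels-1) over the object surface.

    Adjacent regions alternate between the low and high ends of the spectral
    range so that both the spatial and the amplitude gradients are large.
    """
    n = int(round(object_size_mm / band_mm))           # regions per axis
    ramp = np.arange(n)
    # Interlace: even regions count up from 0, odd regions count down from the top.
    interlaced = np.where(ramp % 2 == 0, ramp // 2, n_levels - 1 - ramp // 2)
    interlaced = np.clip(interlaced, 0, n_levels - 1)
    # Combine X and Y so that neighbours differ strongly in both directions.
    return (interlaced[:, None] + interlaced[None, :]) % n_levels

pattern = interlaced_xy_pattern()
print(pattern.shape)  # (120, 120) regions of 0.1 mm x 0.1 mm
```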
Following a thorough analysis of the characteristics of the problem, the use of virtual imaging simulations is proposed as a preliminary step for validating hypotheses about visual inspection systems. The validation of the conditions (point of view, scale, lighting, etc.) in which the inspection has to be performed can be carried out in a flexible and low-cost manner, justifying the use of simulations as an early assessment of the viability of the system. Hence, the model can be pre-validated before the system is developed.
The use of simulations and knowledge bases, together with the generalist approach of the transformations, provides a general solution that can be applied systematically. The method can be applied to different inspection problems by adapting the contents of the knowledge bases, avoiding a new design of the solution for each problem.
A realistic simulator has been designed to carry out the experimentation. It recreates the conditions of image formation and permits the validation of inspection systems based on the model. The tests show that the use of the transformations Υ Φ improves the capacity of perception of the system compared to a homogeneous environment (a single wavelength or colour), both for dielectric and metallic materials and with regard to the tuning of the scale, angle and lighting. The transformation that considers the maximum amplitude gradient and the maximum spatial differences ('Interlaced XY') obtains the best results for perceiving the defects in all cases. In contrast, the function with the minimum amplitude gradient and minimum spatial differences ('Linear X') offers the worst results of the transformations considered to perform the Measurement Enhancement. The functions that use only one of the maximum proposed gradients, spatial or amplitude ('Linear XY' and 'Interlaced X'), generally present a similar behaviour. The function that lights the object surface homogeneously, used as reference, obtains the minimum success rate in all cases. Hence, the results prove the improvement in the capacity of perception for different conditions of scale, angle and intensity lighting. The proposed method enables the detection of surface defects over a greater range of scales, perception angles and lighting conditions than under uniform lighting. The immediate repercussion is that the system needs a smaller number of captures of the scene.
The research should continue by studying the Measurement Enhancement through the extension of the input magnitudes: using other transformations based on structured lighting (with different patterns and using the time domain) and using other parameters, such as variables of the capture system, to provide large gradients in the image.
The simulation confirms the hypotheses. The next step is to advance to physical experiments in the concrete industry in which the system is to be technologically deployed. Currently, the 'Interlaced XY' function (in this case, with the pattern made up of the ordered sequence Δ E of grey levels instead of wavelengths) is being tested to increase the perception capacity of an inspection system aimed at detecting shape defects on the surfaces of ceramic tiles.

Acknowledgments

This work has been supported by a grant from the University of Alicante project GRE16-28.

Author Contributions

J. Azorin-Lopez, A. Fuster-Guillo and J.M. Garcia-Chamizo conceived the model and designed the experiments; J. Azorin-Lopez and M. Saval-Calvo performed the experiments; J. Azorin-Lopez, H. Mora-Mora and J.M. Garcia-Chamizo analyzed the data; All authors wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Calibration curve (red) and sensitivity (green) for the visual acquisition system (VAS) including tuning ρ s and working point ρ t .
Figure 2. (A) Example of calibration curves for a VAS aimed to measure shape (green) and colour (red); (B) Measurement performed in F transformed to the object space M considering shape μ 1 and colour μ 2 . In this example, the colour of the objects m 10 and m 11 can be distinguishable but their shape cannot.
Figure 3. Outline for a System Calibration Υ S shifting the working point to S. The initial working point ρ t m i (light green) has been moved to ρ k m i (dark green) after transformation is applied. The new working point ρ k m i is close enough to the tuning point ρ s .
Figure 4. Outline for a transformation Υ S shifting the tuning point to the working point ρ k m i . Dotted line curve and light red area represent the old calibration and S o respectively. Red line curve and blue area represent the new calibration curve and the new S n . The old tuning point ρ s has been consequently moved to the new one ρ n after transformation is applied.
Figure 5. Outline for a transformation Υ C able to increase differences of input magnitudes values until the change is perceptible in the output F. Dotted line curve and light red area represent the old calibration and S o respectively of the VAS. Red line curve and blue area represent the new calibration curve and the new S n . The vertical dotted red lines represent values of the set A that allow perceptible changes in the output (horizontal dotted red lines).
Figure 6. Method for inspecting specular surfaces. According to objects and defects, transformations Υ S and Υ C are selected to provide an image in which defects can be detected using computer vision techniques.
Figure 7. Parameters involved in the scene radiance L R : object reflectance ( f R ) and irradiance (E) incident on the surface.
Figure 8. Parameters s, Δ , ξ of the transformation Υ Φ .
Figure 9. Defects considered in the experiments.
Figure 10. Spatial distribution considered for function ξ in the experiments.
Figure 11. Amplitude distribution considered for function ξ in the experiments.
Figure 12. Samples of lighting patterns generated by the transformation Υ Φ used in the experimentation.
Figure 13. Success rates for the dielectric (a,c,e) and metallic (b,d,f) material according to scale (a,b), angle (c,d) and intensity lighting (e,f).
Figure 14. Success rates for inspecting a crater (a,d), crack (b,e) and chromatic defect (c,f) according to size of lighting regions and scale (a–c) and angle of perception (d–f).
Figure 15. Success rates for the dielectric and metallic surfaces according to lighting of reference and the transformation Υ Φ with best results ('Interlaced XY').
Figure 16. Essential considerations for transferring the conclusions of experimental values to a specific inspection of an object.
Figure 17. Images of a metallic logo to be inspected using the reference lighting (a) and the function 'Interlaced XY' (b).
Table 1. Characteristics of the objects to be generated in the experiments.

Characteristic | Metallic | Dielectric
Surface roughness (rms) | 0.1 | 0.1
Refraction index | 2.8 | 1.6
Extinction index | 3.2 | 0
Specularity coefficient | 0.75 | 0.75
Diffuse component (RGB) | (0.6, 0.6, 0.6) | (0.6, 0.6, 0.6)
Diffuse component of defect (RGB) | (0.4, 0.4, 0.4) | (0.4, 0.4, 0.4)
Object size (mm) | 12 × 12 | 12 × 12
Defect size (mm) | 0.6 diameter or side | 0.6 diameter or side
Table 2. Lighting configuration using combinations of the spatial and amplitude gradients (pattern illustrations omitted).

Amplitude Gradient \ Spatial Gradient | X ( ξ x ) | XY ( ξ xy )
Linear ( ξ L ) | ξ L,x | ξ L,xy
Interlaced ( ξ I ) | ξ I,x | ξ I,xy
Table 3. Parameters for the transformations Υ Φ used in experiments.

Characteristics | Linear X | Linear XY | Interlaced X | Interlaced XY
s: R O L areas (mm) | 0.1 × 12 | 0.1 × 0.1 | 0.1 × 12 | 0.1 × 0.1
 | 0.2 × 12 | 0.2 × 0.2 | 0.2 × 12 | 0.2 × 0.2
 | 0.3 × 12 | 0.3 × 0.3 | 0.3 × 12 | 0.3 × 0.3
 | 0.6 × 12 | 0.6 × 0.6 | 0.6 × 12 | 0.6 × 0.6
s: R O G areas (mm) | 0.1 × 0.1, 0.2 × 0.2, 0.3 × 0.3, 0.6 × 0.6
Δ (# wavelengths) | 10, 120, 60, 40, 20
Table 4. Environment and camera characteristics to acquire the 26 images needed to inspect the metallic logo using the reference lighting.

Yaw/Pitch (degrees) | CCD (pixels) | Image Area (mm 2 ) | Inspected Area (mm 2 ) | Accumulated Area (mm 2 ) | Inspected (%)
(0, 0) | 1452 × 1452 | 9377.45 | 381.97 | 381.97 | 8.17
(30, −30) | 1218 × 1218 | 6588.08 | 769.21 | 1151.19 | 24.62
(0, 40) | 1186 × 1186 | 5982.37 | 761.16 | 1912.35 | 40.90
(−30, −30) | 1215 × 1215 | 6503.9 | 758.21 | 2670.56 | 57.11
(40, 20) | 1167 × 1167 | 5691.68 | 222.63 | 2893.19 | 61.87
(−40, 20) | 1166 × 1166 | 5680.73 | 220.89 | 3114.07 | 66.60
(0, −80) | 550 × 550 | 707.26 | 143.46 | 3257.53 | 69.66
(−60, 70) | 570 × 570 | 1038.1 | 142.98 | 3400.51 | 72.72
(30, 80) | 510 × 510 | 630.42 | 142.09 | 3542.6 | 75.76
(80, 40) | 519 × 519 | 1127.5 | 141.35 | 3683.95 | 78.78
(−60, −70) | 570 × 570 | 1015.71 | 140.94 | 3824.89 | 81.80
(−20, 80) | 493 × 493 | 620.97 | 138 | 3962.89 | 84.75
(0, −40) | 1137 × 1137 | 4144.17 | 137.76 | 4100.66 | 87.69
(50, −80) | 543 × 543 | 615.36 | 131.11 | 4231.77 | 90.50
(−30, −80) | 560 × 560 | 705.12 | 88.05 | 4319.82 | 92.38
(−80, −20) | 576 × 576 | 1075.71 | 67.33 | 4387.15 | 93.82
(50, 60) | 635 × 635 | 1744.96 | 47.76 | 4434.91 | 94.84
(0, 80) | 550 × 550 | 695.4 | 45.85 | 4480.76 | 95.82
(20, −70) | 526 × 526 | 1213.01 | 32.72 | 4513.49 | 96.52
(10, 30) | 1280 × 1280 | 6672.88 | 31 | 4544.48 | 97.18
(60, −60) | 630 × 630 | 1538.01 | 29.37 | 4573.85 | 97.81
(−30, 40) | 1133 × 1133 | 5565.17 | 25.12 | 4598.97 | 98.35
(−50, 70) | 586 × 586 | 1177.92 | 11.57 | 4610.55 | 98.60
(40, −30) | 1164 × 1164 | 5933.79 | 4.83 | 4615.37 | 98.70
(−40, −20) | 1192 × 1192 | 6175.95 | 4.41 | 4619.79 | 98.80
(10, 60) | 749 × 749 | 1970.75 | 0.17 | 4619.96 | 98.80
Table 5. Environment and camera characteristics to acquire the nine images needed to inspect the metallic logo using 'Interlaced XY' lighting.

Yaw/Pitch (degrees) | CCD (pixels) | Image Area (mm 2 ) | Inspected Area (mm 2 ) | Accumulated Area (mm 2 ) | Inspected (%)
(0, 0) | 1455 × 1455 | 9410.15 | 2079.07 | 2079.07 | 44.46
(−10, −60) | 1219 × 1219 | 4252.76 | 581.96 | 2661.04 | 56.91
(−10, 60) | 1221 × 1221 | 4234.16 | 580.29 | 3241.33 | 69.32
(60, 10) | 1263 × 1263 | 4574.35 | 560.57 | 3801.9 | 81.30
(−70, 0) | 1174 × 1174 | 2899.19 | 480.71 | 4282.61 | 91.58
(40, −60) | 1187 × 1187 | 4118.58 | 197.45 | 4480.06 | 95.81
(30, 50) | 1279 × 1279 | 5563.56 | 121.89 | 4601.95 | 98.41
(−80, 50) | 826 × 826 | 2711.22 | 36.55 | 4638.5 | 99.20
(−80, −70) | 923 × 923 | 1695.65 | 36.55 | 4675.05 | 99.98
