Article

Research on Defect Detection Method of Fusion Reactor Vacuum Chamber Based on Photometric Stereo Vision

1 Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, China
2 College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing 210016, China
3 Army Academy of Artillery and Air Defense, Hefei 230031, China
4 Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China
5 State Key Laboratory of Mechanics and Control for Aerospace Structures, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(19), 6227; https://doi.org/10.3390/s24196227
Submission received: 4 September 2024 / Revised: 22 September 2024 / Accepted: 24 September 2024 / Published: 26 September 2024
(This article belongs to the Section Optical Sensors)

Abstract:
This paper addresses image enhancement and 3D reconstruction techniques for dim scenes inside the vacuum chamber of a nuclear fusion reactor. First, an improved multi-scale Retinex low-light image enhancement algorithm with adaptive weights is designed. It can recover image detail information that is not visible in low-light environments, maintaining image clarity and contrast for easy observation. Second, according to the actual needs of target plate defect detection and 3D reconstruction inside the vacuum chamber, a defect reconstruction algorithm based on photometric stereo vision is proposed. To optimize the position of the light source, a light source illumination profile simulation system is designed in this paper to provide an optimized light array for crack detection inside vacuum chambers without the need for extensive experimental testing. Finally, a robotic platform mounted with a binocular stereo-vision camera is constructed and image enhancement and defect reconstruction experiments are performed separately. The results show that the above method can broaden the gray level of low-illumination images and improve the brightness value and contrast. The maximum depth error is less than 24.0% and the maximum width error is less than 15.3%, which achieves the goal of detecting and reconstructing the defects inside the vacuum chamber.

1. Introduction

Non-destructive testing, as an important part of the safe operation of nuclear power plants, has long been a concern of scholars worldwide [1,2]. For the latest compact fusion devices, large and complex thermal and electromagnetic loads subject the first wall of the blanket and the target plate area of the divertor to surface morphology changes, cracks, and other forms of failure during operation [3,4,5], as shown in Figure 1. The length of the cracks and the depth of the defects that appear in the first wall exceed the centimeter scale, and there are no suitable tools and algorithms to accomplish this type of detection. To guarantee high-parameter plasma discharges and complete risk assessment, precise localization observations and defect reconstruction of the damaged parts must be performed [6,7]. The vacuum chamber of the fusion reactor has an annular design, and the lack of clear illumination equipment inside it makes the scene very dim. To clearly observe the actual condition of the first wall of the vacuum chamber under low illumination, research on low-light image enhancement techniques in the vacuum chamber is necessary [8,9,10]. The Retinex theory is an effective method for image enhancement in vacuum chambers [11]. Wang et al. proposed single-scale Retinex, multi-scale Retinex, and multi-scale Retinex with Color Restoration algorithms based on the characteristics of the human visual system [12,13,14]. All these algorithms obtain the illumination component using Gaussian low-pass filtering and use only the information of the reflected component for image enhancement [15]. To avoid overexposure, an image enhancement algorithm that preserves naturalness can be constructed: combining the logarithmically mapped illuminance component with the reflection component effectively avoids the loss of image information and the overexposure phenomenon [16,17].
In addition, naturalness preservation using a priori knowledge of multilayer luminance statistics and using a new filtering approach to extract the illuminance component can also result in smoother low-frequency components [18]. However, Retinex still shows unnatural images in uneven, low-illumination images. Therefore, constraining the illumination component and reflection component to avoid information loss while maintaining the naturalness of the enhanced image is one of the urgent problems to be solved [19,20].
Low-light image enhancement techniques allow observation and localization of surface defects in the first wall of the vacuum chamber. To further analyze the impact of the defects on the next round of fusion reactor experiments, 3D reconstruction of the defects is also necessary to assess their damage [21,22]. Photometric stereo vision has the advantages of simple operation, fast measurement, and the ability to capture detailed surface features, so it is widely used [23,24]. The 3D information of an image can be recovered from contours, brightness, motion, shading, and so on, without prior knowledge of the light source [25,26,27]. To realize practical applications of photometric stereo vision, many scholars have studied image resolution, light source models, material reflection models, calibration methods, and so on [28,29]. For image resolution, Vazquez et al. constructed virtual point-like signal gradient sources using the small triangular faces of polyhedra to achieve super-resolution in images [30]. Scholars have separated the reflections of a calibration sphere into specular and diffuse components for light source calibration: specular reflections can be used to estimate geometric parameters, such as the light direction of the light source, while diffuse reflections can be used to estimate photometric parameters, such as its luminous intensity [31,32,33]. By obtaining the outline information of the measured object from multiple viewpoints and using their interrelationships, the direction and intensity of the light source can be estimated [34,35]. In the camera-fixed case, introducing illumination from light sources in different directions increases the constraints on the light sources, and a unique solution can be obtained [36].
The direction of the light source can also be calibrated by comparing the error between the image obtained with the actual light source and the image obtained with the estimated light source [37]. For the 3D reconstruction of defects, Horn et al. proposed the shape-from-shading method, which utilizes the shadow, luminance, and reflection information of the object image and adds other constraints to solve for depth information [38]. In the Fourier-basis 3D reconstruction method, the depth information is obtained by integrating in the frequency domain and then projecting back to the spatial domain with the inverse Fourier transform [39,40]. The complexity of the vacuum chamber structure and the highly specular reflection of the metal surface make it difficult to measure defects by photometric stereo vision [41,42,43]. How to inspect the surface inside a vacuum chamber based on a reflection model and accomplish 3D reconstruction of defects such as fine cracks is a rarely studied and challenging problem.
This paper addresses image enhancement and 3D reconstruction techniques for dim scenes inside the vacuum chamber of a nuclear fusion reactor. First, a multi-scale Retinex low-light image enhancement algorithm with adaptive weights is designed for the dim interior environment of the fusion reactor vacuum chamber. It can enhance the contrast of low-light images and recover information about details that are not revealed. Second, to meet the needs of defect detection on the surface of the vacuum chamber, a visual servo-based algorithm for defect reconstruction in the vacuum chamber is designed to accomplish the task of reconstructing defects. This paper is divided into 5 sections. Section 2 introduces the low-light image enhancement algorithm for the interior of the vacuum chamber. Section 3 presents the defect reconstruction algorithm based on photometric stereo vision. Section 4 establishes an experimental platform and conducts experiments on image enhancement and defect reconstruction algorithms, and finally, Section 5 gives the conclusion of this paper.

2. Low-Light Image Enhancement Algorithm

2.1. Image Preprocessing and Threshold Segmentation

Traditional image processing algorithms mainly include image preprocessing, filter denoising, and multi-operator construction of defective edge information. Preprocessing reduces the non-target information in the image to be detected and highlights the needed image information, and the defect information in the internal image of the vacuum chamber can be highlighted by effective threshold segmentation. The gray level of the image to be processed is denoted by $r$, with range $[0, L-1]$, where $r = 0$ denotes black and $r = L-1$ denotes white; $T(r)$ is a monotonically increasing function on the interval $0 \le r \le L-1$. For each pixel with value $r$ in the input image, a gray value $s$ is generated. Since the gray level of an image can be regarded as a random variable on $[0, L-1]$, let $p_r(r)$ and $p_s(s)$ denote the probability density functions of the random variables $r$ and $s$, respectively:
$$s = T(r) = (L-1)\int_0^{r} p_r(w)\,dw \tag{1}$$
where $w$ is the integration variable, so $s$ is $(L-1)$ times the cumulative distribution function of the random variable $r$. It is easy to get
$$p_s(s) = p_r(r)\left|\frac{dr}{ds}\right| \tag{2}$$
From the Leibniz integral rule it follows that
$$\frac{ds}{dr} = \frac{dT(r)}{dr} = (L-1)\,p_r(r) \tag{3}$$
Substituting Equation (3) into Equation (2) gives
$$p_s(s) = p_r(r)\left|\frac{1}{(L-1)\,p_r(r)}\right| = \frac{1}{L-1}, \quad 0 \le s \le L-1 \tag{4}$$
Since Equation (4) is a uniform probability density function, $p_s(s)$ is always uniform and independent of the form of $p_r(r)$. Define the histogram of an image with gray-level range $[0, L-1]$ as the discrete function
$$h(r_k) = n_k \tag{5}$$
where $r_k$ is the $k$-th gray level and $n_k$ is the number of pixels with gray value $r_k$. The total number of image pixels is the product $MN$; dividing each component by it normalizes the histogram:
$$p_r(r_k) = \frac{n_k}{MN}, \quad k = 0, 1, 2, \ldots, L-1 \tag{6}$$
where $L$ is the number of possible gray levels in the image, and the plot of $p_r(r_k)$ against $r_k$ is called the normalized histogram. Discretizing Equation (1) with Equation (6) gives
$$s_k = T(r_k) = \frac{L-1}{MN}\sum_{j=0}^{k} n_j, \quad k = 0, 1, 2, \ldots, L-1 \tag{7}$$
The pixels with gray level $r_k$ in the input image are mapped to pixels with gray level $s_k$ in the output image by Equation (7). Convolutional smoothing with a Gaussian filter gives
$$G(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \tag{8}$$
$$F(x,y) = G(x,y) * f(x,y) \tag{9}$$
where $\sigma$ is the standard deviation of the Gaussian function, used to adjust the degree of smoothing, $f(x,y)$ is the gray value of a pixel in the image to be detected, and $F(x,y)$ is the smoothed image. Image threshold segmentation based on Equation (7) has the advantages of being simple, intuitive, fast, and easy to integrate with other algorithms, and the disadvantages of depending on the threshold setting, being unable to handle complex backgrounds, being sensitive to noise, and lacking an automatic adjustment mechanism. The results of crack detection inside the vacuum chamber by this method are shown in Figure 2. The method places strict requirements on the detection object: it needs obvious image contrast and cannot support an in-depth, detailed study of the three-dimensional morphology of the crack defects.
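As an illustration, the gray-level mapping of Equation (7) and the Gaussian smoothing of Equations (8) and (9) can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the function names are our own, and the Gaussian is applied as a separable 1D convolution rather than a full 2D kernel.

```python
import numpy as np

def histogram_equalize(img, L=256):
    """Map gray level r_k to s_k = (L-1)/(MN) * sum_{j<=k} n_j (Equation (7))."""
    hist = np.bincount(img.ravel(), minlength=L)       # n_k
    cdf = np.cumsum(hist) / img.size                   # (1/MN) * sum_{j<=k} n_j
    s = np.round((L - 1) * cdf).astype(np.uint8)       # s_k
    return s[img]

def gaussian_smooth(img, sigma=1.5):
    """Separable convolution with the Gaussian kernel of Equation (8)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                       # normalize the kernel
    rows = np.apply_along_axis(lambda a: np.convolve(a, k, mode='same'),
                               1, img.astype(float))
    return np.apply_along_axis(lambda a: np.convolve(a, k, mode='same'),
                               0, rows)
```

Because the mapping in `histogram_equalize` is the scaled cumulative distribution, gray levels concentrated in a narrow dark band are spread across the full $[0, L-1]$ range, which is exactly the broadening effect discussed above.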

2.2. Improved Multiscale Retinex Algorithm

To enhance the contrast of the low-light image inside the vacuum chamber while recovering the detailed information hidden in the dark, this paper designs an improved multi-scale Retinex low-light image enhancement algorithm with adaptive weights. Retinex was initially proposed by Edwin H. Land based on the theory of luminance and colour enhancement in human visual perception [11,12], as shown in Figure 3. The detection image inside the vacuum chamber is represented by the two-variable function $f(x,y)$; the incident light forms the incident component $i(x,y)$ and the reflected light forms the reflected component $r(x,y)$, related by
$$f(x,y) = i(x,y)\, r(x,y) \tag{10}$$
The incident component $i(x,y)$ depends mainly on the light source and is a low-frequency component, while the reflected component is an intrinsic property of the image. In the visual system, the colour of an object's surface is determined by its ability to reflect light waves and exhibits colour constancy. To reduce the amount of computation, Equation (10) is log-transformed:
$$R(x,y) = \log\frac{f(x,y)}{i(x,y)} = \log f(x,y) - \log i(x,y) \tag{11}$$
where $R(x,y)$ is the output value of the traditional Retinex algorithm.
From Figure 3, the core of the algorithm is to estimate the incident component $i(x,y)$ of the original image and remove it, retaining the illumination-independent reflected component $r(x,y)$ to enhance the overall contrast of the image. To estimate and remove the incident component efficiently, this paper proposes a multi-scale Retinex algorithm with adaptive weights. Taking the incident component as the low-frequency part of the image and estimating it by Gaussian low-pass filtering gives
$$R_{MSR_i}(x,y) = \sum_{n=1}^{N} \omega_n \left[\log f_i(x,y) - \log\bigl(f_i(x,y) * G_n(x,y)\bigr)\right] \tag{12}$$
where $f_i(x,y)$ is the $i$-th colour channel of the image, $\omega_n$ and $G_n(x,y)$ are the weight and filter corresponding to the $n$-th scale, whose variance is $\sigma_n$, and $N$ is the number of scales. From engineering experience $N = 3$, and the scale parameters can be taken as $\sigma_{min} = 15$, $\sigma_{mid} = 80$, $\sigma_{max} = 250$. To avoid amplifying noise, colour distortion, and halo artifacts in some cases, the logarithmic function is modified so that the different scales can be adjusted:
$$R_{new_i}(x,y) = \sum_{n=1}^{N} \omega_n \left[\log\bigl(f_i(x,y) + \mu_1\bigr) - \log\bigl(f_i(x,y) * G_n(x,y) + \mu_2\bigr)\right] \tag{13}$$
where $\mu_1$ and $\mu_2$ are tuning parameters, usually obtained from image-comparison experiments; in this paper $\mu_1 = \mu_2 = 9.5$. To adapt to both flat and uneven regions inside the fusion reactor, the vacuum chamber image is partitioned into blocks according to the number of scale parameters. The flatness evaluation function inside the vacuum chamber is defined as
$$\rho_N = \tau \operatorname{arccot}(\sigma_N) + \upsilon \tag{14}$$
where $\rho_N$ is the flatness evaluation index, $\tau$ and $\upsilon$ are adjustment coefficients, and $\sigma_N$ is the standard deviation of each sub-block. For flat regions inside the vacuum chamber the sub-block standard deviation is small and the function value is large, while the opposite holds for non-flat regions. The variance $\sigma_N^2$ of a sub-block $P_n(x,y)$, partitioned according to the number of scale parameters, is
$$\sigma_N^2 = \frac{1}{S}\sum_{(x,y)\in P_n}\left[P_n(x,y) - \frac{1}{S}\sum_{(x,y)\in P_n} P_n(x,y)\right]^2 \tag{15}$$
where $S$ is the total number of pixels in each sub-block. According to the flatness evaluation function, the scale parameter is adjusted as
$$\sigma_N = \frac{1}{\lambda}\cdot\frac{\sigma_{max} - \sigma_{min}}{\rho_{max} - \rho_{min}}\,(\rho - \rho_{min}) \tag{16}$$
where $\lambda$ is the scale adjustment factor and $\sigma_{min}$ and $\sigma_{max}$ are the minimum and maximum values over the sub-blocks, respectively. The optimal scale parameter of each sub-block is determined by Equation (16) to enhance brightness while reducing the loss of detail. After the scale parameters are determined, the weights are adaptively adjusted:
$$\omega_m = \frac{(\sigma_{max} - \sigma_m)^2}{\sum_{m=1}^{N}(\sigma_N - \sigma_m)^2}, \quad m = 1, 2, \ldots, N \tag{17}$$
By combining Equations (16) and (17), the corrected scale parameters and adaptive weights can be determined. Finally, the output of the Retinex algorithm for the low-light image inside the fusion reactor is
$$R_{fusion}(x,y) = \sum_{n=1}^{N}\left\{\sum_{m=1}^{N}\omega_m\left[\log\bigl(f_i(x,y)+\mu_1\bigr) - \log\bigl(f_i(x,y)*G_m(x,y)+\mu_2\bigr)\right]\right\} \tag{18}$$
where $R_{fusion}(x,y)$ is the output after illumination enhancement, $N$ is the number of sub-blocks, $\mu_1$ and $\mu_2$ are the tuning parameters, and $\omega_m$ are the adaptive scale weights. The larger a weight, the larger the contribution of the corresponding sub-block and the better the detail preservation and image contrast.

3. Defect Reconstruction Based on Photometric Stereo Vision

3.1. Modeling Photometric Stereo Vision

The principle of photometric stereo vision is that a set of images of an object, each illuminated by a light source from a different direction, is captured from the same viewpoint. According to the luminance equation, the normal vector of the object surface and the gradient matrices in the $x$ and $y$ directions are calculated to realize 3D reconstruction. The core of photometric stereo vision is 3D reconstruction based on the reflection of materials. Ideal diffuse reflection follows Lambert's cosine law, and its outgoing light intensity is related only to the pitch angle of the incident light:
$$I_E = k_d \cos\theta_i = k_d \langle l_i, n\rangle \tag{19}$$
where $k_d$ is the diffuse reflection coefficient and $\cos\theta_i = \langle l_i, n\rangle$ is the inner product of the incident light vector $l_i$ and the unit surface normal $n$. In practice, the bidirectional reflectance distribution function (BRDF) can be used to characterize the surface reflection of complex materials. The BRDF is defined as the ratio of outgoing radiance to incident irradiance in a specified direction:
$$\rho_{bd}(\omega_i, \omega_o) = \frac{dL_o(\omega_o)}{dE_i(\omega_i)} = \frac{dL_o(\omega_o)}{L_i(\omega_i)\cos\theta_i \, d\omega_i} \tag{20}$$
where $\omega_i = (\theta_i, \varphi_i)$ is the direction of the incident light, $\omega_o = (\theta_o, \varphi_o)$ is the direction of the outgoing light, $\theta$ is the altitude angle, $\varphi$ is the azimuth angle, $E_i$ is the irradiance of incident light on the object surface, $L_o$ is the radiance of outgoing light from the object surface, and $L_i$ is the incident radiance. For the interior of the vacuum chamber, the BRDF parameters can be obtained by extracting reflection data from images captured by a high-resolution camera at different angles and fitting the model.
According to the Phong reflection model, the intensity of reflected light at any point on an object's surface is composed of ambient light, diffuse light, and specularly reflected light. Since the vacuum chamber of the fusion reactor is a dark environment, there is no ambient light. Each target plate in the first wall of the vacuum chamber is fabricated with frosted tungsten tiles, so the specular reflection is very weak and concentrated in a small range of reflection angles. Excluding that range of angles, a diffuse reflection model can be used to approximate the intensity of the reflected light from the first wall of the vacuum chamber:
$$I = \rho \langle l, n\rangle E \tag{21}$$
where $n = [n_x, n_y, n_z]^T$ is the unit normal vector at a point inside the vacuum chamber, $\rho$ denotes the reflection coefficient, $l = [l_x, l_y, l_z]^T$ is the direction vector of the light source, $E$ is the illuminance of the incident light, and $I$ denotes the gray value of the image. Assuming that the light sources have equal luminosity, i.e., $\rho E = \rho_1 E_1 = \rho_2 E_2 = \rho_3 E_3$, and the number of light sources is 3, with the light source directions forming the matrix $L = [l_1, l_2, l_3]^T$, Equation (21) is rewritten as
$$I = \rho E\, L\, n \tag{22}$$
From Equation (22), the normal vector at a point inside the vacuum chamber is obtained as
$$n = \frac{L^{-1} I}{\left\| L^{-1} I \right\|} \tag{23}$$
Assuming that the depth at a point inside the vacuum chamber is $z$ with gradient $(p, q)$, where $p = \partial z/\partial x$ and $q = \partial z/\partial y$, the relationship between the directional gradient and the surface normal vector is
$$n = \frac{(-p, -q, 1)^T}{\sqrt{p^2 + q^2 + 1}} \tag{24}$$
The gradient matrices $P, Q$ of all pixels can be obtained by applying this conversion to the image of the vacuum chamber interior acquired by the camera. Assuming that the number of supplementary light sources is $k$ and the incident illuminance $E$ inside the vacuum chamber is the same for all, i.e., $E_1 = E_2 = \cdots = E_k$, with $l_1, l_2, \ldots, l_k \in \mathbb{R}^3$, the least-squares solution for the surface normal is
$$n = \frac{(L^T L)^{-1} L^T I}{\left\|(L^T L)^{-1} L^T I\right\|} \tag{25}$$
where $L = [l_1, l_2, \ldots, l_k]^T$ and $I = [i_1, i_2, \ldots, i_k]^T$. Equation (25) computes the normal vector inside the vacuum chamber under multiple light sources by photometric stereo vision, from which the surface defects inside the vacuum chamber can be reconstructed.
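A least-squares implementation of Equation (25), including the conversion from normals to the gradients of Equation (24), might look as follows. This is a sketch under our own conventions: images are assumed to be gray-scale arrays of equal size and the light directions are unit vectors.

```python
import numpy as np

def photometric_normals(images, lights):
    """Surface normals from k images under k known lights (Equation (25)).
    images: (k, H, W) gray images; lights: (k, 3) unit light directions.
    Returns unit normals and the gradients p, q implied by Equation (24)."""
    k, H, W = images.shape
    I = images.reshape(k, -1).astype(float)            # (k, H*W)
    # least-squares solve of L b = I, i.e. b = (L^T L)^{-1} L^T I
    b, *_ = np.linalg.lstsq(lights, I, rcond=None)     # (3, H*W)
    norm = np.linalg.norm(b, axis=0, keepdims=True)
    n = b / np.maximum(norm, 1e-12)                    # unit normals
    nz = np.where(np.abs(n[2]) < 1e-6, 1e-6, n[2])     # guard n_z ~ 0
    p = -n[0] / nz                                     # p = dz/dx
    q = -n[1] / nz                                     # q = dz/dy
    return n.reshape(3, H, W), p.reshape(H, W), q.reshape(H, W)
```

With exactly three lights the least-squares solution reduces to the matrix inverse of Equation (23); with $k > 3$ lights the extra images over-determine the system and suppress noise in the recovered normals.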

3.2. Light Source Positioning Design

During visual inspection inside vacuum chambers, the position of the light source has a significant impact on photometric stereo vision. Uniformity of illumination is often bought at the cost of luminous flux efficiency; that is, less light is delivered to the interior of the vacuum chamber so that the variation in light intensity stays low. Therefore, this paper designs a simulation system for the light source illumination profile and computes light source positions that give uniform and efficient lighting by simulating the layout of various light sources. The location of the centre of each light source's illumination region is described in a Cartesian coordinate system, as shown in Figure 4, and the normal polar coordinates relative to the vacuum chamber are defined as $\theta_l$ and $\varphi_l$. By adjusting the input variables, the total luminous flux and the illumination variation can be optimized. Based on the input parameter values, the designed optimal light source system generates a set of illumination configurations on its own and calculates the illumination profile observed by the robot. The luminous intensity of a point light source in each direction is
$$I_\theta = I_0 F(\theta) = I_0 (\cos\theta)^g \tag{26}$$
where $g$ is the angular attenuation factor related to the uniformity of the light source and $I_0$ is the light intensity along the main optical axis. If $g = 0$, the source is an ideal (isotropic) light source; if $g = 1$, it is a Lambertian source.
The supplementary light source is a hemispherical LED point light source with a diameter of 5 mm; its luminous intensity, and the illuminance obtained on a plane at distance $d$ from the LED, are
$$I_{LED}(\theta) = \sum_{i=1}^{n} c_{1i}\left[\cos(\theta - c_{2i})\right]^{c_{3i}} \tag{27}$$
$$E(r, \theta) = \frac{I_0 \cos\theta}{d^2} \tag{28}$$
where $c_{1i}$, $c_{2i}$, $c_{3i}$ are parameters to be determined and $n$ is the number of cosine terms. When the reflection characteristics of the material are approximated as Lambertian, the dual Lambertian illumination model is obtained:
$$E = \frac{K_\alpha z^{m+1} L_{LED} A_{LED}}{\left[(x_i - x_0)^2 + (y_i - y_0)^2 + z^2\right]^{\frac{m+3}{2}}} = \frac{K z^{m+1}}{\left(d^2 + z^2\right)^{\frac{m+3}{2}}} \tag{29}$$
where $K_\alpha$ is the camera sensor response parameter, $L_{LED}$ is the luminance of the LED, $A_{LED}$ is the area of the LED's light-emitting chip, and $m = 1$ for a Lambertian source. $d$ is the lateral distance between the point light source and the measured position of the vacuum chamber, and $z$ is the depth of the measured position. The optimal lighting configuration produced by the designed light source positioning system is shown in Table 1, and its output is shown in Figure 5. Related filtering optimizations can be accomplished by testing multiple sets of instances. By applying Lambert's cosine emission law and the distance factor from the secondary target to the lens, the final optical radiation intensity can be determined. In addition, illumination from multiple light sources can be computed efficiently by exploiting positional symmetry. The optimal light source positioning system designed in this paper provides optimized light arrays for crack detection inside vacuum chambers without the need for time-consuming experimental testing.
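The core of such an illumination simulation can be sketched as follows. This is a minimal sketch under the Lambertian assumptions above, evaluating Equations (26) and (28) for a flat target plane; the geometry and all function names are our own.

```python
import numpy as np

def led_illuminance(grid_x, grid_y, led_pos, I0=1.0, g=1.0):
    """Illuminance on the z = 0 plane from one point LED with the angular
    profile I(theta) = I0 * cos(theta)^g of Equation (26) and the
    inverse-square falloff of Equation (28).
    led_pos = (x0, y0, h); theta is measured from the downward optical axis."""
    x0, y0, h = led_pos
    dx, dy = grid_x - x0, grid_y - y0
    d2 = dx**2 + dy**2 + h**2                 # squared source-to-point distance
    cos_t = h / np.sqrt(d2)                   # on a flat plane, the emission
    # angle equals the incidence angle, so cos(theta) appears twice
    return I0 * cos_t**g * cos_t / d2

def array_illuminance(grid_x, grid_y, led_positions, **kw):
    """Total illuminance from an array of LEDs (superposition)."""
    return sum(led_illuminance(grid_x, grid_y, p, **kw) for p in led_positions)
```

Evaluating `array_illuminance` over a grid for candidate LED layouts gives the illumination profile directly, so uniformity (e.g., the ratio of standard deviation to mean) can be compared across layouts without physical testing.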

4. Defect Detection Experiments

4.1. Experimental Platform

The vacuum chamber robot hardware system is shown in Figure 6a; it mainly consists of an 11-degree-of-freedom robot, an end-mounted binocular camera, and a light source system. The control system consists of a high-performance multi-axis motion control card and an NVIDIA Jetson Nano control board, which provides sufficient computing power for on-board processing of the 3D reconstruction of fissures. The camera is an RGBD binocular camera, and its specific parameters are shown in Table 2. The experimental object for the vacuum chamber defect detection designed in this paper is a composite matte plate whose length, width, and height are 400 mm × 400 mm × 20 mm, as shown in Figure 6b. To reduce random detection errors, the actual depths of the three defects on the simulated plate selected for the experiment are 15 mm, 10 mm, and 5 mm, respectively.
The software system of the fusion robot inspection algorithm mainly consists of a path planning module and a defect detection module, as shown in Figure 7. The 3D path planning program of the fusion robot is realized by calling functions directly in the motion control card. The defect detection system for the vacuum chamber communicates wirelessly through a VMware virtual machine; in the constructed wireless local area network (WLAN), PuTTY 0.67 is used for the serial connection task. The camera carried at the end of the robot continuously acquires images of the interior of the vacuum chamber during inspection, and the depth information and 3D reconstruction of defects inside the vacuum chamber can be accomplished in real time.

4.2. Low-Light Image Enhancement Experiment

To verify the effectiveness of the image enhancement algorithm proposed in this paper, the defects inside the vacuum chamber are first detected using traditional methods. The detection steps mainly include smoothing, threshold segmentation, and defect contour extraction. The experimental results of contour detection of simulated defects inside the vacuum chamber are shown in Figure 8. The analysis shows that the contour length information can be obtained by setting the average brightness and overall size parameters of the input image, so defect contour extraction is initially realized. The results of depth detection are shown in Figure 9. The depth is rendered in hue, saturation, and value (HSV) colour space, with different colours indicating proximity to the detection camera. From the experimental results, the reconstructed crack location differs from the actual location, and there is a large amount of noise in the detection results at different moments. The actual defect location on the simulated plate is 600 mm from the robot's end camera; in the experimental results the blue area indicates a depth value of 620 mm and the red area a depth value of 640 mm, while the actual defect depth values lie between 630 mm and 650 mm. The traditional method therefore has a large detection error and only obtains the approximate distance of the detection target, so it cannot realize an accurate defect reconstruction task. In low-light situations, the method fails almost completely.
To this end, an improved multi-scale Retinex algorithm with adaptive weights is designed in this paper for low-illumination image enhancement, and the experimental results are shown in Figure 10. Figure 10a–d shows the original illuminated images of the experimental scene and the simulated defective plates, and Figure 10e–h shows the low-illumination images after enhancement. It can be observed that the improved adaptive-weight multi-scale Retinex algorithm designed in this paper enhances image contrast and maintains detailed information without adding noise. From Figure 10h, all three defects can be detected using the algorithm designed in this paper.
To facilitate observation of the enhancement effect, the images before and after enhancement are processed into normalized gray-level histograms, as shown in Figure 11. Figure 11a–d shows the normalized histograms of the images before enhancement; the overall gray level of each image is low and concentrated in the low-gray region. Figure 11e–h shows the normalized histograms after enhancement: the gray levels have been broadened to varying degrees, the overall luminance values have been raised, and the contrast has been improved, demonstrating the effectiveness of the image enhancement algorithm.

4.3. Defect Reconstruction Experiment

When the low-light image is too blurred, even with the enhancement algorithm it is still not possible to obtain more defect information, especially for small defects. To reconstruct the defect information accurately, this paper uses three parallel light sources in different directions to increase the illumination of the vacuum chamber and carries out 3D reconstruction of defects based on the principle of photometric stereo vision. The incident unit vectors of the three light sources are $l_1$, $l_2$, and $l_3$. By lighting the sources one at a time, the gray values $I_1$, $I_2$, and $I_3$ of the three images under different illumination are obtained, and the normal vector $n$ follows from the calculation in Section 3.1. The experimental detection process with the three light sources acting individually is shown in Figure 12.
Since the inner product of the simulated plate's normal and the light source direction is proportional to irradiance, the ratios of the intensities of light sources #1 to #2 and #2 to #3 can be obtained. In this paper, a scattered-data interpolation function is used to store these two ratios and build a light intensity lookup table, where the illuminance is 215 lux under light source #1, 3380 lux under light source #2, and 577 lux under light source #3. The gradient at a point of the detection image was defined as $(p, q)$ in Section 3.1 and is used as a function of the image intensity. $p$ and $q$ are set to range from $-10$ to $10$ in steps of $0.1$. Since a lookup table has been constructed to find the gradient $(p, q)$ for each pixel, a gradient map of the same size as the defect image can be constructed.
The distance between the camera and the specimen in the detection experiment is 400 mm, and the measured width and depth are taken as the maxima obtained from contour extraction and 3D reconstruction, respectively. The gradient and normal direction of each pixel inside the vacuum chamber can be computed efficiently using the lookup table. Integration is then performed along a path to recover the relative depth of each pixel of the defect, completing the 3D reconstruction of the crack defect in the image; the results are shown in Figure 13 and Table 3, where x is the length of the defect in the horizontal direction, y is the length in the vertical direction, and z is the depth perpendicular to the detected plate. Detailed location and depth information of the defects can be obtained from the experimental results. The maximum depth error is less than 24.0%, and the maximum width error is less than 15.3%. There is some distortion in the reconstructed defect shapes, mainly for two reasons: (1) the integrability condition at edges in photometric stereo vision is only approximately satisfied; (2) the camera's limited viewing angle produces shadow and highlight areas around the defects, which introduces errors into the gradient map. Overall, the gradient maps for vacuum chamber defect detection are well established, and the goal of defect recognition and reconstruction is achieved.
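The path-integration step can be illustrated with a simple row-then-column scheme on a synthetic groove; this is a minimal sketch of gradient-field integration, not the paper's exact integration path:

```python
import numpy as np

def integrate_gradient(p, q):
    """Recover relative depth z from gradient maps (p = dz/dx, q = dz/dy)
    by integrating along a path: across the top row, then down each column."""
    h, w = p.shape
    z = np.zeros((h, w))
    z[0, :] = np.cumsum(p[0, :])                       # integrate along the top row
    z[1:, :] = z[0, :] + np.cumsum(q[1:, :], axis=0)   # then down each column
    return z - z.min()                                 # relative depth, offset to zero

# A synthetic "crack": a V-shaped groove of depth 5 in an otherwise flat plate.
x = np.arange(32, dtype=float)
z_true = np.clip(np.abs(x - 16.0) - 5.0, None, 0.0)
z_true = np.tile(z_true, (32, 1))

# Forward-difference gradients of the true surface.
p = np.zeros_like(z_true); p[:, 1:] = np.diff(z_true, axis=1)
q = np.zeros_like(z_true); q[1:, :] = np.diff(z_true, axis=0)

z_rec = integrate_gradient(p, q)
print(z_rec.max() - z_rec.min())  # 5.0 — the groove depth is recovered
```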

5. Conclusions

In this paper, an improved multi-scale Retinex low-light image enhancement algorithm with adaptive weights is designed for low-light images inside the fusion reactor vacuum chamber. It not only enhances image contrast under the low-illumination conditions inside the vacuum chamber but also recovers detailed image information that is invisible at low illumination, while effectively maintaining image clarity and contrast. A defect detection system based on photometric stereo vision is constructed for defect detection and 3D reconstruction inside the vacuum chamber. The normal vector and gradient field of the object surface are calculated from the illumination directions and the light/dark relationships in the vacuum chamber images; the gradient matrix of all pixels in the image is obtained by matrix transformation, which ultimately yields the reconstructed 3D depth matrix. To optimize the light source positions, a simulation system for the light source illumination profile is designed, which provides an optimized light array for crack detection inside vacuum chambers without time-consuming experimental testing. Finally, a vacuum chamber inspection robot platform carrying a binocular stereo-vision camera is built, and image enhancement and defect reconstruction experiments are performed separately. The results show that the multi-scale Retinex low-light image algorithm with adaptive weights broadens the gray levels of low-light images and increases brightness and contrast. The defect reconstruction algorithm based on photometric stereo vision obtains detailed position and depth information of different defects, with a maximum depth error of less than 24.0% and a maximum width error of less than 15.3%, achieving the goal of detecting and reconstructing surface defects inside the vacuum chamber.
In the future, we will continue researching defect detection algorithms for fusion reactor vacuum chambers. Defect detection experiments under different illumination conditions will be conducted on the China Fusion Engineering Test Reactor, currently under construction, to verify the effectiveness of the image enhancement and defect reconstruction algorithms.

Author Contributions

Conceptualization, G.Q. and H.Z.; methodology, H.Z.; software, G.Q. and H.Z.; validation, G.Q. and Y.C.; formal analysis, G.Q. and H.Z.; data curation, F.W., Y.X. and S.L.; writing—original draft preparation, G.Q. and H.Z.; visualization, R.Z., X.Q. and C.Z.; supervision, A.J.; project administration, C.Z. and A.J.; funding acquisition, C.Z. and A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was partially funded by the National Natural Science Foundation of China (Grant No. 12305251), the Postdoctoral Fellowship Program of CPSF (Grant No. GZB20230770), the Research Fund of the State Key Laboratory of Mechanics and Control for Aerospace Structures (1005-ZAG23011), and the Comprehensive Research Facility for Fusion Technology Program of China under Contract No. 2018-000052-73-01-001228.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Fusion reactor vacuum chamber defect detection.
Figure 2. Vacuum chamber defect detection (a) original image of cracks (b) grayscale image (c) Gaussian filtered image (d) standard gradient map (e) binary map (f) image with noisy regions removed.
Figure 3. The principles of the human visual system.
Figure 4. Light source location and illumination profile (a) LED light source relative position and variables (b) single light source contour.
Figure 5. Light source location and illumination profile for multi-input tests (a) light 1: θ 1 = 0 ° ,   α 1 = 0 ° (b) light 2: θ 1 = 30 ° ,   α 1 = 30 ° (c) light 3: θ 1 = 30 ° ,   α 1 = 30 ° (d) light 4: θ 1 = 30 ° ,   α 1 = 90 ° (e) light 5: θ 1 = 60 ° ,   α 1 = 30 ° (f) light 6: θ 1 = 30 ° ,   α 1 = 30 ° .
Figure 6. Experimental platform for detecting defects (a) robot (b) drive box (c) wiring terminals (d) drivers (e) defects of setting.
Figure 8. Detection results of internal defects by conventional method (a) original image (b) smoothing (c) horizontal detection (d) vertical detection (e) defect extraction (f) contour measurement.
Figure 9. Defect depth maps at different moments by conventional method (a) defective samples (b) depth map at t = 0 s (c) depth map at t = 10 s (d) depth map at t = 30 s.
Figure 10. Experiments on low-light image enhancement of cabin surface by improved method (a) traditional method main view (b) traditional method low illumination (c) traditional method side view (d) traditional method very low illumination (e) improved method main view (f) improved method low illumination (g) improved method side view (h) improved method very low illumination.
Figure 11. Normalized histograms of image enhancement experiments (a) main view histogram (b) low illumination histogram (c) side view histogram (d) very low illumination histogram (e) main view of improved method (f) low illumination of improved method (g) side view of improved method (h) very low illumination of improved method.
Figure 12. Detection results under different LED light sources (a) #1 light source detection experiments (b) #2 light source detection experiments (c) #3 light source detection experiments (d) #1 light source detection image (e) #2 light source detection image (f) #3 light source detection image.
Figure 13. Surface defect normal vectors and 3D reconstruction results (a) Normal vector map of defect #1 (b) 3D reconstruction results of defect #1 (c) Normal vector map of defect #2 (d) 3D reconstruction results of defect #2 (e) Normal vector map of defect #3 (f) 3D reconstruction results of defect #3.
Figure 7. Control system for defect inspection platform.
Table 1. Input values for the light source positioning system.

LED Light | X-Axis | Y-Axis | Height | θl | φl
Light 1 | 10 cm | 20 cm | 60 cm | 0° | 0°
Light 2 | 10 cm | 20 cm | 60 cm | 30° | 30°
Light 3 | 40 cm | 20 cm | 35 cm | 30° | 30°
Light 4 | 40 cm | 80 cm | 110 cm | 30° | 90°
Light 5 | 100 cm | 20 cm | 60 cm | 60° | 30°
Light 6 | 100 cm | 60 cm | 110 cm | 30° | 30°
Table 2. RGBD binocular camera solid-state parameters.

Types | Parameters
Camera size | 59 mm × 17 mm × 11 mm
Depth accuracy | 1 m ± 6 mm
Depth resolution | 640 × 400
Depth field of view | H: 67.9°, V: 45.3°
RGB field of view | H: 71.0°, V: 43.7°
Baseline | 120.0 mm
Monitoring range | 2.1 mm
Synchronization accuracy | < 1 ms
Working range | 0.8 m–5 m
Table 3. Defect detection results of specimens inside the vacuum chamber.

No. | Depth | Width | Depth Error | Width Error
#1 | 11.57 mm | 34.6 mm | 22.9% | 15.3%
#2 | 8.3 mm | 11.9 mm | 17.0% | 12.5%
#3 | 3.8 mm | 16.3 mm | 24.0% | 8.7%