Article

Surface Illumination as a Factor Influencing the Efficacy of Defect Recognition on a Rolled Metal Surface Using a Deep Neural Network

by Pavlo Maruschak, Ihor Konovalenko, Yaroslav Osadtsa, Volodymyr Medvid, Oleksandr Shovkun, Denys Baran, Halyna Kozbur and Roman Mykhailyshyn
1 Department of Industrial Automation, Ternopil National Ivan Puluj Technical University, Ruska Str. 56, 46001 Ternopil, Ukraine
2 Walker Department of Mechanical Engineering, Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712, USA
3 EPAM School of Digital Technologies, American University Kyiv, 02000 Kyiv, Ukraine
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2591; https://doi.org/10.3390/app14062591
Submission received: 28 January 2024 / Revised: 28 February 2024 / Accepted: 11 March 2024 / Published: 20 March 2024
(This article belongs to the Section Surface Sciences and Technology)

Abstract

Modern neural networks have made great strides in recognising objects in images and are widely used in defect detection. However, the output of a neural network strongly depends on both the training dataset and the conditions under which the image was acquired for analysis. We have developed a software–hardware method for evaluating the effect of variable lighting on the results of defect recognition using a neural network model. The proposed approach allows us to analyse the recognition results of an existing neural network model and identify the optimal range of illumination at which the desired defects are recognised most consistently. For this purpose, we analysed the variability in quantitative parameters (area and orientation) of damage obtained at different degrees of illumination for two different light sources: LED and conventional incandescent lamps. We calculated each image's average illuminance and the quantitative parameters of the recognised defects. Each set of parameters represents the results of defect recognition at a particular illuminance level of a given light source. The proposed approach allows the results obtained using different light sources and illumination levels to be compared and the optimal source type and illuminance level to be determined. This makes it possible to implement a defect detection environment that provides the best recognition accuracy and the most controlled product quality. An analysis of a steel sheet surface showed that the best recognition result was achieved at an illuminance of ~200 lx. An illuminance of less than ~150 lx does not allow most defects to be recognised, whereas an illuminance larger than ~250 lx increases the number of small objects that are falsely recognised as defects.

1. Introduction

Recognising defects in rolled metal is a primary scientific and technical task [1]. Depending on the rolling machine type, roll wear type, etc., the surface may contain various defects (scratches, abrasions, dimples, pits, etc.) [2], characterised by diverse geometry and morphology. This makes defectoscopy algorithms rather complicated. In addition, defects may have different shapes and be of different types, which causes errors in their classification and recognition, since certain defects are similar in shape and structure [3,4,5].
In recent years, many approaches based on convolutional neural networks have been proposed to address the problem of recognising the position of defects on the image of a surface [6,7,8]. With an adequate training sample, this recognition technique allows for a high accuracy (more than 95%). For example, the paper in [9] introduced a lightweight convolutional neural network, WearNet, to realise automatic scratch detection for components in contact sliding, such as those in metal forming. WearNet can reach an excellent classification accuracy of 94.16% with a much smaller model size and faster detection speed.
However, in line with other optical–digital methods, the techniques considered are highly dependent on the image acquisition conditions, which limits their use in real industrial conditions [10,11]. Changing the illumination parameters, the position of the light source relative to the object of interest, and the presence of several light sources may have a crucial effect on the image obtained using a digital camera [12,13,14,15]. As a result, several images of the same surface obtained with a digital camera under different lighting and photo-shooting conditions may differ significantly. Changing the conditions under which digital images are obtained may enhance (or, on the contrary, blur) individual surface artefacts, which may significantly affect the final recognition result. Thus, insufficient (or, conversely, excessive) illumination of the surface may entail diagnostic errors. In this regard, investigating the effect of the illumination level on the recognition of digital images of surfaces with defects appears relevant and essential. This will provide a better understanding of the possibilities inherent in this method and make it possible to find the conditions that minimise recognition errors [16,17,18].
Thus, computer vision-based surface defect detection remains challenging due to uneven illumination and unstable image acquisition conditions [19]. Low contrast and the heterogeneous patterns of defects make this task even more complicated. Current methods usually lack adaptability to various scenes and suffer from insufficient repeatability of measurement results in environments with uneven illuminance (daylight, lamplight, mixed light, etc.). The paper in [19] proposes a method that employs saliency detection and intrinsic image decomposition, which can improve the detection of diverse types of defects under uneven illumination.
The paper in [20] proposed a method for detecting surface image defects that relies on adjusting the structure and training parameters of a convolutional neural network to identify various defects accurately. Experiments on the defect inspection of copper and steel images show that the convolutional neural network can automatically learn features without preprocessing the image and correctly identify various types of image defects affected by uneven illumination, thus overcoming the drawbacks of traditional machine vision inspection methods under uneven illumination.
The authors in [21] propose a joint-prior-based uneven illumination enhancement method that improves defect detection using a semi-coupled Retinex model.
The mentioned methods can make defect detection more accurate, but they do not give insight into the repeatability of measurement results (under uneven illumination) and do not allow the best defect detection conditions to be estimated.
Modern investigations have shown that neural network-based defect detection methods achieve high accuracy in recognising different classes of surface defects [22,23,24]. However, it remains crucial to investigate and streamline the capabilities and limitations of neural networks in detecting, classifying, and calculating the parameters of the most common groups of defects [25,26,27]. This article considers how the illumination level applied to the surface of a specimen made of rolled metal may affect the results of recognising defects using a neural network.
Our experiments used several numerical parameters (defect area and orientation) to characterise the recognised defects. In addition, their variability was determined under different types of light sources and illuminance levels. An application for analysing the recognition results and calculating/analysing quantitative parameters was used for the image analysis, along with a previously developed and trained convolutional neural network.

2. Research Technique

2.1. Object of Interest

The surface of a metal plate with scratches applied onto it was used to analyse the effect of illumination on the detection and parametric description of defects found in rolled metal. Most scratch-type defects of rolled metal result from the wear of metal rollers, scale adhesion, etc., during which the rolled metal surface is subjected to mechanical scratching. Therefore, the scratches were applied mechanically, making it possible to consider the simulated defects an accurate reflection of defects obtained in production conditions [2]. Optical–digital methods for identifying defects of the rolled metal surface are impeded by a non-uniform scatter of illumination on the surface of interest. The surface colour may also change due to the oxidation of rolled metal, which is another impediment. These facts were considered while designing the laboratory and experimental setup.

2.2. Setup for Investigating the Effect of Illuminance

The power consumption in industrial conditions can be reduced by lighting devices based on semiconductor light sources, whose main advantages include high luminous efficiency, long service life, and environmental friendliness [28]. The standard values of qualitative and quantitative light parameters can be ensured by increasing the power and, consequently, the light flux produced by the lighting devices at the level of the diagnosed surface of the metal strip, which leads to a variation in the illuminance level.
The problem of analysing the influence of illumination on defect recognition accuracy consists of the following:
evaluating the surface illuminance level and determining its influence on the defect recognition algorithm;
defining the most favourable lighting conditions for diagnostics.
The design and general view of the experimental setup are presented in Figure 1 and Figure 2. The setup consists of a box (KAMERA), on the bottom of which we placed the ES specimen of interest. The reflected component of the light flux could affect the specimen surface; to eliminate this effect, a coating with a reflection coefficient close to zero was applied to the inner surface of the box.
Replaceable modules with the EL1 incandescent lamp and EL2 LEDs were used as light sources in the upper part of the box. The incandescent lamp module was powered by an alternating voltage source (a laboratory autotransformer) and the LED module by a constant voltage source, both of which, in turn, were powered by the mains. The illuminance level applied to the specimen was varied by changing the supply voltage and, consequently, the light flux coming out of the light sources. The supply voltage of the module with the incandescent lamp varied from 80 V to 230 V; the LED module's constant voltage varied from 8 V to 13 V. The V1 voltmeter connected in parallel to the light sources recorded their supply voltage. The calculated illumination values were compared with the LUXMETER readings to ensure high accuracy of the measurement results.
Five minutes after applying voltage to the light sources, after their transition to the preset operating conditions, plate surface images (Figure 3) were obtained using the Canon EOS 1300D camera (Canon, Tokyo, Japan) at room temperature. The aperture number was 5, and the shutter speed was 1/60 s. Fragment 1 of the plate image shown in Figure 3 was used for further analysis. The resolution of this image fragment was 2650 × 4734 pixels.

2.3. Photometric Characteristics of the Light Sources

Before obtaining the images, we measured the photometric characteristics of the two light sources used, namely, the dependence of the light flux on the consumed voltage (Figure 4) and the light scatter. The light intensity curves that characterise the light sources are shown in Figure 5. The photometric characteristics were measured in the specialised lighting laboratory of VATRA Corporation OSP LLC.

2.4. Calculating Illumination of the Specimen Surface

The direct illumination component, produced by the light flux from the light sources, was calculated for the specimen surface in the zone of the obtained images. The illumination of the specimen surface lit by the LED lamp was calculated based on the geometric scheme presented in Figure 6a. In this scheme, the LED light source (2), with its geometric and optical centre at point O, is represented by a uniformly bright disk ($d = 0.158$ m) placed at a distance $h = OO_1 = 0.200$ m from the surface covered by the image zone 1.
The illumination $E_A$ of point A, which is located on the specimen surface covered by the image zone, is calculated according to the following equation [26]:

$$E_A = \pi L \sin^2 \varphi, \qquad (1)$$

where $L$ is the brightness of the LED light source; $\varphi$ is half the angle at which the LED light source is visible from the calculation point.

The brightness is determined by the following ratios:

$$L = \frac{M}{\pi} = \frac{\Phi}{\pi \cdot \frac{\pi d^2}{4}} = \frac{4\Phi}{\pi^2 d^2}, \qquad (2)$$

where $M$ and $\Phi$ are the luminous exitance and the light flux coming out of the LED light source, respectively.

$$\varphi = \frac{1}{2}\left(\varphi_2 - \varphi_1\right) = \frac{1}{2}\left[\operatorname{arctg}\frac{l_{OA_1} + r}{h} - \operatorname{arctg}\frac{l_{OA_1} - r}{h}\right] = \frac{1}{2}\operatorname{arctg}\frac{\frac{l_{OA_1}+r}{h} - \frac{l_{OA_1}-r}{h}}{1 + \frac{l_{OA_1}+r}{h} \cdot \frac{l_{OA_1}-r}{h}} = \frac{1}{2}\operatorname{arctg}\frac{2hr}{h^2 + l_{OA_1}^2 - r^2}, \qquad (3)$$

where $l_{OA_1} = l_{AO_1}$ is the length of segment $OA_1$; $r = d/2$.
Substituting (2) and (3) into (1), we obtain:

$$E_A = \pi \frac{4\Phi}{\pi^2 d^2} \sin^2\left(\frac{1}{2}\operatorname{arctg}\frac{2hr}{h^2 + l_{OA_1}^2 - r^2}\right) = \frac{4\Phi}{\pi d^2} \cdot \frac{1 - \cos\left(\operatorname{arctg}\frac{2hr}{h^2 + l_{OA_1}^2 - r^2}\right)}{2} = \frac{2\Phi}{\pi d^2}\left[1 - \frac{1}{\sqrt{1 + \left(\frac{2hr}{h^2 + l_{OA_1}^2 - r^2}\right)^2}}\right] = \frac{2\Phi}{\pi d^2}\left[1 - \frac{h^2 + l_{OA_1}^2 - r^2}{\sqrt{\left(h^2 + l_{OA_1}^2 - r^2\right)^2 + 4h^2 r^2}}\right]. \qquad (4)$$

Before obtaining the images, we measured the distances between points $O_0$ and $O_1$ along the coordinate axes. The position of point $O_0$ on the plate is shown in Figure 3. Along the abscissa axis, $l_{O_0O_1x} = 0.083$ m; along the ordinate axis, $l_{O_0O_1y} = 0.112$ m.

Then, the distance $l_{AO_1}$ was determined as:

$$l_{AO_1} = \sqrt{\left(x_A - x_{O_1}\right)^2 + \left(y_A - y_{O_1}\right)^2} = \sqrt{\left(x_A - x_{O_0} + l_{O_0O_1x}\right)^2 + \left(y_A - y_{O_0} + l_{O_0O_1y}\right)^2} = \sqrt{\left(l_{AO_0x} + l_{O_0O_1x}\right)^2 + \left(l_{AO_0y} + l_{O_0O_1y}\right)^2}, \qquad (5)$$

where $l_{AO_0x} = x_A - x_{O_0}$ and $l_{AO_0y} = y_A - y_{O_0}$ are the differences between the corresponding coordinates of points A and $O_0$.

The numerical values of $l_{AO_0x}$ and $l_{AO_0y}$ are obtained from the following equations:

$$l_{AO_0x} = \left(n_A - n_{O_0}\right)\frac{l_p F}{h_0 - F}, \qquad l_{AO_0y} = \left(m_A - m_{O_0}\right)\frac{l_p F}{h_0 - F}, \qquad (6)$$

where $n_A$ and $n_{O_0} = 4696$ are the ordinal numbers of the image columns, and $m_A$ and $m_{O_0} = 708$ are the ordinal numbers of the image rows corresponding to the positions of points A and $O_0$, respectively; $l_p = 4.3 \times 10^{-6}$ m is the pixel size of the matrix photoconverter of the Canon EOS 1300D camera; $F = 35 \times 10^{-3}$ m is the focal length of the camera's optical system; $h_0 = 38.71 \times 10^{-3}$ m is the distance between the optical system and the matrix photoconverter's surface.
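To make the computation flow of Equations (4)–(6) concrete, the following is a minimal Python sketch (our illustration, not the authors' code); the function names are ours, and the constants are those listed above.

```python
import numpy as np

# Constants from Section 2.4 (LED source)
D = 0.158                 # diameter of the uniformly bright disk, m
R = D / 2                 # disk radius r = d/2, m
H = 0.200                 # distance h = OO1 from the disk to the surface, m
L_P = 4.3e-6              # pixel size of the matrix photoconverter, m
F = 35e-3                 # focal length of the optical system, m
H0 = 38.71e-3             # distance between the optical system and the sensor, m
L_O0O1_X, L_O0O1_Y = 0.083, 0.112   # offsets between O0 and O1, m

def pixel_to_metric(n, m, n_o0=4696, m_o0=708):
    """Equation (6): pixel offsets from O0 converted to metres on the specimen."""
    scale = L_P * F / (H0 - F)
    return (n - n_o0) * scale, (m - m_o0) * scale

def illuminance_led(flux, n, m):
    """Equations (4)-(6): illuminance (lx) at image pixel (n, m) of a point
    on the specimen surface, for an LED source emitting `flux` lumens."""
    l_x, l_y = pixel_to_metric(n, m)
    l_ao1_sq = (l_x + L_O0O1_X) ** 2 + (l_y + L_O0O1_Y) ** 2   # Equation (5), squared
    a = H ** 2 + l_ao1_sq - R ** 2
    return 2 * flux / (np.pi * D ** 2) * (1 - a / np.sqrt(a ** 2 + 4 * H ** 2 * R ** 2))
```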
We calculate the illumination of the specimen image zone lit by the incandescent lamp (the calculation scheme is presented in Figure 6b) based on the inverse square law of distance, according to which [26]:

$$E_A = \frac{I_\alpha \cos\alpha}{l_{OA}^2} = \frac{I_\alpha \cos\alpha}{l_{OA_1}^2 + h^2}, \qquad (7)$$

$$\alpha = \operatorname{arctg}\frac{l_{OA_1}}{h}, \qquad (8)$$

where $I_\alpha$ is the light intensity in the direction of the calculation point; $\alpha$ is the angle between the $I_\alpha$ direction and the normal to the calculation plane.

$l_{OA_1}$ was calculated by Equations (5) and (6), given that $l_{O_0O_1x} = 0.103$ m and $l_{O_0O_1y} = 0.154$ m. Moreover, the distance between the luminous body of the incandescent lamp and the calculation plane was $h = OO_1 = 0.125$ m.

The light intensity $I_\alpha$ in the direction of the calculation point is determined using the following equations:

$$I_\alpha = K_f I_\alpha(\alpha), \qquad (9)$$

$$K_f = \frac{\Phi(U)}{\Phi(220\ \mathrm{V})}, \qquad (10)$$

$$I_\alpha(\alpha) = -498.000\alpha^6 + 1871.500\alpha^5 - 2676.600\alpha^4 + 1784.800\alpha^3 - 555.440\alpha^2 + 69.791\alpha + 74.004, \qquad (11)$$

where $I_\alpha(\alpha)$ is the function approximating the dependence of light intensity on the illumination angle (Figure 7), obtained at a consumed voltage of 220 V (the relative maximum error is 1.17%); $K_f$ is a coefficient equal to the ratio of the light flux coming out of the incandescent lamp at a given consumed voltage to that at a voltage of 220 V.
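The incandescent-lamp branch of the calculation, Equations (7)–(11), can be sketched in the same spirit (again, an illustration under our naming assumptions); `flux_u` and `flux_220` stand for $\Phi(U)$ and $\Phi(220\ \mathrm{V})$ taken from the measured curve in Figure 4b.

```python
import numpy as np

H_LAMP = 0.125   # distance h = OO1 from the lamp filament to the surface, m

# Coefficients of the polynomial in Equation (11), highest power first (220 V)
POLY_220V = [-498.000, 1871.500, -2676.600, 1784.800, -555.440, 69.791, 74.004]

def illuminance_lamp(flux_u, flux_220, l_oa1):
    """Equations (7)-(11): illuminance (lx) at a point whose horizontal distance
    from the lamp axis is l_oa1 (m), given the lamp flux at the supply voltage
    used (flux_u) and at 220 V (flux_220)."""
    alpha = np.arctan(l_oa1 / H_LAMP)              # Equation (8)
    k_f = flux_u / flux_220                        # Equation (10)
    i_alpha = k_f * np.polyval(POLY_220V, alpha)   # Equations (9) and (11)
    return i_alpha * np.cos(alpha) / (l_oa1 ** 2 + H_LAMP ** 2)   # Equation (7)
```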
The obtained specimen images were loaded into the MATLAB package, in which the illumination of the horizontal surface covered by the image zone was calculated for each digital image element. Equations (4)–(6) were used for the images obtained using the LED light source, while Equations (7)–(11) were used for those obtained using the incandescent lamp. Figure 8 shows the scatter of illumination over the surface in the digital image for the specimen fragment obtained using the LED light source (Figure 8a) and the incandescent lamp (Figure 8b) at a consumed voltage of 13 V and 170 V, respectively.

The average illuminance level of the specimen shown in the image was used to analyse the defect recognition efficiency under different illumination conditions. This average was taken as the technical parameter of illumination [29], and the current illumination values generated by the light sources considered were reduced to it. This was done for the following reasons:

(1) The defects were studied over the entire plate surface that fell into the image zone; therefore, the average illumination level characterises the total light flux falling on the surface considered.

(2) When designing lighting systems, the average illumination level of the working surface is the standard parameter given in the regulatory documents [30].
We calculate the average illumination level $E_{av}$ over the image zone from the equation given in [29,30]:

$$E_{av} = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} E_A(i,j)}{m \cdot n}, \qquad (12)$$

where $m = 4734$ and $n = 2650$ are the dimensions (in pixels) of the specimen fragment considered.
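With a per-pixel illuminance map assembled from the sketches above, Equation (12) reduces to an arithmetic mean; a brief illustration (assuming, for simplicity, that the pixel indices are in the same frame as $n_{O_0}$ and $m_{O_0}$):

```python
import numpy as np

M, N = 4734, 2650   # fragment size in pixels, as in Equation (12)

# Build the illuminance map for the LED source at 13 V (106 lm, Table 1)
n_idx, m_idx = np.meshgrid(np.arange(N), np.arange(M))
e_map = illuminance_led(106.0, n_idx, m_idx)

e_av = e_map.mean()   # Equation (12): average illuminance over the fragment, lx
```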

2.5. Calculating the Illumination in the Zone of the Image Fragment

The results of calculating the illumination in the zone of the image fragment, obtained at different values of supply voltage of the light sources, are presented in Table 1 and Table 2.

2.6. Neural Network for Defect Recognition

A neural network with a U-net architecture [31,32] (Figure 9) and the ResNet152-based encoder [33] were used to recognise defects on the surface of a metal strip. This allowed for the semantic segmentation of the image by marking the arrays of image pixels as belonging to damage. The ResNet152 encoder makes it possible to create a detailed map of features characteristic of damage, while the U-net decoder unit allows projecting the features recognised in the image back onto the original image, thus highlighting the areas with damage.
To train the neural network, we used:
images of rolled steel from the Severstal company (Russia) posted on the Kaggle platform in 2019 [34];
images of defects from the research entitled “Detecting Defects on Rolled Metal Surface” (placed on the Kaggle platform in 2020 [35]).
The training images were screened, checked, and marked by experts. A total of 11,000 images were compiled into a training sample (5800 images with defects of various shapes and orientations and 5200 defect-free images). The training sample was divided into the test part (10% of the total number of images), the validation part (15%), and the training part (75%). The training and validation samples were used to train the neural network, and the test sample was used to evaluate the trained model.
The classical SGD (Stochastic gradient descent) optimiser with the Nesterov moment was used to train the neural network. In SGD, instead of using the entire dataset for each iteration, only a single random training example (or a small batch) is selected to calculate the gradient and update the model parameters. The advantage of using SGD is its computational efficiency, especially when dealing with large datasets. This made it possible to accelerate the optimiser in the right direction and smooth out the fluctuations of the loss function [36,37].
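For illustration, such an optimiser is configured in a single call in PyTorch; the hyperparameter values below are placeholders, since the exact values used in the study are not reported here.

```python
import torch

model = torch.nn.Conv2d(3, 1, kernel_size=3)   # stand-in for the U-net/ResNet152 model

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,          # learning rate (assumed value)
    momentum=0.9,     # momentum coefficient (assumed value)
    nesterov=True,    # Nesterov momentum, as described above
)
```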
The $DSC$ metric (Dice similarity coefficient or Sørensen–Dice coefficient) was used to evaluate the quality of segmentation:

$$DSC = \frac{2\left|Y_{true} \cap Y_{pred}\right|}{\left|Y_{true}\right| + \left|Y_{pred}\right|}, \qquad (13)$$

where $Y_{true}$ is the array of pixels that belong to the valid object of damage in accordance with the markup, and $Y_{pred}$ is the array of pixels that belong to the detected object of damage.

$DSC$ measures the similarity between two datasets—pixels that belong to recognised and actual defects. In other words, $DSC$ shows the proportion of overlap between the two sets, normalised by their size. This metric ranges from 0 (no overlap) to 1 (perfect overlap).
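A direct NumPy implementation of Equation (13) for binary masks might look as follows (our sketch, not the authors' code):

```python
import numpy as np

def dice(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Equation (13): Dice similarity coefficient for two binary masks."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    denom = y_true.sum() + y_pred.sum()
    if denom == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(y_true, y_pred).sum() / denom
```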
The training sample contains images of different sizes. Therefore, regions of 256 × 256 pixels (corresponding to the size of the neural network's input layer) were randomly selected from images larger than 256 × 256 pixels during training, and the input tensor was then formed. In addition, an augmentation technique was used: each frame (image) was randomly transformed using horizontal or vertical flipping and rotation by a multiple of 90°. This approach made it possible to diversify the training data significantly and provided conditions under which training batches are practically never repeated, as sketched below.
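A minimal sketch of this crop-and-augment step, assuming NumPy arrays for the image and its defect mask (the function is ours, for illustration only):

```python
import numpy as np

def random_crop_augment(image: np.ndarray, mask: np.ndarray, size: int = 256):
    """Randomly crop a size x size region and apply the same random flip and
    90-degree rotation to the image and its defect mask."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    img = image[top:top + size, left:left + size]
    msk = mask[top:top + size, left:left + size]

    if np.random.rand() < 0.5:     # horizontal flip
        img, msk = img[:, ::-1], msk[:, ::-1]
    if np.random.rand() < 0.5:     # vertical flip
        img, msk = img[::-1, :], msk[::-1, :]
    k = np.random.randint(4)       # rotation by a multiple of 90 degrees
    img, msk = np.rot90(img, k), np.rot90(msk, k)
    return img.copy(), msk.copy()
```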
The model with the best $DSC$ metric was chosen as the final model for further research. The relevant test metric is $DSC@0.55 = 0.93$.

2.7. Calculation of Quantitative Parameters of Defects

As a result of image recognition, we obtain at the output a set of damage fragments, each consisting of a set of connected pixels (with at least one background pixel between any two such fragments). The area and inclination of each damage fragment were used to quantify the damage found. The damage fragment area was calculated as the total number of its pixels. The inclination $\theta$ of a fragment was calculated as the angle between the central axis of the equivalent ellipse and the X axis of the image (Figure 10).
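These per-fragment parameters can be extracted from the binary segmentation mask with standard connected-component tools; the following sketch uses scikit-image (our choice of library, not necessarily the authors' implementation):

```python
import numpy as np
from skimage import measure

def fragment_parameters(mask: np.ndarray):
    """Return (area, inclination) for each connected damage fragment.
    Area is the pixel count; inclination is the angle of the major axis
    of the equivalent ellipse (cf. Figure 10), in radians."""
    labels = measure.label(mask > 0)   # connected pixels form one fragment
    return [(r.area, r.orientation) for r in measure.regionprops(labels)]
```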

3. Defect Recognition Results and Analysis

The developed application was used to process a series of images obtained during the experiment, with different illumination levels applied to a specimen with surface damage. Figure 11 shows the initial images of surface damage obtained using the LED source at average illuminations of 324, 159, 64, and 37 lx (Figure 11a) and the results of their recognition using a neural network (Figure 11b). Images obtained using the incandescent lamp at average illuminations of 270, 149, 70, and 36 lx are shown in Figure 12.

Despite the nearly identical illumination ranges for both light sources (37–324 lx for the semiconductor source and 36–270 lx for the incandescent lamp), the results obtained using the neural network model differ significantly (see Figure 11b and Figure 12b). With the LED source, both horizontal and vertical scratches were recognised, while with the incandescent lamp, vertical scratches were almost never detected. The less pronounced parts of most defects were also recognised differently. The colour range of damage on the plate illuminated by the incandescent lamp is apparently closer to that of the undamaged areas; therefore, the damage becomes less pronounced.
Thus, changing the light source type appears sufficient to affect the final result of recognition significantly.
This indicates that the images used to train the neural network appear close to those obtained using the LED source. Therefore, the training sample should be expanded to include the images obtained under the incandescent lamps to improve the recognition results.

4. Discussion

As is seen from the results obtained, the area of the sections recognised as damage increases with an increase in the illuminance level applied to the surface of interest. This is because minor scratches (and other morphological elements of the surface) become more visible.
The transition from lower to higher illumination is especially noticeable (from 37 to 64 lx for the LED source or from 36 to 70 lx for the incandescent lamp). In the former case, the vertical scratches can hardly be seen in the image, while at an average illumination of ~64 lx, some of them become recognisable. At 159 lx, the neural network recognises most of them (with the LED source).

When increased above 150 lx, the light level adds nothing to the detection of new scratches (except, perhaps, very small ones). At the same time, it makes the objects of damage somewhat larger. Therefore, we can conclude that the total area of damage in the image increases with an increase in the light level.

We note that the neural network is very sensitive even in low light, especially for images obtained using the LED source; the most pronounced damage is well recognised at an average light level of 64 lx, even though the image is so dark that the damage is practically invisible to the eye (Figure 11a, third image). However, the damage is no longer recognisable at an average of 37 lx (Figure 11a, last image). In this case, pixel intensity varies from 1 to 5 in the zone with unrecognisable damage (see the initial photo). In other words, at 37 lx, the input image is not informative enough to conclude the presence of the objects of interest. Therefore, the neural network model is good at recognising the objects of interest in all images where at least a small gradient is preserved between the damage pixels and the background. Low-light images have another feature: small objects become invisible in them. Therefore, low light should be used when only the most pronounced damage needs to be recognised; this provides much lower noise in the image caused by the increased detail of surface artefacts that may not belong to damage (as in high light).

The area of damage is one of the most essential and apparent parameters characterising the surface. Figure 13 shows diagrams of the total areas recognised as damage using the two light sources considered. There is a significant difference in the total areas recognised as damage because thin vertical scratches are practically not recognised when using the incandescent lamp (Figure 12b).
When the illumination generated by the LED source varies from 35 to 160 lx, the total area recognised as damage noticeably increases, from 5000 pixels to 100,000 pixels (Figure 13a). A further increase in illumination does not entail a significant increase in this parameter (its values range from 100 to 125 thousand pixels). That is, the neural network has detected all the damage it was trained for, and a further increase in the light level does not add any new details that could be identified as damage. A similar pattern is observed for the incandescent lamp (Figure 13b): above a light level of 108 lx, the area of damage recognised increases only slightly. An exception is the very high light of 270 lx, at which some vertical scratches that were not detected before start to be recognised.
The scatter of damage areas recognised in the image under the LED source is shown in Figure 14a,b. On the scatter plot (Figure 14a), each marker corresponds to a particular fragment of damage. The graph allows us to conclude that most fragments have areas of up to 15 thousand pixels; areas bigger than 20 thousand pixels occur only in individual fragments. The box plot (Figure 14b) contains percentiles that characterise the scatter of areas. The lower boundary of the rectangle corresponds to the 25th percentile, the upper to the 75th percentile, and the line between them to the 50th percentile (median). A percentile indicates the proportion of defects with an area smaller than the specified one. For instance, a 25th percentile of 5000 pixels means that 25% of all fragments have an area smaller than 5000 pixels. The whiskers on this diagram show the values that fall within ±1.5 of the interquartile range $IQR = a_{75} - a_{25}$ (where $a_{75}$ and $a_{25}$ are the 75th and 25th percentiles, respectively).
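In code, these box-plot statistics follow directly from the sample of fragment areas; a brief illustration with hypothetical area values:

```python
import numpy as np

areas = np.array([3200, 5100, 6800, 7400, 12500, 15300, 41000])  # hypothetical, px

a25, a50, a75 = np.percentile(areas, [25, 50, 75])
iqr = a75 - a25           # interquartile range IQR = a75 - a25
lower = a25 - 1.5 * iqr   # whisker bounds: values outside these
upper = a75 + 1.5 * iqr   # bounds are plotted as outliers
```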
For the incandescent lamp (Figure 14c,d), the scatter of the recognised defect areas lies in approximately the same range (at an average light above 108 lx). Thus, after reaching a certain "saturation" level, the scatter of the damage-area values is the same for both light sources. Beyond this level, the areas of the recognised fragments change only slightly, and only a few new fragments are recognised; this is especially noticeable in the case of the incandescent lamp.
Box plots drawn up for the LED source allow us to conclude that, regardless of the light level, half of all recognised damage fragments have an area of up to 7000 pixels (the median). The number of very big fragments increases with the light level. Thus, at an average of 220 lx, fragments of damage with an area of 40,000 pixels and more appear in the image. This is because many more small defects can be recognised in higher light; moreover, when located close together, they merge into one large fragment.
The lowest scatter of fragment areas is observed at light levels of up to 100 lx (for both light sources). For the object of interest, only the most pronounced horizontal damage, which has approximately the same shape, direction, and area, is recognised in low light. As the light level increases, the scatter of areas also increases and then remains approximately the same for all higher light levels.
Figure 15 contains scatter plots showing the orientation (tilt angle) of the damage fragments found. In the diagram, each marker represents a recognised piece of damage. Three groups of damage can be distinguished, whose slope relative to the x axis is close to 0 rad (zone 1) and $\pm\pi/2$ rad (zones 2 and 3). This corresponds to the visually recognisable directions of scratches on the test specimen, which contains mainly horizontal and vertical scratches. Angles $+\pi/2$ and $-\pi/2$ correspond to the same orientation, given that damage is not represented by a direction vector. Note that the x axis is vertical and the y axis is horizontal in Figure 11 and Figure 12.

With the incandescent lamp, only defects of the same orientation were recognised; accordingly, the scatter plot became even more compact (Figure 15b). Defects of different orientations could be found only at the highest light level (~270 lx).
To assess the general influence of the illuminance level on defect recognition, the calculated results were compared with the data marked by the expert. The Dice similarity coefficient (13) was used as a metric.
Figure 16 shows the dependence of the $DSC$ coefficient on the illuminance level applied to the test surface under the LED source. The best result ($DSC = 0.93$) was obtained at 196 lx. A lower $DSC$ at lower light levels means that some defects are still not recognised. $DSC$ decreases at high light levels because surface formations become more visible and the recognised fragments become wider. In addition, small fragments appear in high light that are erroneously identified as damage.

5. Conclusions

A new software–hardware-based method for evaluating the effect of illumination on the efficacy of recognising metal strip defects has been created, which includes the specifically developed laboratory setup and artificial intelligence algorithms. The proposed method, combined with digital imaging and computer techniques, allows for testing the parameters of technical control systems intended to detect metal rolling defects and evaluate their performance.
The method is based on the statistical estimation of quantitative defect parameters. Defect area and orientation were used as descriptive quantitative parameters for the investigated surface.
The influence of variable surface illumination on the variability of the quantitative parameters (area and orientation) of recognised damage was investigated for the first time using the proposed method. The optimal lighting range for two light sources (LED unit and incandescent lamp) was found.
A specimen subjected to variable illumination from two different light sources was used as an example to illustrate that changing the type of illumination can significantly affect the recognition results. Regardless of the light source, changing the average illumination in the lower range (which may differ slightly for various light sources) greatly impacts the recognition result. A further increase in illumination does not significantly affect the quantitative parameters of the recognition results but slightly increases their scatter.

The best result for the investigated steel sheet defects ($DSC = 0.93$) was obtained at an illuminance of ~200 lx. An illuminance of less than ~150 lx does not allow most defects to be recognised, whereas an illuminance larger than ~250 lx increases the number of small objects falsely recognised as defects.
The proposed method allows us to estimate the variance in quantitative parameters depending on illuminance and choose the optimal level of illuminance to detect defects with the highest accuracy. This will aid in adopting the defect recognition platform in actual industrial conditions and enhancing the quality of the investigated equipment.

Author Contributions

Supervision and Data validation: I.K. and P.M.; Data collection, Information analysis, Writing, and Editing: P.M., I.K., Y.O., V.M., O.S., D.B., H.K. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kumar, A.; Gupta, S. Real time DSP based identification of surface defects using content-based imaging technique. In Proceedings of the IEEE International Conference on Industrial Technology 2000 (IEEE Cat. No.00TH8482), Goa, India, 19–22 January 2000; Volume 1, pp. 113–118. [Google Scholar] [CrossRef]
  2. Brezinová, J.; Vináš, J.; Maruschak, P.; Guzanová, A.; Draganovská, D.; Vrabel’, M. Sustainable Renovation within Metallurgical Production; RAM: Lüdenscheid, Germany, 2017; 215p. [Google Scholar]
  3. Huang, L.-P.; Hsu, Q.-C.; Liu, B.-H.; Lin, C.-F.; Chen, C.-H. Light Source Modules for Defect Detection on Highly Reflective Metallic Surfaces. Metals 2023, 13, 861. [Google Scholar] [CrossRef]
  4. Liu, X.; Xu, K.; Zhou, D. Improvements for the Recognition Rate of Surface Defects of Aluminum Sheets. In Light Metals 2019; The Minerals, Metals & Materials Series; Chesonis, C., Ed.; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  5. Ma, Z.; Li, Y.; Huang, M.; Huang, Q.; Cheng, J.; Tang, S. Automated real-time detection of surface defects in manufacturing processes of aluminum alloy strip using a lightweight network architecture. J. Intell. Manuf. 2023, 34, 2431–2447. [Google Scholar] [CrossRef]
  6. Choi, D.C.; Jeon, Y.J.; Kim, S.H.; Moon, S.; Yun, J.P.; Kim, S.W. Detection of pinholes in steel slabs using gabor filter combination and morphological features. ISIJ Int. 2017, 57, 1045–1053. [Google Scholar] [CrossRef]
  7. Zhao, Y.J.; Yan, Y.H.; Song, K.C. Vision-based automatic detection of steel surface defects in the cold rolling process: Considering the influence of industrial liquids and surface textures. Int. J. Adv. Manuf. Technol. 2017, 90, 1665–1678. [Google Scholar] [CrossRef]
  8. Luo, Q.; Fang, X.; Su, J.; Zhou, J.; Zhou, B.; Yang, C.; Liu, L.; Gui, W.; Tian, L. Automated Visual Defect Classification for Flat Steel Surface: A Survey. IEEE Trans. Instrum. Meas. 2020, 69, 9329–9349. [Google Scholar] [CrossRef]
  9. Li, W.; Zhang, L.; Wu, C.; Cui, Z.; Niu, C. A new lightweight deep neural network for surface scratch detection. Int. J. Adv. Manuf. Technol. 2022, 123, 1999–2015. [Google Scholar] [CrossRef] [PubMed]
  10. Nieniewski, M. Morphological Detection and Extraction of Rail Surface Defects. IEEE Trans. Instrum. Meas. 2020, 69, 6870–6879. [Google Scholar] [CrossRef]
  11. Rashwan, H.A.; Mohamed, M.A.; Garcia, M.A.; Mertsching, B.D. Illumination robust optical flow model based on histogram of oriented gradients. In German Conference on Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2013; pp. 354–363. [Google Scholar]
  12. Li, Y.; Yu, F. CDMY: A Lightweight Object Detection Model Based on Coordinate Attention. In Proceedings of the IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 17–19 June 2022; pp. 1258–1263. [Google Scholar] [CrossRef]
  13. Song, G.; Song, K.; Yan, Y. Saliency detection for strip steel surface defects using multiple constraints and improved texture features. Opt. Lasers Eng. 2020, 128, 106000. [Google Scholar] [CrossRef]
  14. Liu, Y.; Zhang, C.; Dong, X. A survey of real-time surface defect inspection methods based on deep learning. Artif. Intell. Rev. 2023, 56, 12131–12170. [Google Scholar] [CrossRef]
  15. Li, D.; Ge, S.; Zhao, K.; Cheng, X. A Shallow Neural Network for Recognition of Strip Steel Surface Defects Based on Attention Mechanism. ISIJ Int. 2023, 63, 525–533. [Google Scholar] [CrossRef]
  16. Prunella, M.; Scardigno, R.M.; Buongiorno, D.; Brunetti, A.; Longo, N.; Carli, R.; Dotoli, M.; Bevilacqua, V. Deep Learning for Automatic Vision-Based Recognition of Industrial Surface Defects: A Survey. IEEE Access 2023, 11, 43370–43423. [Google Scholar] [CrossRef]
  17. Sun, D.; Roth, S.; Black, M.J. A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. Int. J. Comput. Vis. 2014, 106, 115–137. [Google Scholar] [CrossRef]
  18. Evstafev, O.; Shavetov, S. Surface Defect Detection and Recognition Based on CNN. In Proceedings of the 8th International Conference on Control, Decision and Information Technologies (CoDIT), Istanbul, Turkey, 17–20 May 2022; pp. 1518–1523. [Google Scholar] [CrossRef]
  19. Qiu, Y.; Tang, L.; Li, B.; Niu, S.; Niu, T. Uneven Illumination Surface Defects Inspection Based on Saliency Detection and Intrinsic Image Decomposition. IEEE Access 2020, 8, 190663–190676. [Google Scholar] [CrossRef]
  20. Wu, H.; Liu, Y.; Gao, W.; Xu, X. Uneven illumination surface defects inspection based on convolutional neural network. arXiv 2023, arXiv:1905.06683. [Google Scholar] [CrossRef]
  21. Qiu, Y.; Niu, S.; Niu, T.; Li, W.; Li, B. Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection. Symmetry 2022, 14, 1473. [Google Scholar] [CrossRef]
  22. Konovalenko, I.; Maruschak, P.; Kozbur, H.; Brezinová, J.; Brezina, J.; Nazarevich, B.; Shkira, Y. Influence of Uneven Lighting on Quantitative Indicators of Surface Defects. Machines 2022, 10, 194. [Google Scholar] [CrossRef]
  23. Konovalenko, I.; Maruschak, P.; Kozbur, H.; Brezinová, J.; Brezina, J.; Guzanová, A. Defectoscopic and Geometric Features of Defects That Occur in Sheet Metal and Their Description Based on Statistical Analysis. Metals 2021, 11, 1851. [Google Scholar] [CrossRef]
  24. Konovalenko, I.; Maruschak, P.; Brezinová, J.; Prentkovskis, O.; Brezina, J. Research of U-Net-Based CNN Architectures for Metal Surface Defect Detection. Machines 2022, 10, 327. [Google Scholar] [CrossRef]
  25. Singh, S.A.; Desai, K.A. Automated surface defect detection framework using machine vision and convolutional neural networks. J. Intell. Manuf. 2023, 34, 1995–2011. [Google Scholar] [CrossRef]
  26. Ding, H.; Xia, B. YOLOv5s-DNF: A lighter and real-time method for detecting surface defects in steel. In Proceedings of the 4th International Conference on Computer Vision, Image and Deep Learning (CVIDL), Zhuhai, China, 12–14 May 2023; pp. 564–569. [Google Scholar]
  27. Gao, Y.; Gao, L.; Li, X. A hierarchical training-convolutional neural network with feature alignment for steel surface defect recognition. Robot. Comput.-Integr. Manuf. 2023, 81, 102507. [Google Scholar] [CrossRef]
  28. Belyakova, I.; Piscio, V.; Maruschak, P.; Shovkun, O.; Medvid, V.; Markovych, M. Operation of Electronic Devices for Controlling Led Light Sources When the Environment Temperature Changes. Appl. Syst. Innov. 2023, 6, 57. [Google Scholar] [CrossRef]
  29. Simons, R.H.; Bean, A.R. Lighting Engineering Applied Calculations, 1st ed.; Routledge: London, UK, 2020; 536p. [Google Scholar]
  30. Lindsey, J.L. Applied Illumination Engineering; The Fairmont Press, Inc.: Lilburn, GA, USA, 1997; 516p. [Google Scholar]
  31. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; MICCAI 2015, Part III, LNCS 9351; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  32. Konovalenko, I.; Hutsaylyuk, V.; Maruschak, P. Classification of surface defects of rolled metal using deep neural network ResNet50. In Proceedings of the 13th International Conference on Intelligent Technologies in Logistics and Mechatronics Systems (ITELMS 2020), Panevezys, Lithuania, 1 October 2020; pp. 41–48. [Google Scholar]
  33. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  34. Kaggle Severstal: Steel Defect Detection. Can You Detect and Classify Defects in Steel? 2019. Available online: https://www.kaggle.com/c/severstal-steel-defect-detection (accessed on 4 December 2023).
  35. Kaggle: SD-Saliency-900. Saliency Detection for Strip Steel Surface Defects. 2020. Available online: https://www.kaggle.com/datasets/alex000kim/sdsaliency900 (accessed on 4 December 2023).
  36. Bengio, Y.; Boulanger-Lewandowski, N.; Pascanu, R. Advances in Optimizing Recurrent Networks. arXiv 2012, arXiv:1212.0901v2. [Google Scholar]
  37. Liu, M.; Senin, N.; Su, R.; Leach, R. Measurement of laser powder bed fusion surfaces with light scattering and unsupervised machine learning. Meas. Sci. Technol. 2022, 33, 074006. [Google Scholar] [CrossRef]
Figure 1. Design of the experimental setup. KAMERA—lightproof box; W—transparent window; F—photo camera; EL—light source; ES—specimen; BL1—photosensitive element; LUXMETER—luxmeter; V1—voltmeter; T1—voltage regulator.
Figure 2. General view of the experimental setup. KAMERA—lightproof box; F—photo camera; EL1—incandescent lamp, EL2—LED module; LUXMETER—luxmeter; V1—voltmeter; T1—voltage regulator.
Figure 3. Specimen image obtained with the Canon EOS 1300D camera: 1—the image area for which the illumination was calculated; O0—the starting point of reference of conventional coordinates.
Figure 4. Dependence of the light flux on the consumed voltage for the LED source (a) and for the incandescent lamp (b).
Figure 5. The light intensity curve obtained for the LED source at a consumed voltage of 12 V (a) and for the incandescent lamp at a consumed voltage of 220 V (b).
Figure 6. Geometric schemes for estimating the illuminance level of the specimen surface lit by the LED light source (a) and incandescent lamp (b).
Figure 7. Graph showing the dependence of $I_\alpha(\alpha)$, obtained experimentally (1) at a consumed voltage of 220 V and approximated by a sixth-order polynomial (2) with a relative maximum error of 1.17%.
Figure 8. Scatter of light over the surface, which fell into the digital image obtained when illuminated by the LED light source (a) and the incandescent lamp (b) at a consumed voltage of 13 V and 170 V, respectively.
Figure 9. The general architecture of the U-net neural network. Blue rectangles indicate multi-channel feature maps. The number of channels is presented above them. White rectangles indicate the feature maps that were copied. The feature map size is presented near the lower left edge. Arrows indicate the direction of operations.
Figure 10. Equivalent ellipse for the damage fragment and its inclination θ .
Figure 11. Digital images of the metal surface with scratch-like defects were obtained for the LED source at the average illumination of 324, 159, 64, and 37 lx, respectively (a); the result of defect recognition by a neural network (b).
Figure 12. Images of the metal surface with scratch-like defects were obtained for the incandescent lamp at the average illumination of 270, 149, 70, and 36 lx, respectively (a); the result of defect recognition by a neural network (b).
Figure 13. Dependence of the area recognised as damage on the illumination produced by the LED source (a) and the incandescent lamp (b).
Figure 14. Scatter plot (a,c) and box plot (b,d) of defect areas for LED (a,b) and incandescent lamp (c,d) sources.
Figure 15. Scatter plot showing the orientation of damage fragments recognised for LED (a) and incandescent lamp (b) sources.
Figure 16. Dependence of the $DSC$ metric on the average surface illumination for the LED light source.
Table 1. The results of calculating the illumination of the horizontal surface, which fell into the zone of the specimen image obtained when illuminated by the LED light source.

Supply voltage, V                                   | 13.0 | 12.5 | 12.0 | 11.5 | 11.0 | 10.5 | 10.0 | 9.5 | 9.0 | 8.5 | 8.0
Luminous flux, lm                                   | 106  | 102  | 90   | 81   | 72   | 64   | 52   | 43  | 32  | 21  | 12
Brightness, cd/m²                                   | 1721 | 1656 | 1461 | 1315 | 1169 | 1039 | 844  | 698 | 520 | 341 | 195
Maximum illumination in the image fragment area, lx | 470  | 453  | 399  | 359  | 320  | 284  | 231  | 191 | 142 | 93  | 53
Minimum illumination in the image fragment area, lx | 195  | 188  | 166  | 149  | 133  | 118  | 96   | 79  | 59  | 39  | 22
Average illumination in the image fragment area, lx | 324  | 312  | 275  | 248  | 220  | 196  | 159  | 131 | 98  | 64  | 37
Table 2. The results of calculating the illumination level of the horizontal surface, which fell into the zone of the specimen image obtained when illuminated by the incandescent lamp.

Supply voltage, V                                   | 170   | 164   | 155   | 147   | 135   | 122  | 105  | 88
Luminous flux, lm                                   | 382   | 330   | 261   | 211   | 153   | 99   | 51   | 29
Axial light intensity, cd                           | 29.17 | 25.20 | 19.93 | 16.11 | 11.68 | 7.56 | 3.89 | 2.21
Maximum illumination in the image fragment area, lx | 461   | 399   | 315   | 255   | 185   | 120  | 62   | 35
Minimum illumination in the image fragment area, lx | 140   | 121   | 95    | 77    | 56    | 36   | 19   | 11
Average illumination in the image fragment area, lx | 270   | 233   | 184   | 149   | 108   | 70   | 36   | 20