Article

Optical Imaging Deformation Inspection and Quality Level Determination of Multifocal Glasses

1 Department of Industrial Engineering and Management, Chaoyang University of Technology, Taichung 413310, Taiwan
2 Department of Civil, Architectural, and Environmental Engineering, The University of Texas at Austin, Austin, TX 78712-0273, USA
* Author to whom correspondence should be addressed.
Sensors 2023, 23(9), 4497; https://doi.org/10.3390/s23094497
Submission received: 20 March 2023 / Revised: 1 May 2023 / Accepted: 3 May 2023 / Published: 5 May 2023

Abstract

Multifocal glasses are a new type of lens that can accommodate both nearsighted and farsighted vision on the same lens. This property allows the glass to have various curvatures in distinct regions of the lens during the grinding process. However, when the curvature varies irregularly, the glass is prone to optical deformation during imaging. Most previous studies on imaging deformation focus on the deformation correction of optical lenses. Consequently, this research develops an automatic deformation defect detection system for multifocal glasses to replace professional assessors. To quantify the grade of deformation of curved multifocal glasses, we first digitally image a pattern of concentric circles through a test glass to generate a transmitted image of the glass. Second, we preprocess the image to enhance the clarity of the concentric circles’ appearance, and a centroid-radius model is used to represent the form variation of every circle in the processed image. Third, a slight deviation control scheme detects deviations of the centroid radii that indicate deformation defects, and a difference image showing the detected deformed regions is obtained by comparison with the norm pattern. Fourth, based on the deformation measures and occurrence locations in multifocal glasses, we build fuzzy membership functions and inference rules to quantify the deformation’s severity. Finally, a mixed model incorporating a network-based fuzzy inference system and a genetic algorithm is applied to determine a quality grade for the deformation severity of the detected defects. Testing outcomes show that the proposed methods attain a 94% accuracy rate for the quality levels of deformation severity, an 81% recall rate of deformation defects, and an 11% false positive rate in multifocal glass detection. This research contributes solutions to the problems of imaging deformation inspection and provides a computer-aided system for determining quality levels that meets the demands of inspection and quality control.

1. Introduction

Curved components can often meet a wide range of structural requirements because they allow greater geometric freedom in design, and they can establish the general look and feel of the design [1]. Components with curved surfaces are common in engineering practice, such as in automotive, aerospace, and optics applications. A lens is a piece of glass, plastic, or other transparent material that is curved on one or both sides; these curves bend the light passing through it. Such optical devices converge and diverge light beams through refraction to produce optical images. However, inspecting components with curved surfaces introduces complications that reduce the sensitivity of defect detection and increase the chance of false alarms [2,3].
Multifocal glasses are a new type of lens that can accommodate both nearsighted and farsighted vision on the same lens. This property allows the glass to have various curvatures in distinct regions of the lens during the grinding process. However, the disadvantage of multifocal glasses is that when the curvature changes abnormally, the glasses are prone to optical deformation during imaging. Because the glasses are worn directly in front of the user’s eyes, deformation of the glass causes imaging errors, which can disturb or even endanger the user’s daily activities. For example, dizziness may occur when going downstairs, or the user may misjudge the position of steps, which can be hazardous. Figure 1 shows diagrammatic sketches of the imaging deformations of stairs viewed at close range through defective and normal multifocal glasses, respectively.
Progressive multifocal glasses allow people with myopia and presbyopia to easily and effectively improve their vision [4,5]. A progressive multifocal glass has near-vision, far-vision, progressive, and blind areas, as shown in Figure 2. The advantage of wearing progressive multifocal glasses is that when the wearer needs to see distant objects (scenery, buildings, etc.), the upper half of the lens can be used; when the wearer needs to see nearby objects (a mobile phone, a book, etc.), the lower half of the lens can be used; the middle region is the progressive area used for adjustment. The disadvantage of this lens is that there are blind areas in the lower right and lower left. Frequent deformations in the blind areas also need attention, since these deformations directly influence the imaging quality of objects viewed by users.
Although the blind areas are usually no larger than the other areas, their use is important and requires special inspection. At present, the degree of deformation of the blind area is inspected by inspectors operating manual instruments. Since imaging deformation has no regular form or clear borders, it is frequently difficult to identify and quantify, particularly on curved multifocal glasses. Furthermore, outwardly curved glasses tend to hinder the identification of deformation defects in multifocal glasses due to their high transmittance and reflectivity. This study develops an optical inspection system to quickly inspect and classify deformation defects in multifocal glasses to replace professional assessors.
The rest of the article is organized as follows. First, we review the literature on current techniques used by visual inspection systems for transparent products. Second, we describe the proposed image processing procedures to detect imaging deformation defects and determine the quality level of multifocal glasses. Third, we conduct tests and assess the performance of the suggested approach and traditional techniques. Finally, we conclude with the contributions and indicate further research directions.

2. Literature Review

Automated visual inspection (AVI) is a crucial step in quality assessment for production processes, as it assures high quality and enhances productivity by rigorously examining and evaluating all products in an industrial process [6,7]. AVI systems integrate computer vision and machine learning techniques and have been widely applied in various industries [8]. To improve the quality of glass products, many researchers have developed automated optical inspection devices using advanced image processing methods to detect the shape and surface of glass products and analyze their optical properties. These studies focus on detecting surface defects in industrial precision glass-related products, for example: applying explicit band Gaussian filtering in the discrete cosine domain to detect appearance flaws of capacitive touch screens with directional textures [9], proposing an optical detection method for aspheric glasses in semiconductor sensor modules [10], combining the Hilbert–Huang transform with a random forest model to locate flaw positions on the front or back side of touch screens [11], and using a convex hull algorithm after Fourier filtering to implement a car mirror inspection structure [12]. These vision inspection systems mainly target surface defects of glass-related industrial products.
Image deformation caused by perspective requires image correction before further image processing [13]. In previous studies on deformation detection and correction methods, Mantel et al. [14] proposed a technique for identifying perspective deformation in electroluminescence images of photovoltaic panels, and Cutolo et al. [15] presented fast methods for calibrating see-through head-mounted displays using a calibration camera. Most studies on perspective-related deformations have focused on deformation correction for optical glass [16,17].
Transmission deformation refers to the degradation of the image of an observed object caused by a refractive medium. Previous studies on detecting deformation through transmission in industrial components include the following: Dixon et al. [18] designed a system to measure optical deformation in aircraft transparencies using digital imaging and a decision tree-based classifier; Youngquist et al. [19] introduced a new method of interpreting transmitted deformations that enables phase-shifting interferometers to estimate deformations in large optical windows; Chiu et al. [20] constructed an optical system based on small drift control charts to identify deformation defects in curved car mirrors; Gerton et al. [21] analytically investigated the deformation patterns of Ronchi grids to assess the impact of deformation on eyewear products; and Lin et al. [22] employed a Hough transform voting scheme to inspect deformation defects in see-through glass products.
Glasses are wearable products that are closely involved in human daily activities and require strict testing and verification. If the eyeglass lenses have excessive defects, the user’s safety will be compromised. Le et al. [23] performed a symmetrized energy analysis on different color channels to evaluate the coating quality of eyeglasses. Yao et al. [24] employed low-angle LED (light-emitting diode) illumination and image normalization techniques for computer vision-based lens categorization to identify glass surface defects. Karangwa et al. [25] developed a visual inspection platform integrating deep learning models and semantic segmentation to detect and classify the visual defects of optical glass. Lin et al. [26] designed an optical inspection system based on computer vision for various optical components, such as camera lenses, glasses, and other optical devices. Lin [27] proposed an adaptive vision-based method, combining wavelet feature extraction and support vector machine classification, to classify lens images and determine the grade of eyeglasses.
Currently, most optical inspection systems for transparent glass products mainly detect surface defects and do not identify imaging deformation defects. Curved glass surfaces are highly transmissive and reflective, so imaging deformation defects on them are difficult to detect accurately [20]. Most related studies on perspective deformation focus on the deformation correction of optical glasses, and few works have applied optical inspection systems to detect imaging deformation defects in eyeglasses. Therefore, we propose an optical inspection system based on a slight deviation control scheme to detect imaging deformation defects in multifocal glasses. With proper parameter settings and robustness analysis, the method can recognize not only severe deformation defects but also slight ones.

3. Research Method

This study presents an optical inspection system that captures images of a known norm pattern, employs a slight deviation control scheme to check for deformation defects, and uses a mixed intelligent model incorporating a genetic algorithm and a network-based fuzzy inference system to decide the grade of deformation severity of multifocal glasses. To measure the grade of deformation of a multifocal glass, we first digitally image a pattern of concentric circles through a test glass to generate a transmitted image that reflects the glass deformation. This deformation image is examined to determine whether deformation is present and where the defects are located. Second, the captured image is preprocessed to enhance the clarity of the concentric circles’ appearance, and a centroid-radius model is adopted to represent the form variation of every concentric circle in the processed image. Third, by looking for small changes in the characteristic distance deviations through a slight deviation control scheme, a difference image showing the detected deformation defects is obtained. Fourth, according to the deformation measurements and locations obtained in the training phase, fuzzy membership functions and inference rule sets for deformation are established. Finally, a mixed model incorporating a genetic algorithm and a fuzzy inference model is adopted to judge the grade of deformation severity of the detected defects. Figure 3 shows the workflow of the stages of the proposed approach.

3.1. Image Capture and Image Preprocessing

The adopted test samples are 6.0 mm thick and 48.4 mm in diameter and are randomly drawn from the production line of a multifocal glass producer. To capture digital images of the norm pattern transmitted through the test samples and thus create imaging deformation maps of the samples, this work uses an image acquisition device with a concentric circular pattern. Figure 4 illustrates the arrangement of the image acquisition apparatus used to capture the test images of multifocal glasses. The test sample is inserted horizontally into a custom-made fixture in front of the norm pattern. A norm pattern of concentric circles is placed on the base of the stand. A mounted camera captures images of the concentric circular pattern as seen through the test glass. To capture a digital image of the norm pattern with appropriate brightness, controlling the light source of the surroundings in which the image is acquired is also important.
Figure 5a,b shows two images captured from transmission imaging of the concentric circular pattern through a normal multifocal glass and a defective glass, respectively. The flawed image shows noticeable deformation in the upper right area. The acquired images are preprocessed in several stages to enhance the clarity of the appearance of objects seen through the light-transmitting glasses. To quantify the degree of deformation in the acquired pattern images, Figure 5c depicts the binarized and thinned image of the defective sample, obtained by applying the Otsu method [28] and a thinning algorithm [29] to perform segmentation and thinning sequentially on the concentric circular pattern. With these two methods, most concentric circles are separated from the background and thinned into binary, one-pixel-wide curves. The results show that moderately deformed defects on transparent glass surfaces are correctly segmented in the binary images, regardless of small differences in deformation.
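The preprocessing described above can be sketched in a few lines. The following is a minimal illustration, assuming scikit-image is available and using a hypothetical file name; the authors' actual implementation is in MATLAB, so this is only an equivalent outline of Otsu segmentation [28] followed by thinning [29].

```python
import numpy as np
from skimage import io, filters
from skimage.morphology import skeletonize

# Load the captured 256 x 256 gray-level image (file name is hypothetical).
gray = io.imread("test_glass.png", as_gray=True)

# Otsu's method selects the global threshold that separates the dark
# concentric circles from the brighter background.
t = filters.threshold_otsu(gray)
binary = gray < t            # assumes the circles are darker than the background

# Skeletonization thins every circle to a one-pixel-wide curve.
thin = skeletonize(binary)
print("edge pixels:", int(thin.sum()))
```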

3.2. Feature Representation

Using raw pixel coordinates as image features causes problems: when the image is translated, scaled, or rotated, the coordinates change and the judgment result becomes erroneous. Hence, the shape should be described by geometric properties. We use the centroid-radius model [30] to represent the geometric properties of every circle in the image by calculating the distances from the edge points to the centroid. The coordinates of the edge points are thus transformed into vectors of distance features using the Euclidean metric. This centroid-radius representation is invariant to translation and rotation, and it becomes scale-invariant after the normalization described below.
The norm pattern used in this study consists of eight concentric circles. The centroid radius r_s^u is the Euclidean distance between the centroid O(x_0, y_0) and the s-th boundary point (x_{s,u}, y_{s,u}) of the u-th circle in the concentric circular pattern:
$$ r_s^u = \sqrt{(x_{s,u} - x_0)^2 + (y_{s,u} - y_0)^2}, \quad s = 1, 2, 3, \ldots $$
Collecting the centroid radii of the u-th concentric circle yields a distance vector R_u, expressed as follows:
$$ R_u = \{ r_1^u, r_2^u, r_3^u, \ldots, r_s^u, \ldots \}, \quad u = 1, 2, 3, \ldots, 8 $$
To achieve scale invariance, we normalize the distance vector R_u to the range [0, 1] by dividing each value by the maximum value of the vector, obtaining a normalized distance vector Q_u:
$$ Q_u = \{ q_1^u, q_2^u, q_3^u, \ldots, q_s^u, \ldots \}, \quad \text{where } q_s^u = r_s^u / \max_s(r_s^u) $$
After calculating the centroid radii of the u-th circle and normalizing them into the vector Q_u, all edge points and their Euclidean distances can be plotted, as shown in Figure 6a. Figure 6b indicates that the larger the normalized distance values, the farther the corresponding edge points are from the centroid, and the more pronounced the potential deformation in those regions of the concentric circle.
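As an illustration of Equations (1)–(3), the sketch below computes the normalized centroid-radius vector of one circle, assuming its edge points have already been extracted from the thinned image as an (N, 2) array of (x, y) coordinates; the function name is ours, not the authors'.

```python
import numpy as np

def normalized_centroid_radii(points):
    """Return the normalized distance vector Q_u for one circle (Eqs. (1)-(3))."""
    centroid = points.mean(axis=0)                            # O(x0, y0)
    radii = np.sqrt(((points - centroid) ** 2).sum(axis=1))   # r_s^u, Eq. (1)
    return radii / radii.max()                                # q_s^u, Eq. (3)

# Toy example: a perfect unit circle sampled at 360 points, so every q_s^u is ~1.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
q = normalized_centroid_radii(pts)
print(round(q.min(), 4), round(q.max(), 4))
```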

3.3. Deformation Detection by a Slight Deviation Control Scheme

The distance feature vectors of all complete concentric circles in a test image are compared with those of a defect-free image, and the distance deviations of the corresponding edge points are measured to locate potential deformations in the test image. To detect slight changes in these distance deviations, we adopt a slight deviation control scheme, the exponentially weighted moving average (EWMA) scheme, which is widely applied in statistical process control [31,32]. We apply the EWMA scheme to find slight changes in distance deviations and thereby detect deformation defects.
The EWMA scheme is also a good option to detect slight drifts [33,34]. The exponentially weighted moving average Z s with the s-th sample point is defined as:
$$ Z_s = \lambda q_s + (1 - \lambda) Z_{s-1} $$
where the initial value of Z_s is the target value Z_0 = μ_0, and λ is a constant weight in the range 0 < λ ≤ 1. Values of λ between 0.05 and 0.25 are suitable for detecting slight deviations in practical applications; a general rule is to adopt smaller λ values to detect smaller shifts. The upper control limit (UCL) and lower control limit (LCL) of the EWMA scheme are expressed as [33]:
$$ UCL_s = \mu_0 + L\sigma \sqrt{\frac{\lambda}{2 - \lambda}\left[ 1 - (1 - \lambda)^{2s} \right]} $$
$$ LCL_s = \mu_0 - L\sigma \sqrt{\frac{\lambda}{2 - \lambda}\left[ 1 - (1 - \lambda)^{2s} \right]} $$
The design parameters of the chart are L, the multiple of the standard deviation σ used in the control limits, and the constant weight λ. The detection capability of the EWMA scheme is roughly comparable to that of the CUSUM scheme, and in some respects it is simpler to set up and operate [35,36]. Figure 7a shows the edge points of the fifth circle in the concentric circular pattern of a test image processed by the EWMA scheme, and Figure 7b shows that when the EWMA chart is used to detect the deformation defect regions, the boundary point range of the defect position becomes clearer.
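A compact sketch of the EWMA scheme of Equations (4)–(6) follows, assuming q is the normalized distance sequence of one circle and that the target mean μ_0 and standard deviation σ are estimated from defect-free training images; the parameter values λ = 0.1 and L = 3 are illustrative, not the values tuned in this study.

```python
import numpy as np

def ewma_out_of_control(q, mu0, sigma, lam=0.1, L=3.0):
    """Flag points whose EWMA statistic falls outside the control limits."""
    z = mu0                                  # Z_0 = mu0
    flags = np.zeros(len(q), dtype=bool)
    for s, qs in enumerate(q, start=1):
        z = lam * qs + (1.0 - lam) * z       # Eq. (4)
        half_width = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * s)))
        flags[s - 1] = abs(z - mu0) > half_width   # outside UCL_s or LCL_s, Eqs. (5)-(6)
    return flags

# Simulated circle: nearly constant normalized radii with a local deformation bump.
rng = np.random.default_rng(1)
q = 0.98 + rng.normal(0.0, 0.002, 200)
q[120:140] += 0.02
print(np.where(ewma_out_of_control(q, mu0=0.98, sigma=0.002))[0])
```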

3.4. Quality Level Determination of Deformation Severity by the Fuzzy Inference System

This research applies a fuzzy-related model to automatically detect changes in deformation severity [37]. The genetic algorithm-based adaptive neuro-fuzzy inference system (GA-based ANFIS) model combines genetic algorithms and adaptive-network-based fuzzy inference theory, consisting of FIS and back-propagation network (BPN). In this study, a GA-based ANFIS model is used to judge the quality grade of deformation in multifocal glasses. When the merits of these techniques are combined, the judgment accuracy of the detection system will be notably enhanced.
Comparing the detected deformation defect image with the concentric circular pattern image, Figure 8a shows the deformation defects labeled by red lines and the norm pattern labeled by white lines. In addition, each point on the detected defects is compared with the corresponding point in the concentric circular pattern. The Manhattan metric vector U, denoting the measurement of the deformation amount, is defined as follows:
$$ u_s = |x_{m,s} - x_{n,s}| + |y_{m,s} - y_{n,s}| $$
$$ U = \{ u_1, u_2, u_3, \ldots, u_s, \ldots \}, \quad s = 1, 2, 3, \ldots $$
where (x_{m,s}, y_{m,s}) are the coordinates of the s-th edge point in the test image, and (x_{n,s}, y_{n,s}) are the coordinates of the corresponding s-th edge point in the concentric circular pattern image.
Figure 8b shows a difference image labeled according to three different deformation severities. The image is divided into three regions according to three distinct degrees of allowance: the red-line portion is the low-allowance area, comprising the first and second circles, named zone A; the blue-line portion is the medium-allowance area, consisting of the third, fourth, and fifth circles, named zone B; and the green-line portion is the high-allowance area, composed of the remaining circles, named zone C. The individual deformation measurements of the three zones are fed into the model for fuzzy inference and grade judgment.
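A sketch of the zone-wise deformation measurement of Equations (7)–(8) is given below. It assumes the detected defect points and the corresponding norm-pattern points are supplied as paired coordinate arrays together with the index of the circle each point belongs to; summing the point-wise Manhattan distances per zone is one plausible aggregation, not necessarily the exact one used by the authors.

```python
import numpy as np

def zone_deformation(test_pts, norm_pts, circle_idx):
    """Return summed Manhattan deformation measures [U1, U2, U3] of zones A, B, C."""
    u = np.abs(np.asarray(test_pts) - np.asarray(norm_pts)).sum(axis=1)  # u_s, Eq. (7)
    totals = np.zeros(3)
    for us, c in zip(u, circle_idx):
        zone = 0 if c <= 2 else (1 if c <= 5 else 2)   # circles 1-2: A, 3-5: B, 6-8: C
        totals[zone] += us
    return totals

test_pts = [(10, 12), (40, 41), (80, 85)]
norm_pts = [(10, 10), (39, 40), (78, 80)]
print(zone_deformation(test_pts, norm_pts, circle_idx=[1, 4, 7]))   # -> [2. 2. 7.]
```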

3.4.1. Fuzzy Inference System of Deformation Levels

The main purpose of the FIS model is not only to transform the measured values into fuzzy membership functions but also to set up fuzzy inference rules and modes [38,39]. Its merit is that, even when the input is imprecise, an appropriate output value can be produced through the established inference rules and a defuzzification algorithm. In this work, the deformation measures of zones A, B, and C are employed as the input items to categorize the severity of glass deformation. When performing fuzzy inference, the feature values of the deformation variables must first be transformed into membership functions. We use Gaussian membership functions to define the ranges of feature values as follows:
$$ \mathrm{Gaussian}(u; \sigma, c) = e^{-\frac{1}{2}\left(\frac{u - c}{\sigma}\right)^2} $$
where σ is the standard deviation and c is the center point of the Gaussian membership function.
Table 1 summarizes the three features used as the input items of the FIS model, which are the deformation degrees of zones A, B, and C, respectively; the output item is the quality grade of deformation. In the boundary settings of the fuzzy sets of the three input measures, the deformation of zone A is set to two levels due to its low allowance, while the other inputs and the output have three levels each. The membership functions and fuzzy set definitions of the input items are shown in Table 2.
Once the fuzzy membership functions have been set up, the fuzzy rule base can be created based on the allowance for each deformation level of the multifocal glass. The closer a deformation defect occurs to the middle of the glass, the smaller the allowed tolerance and the more severe the rated deformation; the closer the deformation flaw region is to the glass boundary, the higher the allowed tolerance, and the deformation level is then categorized as slight. The fuzzy rules are created according to the empirical knowledge of experts. The three inputs U1, U2, and U3 are the deformation levels of zones A, B, and C, respectively, and the output is the deformation level. For example, if the deformation measure U1 of zone A is small (A1), the deformation measure U2 of zone B is small (B1), and the deformation measure U3 of zone C is small (C1), then the severity of the output Y is slight distortion (Y1). A fuzzy rule base containing eighteen rules is created in this experiment.
We use the TSK (Takagi–Sugeno–Kang) fuzzy model [39] as the inference engine; the model combines fuzzy rules expressed in IF-THEN format. The consequent of each rule is a linear combination of the input factors and a constant term, and the final result is a weighted average of the outputs of all rules. The TSK model mostly employs fuzzy rules to describe a non-linear system. The merits of this approach are rapid computation, good cooperation with adaptive optimization techniques, and continuous output values, which make it very suitable for numerical analysis. When the deformation degrees of the three zones are input into the FIS and evaluated by all the rules, the final output value is produced through the defuzzification procedure using the weighted average scheme.
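The sketch below illustrates the Gaussian membership function of Equation (9) and the TSK-style weighted-average inference described above, using zero-order (constant) rule consequents for brevity; the membership parameters and the three rules are illustrative only and are not the fitted values in Table 2 or the full eighteen-rule base.

```python
import numpy as np

def gaussian_mf(u, sigma, c):
    """Gaussian membership function, Eq. (9)."""
    return np.exp(-0.5 * ((u - c) / sigma) ** 2)

def tsk_infer(u1, u2, u3, rules):
    """Each rule: ((sA, cA), (sB, cB), (sC, cC), consequent). Output: weighted average."""
    weights, outputs = [], []
    for (sA, cA), (sB, cB), (sC, cC), y in rules:
        w = gaussian_mf(u1, sA, cA) * gaussian_mf(u2, sB, cB) * gaussian_mf(u3, sC, cC)
        weights.append(w)
        outputs.append(y)
    weights = np.array(weights)
    return float(weights @ np.array(outputs) / weights.sum())

# Three illustrative rules: consequent 1 = slight (Y1), 2 = average (Y2), 3 = severe (Y3).
rules = [((5, 0), (20, 0), (100, 0), 1.0),
         ((500, 800), (400, 900), (800, 1800), 2.0),
         ((500, 800), (400, 2400), (900, 3600), 3.0)]
print(round(tsk_infer(3.0, 15.0, 80.0, rules), 2))   # small deformations -> close to 1 (slight)
```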

3.4.2. Adaptive Neuro-Fuzzy Inference System for Determining Quality Level of Deformations

The ANFIS model is a network-based fuzzy inference model that merges the concepts of fuzzy theory and neural networks [40,41]. Its parameters are tuned by repeatedly adjusting their values to minimize an error function. This work takes the deformation measurements of the three zones as inputs and establishes a five-layer ANFIS architecture, consisting of an input layer, a rule layer, a normalization layer, an inference layer, and an output layer, as shown in Figure 9. During the learning process, training is performed iteratively, and the parameters are corrected at each iteration based on the computed error values. When the training error converges to a minimum or the preset maximum number of iterations is reached, training is terminated, yielding a network-based FIS that is better tuned than the initial parameter setting.

3.4.3. Genetic Algorithm (GA)-Based Adaptive Neuro-Fuzzy Inference System for Determining Quality Level of Deformations

The genetic algorithm mimics the reproduction and inheritance processes of organisms by means of chromosome crossover and mutation. First, the initial population of the GA is established using the membership function parameters of the deformation measures, and the fitness value of each parameter combination is evaluated. The parameters of every combination are then uniformly crossed over and mutated, producing more diverse parameter values. The merit of this algorithm is that the mutation operation allows the search to escape a local optimum and move toward a globally optimal solution.
The created fuzzy inference system consists of the membership functions of the three deformation zones, the rule set, and the inference module. The GA-based ANFIS method optimizes this system in two stages [42,43]. The first stage applies ANFIS to calculate the deviations between predicted values and true values and optimizes the solution using the gradient descent scheme; however, gradient descent can only find a local optimum. The second stage applies the GA to assess the fitness values of parameter combinations, select better parameters, share information through crossover, and finally mutate to widen the range of feasible parameters. The goal of the second stage is to upgrade the previously obtained local optimum to the global optimum.
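As a rough illustration of the GA stage, the sketch below runs a generic real-coded genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation over a flat parameter vector; the encoding, operators, and the toy fitness function (which stands in for the ANFIS training error) are our assumptions, not the exact procedure of the cited GA-based ANFIS [42,43].

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_search(evaluate_error, dim, pop_size=30, generations=50,
                   crossover_rate=0.8, mutation_rate=0.1):
    """Return the parameter vector with the lowest error found by the GA."""
    pop = rng.uniform(0.0, 1.0, size=(pop_size, dim))       # initial population
    for _ in range(generations):
        errors = np.array([evaluate_error(ind) for ind in pop])
        parents = pop[np.argsort(errors)[: pop_size // 2]]  # keep the better half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < crossover_rate          # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(dim) < mutation_rate         # Gaussian mutation
            children.append(child + mutate * rng.normal(0.0, 0.1, dim))
        pop = np.array(children)
    errors = np.array([evaluate_error(ind) for ind in pop])
    return pop[errors.argmin()]

# Toy fitness: distance to an arbitrary target vector stands in for the ANFIS error.
target = rng.uniform(0.0, 1.0, 6)
best = genetic_search(lambda p: float(np.abs(p - target).sum()), dim=6)
print(np.round(np.abs(best - target), 2))
```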

4. Experiments and Results

To confirm the capabilities of the proposed approach using concentric circular patterns, the performance of the recommended techniques is assessed on 550 sample images (350 training images and 200 test images) with different degrees of deformation. Each captured image has a size of 256 × 256 pixels, and each pixel contains 8 bits of gray scale. The deformation defect detection system is implemented in MATLAB (R2013) on a desktop computer (Intel Core i5-8250U, 1.60 GHz, 32 GB RAM). Figure 10 shows the user interface of the developed deformation defect detection system, presenting all the processing steps applied to the concentric circular pattern in a multifocal glass.
In this study, experimentally detected images are compared with manually labeled images for correctness. For detecting deformation defects, recall, precision, and accuracy are adopted as performance evaluation metrics for the proposed models; higher values of these indices indicate better inspection performance. The recall rate is the area of correctly identified true defects (true positives, TP) divided by the area of correctly identified true defects (TP) plus the area of true defects incorrectly labeled as non-defects (false negatives, FN); it is the fraction of all real defects that are correctly identified. The precision rate is the area of correctly identified true defects (TP) divided by the area of correctly identified true defects (TP) plus the area of non-defects incorrectly labeled as defects (false positives, FP); it is the fraction of detected defects that are real defects. The accuracy rate is the area of correctly identified true defects (TP) plus the area of correctly labeled true non-defects (TN), divided by the total area of a testing instance (TP + TN + FP + FN); it represents the proportion of correctly identified defect and non-defect regions over the total area of a testing image. If the dataset is imbalanced (the defect and non-defect classes occupy significantly different areas of the testing images), the accuracy rate is not a good metric [44,45].
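The three metrics defined above can be computed directly from the confusion counts. The sketch below assumes the detection result and the manually labeled ground truth are boolean masks of defective regions with the same shape.

```python
import numpy as np

def detection_metrics(pred, truth):
    """Return (recall, precision, accuracy) for boolean defect masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    recall = tp / (tp + fn)                       # fraction of real defects found
    precision = tp / (tp + fp)                    # fraction of detections that are real
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # overall agreement
    return recall, precision, accuracy

pred  = np.array([1, 1, 0, 0, 1, 0], dtype=bool)
truth = np.array([1, 0, 0, 0, 1, 1], dtype=bool)
print(detection_metrics(pred, truth))             # -> (0.667, 0.667, 0.667)
```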
For determining the deformation quality level, the accuracy rate is redefined with individual images, rather than individual deformation defects, as the basic unit. The deformations detected in an image are jointly judged into one of three categories (slight, average, and severe) by the GA-based ANFIS model, and each classification is checked for correctness. The accuracy rate is the number of test images assigned to the exact level class divided by the total number of test images.

4.1. Performance Assessment of Various Line Thicknesses in the Concentric Circle Patterns

The line thickness, in pixels, of each circle in the concentric norm pattern influences the detection efficiency of the suggested approach for deformation defects. Smaller deformation defects are more completely identified if an appropriate line thickness is chosen for the concentric norm pattern. We use a computer program to generate norm concentric patterns with different line thicknesses and then use these patterns together with the test samples to capture test images. We examine concentric circular patterns with line widths from 1 to 6 pixels using the suggested approach. Figure 11 shows the images acquired by the suggested method employing concentric circular patterns with five line widths and the results for a defective sample. We find that concentric circles with a thickness of one pixel are less sensitive to deformation defects, resulting in the lowest recall rate. On the other hand, concentric circles with a greater pixel thickness are more sensitive to deformation defects but lead to more false positive alarms. Table 3 shows that the concentric circular patterns with line widths of 2 and 3 pixels are the most appropriate, with a higher recall rate, a lower false positive rate, and better overall deformation detection performance.

4.2. Performance Assessment of Applying EWMA Slight Deviation Control Scheme

To evaluate the inspection performance for multifocal glass deformation defects, Table 4 summarizes the detection and quality level judgment outcomes of the approach suggested in this work. The EWMA slight deviation control scheme of the proposed method is evaluated against the outcomes of expert assessors. The average recall rate of deformation inspection across all tests performed by the EWMA scheme is 81.09%, and the precision rate of the EWMA scheme is higher at 89.06%. The proposed EWMA scheme thus has a high deformation recall rate and a low false positive rate. Figure 12 shows some outcomes of concentric imaging deformation inspection of the suggested method employing the EWMA slight deviation control scheme. The mean execution time for a 256 × 256 pixel test image is 0.2847 s with the EWMA scheme. Thus, the proposed EWMA scheme overcomes the difficulty of detecting deformation defects in multifocal glasses and is able to correctly distinguish slight deformation defects from normal areas.
To assess the performance of classifying deformation defect severity in multifocal glasses, three classification models, BPN [46], ANFIS, and GA-based ANFIS, are further evaluated against the results of expert assessors. Table 4 shows that the accuracy rate of the deformation grade obtained by the GA-based ANFIS method is higher than those of the BPN and ANFIS models. Based on the above analysis, we find that the suggested mixed method incorporating the EWMA scheme and the GA-based ANFIS model is a superior slight-deviation detection and grade determination technique for imaging deformation detection and severity judgment in multifocal lenses.

4.3. Performance Assessment of Using Distinct Norm Patterns in Deformation Detection by the Suggested Method

Using the Hough transform-based method [22], two conventional norm patterns, a checkered pattern and a dot pattern, were applied to detect deformations so that the deformation defect detection results could be compared. Because the three norm patterns are used to create consistent deformation defect images with the same deformed locations and deformation degrees, the deformation detection differences among the three norm patterns can be contrasted more precisely. To show the impact on the consistent deformation images, Figure 13 presents some detection results of the Hough transform-based method, the proposed method, and an inspector for deformation defects using a checkered pattern, a dot pattern, and a concentric circular pattern, respectively. The Hough transform-based method with the checkered pattern makes many wrong identifications, both missed detections and false positives, and the same method with the dot pattern also yields some missed detections and false positives in the deformation defect inspection of multifocal glasses. The suggested approach with the concentric circular pattern detects most of the deformation defects and makes fewer false identifications. Table 5 summarizes the results of the imaging deformation inspection by the Hough transform-based methods and the suggested approach using the three norm patterns. The results demonstrate that the suggested approach with the concentric circular pattern outperforms the existing techniques using the checkered pattern and dot pattern in the deformation defect detection of multifocal glass images.

4.4. Robustness Tests on Changing the Brightness of the Image Illumination for Deformation Detection Results by the Suggested Approach

This study uses different parameter settings of the capture-related devices to take various test image sets and then selects the test image set with the best defect detection results; the parameter settings of this selected set are used as the standard for subsequent image acquisition. During image acquisition, the brightness of the acquired image is easily affected by the intensity of ambient light, which in turn affects the detection results. We investigate whether the detection results of the proposed method are susceptible to changes in image brightness to test the robustness of the method. In this study, the EWMA control scheme is applied to detect deformation defects, and the GA-based ANFIS method is adopted to categorize the severity of deformation defects. The performance evaluation results using different brightness variations are shown in Table 6, and the PR (precision–recall) chart [47,48] showing the detection performance trend is plotted in Figure 14. When the brightness of the image becomes brighter or darker by more than one standard deviation, the recall rate decreases significantly and the precision rate also drops significantly. Figure 15 shows the local deformation detection results of the proposed method when the image illumination brightness is systematically varied. Although some deformed regions are missed, most of them are detected, and the overall detection rate is still best at the original brightness. The proposed method is moderately sensitive to changes in light intensity. The results show that deformation defects in most multifocal glass images are accurately identified despite slight and moderate illumination changes.
The proposed concentric circular pattern outperforms the other two norm patterns in detecting slight to average deformations and small to medium area deformations. Therefore, the use of concentric circular patterns is more suitable for detecting deformation defects with less deformation in multifocal glasses. The main merit of this research method is to use the centroid radius descriptor to understand the deformation state of each edge point and use the EWMA control scheme to detect small deformations. In addition, using the GA-based ANFIS classifier model, the parameters that only converge in the local domain are extended to the global domain, which improves the classification effect.
Since the proposed method is mainly based on extracting features from geometric properties for deformation detection, it is moderately sensitive to changes in illumination intensity. If the brightness variation remains within (μ ± 1σ) overall, it has little effect on the detection results of this study. However, large changes in illumination can significantly increase grayscale variation, which in turn can significantly affect defect detection. To overcome this limitation of the proposed method, it is suggested to update the statistics (mean and standard deviation) of the intensity in the training samples when the illumination changes significantly.

5. Conclusions

This research presents a mixed approach constructed with computer vision and fuzzy theory techniques to detect deformation defects and decide the deformation level of multifocal glasses. It investigates the detection of imaging deformation defects in multifocal glass images and the classification of deformation severity. A vision system using concentric circular patterns for imaging is first developed to obtain test images showing imaging-deformed areas, and the circle edges in the images are binarized and thinned. If the distance from a boundary point to the centroid of a concentric circle goes beyond the upper or lower limit of the suggested EWMA scheme, a deformation defect exists in the area of that boundary point. Then, the detected defect image is compared with the norm pattern to measure the amount of deformation. By partitioning the probable defect locations into three zones with different allowances, we summarize the individual deformation measures of the three zones. Finally, a GA-based ANFIS model is suggested to categorize the severity of multifocal glass deformation. The suggested approach is effective and efficient in detecting deformation defects and classifying the severity of deformed regions in multifocal glass images. Testing outcomes show that the proposed methods attain a 94% accuracy rate for deformation severity quality grades, an 81% recall rate of deformation defects, and an 11% false positive rate in multifocal glass deformation detection. Further studies can extend the proposed method to the imaging deformation defect inspection of other curved glass-related products, for example, the distortion detection of automobile windshields and the deformation detection of automobile rearview mirrors.

Author Contributions

Conceptualization, H.-D.L., T.-H.L. and H.-C.W.; methodology, H.-D.L. and T.-H.L.; software, T.-H.L. and C.-H.L.; validation, H.-D.L., T.-H.L., C.-H.L. and H.-C.W.; formal analysis, H.-D.L., T.-H.L. and H.-C.W.; investigation, C.-H.L. and H.-C.W.; resources, H.-D.L. and H.-C.W.; data curation, T.-H.L. and C.-H.L.; writing—original draft preparation, H.-D.L., C.-H.L. and H.-C.W.; writing—review and editing, H.-D.L., C.-H.L. and H.-C.W.; visualization, T.-H.L. and C.-H.L.; supervision, H.-D.L. and H.-C.W.; project administration, H.-D.L. and H.-C.W.; funding acquisition, H.-D.L. and T.-H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council (R.O.C.), grant number MOST 104-2221-E-324-010.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Special thanks to the National Science and Technology Council (R.O.C.) for providing financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhou, W.; Shao, Z.; Yu, J.; Lin, J. Advances and Trends in Forming Curved Extrusion Profiles. Materials 2021, 14, 1603. [Google Scholar] [CrossRef] [PubMed]
  2. Lane, C.J. The inspection of curved components using flexible ultrasonic arrays and shape sensing fibres. Case Stud. Nondestruct. Test. Eval. 2014, 1, 13–18. [Google Scholar] [CrossRef]
  3. Ji, K.; Zhao, P.; Zhuo, C.; Chen, J.; Wang, X.; Gao, S.; Fu, J. Ultrasonic full-matrix imaging of curved-surface components. Mech. Syst. Signal Process. 2022, 181, 109522. [Google Scholar] [CrossRef]
  4. Jiang, W.; Bao, W.; Tang, Q.; Wang, H. A variational-difference numerical method for designing progressive-addition lenses. Comput.-Aided Des. 2014, 48, 17–27. [Google Scholar] [CrossRef]
  5. Loos, J.; Greiner, G.; Seidel, H.P. A variational approach to progressive lens design. Comput.-Aided Des. 1998, 30, 595–602. [Google Scholar] [CrossRef]
  6. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the art in defect detection based on machine vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  7. Ming, W.; Shen, F.; Li, X.; Zhang, Z.; Du, J.; Chen, Z.; Cao, Y. A comprehensive review of defect detection in 3C glass components. Measurement 2020, 158, 107722. [Google Scholar] [CrossRef]
  8. Chai, J.; Zeng, H.; Li, A.; Ngai, E.W. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
  9. Lin, H.-D.; Tsai, H.-H.; Lin, C.-H.; Chang, H.-T. Optical Panel Inspection Using Explicit Band Gaussian Filtering Methods in Discrete Cosine Domain. Sensors 2023, 23, 1737. [Google Scholar] [CrossRef]
  10. Kuo, C.-F.J.; Lo, W.-C.; Huang, Y.-R.; Tsai, H.-Y.; Lee, C.-L.; Wu, H.-C. Automated defect inspection system for CMOS image sensor with micro multi-layer non-spherical lens module. J. Manuf. Syst. 2017, 45, 248–259. [Google Scholar]
  11. Lin, H.-D.; Qiu, Z.-T.; Lin, C.-H. Incorporating Visual Defect Identification and Determination of Occurrence Side in Touch Panel Quality Inspection. IEEE Access 2022, 10, 90213–90228. [Google Scholar] [CrossRef]
  12. Chiu, Y.-S.P.; Lin, H.-D.; Cheng, H.-H. Optical inspection of appearance faults for auto mirrors using Fourier filtering and convex hull arithmetic. J. Appl. Res. Technol. 2021, 19, 279–293. [Google Scholar] [CrossRef]
  13. Santana-Cedrés, D.; Gomez, L.; Alemán-Flores, M.; Salgado, A.; Esclarín, J.; Mazorra, L.; Alvarez, L. Automatic correction of perspective and optical distortions. Comput. Vis. Image Underst. 2017, 161, 1–10. [Google Scholar] [CrossRef]
  14. Mantel, C.; Villebro, F.; Parikh, H.R.; Spataru, S.; Benatto, G.A.D.R.; Sera, D.; Poulsen, P.B.; Forchhammer, S. Method for Estimation and Correction of Perspective Distortion of Electroluminescence Images of Photovoltaic Panels. IEEE J. Photovolt. 2020, 10, 1797–1802. [Google Scholar] [CrossRef]
  15. Cutolo, F.; Fontana, U.; Cattari, N.; Ferrari, V. Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays. Appl. Sci. 2019, 10, 193. [Google Scholar] [CrossRef]
  16. Hou, Y.; Zhang, H.; Zhao, J.; He, J.; Qi, H.; Liu, Z.; Guo, B. Camera lens distortion evaluation and correction technique based on a colour CCD moiré method. Opt. Lasers Eng. 2018, 110, 211–219. [Google Scholar] [CrossRef]
  17. Liu, X.; Li, Z.; Zhong, K.; Chao, Y.; Miraldo, P.; Shi, Y. Generic distortion model for metrology under optical microscopes. Opt. Lasers Eng. 2018, 103, 119–126. [Google Scholar] [CrossRef]
  18. Dixon, M.; Glaubius, R.; Freeman, P.; Pless, R.; Gleason, M.P.; Thomas, M.M.; Smart, W.D. Measuring optical distortion in aircraft transparencies: A fully automated system for quantitative evaluation. Mach. Vis. Appl. 2010, 22, 791–804. [Google Scholar] [CrossRef]
  19. Youngquist, R.C.; Skow, M.; Nurge, M.A. Optical distortion evaluation in large area windows using interferometry. In Proceedings of the 14th International Symposium on Nondestructive Characterization of Materials, Marina Del Rey, CA, USA, 22–26 June 2015. [Google Scholar]
  20. Chiu, S.W.; Hsieh, K.-S.; Lin, H.-D. Effective mathematical schemes for measuring the surface distortions of curved mirrors with applications. Far East J. Math. Sci. 2018, 103, 483–502. [Google Scholar] [CrossRef]
  21. Gerton, K.M.; Novar, B.J.; Brockmeier, W.; Putnam, C. A Novel Method for Optical Distortion Quantification. Optom. Vis. Sci. 2019, 96, 117–123. [Google Scholar] [CrossRef]
  22. Lin, H.D.; Lo, Y.C.; Lin, C.H. Computer-aided transmitted deformation inspection system for see-through glass products. Int. J. Innov. Comput. Inf. Control 2022, 18, 1217–1234. [Google Scholar]
  23. Le, N.T.; Wang, J.-W.; Wang, C.-C.; Nguyen, T.N. Automatic Defect Inspection for Coated Eyeglass Based on Symmetrized Energy Analysis of Color Channels. Symmetry 2019, 11, 1518. [Google Scholar] [CrossRef]
  24. Yao, H.B.; Ping, J.; Ma, G.D.; Li, L.W.; Gu, J.N. The System Research on Automatic Defect Detection of Glasses. Appl. Mech. Mater. 2013, 437, 362–365. [Google Scholar] [CrossRef]
  25. Karangwa, J.; Kong, L.; Yi, D.; Zheng, J. Automatic optical inspection platform for real-time surface defects detection on plane optical components based on semantic segmentation. Appl. Opt. 2021, 60, 5496–5506. [Google Scholar] [CrossRef]
  26. Lin, Y.; Xiang, Y.; Lin, Y.; Yu, J. Defect detection system for optical element surface based on machine vision. In Proceedings of the 2019 IEEE 2nd International Conference on Information Systems and Computer Aided Education, Dalian, China, 28–30 September 2019; pp. 415–418. [Google Scholar]
  27. Lin, T.-K. An Adaptive Vision-Based Method for Automated Inspection in Manufacturing. Adv. Mech. Eng. 2014, 6, 616341. [Google Scholar] [CrossRef]
  28. Sezgin, M.; Sankur, B.L. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–156. [Google Scholar]
  29. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson: New York, NY, USA, 2018. [Google Scholar]
  30. Kong, X.; Luo, Q.; Zeng, G.; Lee, M.H. A new shape descriptor based on centroid–radii model and wavelet transform. Opt. Commun. 2007, 273, 362–366. [Google Scholar] [CrossRef]
  31. Montgomery, D.C. Statistical Quality Control: A Modern Introduction, 7th ed.; John Wiley & Sons Singapore Pte. Ltd.: Singapore, 2013. [Google Scholar]
  32. Yu, F.J.; Yang, Y.Y.; Wang, M.J.; Wu, Z. Using EWMA control schemes for monitoring wafer quality in negative binomial process. Microelectron. Reliab. 2011, 51, 400–405. [Google Scholar] [CrossRef]
  33. Lucas, J.M.; Saccucci, M.S. Exponentially Weighted Moving Average Control Schemes: Properties and Enhancements. Technometrics 1990, 32, 1–12. [Google Scholar] [CrossRef]
  34. Sukparungsee, S.; Areepong, Y.; Taboran, R. Exponentially weighted moving average—Moving average charts for monitoring the process mean. PLoS ONE 2020, 15, e0228208. [Google Scholar] [CrossRef]
  35. Vera do Carmo, C.; Lopes, L.F.D.; Souza, A.M. Comparative study of the performance of the CuSum and EWMA control charts. Comput. Ind. Eng. 2004, 46, 707–724. [Google Scholar]
  36. Khoo, M.B.; Teh, S.Y. A study on the effects of trends due to inertia on EWMA and CUSUM charts. J. Qual. Meas. Anal. 2009, 5, 73–80. [Google Scholar]
  37. Öztürk, Ş.; Akdemir, B. Fuzzy logic-based segmentation of manufacturing defects on reflective surfaces. Neural Comput. Appl. 2018, 29, 107–116. [Google Scholar] [CrossRef]
  38. Talpur, N.; Abdulkadir, S.J.; Alhussian, H.; Hasan, M.H.; Aziz, N.; Bamhdi, A. Deep Neuro-Fuzzy System application trends, challenges, and future perspectives: A systematic survey. Artif. Intell. Rev. 2022, 56, 865–913. [Google Scholar] [CrossRef] [PubMed]
  39. Takagi, T.; Sugeno, M. Fuzzy Identification of Systems and Its Applications to Modeling and Control. IEEE Trans. Syst. Man Cybern. 1985, 15, 116–132. [Google Scholar] [CrossRef]
  40. Jang, J.-S.R. ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  41. Walia, N.; Kumar, S.; Singh, H. A Survey on Applications of Adaptive Neuro Fuzzy Inference System. Int. J. Hybrid Inf. Technol. 2015, 8, 343–350. [Google Scholar] [CrossRef]
  42. Azadeh, A.; Saberi, M.; Anvari, M.; Azaron, A.; Mohammadi, M. An adaptive network based fuzzy inference system–genetic algorithm clustering ensemble algorithm for performance assessment and improvement of conventional power plants. Expert Syst. Appl. 2011, 38, 2224–2234. [Google Scholar] [CrossRef]
  43. Olayode, I.O.; Tartibu, L.K.; Alex, F.J. Comparative Study Analysis of ANFIS and ANFIS-GA Models on Flow of Vehicles at Road Intersections. Appl. Sci. 2023, 13, 744. [Google Scholar] [CrossRef]
  44. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation. J. Mach. Learn. Technol. 2011, 2, 37–63. [Google Scholar]
  45. Sofaer, H.R.; Hoeting, J.A.; Jarnevich, C.S. The area under the precision-recall curve as a performance metric for rare binary events. Methods Ecol. Evol. 2018, 10, 565–577. [Google Scholar] [CrossRef]
  46. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  47. Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The binormal assumption on precision-recall curves. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 4263–4266. [Google Scholar]
  48. Cook, J.; Ramadas, V. When to consult precision-recall curves. Stata J. Promot. Commun. Stat. Stata 2020, 20, 131–148. [Google Scholar] [CrossRef]
Figure 1. Diagrammatic sketches of the imaging deformation of the stairs at close range using multifocal glasses: (a) defective glasses, (b) normal glasses. (The range of the white frame is the field of view of the eyes wearing glasses, and the range of the red circle is the place where the imaging area is deformed).
Figure 2. Functional block diagram of a progressive multifocal glass.
Figure 3. The workflow of the stages of the proposed approach.
Figure 4. The proposed image acquisition system using the concentric circular pattern for image capture.
Figure 5. Two images captured from transmission imaging of the concentric circular pattern by a multifocal glass: (a) normal sample; (b) defective sample; (c) binarized and thinned image of the defective sample.
Figure 6. The distance values of edge points to the centroid in a concentric circle: (a) Euclidean distance diagram; (b) the edge points and corresponding normalized distance values.
Figure 7. The outputs of the fifth circle in the concentric circular pattern of a test image performed by the EWMA scheme: (a) edge points of the fifth circle; (b) EWMA control chart.
Figure 8. Two resulting deformation images: (a) a difference image labeled by detected deformations in red, and (b) a difference image labeled according to three distinct deformation severities (severe level in red, average level in blue, and slight level in green).
Figure 9. The structure diagram of the proposed ANFIS for determining deformation levels.
Figure 10. The implemented system with user interface design shows all the processes of the proposed method using the concentric circular pattern.
Figure 11. The images captured by the suggested method employing patterns of concentric circles with line widths of five kinds of pixel sizes and the results of a defect sample.
Figure 12. Some outcomes of concentric imaging deformation inspection by employing EWMA slight deviation control scheme.
Figure 13. Some results of the proposed method and inspector for imaging deformation inspection employing three conventional norm patterns.
Figure 14. A PR chart of imaging deformation inspection by the proposed method under different lighting conditions.
Figure 15. Some detection outcomes of imaging deformation inspection performed by the suggested approach for systematic changes in image lighting (the images with red frames are processed from the selected test image set).
Table 1. The input and output items of the suggested FIS model.

| | Inputs | | | Outputs |
|---|---|---|---|---|
| Features | U1: Deformation measure in zone A | U2: Deformation measure in zone B | U3: Deformation measure in zone C | Y: Distortion levels |
| Degrees | A1: Small; A2: Large | B1: Small; B2: Medium; B3: Large | C1: Small; C2: Medium; C3: Large | Y1: Slight; Y2: Average; Y3: Severe |
Table 2. The corresponding membership functions, fuzzy sets, and ranges of the input measures.

| Input Items | Membership Functions of Measures | Fuzzy Sets and Ranges of Measures |
|---|---|---|
| Deformation measure U1 in zone A | (Gaussian plots shown in the original table) | μ_A1(u1; 8.3, 4.8); μ_A2(u1; 1084, 765) |
| Deformation measure U2 in zone B | (Gaussian plots shown in the original table) | μ_B1(u2; 69, 17.6); μ_B2(u2; 385, 895.7); μ_B3(u2; 384, 2392) |
| Deformation measure U3 in zone C | (Gaussian plots shown in the original table) | μ_C1(u3; 287, 117.2); μ_C2(u3; 468.5, 1766); μ_C3(u3; 670, 3637) |
Table 3. Performance metrics for deformation defect detection on captured images by the suggested method employing patterns of concentric circles with line widths of six pixel sizes.

| Line Thicknesses | 1 Pixel | 2 Pixels | 3 Pixels | 4 Pixels | 5 Pixels | 6 Pixels |
|---|---|---|---|---|---|---|
| Recall (%) | 54.36 | 80.72 | 80.12 | 75.90 | 79.15 | 77.49 |
| Precision (%) | 94.93 | 96.02 | 96.02 | 90.47 | 95.32 | 91.06 |
Table 4. Performance metrics for detecting deformed regions and determining the quality levels of multifocal glasses by the proposed approach.

| Deformation Detection Technique | EWMA Control Scheme |
|---|---|
| Recall (%) | 81.09 |
| Precision (%) | 89.06 |
| Processing time (s) | 0.2847 |

| Quality Level Determination Models | BPN | ANFIS | GA-based ANFIS |
|---|---|---|---|
| Accuracy (%) | 70.00 | 70.67 | 94.00 |
Table 5. Performance metrics of imaging deformation detection by the suggested approach and the Hough transform-based methods [22] employing three norm patterns.

| Norm Patterns | Checkered Pattern (Hough transform-based [22]) | Dot Pattern (Hough transform-based [22]) | Concentric Circular Pattern (proposed) |
|---|---|---|---|
| Recall (%) | 33.24 | 58.20 | 77.03 |
| Precision (%) | 37.64 | 81.22 | 76.86 |
| Accuracy (%) | 94.70 | 98.94 | 99.47 |
Table 6. Performance metrics of imaging deformation inspection using the suggested approach for changing the brightness of the image illumination.

| Lighting Intervals | (μ − 3σ) | (μ − 2σ) | (μ − 1σ) | μ | (μ + 1σ) | (μ + 2σ) | (μ + 3σ) |
|---|---|---|---|---|---|---|---|
| Recall (%) | 70.46 | 77.38 | 80.35 | 89.81 | 82.54 | 71.93 | 64.49 |
| Precision (%) | 71.92 | 78.91 | 81.97 | 90.58 | 83.66 | 72.23 | 65.57 |
| Accuracy (%) | 99.89 | 99.92 | 99.92 | 99.96 | 99.93 | 99.89 | 99.87 |
