Article

EW-YOLOv7: A Lightweight and Effective Detection Model for Small Defects in Electrowetting Display

1 College of Information Science and Technology, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
2 Intelligent Agriculture Engineering Research Center, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
3 Guangzhou Key Laboratory of Agricultural Product Quality and Safety Traceability Information Technology, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Processes 2023, 11(7), 2037; https://doi.org/10.3390/pr11072037
Submission received: 19 May 2023 / Revised: 26 June 2023 / Accepted: 3 July 2023 / Published: 7 July 2023
(This article belongs to the Special Issue Processes in Electrical, Electronics and Information Engineering)

Abstract

In order to overcome the shortcomings of existing electrowetting display defect detection models in terms of computational complexity, structural complexity, detection speed, and detection accuracy, this article proposes an improved YOLOv7-based electrowetting display defect detection model. The model effectively optimizes the detection performance of display defects, especially small target defects, by integrating GhostNetV2 modules, Acmix attention mechanisms, and NGWD (Normalized Gaussian Wasserstein Distance) Loss. At the same time, it reduces the parameter size of the network model and improves the inference efficiency of the network. This article evaluates the performance of the improved model using a self-constructed electrowetting display defect dataset. The experimental results show that the proposed improved model achieves a mean average precision (mAP) of 89.5% and an average inference time of 35.9 ms. Compared to the original network, the number of parameters and computational costs are reduced by 19.2% and 64.3%, respectively. Compared with current state-of-the-art detection network models, the proposed EW-YOLOv7 exhibits superior performance in detecting electrowetting display defects. This model helps to solve the problem of defect detection in industrial production of electrowetting display and assists the research team in quickly identifying the causes and locations of defects.

1. Introduction

Electrowetting (EW) is a phenomenon in which an electric field is used to alter the contact angle of a droplet on a solid surface, enabling precise manipulation and regulation of the droplet. In the 1980s, Beni et al. [1] demonstrated the electrowetting effect using mercury droplets and coined the term “electrowetting,” initiating research in this field. In the 1990s, Sondag-Huethorst and Fokkink et al. [2,3] observed the electrowetting effect on sulfide-modified metal electrodes but were still limited by electrolysis. Berge et al. [4] proposed covering the metal electrode with an insulating layer to solve the electrolysis problem, leading to rapid development of electrowetting technology. Electrowetting technology offers several advantages, including fast response time, low power consumption, simple structure, and high integration [5,6,7], making it one of the future directions for display technology. In recent years, significant breakthroughs have been made in electrowetting display technology, achieving long-term stable video display and laying the foundation for industrial production. However, the production process is prone to various imperfections, and defects, such as burn-in, charge trapping, and pixel wall distortion, are frequently observed and documented in numerous experiments, compromising the display quality and economic value [8]. To improve the commercialization of electrowetting display technology, non-destructive testing methods, such as deep learning, must be used to accurately identify and classify defects in electrowetting devices. Statistical analysis should also be conducted to determine the causes and locations of defects and improve and repair device structure and production processes.
Along with the rapid development of electrowetting display technology, the display resolution of electrowetting devices is increasing year by year. This requires the defect detection network to have a strong ability to detect small defects while also meeting real-time requirements. A lightweight network model should therefore be used as the backbone of the defect detection model, which can improve detection speed while reducing the network's parameter count and computational complexity, thus lowering the cost of network deployment and operation. At present, there are few studies on electrowetting display defect detection, and most are concentrated in the traditional machine vision field. Liao [9] proposed an improved Otsu algorithm, optimized for the typically unimodal histograms of electrowetting images, which improved the segmentation of the background from electrowetting display defects. Xiong [10] proposed a histogram-gradient-weighted method that calculates the gradient value of each gray level in the histogram and weights the gray levels based on the gradient value to obtain a new histogram; this method effectively improves the precision, stability, and robustness of detecting electrowetting display defects. In the broader field of display defect detection, convolutional neural networks (CNNs) have been attracting researchers' attention. Chang et al. [11] used a convolutional neural network model to perform multi-class classification of micro-defects on Thin-Film Transistor Liquid Crystal Displays (TFT-LCDs), effectively identifying micro-defects on TFT-LCD panels; Çelik [12] compared RetinaNet [13], M2Det [14], and YOLOv3 [15] for detecting pixel-level defects and found that the RetinaNet-based architecture provided balanced results in terms of accuracy and time usage.
However, these studies did not specifically optimize and improve the characteristics of electrowetting display device defects, resulting in low detection accuracy, slow detection speed, and bulky models. They did not consider the requirements for network detection accuracy, real-time monitoring performance, and generalization performance in industrial applications, which may result in missed or false detections of display defects during production.
At present, there are two primary categories of detection schemes. The first is based on the equivalent capacitance method [16,17], which identifies defects such as electrical damage and non-ideal oil movement in pixel units by monitoring changes in equivalent capacitance values. However, this approach has several limitations, including high detection costs, slow detection speeds, and low accuracy in detecting small target defects. The second category is based on machine vision [18,19]. In research on electrowetting display device defect detection in this field, Chiang et al. successfully identified defects in electrowetting display devices using automatic optical detection and calculated the type of defect. Luo et al. proposed a low-cost drive and detection scheme for detecting defects in electrowetting display devices, successfully detecting multiple electrowetting defects. The method of detecting defects in electrowetting display devices based on machine vision technology has the advantages of low detection cost and fast detection speed, but it still has shortcomings in terms of generalization. The types of electrowetting display device defects that can be detected are limited and cannot be detected in real time.
With the continuous development of deep learning technology, defect detection based on deep learning has become one of the mainstream detection methods in industrial production. Deep learning has the characteristics of high accuracy, convenient deployment, and strong robustness. In terms of detection technology, target detection algorithms based on deep learning have been widely used. Common target detection algorithms include Faster R-CNN [20], YOLO [21], SSD [22], etc. These algorithms can effectively detect the location and type of electrowetting display defects in the image with a certain degree of accuracy and real-time performance. In addition, according to the different requirements of network performance in different detection scenarios, researchers have proposed improved target detection algorithms, such as Mask R-CNN [23] and Cascade R-CNN [24], which offer gains in detection accuracy, speed, and multi-scale detection. However, research on defect detection networks specifically optimized for electrowetting display defect detection is still in its infancy and has not achieved a balance between detection performance and network lightweighting. This article aims to construct a high-performance electrowetting display defect detection network, EW-YOLOv7 (Electrowetting-You Only Look Once Version 7), which is based on the YOLOv7 [25] detection network and makes targeted improvements for the high-precision, low-latency, and lightweight requirements of electrowetting display defect detection.

2. Research on Electrowetting Display Defect Detection Algorithm

The model construction process for this paper is shown in Figure 1. We annotated 5040 images to construct a common electrowetting display device defect dataset. The dataset was divided into training, testing, and validation sets in an 8:1:1 ratio. We then used the training and testing sets to train different electrowetting defect detection models and performed ablation experiments on the trained models using the validation set. After evaluating the experimental results, we selected the detection model with the strongest overall detection performance to complete the construction of the electrowetting display device defect detection model.
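The 8:1:1 split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, seed, and shuffling strategy are assumptions.

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle and split samples into train/test/validation subsets
    using the 8:1:1 ratio described in the paper."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]
    return train, test, val

# With the paper's 5040 images this yields 4032 / 504 / 504 samples.
train, test, val = split_dataset(range(5040))
print(len(train), len(test), len(val))  # 4032 504 504
```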

2.1. Target Detection Algorithm

YOLO series networks are high-performance single-stage object detection models renowned for their excellent detection performance.
This paper proposes an improved YOLO series network: EW-YOLOv7, specifically designed for display defect detection tasks in electrowetting device production, as shown in Figure 2. Based on the latest version of the YOLO series, YOLOv7, this network has been optimized from two aspects: first, to improve its detection capability for small targets in electrowetting display; and, second, to reduce hardware requirements for the detection network.
To achieve these two optimizations, the EW-YOLOv7 network adopts the following strategies: first, to improve the representation ability of the network model and reduce the interference of invalid targets on the detection model, we introduce the ACmix attention mechanism into the original network to enhance its ability to extract image features; second, to address the large parameter volume, complex computation, and slow detection speed of the original YOLOv7 network, we integrate the EW-GhostNetV2 backbone module into the original network backbone to reduce parameters and computation, thereby lowering hardware requirements; finally, we use EW-NGWD Loss, adapted for electrowetting display detection, as EW-YOLOv7's loss function. Because this loss function is insensitive to positional changes of electrowetting display defects, especially small target defects, the network's recognition performance is improved. Experimental results show that, compared with the original network, the improved EW-YOLOv7 detects electrowetting display defects in complex environments more effectively.

2.2. Introduction of Acmix Attention Mechanism

To improve the accuracy of the EW-YOLOv7 network in detecting electrowetting display defects, this paper introduces the Acmix attention mechanism into the improved EW-YOLOv7 model to enhance the network’s performance in detecting small target defects of electrowetting display.
The Acmix attention mechanism integrates self-attention and convolution in parallel. The self-attention branch computes the correlation between each element and all other elements in the feature map, capturing global context, while the convolution branch aggregates features within a local neighborhood. Combining the two effectively improves the network's performance in detecting small target defects of electrowetting display.
Compared with a pure self-attention or pure convolution design, Acmix better captures both long-range correlations and local detail, thereby improving the network's detection performance. The structure of the Acmix module is shown in Figure 3.
Acmix consists of two stages:
The first stage: the H × W × C electrowetting display defect feature is projected by three 1 × 1 convolutions and reshaped into N pieces, yielding 3 × N defect sub-features of size H × W × C/N.
The second stage: this stage consists of a self-attention path and a convolution path. The self-attention path uses a self-attention mechanism to enhance the expressive ability of the constructed features while retaining global information. Specifically, the 3 × N sub-features output by stage one form three electrowetting display defect feature maps of size H × W × C/N, which serve as the query, key, and value of the multi-head self-attention module. After shifting, feature fusion, and convolution operations, an electrowetting display defect feature of size H × W × C is obtained. The convolution path applies a fully connected transformation with a convolution kernel of size k to the sub-features output by stage one, followed by the same shifting, feature fusion, and convolution operations, again yielding an electrowetting display defect feature of size H × W × C.
Finally, the outputs of the two paths are added together, with their relative intensity controlled by two learnable scalars. The formula is as follows:
F_out = α F_att + β F_conv
where F_out represents the final output of the Acmix module, F_att the output of the self-attention path, and F_conv the output of the convolution path; α and β are learnable scalars that reflect the model's bias towards convolution or self-attention at different depths.
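The weighted combination above can be illustrated with a toy NumPy sketch. This is not the authors' implementation: in a trained network α and β are learned parameters, while here they are plain floats for demonstration.

```python
import numpy as np

def acmix_combine(f_att, f_conv, alpha, beta):
    """Combine the self-attention and convolution path outputs with
    learnable scalar weights: F_out = alpha * F_att + beta * F_conv.
    alpha/beta stand in for trained parameters (illustrative only)."""
    assert f_att.shape == f_conv.shape  # both paths emit H x W x C features
    return alpha * f_att + beta * f_conv

# Toy example: 4x4 feature maps with C = 2 channels.
f_att = np.ones((4, 4, 2))          # pretend self-attention output
f_conv = np.full((4, 4, 2), 2.0)    # pretend convolution output
f_out = acmix_combine(f_att, f_conv, alpha=0.7, beta=0.3)
print(f_out[0, 0])  # [1.3 1.3] -> 0.7*1 + 0.3*2
```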

2.3. Integrating Lightweight Backbone Network Module EW-GhostNetV2

The original YOLOv7 network suffers from a large parameter volume, high computational complexity, and slow detection speed, which places high demands on edge-device performance and, during industrial deployment, increases the cost of the network. To address this issue, this paper proposes EW-GhostNetV2, a lightweight backbone module based on GhostNetV2 and adapted to YOLOv7, to reduce the computational cost of EW-YOLOv7 during detection and improve the model's inference speed. This module adds a decoupled fully connected (DFC) attention mechanism, built from fully connected layers, on top of the original Ghost module. It can be executed quickly on common hardware and captures long-range dependencies between distant pixels, effectively enhancing the expanded features generated by the cheap operations in the Ghost module.
When the DFC attention is implemented with convolutions, its theoretical complexity is:
O(K_H HW + K_W HW)
where K_H and K_W represent the height and width of the convolution kernel, and H and W represent the height and width of the image.
The input defect feature X ∈ R^(H×W×C) of the electrowetting display is fed in parallel into the Ghost branch and the DFC branch, producing the output feature Y and the attention matrix A. After normalizing A with Sigmoid(·), the results of the two branches are multiplied element-wise:
O = Sigmoid(A) ⊙ V(X)
Compared with the efficient Ghost module, DFC attention is less concise: running it in parallel with the Ghost module at full resolution introduces a high computational cost. Reducing the height and width of the features to half their original size cuts the FLOPs of DFC attention by roughly 75%. Therefore, downsampling the features in the horizontal and vertical directions substantially improves the execution speed of DFC attention.
As shown in Figure 4 and Figure 5, EW-GhostNetV2 performs the first Ghost module and DFC Attention in parallel to enhance the extended features and then inputs the enhanced features into the second Ghost module to generate output features. Compared with the inverted bottleneck design of GhostNet using only two Ghost modules, EW-GhostNetV2 captures long-distance dependencies between pixels at different spatial positions, and the model’s expression ability is enhanced.
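The parallel Ghost/DFC design above can be sketched in NumPy. This is a deliberately simplified illustration under stated assumptions: the learned fully connected mixing of real DFC attention is replaced by axis-wise means, and the Ghost module's cheap operation by an identity, purely to show the data flow (downsample, gate with Sigmoid, upsample, multiply).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dfc_attention(x, stride=2):
    """Sketch of DFC attention: downsample H and W by `stride` (cutting
    FLOPs roughly 4x at stride 2), mix along the horizontal and vertical
    axes (mean as a stand-in for learned FC layers), gate with Sigmoid,
    and upsample the attention map back to the input size."""
    h, w, _ = x.shape
    ds = x[::stride, ::stride, :]                 # downsample H and W
    horiz = ds.mean(axis=1, keepdims=True)        # mix along width
    vert = ds.mean(axis=0, keepdims=True)         # mix along height
    attn = sigmoid(horiz + vert)                  # gate values in (0, 1)
    attn = attn.repeat(stride, axis=0).repeat(stride, axis=1)  # upsample
    return attn[:h, :w, :]

def ghostv2_block(x):
    """Ghost branch (identity stand-in for the cheap operation) gated by
    DFC attention, mirroring Figures 4 and 5; the second Ghost module
    is omitted for brevity."""
    y = x                       # stand-in for the first Ghost module
    return y * dfc_attention(x)  # element-wise gating

x = np.random.rand(8, 8, 4)     # toy H x W x C feature
out = ghostv2_block(x)
print(out.shape)  # (8, 8, 4)
```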

2.4. Introduction of Normalized Gaussian Wasserstein Distance

The CIoU loss function used in the original YOLOv7 is highly sensitive to object scale. As shown in Figure 6, the IoU of the predicted box is very sensitive to positional offsets of small target defects in the electrowetting display: even a tiny positional change can cause a huge change in IoU. For normal-sized electrowetting display defects, the same positional change alters the IoU only slightly, indicating that the IoU metric behaves inconsistently across defect scales. For small target defects in electrowetting display, IoU is therefore a poor measure. We thus introduce the Normalized Gaussian Wasserstein Distance loss function into the improved YOLOv7 network to meet the needs of detecting small target defects in electrowetting devices.

2.4.1. Bounding Box Two-Dimensional Gaussian Distribution Modeling

Conventional bounding boxes are represented by rectangles, and their corresponding IoUs focus more on the fit between bounding boxes, which is not suitable for small target defects in electrowetting display. The detection of small target defects in electrowetting displays should pay more attention to the position of the defect center because, for small target defects in electrowetting display, bounding boxes mostly contain background pixels, and background pixels are mostly concentrated around the edges, while the foreground is generally in the middle. To better weight the pixels in the bounding box, the bounding box is modeled as a two-dimensional Gaussian distribution, and its inscribed ellipse is:
(x − c_x)² / (w/2)² + (y − c_y)² / (h/2)² = 1
where (c_x, c_y) is the center of the rectangle, and w and h are its width and height.
The probability density function of the two-dimensional Gaussian distribution is:
f(L | μ, Σ) = exp(−(1/2)(L − μ)^T Σ^(−1) (L − μ)) / (2π |Σ|^(1/2))
where L is the position variable, and μ and Σ are the mean vector and covariance matrix.
Under the condition (L − μ)^T Σ^(−1) (L − μ) = 1, the equi-value contour of the two-dimensional Gaussian distribution coincides with the inscribed ellipse above. The bounding box can therefore be modeled as a two-dimensional Gaussian distribution with:
μ = [c_x, c_y]^T,  Σ = [[w²/4, 0], [0, h²/4]]
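The box-to-Gaussian mapping above is mechanical and can be written out directly; a minimal sketch (function name illustrative):

```python
import numpy as np

def box_to_gaussian(cx, cy, w, h):
    """Model a bounding box (center cx, cy; width w; height h) as a 2-D
    Gaussian with mean at the box center and diagonal covariance
    diag(w^2/4, h^2/4), per the inscribed-ellipse derivation above."""
    mu = np.array([cx, cy], dtype=float)
    sigma = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
    return mu, sigma

mu, sigma = box_to_gaussian(10.0, 20.0, 4.0, 6.0)
print(mu)     # [10. 20.]
print(sigma)  # [[4. 0.] [0. 9.]]
```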

2.4.2. Normalized Gaussian Wasserstein Distance

After completing the two-dimensional Gaussian modeling of the bounding box, the Wasserstein Distance (WD) in the optimal transport theory is used to calculate the distance between the predicted distribution and the true distribution (obtained by transforming the predicted bounding box and true bounding box):
W_2^2(N_a, N_b) = ||μ_a − μ_b||_2^2 + ||Σ_a^(1/2) − Σ_b^(1/2)||_F^2
Substituting the two distributions yields:
W_2^2(N_a, N_b) = ||[c_xa, c_ya, w_a/2, h_a/2]^T − [c_xb, c_yb, w_b/2, h_b/2]^T||_2^2
Finally, using exponential normalization, the Normalized Wasserstein Distance (NWD) is obtained:
NWD(N_a, N_b) = exp(−√(W_2^2(N_a, N_b)) / C)
where C is a dataset-dependent constant; in this paper, C is set to the average absolute size of the targets in the dataset.
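The whole NWD computation collapses to a few lines once the diagonal covariances are substituted, since Σ^(1/2) = diag(w/2, h/2) turns the Frobenius term into plain squared differences. A sketch, with C chosen arbitrarily as a placeholder (the paper sets it to the dataset's average target size):

```python
import math

def nwd(box_a, box_b, C=32.0):
    """Normalized Wasserstein Distance between boxes given as
    (cx, cy, w, h).  With diagonal covariances, the squared
    2-Wasserstein distance reduces to a squared L2 distance over
    [cx, cy, w/2, h/2]; C is a dataset-dependent constant."""
    va = (box_a[0], box_a[1], box_a[2] / 2.0, box_a[3] / 2.0)
    vb = (box_b[0], box_b[1], box_b[2] / 2.0, box_b[3] / 2.0)
    w2 = sum((a - b) ** 2 for a, b in zip(va, vb))
    return math.exp(-math.sqrt(w2) / C)

# Identical boxes give distance 0, hence NWD = 1.
print(nwd((10, 10, 4, 4), (10, 10, 4, 4)))  # 1.0
# A 2-pixel offset lowers NWD smoothly rather than collapsing like IoU.
print(round(nwd((10, 10, 4, 4), (12, 10, 4, 4)), 4))  # 0.9394
```

This smooth decay for small offsets is exactly why NWD suits small targets: the same 2-pixel shift that drives the IoU of a 4 × 4 box toward zero only nudges the NWD.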

2.5. Network Model Training and Evaluation Indicators

This study evaluates the accuracy, detection speed, and structural complexity of the network on electrowetting display defects through ablation experiments. The main indicators used to judge detection performance are precision (P), recall (R), and mean average precision (mAP). The calculation formulas are as follows:
P = X_TP / (X_TP + X_FP)
R = X_TP / (X_TP + X_FN)
mAP = (Σ_{i=1}^{K} AP_i) / K
Weight size (Weight (MB)), parameter size, and Giga Floating-point Operations Per Second (GFLOPS) are selected as standards to evaluate network lightweighting. In addition, network inference time (ms) is selected as an indicator for evaluating network detection speed.
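The three metrics above are straightforward to compute from per-class counts; a minimal sketch with toy numbers (the counts and AP values are illustrative, not from the paper):

```python
def precision(tp, fp):
    """P = TP / (TP + FP): fraction of detections that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """R = TP / (TP + FN): fraction of true defects that are found."""
    return tp / (tp + fn)

def mean_average_precision(ap_per_class):
    """mAP = mean of per-class AP values over K classes."""
    return sum(ap_per_class) / len(ap_per_class)

# Toy counts for one defect class plus made-up per-class AP values.
print(precision(80, 20))                          # 0.8
print(recall(80, 10))                             # ~0.889
print(mean_average_precision([0.9, 0.85, 0.95]))  # ~0.9
```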

3. Experimental Results and Analysis

3.1. Dataset Analysis

We acquired and processed images of various electrowetting display devices exhibiting common defects and created a novel dataset: Common Electrowetting Display Device Defect Dataset. We applied data augmentation techniques, such as cropping, adding noise, changing brightness, etc., to the original 560 sample images to generate a total of 5040 electrowetting display device images. The dataset comprises seven categories based on the defect type, as follows:
  • Functional display device: Figure 7a;
  • Pixel wall distortion: Figure 7b, voltage alters droplet morphology, resulting in irregular pixel wall dimensions that impair display quality;
  • Charge trapping: Figure 7c, ions in electrolyte solution accumulate on solid surface under electric field, forming charge layer that diminishes voltage-induced force on droplet, leading to contact angle saturation that constrains electrowetting modulation range;
  • Conductive layer damage: Figure 7d, current produces heat that causes conductive layer to overheat and burn or melt, compromising electrowetting stability and reliability;
  • Ink opening: Figure 7e, oil phase and water phase interface instability causes oil phase to separate into small droplets or films that affect display uniformity and clarity;
  • Ink leakage: Figure 7f, insufficient interfacial tension between oil phase and water phase allows oil phase to escape from fluid chamber, resulting in display malfunction or damage to other components;
  • Hydrophobic layer deterioration: Figure 7g, prolonged use erodes hydrophobicity of hydrophobic layer, preventing droplets from forming optimal contact angle on it, impairing electrowetting performance.
The resolution of each image is 577 × 488 pixels. Table 1 presents the sample distribution of each category in the dataset. Figure 7 displays the original sample images of each category.
In order to meet the high precision requirements of electrowetting display defects, this study used the Python-OpenCV library to perform data augmentation on different types of defects in the original dataset. By applying data processing operations, such as flipping, rotating, cropping, scaling, and color adjustment, a total of 5040 image samples were obtained. These data augmentation techniques simulate the image quality degradation caused by machine or environmental factors. We use this dataset to train an electrowetting display defect detection network to enhance its detection capabilities and robustness in complex production environments.
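The augmentation operations listed above can be sketched as follows. The paper uses Python-OpenCV; this NumPy-only stand-in shows the same classes of operation (flip, rotation, brightness change, additive noise) without assuming any OpenCV API details, and the parameter ranges are illustrative.

```python
import numpy as np

def augment(img, rng):
    """Apply one randomly chosen augmentation from the categories the
    paper describes: horizontal flip, 90-degree rotation, brightness
    scaling, or additive Gaussian noise (NumPy stand-ins for OpenCV)."""
    op = rng.integers(0, 4)
    if op == 0:
        return img[:, ::-1, :]                                # horizontal flip
    if op == 1:
        return np.rot90(img, k=1, axes=(0, 1))                # rotate 90 deg
    if op == 2:
        return np.clip(img * rng.uniform(0.7, 1.3), 0, 255)   # brightness
    noise = rng.normal(0, 5, img.shape)                       # sensor noise
    return np.clip(img + noise, 0, 255)

rng = np.random.default_rng(0)
img = np.full((488, 577, 3), 128.0)  # matches the 577 x 488 image size
aug = augment(img, rng)
print(aug.shape)  # (488, 577, 3) or (577, 488, 3) after rotation
```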

3.2. Experimental Results Analysis

3.2.1. Comparison of Verification Results of Different Detection Algorithms

To confirm that the EW-YOLOv7 model proposed in this paper achieves higher detection accuracy at a lower deployment cost, comparative experiments were conducted against other advanced object detectors, such as Faster RCNN, SSD, YOLOv5, and YOLOv7, on the common electrowetting display defect dataset. The specific data are shown in Table 2:
Thanks to the One-Stage structure, SSD and the YOLO series models complete target localization and classification in a single forward pass, which greatly reduces inference time and parameter consumption. Compared with algorithms that use Two-Stage structures, such as Faster RCNN, they are far more efficient: even SSD, the weakest One-Stage model in the comparative experiments, requires only 39.3% of the Param and 30.6% of the inference time of Faster RCNN. In addition, the YOLO series adopts a lightweight backbone to further reduce redundant computation and improve feature extraction; the Param and inference time of the original YOLOv7 network are 62.2% and 28.9% lower, respectively, than those of SSD. The YOLO series also performs well in detecting electrowetting display defects: compared with SSD, YOLOv5 and YOLOv7 increase mAP by 1.3% and 6.7%, respectively, and they effectively avoid missed and false detections when handling dense or overlapping electrowetting display targets. However, there is still room for optimization in the detection accuracy of electrowetting display defects, especially small targets, as well as in model lightweighting. For this detection task, this paper proposes the EW-YOLOv7 network model based on the existing YOLOv7, with targeted improvements that keep the computation within a controllable range while achieving high recall, a high detection rate, and fast inference. Evaluated on the common electrowetting display defect dataset, EW-YOLOv7 ranks among the top in all indicators, achieving a balance between detection accuracy, detection speed, and model lightweighting.
We conducted identical experiments on the YOLOv7-tiny version, integrating the same improvement modules as EW-YOLOv7 into the tiny version of v7 to produce EW-YOLOv7-tiny (hereafter EW-tiny). In this trial, EW-tiny's detection performance was surpassed only by the original YOLOv7 and EW-YOLOv7. Furthermore, compared to the original YOLOv7-tiny model, EW-tiny's Param and inference time were further reduced. If a lightweight electrowetting defect detection model is required, EW-tiny is undoubtedly the superior choice.

3.2.2. Ablation Experiment

In order to verify the superiority of the proposed algorithm in detecting electrowetting display defects, ablation experiments were conducted on the common electrowetting display defect dataset for each improved module; the experimental results are shown in Tables 3 and 4:
As shown in the table above, integrating the Acmix attention mechanism and NGWD Loss into the YOLOv7 algorithm improves the detection accuracy of the network, which is significantly reflected in the detection accuracy of small target defects.
As demonstrated in Table 5, the integration of the EW-ACmix attention mechanism and the NGWD Loss function significantly enhanced the performance of the YOLOv7 model in detecting small and medium target defects in electrowetting display devices. The accuracy in detecting charge trapping and degradation defects increased compared to the original network. However, the attention mechanism weakened the model's ability to detect larger-scale targets, resulting in a 9.7% decrease in accuracy for normal display images when only the EW-ACmix module was integrated. The added module also increased the network's parameters, weights, and inference time, impacting its deployment performance. In terms of lightweighting, integrating EW-GhostNetV2 into the v7 model reduced the network's parameters, GFLOPS, weights, and inference time by 19.3%, 64.3%, 28.7%, and 29.6%, respectively. The DFC attention mechanism improved EW-GhostNetV2's ability to capture long-range pixel dependencies, raising detection accuracy by 2.7%, 6%, and 7.5% for normal display, charge trapping, and degradation, respectively. This enhanced the network's deployment performance while slightly improving its detection accuracy, achieving a balance between model lightness and detection performance.
The three modules exhibit excellent performance in terms of detection accuracy and network lightweighting. By integrating them simultaneously into the network, we obtained the proposed EW-YOLOv7 model, whose detection process is illustrated in Figure 8. The integration of the lightweight GhostNetV2 module reduced the number of network parameters and GFLOPS by 19.2% and 64.3%, respectively, compared to the original YOLOv7 model, significantly decreasing the amount of network computation. The inference time and weight size of EW-YOLOv7 were reduced by 28.9% and 28.4%, respectively, greatly enhancing its deployment capability. In terms of detection accuracy, the optimization of the Acmix attention mechanism and NGWD loss function for detecting defects, particularly small target defects, improved the network's ability to identify such defects and increased its average detection accuracy by 8.7% compared to the original network. The experimental results demonstrate that EW-YOLOv7 outperforms the original YOLOv7 network in terms of detection accuracy, speed, and lightweight deployment and is well-suited for defect detection in the industrial production process of electrowetting display devices.

4. Conclusions

This manuscript addresses the current challenge of instability and low precision in the detection of defects in electrowetting display devices. To provide data support for this field, we have constructed a dataset comprising 5040 sample images that encompass seven major categories of defects in electrowetting display devices. We propose a lightweight defect detection network, EW-YOLOv7, based on YOLOv7, with targeted enhancements: integrating the EW-GhostNetV2 module and the EW-ACmix attention mechanism and introducing the NGWD Loss function. These improvements enhance the network's performance in detecting defects in electrowetting display devices and its deployment capability. The experimental results demonstrate that EW-YOLOv7 outperforms other mainstream detection networks in terms of accuracy, speed, and lightweight design, making it well-suited for deployment in the industrial production process of electrowetting display devices. Compared to traditional machine-vision-based methods, our deep-learning-based approach exhibits stronger generalization and is less susceptible to environmental factors and image quality. It can identify a wide variety of defects in electrowetting devices and achieve high-speed real-time detection, making it more suitable for deployment in complex industrial production environments. However, there is still room for improvement in the model's ability to identify small target defects. This could be achieved by adding a small target detection layer to enhance the semantic information obtained from image acquisition. Additionally, our dataset does not yet cover all types of defects in electrowetting devices; further expansion is necessary to improve the model's generalization before it can be formally applied to industrial production. In future work, we will continue to refine the model structure by optimizing the multi-scale feature fusion network and enhancing its adaptability to different production environments.

Author Contributions

Conceptualization, Z.Z.; Methodology, Z.L.; Formal analysis, N.C.; Investigation, J.W., Z.X. and S.L.; Writing—original draft, Z.Z.; Writing—review and editing, N.C.; Visualization, Z.L.; Funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China (nos. 61871475, 61471133), the Guangdong Science and Technology Plan (nos. 201905010006, 2017B010126001), the Foundation for High-level Talents in Higher Education of Guangdong Province (nos. 2017GCZX001, 2017KQNCX097), the Guangzhou Science and Technology Plan (no. 201904010233), and, in part, by the Guangzhou Rural Science and Technology Specialists Project (no. 20212100058).

Institutional Review Board Statement

The study did not involve humans or animals.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flowchart for the construction of electrowetting display device defect detection model.
Figure 2. EW-YOLOv7 network architecture.
Figure 3. ACmix module architecture.
Figure 4. DFC attention mechanism structure.
Figure 5. EW-GhostNetV2 module structure.
Figure 6. The sensitivity analysis of IoU on tiny and normal scale objects. Note that each grid denotes a pixel; box A denotes the ground truth bounding box; boxes B and C denote the predicted bounding boxes with 1 pixel and 4 pixels diagonal deviation, respectively. (a) Tiny scale object; (b) Normal scale object.
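The sensitivity illustrated in Figure 6 can be reproduced numerically: the same diagonal deviation costs a tiny box far more IoU than a normal-sized one. The 6 × 6 and 36 × 36 pixel sizes below are illustrative choices, not the exact boxes drawn in the figure:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) corner format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def shift(box, d):
    """Translate a box diagonally by d pixels."""
    x1, y1, x2, y2 = box
    return (x1 + d, y1 + d, x2 + d, y2 + d)

tiny, normal = (0, 0, 6, 6), (0, 0, 36, 36)
# 1-pixel deviation: IoU ~0.53 for the tiny box vs ~0.90 for the normal one.
# 4-pixel deviation: IoU ~0.06 vs ~0.65.
```

This is exactly why IoU-based matching discards many reasonable predictions on tiny defects, motivating the distance-based NGWD metric used in this work.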
Figure 7. Seven major flaws (original image).
Figure 8. EW-YOLOv7 detection process.
Table 1. Sample distribution of the datasets.
| Classes | Total | Train | Test |
|---|---|---|---|
| Burnt | 720 | 576 | 144 |
| Charge Trapping | 720 | 576 | 144 |
| Deformation | 720 | 576 | 144 |
| Degradation | 720 | 576 | 144 |
| Oil Leakage | 720 | 576 | 144 |
| Oil Splitting | 720 | 576 | 144 |
| Normal | 720 | 576 | 144 |
| Total | 5040 | 4032 | 1008 |
Table 2. Results of different detection models.
| Models | Precision | Recall | mAP | Param | Inference Time (ms) |
|---|---|---|---|---|---|
| Faster RCNN | 0.635 | 0.821 | 0.693 | 250.69 M | 230.4 |
| SSD | 0.714 | 0.863 | 0.756 | 98.48 M | 70.6 |
| YOLOv5 | 0.766 | 0.856 | 0.769 | 27.56 M | 38.9 |
| YOLOv7 | 0.826 | 0.884 | 0.823 | 37.22 M | 50.2 |
| YOLOv7-tiny | 0.659 | 0.786 | 0.485 | 6.17 M | 24.3 |
| EW-YOLOv7 | 0.869 | 1.000 | 0.895 | 30.07 M | 35.9 |
| EW-YOLOv7-tiny | 0.814 | 0.966 | 0.787 | 6.03 M | 20.6 |
Table 3. Ablation experiment results (detection performance).
| Method | ACmix | GhostNetV2 | NGWD | Precision | Recall | mAP |
|---|---|---|---|---|---|---|
| YOLOv7 | × | × | × | 0.826 | 0.884 | 0.823 |
| YOLOv7 | ✓ | × | × | 0.837 | 1.000 | 0.857 (+4.1%) |
| YOLOv7 | × | ✓ | × | 0.835 | 0.869 | 0.831 (+0.9%) |
| YOLOv7 | × | × | ✓ | 0.816 | 1.000 | 0.842 (+2.3%) |
| YOLOv7 | ✓ | ✓ | × | 0.874 | 1.000 | 0.868 (+5.4%) |
| YOLOv7 | ✓ | × | ✓ | 0.833 | 0.943 | 0.882 (+7.1%) |
| YOLOv7 | × | ✓ | ✓ | 0.863 | 0.912 | 0.845 (+2.6%) |
| YOLOv7 | ✓ | ✓ | ✓ | 0.869 | 1.000 | 0.895 (+8.7%) |
Table 4. Ablation experiment results (model scale).
| Method | ACmix | GhostNetV2 | NGWD | Param | Weight (MB) | Inference Time (ms) | GFLOPS |
|---|---|---|---|---|---|---|---|
| YOLOv7 | × | × | × | 37.22 M | 74.9 | 50.2 | 103.3 |
| YOLOv7 | ✓ | × | × | 38.43 M | 75.6 | 53.6 | 103.3 |
| YOLOv7 | × | ✓ | × | 30.02 M | 53.4 | 35.3 | 36.8 |
| YOLOv7 | × | × | ✓ | 39.22 M | 76.8 | 55.7 | 103.3 |
| YOLOv7 | ✓ | ✓ | × | 30.08 M | 53.7 | 37.4 | 36.8 |
| YOLOv7 | ✓ | × | ✓ | 40.27 M | 79.4 | 57.6 | 103.3 |
| YOLOv7 | × | ✓ | ✓ | 30.04 M | 54.3 | 36.7 | 36.8 |
| YOLOv7 | ✓ | ✓ | ✓ | 30.07 M | 53.2 | 35.9 | 36.8 |
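As a quick check, the reductions quoted in the abstract follow directly from the baseline YOLOv7 and full EW-YOLOv7 rows of this table, roughly the 19.2% fewer parameters and 64.3% lower computational cost reported:

```python
def pct_reduction(baseline, improved):
    """Percentage reduction of a metric relative to its baseline."""
    return (baseline - improved) / baseline * 100

# Values taken from the YOLOv7 and EW-YOLOv7 (all-modules) rows above.
param_reduction = pct_reduction(37.22, 30.07)   # parameters (millions)
flops_reduction = pct_reduction(103.3, 36.8)    # GFLOPS
```

The savings come almost entirely from the GhostNetV2 rows (36.8 GFLOPS wherever it is enabled), which is consistent with its role as the lightweight backbone module.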
Table 5. Ablation experiment results (normal, charge trapping, degradation).
| Method | ACmix | GhostNetV2 | NGWD | AP (Normal) | AP (Charge Trapping) | AP (Degradation) |
|---|---|---|---|---|---|---|
| YOLOv7 | × | × | × | 0.926 | 0.388 | 0.497 |
| YOLOv7 | ✓ | × | × | 0.829 | 0.657 | 0.746 |
| YOLOv7 | × | ✓ | × | 0.953 | 0.448 | 0.572 |
| YOLOv7 | × | × | ✓ | 0.921 | 0.578 | 0.622 |
| YOLOv7 | ✓ | ✓ | × | 0.921 | 0.674 | 0.746 |
| YOLOv7 | ✓ | × | ✓ | 0.879 | 0.679 | 0.783 |
| YOLOv7 | × | ✓ | ✓ | 0.943 | 0.647 | 0.686 |
| YOLOv7 | ✓ | ✓ | ✓ | 0.926 | 0.695 | 0.783 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
