Article

Follow-Up Control and Image Recognition of Neck Level for Standard Metal Gauge

College of Control Science and Engineering, China University of Petroleum, Qingdao 266580, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(18), 6624; https://doi.org/10.3390/app10186624
Submission received: 30 July 2020 / Revised: 16 September 2020 / Accepted: 18 September 2020 / Published: 22 September 2020
(This article belongs to the Section Applied Industrial Technologies)

Featured Application

This image recognition approach for gauge neck levels can effectively reduce measurement times, decrease manmade errors in liquid level readings, and improve the efficiency of pipe prover validation.

Abstract

An image recognition technique is proposed for determining optimal neck levels for standard metal gauges in the process of validating pipe provers. A camera-level follow-up control system was designed to achieve automated tracking of fluid levels by a camera, thereby preventing errors from inclined viewing angles. An orange background plate was placed behind the tube to reduce background interference and highlight the scale numbers, scale lines, and concave meniscus. A segmentation algorithm based on edge detection and K-means clustering was used to segment the indicator tube and scale in the acquired images. A concave meniscus reconstruction algorithm and a curve-fitting algorithm were proposed to better identify the lowest point of the meniscus. A characteristic edge detection model was used to identify the centimeter-scale lines corresponding to the meniscus. A binary tree multiclass support vector machine (MCSVM) classifier was then used to identify the scale numbers corresponding to the scale lines and determine the optimal neck level for standard metal gauges. Experimental results showed that measurement errors were within ±0.1 mm of a ground truth acquired manually using Vernier calipers. The recognition time, including follow-up control, was less than 10 s, which is much lower than the switching time required between measuring individual tanks. This automated measurement approach for gauge neck levels can effectively reduce measurement times, decrease human errors in liquid level readings, and improve the efficiency of pipe prover validation.

1. Introduction

The metering of oil flow is critical for crude oil production and trade. As such, flow meters must be regularly verified to ensure accuracy. Standard pipe provers and volume methods are often used to test flow meters in varying conditions. Therefore, provers must be regularly checked against a traceable reference to demonstrate compliance with accuracy and repeatability requirements [1,2].
Water calibration is often used to verify pipe provers. The volume of a standard metal gauge includes the volume of the lower main body (which is fixed) and the volume of the neck. As a result, if the liquid level in the neck can be measured accurately, the volume can be calculated with high precision. According to the NIST standards [2], the maximum permissible error for volumetric field standards is ±0.05% of the nominal capacity, and the corresponding reading error for the neck level is ±0.2 mm. Because conventional level sensors are often unsuitable for this measurement in terms of range, accuracy, or installation, manual readings with Vernier calipers are used instead [3], which prevents automated calculation of the water volume. In the two-color gauge recognition system designed by Changsheng et al. [4,5], a horizontal and vertical endpoint line (HVPL) identification technique was used to read scale numbers, and line positioning was achieved using horizontal projection statistics. However, this approach did not consider measurement errors caused by differences in camera viewing angles, which reduced its accuracy below the ±0.2 mm requirement. In the automatic verification system designed by Zhang [6], liquid levels were collected automatically using image recognition. However, the center of gravity of the concave meniscus was used as the reading point and readings were interpolated between two standard points, so the accuracy was no better than ±0.5 mm, which also fails to meet the ±0.2 mm requirement.
In this study, an automated measurement technique utilizing image recognition is proposed for neck level measurements. The use of computer vision, rather than manual readings of liquid levels, avoids the accuracy problems faced by conventional liquid level sensors. A camera-level follow-up control system was designed to achieve automated tracking of fluid levels, thereby eliminating the errors caused by inclined viewing angles. A curve-fitting algorithm was used to identify the lowest point of the meniscus. Together, these steps increase the accuracy of level measurements and the efficiency of volumetric calculations for prover verification.

2. Experimental Design

2.1. The Water Calibration System for the Pipe Prover

The water verification system is illustrated in Figure 1 and can be described as follows. In the project, a 6000 L pipe prover needs to be validated. Two 1000 L standard metal gauges were used to alternately measure the volume Vs of water flowing through the standard tube section of the pipe prover. The volume measured at the calibration temperature and pressure was then converted to the volume Vps at Standard Temperature and Pressure (STP, 20 °C and 101.325 kPa), which represents the standard volume of the pipe prover. This process can be expressed as:
Vs = ∑ Vsi
Vsi = V0 + (h − h0)·s
Vsi — volume of water measured by the standard metal gauge in one fill, L; Vs — accumulated volume of water measured by the standard metal gauge, L; V0 — standard metal gauge rated capacity, L; h — liquid level height in the tube, mm; h0 — liquid level height corresponding to the rated volume, mm; s — volume per unit height, L/mm.
The liquid level height h in the tube is the only unknown parameter used for calculating the standard metal gauge volume Vsi. The accuracy of readings in the standard measurement neck determines the accuracy of water volume calculations in the metal measuring device. As such, measurement precision for liquid levels must be within 0.2 mm in the calibration system. Verification regulations also require that liquid level readings be completed within a specific timeframe. The static setting time (after filling) can be denoted as T1, the reading time for the liquid level is represented by T2, the water release time for the bottom valve is T3, and the time required for a standard metal gauge to be filled is indicated by T4. As such, the condition T1 + T2 + T3 < T4 must be satisfied. Liquid level readings for standard metal gauges are typically completed within 20 s, after the gauge is placed in the water and bubbles are allowed to dissipate.
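The volume computation in the two equations above is simple arithmetic once the neck level h is known; a minimal sketch (with illustrative, not measured, parameter values) might look like:

```python
def gauge_volume(v0_l, h_mm, h0_mm, s_l_per_mm):
    """Volume of one gauge fill: V_si = V0 + (h - h0) * s."""
    return v0_l + (h_mm - h0_mm) * s_l_per_mm

def total_volume(fills):
    """Accumulated volume V_s over alternating gauge fills."""
    return sum(gauge_volume(*f) for f in fills)

# Illustrative values: 1000 L rated capacity, level 171.22 mm against a
# 170.00 mm reference, and a hypothetical 0.05 L/mm neck coefficient.
v_si = gauge_volume(1000.0, 171.22, 170.00, 0.05)  # 1000.061 L
```

This makes the sensitivity explicit: with s = 0.05 L/mm, a 0.2 mm reading error corresponds to only 0.01 L, i.e., 0.001% of the 1000 L capacity.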
This study proposes a new level measurement technique, based on image recognition, to improve the accuracy and speed of liquid height readings. The methodology includes two components: acquisition of liquid images and automated recognition of liquid levels.

2.2. Design of Automatic Acquisition System for Gauge Neck Level

The NIST standards [2] stipulate that the reading of a glass tube should be performed with the line of sight and liquid level at the same height. For a given camera, a larger field of vision and a smaller focal length result in greater distortion, increased image deformation, and lower precision. As shown in Figure 2, a larger angular view and increased reading errors occur when the concave meniscus level is not at the same horizontal height as the camera. Concave meniscus readings exhibit large errors when the deviation angle exceeds a certain limit and precision requirements for pipe prover verification cannot be met.
A camera-based liquid level follow-up control system was designed to solve this problem by achieving automated tracking of liquid levels, eliminating the acquisition errors introduced by inclined viewing angles. In this system, the camera is mounted on a linear guide with a ball screw, allowing the lens to be moved vertically along the tube. A camera with a wide-angle lens in initial position A first acquires an image of the entire glass tube. This image is only used to approximate the liquid level and calculate the vertical distance between the camera and the height of the liquid column. The camera is then driven by a stepper motor to position C, where the angle of inclination to the concave meniscus falls within the allowed range. After the camera reaches the required position, the focal length is adjusted and another image is acquired for accurate liquid level measurement, thereby achieving automated tracking.
The hardware for the proposed liquid level acquisition system, shown in Figure 3, primarily consists of a linear guide rail, a camera, and an infrared photoelectric detection switch, used to determine the recognition range. A programmable logic controller (PLC) drives the step motor. Liquid level photos taken by the camera are transmitted to the upper computer, which processes the images in real time. An orange background plate was placed behind the tube to reduce background interference, and highlight the scale numbers/lines and concave meniscus.

2.3. Automatic Recognition Algorithm for Gauge Neck Level

A sample image collected by the proposed system is shown in Figure 4. The height of the liquid level (h) was determined using image processing, performed by the aforementioned upper computer. The recognition algorithm is designed to improve both accuracy and speed. Figure 4 demonstrates that the liquid tube and scale occupy a relatively small portion of the image, which was therefore segmented to remove background structures and isolate the liquid level edge. The algorithm used to identify the gauge neck level first divides the image into tube and scale regions, establishing a mapping between pixel positions and the physical distances they represent. This information was then used to determine the true height of the liquid level. A workflow diagram for this algorithm is shown in Figure 5.

3. Image Segmentation Algorithm Based on Edge Detection and K-Means Clustering Algorithm

Images collected by the liquid level acquisition system were 1024 × 768 pixels with three RGB channels. The original image offers high resolution, which is helpful for improving accuracy but increases runtime. From the figure above, it is evident the effective information in the green area includes the scale and liquid indicator. As such, the acquired images were segmented to improve algorithm efficiency and extract the useful information. Segmentation, particularly for color images, is a critical component of image processing and analysis [7]. Common algorithms include the threshold, clustering, region growth, and edge detection techniques. For example, K-means clustering has successfully been applied in fruit segmentation, medical image processing, and other fields. Liming et al. used clustering in color space to segment images of Bayberry plants [8]. Yongfang et al. used a graph cut method based on K-means clustering to subdivide nuclear magnetic resonance images of the brain [9]. Edge detection can also be used to segment images by finding high-contrast boundaries between adjacent pixels. For example, Meng et al. used edge detection and automatic seed region growing to segment target objects [10].
The target area in the image exhibits obvious linear features. The vertical edges can be divided into four structures: the left edge of the scale, the right edge of the scale, the left edge of the tube, and the right edge of the tube. The K-means clustering algorithm can classify multidimensional data using the shortest distance between subclass information and cluster centers, based on the number of clusters. As such, a hybrid methodology, combining image segmentation based on edge detection and K-means clustering, is proposed in this study for automated liquid level identification.
Prior to segmentation, a series of image processing steps were performed. This consisted of denoising the original image with a filter, using the Sobel operator to extract edge information in the vertical direction, and applying the Hough straight-line detection model to identify edges. The K-means clustering algorithm was then used to classify the resulting line information, locate the edge of the target area, crop the original image, and achieve precise segmentation of the scale and liquid tube images.

3.1. Filtering of Images

A filtering step was included to suppress noise in the original images and preserve details. This involved the use of a smoothing kernel and a low-frequency enhanced spatial domain filtering technique. Some operations, such as low-pass filtering, can blur edge features. However, this application required detecting edges without losing feature information. Special classes of filters exist to simultaneously optimize these two objectives, including anisotropic diffusion and bilateral filtering, the effects of which are shown for the original image [11]. The results produced by these two algorithms are visually similar. Quantitative assessment was also performed using peak signal-to-noise ratio [12] (PSNR) and the structural similarity index [13] (SSIM).
PSNR is based on the average calculation of gray values for image pixels. It is a basic quality evaluation algorithm and a commonly used indicator for measuring signal distortion. PSNR compares the original and distorted images on a pixel-by-pixel basis, calculates the total (summed) error between the pixels, and provides an evaluation score. Its value is typically in the range of 20–50, with larger values representing higher image quality. SSIM compares the brightness, contrast, and structural similarity of the original and processed images, returning a value in the range of 0–1. Larger values indicate better image quality, with unity indicating identical data. PSNR and SSIM values for the liquid level images are shown in Table 1. These results suggest that bilateral filtering is superior to anisotropic diffusion, as measured by both the PSNR and SSIM. Bilateral filtering considers both proximity in the spatial domain and similarity in grayscale and brightness, and thus differs in principle from anisotropic diffusion filtering. Its results were also more effective because it accounts for connections between neighboring pixels. As such, bilateral filtering was selected for use in this study.
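Both metrics can be computed directly from their definitions. The sketch below implements PSNR and a simplified single-window SSIM; note that the standard SSIM averages the same formula over local windows, so this global variant only illustrates the structure of the index:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a processed image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified single-window SSIM combining luminance, contrast,
    and structure terms (the standard index averages over local windows)."""
    x, y = ref.astype(np.float64), test.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give an infinite PSNR and an SSIM of 1, matching the interpretation above.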

3.2. Improved Edge Detection Algorithm for Image

Bilateral filtering was used to denoise the original image while preserving edge features, which facilitated vertical edge detection for the liquid tube and scale. Edge information in other directions was lessened to reduce interference. This improved vertical edge detection model was designed using specific image features and a conventional edge detection algorithm. The edges of the image are a set of points on the boundary of two different characteristic regions, reflecting the discontinuity and distortion of local features. As a result, most of the useful information contained in the image resides on the edge.
Edges are denoted by large discrepancies between a pixel and its neighboring pixels, which can be represented by a vector with the attributes of amplitude and direction [14,15]. Common edge detection algorithms include the Roberts, Prewitt, Sobel, Canny, and Laplace operators.
The Roberts operator uses only the 2 × 2 neighborhood of the current pixel, which is computationally simple but sensitive to noise. This operation can be expressed mathematically as [16]:
G1 = [ 1 0; 0 −1 ]    G2 = [ 0 −1; 1 0 ]
The Prewitt operator reduces the influence of noise on edges using a 3 × 3 neighborhood, defined as [17]:
Gx = [ −1 −1 −1; 0 0 0; 1 1 1 ]    Gy = [ −1 0 1; −1 0 1; −1 0 1 ]
The final edge image can then be acquired using G = max(|Gx|, |Gy|) or G = |Gx| + |Gy|. The Sobel operator improves on the accuracy of Prewitt detection by using a weighted approach to calculate differences. This technique not only detects edge points more accurately, it further suppresses the influence of noise. The operator can be represented as [18]:
Gx1 = [ −1 0 1; −2 0 2; −1 0 1 ]    Gy1 = [ −1 −2 −1; 0 0 0; 1 2 1 ]
Sobel edge detection is one of the most common discrete differential operators. It can be used to calculate the norm of a gradient vector corresponding to pixel points, indicating edges in either the horizontal or vertical direction. As such, the Sobel operator is better suited for processing edge information along the X and Y axes (horizontal and vertical vector components) and can be inaccurate for angled structures. In contrast, the Canny [19] operator identifies edges using the maximum value of an image signal function. This involves removing noise with a Gaussian filter, calculating the amplitude and direction of the gradient, performing nonmaximum value suppression, and connecting the edges of an image with a lag threshold technique. The Canny operator also exhibits strong noise reduction capabilities. It can balance noise, effectively detect edge information, and smooth pixel gradients [20].
The Laplace kernel is an isotropic two-order differential operator. It is independent of direction and exhibits rotational invariance. Its second derivative is defined as [21]:
∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
The Laplace template is:
G11 = [ 0 1 0; 1 −4 1; 0 1 0 ]    G21 = [ 1 1 1; 1 −8 1; 1 1 1 ]
This approach is sensitive to singular and boundary points and is often applied to image sharpening. Edge detection results for the Laplace method are shown in Figure 6.
Edge detection was used in this study to identify the left and right edges of the scale and liquid tube, which are both vertical boundaries. As seen in the figures, the Roberts, Sobel (vertical), and Laplace operators caused information loss along the Y direction, which does not meet the requirements of the study. In contrast, the Prewitt and Canny operators preserved edge information but failed to eliminate unneeded structures, which complicated subsequent processing. The Sobel operator preserved valuable information in the X direction (such as the left edge of the liquid tube), but did not distinguish between weak edges and noise. As such, an improved edge detection methodology, suitable for image segmentation in this study, was developed by combining the advantages of the Sobel and Canny techniques. The block diagram of the algorithm is shown in Figure 7.
This algorithm is designed to detect the edges of the scale and indicator tube. As such, it only needs to consider vertical edge information, while ignoring horizontal structures. In the first step, the gradient is calculated in the X direction, eliminating the influence of horizontal edge elements. Noise from nontarget edges detected by the Sobel step is reduced using kernel convolution with a Gaussian smoothing filter. The gradient amplitude and direction are then approximated using the Sobel templates (Gx1 and Gy1), as in the Canny operator. Nonmaximum suppression is then performed: pixels that are not local maxima along the gradient direction in their neighborhood are set to the background gray value, so that other edge pixels are removed and only fine lines are retained. Finally, Hough linear detection transforms the problem of finding collinear points in the original image into the task of finding peak points in a parameter space, using coordinate space conversion relationships. The edges detected by the Hough line transform [20] are shown in Figure 8.
It is evident from the image that the proposed algorithm can effectively detect edges on both sides of the liquid tube and the scale. Linear information was also acquired for the target edge and straight lines were ignored in nonedge regions. However, a jagged line was formed on the edge of the scale and liquid tube, due to the influence of other noise elements (such as interference from uneven lighting). Therefore, a new algorithm is needed to automatically classify these lines and accurately divide the two effective regions in the original image.
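In practice this pipeline would use library routines (e.g., OpenCV's Sobel, Canny, and Hough transforms). The numpy sketch below illustrates only the central idea: an X-direction Sobel gradient that responds to vertical edges, followed by a simple column-voting scheme as a stand-in for the Hough line step:

```python
import numpy as np

# Sobel X template Gx1 from the text: responds to vertical edges only.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)

def sobel_x_magnitude(img):
    """Absolute horizontal-gradient response at each interior pixel."""
    h, w = img.shape
    out = np.zeros((h, w))
    f = img.astype(np.float64)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = abs(np.sum(SOBEL_X * f[i - 1:i + 2, j - 1:j + 2]))
    return out

def vertical_edge_columns(img, thresh):
    """Columns where a strong vertical edge spans most rows; a simplified
    stand-in for Hough line voting on real images."""
    g = sobel_x_magnitude(img)
    votes = (g > thresh).sum(axis=0)
    return np.where(votes > 0.8 * img.shape[0])[0]
```

On a synthetic image with a single vertical intensity step, the two columns straddling the step are detected, mirroring how the tube and scale edges register as near-full-height vertical lines.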

3.3. K-Means Clustering of Edge Information

To complete the segmentation, the disorganized line sets must be accurately classified to identify the left and right edges of the scale and the liquid tube indicator. K-means clustering is used for this classification [22]. Given a dataset X = {x1, x2, …, xn}, the algorithm randomly selects K objects as initial clustering centers, partitioning the data into clusters C = {ci, i = 1, 2, …, K}. Each cluster ci corresponds to a cluster center μi, and the within-cluster error is the sum of squared Euclidean distances from its members to that center:
J(ci) = ∑_(Xi ∈ ci) ‖Xi − μi‖²
The K-means clustering algorithm can be iteratively repeated to minimize the square of the distance from a given data set to the clustering center as follows [23]:
J(C) = ∑_(i=1)^(K) J(ci)
The accurate clustering of multidimensional data requires an appropriate value for K. To separate the two effective regions, the liquid tube and the scale, four straight lines are required (the left and right edges of the scale and the left and right edges of the liquid tube), so K = 4 in this study. Segmented lines were classified by identifying the peripheral edges of the scale and indicator tube, rather than the clustering centers. The x coordinates of the lines detected by the Hough transform (taken at a fixed y coordinate) were used as the data set. Clustering results are shown in Figure 9, where it is evident that the combination of edge detection and clustering produced an accurate segmentation, with the edges of the scale and liquid tube clearly identified. This approach is simple and efficient when provided with an appropriate value for K.
After processing and segmentation, the original image is divided into scale and indicator tube regions. The following section discusses the scale image and the corresponding relationship between scale digits and pixel coordinate positions. The location of the concave meniscus in the liquid tube is then used to calculate the true height h of the liquid level.

4. A Recognition Algorithm for the Scale Image

This section discusses the techniques used to extract effective information (i.e., scale line and increment digits) from segmented images. In this process, the liquid level was digitally identified and a correspondence between scale and position coordinates was established.

Recognition of Scale Numbers and Scale Lines

To identify the scale, an edge position search method is used to accurately locate the pixel positions of the scale lines, and a binary tree multiclass support vector machine (MCSVM) classifier is used to identify the scale numbers; the correspondence between scale values and pixel coordinates is then obtained.
As shown in Figure 10, scale lines of different types differ in length. Edge features were therefore used to count the number of edges in each column of the scale image. Because the 1 cm, 5 mm, and 1 mm scale lines produce different edge counts per column, the columns were clustered by edge count and edge coordinates, allowing the 1 cm scale lines to be recognized.
The edge identification steps for an M × N input image can be summarized as follows:
(1) The number of upper and lower edge features was counted in each column.
(2) These features were then divided into 1 cm, 1 cm + 5 mm, and 1 cm + 5 mm + 1 mm scale categories using K-means clustering. The columns containing only 1 cm tick marks were identified as the scale edge.
(3) Corresponding pixel coordinates for upper and lower edge features were sequentially detected and stored in the matrices E1 and E2, respectively.
(4) The pixel coordinates corresponding to specific edge features in each column were acquired by averaging E1 and E2.
(5) The corresponding element in the matrix Ei was used as the y coordinate of each 1 cm tick mark.
In this way, the number of columns and the characteristic coordinates were collected for all vertical edges. The column that met the required edge feature number was then identified, as demonstrated by the yellow line in Figure 11. The corresponding coordinates for the vertical edge were used as the y coordinates of the 1 cm scale, as shown in Figure 12. This technique was used to identify scale lines and a support vector machine (SVM) was included to read scale numbers [24]. A sample matrix containing digital scale and pixel coordinates is shown in Table 2.
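Steps (1) and (2) amount to counting edge transitions per column of a binary edge image and keeping the columns with the fewest transitions (those crossed only by 1 cm lines). A small numpy sketch under these assumptions:

```python
import numpy as np

def edge_counts_per_column(binary_edges):
    """Count rising transitions in each column of a binary edge image;
    each scale line crossing a column contributes one rising edge."""
    b = binary_edges.astype(np.int8)
    return (np.diff(b, axis=0) == 1).sum(axis=0)

def cm_scale_columns(binary_edges):
    """Columns crossed only by 1 cm lines have the fewest edges; columns
    also crossed by 5 mm and 1 mm lines have progressively more."""
    counts = edge_counts_per_column(binary_edges)
    active = counts[counts > 0]
    return np.where((counts > 0) & (counts == active.min()))[0]
```

On a synthetic scale where long 1 cm lines span every column but 5 mm and 1 mm lines stop short, only the rightmost columns survive the minimum-count test, which is exactly where the 1 cm y coordinates are read off.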

5. Extraction and Reconstruction of the Concave Meniscus

The influence of gravity and surface tension causes the liquid in the tube to form a concave meniscus, which can be identified in the images from the sudden change in pixel values. After applying an adaptive graying method, Sobel edge detection was used to extract the liquid level, as shown in Figure 13. This adaptive approach, demonstrated in Figure 13c, can effectively improve the accuracy of liquid level detection beyond conventional techniques (see Figure 13d). The resulting image after edge detection and removal of smaller connected domains is shown in Figure 13e.
Although the tube edges are clear, the meniscus boundaries are obscured by factors such as lighting and latent fluid on the tube walls. As such, a fitting reconstruction was performed to determine precise liquid level coordinates. The liquid level curve equation is fixed by the constraint that hardware and environmental conditions in the verification system remain constant. An image was acquired in standard test conditions and a leveling curve was produced by data fitting. Inaccurate data in the conventional test process were then calibrated using the known standard level curve, as shown in Figure 14.
The liquid level was collected and extracted multiple times at a temperature of 25 °C, a relative humidity of 50%, an atmospheric pressure of 101 kPa, and sufficient lighting, according to volume verification regulations [25]. The best image was selected after processing and used for fitting, as shown in Figure 15e. The lower edge of the extracted region provided the ‘C’ line for meniscus measurements and the data were fitted to the level curve using a least squares algorithm.
This process can be described as follows. For a given set of data {(xi, yi), i = 0, 1, 2, …, m} and a fitting function f(x), the sum of squared errors between the fitted model and the original data is:
S(a0, a1, a2, …, an) = ∑_(i=0)^(m) [f(xi) − yi]²
The optimal approximation to the original data, the least squares fitting curve f(x), is then found by minimizing the sum of squared errors [26].
Polynomial fitting involves the use of a model function defined as f(x) = a0 + a1·x + ⋯ + an·xⁿ. Input coordinates were collected from the lower edge of the standard concave meniscus image and used to calculate the quadratic function:
f(x) = 0.003367(x + a)² − 0.4137(x + a) + b
The resulting fitting curve for the concave meniscus, in which a = 0 and b = 625.6, is shown in Figure 16. The results of using this function to reconstruct a liquid level line are shown in Figure 17. The white curve in Figure 17b represents the measured level and the red curve defines the reconstructed meniscus surface. The sum of pixels is denoted by N1 in the overlapping region and by N2 on the actual level curve. The evaluation score for the fitted function is given by:
T = N1 / N2
where higher values of T indicate a more accurate fit. The function f(x) can be used to represent the liquid level curve, and its minimum value denotes the pixel height coordinate l0 of the level. After identifying the lowest point on the curve and the corresponding relationship between scale numbers and pixels, the neck liquid level h2 can be calculated as follows:
h2 = na + (l1 − l0) / (l1 − l2)
where l0 is the pixel coordinate of the liquid level, l1 and l2 are the pixel coordinates of the lower and upper 1 cm scale lines bracketing it (respectively), and na is the actual scale value corresponding to l1, so that h2 is expressed in the same units as the scale. Pixel coordinate values for the concave liquid level and scale numbers were then determined automatically, as shown in Figure 18. In the sample images shown above, the pixel coordinate value was determined to be 561 for the neck liquid level, corresponding to a scale height of 17–18 cm. The corresponding value of h2 was determined to be 171.22 mm.
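The meniscus fit and the level interpolation described in the text can be sketched as follows; np.polyfit stands in for the least squares solution, and the scale-line pixel coordinates in the example are hypothetical:

```python
import numpy as np

def fit_meniscus(xs, ys):
    """Least-squares quadratic fit of the meniscus lower-edge pixels;
    returns the y pixel coordinate of the lowest point of the curve."""
    a2, a1, a0 = np.polyfit(xs, ys, 2)
    x_min = -a1 / (2.0 * a2)                 # vertex of the parabola
    return a2 * x_min ** 2 + a1 * x_min + a0  # l0: level pixel coordinate

def neck_level(na_cm, l0, l1, l2):
    """Interpolate the level between the bracketing 1 cm scale lines:
    h2 = na + (l1 - l0) / (l1 - l2), in the same units as the scale
    (cm, since adjacent detected scale lines are 1 cm apart)."""
    return na_cm + (l1 - l0) / (l1 - l2)
```

With the detected level pixel l0 = 561 and hypothetical bracketing scale lines at, say, l1 = 570 and l2 = 490, this yields a level slightly above 17 cm, consistent in form with the 171.22 mm example in the text.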

6. Experimental Verification

The proposed algorithm for measuring neck liquid levels using image recognition was assessed using a pipe prover verification system in actual working conditions. Automated measurements were compared to manual readings collected with Vernier calipers; the results are shown in Table 3. It is evident that the image recognition results are close to the manual readings, with absolute errors within ±0.1 mm. The recognition time, including follow-up control, was less than 10 s, which meets industry requirements.

7. Conclusions

In this study, an automated technique for measuring liquid gauge neck levels using image recognition was proposed and validated. The accuracy of the proposed system was within 0.1 mm of manual measurements, which meets the 0.2 mm industry requirement. The entire follow-up control, collection, processing, and recognition time for liquid level was less than 10 s, which is significantly lower than the required switching time between measurement tanks. As such, the proposed system satisfies both accuracy and speed requirements for the reading of gauge neck levels in pipe prover verification systems. The proposed method can effectively overcome long collection times, tedious measurements, and large errors associated with manual liquid level readings. It can also improve both the efficiency of verification and the degree of automation for a pipe prover system.

Author Contributions

All authors analysed the data and were involved in writing the manuscript. C.H. conceived the idea of the study, analysed the data, and revised the manuscript. C.X. designed the experiments, analysed the data, and wrote the initial draft of the manuscript. X.X. performed the experiments, analysed the data, and wrote the initial draft of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Shandong Provincial Natural Science Foundation [grant number ZR2016EEM46].

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. A schematic diagram of the water calibration system used in the pipe prover. Labels include: 1—pool, 2—liquid level collection system, 3—pump, 4—ball push device, 5—volume tube, 6—infrared photoelectric detection switch, (V1–V8)—solenoid valves, (D1 and D2)—detection switches, (P1 and P2)—pressure transmitters, (T1 and T2)—temperature transmitters, C1—commutator, (R1 and R2)—standard metal gauges, and M1—flowmeter.
Figure 2. Readings of a concave surface from different viewing angles, including (a) a view from outside the reading range of the concave surface, (b) looking down, (c) the liquid surface at the same horizontal height as the eye, (d) looking up, and (e) a view from outside the reading range of the concave surface.
Figure 3. A diagram of the proposed automated acquisition system for liquid level measurements. Labels include: 1—indicator tube, 2—infrared photoelectric detection switch, 3—measurement neck, 4—ball screw linear guide, 5—camera position A, 6—camera position B, and 7—stepper motor.
Figure 4. The original image of the liquid level.
Figure 5. A general block diagram for the liquid level recognition algorithm.
Figure 6. Edge detection results for the (a) grayscale image of the original graph using various operators, including the (b) Roberts, (c) Prewitt, (d) Sobel (X direction), (e) Sobel (Y direction), (f) Sobel (standard), (g) Canny, and (h) Laplace operators.
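As a runnable illustration of the operator comparison in Figure 6, the sketch below applies the standard Sobel kernels to a synthetic vertical step edge. The 8×8 test image and the plain-loop convolution are assumptions for demonstration only, not the paper's implementation.

```python
import numpy as np

# Sobel kernels for horizontal (KX) and vertical (KY) gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude over the valid (non-padded) region."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * KX)
            gy[i, j] = np.sum(patch * KY)
    return np.hypot(gx, gy)

# Synthetic image: dark left half, bright right half (vertical edge)
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag = sobel_magnitude(img)
print(int(mag.argmax() % mag.shape[1]))  # column of the strongest response
```

The strongest response sits at the columns adjacent to the brightness step, which is why the Sobel (X direction) result in Figure 6d highlights the vertical tube walls.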
Figure 7. A block diagram of the improved edge detection algorithm.
Figure 8. Results of (a) the improved edge detection and (b) Hough line detection.
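The Hough line detection in Figure 8b can be sketched with a minimal (rho, theta) voting scheme. The synthetic 40×40 edge map with a single vertical line, the 15° angle step, and the loop-based accumulator are all simplifying assumptions, not the improved algorithm of reference [20].

```python
import numpy as np

def hough_lines(edge, thetas):
    """Vote in (rho, theta) space for every edge pixel (minimal sketch)."""
    h, w = edge.shape
    diag = int(np.ceil(np.hypot(h, w)))          # maximum possible |rho|
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=int)
    ys, xs = np.nonzero(edge)
    for t_idx, th in enumerate(thetas):
        rhos = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        for r in rhos:
            acc[r, t_idx] += 1
    return acc, diag

# Edge map containing a single vertical line at x = 12
edge = np.zeros((40, 40), dtype=bool)
edge[:, 12] = True
thetas = np.deg2rad(np.arange(0, 180, 15))
acc, diag = hough_lines(edge, thetas)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(r - diag, int(np.rad2deg(thetas[t])))      # recovered rho and theta
```

All 40 pixels of the vertical line vote for the same accumulator cell at theta = 0°, so the peak directly yields the line's position, matching how the tube edges are localized in Figure 8.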
Figure 9. Image segmentation results using the K-means clustering algorithm, including (a) the initial linear edge data, (b) the clustering data set, (c) the clustering results and cluster centers, (d) cluster four, and (e) an effect map based on clustering edge segmentation.
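The clustering step of Figure 9 can be illustrated with a one-dimensional K-means over the x-coordinates of the detected line edges. The synthetic coordinates (two tube walls and two scale-plate borders) and the evenly spaced initialization are assumptions for the sketch; the paper's algorithm follows the classic formulation of reference [22].

```python
import numpy as np

def kmeans_1d(xs, k, iters=20):
    """Minimal 1-D K-means with evenly spaced initial centers."""
    centers = np.linspace(xs.min(), xs.max(), k)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means
        labels = np.argmin(np.abs(xs[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = xs[labels == c].mean()
    return np.sort(centers)

# Synthetic x-coordinates of edge pixels: two indicator-tube walls and two
# scale-plate borders, with small symmetric jitter (values are illustrative)
xs = np.concatenate([np.full(50, 100.0), np.full(50, 140.0),
                     np.full(50, 300.0), np.full(50, 340.0)])
xs += np.tile(np.arange(-2.0, 3.0), 40)
centers = kmeans_1d(xs, 4)
print(centers)
```

The four recovered centers correspond to the four vertical boundaries that separate the indicator tube from the scale, which is how the segmentation map in Figure 9e is produced.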
Figure 10. Upper and lower edge features in the scale image.
Figure 11. Recognition results for the 1 cm calibration line (rotated 90 degrees to the right).
Figure 12. Recognition results for scale numbers and scale lines.
Figure 13. Recognition results for the adaptive gray method applied to the liquid tube, including (a) the original image, (b) the adaptive gray image, (c) edge detection results for the adaptive grayscale image, (d) results for the traditional grayscale image, and (e) the removal of smaller connected domains.
Figure 14. A block diagram for the curve fitting and reconstruction algorithms applied to modeling the concave meniscus.
Figure 15. The extraction of standard concave liquid level images, including (a) the adaptive gray image, (b) the edge detection image, (c) the outline image, (d) the maximum outline, and (e) the smoothness processing image.
Figure 16. Fitting curves for (a) the standard concave meniscus. The images include (b) the level curve, (c) the fitting line (shown in red), and (d) an enlarged view of the resulting fit.
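The curve fitting of Figure 16 can be sketched as a least-squares parabola fit to the extracted meniscus contour, with the lowest point taken at the vertex. The synthetic contour below (vertex at x = 3, y = 250 in image coordinates) is a hypothetical stand-in for the extracted level curve.

```python
import numpy as np

# Synthetic meniscus contour in image coordinates (concave-down parabola;
# the shape and coefficients are illustrative assumptions)
x = np.arange(-20, 21, dtype=float)
x0_true, y0_true = 3.0, 250.0
y = y0_true - 0.05 * (x - x0_true) ** 2

# Least-squares fit of a quadratic a*x^2 + b*x + c to the contour points
a, b, c = np.polyfit(x, y, 2)
x_low = -b / (2 * a)                 # vertex: extremum of the fitted parabola
y_low = np.polyval([a, b, c], x_low)
print(round(x_low, 2), round(y_low, 2))
```

Reading the level at the fitted vertex, rather than at a single raw contour pixel, suppresses pixel-level noise along the meniscus, which is the purpose of the fitting step shown in Figure 16c,d.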
Figure 17. Reconstruction of the concave liquid level, including (a) the actual level line, (b) liquid level extraction and reconstruction, and (c) the reconstructed result.
Figure 18. A schematic diagram showing the process used for measuring the metered neck level.
Table 1. An evaluation index for the two filtering algorithms.

Index | Bilateral Filtering | Anisotropic Diffusion
PSNR  | 41.5381             | 35.9861
SSIM  | 0.9952              | 0.9908
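The PSNR values in Table 1 follow the standard definition for 8-bit images. The sketch below computes it on a synthetic reference/noisy pair; the images and noise level are assumptions, and SSIM is omitted here because it requires the windowed statistics of reference [13].

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic 8-bit reference and a mildly noisy copy (stand-ins for the
# filtered images compared in Table 1)
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = ref + rng.normal(0.0, 2.0, size=ref.shape)
value = psnr(ref, noisy)
print(value > 35.0)  # well-filtered images typically score above 35 dB
```

A higher PSNR means the filtered image is closer to the reference, which is why bilateral filtering (41.54 dB) was preferred over anisotropic diffusion (35.99 dB) in Table 1.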
Table 2. The relationship between scale indicator readings and pixel values.

Scale Reading (n)    | 21 | 20  | 19  | 18  | 17  | 16  | 15  | 14  | 13
Pixel Coordinate (l) | 59 | 157 | 255 | 353 | 451 | 549 | 647 | 745 | 843
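The pairs in Table 2 define a linear map between pixel coordinates and centimetre scale readings (one division every 98 pixels). A least-squares fit of this map, and its use to interpolate a sub-division reading at a hypothetical meniscus pixel, can be sketched as follows:

```python
import numpy as np

# Calibration pairs from Table 2: centimetre scale reading vs. pixel coordinate
readings = np.array([21, 20, 19, 18, 17, 16, 15, 14, 13], dtype=float)
pixels = np.array([59, 157, 255, 353, 451, 549, 647, 745, 843], dtype=float)

# Least-squares line: reading = a * pixel + b
a, b = np.polyfit(pixels, readings, 1)

def pixel_to_reading(l):
    """Interpolate a fractional scale reading from a meniscus pixel coordinate."""
    return a * l + b

print(round(a * 98.0, 4))               # one scale division per 98 pixels
print(round(pixel_to_reading(450), 2))  # a pixel just above the 17 cm mark
```

Because the calibration pairs are equally spaced, the fit is exact, and any meniscus pixel between two centimetre lines can be converted directly to a fractional reading.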
Table 3. A comparison between the proposed liquid level recognition algorithm and manual readings with Vernier calipers.

Serial Number | Image Recognition (mm) | Reading of Vernier Caliper (mm) | Absolute Error (mm)
1             | 96.54                  | 96.49                           | 0.05
2             | 100.84                 | 100.92                          | −0.08
3             | 105.93                 | 105.87                          | 0.06
4             | 112.76                 | 112.78                          | −0.02
5             | 121.41                 | 121.44                          | −0.03
6             | 130.85                 | 130.91                          | −0.06
7             | 134.96                 | 134.89                          | 0.07
8             | 145.45                 | 145.50                          | −0.05
9             | 152.28                 | 152.31                          | −0.03
10            | 165.44                 | 165.39                          | 0.05
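The ±0.1 mm error claim can be checked directly from the paired measurements in Table 3:

```python
import numpy as np

# Paired measurements from Table 3 (mm)
recognized = np.array([96.54, 100.84, 105.93, 112.76, 121.41,
                       130.85, 134.96, 145.45, 152.28, 165.44])
caliper = np.array([96.49, 100.92, 105.87, 112.78, 121.44,
                    130.91, 134.89, 145.50, 152.31, 165.39])

errors = recognized - caliper
max_abs_error = float(np.max(np.abs(errors)))
print(round(max_abs_error, 2))              # worst-case absolute error
print(bool(np.all(np.abs(errors) <= 0.1)))  # within the ±0.1 mm bound
```

The worst-case absolute error across the ten trials is 0.08 mm, comfortably inside both the ±0.1 mm figure reported here and the 0.2 mm industry requirement.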

Share and Cite

Hua, C.; Xie, C.; Xu, X. Follow-Up Control and Image Recognition of Neck Level for Standard Metal Gauge. Appl. Sci. 2020, 10, 6624. https://doi.org/10.3390/app10186624