2.1. Characteristics of Froth Images
As shown in Figure 1 and Figure 2, due to the 3D geometry of bubbles and light reflection, each bubble contains one or more high gray value regions, called “white spot areas”. The boundary regions between adjacent bubbles have low gray values, called “boundary areas”, where the local gray value minima points can be considered bubble boundary pixels. In images of large bubbles, the number of bubbles is small, the white spot areas are large, and the boundary regions are long on average (see Figure 1); in images of small and fine bubbles, the number of bubbles is large, the white spot areas are small, and the boundary regions are short and narrow on average (see Figure 2). Note that all of the images used in this paper are lead froth images obtained from the Jin Dong flotation plant in China. In the plant, the raw ore mainly contains lead and zinc, along with sulfur, iron, silver, copper and other sulfide minerals, and the gangue minerals include quartz, feldspar, pyroxene, garnet, chlorite and other silicate minerals. Hence, the flotation images are complicated to segment, and a new image segmentation algorithm has to be studied.
For the above reasons, before image segmentation, a froth image is first classified as an image of large or non-large bubbles. For an image of large bubbles, a large template is used in the subsequent local gray value minima detection; for an image of non-large bubbles, a small template is used.
Figure 1.
Gray value characteristics of the image of large bubbles. (a) a large bubble image; (b) left: a part of the image (a); right: the 3D surface of the left image; (c) left: large bubbles and the profile line; right: the gray level histogram of the profile corresponding to the left image.
2.2. Edge Detection Based on Improved Local Gray Value Minima
According to the above analyses of the gray value variations in a froth image, it can be concluded that if all the local gray value minima points are detected, they can be used to delineate bubble edges.
For a large bubble, the boundary region is a long border area, and there are often multiple local valley points in it. Moreover, because of the rough surface of a large bubble, the gray value distribution on the bubble surface is quite uneven, as shown in Figure 3; in other words, a number of local gray value minima points exist on the surface, and these points are not bubble boundary points. Once directly extracted, this non-boundary information often forms many closed curves that are very difficult to remove with subsequent non-boundary information filters. Thus, to keep the non-boundary information as small as possible, for large bubbles the segmentation algorithm adopts a large kernel for local gray value minima detection, e.g., 5 × 5 or 7 × 7, which increases the comparison distance. To further suppress the speckle noise on a froth surface, the algorithm uses weighted average gray values in a local area for comparison instead of a single pixel gray value.
Figure 2.
Gray value characteristics of the image of small bubbles. (a) A small bubble image; (b) left: a part of the image (a); right: the 3D surface of the left image; (c) left: small bubbles and the profile line; right: the gray level histogram of the profile corresponding to the left image.
Figure 3.
Gray value variation between two bubbles. Left: large bubble and the profile line; Right: the gray value histogram of the profile corresponding to the left image from bottom to top.
Based on the above idea, an edge detection method based on the improved local gray value minima is designed and developed to extract bubble edges.
We use $I(x,y)$ to denote an original froth image and $E(x,y)$ to denote its edge image; all values in $E(x,y)$ are initialized to 0.
Step 1: for an image of large bubbles, the algorithm checks whether the current detecting pixel is a local gray value minima point within a 5 × 5 kernel. The judgment conditions in the four directions are as follows:
where the weight coefficients are generally inversely proportional to the distance from the detecting pixel. The four groups of formulae detect the local gray value minima points in the 0°, 45°, 90°, and 135° directions of the current detecting pixel, respectively. In Figure 4, the left diagram shows the four detection directions, where the area enclosed by the solid curve is the first comparison region and the area enclosed by the dotted curve is the second comparison region; the right image shows the detection template in the 0° direction, where the shaded parts are the pixels used for searching and comparison. The other directions are analogous.
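Since the four groups of formulae follow the same pattern, a minimal Python sketch of the 5 × 5 directional test may help. The weights (0.7, 0.3) and the strict comparison against the weighted averages on both sides are illustrative assumptions, not the paper's exact coefficients:

```python
# Unit steps for the 0, 45, 90 and 135 degree directions.
DIRECTIONS = ((0, 1), (-1, 1), (-1, 0), (-1, -1))

def is_directional_minimum(img, y, x, w=(0.7, 0.3)):
    """Return True if pixel (y, x) is a local gray value minimum along
    one of the four directions within a 5x5 kernel, comparing against
    distance-weighted average gray values on both sides. The caller
    must keep (y, x) at least two pixels away from the image border.
    The weights w (nearer pixel weighted more heavily) are assumed."""
    p = float(img[y, x])
    for dy, dx in DIRECTIONS:
        ahead = w[0] * img[y + dy, x + dx] + w[1] * img[y + 2*dy, x + 2*dx]
        behind = w[0] * img[y - dy, x - dx] + w[1] * img[y - 2*dy, x - 2*dx]
        if p < ahead and p < behind:
            return True
    return False
```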
Figure 4.
Diagram for detection algorithm. (a) The four directions; (b) detection template in 0° direction of a froth image of large bubbles.
For an image of non-large bubbles, we search for the local gray value minima points in a 3 × 3 kernel, and the above judgment conditions are simplified as:
If one of the conditions is satisfied, the detected point is marked as an edge point, that is, $E(x,y)$ is set to 1, and its location and direction are also recorded; otherwise, go to Step 2.
Step 2: for all images, we check whether the current detecting pixel is a local gray value minima point in a 5 × 5 kernel. This is similar to Step 1 but with a different set of conditions. This step extracts edge points whose gray value is equal to that of another point in the neighborhood region. The judgment conditions are:
where the first four formulae detect the local gray value minima points in the 0°, 45°, 90°, and 135° directions of the current detecting pixel, respectively, and the remaining four formulae are used in the reverse directions 180°, 225°, 270° and 315°, respectively.
If one of the above conditions is met, the detected point is marked as an edge point, i.e., $E(x,y)$ is set to 1, and its location and direction are also recorded.
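A sketch of the Step 2 test follows; since the eight conditions are not reproduced here, the exact comparison rule below (strict “<” toward one side, “<=” toward the other, evaluated for each of the eight directions) is a guess at the intent, not the paper's formulae:

```python
def is_tied_minimum(img, y, x):
    """Step 2 sketch: accept a pixel whose gray value is a minimum in
    one direction but tied with a neighbor on the opposite side. The
    use of the two-pixel minimum on each side and the '<='/'<' split
    are illustrative assumptions."""
    p = img[y, x]
    for dy, dx in ((0, 1), (-1, 1), (-1, 0), (-1, -1),
                   (0, -1), (1, -1), (1, 0), (1, 1)):
        ahead = min(img[y + dy, x + dx], img[y + 2*dy, x + 2*dx])
        behind = min(img[y - dy, x - dx], img[y - 2*dy, x - 2*dx])
        if p <= ahead and p < behind:
            return True
    return False
```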
In accordance with the above detection method, each pixel in an image is tested to see whether it is a local minima point in a certain direction, and any pixel having a local gray value minimum in some direction is assigned as an edge point. Note that the edge point detection procedure is performed after image classification.
Figure 5 gives the workflow of the edge detection algorithm. The edge detection results for different classes of bubble images are shown in
Figure 6.
As shown in
Figure 6, the significant bubble edges are detected and most of the white spot edges are eliminated for every class of bubble image. The isolated points, short line segments, and other non-boundary information are removed in the subsequent post-processing procedures.
Figure 5.
Algorithm workflow of the edge detection based on local gray value minima.
Figure 6.
Edge detection results based on the proposed algorithm. (a–c) represent the original images of large, middle and small bubbles, respectively; (d–f) are the edge detection results corresponding to the images in (a), (b) and (c), respectively.
2.3. Image Pre-Processing and Classification
In general, there is a lot of noise in an original froth image, which makes the froth surface complicated. The noise might be detected as local gray value minima points and retained because it satisfies some of the judgment conditions. So we need to smooth the original image to alleviate the noise before segmentation. One simple way to reduce noise is to use a smoothing filter such as a mean filter or a Gaussian filter, and this de-noising can be done before segmenting the classified froth images.
Figure 7 shows the images smoothed by a Gaussian filter, from which we can see that the gray value variation in the image is more even after filtering, which benefits the subsequent edge extraction based on the local gray value minima.
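As a minimal sketch, the smoothing step might look like the following, assuming OpenCV; the file name, kernel size and sigma are illustrative choices, not values from the paper:

```python
import cv2

# Load a froth image as grayscale ("froth.png" is a placeholder name).
img = cv2.imread("froth.png", cv2.IMREAD_GRAYSCALE)

# Gaussian smoothing; the 5x5 kernel and sigma = 1.0 are illustrative.
smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)
```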
Another preprocessing task is to classify the froth images according to bubble size. We have previously studied froth image classification with statistical analysis [15]. The statistical data have proven two facts: first, the size of the white spot area on a bubble is proportional to the bubble size; second, the number of white spots in an image is inversely proportional to the average bubble size. In this study, we classify images based on the latter. To extract the white spots in an image, many traditional threshold algorithms, such as Otsu thresholding [24], can be used. As is well known, the Otsu algorithm is an automatic threshold segmentation method that performs well for images with an obvious gray level separation between objects and background, but it is ineffective for images with an unclear background; for example, in images of small and tiny bubbles the white spots are too small or of too low gray value, so it cannot extract the white spots effectively. Moreover, after thresholding, region labeling and area calculation must be carried out to obtain the number and size distribution of the white spots. To alleviate these problems and further simplify the image classification, this paper uses a corner detection method to conveniently obtain the number of white spots for bubble image classification.
Figure 7.
Gray value variance comparison before and after Gaussian filtering. (a) Original froth image and a profile line; (b) Gray value histogram of the profile corresponding to the image (a) from left to right; (c) Result smoothed by the Gaussian filter and profile line at the same position; (d) Gray value histogram of the profile corresponding to the image (c) from left to right.
A corner point carries information about gray level changes in an image. Generally, a corner point is considered to be a point of maximum curvature or of a drastic brightness change. Such points not only retain important image features but also effectively reduce the amount of data, which improves computation speed. Corner points play a very important role in motion estimation, target tracking, image matching, and other computer vision tasks. In a froth image, the white spots of bubbles are areas with drastic intensity changes, which is consistent with the definition of a corner point. Thus, the white spots can be obtained with a corner detection method, and the number of corner points represents the number of white spots.
At present, corner detection algorithms can be divided into three types, based respectively on gray-scale images, binary images, and contour curves. The Harris corner detection algorithm is a template-based gray level corner detection method [25]. The algorithm measures the gray value change in the neighborhood of the detected pixel, and defines the pixel as a corner point when the change is large enough.
We now give a brief mathematical expression of the Harris corner detection algorithm. Denoting the image intensities by $I$ and the window function by $w$, the window function is applied to the image pixel by pixel to detect the gray level change. The change $E$ produced by a shift $(u,v)$ is given by:

$$E(u,v) = \sum_{x,y} w(x,y)\left[I(x+u,\,y+v) - I(x,y)\right]^2 \approx \sum_{x,y} w(x,y)\left[uI_x + vI_y\right]^2$$

where $I_x$ and $I_y$ denote the first gradients in the $x$ and $y$ directions, respectively. The Gaussian equation is used as the window function:

$$w(x,y) = \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

For a small shift $(u,v)$, the change $E$ can be concisely written as:

$$E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

where $M$ is a 2 × 2 symmetric matrix:

$$M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

Since $M$ is symmetric, it can be diagonalized as:

$$M = P \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} P^{-1}$$

where $\lambda_1$ and $\lambda_2$ are the eigenvalues of $M$; they are proportional to the principal curvatures of the local auto-correlation function and form a rotationally invariant description of $M$.

The corner response function $R$ is defined as:

$$R = \det M - k(\operatorname{trace} M)^2 = \lambda_1\lambda_2 - k(\lambda_1 + \lambda_2)^2$$

where $k$ is an empirical constant, usually in the range of 0.04–0.06. $R$ is positive in the corner region, negative in the edge region, and small in the flat region.
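As a worked sketch of the response function, OpenCV's cornerHarris computes $R$ directly at every pixel; the block size, Sobel aperture, $k$, and the relative threshold below are illustrative values, not parameters from the paper:

```python
import cv2
import numpy as np

img = cv2.imread("froth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# R = det(M) - k * trace(M)^2 at every pixel; blockSize sets the
# neighborhood used for M, ksize the Sobel aperture, k the constant.
R = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)

# Pixels with a large positive response are corner candidates.
corners = R > 0.01 * R.max()
```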
The Harris corner point detection algorithm compares the corner response function R with a threshold given in advance; if R is greater than the threshold, the pixel is marked as a corner point. In the improved Harris corner detection algorithm, the corner criterion is changed: of the two eigenvalues, if the smaller one is greater than a given threshold, the pixel is determined to be a corner point. Meanwhile, the improved algorithm introduces a tolerance distance parameter, and within the given tolerance distance only the strongest corner point is retained. With a reasonable tolerance distance, the improved Harris corner detection algorithm is better suited to extracting the white spots for image classification, and the calculation is simple.
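This improved criterion (smaller eigenvalue above a threshold, plus a minimum separation between retained corners) corresponds closely to what OpenCV's goodFeaturesToTrack provides, so a sketch under that assumption follows. Note that qualityLevel is a relative threshold, so the paper's absolute threshold of 5 does not map onto it directly; minDistance plays the role of the tolerance distance:

```python
import cv2

img = cv2.imread("froth.png", cv2.IMREAD_GRAYSCALE)

# Min-eigenvalue corner detection with a minimum separation between
# retained corners (the "tolerance distance").
corners = cv2.goodFeaturesToTrack(
    img,
    maxCorners=0,             # no limit: return all corners that pass
    qualityLevel=0.01,        # illustrative relative threshold
    minDistance=11,           # tolerance distance in pixels
    useHarrisDetector=False,  # threshold the smaller eigenvalue of M
)
n_white_spots = 0 if corners is None else len(corners)
```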
The corner point extraction results for an image of large bubbles and an image of small bubbles are shown in Figure 8. As we can see, most of the white spots are successfully marked as corner points. Although there are some false detections and some missed white spots, they account for only a small proportion and do not affect the accuracy of the image classification result.
Figure 8.
Improved Harris corner point detection. (a) Result of the large bubbles with tolerance distance 15; (b) Result of the small bubbles with tolerance distance 11.
As an illustrative example, we randomly choose six bubble images for corner point detection. The six images, of size 384 × 288, are easily distinguished as images of large, middle and small bubbles: two of large bubbles, two of middle bubbles, and the remaining two of small bubbles.
Table 1 shows the basic data and classification information, and the corresponding corner point number distribution is shown on the left of
Figure 9.
From
Table 1, we can see that, as the tolerance distance decreases, the differences among the corner point numbers of the different bubble images grow larger. With tolerance distance 40, the difference among the three classes of bubble images is small; with tolerance distance 8, the difference is large. With tolerance distance 11, the corner point numbers in the images of small and middle bubbles are both above 200, while the numbers in the images of large bubbles remain below 200.
To verify this general rule, with threshold 5 and tolerance distance 11 pixels, we further test 10 images of large bubbles and 20 images of middle, small and tiny bubbles; the data are shown on the right of Figure 9, where the red dots represent the corner point numbers of the images of large bubbles, the green dots those of the images of middle and small bubbles, and the blue dots those of the images of tiny bubbles. The data distribution illustrates that the classification rule is feasible. Based on the above data analyses, we obtain the following classification rule: with threshold 5 and tolerance distance 11 pixels, if the number of corner points is above 200, the image belongs to the non-large bubble class (images of small, tiny and middle bubbles); otherwise, it belongs to the large bubble class.
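In code, the resulting rule is a one-line decision on top of the corner count; the function name and the reuse of the goodFeaturesToTrack sketch above (with its relative qualityLevel standing in for the paper's absolute threshold of 5) are assumptions for illustration:

```python
import cv2

def classify_froth_image(img):
    """Classify a grayscale froth image as 'large' or 'non-large'
    bubbles by counting corner points (white spots), per the rule
    above: more than 200 corners at tolerance distance 11 pixels
    means non-large bubbles."""
    corners = cv2.goodFeaturesToTrack(
        img, maxCorners=0, qualityLevel=0.01, minDistance=11)
    n = 0 if corners is None else len(corners)
    return "non-large" if n > 200 else "large"
```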
Figure 9.
Corner point numbers (white spot numbers) distribution. (a) Corner point number distribution from Table 1; (b) Corner point number distribution of 30 bubble images of different types.
Table 1.
Classification analysis based on the number of corner points (threshold of corner point detection algorithm is 5).
| Tolerance Distance (Pixel) | Large Image No. 1 | Large Image No. 2 | Middle Image No. 1 | Middle Image No. 2 | Small Image No. 1 | Small Image No. 2 |
|---|---|---|---|---|---|---|
| 5 | 210 | 200 | 269 | 310 | 787 | 677 |
| 8 | 187 | 176 | 249 | 284 | 539 | 470 |
| 11 | 169 | 164 | 213 | 236 | 359 | 372 |
| 15 | 138 | 136 | 162 | 176 | 234 | 213 |
| 20 | 97 | 106 | 113 | 115 | 148 | 134 |
| 25 | 75 | 80 | 80 | 88 | 104 | 103 |
| 30 | 58 | 56 | 63 | 63 | 77 | 74 |
| 40 | 41 | 38 | 42 | 41 | 49 | 47 |
| 50 | 27 | 30 | 28 | 31 | 35 | 32 |
2.4. Post-Processing after Image Segmentation
After the image classification and the local gray value minima detection, most significant bubble edges exist in the gradient magnitude image. But the resulting image may not be satisfactory, because there are some isolated points, short line segments and small closed curves caused by noise, and some bubble edges are discontinuous. To overcome these problems, we perform a number of post-processing operations. Our post-processing subroutines include noise filtering, thinning, endpoint detection and gap linking, and region merging.
Isolated points and short line segments of 2–3 pixels can easily be removed by common filtering operators. For images of small and middle bubbles, the non-boundary information is limited, and after filtering the edge image 3–4 times, the isolated points, short line segments and other non-boundary points are almost completely removed. For images of large bubbles, more filtering passes are needed to remove the noise effectively.
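One plausible way to implement this filtering step, assuming the edge map is an 8-bit binary image, is to drop connected components below a size threshold; the min_pixels value here is an illustrative stand-in for the repeated filtering passes described above:

```python
import cv2
import numpy as np

def remove_small_segments(edges, min_pixels=4):
    """Remove isolated points and short line segments (fewer than
    min_pixels pixels) from an 8-bit binary edge image."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        edges, connectivity=8)
    out = np.zeros_like(edges)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_pixels:
            out[labels == i] = 255
    return out
```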
Before the edges are further thinned to a width of one pixel, this simple noise filtering is performed. The last task is to close the bubble boundaries, which is done by endpoint detection and gap linking. We use the connection number of the detecting pixel to determine whether the pixel is an endpoint. The connection number $N_c$ is calculated by the following formula:

$$N_c = \frac{1}{2} \sum_{k=1}^{8} \left| f(x_{k+1}) - f(x_k) \right|$$

where $x_k$ denotes the $k$-th pixel of the eight-neighborhood of the detecting pixel, as shown in Figure 10, and $f(x_k)$ represents the gray value of pixel $x_k$, where we define $f(x_k) = 1$ when the pixel is a white point (edge point) and $f(x_k) = 0$ when the pixel is a non-edge point; and when $k = 8$, we define $x_9 = x_1$.

For each edge point, the connection number $N_c$ is calculated. When $N_c = 1$, the detected edge point is marked as an endpoint, and both its direction and location are recorded.
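A direct transcription of this test, assuming a binary edge map with values 0 and 255, might look as follows; the clockwise neighbor ordering is an assumption, since any consistent cyclic order gives the same connection number:

```python
# Offsets of the eight neighbors x1..x8 in cyclic order around the
# detecting pixel; any consistent cyclic ordering works.
NEIGHBORS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
             (1, 0), (1, -1), (0, -1), (-1, -1)]

def connection_number(edges, y, x):
    """Connection number N_c of pixel (y, x) in a binary edge image;
    N_c == 1 marks an endpoint."""
    f = [1 if edges[y + dy, x + dx] > 0 else 0 for dy, dx in NEIGHBORS]
    f.append(f[0])  # wrap around: x9 = x1
    return sum(abs(f[k + 1] - f[k]) for k in range(8)) // 2
```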
Figure 10.
Gap linking diagram between the two endpoints and the representation of the pixels in the eight-neighborhood of the detecting pixel x.
When all the endpoints are marked, the next step is to link them based on the principles of similar direction and nearest distance. We search for another endpoint in the 5 × 5 kernel centered at the detected endpoint. When a new endpoint is found and its direction is the same as that of the detected endpoint, the new endpoint is linked to the current endpoint along that direction. When the directions differ, we choose the endpoint with the shortest distance and link it along the direction of the current endpoint. When no endpoint is found, the current endpoint is skipped and the next endpoint is processed.
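A minimal sketch of this linking rule, assuming endpoints are given as (y, x, direction) tuples and that drawing a straight bridging segment is an acceptable stand-in for linking along a direction, follows; the function name and tuple layout are hypothetical:

```python
import cv2

def link_endpoints(edges, endpoints):
    """Link each endpoint to a nearby endpoint within its 5x5
    neighborhood, preferring one with the same direction, otherwise
    the nearest one. endpoints: list of (y, x, direction) tuples."""
    for y, x, d in endpoints:
        # Candidates within the 5x5 kernel (Chebyshev distance <= 2).
        near = [(yy, xx, dd) for yy, xx, dd in endpoints
                if (yy, xx) != (y, x)
                and abs(yy - y) <= 2 and abs(xx - x) <= 2]
        if not near:
            continue  # no endpoint found: skip to the next endpoint
        same_dir = [c for c in near if c[2] == d]
        pool = same_dir if same_dir else near
        yy, xx, _ = min(pool,
                        key=lambda c: (c[0] - y) ** 2 + (c[1] - x) ** 2)
        cv2.line(edges, (x, y), (xx, yy), 255, 1)  # draw the bridge
```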
After this procedure, some short line segments remain; we use a length threshold to remove these false edges. A region merging algorithm is also used to remove some complex closed curves.