Article

Research on Prediction of Ash Content in Flotation-Recovered Clean Coal Based on NRBO-CNN-LSTM

by Yujiao Li 1, Haizeng Liu 1,* and Fucheng Lu 2
1 School of Materials Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China
2 School of Computer Science, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Minerals 2024, 14(9), 894; https://doi.org/10.3390/min14090894
Submission received: 2 August 2024 / Revised: 19 August 2024 / Accepted: 29 August 2024 / Published: 30 August 2024

Abstract

Ash content is an important production indicator of flotation performance, reflecting the current operating conditions of the flotation system and the recovery rate of clean coal; it is also significant for the intelligent control of flotation. In recent years, advances in machine vision and deep learning have made it possible to detect the ash content of flotation-recovered clean coal. Therefore, a prediction method for the ash content of flotation-recovered clean coal based on image processing of the surface characteristics of flotation froth is studied. A convolutional neural network–long short-term memory (CNN-LSTM) model optimized by the Newton–Raphson-based optimizer (NRBO) is proposed for predicting the ash content of flotation froth. Initially, the collected flotation froth video is preprocessed to extract a feature dataset of flotation froth images. Subsequently, a hybrid CNN-LSTM network architecture is constructed: the convolutional neural network extracts image features, while the long short-term memory network captures time-series information, enabling the prediction of ash content. Experimental results indicate that on the training set the model achieves an R value of 0.9958, a mean squared error (MSE) of 0.0012, a root mean square error (RMSE) of 0.0346, and a mean absolute error (MAE) of 0.0251; on the test set, it attains an R value of 0.9726, an MSE of 0.0028, an RMSE of 0.0530, and an MAE of 0.0415. The proposed model effectively extracts flotation froth features and accurately predicts ash content. This study provides a new approach for the intelligent control of the flotation process and holds broad application prospects.

1. Introduction

Flotation is the most widely applied technology in coking coal preparation plants. As a crucial component of the coal preparation system, the effectiveness of coal slime flotation is key to determining the quantity and quality of the final clean coal product. Flotation is a complex process influenced by numerous factors, including flotation equipment, flotation reagents, slurry concentration, impeller speed, and slurry pH value [1]. These factors intertwine and constrain each other, making the control of the flotation process highly challenging. Flotation froth, as an external manifestation of the flotation process, contains crucial information reflecting flotation performance. Long-term studies have shown that the appearance characteristics of flotation froth, such as bubble diameter, bubble quantity, bubble color, and bubble movement speed, are correlated with the ash content of clean coal. By detecting and analyzing the surface features of froth, the ash content of clean coal can be effectively predicted, thus enabling accurate assessment of flotation performance [2].
Although the flotation process has achieved industrial process automation [3], the rapid and accurate measurement of clean coal ash content remains a weakness in the research. Currently, common methods for detecting ash content have several drawbacks. The high-temperature ashing method can precisely determine ash content, but the process is cumbersome and time-consuming, making it unable to respond promptly to changes in flotation input parameters, leading to feedback delays [4]. The radioactive ash measurement method can provide real-time or short-term results, but radioactive isotopes may pose radiation risks, requiring professional operation and strict safety measures [5]. The gamma-ray analyzer method is easily affected by coal particle size and moisture content, and the cost of purchasing and maintaining the equipment is high [6,7]. The photoelectric ash measurement [8] method reduces sampling time, but the results are easily influenced by sample color, surface characteristics, and lighting conditions, limiting its measurement accuracy.
In recent years, machine learning has developed rapidly, and significant breakthroughs have been made in image processing technology. With the increasing need to overcome these issues, machine vision systems have been developed for predicting flotation ash content [9]. Currently, cameras are primarily used in place of manual observation to capture froth images, visual features are extracted from the images, and this information is fed into flotation ash prediction systems to achieve accurate predictions of clean coal ash content [10]. Y. Bai proposed an ash prediction algorithm based on multiple linear regression, which extracts texture feature parameters of flotation froth images using gray-level histograms and gray-level co-occurrence matrices and then inputs these parameters into the algorithm to predict ash content [11]. Wen et al. extracted 88 features from flotation froth images and carried out a feature engineering study of their performance in ash content prediction; the results showed that the support vector regression method used in the feature engineering could effectively predict the ash content of coal flotation concentrates [2]. Tan and his team extracted the average gray value of images and analyzed the relationship between this parameter and flotation concentrate ash content, indicating that gray value can serve as an indicator of flotation concentrate ash content [12]. Qiu and co-authors [13] identified seven features of coal image gray-level histograms and modeled them using polynomial regression, polynomial regression with feature selection (PRFS), and a particle swarm optimization support vector machine. The results showed that the PRFS method performed best, with predicted ash content values closely matching the measured values.
Advanced machine vision and deep learning algorithms, such as neural networks, have brought new technological breakthroughs to the detection of ash content in fine flotation-recovered coal [14]. Chaurasia et al. predicted the ash content and yield percentage of clean coal in a multiple gravity separator using artificial neural networks [15]. Tang and his co-authors developed a real-time prediction system based on image processing and BP neural network modeling, which produced prediction values highly consistent with actual values [16]. Massinaei et al. measured the ash content of concentrates and extracted bubble size, froth velocity, color, and structural features using a machine vision system; a BRANN prediction model was then employed to predict the ash content of concentrates from the extracted froth features [17]. Wen et al. [18] applied transfer learning with a small dataset to predict ash content in coal flotation concentrates, selecting ResNet 101 as the convolutional neural network framework; the results demonstrate that transfer learning exhibits excellent performance in ash content prediction. Lu et al. [19] combined deep learning with maximum likelihood analysis, using the BFGS algorithm to search for optimal parameters and extract highly correlated feature inputs for neural network prediction of ash content in flotation fine coal.
LSTM is a variant of the RNN designed to overcome the vanishing and exploding gradient problems that arise when processing long sequence data [20]. Da Silva and Meneses compared power consumption prediction using long short-term memory (LSTM) and bidirectional LSTM deep neural networks [21]. Abhilash Gogineni and his team evaluated the effectiveness of using fly ash and various admixtures as input factors to predict compressive strength at 7, 14, and 28 days using classification and regression trees (CART) and LSTM neural networks; the results showed that LSTM had superior predictive capability [22]. Li et al. proposed an LSTM neural network with sliding windows, optimized by the firefly algorithm (FA), to predict untested compositions of secondary blast furnace ash. The experimental results show that the model's predictions are close to the true values, confirming the feasibility of the FA-LSTM model with sliding windows for accurately predicting material composition [23]. LSTM has made significant contributions in many fields in recent years, and combining intelligent optimization algorithms with LSTM has helped to remedy its shortcomings.
A CNN-LSTM model for predicting the ash content of flotation-recovered fine coal, optimized by NRBO, is proposed in this paper. Firstly, the flotation froth video is preprocessed and a feature dataset is extracted from the froth images. Then, a CNN-LSTM hybrid network architecture is constructed: the convolutional neural network extracts image features, and the LSTM captures long-term dependencies in the sequence data. NRBO optimizes the hyperparameters of the CNN-LSTM architecture, such as the convolution kernel size, the number of network layers, and the learning rate, realizing the prediction of fine coal ash content in flotation concentrates.

2. Experiments

This study designed a machine vision system consisting of three core components: an image acquisition module, an experimental operation module, and an image processing module. The image acquisition system includes a Hikvision MV-CS200-100GC industrial color camera (Hangzhou, China), lenses, and a 240 W LED light source. The camera was positioned approximately 30 cm above the flotation foam layer surface, with its signal cable connected to a Windows 10 laptop (i7 CPU, 16 GB RAM, 1 TB storage). The light source was placed on top of the flotation machine. The experimental operation system uses a 1.5 L mechanically stirred flotation machine. Coal samples from the Huainan Panji Coal Preparation Plant in Anhui Province, with particle sizes below 0.5 mm, were used for the flotation experiments. To ensure stable shooting conditions, a light shield was used to cover the imaging area. The collected image information was transferred to a PC for image processing and parameter calculation, carried out in Python 3.8 using PyCharm (2022.03.03). The flotation experimental setup is illustrated in Figure 1.
In this study, to collect video images of the flotation process under different process parameters, we designed 25 independent experiments with varied process conditions. The experimental procedures followed the standard GB/T 30046.1-2013 [24] for coal slurry flotation testing. The flotation experiment proceeded as follows: 1 L of tap water and 150 g of coal sample were sequentially added to the flotation cell. After stirring with the mechanical mixer for 2 min, the collector was added to the surface of the slurry, and 1 min later the frother was added. The frother dosages were 28.98, 33.12, 37.26, 41.41, and 45.55 μL, and the corresponding collector dosages were 110.63, 119.85, 129.07, 138.29, and 147.51 μL, as shown in Table 1. Bubbles generated in the slurry collided with and adhered to coal slurry particles, forming mineralized bubbles that floated to the slurry surface. Figure 2 illustrates the operational schematic of the flotation cell. Froth was scraped off with a scraper for 60 s, and the collected froth was the flotation concentrate. The collected flotation concentrate was filtered, dried in a 75 °C oven for 8 h, and finally the ash content was determined by combustion weighing. This study obtained 25 videos, each containing 100 images, totaling 2500 images with a resolution of 2048 × 2048; the high resolution facilitated capturing detailed features of the froth. The dataset was divided into two parts: 70% for training the model and the remaining 30% for testing.

3. Collection and Processing of Flotation Image Data

3.1. Image Preprocessing

3.1.1. Histogram Equalization

To obtain a more accurate flotation image feature dataset, the collected flotation experiment videos were preprocessed. Histogram equalization is an image processing technique [25,26] that stretches the range of grayscale levels in an image, making the image’s pixel values more uniformly distributed across the entire range. This process not only expands the brightness range of the image but also enhances the contrast, making various details in the image more clearly visible [27]. Figure 3 shows a comparison between the original image and the image after histogram equalization, where the horizontal axis represents pixel values and the vertical axis represents the number of pixels. After histogram equalization, the grayscale distribution of the flotation foam images was optimized, the impact of light noise was effectively suppressed, and the contrast was significantly improved. The improvement of contrast helps to distinguish the foam boundary and texture, so that different regions in the image can be more accurately identified and segmented in the feature extraction.
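As an illustration of this preprocessing step, the following minimal sketch applies OpenCV's grayscale histogram equalization to a single froth frame; the file names are placeholders and not part of the original dataset.

```python
import cv2

# Minimal sketch: grayscale histogram equalization of one froth frame.
# "froth_frame.png" and the output name are illustrative placeholders.
frame = cv2.imread("froth_frame.png", cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(frame)   # redistributes gray levels over the full range
cv2.imwrite("froth_frame_eq.png", equalized)
```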

3.1.2. Filtering Enhancement

The minimum mean filter is a widely used filtering algorithm in the field of image processing, primarily for smoothing images and removing noise while preserving image details [28]. Using a variable-size filtering window, this study employs a 4 × 4 filter matrix that moves pixel by pixel across the collected flotation foam images to perform convolution operations [29]. For each neighborhood covered by the window, the minimum value of all pixel values was calculated and replaced the central pixel value of the window. The minimum mean filter can remove isolated high-brightness or low-brightness noise points, such as salt and pepper noise, significantly improving the quality of flotation foam images. This enhancement makes the features of flotation foam clearer, aiding subsequent image analysis and ash content prediction.
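A minimal sketch of such a minimum filter, assuming SciPy is available, is given below; every pixel is replaced by the smallest value in its 4 × 4 neighbourhood, which suppresses isolated bright noise points.

```python
from scipy.ndimage import minimum_filter

def min_filter_4x4(gray):
    # Replace each pixel with the minimum of its 4 x 4 neighbourhood;
    # isolated high-brightness (salt) noise points are removed this way.
    return minimum_filter(gray, size=4)
```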

3.1.3. Fractional-Order Differentiation

Fractional-order differentiation has gradually gained attention in the field of image processing. Introducing non-integer order differentiation operations enables fine control over image details and textures [30]. Although there are different definitions of fractional-order differentiation, the numerical solution derived from the G-L definition [31] is closer to the exact solution. Therefore, this study derives fractional-order differentiation from the G-L definition. The definition of fractional-order differentiation is:
$D_t^{\alpha} f(t) \approx \frac{1}{h^{\alpha}} \sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k} f(t - kh)$ (1)
Here, α denotes the fractional order and h represents the time step.
$\binom{\alpha}{k}$ is the generalized binomial coefficient, defined as $\binom{\alpha}{k} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)}$; for $k = 0, 1, 2, \ldots$, the weight of each term is $(-1)^{k} \binom{\alpha}{k}$, and $f(t - kh)$ is the value of the function $f$ at $t - kh$.
Taking a uniform unit interval $h = 1$, the formula simplifies to:
$D_t^{\alpha} f(t) \approx \sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k} f(t - k)$ (2)
The differential-difference expression is:
$\frac{d^{\alpha}}{dt^{\alpha}} f(t) \approx f(t) + (-\alpha) f(t-1) + \frac{(-\alpha)(-\alpha+1)}{2} f(t-2) + \frac{(-\alpha)(-\alpha+1)(-\alpha+2)}{6} f(t-3) + \cdots + \frac{\Gamma(-\alpha+1)}{n!\,\Gamma(-\alpha+n+1)} f(t-n)$ (3)
When $k = 0, 1, 2$, the generalized binomial coefficients are $\binom{\alpha}{0} = 1$, $\binom{\alpha}{1} = \alpha$, and $\binom{\alpha}{2} = \frac{\alpha(\alpha-1)}{2}$. During the preprocessing of flotation foam images, the foam images are first subjected to histogram equalization to alleviate the influence of light noise and enhance image contrast. Subsequently, the images undergo a minimum filtering enhancement algorithm based on fractional-order differentiation coefficients. In this study, an order of 0.2 is chosen and a 4 × 4 matrix filter is employed. The matrix scans each pixel of the image in turn, arranges the pixel values within the selected region in ascending order, and computes the weighted average of the first four pixel values using the four fractional-order differential coefficients [32], namely $1$, $-\alpha$, $\frac{\alpha^{2}-\alpha}{2}$, and $\frac{(-\alpha)(-\alpha+1)(-\alpha+2)}{6}$. If the four pixels are $f_0$, $f_1$, $f_2$, $f_3$, they are averaged according to the formula for $g(x, y)$ below, and the resulting value serves as the new value of the target pixel, achieving precise control over image details and textures:
$g(x, y) = \frac{1}{(6 + 6\alpha^{2} - 11\alpha - \alpha^{3})/6} \left[ f_0 - \alpha f_1 + \frac{\alpha^{2}-\alpha}{2} f_2 + \frac{(-\alpha)(-\alpha+1)(-\alpha+2)}{6} f_3 \right]$ (4)
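The sketch below illustrates one way this weighted minimum filtering could be implemented with α = 0.2. The interpretation of the "top four" values as the four smallest pixels of the sorted window is an assumption, and the explicit loop is written for clarity rather than speed.

```python
import numpy as np

ALPHA = 0.2  # fractional order used in this study

# Fractional-differential coefficients 1, -a, (a^2 - a)/2, (-a)(-a+1)(-a+2)/6
COEFFS = np.array([1.0,
                   -ALPHA,
                   (ALPHA ** 2 - ALPHA) / 2.0,
                   (-ALPHA) * (-ALPHA + 1.0) * (-ALPHA + 2.0) / 6.0])

def fractional_min_filter(gray):
    """Weighted minimum filter based on the fractional-order coefficients of Equation (4)."""
    norm = COEFFS.sum()                      # equals (6 + 6a^2 - 11a - a^3) / 6
    out = gray.astype(np.float64).copy()
    rows, cols = gray.shape
    for y in range(rows - 3):
        for x in range(cols - 3):
            window = np.sort(gray[y:y + 4, x:x + 4].ravel())       # ascending order
            out[y + 1, x + 1] = np.dot(COEFFS, window[:4]) / norm  # weight the four smallest
    return np.clip(out, 0, 255).astype(np.uint8)
```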

3.2. Bubble Diameter

The watershed algorithm [33] is an image segmentation technique based on topological morphology, widely used for processing images with overlapping or adjacent objects. The traditional watershed algorithm achieves image segmentation by simulating the process of water spreading out from low-lying areas (local minima) [34]. However, in this paper, high grayscale values represent ridges, and low grayscale values represent valleys. By simulating rainfall, the process of water flowing from high points to low points (local minima) achieves image segmentation. Each local minimum and its influence area are referred to as a catchment basin, and the boundary of the catchment basin forms the watershed. Morphological opening operations [35] are then used to remove noise points in the foam images. The watershed algorithm precisely segments the edges of flotation foam images, and each segmented foam region is identified. The diameter of the circumscribed circle for each segmented region is calculated, representing the bubble diameter. The average bubble diameter per frame for each video group is shown in Figure 4a. Bubble diameter is a key physical characteristic in the flotation process. Generally, smaller bubbles increase the collision frequency between particles and bubbles, enabling more particles to rise to the foam layer. This process aids in the separation of impurity particles, reducing the ash content in fine coal [36]. Thus, bubble diameter can serve as a direct indicator of flotation performance, helping to predict the ash content of flotation foam.
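A possible OpenCV realisation of this marker-based watershed segmentation and diameter measurement is sketched below; the distance-transform threshold and morphological kernel sizes are assumptions, not values reported in the paper.

```python
import cv2
import numpy as np

def bubble_diameters(gray):
    """Marker-based watershed segmentation of a froth image; returns the
    circumscribed-circle diameter (in pixels) of every segmented bubble."""
    # Otsu threshold separates bright bubble tops from dark valleys
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)  # remove noise points
    # Distance transform -> sure-foreground markers; dilation -> sure background
    dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.4 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    sure_bg = cv2.dilate(opened, kernel, iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg)
    # Label the markers and flood from them with the watershed transform
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), markers)
    diameters = []
    for label in range(2, markers.max() + 1):
        pts = np.argwhere(markers == label)          # (row, col) pixels of one bubble region
        if len(pts) < 5:
            continue
        _, radius = cv2.minEnclosingCircle(pts[:, ::-1].astype(np.float32))
        diameters.append(2.0 * radius)
    return diameters
```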

3.3. Bubble Count

The methods for calculating bubble diameter and bubble count differ, each focusing on characterizing different grayscale regions. The calculation of bubble diameter mainly describes the low-grayscale foam regions, while the bubble count method extracts the contours of high-grayscale bubbles. In this study, the Sobel operator algorithm [37] was used to detect the number of bubbles in the images. The bubble count per frame for each video group is shown in Figure 4b. Bubble count is another key physical characteristic in the flotation process. Generally, the number of bubbles generated during flotation reflects the stability and efficiency of the flotation process. A higher bubble count usually indicates a uniform distribution of flotation agents and a stable flotation process, potentially leading to lower ash content. Therefore, bubble count can serve as an indirect indicator of flotation performance, helping to predict the ash content of flotation foam.
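A compact sketch of one way to count bubbles from a Sobel edge map follows; thresholding the gradient magnitude with Otsu's method and counting external contours is an assumption about the exact counting step.

```python
import cv2
import numpy as np

def bubble_count(gray):
    # Sobel gradients in x and y combined into an edge-magnitude map
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))
    # Threshold the magnitude and count the external bubble contours
    _, edges = cv2.threshold(magnitude, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return len(contours)
```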

3.4. Bubble Color

Color is one of the important features of flotation foam images, reflecting the fine coal load under different flotation conditions, which indirectly indicates the ash content of the fine coal [38]. The darker the color, the lower the ash content; the lighter the color, the higher the ash content. RGB is the most widely used color space in daily life, containing rich bubble-color information. The RGB color space provides abundant color information and direct perception [39], while the HSI color space offers brightness invariance and feature enhancement advantages. Therefore, based on the HSI visual model, flotation color images are converted to the HSI color space [40]. The HSI color space includes three basic features: hue, saturation, and intensity, which are closer to human subjective color perception than the RGB color space. All flotation images were converted to the HSI color space, extracting the H, S, and I components of each image, and then taking the average value of these three components to obtain the color texture feature parameters for each image. The conversion process from the RGB color space to the HSI color space for flotation foam images is shown in Equation (5)
$I = \frac{R+G+B}{3}, \quad S = 1 - \frac{3}{R+G+B}\min(R, G, B), \quad H = \begin{cases} \theta, & G \ge B \\ 2\pi - \theta, & G < B \end{cases}$ (5)
where $\theta = \cos^{-1}\left( \frac{0.5\left[(R-G)+(R-B)\right]}{\sqrt{(R-G)^{2}+(R-B)(G-B)}} \right)$
Here, $R$, $G$, and $B$ are the pixel values in the respective color channels, and $\theta$ is the hue angle, giving $H \in [0, 2\pi)$.
In predicting the ash content of flotation froth, the selection of RGB and HSI color spaces aims to fully utilize their respective advantages and characteristics. The distribution of bubble color features in RGB and HSI for each group’s video frames are shown in Figure 4c,d. By combining these two color spaces, more comprehensive and stable color features can be extracted, thereby improving the accuracy and robustness of ash content prediction.
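The conversion in Equation (5) can be implemented directly; the sketch below returns the per-image mean H, S, and I components used as color features. The small epsilon guarding against division by zero is an implementation detail, not a value from the paper.

```python
import numpy as np

def rgb_to_hsi_means(image):
    """Convert an RGB image (uint8, channel order R, G, B) to its mean H, S, I values."""
    rgb = image.astype(np.float64) / 255.0
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                                   # guards against division by zero
    I = (R + G + B) / 3.0
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    H = np.where(G >= B, theta, 2.0 * np.pi - theta)   # hue per Equation (5)
    return H.mean(), S.mean(), I.mean()
```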

3.5. Bubble Grayscale Histogram Features

In this study, the flotation images were converted to grayscale images through grayscale transformation, and the grayscale histogram features were extracted. The grayscale transformation formula used is as follows:
$\mathrm{Gray} = 0.299R + 0.587G + 0.114B$
Here, R, G, and B are the values of the flotation image in the red, green, and blue channels, respectively.
The grayscale histogram features are used to represent the global texture characteristics of coal flotation froth images, maintaining global invariance, meaning they do not change with image transformations such as rotation and scaling. In this study, the extracted grayscale histogram features include the mean, median, variance, skewness, and kurtosis. The mean represents the average brightness of the image, reflecting the overall grayscale level; the median represents the middle grayscale value of the image, reducing the impact of extreme values on brightness estimation; the variance represents the overall fluctuation of grayscale values, indicating the image's contrast and detail variation; the skewness reflects the asymmetry of the grayscale histogram, where values closer to zero indicate a more symmetrical histogram and larger values indicate a skewed histogram; and the kurtosis indicates whether the grayscale values are concentrated around the mean. Higher kurtosis values indicate that grayscale values are concentrated near the mean, while lower kurtosis values indicate that grayscale values are more dispersed. Let the range of gray levels in a grayscale image be $[0, L-1]$ and the image size be $M \times N$. The formulas for calculating the five grayscale histogram features are as follows:
$h(i) = \sum_{x=1}^{M} \sum_{y=1}^{N} \delta\left(f(x, y) - i\right)$
$\mathrm{Mean} = \mu = \sum_{i=0}^{L-1} i \cdot p(i)$
$\mathrm{Median} = \operatorname{median}\left(i \cdot p(i)\right)$
$\mathrm{Variance} = \sigma^{2} = \sum_{i=0}^{L-1} (i - \mu)^{2} \cdot p(i)$
$\mathrm{Skewness} = \frac{1}{\sigma^{3}} \sum_{i=0}^{L-1} (i - \mu)^{3} \cdot p(i)$
$\mathrm{Kurtosis} = \frac{1}{\sigma^{4}} \sum_{i=0}^{L-1} (i - \mu)^{4} \cdot p(i) - 3$
Here, $h(i)$ is the number of pixels with gray level $i$, $f(x, y)$ is the grayscale value of the image at position $(x, y)$, and $\delta$ is the Kronecker delta function: $\delta\left(f(x, y) - i\right) = 1$ when $f(x, y) = i$, and $0$ otherwise. $p(i) = h(i)/(M \cdot N)$ is the probability of gray level $i$, and $\mu$ and $\sigma$ are the mean and standard deviation of the grayscale distribution.
Through the extraction and analysis of the aforementioned grayscale features, a more comprehensive understanding of the brightness and texture characteristics of flotation foam images can be achieved. The grayscale feature statistics including mean, skewness, kurtosis, median, and variance for each frame of all groups are shown in Figure 4e, Figure 4f, Figure 4g, Figure 4h, and Figure 4i, respectively. These provide important insights for further prediction of ash content.
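A brief sketch of these five histogram features follows; computing the median directly from the pixel values (rather than from $i \cdot p(i)$) is an interpretation on our part.

```python
import numpy as np

def histogram_features(gray):
    """Mean, median, variance, skewness and kurtosis of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                       # p(i): probability of gray level i
    levels = np.arange(256)
    mean = np.sum(levels * p)
    median = np.median(gray)                    # middle grayscale value of the image
    var = np.sum((levels - mean) ** 2 * p)
    sigma = np.sqrt(var)
    skew = np.sum((levels - mean) ** 3 * p) / sigma ** 3
    kurt = np.sum((levels - mean) ** 4 * p) / sigma ** 4 - 3
    return mean, median, var, skew, kurt
```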

3.6. Bubble Velocity

In the task of extracting horizontal movement speed of flotation bubbles [41], a series of advanced algorithms and techniques were employed in this study to ensure accurate detection and matching of bubble feature points in the images, thereby further calculating the bubble’s motion speed. Firstly, the SURF algorithm was utilized for bubble feature point detection [42]. SURF accelerates convolution operations using integral images and detects bubble feature points based on the determinant of the Hessian matrix. To match feature points across different images, the FLANN matcher was employed, which uses an efficient approximate nearest neighbor search algorithm to quickly find the best matching pairs from a large number of feature points. To remove incorrect matches and improve matching accuracy, the RANSAC algorithm [43] was used to optimize the matching results of bubbles. Its main steps involve randomly sampling a set of bubble matches and calculating model parameters, followed by validating model parameters to compute the number of inliers (matches that fit the model). This iterative process repeats to obtain the model parameters with the highest number of inliers, thereby eliminating outliers and obtaining optimized matching results. Based on the optimized bubble-matching feature points, the Lucas-Kanade optical flow method [44] was employed to calculate the speed of motion of the flotation foam bubbles. The L-K optical flow method accurately estimates the bubble’s speed of motion by solving the optical flow equations within local windows of the image. The average speed of the bubbles in each frame of the various groups is shown in Figure 4j, and the corresponding points of optical flow matching with the dynamic characteristics of the cleaned coal flotation foam images are shown in Figure 5.
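A condensed sketch of this SURF-FLANN-RANSAC-optical-flow chain is given below. SURF requires an opencv-contrib build with the non-free modules enabled, and the frame rate and millimetre-per-pixel scale factors are assumed calibration inputs rather than values from the paper.

```python
import cv2
import numpy as np

def bubble_speed(prev_gray, curr_gray, fps, mm_per_px):
    """Average horizontal bubble speed (mm/s) between two consecutive frames."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # needs opencv-contrib (non-free)
    kp1, des1 = surf.detectAndCompute(prev_gray, None)
    kp2, des2 = surf.detectAndCompute(curr_gray, None)
    # FLANN matching with a Lowe ratio test to keep only confident pairs
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 4:
        return float("nan")
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects mismatches; only inlier feature points are kept
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = src[mask.ravel() == 1]
    # Lucas-Kanade optical flow tracks the inlier points into the next frame
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, inliers, None)
    ok = status.ravel() == 1
    dx = tracked[ok, 0, 0] - inliers[ok, 0, 0]          # horizontal displacement in pixels
    return float(np.abs(dx).mean() * mm_per_px * fps)   # pixels/frame -> mm/s
```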

4. Model Construction

4.1. CNN and LSTM Models

The convolutional neural network (CNN) is a deep learning model that has achieved significant success in the field of computer vision. The basic structure of CNN consists of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer [45]. The convolutional layer is the core of the CNN, which performs element-wise multiplication and summation between a movable small window and the input image. This small window consists of a set of fixed weights called kernels. In the fine coal flotation foam feature dataset, convolution is performed with 4 × 1 sized kernels, as depicted in Figure 6. Pooling layers are primarily used to reduce the spatial size of feature maps, thereby reducing the computational complexity and number of parameters in the model. They also help prevent overfitting. The pooling operation used in this study is average pooling, as illustrated in Figure 7. Fully connected layers typically reside at the end of the convolutional neural network, converting the feature maps extracted by preceding layers into the final output of the network. The CNN architecture designed in this study utilizes 32 convolutional kernels of a 4 × 1 size with a stride of 1, initialized using He initialization. A ReLU [46] activation function is applied to the output of each convolutional layer. Following each convolutional module, a 4 × 4 average pooling layer with a stride of 1 is applied to reduce the dimensionality of the flotation foam image feature maps. Subsequently, the convolved data are flattened into a one-dimensional vector for further processing by the LSTM layer.
LSTM (long short-term memory) is a special type of recurrent neural network (RNN) with a significantly more complex structure compared to traditional RNNs. As a unique variant of RNNs, LSTM introduces a distinctive gating mechanism. This design enables LSTM to capture long-term dependencies while avoiding the problems of vanishing or exploding gradients, thus enhancing its ability to learn and represent sequential data. Consequently, LSTM has become an indispensable component in modern deep learning [47].
In an LSTM network, information flow is controlled through three gates: the forget gate, the input gate, and the output gate [48]. The LSTM network structure is shown in Figure 8. When using LSTM to predict the ash content of flotation-recovered clean coal, the input consists of the dataset processed by the CNN, which includes features such as bubble diameter, quantity, color, grayscale, and velocity. The forget gate determines how much of the previous timestep's cell state $C_{t-1}$ is retained in the current cell state $C_t$, thus controlling the influence of past bubble feature data on the current ash content prediction. The input gate decides how much of the current bubble feature input $X_t$ is saved to the cell state $C_t$; it comprises two parts: the sigmoid function determines the input gate weight $i_t$, and the tanh function calculates the candidate state $\tilde{C}_t$, which is used to update the predicted ash content of clean coal. Finally, the output gate controls how much of the cell state $C_t$ is passed to the current output value $h_t$ of the LSTM; this gate determines which information should be passed to the next timestep's hidden state or output as the current ash content prediction result.
Specifically, for a given timestep t and flotation feature dataset X t , the LSTM calculates the following values:
Forget gate: $f_t = \sigma\left(W_f \cdot [h_{t-1}, X_t] + b_f\right)$
Input gate: $i_t = \sigma\left(W_i \cdot [h_{t-1}, X_t] + b_i\right)$
Candidate cell state: $\tilde{C}_t = \tanh\left(W_c \cdot [h_{t-1}, X_t] + b_c\right)$
Cell state update: $C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$
Output gate: $o_t = \sigma\left(W_o \cdot [h_{t-1}, X_t] + b_o\right)$
Hidden state update: $h_t = o_t * \tanh\left(C_t\right)$
Here, $\sigma$ represents the sigmoid function, $W_f$ is the forget gate weight matrix, and $b_f$ is the forget gate bias term; $W_i$ is the input gate weight matrix, and $b_i$ is the input gate bias term. The weight matrices $W_c$, $W_o$ and bias terms $b_c$, $b_o$ of the candidate cell state and the output gate are defined analogously.
After processing with the CNN, the processed feature dataset is flattened into a one-dimensional vector and input into the LSTM layer. In this paper, two LSTM layers are employed: the first LSTM layer has 128 hidden units, and the number of hidden units in the second LSTM layer is determined using the NRBO optimization algorithm. The output mode is set to the output of the last timestep. To prevent overfitting, a dropout layer is designed to randomly drop 25% of the units. After passing through a fully connected layer, the predicted ash content value is output. LSTM excels at capturing long-term dependencies in sequential data, while CNN is proficient at extracting local features from image data. The CNN-LSTM composite network structure is illustrated in Figure 9. By combining the strengths of both methods, this approach reduces the number of parameters and the risk of overfitting, thus providing more accurate predictions, better performance, and higher training efficiency.
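A minimal Keras sketch of this hybrid architecture is shown below. The layer widths follow the text (32 kernels of size 4, a first LSTM layer with 128 units, 25% dropout); the use of 1D convolution and pooling, "same" padding, the Adam optimizer, and the default second-layer width are assumptions made for the sake of a runnable example, not details confirmed by the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps, n_features, hidden2=70, l2_coef=1e-6, lr=5e-5):
    """Sketch of the CNN-LSTM regressor for froth-feature sequences."""
    reg = tf.keras.regularizers.l2(l2_coef)
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        # 1D convolution: 32 kernels of size 4, stride 1, He initialization, ReLU
        layers.Conv1D(32, 4, strides=1, padding="same",
                      kernel_initializer="he_normal", activation="relu"),
        # Average pooling reduces the dimensionality of the feature maps
        layers.AveragePooling1D(pool_size=4, strides=1, padding="same"),
        # Two stacked LSTM layers capture temporal dependencies in the froth features
        layers.LSTM(128, return_sequences=True, kernel_regularizer=reg),
        layers.LSTM(hidden2, kernel_regularizer=reg),     # width chosen by NRBO
        layers.Dropout(0.25),
        layers.Dense(1),                                  # predicted ash content
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model
```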

4.2. NRBO Optimized CNN-LSTM

The Newton–Raphson-based optimizer (NRBO) is a novel intelligent optimization algorithm [49] with powerful global search and local optimization capabilities.
In this study, the NRBO algorithm is used to optimize three key parameters of the CNN-LSTM model: the learning rate, the number of hidden layer nodes, and the regularization coefficient. First, the parameters of the NRBO algorithm are set, including the number of search agents (10), the maximum number of iterations (8), and the number of optimization parameters (3). The objective function accepts a set of parameters drawn from the learning rate range [1 × 10−4, 1 × 10−1], the hidden-layer node range [5, 100], and the regularization coefficient range [1 × 10−6, 1 × 10−1]; these are used to build and train the model and evaluate its performance. Model performance was assessed by the root mean square error (RMSE), which measures the difference between predicted and actual values and thus the effectiveness of each parameter configuration. The NRBO algorithm is based on two key mechanisms: the Newton–Raphson search rule (NRSR) and the trap avoidance operator (TAO). The NRSR improves the search ability and convergence speed of the algorithm by using first and second derivatives to accelerate the update of solution positions, while the TAO helps the algorithm avoid falling into local optima by increasing the randomness and diversity of solutions. Over successive iterations, the NRBO algorithm gradually refines the model parameters: in each iteration, it evaluates the performance of the current parameter settings and adjusts them according to these search rules to explore better solutions. Finally, a hybrid CNN-LSTM network is constructed using the optimized parameter configuration. The other parameters of the model are shown in Table 2; the learning rate is reduced from the initial 0.1 to 0.00005, and the optimal parameter configuration is used for the second LSTM layer.
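To make the optimization setup concrete, the sketch below defines the search bounds and the RMSE fitness function such an optimizer would minimize. Here `build_cnn_lstm` refers to the sketch in Section 4.1, the training schedule is an assumption, and `nrbo` is a hypothetical placeholder for an NRBO implementation, which is not reproduced here.

```python
import numpy as np

# Search space: learning rate, hidden-layer width, L2 regularization coefficient
# (ranges taken from the text above).
LOWER = np.array([1e-4, 5,   1e-6])
UPPER = np.array([1e-1, 100, 1e-1])

def objective(params, X_train, y_train, X_val, y_val):
    """Fitness of one parameter set: validation RMSE of the trained CNN-LSTM."""
    lr, hidden2, l2_coef = params[0], int(round(params[1])), params[2]
    model = build_cnn_lstm(X_train.shape[1], X_train.shape[2],
                           hidden2=hidden2, l2_coef=l2_coef, lr=lr)
    model.fit(X_train, y_train, epochs=50, batch_size=128, verbose=0)  # assumed schedule
    pred = model.predict(X_val, verbose=0).ravel()
    return float(np.sqrt(np.mean((pred - y_val) ** 2)))               # RMSE to be minimized

# best_params = nrbo(objective, LOWER, UPPER, n_agents=10, max_iter=8)  # hypothetical NRBO call
```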

5. Results

A mathematical model for ash content prediction was established using the NRBO-CNN-LSTM algorithm. Figure 10, Figure 11, Figure 12 and Figure 13 demonstrate the performance of the ash content predictions on the training and test sets from different perspectives: Figures 10 and 12 compare the predicted and actual values on the training and test sets, respectively, while Figures 11 and 13 show the corresponding prediction errors. In these figures, red triangles represent the actual values and blue dots represent the model's predicted values. From the training set results in Figure 10, it can be observed that the blue dots (predicted values) almost overlap with the red triangles (actual values), indicating that the model fits the training set very well. Figure 11 shows the errors between predicted and actual values on the training set, with most errors concentrated in the range of [−0.1, 0.1]; the errors are small and stable across most training samples. Combined with the R2, MSE, RMSE, MAPE, MAE, and RPD values for the training set in Table 3, the high R2 and low RMSE, MSE, and MAE further demonstrate the model's excellent fitting ability, and the very small errors between predicted and actual values indicate the model's accuracy in predicting clean coal ash content. Similarly, the test set comparison in Figure 12, the error plot in Figure 13, and the test set metrics in Table 3 indicate that the model performs very well on unseen data. This means that the model not only fits the training data closely but also generalizes well to new data. Based on the evaluation on the test set, the NRBO-CNN-LSTM model exhibits strong predictive power and good generalization ability in the ash content prediction task, accurately predicting ash content values.

6. Conclusions

The NRBO-optimized CNN-LSTM model proposed in this study demonstrates excellent performance in predicting the ash content of flotation froth. By preprocessing the flotation experiment videos, the model extracts key image features and constructs a hybrid CNN-LSTM network. The CNN effectively captures essential image features, while the LSTM component successfully identifies long-term dependencies in the time series. NRBO optimization identifies the optimal combination of hyperparameters. The adaptive adjustment capabilities and optimization performance of the NRBO algorithm enhance the accuracy and robustness of the CNN-LSTM model. Experimental results show that the model achieves R values of 0.9958 and 0.9726 on the training and test sets, respectively, indicating high prediction accuracy and strong robustness. This suggests that the model can operate reliably in complex flotation environments. In summary, this study provides a new technical approach and theoretical basis for the intelligent and automated control of the flotation process, offering significant practical value and promising development potential. Future research could explore potential optimization avenues, such as increasing dataset diversity, incorporating more critical features from the flotation process, or experimenting with other deep learning architectures to further enhance the model's prediction accuracy and generalizability.

Author Contributions

Y.L.: methodology, software, formal analysis, visualization, writing—original draft, writing—review and editing. H.L.: conceptualization, methodology, funding acquisition, supervision, writing—original draft, writing—review and editing. F.L.: supervision, validation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Anhui Province Coal Clean Processing and Carbon Reduction Engineering Research Center Foundation (CCCE-023001).

Data Availability Statement

The data that were used are confidential.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, S.; Li, L.; Qu, J.; Liu, Q.; Tang, L.; Tao, X.; Fan, H. Oily bubble flotation technology combining modeling and optimization of parameters for enhancement of flotation of low-flame coal. Powder Technol. 2018, 335, 171–185. [Google Scholar] [CrossRef]
  2. Wen, Z.; Zhou, C.; Pan, J.; Nie, T.; Jia, R.; Yang, F. Froth image feature engineering-based prediction method for concentrate ash content of coal flotation. Miner. Eng. 2021, 170, 107023. [Google Scholar] [CrossRef]
  3. Shean, B.J.; Cilliers, J.J. A review of froth flotation control. Int. J. Miner. Process. 2011, 100, 57–71. [Google Scholar] [CrossRef]
  4. Richaud, R.; Herod, A.A.; Kandiyoti, R. Comparison of trace element contents in low-temperature and high-temperature ash from coals and biomass. Fuel 2004, 83, 2001–2012. [Google Scholar] [CrossRef]
  5. Pak, Y.; Pak, D.; Ponomaryova, M.; Imanov, M.; Balbekova, B. Express measurement of solid fuel ash content by nuclear gamma-method. Appl. Radiat. Isot. 2019, 147, 54–58. [Google Scholar] [CrossRef] [PubMed]
  6. Rizk, R.A.M.; El-Kateb, A.H.; Abdul-Kader, A.M. On-line nuclear ash gauge for coal based on gamma-ray transmission techniques. J. Radioanal. Nucl. Chem. 1999, 242, 139–145. [Google Scholar] [CrossRef]
  7. Lv, W.; Wang, Y.; Li, L.; Jin, L.; Wang, C. Mechanism of measuring ash with low-energy gamma rays for equal coal seam thickness. Int. J. Coal Prep. Util. 2024, 44, 1–14. [Google Scholar] [CrossRef]
  8. Yu, A.; Liu, H.; Wang, C.; Lv, J.; Wang, F.; He, S.; Wang, L. Online Ash Content Monitor by Automatic Composition Identification and Dynamic Parameter Adjustment Method in Multicoal Preparation. Processes 2022, 10, 1432. [Google Scholar] [CrossRef]
  9. Morar, S.H.; Harris, M.C.; Bradshaw, D.J. The use of machine vision to predict flotation performance. Miner. Eng. 2012, 36–38, 31–36. [Google Scholar] [CrossRef]
  10. Aldrich, C.; Avelar, E.; Liu, X. Recent advances in flotation froth image analysis. Miner. Eng. 2022, 188, 107823. [Google Scholar] [CrossRef]
  11. Bai, Y. 5G Industrial IoT and Edge Computing Based Coal Slime Flotation Foam Image Processing System. IEEE Access 2020, 8, 137606–137615. [Google Scholar] [CrossRef]
  12. Tan, J.; Liang, L.; Peng, Y.; Xie, G. The concentrate ash content analysis of coal flotation based on froth images. Miner. Eng. 2016, 92, 9–20. [Google Scholar] [CrossRef]
  13. Qiu, Z.; Dou, D.; Zhou, D.; Yang, J. On-line prediction of clean coal ash content based on image analysis. Measurement 2021, 173, 108663. [Google Scholar] [CrossRef]
  14. Zarie, M.; Jahedsaravani, A.; Massinaei, M. Flotation froth image classification using convolutional neural networks. Miner. Eng. 2020, 155, 106443. [Google Scholar] [CrossRef]
  15. Chaurasia, R.C.; Sahu, D.; Suresh, N. Prediction of ash content and yield percent of clean coal in multi gravity separator using artificial neural networks. Int. J. Coal Prep. Util. 2021, 41, 362–369. [Google Scholar] [CrossRef]
  16. Tang, M.; Zhou, C.; Zhang, N.; Liu, C.; Pan, J.; Cao, S. Prediction of the Ash Content of Flotation Concentrate Based on Froth Image Processing and BP Neural Network Modeling. Int. J. Coal Prep. Util. 2021, 41, 191–202. [Google Scholar] [CrossRef]
  17. Massinaei, M.; Jahedsaravani, A.; Mohseni, H. Recognition of process conditions of a coal column flotation circuit using computer vision and machine learning. Int. J. Coal Prep. Util. 2022, 42, 2204–2218. [Google Scholar] [CrossRef]
  18. Wen, Z.; Jia, R.; Liu, H.; Zhou, C. Transfer learning using small-sized dataset for concentrate ash content prediction of coal flotation. Int. J. Coal Prep. Util. 2023, 43, 1358–1375. [Google Scholar] [CrossRef]
  19. Lu, F.; Liu, H.; Lv, W. Deep correlation and precise prediction between static features of froth images and clean coal ash content in coal flotation: An investigation based on deep learning and maximum likelihood estimation. Measurement 2024, 224, 113843. [Google Scholar] [CrossRef]
  20. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [Google Scholar] [CrossRef]
  21. Da Silva, D.G.; Meneses, A.A. Comparing Long Short-Term Memory (LSTM) and bidirectional LSTM deep neural networks for power consumption prediction. Energy Rep. 2023, 10, 3315–3334. [Google Scholar] [CrossRef]
  22. Gogineni, A.; Rout, M.K.D.; Shubham, K. Evaluating machine learning algorithms for predicting compressive strength of concrete with mineral admixture using long short-term memory (LSTM) Technique. Asian J. Civ. Eng. 2024, 25, 1921–1933. [Google Scholar] [CrossRef]
  23. Li, W.; Ren, B.; Zhang, X.; Liu, Y.; Yao, J. Prediction of Multiple Components of Secondary Ash in Blast Furnace Based on Firefly Algorithm Optimized LSTM. In Proceedings of the 2023 5th International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 21–24 August 2023; pp. 1–6. [Google Scholar] [CrossRef]
  24. Gong, X.; Jiang, W.; Hu, S.; Yang, Z.; Liu, X.; Fan, Z. Comprehensive utilization of foundry dust: Coal powder and clay minerals separation by ultrasonic-assisted flotation. J. Hazard. Mater. 2021, 402, 124124. [Google Scholar] [CrossRef]
  25. Khan, M.F.; Goyal, D.; Nofal, M.M.; Khan, E.; Al-Hmouz, R.; Herrera-Viedma, E. Fuzzy-Based Histogram Partitioning for Bi-Histogram Equalisation of Low Contrast Images. IEEE Access 2020, 8, 11595–11614. [Google Scholar] [CrossRef]
  26. Singh, K.; Vishwakarma, D.K.; Walia, G.S.; Kapoor, R. Contrast enhancement via texture region based histogram equalization. J. Mod. Optic 2016, 63, 1444–1450. [Google Scholar] [CrossRef]
  27. Xiong, J.; Yu, D.; Wang, Q.; Shu, L.; Cen, J.; Liang, Q.; Chen, H.; Sun, B. Application of Histogram Equalization for Image Enhancement in Corrosion Areas. Shock. Vib. 2021, 2021, 8883571. [Google Scholar] [CrossRef]
  28. Wang, Z.; Zhao, H.; Zeng, X. Constrained Least Mean M-Estimation Adaptive Filtering Algorithm. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 1507–1511. [Google Scholar] [CrossRef]
  29. Xiao, H.; Guo, B.; Zhang, H.; Li, C. A Parallel Algorithm of Image Mean Filtering Based on OpenCL. IEEE Access 2021, 9, 65001–65016. [Google Scholar] [CrossRef]
  30. Zhang, Y.-S.; Zhang, F.; Li, B.-Z. Image restoration method based on fractional variable order differential. Multidimens. Syst. Signal Process. 2018, 29, 999–1024. [Google Scholar] [CrossRef]
  31. Hemalatha, S.; Margret Anouncia, S. G-L fractional differential operator modified using auto-correlation function: Texture enhancement in images. Ain Shams Eng. J. 2018, 9, 1689–1704. [Google Scholar] [CrossRef]
  32. El Hamidi, A.; Tfayli, A. Identification of the derivative order in fractional differential equations. Math. Method Appl. Sci. 2021, 44, 8397–8413. [Google Scholar] [CrossRef]
  33. Zhang, W.; Liu, D.; Wang, C.; Liu, R.; Wang, D.; Yu, L.; Wen, S. An Improved Python-Based Image Processing Algorithm for Flotation Foam Analysis. Minerals 2022, 12, 1126. [Google Scholar] [CrossRef]
  34. Roerdink, J.B.T.M.; Meijster, A. The Watershed Transform: Definitions, Algorithms and Parallelization Strategies. Fund. Inform. 2000, 41, 187–228. [Google Scholar] [CrossRef]
  35. Nie, Y.; Wang, H.; Qin, Y.; Sun, Z. Distributed and morphological operation-based data collection algorithm. Int. J. Distrib. Sens. Netw. 2017, 13, 1550147717717593. [Google Scholar] [CrossRef]
  36. Yianatos, J.; Vallejos, P. Limiting conditions in large flotation cells: Froth recovery and bubble loading. Miner. Eng. 2022, 185, 107695. [Google Scholar] [CrossRef]
  37. Tian, R.; Sun, G.; Liu, X.; Zheng, B. Sobel Edge Detection Based on Weighted Nuclear Norm Minimization Image Denoising. Electronics 2021, 10, 655. [Google Scholar] [CrossRef]
  38. Zhang, K.; Wang, W.; Cui, Y.; Lv, Z.; Fan, Y.; Zhao, X. Deep learning-based estimation of ash content in coal: Unveiling the contributions of color and texture features. Measurement 2024, 233, 114632. [Google Scholar] [CrossRef]
  39. Lhermitte, E.; Hilal, M.; Furlong, R.; O’Brien, V.; Humeau-Heurtier, A. Deep Learning and Entropy-Based Texture Features for Color Image Classification. Entropy 2022, 24, 1577. [Google Scholar] [CrossRef]
  40. Bonifazi, G.; Serranti, S.; Volpe, F.; Zuco, R. Characterisation of flotation froth colour and structure by machine vision. Comput. Geosci. 2001, 27, 1111–1117. [Google Scholar] [CrossRef]
  41. Lu, F.; Liu, H.; Lv, W. Prediction of Clean Coal Ash Content in Coal Flotation through a Convergent Model Unifying Deep Learning and Likelihood Function, Incorporating Froth Velocity and Reagent Dosage Parameters. Processes 2023, 11, 3425. [Google Scholar] [CrossRef]
  42. Huang, Q.; Xiang, T.; Zhao, Z.; Wu, K.; Li, H.; Cheng, R.; Zhang, L.; Cheng, Z. Directional region-based feature point matching algorithm based on SURF. J. Opt. Soc. Am. A 2024, 41, 157–164. [Google Scholar] [CrossRef] [PubMed]
  43. Li, X.; Zhu, J.; Ruan, Y. Vehicle Seat Detection Based on Improved RANSAC-SURF Algorithm. Int. J. Pattern Recogn. 2021, 35, 2155004. [Google Scholar] [CrossRef]
  44. Al-Qudah, S.; Yang, M. Large Displacement Detection Using Improved Lucas–Kanade Optical Flow. Sensors 2023, 23, 3152. [Google Scholar] [CrossRef] [PubMed]
  45. Chen, L.; Li, S.; Bai, Q.; Yang, J.; Jiang, S.; Miao, Y. Review of Image Classification Algorithms Based on Convolutional Neural Networks. Remote Sens. 2021, 13, 4712. [Google Scholar] [CrossRef]
  46. You, W.; Shen, C.; Wang, D.; Chen, L.; Jiang, X.; Zhu, Z. An Intelligent Deep Feature Learning Method with Improved Activation Functions for Machine Fault Diagnosis. IEEE Access 2020, 8, 1975–1985. [Google Scholar] [CrossRef]
  47. Xiao, F.; Xue, W.; Shen, Y.; Gao, X. A New Attention-Based LSTM for Image Captioning. Neural Process Lett. 2022, 54, 3157–3171. [Google Scholar] [CrossRef]
  48. Landi, F.; Baraldi, L.; Cornia, M.; Cucchiara, R. Working Memory Connections for LSTM. Neural Netw. 2021, 144, 334–341. [Google Scholar] [CrossRef]
  49. Sowmya, R.; Premkumar, M.; Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intel. 2024, 128, 107532. [Google Scholar] [CrossRef]
Figure 1. Coal flotation experimental equipment system diagram based on AI vision.
Figure 2. Internal principle diagram of the flotation cell.
Figure 3. Comparison of flotation foam images before and after histogram equalization: (a) original image; (b) grayscale histogram of the original image; (c) image after histogram equalization; (d) grayscale histogram of the image after histogram equalization.
Figure 4. Distribution of extracted bubble features per frame for each group: (a) average bubble diameter; (b) bubble count; (c) RGB color; (d) HSI color; (e) mean grayscale value; (f) skewness; (g) kurtosis; (h) median; (i) variance; (j) average bubble velocity.
Figure 5. Optical-flow matching of the dynamic characteristics of clean coal flotation foam images: (a) feature point detection on the original image; (b) feature points of adjacent frames matched by optical flow.
Figure 6. Flotation foam characteristic data convolution operation diagram: (a) 0 × 173 + 1 × 372 + 0 × 361 + 1 × 418 = 790; (b) 0 × 164.22 + 1 × 170.59 + 0 × 180.42 + 1 × 190.95 = 361.54; (c) 0 × 70.19 + 1 × 71.27 + 0 × 72.62 + 1 × 72.75 = 144.02; (d) 0 × 61.3 + 1 × 61.23 + 0 × 61.09 + 1 × 61.94 = 123.17.
Figure 7. Flotation foam characteristic data pooling operation diagram.
Figure 8. LSTM network structure diagram.
Figure 9. CNN-LSTM composite network structure.
Figure 10. Predicted versus actual ash values on the flotation foam training set.
Figure 11. Error between predicted and actual ash values on the flotation foam training set.
Figure 12. Predicted versus actual ash values on the flotation foam test set.
Figure 13. Error between predicted and actual ash values on the flotation foam test set.
Table 1. Dosage of reagents in flotation froth experiments.
Frother (μL)    Collector (μL)
28.98           110.63
33.12           119.85
37.26           129.07
41.41           138.29
45.55           147.51
Table 2. Parameter settings of the NRBO-optimized CNN-LSTM hybrid network model.
Parameters                               Set Values
MiniBatchSize                            128
Optimal L2 Regularization Coefficient    1 × 10−6
Optimal Number of Hidden Nodes           70
Optimal Initial Learning Rate            0.1
Dropout                                  0.25
Epoch                                    500
Time                                     13
Table 3. Errors between predicted and actual ash values on the training and test sets.
Evaluation Metrics    R2       MSE      RMSE     MAPE     MAE      RPD
Training Set          0.992    0.002    0.044    0.003    0.025    10.972
Test Set              0.946    0.013    0.116    0.008    0.061    4.316
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
