Article

Optimization of Position and Number of Hotspot Detectors Using Artificial Neural Network and Genetic Algorithm to Estimate Material Levels Inside a Silo

1 School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
2 ICE Meca Tech Co., Ltd., Sagimakgol-ro 45 Beon-gil 14, Jungwon-gu, Seongnam-si 13209, Korea
3 Korea Midland Power Co., Ltd., 89-37, Boryeongbuk-ro 160, Boryeong-si 33439, Korea
* Author to whom correspondence should be addressed.
Sensors 2021, 21(13), 4427; https://doi.org/10.3390/s21134427
Submission received: 28 May 2021 / Revised: 23 June 2021 / Accepted: 25 June 2021 / Published: 28 June 2021
(This article belongs to the Section Physical Sensors)

Abstract

To realize efficient operation of a silo, level management of the internal storage is crucial. In this study, to address the limitations of existing measurement methods, a silo hotspot detector, which is typically used to monitor the internal temperature of a silo, was employed. The internal temperature data measured by the hotspot detector were used to train an artificial neural network (ANN) to predict the level of the material stored inside the silo. The prediction accuracy was evaluated by comparing the predicted values with ground truth data. We then combined the ANN model with a genetic algorithm (GA) to improve the prediction accuracy and to determine an efficient number of sensors and their installation positions. Simulation results demonstrated that the best predictive performance (up to 97% accuracy) was achieved when the ANN structure was 9-19-19-1. Furthermore, the number of sensors determined using the proposed ANN-GA technique was reduced from seven to five or four, with optimized positions, thereby ensuring economic feasibility.

1. Introduction

Coal and biomass are important sources of energy in modern society. Industries that utilize these resources synthesize, collect, and store them in large quantities for use as required; therefore, safe storage is crucial. Silos are the preferred storage structure in many industries because of their capacity and multiple control functions, and determining the level of resources in a silo is critical for resource management and silo operation. Several tools and methods have been used for this purpose. The most commonly used methods include the plumb-bob method, the weight-and-cable method, ultrasonic techniques, guided wave radar (GWR), laser meters, load cells, and thru-air radar (TAR) [1,2,3]. Each technique has its own advantages and disadvantages. Ultrasonic and laser techniques are highly effective for measuring the material level inside a silo, but they are impractical when dust is present inside the silo [2,4]. Applying load cells to measure the amount of material inside a silo is another widely used and simple method, but it cannot be applied to large silos. GWR and TAR techniques can address this disadvantage. Although GWR systems are generally more precise, they require an induction cell capable of transmitting and receiving electromagnetic (EM) energy and measure only a single level point [2,5]. TAR requires an antenna to transmit and receive the EM waves, and the beam width used in this technique is limited, so the number of antennas and measurement points required increases with the silo diameter. Consequently, a more economical and practical method is required to measure the material level inside a silo.
In this study, we employed a silo hotspot detector to overcome these limitations. A hotspot detector is designed to detect the possibility of fire, such as spontaneous ignition of the silo's internal storage. It is installed inside the silo and detects the possibility of fire by directly measuring the temperatures inside and outside the stored material in real time. The temperature measurements from the detector show that the temperatures inside and outside the material differ, which suggests that the internal material level of the silo can be inferred from the detector readings. In some cases, the level estimated in this way is close to the actual value; however, significant errors also occur. Therefore, accurate level values cannot be estimated simply by locating the temperature boundary, and other methods are needed to mitigate this problem.
Machine learning (ML) techniques, which are applied in many fields, can be used to solve the aforementioned problems. ML techniques analyze existing data to predict other relevant data and future results [6]. The most commonly used ML techniques include the k-nearest neighbor (KNN) algorithm [7], support vector machines (SVMs) [8], and artificial neural networks (ANNs) [8]. ANNs are generally preferred owing to their simple mathematical definition and short training times. A back-propagation neural network (BPNN), a specific type of ANN, is a multi-layer feed-forward neural network (MFNN) trained using the back-propagation (BP) algorithm. The BP algorithm is a local search algorithm that iteratively updates the weights and biases of the network to minimize the error between the real data and the ANN predictions. BPNNs have been utilized by many researchers because of their ability to map complex, nonlinear relationships between two sets of data.
Furthermore, genetic algorithms (GAs) are optimization methods that use selection, crossover, and mutation operators to search the solution space efficiently with multiple candidate solutions in each iteration, without requiring an initial guess. Chandwani et al. [9] studied the effectiveness of combining ANNs and GAs for slump modeling of ready-mixed concrete based on cement, fly ash, sand, coarse aggregate, mixing, and water-binder ratios. Gao et al. [10] developed a fault-diagnosis system for lithium-ion batteries using GA optimization to predict the optimal operating parameters. Cui et al. [11] utilized a GA to optimize indoor Wi-Fi positioning. Zhang et al. [12] conducted a GA-ANN-based short-term wind speed prediction study for wind generator installation. Li et al. [13] optimized an internal combustion engine strategy through an ANN model, and an ANN-GA model was introduced to improve accuracy and stability. In addition, a variety of studies have combined neural networks (NNs) and GAs to solve optimization problems [14,15,16]. Research using ANNs and GAs thus remains active in various engineering domains. Nevertheless, methodologies using ANNs and GAs to measure the internal material level of silos have not been studied so far.
To overcome the limitations of existing silo material-level measurement techniques, this study proposes a novel method for measuring the material level by applying data obtained from a silo hotspot detector, which monitors the internal temperature of the silo, to an ANN. In addition, we propose a method that combines a GA with the ANN to obtain the optimal number of sensors and their installation positions, improving accuracy and securing economic feasibility in real-world applications. The remainder of this paper is organized as follows: Section 2 introduces the silo hotspot detector and the experimental process. Section 3 discusses the applied ML algorithm, the application method, and the analysis process. Section 4 discusses the GA, the algorithm applied for optimization, and describes the optimization process. The results and discussion are presented in Section 5, and the final section concludes the study and discusses the scope of future research.

2. Dataset Deployment

2.1. Experimental Data Collection

As mentioned above, the data required for this study were obtained experimentally. The silo hotspot detector used in this study is shown in Figure 1, and its specifications are listed in Table 1. To obtain the temperature data measured during silo operation, the detector was installed as shown in Figure 2, which also shows the specifications of the target silo and the positions of the detector and sensors. The temperature was measured at seven positions using the installed temperature sensors. The internal material level of the silo, the other quantity required for this study, was measured using the well-known plumb-bob method. The device and measurement method are shown in Figure 3: a rope-type probe with a weighted sensing tip is lowered from the top of the silo, and the distance traveled until the tip touches the material surface is used to estimate the current material level. The specifications of the level-measuring device are listed in Table 2.

2.2. Experimental Result

Some experimental results used to estimate the level of material inside the silo were extracted and analyzed. Figure 4 shows the experimental results for 10 cases. In each case, the section of the temperature profile measured by the hotspot detector that showed a large temperature difference was assumed to correspond to the material level. Table 3 compares the assumed material-level values with the measured values. Some level values can be estimated in this way; however, in several instances there are large differences between the assumed and measured values. Therefore, other methods are needed to obtain reliable material-level values.

2.3. Feature Extraction

First, the features used for learning were screened from the acquired data to ensure efficient training and to improve the accuracy of the ML algorithm; these features comprised the following three data types:
  • Temperature data from the sensors (F1): The data recorded by the seven temperature sensors installed on the silo hotspot detector, which could be located above or below the material level, were included in one dataset for each level and represented the principal data in this study.
  • Atmospheric temperature (F2): This refers to the ambient air temperature around the silo, which acts as a reference for determining the air temperature inside the silo.
  • Number of sensors below the material level (F3): The seven temperature sensors in the silo hotspot detector could be located above or below the material level. Therefore, it is important to account for the number of internal sensors for each set of acquired data in the dataset. The number of internal sensors at different material levels is shown in Table 4.
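For illustration, the sketch below shows how one nine-dimensional input vector can be assembled from these three feature groups. The helper name and the sample values are hypothetical; only the feature layout (seven sensor temperatures, the ambient temperature, and the buried-sensor count) follows the description above.

```python
import numpy as np

# Hypothetical helper: assemble one 9-dimensional input vector from the three
# feature groups described above (F1: seven sensor temperatures, F2: ambient
# temperature, F3: number of sensors buried below the material level).
def build_feature_vector(sensor_temps, ambient_temp, n_buried):
    assert len(sensor_temps) == 7, "the hotspot detector provides seven readings"
    return np.array([*sensor_temps, ambient_temp, n_buried], dtype=float)

# Example: one measurement snapshot (values within the ranges listed in Table 5)
x = build_feature_vector(
    sensor_temps=[30.1, 31.4, 32.0, 33.5, 34.2, 35.0, 36.8],
    ambient_temp=18.2,
    n_buried=4,
)
print(x.shape)  # (9,) -> matches the nine input neurons used in Section 3.2.2
```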

2.4. Dataset Implementation

To ensure efficient training and accurate prediction by the ANN model, the datasets were constrained within the range of the training data: the extreme (maximum and minimum) values of the training dataset were set such that they covered the values in all the datasets used [9]. The ranges of the temperature and silo-level values in the training and test datasets are listed in Table 5. A total of 3080 datasets were utilized as training data and test data.
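A minimal sketch of this range constraint is given below, assuming simple min-max scaling with the training extremes from Table 5 as fixed bounds. This mirrors common ANN preprocessing practice and is not necessarily the authors' exact procedure.

```python
import numpy as np

# Training extremes taken from Table 5 (seven sensor temperatures, ambient
# temperature, number of buried sensors). Because the test ranges in Table 5 fall
# inside these training ranges, every scaled sample lies within [0, 1].
train_min = np.array([25.0, 22.9, 22.5, 29.5, 29.5, 29.3, 29.0, 16.5, 2.0])
train_max = np.array([41.5, 41.8, 42.1, 42.4, 42.7, 45.0, 44.5, 28.1, 7.0])

def scale(X, lo=train_min, hi=train_max):
    # Min-max scaling with bounds fixed by the training data
    return (X - lo) / (hi - lo)

sample = np.array([25.3, 24.7, 24.2, 31.9, 33.0, 31.0, 31.5, 16.5, 3.0])  # test-range minima
print(scale(sample))  # every component lies in [0, 1]
```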

3. Methodology

In this study, the neural network toolbox included in the commercial software MATLAB R2020a was used to perform the analysis.

3.1. Artificial Neural Network (ANN)

An ANN is an information-processing paradigm that handles complex systems by mimicking the learning processes of the human brain. Even when the relationship between the input and output data is difficult to define, an ANN can establish an adequate mapping between the two, including complex nonlinear relationships. The accuracy of the prediction results obtained using an ANN is considerably influenced by the ANN architecture, which includes the number of layers in the network, the number of neurons in each layer, and the transfer and training functions of each layer [17]; a suitable architecture therefore needs to be selected for each problem. The architecture adopted in this study is discussed in Section 3.2.
In this study, we applied the MFNN, a type of multi-layer perceptron (MLP) [18,19,20,21,22,23], which has proven well-suited to many applications. This neural network comprises an input layer, one or more hidden layers, and an output layer, as shown in Figure 5.
The first step in applying an ANN to the problem to be solved is constructing a training dataset for the problem. The training data are propagated from the input layer to the output layer through the hidden layers, where the output value is generated by applying the weights and biases of each layer. The error between the predicted output and the actual value is calculated and propagated in the reverse direction, and the algorithm re-adjusts the weights and biases according to the calculated error. This process is repeated until the target error value is reached, at which point the final prediction results are determined.
For efficient learning of the ANN, appropriate values for the learning rate and momentum parameter were selected, as shown in Table 6. The learning rate is a common parameter in numerous learning algorithms and affects the rate at which the ANN reaches a solution; in an ANN, it plays the role of the step-size parameter in the gradient descent algorithm. If the step size is too large, the system oscillates around the true solution or diverges; if it is too small, the system requires a significant amount of time to converge to the final solution. The momentum parameter prevents the system from becoming trapped at a local minimum or saddle point. High momentum values increase the convergence rate but can overshoot the minimum and destabilize the system, whereas low momentum values may fail to escape local minima and slow down training [24]. The combination of these parameters assists BP algorithms in overcoming the local minima effect. The weight change is determined by the learning rate and the momentum parameter, as shown in Equations (1) and (2).
$$\Delta w_n = \alpha \Delta w_{n-1} - \eta \frac{\partial e}{\partial w} \quad (1)$$
where
$$e = \frac{1}{N}\sum_{i=1}^{N}\left(T_i - K_i\right)^2 \quad (2)$$
Here, $w$ represents the weight between two neurons; $\Delta w_n$ and $\Delta w_{n-1}$ represent the changes in weight at iterations $n$ and $n-1$, respectively; $\alpha$ is the momentum factor; $\eta$ is the learning rate; $e$ is the computational error; $T_i$ is the actual output; and $K_i$ is the predicted output.
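The following toy example illustrates the momentum-based update of Equations (1) and (2) on a single linear neuron. The synthetic data and variable names are ours; only the update rule itself follows the equations above, with the learning rate and momentum values taken from Table 6.

```python
import numpy as np

# Momentum update (Eq. 1) applied to the MSE of Eq. (2) for a single linear neuron.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 3))
T = X @ np.array([2.0, -1.0, 0.5]) + 0.3            # synthetic targets

w, b = np.zeros(3), 0.0
dw_prev, db_prev = np.zeros(3), 0.0
alpha, eta = 0.9, 0.01                               # momentum and learning rate (Table 6)

for _ in range(5000):
    K = X @ w + b                                    # prediction
    grad_w = (2 / len(X)) * X.T @ (K - T)            # d(MSE)/dw, from Eq. (2)
    grad_b = (2 / len(X)) * np.sum(K - T)
    dw = alpha * dw_prev - eta * grad_w              # Eq. (1)
    db = alpha * db_prev - eta * grad_b
    w, b = w + dw, b + db
    dw_prev, db_prev = dw, db

print(np.round(w, 2), round(b, 2))                   # approaches [2.0, -1.0, 0.5] and 0.3
```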

3.2. Setup of ANN Architecture and Parameters

3.2.1. Number of Hidden Layers

Many researchers have reported that an ANN with a three-layer structure comprising one hidden layer functions appropriately [18,21,23,25,26,27,28]. However, other researchers have suggested using two hidden layers to analyze nonlinear relationships between the data [29]; moreover, in some studies, two hidden layers were reported as the optimal structure for problem solving [19,20,30,31]. Optimization rules or concrete methods for determining the number of hidden layers have yet to be established. Consequently, in this study, we compared networks with one and two hidden layers by trial and error and determined the optimal number of hidden layers.

3.2.2. Number of Neurons in the Layers

The number of neurons in the input and output layers is typically determined at the beginning of the study, as it determines the actual number of data used and the number predicted. Because no optimization rule is available to determine the required number of neurons in the hidden layer, we used the values proposed by Zhang et al. [32] for this purpose, that is, n/2, n, 2n, and 2n + 1, where n represents the number of input variables. We selected the optimal number of neurons by comparing the results for each case. Because the number of neurons in the input layer was nine, the ANN structures in this study were 9-5-1, 9-9-1, 9-18-1, 9-19-1, 9-5-5-1, 9-9-9-1, 9-18-18-1, and 9-19-19-1.
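The candidate structures listed above can be enumerated directly from the heuristic of Zhang et al. [32]; the short sketch below does so for n = 9 inputs. The helper function is illustrative rather than the code used in the study.

```python
# Enumerate candidate architectures: hidden-layer sizes from the heuristic of
# Zhang et al. [32] (n/2 rounded up, n, 2n, 2n + 1), combined with one or two
# identical hidden layers.
def candidate_structures(n_inputs: int = 9, n_outputs: int = 1):
    sizes = [(n_inputs + 1) // 2, n_inputs, 2 * n_inputs, 2 * n_inputs + 1]  # 5, 9, 18, 19
    one_hidden = [(n_inputs, h, n_outputs) for h in sizes]
    two_hidden = [(n_inputs, h, h, n_outputs) for h in sizes]
    return one_hidden + two_hidden

for s in candidate_structures():
    print("-".join(map(str, s)))   # 9-5-1, 9-9-1, ..., 9-19-19-1
```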

3.2.3. Transfer and Training Function

In an ANN, the transfer function determines the strength of the output value after converting the weighted sum of the input data at the output neuron. A standard criterion for determining the transfer function has not yet been established; hence, the function should be adapted to the problem being solved [33]. Therefore, in this study, the following commonly used transfer functions were adopted: the tangent sigmoid transfer function (tansig) for the hidden layers and the linear transfer function (purelin) for the output layer [34]. The training function employed a scaled conjugate gradient (SCG) algorithm.
The systematic update of the weight and bias values was performed using the SCG algorithm developed by Møller [35]. This algorithm combines a model-trust-region approach with the conjugate gradient approach and uses a step-size scaling mechanism to avoid the time-consuming line search performed in each learning iteration. Moreover, the SCG algorithm does not depend on critical user-defined parameters [35]. The overall flow diagram of the ANN models applied in this study is shown in Figure 6; the parameters of the architecture are listed in Table 6.
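As a rough illustration of this configuration outside MATLAB, the sketch below sets up a comparable network in scikit-learn. Note that scikit-learn does not implement the SCG solver, so 'lbfgs' is used here as a stand-in, and 'tanh' stands in for the sigmoid-type hidden transfer function; this is an approximation of the setup in Table 6, not the toolbox code used in the paper.

```python
from sklearn.neural_network import MLPRegressor

# Approximate stand-in for the Table 6 configuration (Case 8 structure 9-19-19-1):
# two hidden layers of 19 neurons, sigmoid-type hidden activation, linear output.
model = MLPRegressor(
    hidden_layer_sizes=(19, 19),
    activation="tanh",      # stand-in for the sigmoid-type hidden transfer function
    solver="lbfgs",         # SCG is not available in scikit-learn; lbfgs as a stand-in
    max_iter=2000,
    random_state=0,
)
# model.fit(train_X_scaled, train_y)    # 9 input features, 1 output (material level, m)
# pred = model.predict(test_X_scaled)
```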

3.3. Evaluation of the Trained Model Performance

Three statistical parameters were adopted to evaluate the performance of the trained ANN model: the mean absolute error (MAE), the mean squared error (MSE), and the coefficient of correlation (R). The performance metrics are calculated as follows:
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|T_i - K_i\right| \quad (3)$$
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(T_i - K_i\right)^2 \quad (4)$$
$$R = \frac{\sum_{i=1}^{N}\left(T_i - \bar{T}\right)\left(K_i - \bar{K}\right)}{\sqrt{\sum_{i=1}^{N}\left(T_i - \bar{T}\right)^2 \sum_{i=1}^{N}\left(K_i - \bar{K}\right)^2}} \quad (5)$$
where $N$ is the total number of data points, $T_i$ is the observed (target) value, $K_i$ is the value predicted by the ANN, $\bar{T}$ is the average of the observed values, and $\bar{K}$ is the average of the predicted values. The MAE is the mean of the absolute differences between the observed and predicted values, and the MSE is the mean of the squared differences; low MAE and MSE values indicate better predictive performance. The strength of association between observations and predictions was quantified using Pearson's correlation coefficient (R); the closer R is to 1, the more accurate the prediction results of the ANN model. However, R captures only the linear relationship between the real and predicted values and can yield biased results when the data contain outliers. Combining the three performance metrics therefore provides an unbiased estimate of the predictive capability of the neural network models.
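These three metrics translate directly into code; the short sketch below implements Equations (3)-(5) with NumPy on toy values (the numbers shown are illustrative only).

```python
import numpy as np

# Direct implementation of the three performance metrics in Equations (3)-(5).
def mae(T, K):
    return np.mean(np.abs(T - K))

def mse(T, K):
    return np.mean((T - K) ** 2)

def pearson_r(T, K):
    Tc, Kc = T - T.mean(), K - K.mean()
    return np.sum(Tc * Kc) / np.sqrt(np.sum(Tc ** 2) * np.sum(Kc ** 2))

T = np.array([12.1, 14.3, 15.8, 17.0, 19.5])   # observed material levels (m), toy values
K = np.array([12.4, 14.1, 15.5, 17.3, 19.2])   # predicted material levels (m), toy values
print(mae(T, K), mse(T, K), pearson_r(T, K))
```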

4. Optimization

In this study, the silo internal material level was predicted using temperature data and ANN; additionally, optimization tasks were performed to improve the prediction accuracy, minimize the number of temperature measurement points, and optimize positions. The subsequent sections describe the algorithms and methods applied to the performed optimization tasks.

4.1. Genetic Algorithm (GA)

A GA is an optimization algorithm that mimics the principles of natural evolution and is widely applied in various domains, including engineering [36]. Figure 7 illustrates the concept of a GA. The algorithm leverages the selection, crossover, and mutation operators observed in biological evolution to produce better results in the next generation than in the present generation. The selection operator evaluates the fitness of the existing solutions and selects the best among them. The crossover operator combines two selected parent solutions to create a new, unique solution. Finally, the mutation operator changes elements of a given solution, creating a wider variety of solutions. The fitness of the new solutions created by the crossover and mutation operators is then evaluated, and those with high fitness are passed on to the next generation. This process is repeated until an optimal fitness score is obtained for the problem.
This allows a GA to explore the solution space through the interaction of multiple candidate solutions, making it easier to find better solutions than a simple parallel search. Moreover, unlike algorithms that require derivatives of the evaluation function, a GA only requires that the fitness of the current candidates can be evaluated and compared. The algorithm is therefore simple and can be applied even when the evaluation function is discontinuous. For these reasons, this study performed the optimization tasks using a GA.

4.2. Optimizing Position and Number of Temperature Measurement Points

In this study, the objective of optimizing the silo internal material-level prediction process was to improve the accuracy of the material-level prediction while determining the optimal position and number of temperature measurement points. To this end, we used the ANN-GA hybrid approach. The steps involved in the ANN-GA approach are depicted in Figure 8. The parameters used are listed in Table 7.
The steps involved in the ANN-GA process are as follows (a simplified code sketch is given after the list):
  • Selection of the number of temperature measurement positions (ts) and configuring the ANN architecture. Establishing training algorithms and termination conditions; optimizing network training
  • Selection of parameters, that is, the population size (Npop) and the maximum number of generations (Gen), for the optimization process.
  • Determining the output value of each response using trained ANNs; determining the suitability of the output value using the objective function.
  • Evaluation of conformity values using selection, crossover, and mutation.
  • Generation of a new response from the previous step; evaluation of the conformity value of the new response. Determining the ranking of all responses based on the target function value with the most appropriate Npop value forming the next generation.
  • Repetition of steps 3–5 until one of the following termination conditions is satisfied:
    • The maximum number of generations is exceeded.
    • The fitness function reaches the target value or a specific deviation from the target value.
    • There is no improvement in the fitness value during the specified generation.
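The sketch below outlines this ANN-GA loop for selecting ts measurement positions. The candidate heights, operator details, and the placeholder fitness function are assumptions for illustration; in the actual method, the fitness of each individual is the MSE of an ANN trained with temperatures measured at the encoded positions, and the parameters ts, Npop, and Gen follow Table 7.

```python
import random

# Simplified ANN-GA loop: each individual encodes ts candidate measurement heights.
CANDIDATE_HEIGHTS = list(range(1, 24))       # discretized positions along the detector (m), assumed
ts, Npop, Gen = 4, 40, 150                   # Npop = 10 * ts (Table 7)

def fitness(positions):
    # Placeholder: in the actual method, train the ANN with these sensor positions
    # and return the resulting MSE of the material-level prediction.
    return sum((p - 12) ** 2 for p in positions) / len(positions)

def make_individual():
    return sorted(random.sample(CANDIDATE_HEIGHTS, ts))

def crossover(a, b):
    pool = list(set(a) | set(b))             # combine parent genes, then resample ts of them
    return sorted(random.sample(pool, ts))

def mutate(ind, rate=0.1):
    ind = list(ind)
    for i in range(ts):
        if random.random() < rate:
            ind[i] = random.choice([h for h in CANDIDATE_HEIGHTS if h not in ind])
    return sorted(ind)

population = [make_individual() for _ in range(Npop)]
for _ in range(Gen):
    ranked = sorted(population, key=fitness)                  # selection by fitness (MSE)
    parents = ranked[: Npop // 2]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(Npop - len(parents))]
    population = parents + children

print(sorted(population, key=fitness)[0])    # best set of measurement positions found
```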

5. Results and Discussion

5.1. ANN Model Sensitivity Based on Input Value Combination

Prior to predicting the silo material level with the ANN, we examined the sensitivity of the ANN model when various combinations of the data features compiled in Section 2.3 were applied as input data. The combinations tested were F1 (Case 1), F1 + F2 (Case 2), F1 + F3 (Case 3), and F1 + F2 + F3 (Case 4). The results for each case are shown in Table 8. The sensitivity analysis confirms that Case 4 performs best; therefore, the Case 4 combination of input data was used in the subsequent analysis.

5.2. Training Results

The training results are summarized in Figure 9 and Table 9. For one hidden layer, the MSE value was lowest at 0.0786 and the R value was highest at 0.9912 in Case 4. For two hidden layers, the MAE value was lowest at 0.2377, the MSE value was lowest at 0.0315, and the R value was highest at 0.99612 in Case 8. As shown in Figure 9, the effect of the ANN architecture was consistent across the MAE, MSE, and R values: the prediction accuracy was higher with two hidden layers than with one, and it increased as the number of neurons increased.
In general, the accuracy of ANN training results varies with the number of hidden layers and neurons [17]. This effect was also identified in the present study: the more suitable the structure was for the problem to be solved, the more accurate the ANN training result.

5.3. Test Results

Tests were performed on the eight ANN models using data not used for training; each structure had a different number of hidden layers and neurons. Analyzing the test results allowed us to determine whether overfitting occurred during the training process, which is an important aspect when applying the developed ANN model to real-world sites.
The test results are shown in Figure 10 and Table 9. For one hidden layer, the minimum MAE and MSE values were 0.2709 and 0.1808 in Case 4, and the maximum R value was 0.9758 in Case 1. For two hidden layers, the MAE value was lowest (0.2391) in Case 8, while the MSE value was lowest (0.1545) and the R value highest (0.98406) in Case 6. The test results differed from the training results: the MAE and MSE values were higher and the R value was lower, indicating that the test results were less accurate than the training results. Despite these differences, the test results showed high accuracy and confirmed that no overfitting occurred during the training process.
To verify the accuracy of the test results more directly, the actual and predicted results were compared and the differences were identified. Considering the training and test results for the various ANN architectures, we compared the test results for the selected structure (Case 8: 9-19-19-1) with the actual results, as shown in Figure 11. The comparison confirms the following:
  • The ANN predicted changes in the level of materials inside the silo owing to material discharge and charging.
  • An inspection of the results confirmed that the temperature data could be used to predict the level of material inside the silo.

5.4. Optimization Results

As noted in Section 4.2, the variables in the optimization process represent the positions of the internal silo temperature measurement points. Optimization was performed to obtain the optimal measurement positions for cases with three, four, five, and six measurement points. Based on the objective function (MSE), we evaluated 10 × ts individuals per generation. As a termination condition, the optimization process was set to terminate when the fitness value did not change over more than 150 generations. Subsequently, ANN training was conducted using the optimized positions, and the material level was predicted. Figure 12 shows the optimization endpoints for each variable.
Figure 13 shows the optimized positions for each number of temperature measurement points inside the silo. Table 10 shows the ANN training results when the sensors were placed at these optimized positions. The optimization shows that, although the number of temperature measurement points decreased by approximately 50%, the MAE and MSE values did not differ significantly. For four, five, and six measurement points, the MAE and MSE values decreased and the R value increased, implying higher prediction accuracy when the temperature measurement points are placed at the optimal positions. The MSE values for three, four, five, and six measurement points were 0.0415, 0.0234, 0.0189, and 0.0078, respectively. The MSE values tended to decrease as the number of measurement points increased; however, the values in every case were close to 0. Consequently, even if the number of measurement points is reduced to three, the prediction accuracy of the ANN can be expected to remain very high, provided that the temperatures measured at the optimized positions are used as inputs.
Table 10 also shows the ANN test results when the sensors were optimally positioned. Figure 14 compares the internal silo material levels obtained experimentally with the levels predicted by the ANN at the optimized positions and shows the error between the two results. Figure 15 compares the errors in the level values predicted from the optimized positions using boxplots. As shown in Figure 15, the prediction errors for four, five, and six points were approximately 50% lower than the prediction error for three points. As with the training results, the MAE and MSE values decreased by approximately 50%, and the R values increased. This confirms that when the number of measurement points is four, five, or six, the accuracy of the predictions obtained from the optimized input values is high.

6. Conclusions

Herein, we proposed the use of an ANN to predict the level (output) of the material inside a silo based on the temperature (input) within the silo; the data were obtained experimentally. The proposed method can accurately predict the material level of large-capacity silos, which is relatively difficult to measure in practice. A method to optimize the number and positions of the temperature measurement points required to predict the material level was also proposed. Because the accuracy of the ANN material-level prediction varied with the number and positions of the temperature measurement points, the global search capability of the GA was used for this optimization. The results indicated the following:
  • The accuracy of the material levels predicted using temperature data was sufficiently high.
  • The proposed method enables simultaneous, real-time monitoring of temperature and material levels using a temperature detector, thereby ensuring efficient silo management.
  • The method can accurately predict the material levels inside a silo by optimizing the number and position of the temperature-measurement points. Even when the number of temperature measurement sensors was reduced from seven to three, the material level could be predicted accurately provided that the sensors were installed at optimized positions.
  • When there are four or more measurement points, the error is 1.2–1.3%, which represents a 50% decrease from the error when the number of measurement points is three. Therefore, considering economic feasibility and the temperature-detection performance that is the existing function of the detector, four or five measurement points are considered optimal.
  • The prediction error of the proposed method was approximately 50% less than that of the existing methods.
  • The proposed method is expected to increase the efficiency of the silo operation, make it more economical, and improve the silo safety management in practical applications.
A limitation of this study is that surface features were not considered when predicting the internal material level of the silo. Depending on the surface shape, the errors between the predicted level values and the ground truth may be larger than those reported here. Although some of the data contain information about such surface features, the available surface-feature data were insufficient for ANN training. Therefore, in future studies, we intend to obtain sufficient surface-shape data and examine material-level prediction that takes the surface shape into account. In addition, the data used in this study were obtained during the colder months of October–December; hence, it is necessary to obtain and compare data from the summer, which provides the opposite environment.
Moreover, we intend to examine whether factors affecting silo safety management (such as the pressure acting on the silo walls and the detector) can be predicted from the temperature data obtained using a silo hotspot detector.

Author Contributions

Conceptualization, J.H.R., S.I.K., and K.M.L.; methodology, J.H.R.; validation, J.H.R.; writing—original draft preparation, J.H.R.; writing—reviewing and editing, J.H.R., M.K.K., and Y.M.L.; supervision, M.K.K. and Y.M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and was granted financial support from the Ministry of Trade, Industry, and Energy, Republic of Korea (No. 20201510100010), and the Technology Advancement Research Program (TARP) funded by the Ministry of Land, Infrastructure, and Transport of the Korean government (20CTAP-C152286-02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vogt, M.; Gerding, M. Silo and Tank Vision: Applications, Challenges, and Technical Solutions for Radar Measurement of Liquids and Bulk Solids in Tanks and Silos. IEEE Microw. Mag. 2017, 18, 38–51. [Google Scholar] [CrossRef]
  2. Yigit, E. A novel compressed sensing based quantity measurement method for grain silos. Comput. Electron. Agric. 2018, 145, 179–186. [Google Scholar] [CrossRef]
  3. Lewis, J.D., Sr. Technology Review Level Measurement of Bulk Solids in Bins, Silos and Hoppers; Monitor Technologies LLC: Illinois, IL, USA, 2004. [Google Scholar]
  4. Turner, A.P.; Jackson, J.J.; Koeninger, N.K.; McNeill, S.G.; Montross, M.D.; Casada, M.E.; Boac, J.M.; Bhadra, R.; Maghirang, R.G.; Thompson, S.A. Stored grain volume measurement using a low density point cloud. Appl. Eng. Agric. 2017, 33, 105–112. [Google Scholar]
  5. Lewis, J.D. Ensuring Successful Use of Guided-Wave Radar Level Measurement Technology; Technical Exclusive; Monitor Technologies LLC: Illinois, IL, USA, 2007; pp. 28–33. [Google Scholar]
  6. Alpaydin, E. Introduction to Machine Learning; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  7. Yigit, E. Operating frequency estimation of slot antenna by using adapted kNN algorithm. Int. J. Intell. Syst. Appl. Eng. 2018, 1, 29–32. [Google Scholar] [CrossRef] [Green Version]
  8. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160. [Google Scholar] [CrossRef]
  9. Chandwani, V.; Agrawal, V.; Nagar, R. Modeling slump of ready mix concrete using genetic algorithms assisted training of Artificial Neural Networks. Expert Syst. Appl. 2015, 42, 885–893. [Google Scholar] [CrossRef]
  10. Gao, Z.; Chin, C.S.; Woo, W.L.; Jia, J.; Da Toh, W. Genetic algorithm based back-propagation neural network approach for fault diagnosis in lithium-ion battery system. In Proceedings of the 6th International Conference on Power Electronics Systems and Applications, Hong Kong, China, 15–17 December 2015. [Google Scholar]
  11. Cui, X.; Yang, J.; Li, J.; Wu, C. Improved Genetic Algorithm to Optimize the Wi-Fi Indoor Positioning Based on Artificial Neural Network. IEEE Access 2020, 8, 74914–74921. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Pan, G.; Chen, B.; Han, J.; Zhao, Y.; Zhang, C. Short-term wind speed prediction model based on GA-ANN improved by VMD. Renew. Energy 2020, 156, 1373–1388. [Google Scholar] [CrossRef]
  13. Li, Y.; Jia, M.; Han, X.; Bai, X.S. Towards a comprehensive optimization of engine efficiency and emissions by coupling artificial neural network (ANN) with genetic algorithm (GA). Energy 2021, 225, 120331. [Google Scholar] [CrossRef]
  14. Sánchez, D.; Melin, P.; Castillo, O. Optimization of modular granular neural networks using a firefly algorithm for human recognition. Eng. Appl. Artif. Intell. 2017, 64, 172–186. [Google Scholar] [CrossRef]
  15. Melin, P.; Sánchez, D. Multi-objective optimization for modular granular neural networks applied to pattern recognition. Inf. Sci. 2018, 460-461, 594–610. [Google Scholar] [CrossRef]
  16. Sánchez, D.; Melin, P.; Castillo, O. Comparison of particle swarm optimization variants with fuzzy dynamic parameter adaptation for modular granular neural networks for human recognition. J. Intell. Fuzzy Syst. 2020, 38, 3229–3252. [Google Scholar] [CrossRef]
  17. Benardos, P.; Vosniakos, G.-C. Optimizing feedforward artificial neural network architecture. Eng. Appl. Artif. Intell. 2007, 20, 365–382. [Google Scholar] [CrossRef]
  18. Soleimani, R.; Shoushtari, N.A.; Mirza, B.; Salahi, A. Experimental investigation, modeling and optimization of membrane separation using artificial neural network and multi-objective optimization using genetic algorithm. Chem. Eng. Res. Des. 2013, 91, 883–903. [Google Scholar] [CrossRef]
  19. Velásco-Mejía, A.; Vallejo-Becerra, V.; Chávez-Ramírez, A.U.; Torres-González, J.; Reyes-Vidal, Y.; Castañeda-Zaldivar, F. Modeling and optimization of a pharmaceutical crystallization process by using neural networks and genetic algorithms. Powder Technol. 2016, 292, 122–128. [Google Scholar] [CrossRef]
  20. Istadi, I.; Amin, N.A.S. Hybrid Artificial Neural Network−Genetic Algorithm Technique for Modeling and Optimization of Plasma Reactor. Ind. Eng. Chem. Res. 2006, 45, 6655–6664. [Google Scholar] [CrossRef] [Green Version]
  21. Karimi, H.; Ghaedi, M. Application of artificial neural network and genetic algorithm to modeling and optimization of removal of methylene blue using activated carbon. J. Ind. Eng. Chem. 2014, 20, 2471–2476. [Google Scholar] [CrossRef]
  22. Nandi, S.; Badhe, Y.; Lonari, J.; Sridevi, U.; Rao, B.; Tambe, S.S.; Kulkarni, B.D. Hybrid process modeling and optimization strategies integrating neural networks/support vector regression and genetic algorithms: Study of benzene isopropylation on Hbeta catalyst. Chem. Eng. J. 2004, 97, 115–129. [Google Scholar] [CrossRef]
  23. Izadifar, M.; Jahromi, M.Z. Application of genetic algorithm for optimization of vegetable oil hydrogenation process. J. Food Eng. 2007, 78, 1–8. [Google Scholar] [CrossRef]
  24. Rajasekaran, S.; Pai, G.A.V. Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis & Applications; Prentice-Hall of India Private Limited: New Delhi, India, 2003. [Google Scholar]
  25. Chen, C.; Ramaswamy, H. Modeling and optimization of variable retort temperature (VRT) thermal processing using coupled neural networks and genetic algorithms. J. Food Eng. 2002, 53, 209–220. [Google Scholar] [CrossRef]
  26. Pappu, S.M.J.; Gummadi, S.N. Artificial neural network and regression coupled genetic algorithm to optimize parameters for enhanced xylitol production by Debaryomyces nepalensis in bioreactor. Biochem. Eng. J. 2017, 120, 136–145. [Google Scholar] [CrossRef]
  27. Sivapathasekaran, C.; Mukherjee, S.; Ray, A.; Gupta, A.; Sen, R. Artificial neural network modeling and genetic algorithm based medium optimization for the improved production of marine biosurfactant. Bioresour. Technol. 2010, 101, 2884–2887. [Google Scholar] [CrossRef]
  28. Pai, T.; Tsai, Y.-P.; Lo, H.; Tsai, C.; Lin, C. Grey and neural network prediction of suspended solids and chemical oxygen demand in hospital wastewater treatment plant effluent. Comput. Chem. Eng. 2007, 31, 1272–1281. [Google Scholar] [CrossRef]
  29. Sumpter, B.G.; Getino, C.; Noid, D.W. Theory and applications of neural computing in chemical science. Annu. Rev. Phys. Chem. 1994, 45, 439–481. [Google Scholar] [CrossRef]
  30. Abu Qdais, H.; Bani-Hani, K.A.; Shatnawi, N. Modeling and optimization of biogas production from a waste digester using artificial neural network and genetic algorithm. Resour. Conserv. Recycl. 2010, 54, 359–363. [Google Scholar] [CrossRef]
  31. Hugget, A.; Sébastian, P.; Nadeau, J.-P. Global optimization of a dryer by using neural networks and genetic algorithms. AIChE J. 1999, 45, 1227–1238. [Google Scholar] [CrossRef]
  32. Zhang, G.; Patuwo, B.E.; Hu, M.Y. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast 1998, 14, 35–62. [Google Scholar] [CrossRef]
  33. Nalbant, M.; Gökkaya, H.; Toktaş, I.; Sur, G. The experimental investigation of the effects of uncoated, PVD- and CVD-coated cemented carbide inserts and cutting parameters on surface roughness in CNC turning and its prediction using artificial neural networks. Robot. Comput. Manuf. 2009, 25, 211–223. [Google Scholar] [CrossRef]
  34. Beale, M.H.; Hagan, M.T.; Demuth, H.B. Deep Learning Toolbox User's Guide; The MathWorks, Inc.: Boston, MA, USA, 2018; Volume R2018b. [Google Scholar]
  35. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 1993, 6, 525–533. [Google Scholar] [CrossRef]
  36. Holland, J.H. Adaptation in Natural and Artificial Systems; The University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
Figure 1. Silo hotspot detector (unit: mm).
Figure 2. Installation of the silo hotspot detector and position of the temperature sensors.
Figure 3. Material-level measuring device and method.
Figure 4. Experimental temperature results and assumed material-level values.
Figure 5. MFNN structure.
Figure 6. Flowchart of ANN training process.
Figure 7. Illustration of the genetic algorithm concept, using an example iteration for a population of four individuals, each comprising seven genes.
Figure 8. Flowchart of the ANN-GA.
Figure 9. ANN training results.
Figure 10. ANN test results.
Figure 11. Level values and difference between measured and predicted values (Case 8: 9-19-19-1).
Figure 12. Best fitness and generation values based on the number of optimization points.
Figure 13. Measurement position according to the number of optimized temperature measurement points (unit: mm): (a) 3, (b) 4, (c) 5, and (d) 6 points.
Figure 14. Comparison of predicted and measured material levels and error at optimization position. Results of (a) 3, (b) 4, (c) 5, and (d) 6 points.
Figure 15. Comparison of error at optimization position.
Table 1. Specifications of the silo hotspot detector.
Parameters | Basic Values
Mounting type | Hanging down
Tensile load (kN) | 126
Operating temperature (°C) | 0–150
Temperature sensor | K-type, 1.6 Φ, ungrounded
Unit material | Stainless steel
Unit weight (kg/m) | 3
Length (mm) | 16,000
Diameter (mm) | ≤35
Table 2. Specifications of the material-level measuring tool.
Parameters | Basic Values
Measurement principle | Electromechanical lot-sensor
Version | Rope version
Process temperature (°C) | −40–80
Accuracy | 1.5% of max. range
Min. immersion length (mm) | 245
Min. immersion length (m) | 1265
Table 3. Difference between the assumed level values and the measured values.
Case No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Assumed value (m) | 15.765 | 15.765 | 11.265 | 11.265 | 14.265 | 14.265 | 15.765 | 15.765 | 11.265 | 11.265
Measured value (m) | 16.278 | 16.44 | 15.94 | 16.083 | 12.248 | 12.381 | 14.062 | 14.2 | 15.382 | 15.552
Difference value (m) | 0.513 | 0.675 | 4.675 | 4.818 | 2.017 | 1.884 | 1.703 | 1.565 | 4.117 | 4.287
Table 4. Numbers of internal (buried) sensors at different material levels.
Level (m) | Number of Buried Sensors (ea)
0–7.515 | 0
7.515–9.015 | 1
9.015–10.515 | 2
10.515–12.015 | 3
12.015–13.515 | 4
13.515–15.015 | 5
15.015–16.515 | 6
16.515–23.515 | 7
Table 5. Range of temperature and silo level values.
Data Type | Training Min. | Training Max. | Test Min. | Test Max.
Input: No. 1 (°C) | 25 | 41.5 | 25.3 | 39.6
Input: No. 2 (°C) | 22.9 | 41.8 | 24.7 | 39.8
Input: No. 3 (°C) | 22.5 | 42.1 | 24.2 | 40.4
Input: No. 4 (°C) | 29.5 | 42.4 | 31.9 | 40.2
Input: No. 5 (°C) | 29.5 | 42.7 | 33 | 40.6
Input: No. 6 (°C) | 29.3 | 45 | 31 | 41
Input: No. 7 (°C) | 29 | 44.5 | 31.5 | 43.6
Input: Atmospheric temperature (°C) | 16.5 | 28.1 | 16.5 | 20.1
Input: Number of buried sensors (ea) | 2 | 7 | 3 | 7
Output: Material level (m) | 9.83 | 19.89 | 11.11 | 19.76
Table 6. ANN model parameters.
Parameters | Basic Values
Number of input neurons | 9
Number of hidden layers | 1 or 2
Number of hidden neurons | 5, 9, 18, or 19
Number of training datasets | 3080
Number of test datasets | 3080
Number of output neurons | 1
Training algorithm | Scaled conjugate gradient
Transfer function | Logsig (hidden), purelin (output)
Learning rate | 0.01
Momentum | 0.9
Table 7. Parameters applied to the ANN-GA model.
Parameters | Basic Values
Number of input neurons | ts (= 3–6)
Number of hidden layers | 2
Number of hidden neurons | ts
Number of output neurons | 1
Gen | 150
Npop | 10 × ts
Table 8. Model sensitivity according to the input data combination.
Case No. | Feature Combination | MAE | MSE | R
1 | F1 | 0.4634 | 0.1156 | 0.96843
2 | F1 + F2 | 0.4266 | 0.0861 | 0.9789
3 | F1 + F3 | 0.2602 | 0.0531 | 0.98738
4 | F1 + F2 + F3 | 0.2528 | 0.0371 | 0.99107
Table 9. Training and test results.
Case | Hidden Layers | Structure | Training MAE | Training MSE | Training R | Test MAE | Test MSE | Test R
1 | 1 | 9-5-1 | 0.2768 | 0.164 | 0.98055 | 0.2859 | 0.2258 | 0.9758
2 | 1 | 9-9-1 | 0.2786 | 0.1179 | 0.98669 | 0.2878 | 0.2545 | 0.96896
3 | 1 | 9-18-1 | 0.2587 | 0.0942 | 0.9893 | 0.2805 | 0.2771 | 0.96589
4 | 1 | 9-19-1 | 0.2646 | 0.0786 | 0.9912 | 0.2709 | 0.1808 | 0.97421
5 | 2 | 9-5-5-1 | 0.2528 | 0.1037 | 0.98875 | 0.2934 | 0.2088 | 0.97274
6 | 2 | 9-9-9-1 | 0.2501 | 0.0794 | 0.99082 | 0.2735 | 0.1545 | 0.98406
7 | 2 | 9-18-18-1 | 0.2434 | 0.0615 | 0.99277 | 0.2652 | 0.2463 | 0.97135
8 | 2 | 9-19-19-1 | 0.2377 | 0.0315 | 0.99612 | 0.2391 | 0.188 | 0.9807
Table 10. ANN training and test results at the optimized temperature-measurement positions.
Number of Measurement Points | Structure | Training MAE | Training MSE | Training R | Test MAE | Test MSE | Test R
3 | 5-11-11-1 | 0.3384 | 0.0415 | 0.98939 | 0.408 | 0.2016 | 0.97290
4 | 6-13-13-1 | 0.1772 | 0.0234 | 0.99463 | 0.2152 | 0.1087 | 0.97399
5 | 7-15-15-1 | 0.1707 | 0.0189 | 0.99561 | 0.1878 | 0.1071 | 0.97771
6 | 8-17-17-1 | 0.141 | 0.0078 | 0.99795 | 0.1541 | 0.0919 | 0.98393
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
