Article

Prediction of Environmental Parameters for Predatory Mite Cultivation Based on Temporal Feature Clustering

1 School of Electronic and Electrical Physics, Fujian University of Technology, Fuzhou 350014, China
2 School of Materials Science and Engineering, Xiamen University of Technology, Xiamen 361024, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(18), 3667; https://doi.org/10.3390/electronics13183667
Submission received: 15 August 2024 / Revised: 8 September 2024 / Accepted: 9 September 2024 / Published: 15 September 2024

Abstract

With the significant annual increase in market demand for biopesticides, the industrial production demand for predatory mites, which hold the largest market share among biopesticides, has also been rising. To achieve efficient and low-energy consumption control of predatory mite breeding environmental parameters, accurate estimation of breeding environmental parameters is necessary. This paper collects and pre-processes hourly time series data on temperature and humidity from industrial breeding environments. Time series prediction models such as SVR, LSTM, GRU, and LSTNet are applied to model and predict the historical data of the breeding environment. Experiments validate that the LSTNet model is more suitable for such environmental modeling. To further improve prediction accuracy, the training data for the LSTNet model is enhanced using hierarchical clustering of time series features. After augmentation, the root mean square error (RMSE) of the temperature prediction decreased by 27.3%, and the RMSE of the humidity prediction decreased by 32.8%, significantly improving the accuracy of the multistep predictions and providing substantial industrial application value.

1. Introduction

With the vigorous promotion of green ecological agriculture, the market demand for biopesticides has increased significantly year by year. Various types of biopesticides have already been introduced and applied in China, including three main categories: microbial pesticides, plant-derived pesticides, and natural enemy insects, such as predatory mites [1]. Microbial pesticides and plant-derived pesticides are advantageous due to their easy degradation and low residue levels. Predatory mites can control a wide range of pests and have broad adaptability to various crops. Under suitable environmental conditions, predatory mites can survive for long periods and provide sustained pest control. Since predatory mite-related products hold the largest market share, the efficiency of industrial-scale production of predatory mites faces significant challenges.
In the industrial cultivation and production of predatory mites, considerable energy is required to maintain the environmental parameters necessary for the mites’ life cycle, such as temperature, humidity, and carbon dioxide concentration. To achieve adaptive regulation of these parameters and ensure uniform control within the breeding environment, precise modeling and prediction of these environmental parameters are essential. Industrial-scale breeding of predatory mites typically uses semi-enclosed greenhouse facilities, where many environmental variables, such as temperature and humidity, exhibit certain temporal sequences, correlations, periodicity, and seasonality. Time series prediction models can be employed to model and forecast these parameters, enabling highly efficient and low-energy control of the production environment.
This study focuses on Amblyseius cucumeris, a species of predatory mite with specific requirements for temperature and humidity. Research indicates that A. cucumeris achieves its highest reproduction rate and predation efficiency at temperatures around 25 °C and relative humidity levels of 60–80% [2,3]. Under optimal environmental conditions, A. cucumeris actively seeks out and preys on its targets. When the temperature and humidity are ideal, these mites can develop rapidly and lay a large number of eggs within a short period. Accurate time series prediction of temperature and humidity is therefore crucial for the breeding and deployment of A. cucumeris.
In recent years, researchers have proposed various time series methods for temperature and humidity prediction, including ARIMA, Support Vector Machines (SVM), Random Forests, and Artificial Neural Networks (ANN). These methods have been widely applied for temperature and humidity prediction in greenhouse environments. For example, Tsai et al. [4] used Random Forest combined with weather forecast data to predict soil temperature and moisture in greenhouses 1 to 48 h ahead. Chen et al. [5] combined ARIMA and Grey Prediction models to predict global average temperatures. Zeynoddin et al. [6] used a nonlinear elastic net to predict daily soil temperature. Choi et al. [7] proposed a model based on Multilayer Perceptrons (MLP) to predict air temperature and relative humidity for the next 10 to 120 min. Taki et al. [8] studied a model based on Radial Basis Function Networks to predict the temperature of air, soil, and plants.
Although these machine learning algorithms perform well in short-term predictions, their accuracy decreases in multistep predictions. Traditional algorithms also face challenges in predicting multiple time points. To address this issue, researchers have started to use concepts from Recurrent Neural Networks (RNN) to incorporate time series data into their models to improve adaptability and performance. For example, Eraliev et al. [9] used an LSTM model to predict temperature, humidity, and CO2 in hydroponic greenhouses, showing good performance for short intervals but significant performance drops for longer intervals. Wu et al. [10] proposed a model based on deep spatial and temporal networks to predict the distribution and trend of temperature over regional scales for the next 3 to 6 h. Guo et al. [11] effectively reduced temperature prediction errors by combining spatiotemporal attention with LSTM. Ahn et al. [12] compared the performance of Autoformer, LSTM, SegRNN, and DLinear in predicting temperature, relative humidity, and CO2 concentration in greenhouse environments, with SegRNN showing superior performance. Wang and Chen [13] introduced a spatiotemporal self-attention-based LSTNet model that can adapt to different data patterns and prediction needs. Jin et al. [14] proposed a Bidirectional Self-Attention Encoder–Decoder Architecture (BEDA), in which BiLSTM is used as the fundamental unit for extracting time series features and a multihead self-attention mechanism is incorporated into the encoder–decoder framework to enhance prediction performance. Huang et al. [15] combined CNN with LSTM and introduced an attention mechanism, achieving high prediction accuracy for short-term forecasts; however, the prediction error increased significantly as the prediction horizon extended. Jia and Wei [16] used the DNNR model to perform multistep predictions of temperature and relative humidity; in their 24-h forecast results, the temperature and humidity errors reached 7 °C and 19%, respectively, indicating relatively high prediction errors.
In time series data of environmental parameters for predatory mite breeding, there are diverse data variation patterns. Due to the unavoidable long-term lag in multistep predictions, models often struggle to capture shifts in these data patterns in a timely manner, leading to prediction accuracies that fall short of practical application needs. To address this issue, this paper proposes a modeling and prediction approach using an LSTNet model combined with hierarchical clustering of temporal features. The aim is to improve the accuracy of environmental parameter predictions, thereby providing a basis for decision-making and carbon reduction control of these parameters.

2. Data Acquisition and Pre-Processing

2.1. Data Acquisition

In this study, temperature and humidity data were collected from a predatory mite breeding industrial park affiliated with Fujian Yanxuan Biological Control Technology Co., Ltd. Five temperature and humidity sensors were installed in the breeding rooms, as shown in Figure 1. These sensors were connected to industrial control boards, which periodically collected the sensor output signals according to a set sampling frequency and converted the analog signal into a digital signal. The collected data were saved to a database using SCADA software (KingView7.5). The sensors used were analog temperature and humidity sensors with output signals in the form of voltage signals. The time resolution of the data was 1 h. The dataset covers the period from 1 January 2020, to 31 December 2022, with 80% of the data used for training, 10% for validation, and 10% for testing.
The parameter specifications of the temperature and humidity sensor are as follows: temperature measurement range: −30~70 °C; humidity measurement range: 0~100%RH; accuracy: ±0.2 °C @25 °C, ±1%RH (10%RH–90%RH); long-term stability: <0.04 °C/year, 0.5%RH/year.

2.2. Data Processing

In the collected raw data, there are missing values and outliers, as shown in Figure 2. To detect and remove outliers, the K-Nearest Neighbors (KNN) algorithm was employed. KNN is a simple yet effective method for classification and regression tasks that can also be used for outlier detection by leveraging similarity measured in terms of distance. For each data point in the dataset, the Euclidean distance to its K nearest neighbors is computed. If a point's distance significantly exceeds the average distance among its neighbors, it may be identified as an outlier. Based on a predefined threshold, often dependent on the local outlier factor, data points classified as outliers are excluded from the dataset.
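As a minimal sketch of this step (not the authors' code), the snippet below flags distance-based outliers among the complete rows of an hourly table using scikit-learn's LocalOutlierFactor, a KNN-based detector; the column names and neighbor count are illustrative assumptions.

```python
import pandas as pd
from sklearn.neighbors import LocalOutlierFactor

def drop_knn_outliers(df: pd.DataFrame, cols=("temperature", "humidity"),
                      n_neighbors: int = 20) -> pd.DataFrame:
    """Drop rows whose KNN-distance profile marks them as outliers.

    Rows containing missing values are left untouched here; they are imputed
    in the next pre-processing step.
    """
    complete = df.dropna(subset=list(cols))
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)           # Euclidean KNN distances
    labels = lof.fit_predict(complete[list(cols)].to_numpy())   # -1 = outlier, 1 = inlier
    return df.drop(index=complete.index[labels == -1])
```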
To address the missing values in the data, a Random Forest Regressor (an ensemble learning method) was used. The core idea is to train the model on the complete part of the dataset, i.e., the rows without missing values, by leveraging the relationships between available features. Once trained, the model predicts and fills in the missing values based on the patterns learned from the complete data. This approach not only handles missing data effectively but also preserves the overall variability and complexity of the dataset, leading to more accurate and robust imputations. The processed data are shown in Figure 3. Additionally, to eliminate the influence of different data scales, the processed data were normalized.
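A minimal sketch of such Random Forest imputation is shown below, assuming illustrative column names (timestamp, temperature, humidity) rather than the study's exact feature set.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def rf_impute(df: pd.DataFrame, target: str, predictors: list[str]) -> pd.DataFrame:
    """Fill NaNs in `target` using a Random Forest trained on rows where it is observed."""
    known = df[df[target].notna() & df[predictors].notna().all(axis=1)]
    missing = df[df[target].isna() & df[predictors].notna().all(axis=1)]
    if missing.empty:
        return df
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(known[predictors], known[target])
    df.loc[missing.index, target] = model.predict(missing[predictors])
    return df

# Illustrative usage: hour-of-day plus the other variable helps reconstruct gaps.
# df["hour"] = df["timestamp"].dt.hour
# df = rf_impute(df, target="temperature", predictors=["humidity", "hour"])
```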

2.3. Evaluating Indicator

The main focus of environmental parameter prediction in this study is on the temperature and humidity of the breeding environment. The root mean square error (RMSE) and mean absolute error (MAE) are used to evaluate prediction errors, with RMSE being more sensitive to outliers. The correlation coefficient (r) indicates the degree of correlation between the predicted results and the actual values, effectively assessing the accuracy of the model. Therefore, RMSE, MAE, and r are selected as evaluation metrics to assess the predictive capability of the model. The formulas are as follows:
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2},
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|,
r = \frac{E\left[(y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})\right]}{\sqrt{\mathrm{Var}(y_i)\,\mathrm{Var}(\hat{y}_i)}},
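For concreteness, the three metrics can be computed directly with NumPy; this is a plain transcription of the formulas above, not code from the study.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return RMSE, MAE, and the Pearson correlation coefficient r."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    r = np.corrcoef(y_true, y_pred)[0, 1]   # covariance normalized by both standard deviations
    return rmse, mae, r
```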

3. Time Series Prediction Methods

Time series forecasting methods are used to predict future values of data sequences that change over time. These methods mainly include classical time series modeling techniques, machine learning approaches, and deep learning methods. Classical time series modeling techniques encompass the Autoregressive Integrated Moving Average (ARIMA) model, Seasonal ARIMA (SARIMA), and Exponential Smoothing (ETS). Among these, the most commonly used method is the ARIMA model, which consists of three components: autoregression (AR), differencing (I), and moving average (MA). The AR component calculates the current value as a linear combination of past values. The differencing component computes the differences between the current and past values to reduce or eliminate trends and seasonality. The MA component calculates the current observation as a linear combination of past error terms [17,18]. The ARIMA model requires the time series to be stationary and typically achieves this by differencing to reduce or eliminate trends and seasonal characteristics in the data. However, when future data patterns deviate from those in the input time series (e.g., due to new trends or cyclical changes), the ARIMA model's predictions may become less accurate.
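As a point of reference only (ARIMA is not among the models evaluated later), fitting and forecasting an hourly series with statsmodels looks roughly like this; the file name, column name, and (p, d, q) order are placeholder assumptions.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hourly temperature series indexed by timestamp (illustrative data source).
temps = (pd.read_csv("breeding_room.csv", parse_dates=["timestamp"], index_col="timestamp")
           ["temperature"].asfreq("h"))

# order = (p, d, q): AR lags, differencing passes, MA lags; (2, 1, 2) is only a placeholder.
result = ARIMA(temps, order=(2, 1, 2)).fit()
forecast = result.forecast(steps=24)   # 24-hour-ahead point forecasts
```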
Machine learning methods for time series forecasting include Support Vector Regression (SVR), Random Forest Regression, and Gradient Boosting Regression. Among these representative algorithms, SVR is a nonlinear regression method based on Support Vector Machines (SVM) and is commonly used for time series forecasting. SVR is particularly effective in handling nonlinear relationships in time series data. By using kernel functions, SVR can map input features into a high-dimensional space, enabling it to better capture complex patterns and nonlinear relationships within the data [19,20]. However, when the time series data exhibit complex dynamic patterns, long-term dependencies, or involve very large datasets, the predictive performance of SVR may be limited. SVR can be employed to evaluate the performance of traditional machine learning methods in time series forecasting tasks for breeding environment parameters.
Deep learning methods for time series forecasting include Long Short-Term Memory networks (LSTM), gated recurrent units (GRU), 1D Convolutional Neural Networks (1D-CNN), and Transformer-based models.
LSTM is a model specifically designed to overcome the long-term dependency problem of traditional Recurrent Neural Networks (RNNs). LSTM introduces forget gates, input gates, and output gates to control the flow of information. Through these gates and memory cells, LSTM can effectively capture and process long-term dependencies and better retain and transmit gradient information during training [21,22,23,24]. Due to these capabilities, LSTM is widely applied in time series forecasting tasks and serves as an important benchmark model.
Gated Recurrent Unit (GRU) is a variant similar to LSTM, designed to simplify the structure of LSTM and reduce the number of parameters. GRU includes a forget gate and an update gate, controlling the flow of information through these two gates. Compared to LSTM, GRU has a more compact structure with only two gates, reducing the network’s complexity. This simplification leads to lower computational costs, and in some tasks, the GRU can perform similarly to the LSTM [25,26,27]. As a simplified version of LSTM, the GRU helps evaluate the trade-off between model complexity and performance.
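A minimal PyTorch sketch of such a recurrent baseline is shown below: a single GRU layer encodes the input window and a linear head regresses the next values. The hidden size and window length follow Table 1; everything else is an illustrative assumption, not the study's exact configuration.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Minimal GRU baseline: encode an input window, regress the next horizon steps."""
    def __init__(self, n_features: int = 2, hidden: int = 100, horizon: int = 1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features * horizon)
        self.n_features, self.horizon = n_features, horizon

    def forward(self, x):                       # x: (batch, window, n_features)
        _, h = self.gru(x)                      # h: (1, batch, hidden), last hidden state
        out = self.head(h[-1])                  # (batch, n_features * horizon)
        return out.view(-1, self.horizon, self.n_features)

# Example: predict temperature and humidity 24 h ahead from a 78-step window.
model = GRUForecaster(n_features=2, hidden=100, horizon=24)
y_hat = model(torch.randn(32, 78, 2))           # -> shape (32, 24, 2)
```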
The CNN-LSTM is a hybrid deep learning model that combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks [28]. This model leverages the spatial feature extraction capabilities of CNNs along with the temporal feature capturing abilities of LSTMs, making it highly effective in handling complex data characteristics in time series forecasting tasks. In this model, the convolution layer extracts local information of the data through the convolution kernel and further reduces the dimension of the data through the pooling layer to retain important features. The LSTM layer receives the features extracted from the convolutional part and learns and memorizes the long-term dependencies of the data through its gating mechanism. For tasks like temperature and humidity time series forecasting, the CNN-LSTM model effectively captures both long-term and short-term dependencies within the sequence.
The Informer model is a deep learning model based on transformer architecture, designed for long time series prediction tasks, and uses a sparse self-attention mechanism to improve computational efficiency and the ability to process long time series [29]. Informer uses a generative decoder that simultaneously attends to the encoded representation of the input sequence and the generated output sequence via a cross-attention mechanism. This mechanism allows the decoder to effectively leverage historical information and prediction results, generating the future time series step by step and thereby better capturing long-term dependencies within the sequence [30]. However, the Informer model relies mainly on the self-attention mechanism to capture dependencies in the time series without specialized mechanisms to handle complex nonlinear relationships between input features. The Informer is designed for handling long sequences; therefore, if the data collection frequency is low or the sequence length is insufficient, the model may not fully exploit its advantages. Additionally, overly short sequences may lead to overfitting and short-term fluctuations.
This paper adopts the Long- and Short-term Time-series Network (LSTNet) to estimate breeding environment parameters, introduces a multihead attention mechanism to improve the ability to capture dependencies at different time steps, and combines time series feature clustering to improve the accuracy of multistep estimation.

3.1. Long- and Short-Term Time Series Networks

The LSTNet model is a fusion neural network model designed for time series prediction, combining a convolutional neural network (CNN), gated recurrent units (GRU), and attention mechanisms. The model architecture comprises a CNN module, a recurrent module, a recurrent-skip module, and a highway module. The network structure is shown in Figure 4.
The convolutional module serves as the first layer in the LSTNet model, constituting a convolutional neural network without pooling layers [31]. Within the LSTNet model, the convolutional module plays a crucial role in extracting short-term patterns and local dependencies from time series data. Through convolutional operations, this module achieves the extraction of local features.
The recurrent module utilizes gated recurrent units (GRU) to address the long-term dependency relationships within time series data. Leveraging the gate mechanisms of the GRU, it selectively memorizes and forgets past information, facilitating the modeling of long-term dependency relationships in time series data.
The recurrent skip module aims to overcome the issue of gradient vanishing that traditional GRU and LSTM recurrent layers may encounter when dealing with very long-term dependency relationships. This module exploits the periodic patterns present in time series data and extends the temporal span of information flow through skip connections. The introduction of periodic information helps capture long-term dependency relationships in the data, thereby improving the optimization process of the model.
The highway module is employed to address the issue of input signal scale variations. It decomposes the final prediction into linear (AR model) and nonlinear components. This approach enables the LSTNet model to handle scale variations in real-world data more effectively, consequently enhancing prediction accuracy.
The LSTNet model excels at simultaneously capturing both long-term and short-term dependency relationships in time series data. Through the integration of convolutional layers, gated recurrent units, and attention mechanisms, the model effectively captures local features, long-term dependencies, and the relative importance of different time steps in the data. This combination results in outstanding performance of the LSTNet model in time series prediction tasks, especially for complex nonlinear time series data [32].
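The sketch below outlines these four modules in PyTorch, in the spirit of Lai et al. [31]; the hyperparameter names mirror Table 1, but the implementation details (activation, folding of the skip component, output handling) are simplified assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class LSTNetSketch(nn.Module):
    """Simplified LSTNet: CNN -> GRU -> recurrent-skip GRU -> highway (AR) component."""
    def __init__(self, n_features=2, window=78, hid_cnn=100, hid_rnn=100,
                 hid_skip=10, skip=24, kernel=6, highway_window=3, dropout=0.2):
        super().__init__()
        self.skip, self.hw = skip, highway_window
        self.conv = nn.Conv1d(n_features, hid_cnn, kernel_size=kernel)   # short-term patterns
        self.gru = nn.GRU(hid_cnn, hid_rnn, batch_first=True)            # long-term dependencies
        self.gru_skip = nn.GRU(hid_cnn, hid_skip, batch_first=True)      # periodic dependencies
        self.drop = nn.Dropout(dropout)
        conv_len = window - kernel + 1
        self.pt = conv_len // skip                                        # periods seen by skip-RNN
        self.fc = nn.Linear(hid_rnn + skip * hid_skip, n_features)
        self.highway = nn.Linear(highway_window, 1)                       # linear AR part

    def forward(self, x):                               # x: (batch, window, n_features)
        b = x.size(0)
        c = torch.relu(self.conv(x.transpose(1, 2)))    # (batch, hid_cnn, conv_len)
        c = self.drop(c).transpose(1, 2)                # (batch, conv_len, hid_cnn)
        _, h = self.gru(c)
        r = self.drop(h[-1])                            # (batch, hid_rnn)
        # Recurrent-skip: fold the sequence so steps one period apart become adjacent.
        s = c[:, -self.pt * self.skip:, :]
        s = s.view(b, self.pt, self.skip, -1).permute(0, 2, 1, 3)
        s = s.reshape(b * self.skip, self.pt, -1)
        _, hs = self.gru_skip(s)
        s = self.drop(hs[-1]).reshape(b, -1)            # (batch, skip * hid_skip)
        nonlinear = self.fc(torch.cat([r, s], dim=1))   # (batch, n_features)
        # Highway component: autoregression on the last few raw observations.
        ar = self.highway(x[:, -self.hw:, :].permute(0, 2, 1)).squeeze(-1)
        return nonlinear + ar                           # final prediction per feature
```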

3.2. MultiHead Attention Mechanism

The multihead attention is an extension of the self-attention mechanism. It enhances the model’s ability to express and capture global information by computing multiple attention heads in parallel, with each head focusing on different aspects of the input sequence [33].
The multihead attention mechanism performs linear transformations on the input sequence to obtain queries (Q), keys (K), and values (V) for multiple attention heads. The linear transformations are as follows:
Q_i = X W_i^Q, \quad K_i = X W_i^K, \quad V_i = X W_i^V, \quad i = 1, \ldots, n
where X is the input sequence, W_i^Q, W_i^K, and W_i^V are the weight matrices of the i-th attention head, and n is the number of attention heads. Each attention head independently computes scaled dot-product attention: the dot product of the query and the key is scaled, passed through a softmax, and used to form a weighted sum of the values [34]:
\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i), \quad i = 1, \ldots, n
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V
The outputs of all heads are concatenated:
\mathrm{concat} = [\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_n]
Perform a linear transformation on the concatenated result to obtain the final output:
\mathrm{output} = \mathrm{concat}\, W^{O}
In the LSTNet model, to fully leverage the skip-layer mechanism, it is necessary to set a reasonable skip connection parameter based on the periodicity of the time series. However, the multihead attention mechanism can capture dependencies between different time steps in the input sequence in parallel. Each attention head focuses on different aspects of the input sequence. By introducing the multihead attention mechanism into the LSTNet model, it is possible to capture both long-term and short-term dependencies without the need to set the skip parameter.
Furthermore, multihead attention provides a dynamic weighting mechanism, enabling the model to flexibly select the most useful information for prediction. This enhances prediction accuracy by allowing the model to dynamically adjust and emphasize the most relevant aspects of the input data. The introduction of the multihead attention mechanism into the LSTNet model is shown in Figure 5.
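As a sketch of how this could be wired in (not the authors' implementation), the convolutional features can be passed through torch.nn.MultiheadAttention and the attended context concatenated with the GRU state in place of the skip-RNN output. PyTorch requires the embedding size to be divisible by the number of heads, so the sketch assumes 128 CNN channels with the 16 heads from Table 1.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Multihead self-attention over CNN features, used instead of the recurrent-skip module."""
    def __init__(self, d_model: int = 128, heads: int = 16, dropout: float = 0.2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, heads, dropout=dropout, batch_first=True)

    def forward(self, c):                      # c: (batch, conv_len, d_model) from the CNN module
        q = c[:, -1:, :]                       # query built from the most recent time step
        ctx, weights = self.attn(q, c, c)      # every head attends over all time steps in parallel
        return ctx.squeeze(1), weights         # (batch, d_model) context, attention map

# Usage idea: concatenate the returned context with the GRU hidden state before the
# final linear layer, so no skip parameter needs to be tuned to the data's period.
```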

3.3. Temporal Feature Hierarchical Clustering (TSFHC)

Temporal feature hierarchical clustering is a method used for clustering time series data. This approach combines the concepts of temporal features and hierarchical clustering, aiming to group time series with similar data variation patterns into the same category [35].
The main idea of this method is to transform the time series data into feature vectors and then use a hierarchical clustering algorithm to cluster these feature vectors. Various techniques can be employed to extract multiple features of the time series during the transformation process, such as statistical features, frequency domain features, or time domain features. The similarity or distance between feature vectors is then calculated as a clustering metric. Using a hierarchical clustering algorithm, a dendrogram (a tree-like structure) is constructed, which ultimately divides the time series into different clusters. The clustering process is illustrated in Figure 6.
Time series with different data variation patterns are divided into separate datasets by the hierarchical clustering algorithm based on time series features, and an LSTNet model is trained separately on each dataset. Each resulting LSTNet model then performs well on data exhibiting the corresponding variation pattern.
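The clustering step itself can be written compactly with SciPy; Ward linkage and a distance-based cut are illustrative assumptions, as the paper does not state the exact linkage criterion.

```python
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_windows(feature_vectors, threshold):
    """Agglomerative clustering of per-window feature vectors.

    `feature_vectors` is an (n_windows, n_features) array; the returned labels
    select which subset (and hence which dedicated LSTNet model) each window feeds.
    """
    Z = linkage(feature_vectors, method="ward")               # build the dendrogram
    return fcluster(Z, t=threshold, criterion="distance")     # cut it at a chosen threshold
```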

4. Experimental Results and Discussion

4.1. Comparison of Modeling and Prediction Effects of Commonly Used Models

In this study, we used SVR, LSTM, GRU, LSTNet, CNN-BiLSTM, and Informer models to model and predict the temperature and humidity parameters in a predatory mite breeding environment. The models were implemented in Python using the PyTorch deep learning framework, and model performance was compared using multiple evaluation metrics. The hyperparameters of the LSTNet model are shown in Table 1. The computer configuration used for this study was as follows: CPU: i5-13600KF; GPU: 4060 Ti; memory: 32 GB; software environment: Python 3.11, PyTorch 2.1.0.

4.1.1. Experimental Results

To compare the performance of the SVR, LSTM, GRU, LSTNet, CNN-BiLSTM, and Informer models, we conducted experiments to predict the environmental temperature and humidity with prediction horizons of 1, 3, 6, and 24 steps. The predicted temperature and humidity results for each model are shown in Table 2 and Table 3, respectively.

4.1.2. Experimental Results Analysis

As shown in Figure 7, the RMSE and MAE of the LSTNet model for temperature and humidity predictions at 1, 3, 6, and 24 steps are lower than those of the other models, indicating that the LSTNet model has the smallest error between predicted and actual values in the temperature and humidity estimation task. At the same time, as the prediction horizon increases, the prediction performance of every model declines; the errors of the SVR, LSTM, and GRU models increase especially sharply, indicating that it is difficult for a single network model to achieve multistep prediction of breeding environment parameter time series. The Informer model performs poorly in this comparison, largely because it relies mainly on the self-attention mechanism to capture dependencies in the time series and lacks a dedicated mechanism for handling complex nonlinear relationships between input features. The CNN-BiLSTM model performs well in the multistep prediction task but still shows a certain error in single-step prediction compared with the LSTNet model.
As shown in Figure 8 and Figure 9, to understand the prediction performance of each model visually and accurately, a sequence of length 168 was extracted from the test data and the predictions were plotted to compare the five models in the one-step temperature and humidity prediction task. Each curve in the figures represents a model's predicted values over time, so the degree of consistency between each model's predictions and the actual values can be observed by visual inspection. Because the Informer model performs poorly in single-step prediction, it is not included in these plots.
The comparison charts provide a visual representation of the predicted and actual values for each model, allowing a comprehensive evaluation of their predictive capabilities.
In the comparison charts of the one-step temperature and humidity prediction results for the five plotted models, the LSTNet model performs the best, with minimal noticeable error between the predicted and actual values. It can also be observed that the LSTNet model captures relevant details more accurately than the other models, especially when the actual data trend changes. The autoregressive component in the LSTNet model effectively captures the direction and scale of data changes, significantly improving prediction accuracy. In contrast, the errors of the other models increase significantly when the data trend changes.
Comparing the performance of the six models on the prediction tasks at the four horizons shows that the LSTNet model is more suitable for modeling and predicting temperature and humidity data in the breeding environment of predatory mites. However, in multistep prediction tasks, the LSTNet model exhibits larger prediction errors for time series containing multiple data patterns due to the lag effect of the input data. Introducing time series clustering methods can therefore improve the model's adaptability to different data patterns.

4.2. Temperature and Humidity Estimation Based on TSFHC-LSTNet

In the time series of temperature and humidity in the breeding environment of predatory mites, there are multiple data variation patterns. In multistep prediction tasks for temperature and humidity, when the data trend and amplitude, i.e., the data variation pattern, change, a single LSTNet model finds it difficult to identify the shift in data variation patterns based solely on the two variables of temperature and humidity. Instead, it maintains the original data variation pattern for predicting the temperature and humidity, and the predicted results resemble the most recent cycle data in the input sequence, leading to significant prediction errors, as shown in Figure 10.
In this study, feature vectors corresponding to short sequences were constructed by calculating the mean, variance, and fifth-order polynomial fitting coefficients of the temperature and humidity sequences. The mean and variance represent the level and dispersion degree of the sequence values, respectively, while the polynomial fitting coefficients can represent the trend and amplitude of data changes to a certain extent.
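A minimal sketch of this feature construction for a single short window is shown below; the normalized time axis and the handling of temperature and humidity as separate windows are illustrative assumptions.

```python
import numpy as np

def window_features(window: np.ndarray) -> np.ndarray:
    """Feature vector for one short sequence: mean, variance, 5th-order polyfit coefficients."""
    t = np.linspace(0.0, 1.0, len(window))       # normalized time axis
    coeffs = np.polyfit(t, window, deg=5)        # trend/amplitude descriptor (6 coefficients)
    return np.concatenate(([window.mean(), window.var()], coeffs))

# For clustering, the temperature and humidity feature vectors of the same window
# can be concatenated into a single vector.
```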
Since temporal feature hierarchical clustering divides the dataset into multiple subsets according to different data variation patterns, each subset may contain a relatively small amount of data, which affects the predictive performance of the LSTNet model. A large amount of data is therefore required for clustering. Temporal feature hierarchical clustering was performed using temperature and humidity data from Fuzhou City from 2018 to 2023. Agglomerative hierarchical clustering was used, and based on the dendrogram of the clustering results, an appropriate threshold was selected to divide the data into eight clusters and construct the corresponding datasets.
The LSTNet model was then used to train on the data from these eight clusters, with each dataset divided into training, validation, and test sets in a ratio of 8:1:1. The experimental results are shown in Table 4 and Table 5.
To study the difference in model performance before and after data enhancement, the temperature and humidity time series from 2018 to 2023 were directly used as the dataset, with the 2023 data serving as the test set. The LSTNet model was trained with a prediction step of 24 steps. The test set results for temperature prediction had an RMSE of 2.2 and an MAE of 1.58, while the humidity prediction results had an RMSE of 6.71 and an MAE of 4.9.
Compared to the single LSTNet model, the TSFHC-LSTNet model (Temporal Sequence Feature Hierarchical Clustering with LSTNet) showed significant improvement in prediction accuracy. The temperature prediction RMSE decreased by 27.3%, and the MAE decreased by 24.1%. The humidity prediction RMSE decreased by 32.8%, and the MAE decreased by 31.2%. These results indicate a significant improvement in the prediction accuracy after clustering and modeling.
Using the TSFHC-LSTNet model can significantly reduce the prediction errors caused by changes in data variation patterns. As shown in Figure 11, in the temperature prediction on 29 December 2023, the TSFHC-LSTNet model achieved an MAE of 1.81, which is a 39% reduction compared to the MAE of 2.97 from the single LSTNet model.
In situations in which the data variation patterns change less, the TSFHC-LSTNet model outperforms the LSTNet model. For example, as shown in Figure 12, the temperature prediction results on 26 July 2023, demonstrate that the TSFHC-LSTNet model achieved an MAE of 0.53, which is a 15.9% reduction compared to the MAE of 0.63 from the LSTNet model.
Similar to the temperature prediction, as shown in Figure 13, the humidity prediction results on 26 December 2023, indicate that the single LSTNet model had a significant error, with an MAE of 8.56. In contrast, the TSFHC-LSTNet model achieved an MAE of 4.91, which is a reduction of 42.6% compared to the LSTNet model. As illustrated in Figure 14, the humidity prediction results on 20 June 2023, showed that the LSTNet model had a certain prediction error, with an MAE of 2.95, while the TSFHC-LSTNet model achieved an MAE of 1.59, a reduction of 46.1% compared to the LSTNet model.
In summary, using the LSTNet model based on temporal feature hierarchical clustering can significantly reduce the prediction errors in multistep temperature and humidity prediction tasks compared to a single LSTNet model.

5. Energy Savings Estimation and Validation

Currently, industrial-scale predatory mite breeding facilities use a small closed-loop fixed parameter control method, integrating a single control room with independent third-party equipment. Due to the significant lag in breeding environment parameters, the system control tends to overshoot to ensure timely stability. This results in a narrower set range of breeding environment parameters compared to the actual adaptive range, ensuring biological activity and adaptability throughout the lifecycle of the breeding process.
Accurate environmental temperature predictions can optimize temperature control strategies in breeding environments, achieving energy consumption optimization for environmental parameter regulation. By predicting external temperature changes and adjusting indoor environment parameter settings accordingly, significant reductions in the energy consumption of equipment such as air conditioners can be achieved. For example, using pre-cooling/pre-heating strategies, cooling/heating equipment can be turned on or off in advance, or control parameters can be optimized during significant external temperature changes. Load distribution can also be optimized by reasonably distributing cooling/heating loads based on the predicted temperatures, thereby reducing the additional energy consumption caused by frequent equipment starts and stops. After analyzing the company's existing equipment and communicating with the company, the project team chose to drive the corresponding equipment on the existing control system based on changes in the predicted environmental parameters, achieving energy savings by ensuring that equipment operates only when necessary.
Theoretical Calculation: In production practice, due to the large time lag characteristic of environmental temperature, a certain control margin is usually reserved in the actual temperature control process. Taking the breeding of Amblyseius cucumeris as an example, its acceptable temperature range is 15 °C–30 °C, typically set to 18 °C–27 °C in actual temperature control. Using a temperature control strategy based on predicted environmental temperature values can improve temperature control accuracy, allowing the temperature control range to be set to 15.2 °C–29.8 °C, reducing equipment operating time and electrical energy consumption.
Taking a breeding room with a length, width, and height of 6 m by 3 m by 3.6 m as an example, the reduced power consumption is estimated, as shown in Figure 15. The reduced power consumption comprises the additional cooling capacity that would otherwise be required to bring the temperature from 29.8 °C down to 27 °C and the additional heating capacity that would otherwise be required to raise the temperature from 15.2 °C to 18 °C.
The additional cooling/heating capacity provided is approximately the heat energy of the heat exchange between the culture room and the outside world:
Q = A \cdot \Delta T \cdot U,
where A is the room wall area, \Delta T is the temperature difference across the wall, U is the wall thermal transmittance, R_{si} is the inner surface thermal resistance, R_{se} is the outer surface thermal resistance, and R is the wall material thermal resistance.
U = \frac{1}{R_{si} + R_{se} + R} = \frac{1}{0.12 + 0.04 + 0.0588} = 4.57\ \mathrm{W/(m^2 \cdot {}^{\circ}C)},
When the ambient temperature is greater than 29.8 °C or less than 15.2 °C, \Delta T is 2.8 °C and Q_1 = 1289.84\ \mathrm{W}. When the ambient temperature is between 27 °C and 29.8 °C or between 15.2 °C and 18 °C, \Delta T is 1.4 °C and Q_2 = 644.92\ \mathrm{W}.
The efficiency of air conditioners is usually expressed in terms of the Coefficient of Performance (COP). Assuming the COP of an air conditioner is 3.4, for every 1 kWh of electricity consumed, 3.4 kWh of cooling capacity can be provided. E_1 and E_2 are the electricity consumed per hour by the air conditioner under the two \Delta T conditions.
E_1 = \frac{Q_1 \times 3600}{3{,}600{,}000\ \mathrm{J/kWh} \times 3.4} = 0.3794\ \mathrm{kWh}, \quad E_2 = 0.1897\ \mathrm{kWh},
According to the temperature of Fuzhou in 2023, there are 642 h with a temperature greater than 29.8 °C, 935 h with a temperature between 27 °C and 29.8 °C, 2576 h with a temperature less than 15.2 °C, and 863 h with a temperature between 15.2 °C and 18 °C.
(642 + 2576) \times E_1 + (935 + 863) \times E_2 = 1561.989\ \mathrm{kWh}
Therefore, by adopting a temperature control strategy based on the estimated ambient temperature, a culture room can save 1561.989 kWh of electricity throughout the year.
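The annual figure can be reproduced with a few lines of Python. The heat-exchange area is an assumption here (all six surfaces of the 6 m x 3 m x 3.6 m room, 100.8 m^2), chosen because it reproduces the Q values quoted above.

```python
A = 2 * (6 * 3.6) + 2 * (3 * 3.6) + 2 * (6 * 3)   # assumed heat-exchange area, 100.8 m^2
U = 1 / (0.12 + 0.04 + 0.0588)                    # wall transmittance, ~4.57 W/(m^2*degC)
COP = 3.4                                         # air-conditioner coefficient of performance

Q1 = A * 2.8 * U                                  # extra load for a 2.8 degC band (~1289.8 W)
Q2 = A * 1.4 * U                                  # extra load for a 1.4 degC band (~644.9 W)
E1 = Q1 / (1000 * COP)                            # kWh of electricity per hour (~0.3794)
E2 = Q2 / (1000 * COP)                            # (~0.1897)

annual_kwh = (642 + 2576) * E1 + (935 + 863) * E2
print(round(annual_kwh, 1))                       # ~1562 kWh saved per culture room per year
```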
The project team conducted comparative experiments in two adjacent breeding rooms from 1 July 2023 to 15 July 2023. Actual measurements by the breeding enterprise showed that the power consumption of the breeding room using the temperature control strategy combined with temperature estimation was 72.3 kWh, while the power consumption of the control breeding room was 138.3 kWh, a saving of 66 kWh of electricity and an energy saving ratio of 47.7%. The experimental result differed by 2.1 kWh from the theoretical calculation, which largely verifies the energy-saving and carbon-reduction effect. The calculation error may be caused by indoor-outdoor air exchange due to the imperfect airtightness of the breeding room and regular staff inspections, the influence of indoor and outdoor humidity on the thermal conductivity of the walls, and possible differences in the electrical equipment of the two breeding rooms.

6. Conclusions and Discussion

6.1. Conclusions

This study used multiple time series prediction models to model and estimate the parameters of the predatory mite breeding environment. Among them, the LSTNet model outperformed the other models on multiple evaluation indicators across the prediction horizons in the predatory mite breeding environment parameter prediction task, and its short-term predictions were essentially consistent with the actual values. The RMSE, MAE, and correlation coefficient of the single-step temperature prediction were 0.28 °C, 0.2 °C, and 99.8%, respectively; those of the single-step relative humidity prediction were 2.08%, 1.28%, and 99.29%, respectively. The LSTNet model therefore performs strongly on short-term prediction of predatory mite breeding environment parameters.
This paper proposes an LSTNet model combined with hierarchical clustering of temporal features. The dataset is enhanced through a hierarchical clustering algorithm based on temporal features, so that data with similar variation patterns are grouped into the same dataset, which improves the multistep estimation accuracy of the LSTNet model. In the 24-step estimation task, compared with the single LSTNet model, the RMSE of the temperature estimation is reduced by 27.3% and the MAE by 24.1%, while the RMSE of the humidity estimation is reduced by 32.8% and the MAE by 31.2%.

6.2. Implications of Study and Future Work

This study aims to use historical data to predict future breeding environmental parameters to optimize the regulation strategy of environmental parameters for predatory mite breeding. Research has proven that an environmental temperature control strategy based on temperature prediction can effectively reduce power consumption, and a single culture room can save 1562 kWh per year. In large-scale industrial breeding, the energy-saving effect of this strategy is even more significant. Breeding factories can collect environmental data in real time through sensor networks and use models to predict future changes in environmental parameters. Based on these predicted values, the control system can adjust the operating status of the control equipment in advance, reduce power consumption, and significantly reduce energy consumption and production costs. In addition, the prediction results of this model can be used to optimize the breeding environment and improve the reproductive efficiency and quality of predatory mites, thereby improving their production efficiency. Therefore, the application of environmental regulation strategies based on environmental parameter prediction in large-scale breeding can not only achieve significant energy-saving effects but also improve production efficiency and product quality, bringing dual economic and environmental benefits to enterprises.
In this study, we combine the LSTNet model with the temporal feature hierarchical clustering algorithm to significantly improve the accuracy of multistep prediction. However, this approach also increases the model complexity and computational cost. In addition, the dataset used in this article comes from the breeding environment of a specific object in a specific location, and the scale and diversity of the dataset may be insufficient. Future research can consider introducing more diverse datasets, including data from different geographical locations, climate conditions, and different objects, to improve the generalization ability and applicability of the model. At present, research mainly focuses on the two environmental parameters of temperature and humidity, while other factors that may have an important impact on the growth and reproduction of predatory mites (such as carbon dioxide concentration and ammonia concentration) have not been included in the scope of research. Future research could consider introducing these parameters into the model to build a more comprehensive prediction system.

Author Contributions

Conceptualization, Y.M.; Data curation, H.L.; methodology, H.L.; software, H.L.; validation, H.L. and W.C. (Weijie Chen); formal analysis, H.L.; investigation, H.L. and Y.M.; resources, Y.M.; writing—original draft preparation, H.L.; writing—review and editing, Y.M. and H.L.; visualization, W.C. (Weijie Chen); supervision, Y.M. and W.C. (Wei Chen) and Q.W.; project administration, Y.M. and W.C. (Wei Chen) and Q.W.; funding acquisition: Y.M. and Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Fujian Provincial Department of Science and Technology, project number KY030240, Research and application of complete sets of equipment for industrial cultivation of predatory mites in pollution-free agriculture.

Data Availability Statement

Data are available on request due to restrictions, e.g., privacy or ethics.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Liu, X.; Cao, A.; Yan, D.; Ouyang, C.; Wang, Q.; Li, Y. Overview of Mechanisms and Uses of Biopesticides. Int. J. Pest Manag. 2021, 67, 65–72.
2. Al-Azzazy, M.M.; Alhewairini, S.S. Effect of Temperature and Humidity on Development, Reproduction, and Predation Rate of Amblyseius swirskii (Phytoseiidae) Fed on Phyllocoptruta oleivora (Eriophyidae) and Eutetranychus orientalis (Tetranychidae). Int. J. Acarol. 2020, 46, 304–312.
3. Kumari, M.; Sadana, G.L. Influence of Temperature and Relative Humidity on the Development of Amblyseius alstoniae (Acari: Phytoseiidae). Exp. Appl. Acarol. 1991, 11, 199–203.
4. Tsai, Y.-Z.; Hsu, K.-S.; Wu, H.-Y.; Lin, S.-I.; Yu, H.-L.; Huang, K.-T.; Hu, M.-C.; Hsu, S.-Y. Application of Random Forest and ICON Models Combined with Weather Forecasts to Predict Soil Temperature and Water Content in a Greenhouse. Water 2020, 12, 1176.
5. Chen, X.; Jiang, Z.; Cheng, H.; Zheng, H.; Cai, D.; Feng, Y. A Novel Global Average Temperature Prediction Model—Based on GM-ARIMA Combination Model. Earth Sci. Inform. 2024, 17, 853–866.
6. Zeynoddin, M.; Ebtehaj, I.; Bonakdari, H. Development of a Linear Based Stochastic Model for Daily Soil Temperature Prediction: One Step Forward to Sustainable Agriculture. Comput. Electron. Agric. 2020, 176, 105636.
7. Choi, H.; Moon, T.; Jung, D.H.; Son, J.E. Prediction of Air Temperature and Relative Humidity in Greenhouse via a Multilayer Perceptron Using Environmental Factors. J. Bio-Environ. Control 2019, 28, 95–103.
8. Taki, M.; Mehdizadeh, S.A.; Rohani, A.; Rahnama, M.; Rahmati-Joneidabad, M. Applied Machine Learning in Greenhouse Simulation; New Application and Analysis. Inf. Process. Agric. 2018, 5, 253–268.
9. Eraliev, O.; Lee, C.-H. Performance Analysis of Time Series Deep Learning Models for Climate Prediction in Indoor Hydroponic Greenhouses at Different Time Intervals. Plants 2023, 12, 2316.
10. Wu, S.; Fu, F.; Wang, L.; Yang, M.; Dong, S.; He, Y.; Zhang, Q.; Guo, R. Short-Term Regional Temperature Prediction Based on Deep Spatial and Temporal Networks. Atmosphere 2022, 13, 1948.
11. Guo, Y.; Zhang, S.; Yang, J.; Yu, G.; Wang, Y. Dual Memory Scale Network for Multi-Step Time Series Forecasting in Thermal Environment of Aquaculture Facility: A Case Study of Recirculating Aquaculture Water Temperature. Expert Syst. Appl. 2022, 208, 118218.
12. Ahn, J.Y.; Kim, Y.; Park, H.; Park, S.H.; Suh, H.K. Evaluating Time-Series Prediction of Temperature, Relative Humidity, and CO2 in the Greenhouse with Transformer-Based and RNN-Based Models. Agronomy 2024, 14, 417.
13. Wang, D.; Chen, C. Spatiotemporal Self-Attention-Based LSTNet for Multivariate Time Series Prediction. Int. J. Intell. Syst. 2023, 2023, 1–16.
14. Jin, X.-B.; Zheng, W.-Z.; Kong, J.-L.; Wang, X.-Y.; Zuo, M.; Zhang, Q.-C.; Lin, S. Deep-Learning Temporal Predictor via Bidirectional Self-Attentive Encoder–Decoder Framework for IOT-Based Environmental Sensing in Intelligent Greenhouse. Agriculture 2021, 11, 802.
15. Huang, S.; Liu, Q.; Wu, Y.; Chen, M.; Yin, H.; Zhao, J. Edible Mushroom Greenhouse Environment Prediction Model Based on Attention CNN-LSTM. Agronomy 2024, 14, 473.
16. Jia, W.; Wei, Z. Short Term Prediction Model of Environmental Parameters in Typical Solar Greenhouse Based on Deep Learning Neural Network. Appl. Sci. 2022, 12, 12529.
17. Shumway, R.H.; Stoffer, D.S. ARIMA Models. In Time Series Analysis and Its Applications; Springer Texts in Statistics; Springer International Publishing: Cham, Switzerland, 2017; pp. 75–163. ISBN 978-3-319-52451-1.
18. Ariyo, A.A.; Adewumi, A.O.; Ayo, C.K. Stock Price Prediction Using the ARIMA Model. In Proceedings of the 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Cambridge, UK, 26–28 March 2014; pp. 106–112.
19. Chen, Y.; Xu, P.; Chu, Y.; Li, W.; Wu, Y.; Ni, L.; Bao, Y.; Wang, K. Short-Term Electrical Load Forecasting Using the Support Vector Regression (SVR) Model to Calculate the Demand Response Baseline for Office Buildings. Appl. Energy 2017, 195, 659–670.
20. Lin, K.; Lin, Q.; Zhou, C.; Yao, J. Time Series Prediction Based on Linear Regression and SVR. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Haikou, China, 24–27 August 2007; Volume 1, pp. 688–691.
21. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
22. Fischer, T.; Krauss, C. Deep Learning with Long Short-Term Memory Networks for Financial Market Predictions. Eur. J. Oper. Res. 2018, 270, 654–669.
23. Lindemann, B.; Müller, T.; Vietz, H.; Jazdi, N.; Weyrich, M. A Survey on Long Short-Term Memory Networks for Time Series Prediction. Procedia CIRP 2021, 99, 650–655.
24. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. Nonlinear Phenom. 2020, 404, 132306.
25. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555.
26. Dey, R.; Salem, F.M. Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600.
27. Wang, Y.; Liao, W.; Chang, Y. Gated Recurrent Unit Network-Based Short-Term Photovoltaic Forecasting. Energies 2018, 11, 2163.
28. Elmaz, F.; Eyckerman, R.; Casteels, W.; Latré, S.; Hellinckx, P. CNN-LSTM Architecture for Predictive Indoor Temperature Modeling. Build. Environ. 2021, 206, 108327.
29. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 2–9 February 2021; Volume 35, pp. 11106–11115.
30. Jun, J.; Kim, H.K. Informer-Based Temperature Prediction Using Observed and Numerical Weather Prediction Data. Sensors 2023, 23, 7047.
31. Lai, G.; Chang, W.-C.; Yang, Y.; Liu, H. Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104.
32. Liu, R.; Chen, L.; Hu, W.; Huang, Q. Short-Term Load Forecasting Based on LSTNet in Power System. Int. Trans. Electr. Energy Syst. 2021, 31, e13164.
33. Canizo, M.; Triguero, I.; Conde, A.; Onieva, E. Multi-Head CNN–RNN for Multi-Time Series Anomaly Detection: An Industrial Case Study. Neurocomputing 2019, 363, 246–260.
34. Vaswani, A. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 4–9 December 2017.
35. Wang, X.; Smith, K.; Hyndman, R. Characteristic-Based Clustering for Time Series Data. Data Min. Knowl. Discov. 2006, 13, 335–364.
Figure 1. 3D Simulation Diagram of the Breeding Room.
Figure 2. Schematic diagram of raw data with missing values and outliers.
Figure 3. Schematic diagram of the data after removing outliers and filling in missing values.
Figure 4. LSTNet network architecture.
Figure 5. Network structure diagram of LSTNet model introducing the attention mechanism.
Figure 6. Time series feature hierarchical clustering flowchart.
Figure 7. Model error comparison bar chart.
Figure 8. Comparison chart of temperature single-step estimation results.
Figure 9. Comparison chart of humidity single-step estimation results.
Figure 10. Temperature estimate for 29 December 2023. Note: Figure 10 shows the estimated temperature and humidity values for 29 December 2023 using hourly temperature and humidity data from 21 December 2023 to 28 December 2023.
Figure 11. Comparison of temperature estimation results for 29 December 2023 before and after data augmentation.
Figure 12. Comparison of temperature estimation results before and after data augmentation on 26 July 2023.
Figure 13. Comparison of humidity estimation results before and after data augmentation on 26 December 2023.
Figure 14. Comparison of humidity estimation results before and after data augmentation on 20 June 2023.
Figure 15. Single-day temperature curve.
Table 1. LSTNet model hyperparameters.

Hyperparameter | Description | Value
hidCNN | Number of CNN hidden units | 100
hidRNN | Number of RNN hidden units | 100
heads | Number of attention mechanism heads | 16
history_length | Length of the input feature sequence | 78
pred_len | Prediction sequence length | 1/3/6/24
kernel | Convolutional kernel size of the CNN layer | 6
highway_window | Window size of the highway component | 3
clip | Threshold for gradient clipping | 10
batch_size | Batch size | 256
dropout | Dropout rate | 0.2
optimizer | Optimization algorithm | Adam
skip | Used to skip specific time steps | 24
hidSkip | Number of hidden units in the skip RNN | 10
L1Loss | Use L1 loss | 1
output_fun | Output activation function | sigmoid
Table 2. Evaluation index table of multi-model temperature prediction results.

Model | 1 step (RMSE / MAE / r) | 3 steps (RMSE / MAE / r) | 6 steps (RMSE / MAE / r) | 24 steps (RMSE / MAE / r)
LSTNet | 0.28 / 0.2 / 0.9980 | 0.92 / 0.69 / 0.9783 | 1.61 / 1.18 / 0.9338 | 2.66 / 2.13 / 0.8036
LSTM | 0.49 / 0.31 / 0.9958 | 0.97 / 0.69 / 0.9768 | 4.38 / 3.57 / 0.5112 | 3.75 / 2.81 / 0.7539
GRU | 0.44 / 0.28 / 0.9950 | 1.84 / 1.36 / 0.9371 | 4.21 / 3.08 / 0.5848 | 4.15 / 3.15 / 0.7356
SVR | 0.85 / 0.69 / 0.9850 | 2.41 / 1.90 / 0.8417 | 4.58 / 3.65 / 0.3674 | 2.70 / 2.19 / 0.8077
CNN-BiLSTM | 0.65 / 0.49 / 0.9898 | 1.15 / 0.89 / 0.9670 | 1.73 / 1.32 / 0.9358 | 2.41 / 1.88 / 0.7950
Informer | 2.61 / 2.07 / 0.8236 | 2.65 / 2.09 / 0.8113 | 2.98 / 2.32 / 0.7664 | 3.42 / 2.67 / 0.6775
Table 3. Evaluation index table of multi-model humidity prediction results.

Model | 1 step (RMSE / MAE / r) | 3 steps (RMSE / MAE / r) | 6 steps (RMSE / MAE / r) | 24 steps (RMSE / MAE / r)
LSTNet | 2.08 / 1.28 / 0.9929 | 4.60 / 3.39 / 0.9647 | 7.49 / 5.87 / 0.9065 | 9.36 / 7.73 / 0.8451
LSTM | 2.69 / 1.85 / 0.9881 | 8.53 / 7.15 / 0.8941 | 17.32 / 13.92 / 0.4659 | 14.67 / 12.12 / 0.5597
GRU | 3.17 / 2.2 / 0.9848 | 8.69 / 6.65 / 0.9097 | 18.73 / 14.21 / 0.4496 | 16.19 / 12.84 / 0.6128
SVR | 3.15 / 2.48 / 0.9841 | 9.22 / 6.92 / 0.8501 | 17.49 / 13.98 / 0.4272 | 9.56 / 7.49 / 0.8509
CNN-BiLSTM | 2.88 / 1.96 / 0.9854 | 5.22 / 4.10 / 0.9474 | 7.27 / 5.76 / 0.9143 | 9.47 / 7.35 / 0.8167
Informer | 8.63 / 6.73 / 0.8284 | 11.58 / 9.75 / 0.7713 | 11.35 / 9.51 / 0.7783 | 11.49 / 9.45 / 0.7546
Note: The evaluation indicators in Table 2 and Table 3 are calculated using the predicted value of the last time step in the prediction results. For example, in a prediction task with a prediction step of 24, 24 predicted values are generated for each sample, and the 24th predicted value and its corresponding actual value are used to calculate the RMSE, MAE, and r.
Table 4. TSFHC-LSTNet temperature estimation index.

Metric | Cluster1 | Cluster2 | Cluster3 | Cluster4 | Cluster5 | Cluster6 | Cluster7 | Cluster8 | Total
RMSE | 1.00 | 0.89 | 1.57 | 1.74 | 1.68 | 2.17 | 2.11 | 2.10 | 1.60
MAE | 0.79 | 0.67 | 1.29 | 1.42 | 1.33 | 1.73 | 1.62 | 1.68 | 1.20
r | 0.9601 | 0.9179 | 0.8248 | 0.8883 | 0.7670 | 0.6745 | 0.9118 | 0.9327 | 0.9746
R2 | 0.8565 | 0.8230 | 0.5230 | 0.7826 | 0.3921 | 0.1269 | 0.8006 | 0.8347 | 0.9469
Table 5. TSFHC-LSTNet humidity estimation index.

Metric | Cluster1 | Cluster2 | Cluster3 | Cluster4 | Cluster5 | Cluster6 | Cluster7 | Cluster8 | Total
RMSE | 4.20 | 5.06 | 3.49 | 4.13 | 4.53 | 5.73 | 5.32 | 4.26 | 4.51
MAE | 3.19 | 3.91 | 2.65 | 3.16 | 3.18 | 4.46 | 4.07 | 3.18 | 3.37
r | 0.9277 | 0.9259 | 0.8398 | 0.8937 | 0.9031 | 0.9542 | 0.9448 | 0.9538 | 0.9366
R2 | 0.8312 | 0.8279 | 0.6548 | 0.7872 | 0.7780 | 0.9028 | 0.8858 | 0.9002 | 0.8639
