Article

Enhanced Sea Horse Optimization Algorithm for Hyperparameter Optimization of Agricultural Image Recognition

1
College of Plant Protection, Jilin Agricultural University, Changchun 130118, China
2
College of Information Technology, Jilin Agricultural University, Changchun 130118, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 368; https://doi.org/10.3390/math12030368
Submission received: 6 December 2023 / Revised: 6 January 2024 / Accepted: 19 January 2024 / Published: 23 January 2024
(This article belongs to the Special Issue Advanced Research in Data-Centric AI)

Abstract

Deep learning technology has made significant progress in agricultural image recognition tasks, but tuning the parameters of deep models usually requires extensive manual intervention, which is time-consuming and inefficient. To address this challenge, this paper proposes an adaptive parameter tuning strategy that combines the sine–cosine algorithm with Tent chaotic mapping to produce an enhanced sea horse optimization algorithm (ESHO), improving the search ability and convergence stability of the standard sea horse optimization algorithm (SHO). Through adaptive optimization, the best parameter configuration of the ResNet-50 neural network is determined and the model's performance is optimized. ESHO shows better optimization results than comparison algorithms across a range of performance indicators. The improved model achieves 96.7% accuracy on the corn disease image recognition task and 96.4% accuracy on the jade fungus image recognition task. These results show that ESHO not only effectively improves the accuracy of agricultural image recognition but also reduces the need for manual parameter adjustment.

1. Introduction

Since its inception, deep learning has garnered widespread attention for its unique advantages and has been applied across various domains. However, as societal demands continue to increase, conventional neural networks are no longer sufficient to meet people’s needs. Consequently, a plethora of enhanced neural networks have emerged.
Wang et al. built a new downsampling attention module based on AlexNet and introduced the Mish activation function; a new fully connected layer module also reduced the number of network parameters, yielding a new model, AT-AlexNet [1]. In corn disease recognition, the accuracy of AT-AlexNet was significantly higher than that of other models. Fan et al. designed VGNet, a corn disease recognition system based on a pretrained VGG16 [2]; the experimental results show that the proposed model significantly outperforms other models. Dai et al. proposed an accurate detection and diagnosis system for corn leaf diseases based on multitask deep learning (MTDL-EPDCLD) [3], and the experimental results show that MTDL-EPDCLD can accurately and effectively identify corn diseases. Zeng et al. proposed a lightweight dense-scale network (LDSNet) [4]; its basic module, an improved dense dilated convolution (IDDC) module, is used for real-world corn leaf disease image recognition and reaches an accuracy of 95.4%. A large number of studies have shown that optimizing deep learning models can effectively improve their performance.
In recent years, the superior performance of swarm intelligence algorithms in the field of optimization has attracted extensive attention, especially for hyperparameter optimization [5]. Bahaa et al. used an improved swarm intelligence optimization algorithm to tune the hyperparameters of a convolutional neural network, constructing a new model, APSO-WOA-CNN [6]; the experimental results show that its performance is significantly better than that of other models. Paharia et al. improved the grey wolf optimization algorithm and applied it to optimize a convolutional neural network [7], significantly improving the performance of the newly constructed model. To solve the problem of CNN hyperparameter configuration, Wang et al. improved the particle swarm optimization algorithm and used it to optimize the hyperparameters of a CNN [8]; the experimental results show that the improved algorithm solves the CNN hyperparameter optimization problem and effectively improves CNN performance. In summary, it is feasible to use swarm intelligence optimization algorithms to tune the hyperparameters of neural network models, and doing so can effectively improve their performance. On this basis, this paper applies an improved sea horse optimization algorithm to the hyperparameter optimization of ResNet-50 and uses the resulting model to identify corn diseases. The main contributions of this paper are as follows.
In this paper, the improved sea horse optimization algorithm is applied to adaptively find the optimal parameter configuration of ResNet-50, thereby improving the performance of ResNet-50.
In this paper, the CEC2017 test functions are used to verify the performance of the improved sea horse optimization algorithm, and the ESHO-improved model is applied to jade fungus and corn image recognition, respectively, to verify the performance of the proposed model.
The rest of this paper is structured as follows.
Section 2 details the dataset and experimental methods. Section 3 mainly introduces the experimental environment and experimental results. Section 4 mainly summarizes the overall work and states the focus of future work.

2. Materials and Methods

2.1. Data Sources

2.1.1. Jade Fungus Dataset

Images of dried jade fungus (wood ear) were acquired with the FScan2000 acquisition device. As shown in Figure 1, the device body measures 570 mm × 430 mm × 280 mm. The imaging resolution is 16 megapixels (4608 × 3456), with a 1/2.3-inch CMOS camera sensor and a lens focal length of 8 mm. The light source is a 360° surround LED white light. The maximum shooting size is 400 mm × 300 mm, and the minimum accuracy is 0.12 mm.
The experimental data used in this study were collected in 2023 from Haotian Village, Najin Town, Taonan City, Jilin Province. The black board in the center of the apparatus held the edible fungus, and the same shooting distance and angle were maintained for each capture.
To further ensure the accuracy and reliability of the edible fungus data, this study meticulously annotated the image data of each jade fungus sample. Following the dried jade fungus grading standard DB22/T 2605-2016, we recorded detailed information on the size, quantity, shape, and color of the jade fungus slices, as well as the presence of spots and damage. These six indicators were used to grade the jade fungus into four levels: first-grade, second-grade, third-grade, and disqualified. The classification method is presented in Table 1.

2.1.2. Dataset on Corn Diseases

The experimental data in this paper come from a public dataset (https://www.kaggle.com/ accessed on 2 November 2023). The dataset contains a large number of pictures of diseased and healthy plants, aimed at research on plant disease classification and recognition. It comprises 4187 images covering three plant disease types as well as healthy plant samples: 1146 Blight images, 1306 Common-Rust images, 573 grey leaf spot images, and 1162 healthy images. The images were collected and annotated by plant disease experts and researchers and are representative and diverse.
For accurate plant disease classification and identification, each image is carefully annotated and classified into the following categories:
Blight: This category contains 1146 images of plants affected by Blight. Blight [9,10,11] is a common plant disease that causes the leaves and stems of plants to atrophy and turn yellow.
Common-Rust: The dataset contains 1306 Common-Rust images that show the appearance of Common-Rust spots on plant leaves. Common-Rust [12,13,14] is a plant disease caused by fungi that causes rust-red spots and disease spots on plant leaves.
Grey leaf spot: The 573 grey leaf spot images in the dataset reveal the grey leaf spot on the plant leaves. Grey leaf spot [15,16,17] is a common disease caused by fungi, which causes dark brown spots on the surface of plant leaves.
Health: 1162 images of healthy plants are also included in the dataset, which show healthy plants that have not been affected by any visible plant diseases.
Figure 2 below shows the three diseases and healthy comparison images.
Through the dataset, a wide range of representative samples of plant diseases and healthy plants can be obtained, which provides an important basis for subsequent research work. At the same time, in the process of constructing the dataset, we also fully consider the accuracy of annotation and the diversity of data samples to ensure the reliability and validity of the experimental results.

2.2. Experiment Method

2.2.1. Resnet-50 Model

This paper uses ResNet-50 [18], a deep convolutional neural network containing two basic layer types: convolutional layers and fully connected layers; there are 49 convolutional layers and one fully connected layer. The ResNet-50 structure consists of five parts. The first part contains no residual blocks and mainly preprocesses the input data through convolution, regularization, activation, and max-pooling computations. The second, third, fourth, and fifth parts have a similar structure, each composed of two kinds of residual block: the identity residual block and the convolutional residual block.
ResNet-50 is mainly used to solve image classification problems. Its advantage is the shortcut connection that lets the signal skip over layers to a later one, which eases the training of deep networks and significantly reduces the overall cost of network classification; all five parts contain convolutional and pooling layers.
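The idea of the shortcut connection can be sketched in a few lines of NumPy. This toy block uses plain dense matrices instead of convolutions, and the function names are illustrative only, not the paper's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def identity_block(x, w1, w2):
    """Toy identity residual block: y = relu(F(x) + x).
    The shortcut lets the input bypass the transform F entirely,
    which is what eases optimization in very deep networks."""
    f = relu(x @ w1) @ w2   # two-layer transform F(x)
    return relu(f + x)      # add the shortcut, then activate
```

With all-zero weights the transform F vanishes and the block reduces to the identity on non-negative inputs, which is exactly the fallback behavior residual learning relies on.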

2.2.2. Sea Horse Algorithm

In this section, we introduce the standard sea horse algorithm [19] in detail. The SHO algorithm simulates the movement, predation, and reproduction behaviors of sea horses; these three behaviors are the key components of SHO. To better balance exploration and exploitation, global and local search strategies are applied to the movement and predation behaviors, respectively.

Movement Behavior of the Seahorse

Seahorse locomotion is divided into two cases: in one, the seahorse spirals with the vortex of the ocean; in the other, it performs Brownian motion with the waves.
Case one: the seahorse spirals with the whirlpool of the ocean.
The seahorse spirals toward the current best position, and Lévy flight is used here to simulate the seahorse's moving steps. This helps SHO avoid falling into local optima; the seahorse's unique spiral motion constantly changes its rotation angle, which also expands the neighborhood of the existing local solution. This is mathematically expressed as follows:
$$X_{new}^{1}(t+1) = X_i(t) + Levy(\lambda)\left[\left(X_{elite}(t) - X_i(t)\right) \times x \times y \times z + X_{elite}(t)\right] \quad (1)$$
where x, y, and z are the three coordinate components (x, y, z) of the spiral motion, respectively.
with the Lévy step drawn using the standard deviation
$$\sigma = \left( \frac{\Gamma(1+\lambda) \times \sin\left(\frac{\pi\lambda}{2}\right)}{\Gamma\left(\frac{1+\lambda}{2}\right) \times \lambda \times 2^{\frac{\lambda-1}{2}}} \right)^{\frac{1}{\lambda}} \quad (2)$$
Case two: the seahorse performs Brownian motion with the waves.
On the other side of the $r_1$ cut-off point, Brownian motion is used to simulate the motion step of the seahorse so that SHO better explores the search space, expressed as follows:
$$X_{new}^{1}(t+1) = X_i(t) + rand \times l \times \beta_t \times \left(X_i(t) - \beta_t \times X_{elite}\right) \quad (3)$$
where l is a constant coefficient (set to l = 0.05 in this paper) and $\beta_t$ is the Brownian motion random-walk coefficient.
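The two movement cases can be sketched as a minimal NumPy routine. The branch choice on a standard-normal r1 and the spiral coordinates (u, v defining the helix) follow the usual SHO formulation and are assumptions here, not taken verbatim from this paper:

```python
import math
import numpy as np

def levy(dim, lam=1.5, rng=None):
    """Levy step via Mantegna's method; sigma matches Eq. (2)."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def seahorse_move(x, x_elite, rng=None, l=0.05, u=0.05, v=0.05):
    """One movement step: spiral (Levy) or Brownian, chosen by r1."""
    rng = rng or np.random.default_rng()
    r1 = rng.standard_normal()
    if r1 > 0:  # case one: spiral drift toward the elite, Eq. (1)
        theta = rng.uniform(0, 2 * math.pi)
        rho = u * math.exp(theta * v)               # helix radius
        xs, ys, zs = rho * math.cos(theta), rho * math.sin(theta), rho * theta
        return x + levy(x.size, rng=rng) * ((x_elite - x) * xs * ys * zs + x_elite)
    # case two: Brownian motion with the waves, Eq. (3)
    beta = rng.standard_normal(x.size)              # Brownian coefficient
    return x + rng.random() * l * beta * (x - beta * x_elite)
```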

Predatory Behavior of the Seahorse

Seahorse predation has two outcomes: success and failure. To simulate both cases, this article introduces a random number r2. In nature, the predation success rate of the seahorse is about 90%, so we set the critical value such that r2 > 0.1 means the seahorse finally captures the prey successfully; otherwise, the prey moves faster than the seahorse and escapes, and the capture fails. This is expressed by the following mathematical model:
$$X_{new}^{2}(t+1) = \begin{cases} \alpha \times \left(X_{elite} - rand \times X_{new}^{1}(t)\right) + (1-\alpha) \times X_{elite}, & r_2 > 0.1 \\ (1-\alpha) \times \left(X_{new}^{1}(t) - rand \times X_{elite}\right) + \alpha \times X_{new}^{1}(t), & r_2 \le 0.1 \end{cases} \quad (4)$$
where $X_{new}^{1}(t)$ is the position of the seahorse after the movement phase at time t, $r_2$ is a random number in [0, 1], and $\alpha$ is the step-length coefficient of the predation phase, which decreases linearly as the iteration progresses.
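A compact sketch of the predation step follows. The linear decay of the step length alpha is taken from the description above; the function name is illustrative:

```python
import numpy as np

def seahorse_predation(x_new1, x_elite, t, T, rng=None):
    """Predation step, Eq. (4): success when r2 > 0.1 (the 90% success rate).
    alpha is assumed to decrease linearly over the T iterations."""
    rng = rng or np.random.default_rng()
    r2 = rng.random()
    alpha = 1.0 - t / T               # linearly decreasing step length
    if r2 > 0.1:                      # success: exploit around the elite
        return alpha * (x_elite - rng.random() * x_new1) + (1 - alpha) * x_elite
    # failure: the prey escapes, search away from the elite
    return (1 - alpha) * (x_new1 - rng.random() * x_elite) + alpha * x_new1
```

Note how the two branches swap the roles of the elite and the current position: success pulls the seahorse toward the elite (exploitation), while failure keeps it near its own trajectory (exploration).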

Reproductive Behavior of the Seahorse

It is worth noting that in nature it is the male seahorse that carries the offspring. Accordingly, in the SHO algorithm, the part of the population with better fitness values is used as the male population for reproduction and the remainder as the female population, so that the next generation inherits better characteristics. The mathematical expression is as follows:
$$father = X_{sort2}(1 : pop/2), \qquad mother = X_{sort2}(pop/2 + 1 : pop) \quad (5)$$
Here, $X_{sort2}$ represents the population sorted in ascending order of the fitness values obtained after the predation behavior, and father and mother represent the male and female populations, respectively.
To keep the SHO algorithm simple, it is assumed that each randomly mated pair of seahorses produces only one offspring, with the following expression:
$$X_i^{offspring} = r_3 X_i^{father} + (1 - r_3) X_i^{mother} \quad (6)$$
where $r_3$ is a random number in [0, 1], i is a positive integer in the range [1, $pop/2$], and $X_i^{father}$ and $X_i^{mother}$ represent individuals randomly drawn from the male and female populations, respectively.
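The sort-split-mate scheme of Eqs. (5) and (6) can be sketched as follows. For simplicity this sketch pairs the i-th father with the i-th mother, whereas the text mates pairs at random:

```python
import numpy as np

def seahorse_breed(population, fitness, rng=None):
    """Eqs. (5)-(6): split the fitness-sorted population into fathers
    (better half) and mothers (worse half), one offspring per pair."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness)                # ascending fitness (minimization)
    ranked = population[order]
    half = len(population) // 2
    fathers, mothers = ranked[:half], ranked[half:2 * half]
    r3 = rng.random((half, 1))                 # blend factor per offspring
    return r3 * fathers + (1 - r3) * mothers   # convex combination, Eq. (6)
```

Because each offspring is a convex combination of two parents, every component stays within the range spanned by the current population.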

2.2.3. Sine–Cosine Algorithm Optimizes Chaotic Sea Horse Algorithm

The enhanced seahorse optimization algorithm proposed in this paper improves on SHO in two respects: first, a chaotic mapping is introduced into the initialization of the seahorse optimization algorithm; second, the sine–cosine optimization algorithm is introduced to optimize the fitness-based position update.

Chaos Initialization and Parameter Optimization

The traditional seahorse optimization algorithm initializes its population randomly; the disadvantage of this strategy is that the randomness is large and the quality of the initial solutions cannot be guaranteed. To better address this problem, this paper uses Tent chaotic mapping to generate random chaotic sequences for the initial sea horse population.
The advantage of chaotic mapping is that it offers randomness, ergodicity, and strong sensitivity to initial values, so an algorithm augmented with chaotic mapping converges faster than the original algorithm.
Tent mapping [20] is so named because its function graph resembles the shape of a tent, as shown in Figure 3.
Compared with random initialization, the initial population generated with the Tent map is better distributed in the optimization algorithm: the random chaotic sequence generated by the Tent map replaces the randomly generated parameters of the original algorithm, so that the initial solutions have good diversity across the search space. The resulting high-quality initial solutions improve the algorithm's convergence speed and the accuracy of its results.
The steps are as follows:
  • Determine the parameter α (this paper uses α = 0.7).
  • Set the range of the initial value $x_0$ according to the objective function, and generate X values in this range: $X_0 = x_0(n)$, $n = 1, 2, \ldots, X$.
  • Let $x(1) = X_0$ and iterate the Tent map
$$x(n+1) = \begin{cases} x(n)/\alpha, & x(n) \in [0, \alpha) \\ \left(1 - x(n)\right)/(1 - \alpha), & x(n) \in [\alpha, 1) \end{cases} \quad (7)$$
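The steps above can be sketched as a short generator for the chaotic sequence:

```python
def tent_sequence(x0, n, alpha=0.7):
    """Iterate the Tent map, Eq. (7), n times from x0 (alpha = 0.7 as in the text)."""
    xs, x = [], x0
    for _ in range(n):
        x = x / alpha if x < alpha else (1 - x) / (1 - alpha)
        xs.append(x)
    return xs
```

The resulting values stay in [0, 1] and can be rescaled to any parameter range, which is how the chaotic sequence replaces the uniform random draws of the original initialization.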
Sine–cosine algorithm optimizes chaotic seahorse algorithm.
The seahorse optimization algorithm suffers from slow convergence, low search accuracy, and a tendency to fall into local optima. Although Tent mapping is introduced to improve the convergence speed, the sine–cosine optimization algorithm is also introduced because it greatly accelerates the leader position update, improving the optimization speed and search accuracy and helping the algorithm escape local optima.
The sine–cosine algorithm (SCA) is a global optimization algorithm proposed in recent years [21]. It exploits the mathematical properties of the sine and cosine functions and balances the algorithm's global exploration and local exploitation abilities through their amplitude. Unlike traditional swarm intelligence optimization algorithms, its structure is relatively simple, robust, and easy to implement.
Assuming the population size is N and the search space dimension is d, each solution of the target optimization problem is mapped to a population position in the search space. The position of individual i ($i = 1, 2, \ldots, N$) after t iterations in the d-dimensional search space can be expressed as $X_i^t = (X_{i1}^t, X_{i2}^t, \ldots, X_{id}^t)$.
First, N population positions are randomly initialized in the search space; then the fitness value of each individual is calculated from the objective function. The population is ranked from best to worst by fitness, and the optimal fitness value and its corresponding position are updated.
$$X_d^i(t+1) = \begin{cases} X_d^i(t) + a \times \sin(r_3) \times \left| r_4 X^* - X_d^i(t) \right|, & r_5 < 0.5 \\ X_d^i(t) + a \times \cos(r_3) \times \left| r_4 X^* - X_d^i(t) \right|, & r_5 \ge 0.5 \end{cases} \quad (8)$$
where $X_d^i$ represents the position of individual i in dimension d at generation t; $X^*$ represents the current optimal position; the parameter a controls the search direction of the population and decays as $a = (1 - t/T)^{2t/T}$; $r_3$ is a random number in [0, 2π] that controls the search distance of the algorithm; $r_4$ and $r_5$ are random numbers in [0, 2] and [0, 1], respectively (in this paper $r_3$, $r_4$, and $r_5$ are generated via Tent chaotic mapping); and $r_5$ selects whether a sine or a cosine function updates the position at generation t + 1.
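The sine–cosine update can be sketched as follows. For brevity this version draws r3, r4, r5 from plain uniform distributions, whereas the paper generates them via the Tent chaotic map; the amplitude schedule follows the decay described in the text:

```python
import numpy as np

def sca_update(x, x_best, t, T, rng=None):
    """Sine-cosine position update, Eq. (8), applied per dimension."""
    rng = rng or np.random.default_rng()
    a = (1 - t / T) ** (2 * t / T)           # amplitude decays to 0 at t = T
    r3 = rng.uniform(0, 2 * np.pi, x.size)   # search distance
    r4 = rng.uniform(0, 2, x.size)           # weight on the best position
    r5 = rng.random(x.size)                  # sine-vs-cosine switch
    step = np.where(r5 < 0.5, np.sin(r3), np.cos(r3))
    return x + a * step * np.abs(r4 * x_best - x)
```

Early on the amplitude a is large and the update ranges widely around the best position (exploration); as t approaches T, a shrinks toward zero and the moves become fine local adjustments (exploitation).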

2.2.4. Enhanced Seahorse Optimization Algorithm

Because the initial sequence of the seahorse optimization algorithm is highly random, the chaotic mapping algorithm is introduced at initialization as an improvement. In addition, because the position with the best fitness value is assigned to the seahorse leader at each iteration, the algorithm easily falls into a locally optimal region, which limits the optimization precision when selecting the best fitness value. The SCA algorithm is therefore applied with sine and cosine cross-optimization, so that the two position update methods complement each other. The pseudocode of the enhanced seahorse optimization algorithm is shown in Algorithm 1.
Algorithm 1 Enhance sea horse optimization algorithm
Input: The population size p o p , the maximum number of iterations T and the variable dimension D i m
Output: The optimal search agent X b e s t and its fitness value f b e s t
Use Tent chaos to map the initial population X i j and the parameters rand, λ, w, and k
While (t < T)
  if $r_1 > 0$
   Update the seahorse position using Equation (1)
  else
   Update the seahorse position using Equation (3)
  end if
  Update the seahorse position using Equation (4)
  Calculate the fitness value of each seahorse
  Select parents with Equation (5)
  Generate the next generation using Equation (6)
  Update fitness values using SCA, Equation (8)
  Update the locations of the seahorses
  t = t + 1
End While

2.2.5. ESHO Hyperparameter Optimization Resnet-50 Model

ResNet-50 includes a number of hyperparameters that contribute to model performance, including the training algorithm, momentum, learning rate, batch size, number of epochs, and validation frequency. These are the parameters that most influence performance.
The main steps are as follows:
Step 1: Set the population size, dimensionality, maximum number of iterations, and search boundaries of the SHO algorithm.
Step 2: Initialization: according to the parameters of the ResNet-50 network and ESHO, create the population through Tent chaotic mapping.
Step 3: Generate the random numbers rand, λ, w, and k uniformly via Tent chaotic mapping.
Step 4: Fitness evaluation: evaluate each ResNet-50 network with the objective function; the ESHO algorithm automatically updates the required hyperparameter values, and the fitness function is defined as the error rate.
Step 5: Based on the target value and the generated hyperparameters, create a new network for evaluation.
Step 6: Update the position of the stored target value according to the SCA algorithm, and use a greedy mechanism to determine whether it is the global optimal solution.
Step 7: Repeat Steps 3–6 until the maximum number of iterations is reached and the optimal solution is obtained.
This is shown in Figure 4 and Algorithm 2.
Algorithm 2 Enhanced sea horse optimization algorithm for hyperparameter optimization of CNN
Input: dim, pop, T, K1, K2, K3, K4, Hyperparameter EvalFunction
Output: Optimized deep learning parameters w&b
Initialize enhanced sea horse optimization algorithm population (including each individual’s position and deep learning parameters w&b, a, A, C)
Obtain a batch of training datasets
For each iteration t = 1 to T do:
 For each population member i = 1 to pop do:
  Calculate fitness value
  Update the current individual’s position if a better position is found
 End inner for loop
 Update algorithm control parameters a, A, C
 (K1, K2, K3, K4) = Hyperparameter EvalFunction(current optimal position Lp)
End outer for loop
Train the network with the Stochastic Gradient Descent with Momentum (SGDM) optimization algorithm
Update w&b using SGDM with hyperparameters K1, K2, K3, K4
Return w&b
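The workflow of the steps and pseudocode above can be sketched as a compact search loop. Here a generic black-box error function stands in for training and validating ResNet-50, the full movement/predation/breeding phases are condensed into a single sine-cosine move with greedy acceptance, and all names and the toy objective are illustrative:

```python
import numpy as np

def esho_search(eval_error, bounds, pop=8, T=5, rng=None):
    """Skeleton of the hyperparameter search: Tent-chaotic initialization
    (Step 2), candidate generation toward the current best, and greedy
    acceptance (Step 6), repeated for T iterations (Step 7)."""
    rng = rng or np.random.default_rng()
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    dim = len(bounds)
    # Tent-chaotic initial population (alpha = 0.7), Eq. (7)
    x, chaos = rng.random(dim), np.empty((pop, dim))
    for i in range(pop):
        x = np.where(x < 0.7, x / 0.7, (1 - x) / 0.3)
        chaos[i] = x
    X = lo + chaos * (hi - lo)
    fit = np.array([eval_error(p) for p in X])
    for t in range(T):
        leader = X[fit.argmin()]
        a = (1 - t / T) ** (2 * t / T)        # shrinking amplitude
        for i in range(pop):
            r3 = rng.uniform(0, 2 * np.pi, dim)
            step = np.where(rng.random(dim) < 0.5, np.sin(r3), np.cos(r3))
            cand = X[i] + a * step * np.abs(rng.uniform(0, 2, dim) * leader - X[i])
            cand = np.clip(cand, lo, hi)      # respect the search boundaries
            f = eval_error(cand)
            if f < fit[i]:                    # greedy acceptance
                X[i], fit[i] = cand, f
    return X[fit.argmin()], fit.min()

# toy stand-in objective over (learning rate, momentum)
best, err = esho_search(lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.9) ** 2,
                        [(1e-4, 0.1), (0.5, 0.99)])
```

In the real pipeline, `eval_error` would train a ResNet-50 with the candidate hyperparameters and return its validation error rate, which is why each iteration is expensive and the population and iteration budget stay small.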

2.3. Evaluation

This subsection introduces the evaluation criteria used in this paper: accuracy, sensitivity, precision, and recall, with ROC analysis used to verify the experimental results. Table 2 lists the formulas for these classification criteria.
Accuracy was used to assess whether Blight, Common-Rust, gray spot, and health could be completely distinguished, i.e., the proportion of samples correctly classified in the total sample.
Sensitivity, also known as recall, is the proportion of samples correctly identified as Blight among all samples that are truly Blight; it assesses the ability to recognize Blight.
Precision, also known as the positive predictive value, is the proportion of samples correctly classified as Blight among all samples classified as Blight; it assesses the accuracy of the Blight classification.
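These per-class metrics can all be read off a confusion matrix. A minimal sketch (the example matrix is illustrative, not from the paper's experiments):

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, sensitivity/recall, and precision from a confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                  # correctly classified per class
    accuracy = tp.sum() / cm.sum()
    recall = tp / cm.sum(axis=1)      # sensitivity: TP / (TP + FN)
    precision = tp / cm.sum(axis=0)   # positive predictive value: TP / (TP + FP)
    return accuracy, recall, precision

# illustrative 3-class matrix
acc, rec, prec = per_class_metrics([[50, 2, 3], [4, 40, 1], [2, 2, 46]])
```

Row sums give the true class totals (denominator of recall/sensitivity) and column sums give the predicted class totals (denominator of precision), which is exactly the distinction drawn in the text.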

3. Results

Experiment 1 compares the proposed improved ESHO algorithm with the traditional SHO algorithm and its two enhancement strategies in terms of their search capability, convergence speed, and accuracy. This experiment will utilize the CEC2017 test suite to evaluate the performance of these algorithms on a set of standard test functions for optimization problems. The primary aim of Experiment 2 is to assess the classification performance of a neural network optimized by the ESHO algorithm (ESHO-net) when processing the Jade fungus image dataset. The experiment utilizes the Jade fungus image dataset, collected using an industrial camera, as input data, with the ESHO algorithm optimized neural network performing the classification task. To provide a comparative analysis, ESHO-net is compared against other classical neural network models, as well as models optimized using swarm intelligence algorithms. By comparing metrics such as classification accuracy, recall, and precision among these different models, the strengths and weaknesses of ESHO-net in handling the Jade fungus image dataset are evaluated. The aim of Experiment 3 is to evaluate the performance of the neural network optimized by the ESHO algorithm (ESHO-net) in the task of image classification using 3221 images of corn diseases. This experiment seeks to compare the classification accuracy, multiclass classification performance, and the ability to handle imbalanced data of the ESHO-net with models optimized by other classical and swarm intelligence algorithms when processing corn disease images. These performance metrics will aid in determining the superiority of ESHO-net in the task of classifying corn disease images.
The test environment is a 64-bit Windows operating system with MATLAB 2023. Hardware environment: an AMD Ryzen 7 5800H with Radeon Graphics CPU @ 3.20 GHz, 128 GB of RAM, and an NVIDIA Quadro RTX 3060 graphics card with 16 GB of video memory (VRAM).

3.1. Experiment 1 Function Test

ESHO refers to the algorithm obtained by improving the SHO algorithm with Tent mapping and the sine–cosine algorithm, which serve to improve its performance and convergence speed. In ESHO, the Tent map increases the diversity of the search space, while the sine–cosine component performs local search to further refine solutions. These improvements strengthen ESHO's global search ability and solution quality.
The ESHO algorithm improves the initial population of SHO and adds the sine–cosine optimization algorithm. To verify the effectiveness of each improvement, the effects of the two position improvements are compared separately. As shown in Table 3, the modified initial population is denoted by "S" and the addition of the sine–cosine optimization algorithm by "SIN"; in the table, "1" means the corresponding position update strategy is applied to SHO and "0" means it is not.
The results show that the ESHO algorithm outperforms the original SHO algorithm, producing solutions closer to the optimum with smaller Best, Worst, Mean, and Std metrics. This indicates improvements in the ability to find the global optimal solution, the ability to avoid poor solutions, and the average quality and stability of the solutions. In summary, ESHO improves SHO by introducing Tent mapping and the sine–cosine algorithm, and these improvements raise both the performance of the algorithm and the quality of its solutions. Example algorithm comparisons are shown in Table 4.
As illustrated in Figure 5, in the CEC2017 function testing experiment we selected six distinct functions for comparison: F1, F2, F4, F17, F23, and F27. The comparison shows that adding Tent mapping improves the selection of initial points, signifying enhanced exploration of the search space, and elevates the optimization performance on these functions. Incorporating the sine–cosine algorithm facilitates local search, enabling more precise adjustment and refinement near the current solution and thereby further optimizing it, making the algorithm more effective during the search. Including both enhancement strategies simultaneously yields a more pronounced improvement, indicating that they are complementary and mutually reinforcing in augmenting the performance of the SHO algorithm: together they better guide the search process, strengthen the algorithm's global search capability, and improve solution quality.
In summary, the experimental results demonstrate a remarkable improvement in the SHO algorithm resulting from the addition of Tent mapping and the cosine algorithm. This strongly underscores the significance of these two enhancement strategies in optimizing the SHO algorithm, particularly when dealing with functions such as F1, F2, F4, F17, F23, and F27. This bears important guiding implications for further enhancing algorithm performance and improving solution quality.

3.2. Experiment 2 Jade Fungus Classification

This experiment compares the performance of seven pretrained networks (ResNet-18 [22], GoogLeNet [23], Inception v3 [24], AlexNet, DenseNet-201 [25], ResNet-50, VGG16) in classifying images of jade fungus, incorporating optimization algorithms (SHO, SSHO, SINSHO, WOA, GA, PSO, GWO) for fine-tuning ResNet-50. Additionally, the results are compared to the ESHO-net model. Evaluation metrics include accuracy, sensitivity, precision, and recall. Figure 6 and Table 5 present the optimal parameters and confusion matrices obtained through the optimization of the ESHO-net model.

3.2.1. Compared with Pretrained Neural Network

In Figure 7, seven pretrained networks were chosen to assess their performance in classifying jade fungus images, and these networks were compared with the ESHO-net model using four key indicators: accuracy, recall, sensitivity, and precision. The ESHO-net model demonstrated improvements across the performance metrics. Specifically, accuracy increased by 1.5% to 16.1%; recall and sensitivity improved by −2.1% to 10.9%, −1.2% to 20.8%, 10.3% to 35.1%, and 1.9% to 9.8% across the four grades; and precision improved by −1.1% to 7.3%, 6.5% to 21.3%, 1.1% to 15.9%, and −0.9% to 9.3%.

3.2.2. Optimize Resnet-50 Network with Other Optimization Algorithms

The proposed evaluation metrics (accuracy, sensitivity, precision, recall) were used to compare other optimization algorithms (SHO, SSHO, SINSHO, WOA [26], GA [27], PSO [28], GWO [29]) applied to optimize the network.
The confusion matrices in Figure 8 reveal that the ESHO-net model improved in the following respects: accuracy increased by 1.0% to 2.8%; recall and sensitivity improved by −2.1% to 1%, −1.2% to 3.8%, 6.1% to 12.8%, and 0% to 2.8% across the four grades; and precision improved by −1.1% to 2.1%, 2.5% to 7.4%, −3.2% to 9.5%, and −0.9% to 3.7%. ESHO-net demonstrates performance gains across multiple evaluation metrics, indicating its potential as a promising model.

3.3. Experiment 3: Classification of Corn Diseases

The optimal momentum, initial learning rate, maximum epoch, and validation frequency were selected by the ESHO-net model, as presented in Table 6. The confusion matrix for the classification of Blight, Common Rust, Gray Leaf Spot, and Healthy corn is shown in Figure 9, with an overall accuracy of 96.7% and a loss rate of 3.3%. The per-class sensitivity (which equals recall here) is 94.7%, 99.0%, 89.1%, and 99.7%, and the per-class precision is 94.5%, 98.5%, 90.1%, and 100%, respectively.
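The four quantities in Table 6 are exactly the hyperparameters ESHO searches over. A minimal sketch of how a continuous optimizer position vector can be decoded into an SGDM configuration follows; the search bounds and rounding rules here are illustrative assumptions, not the paper's exact settings (the paper itself appears to use MATLAB-style training options):

```python
# Decode a unit-interval position vector from the optimizer into SGDM
# hyperparameters. Bounds are illustrative assumptions; the paper's
# exact search ranges are not stated in this section.
BOUNDS = {
    "momentum":              (0.1, 0.99),
    "initial_learning_rate": (1e-4, 0.1),
    "maximum_epoch":         (10, 50),
    "validation_frequency":  (10, 50),
}

def decode(position):
    """Map a position in [0, 1]^4 to a named hyperparameter dict."""
    cfg = {}
    for x, (name, (lo, hi)) in zip(position, BOUNDS.items()):
        value = lo + x * (hi - lo)          # linear scaling into [lo, hi]
        if name in ("maximum_epoch", "validation_frequency"):
            value = int(round(value))       # integer-valued hyperparameters
        cfg[name] = value
    return cfg

print(decode([0.5, 0.5, 0.5, 0.5]))
```

Each candidate sea horse is then scored by training ResNet-50 with its decoded configuration and reading off the validation accuracy as fitness.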

3.3.1. Compared with Pretrained Neural Network

The proposed evaluation metrics (accuracy, sensitivity, precision, recall) were used to compare ESHO-net with unoptimized pretrained networks (ResNet-18, GoogLeNet, Inception-v3, AlexNet, DenseNet-201). The confusion matrix comparison is shown in Figure 10 below:
Through comparative analysis of the confusion matrices, we evaluated five popular neural networks with strong classification ability. The results show that ESHO-net improves on the accuracy of these networks by 0.7% to 3.5%; among the compared networks, DenseNet-201 performs best at 96.0%, while Inception-v3 has the lowest accuracy at 93.2%. Further comparing sensitivity, precision, and recall on the Blight, Common Rust, Gray Leaf Spot, and Healthy classes, sensitivity and recall increase by 1.4% to 8.5% and 0.3% to 3.3%, respectively, with the ESHO-net model improving both to varying degrees. In addition, precision improves by 1.2% to 7%, −0.7% to 2.1%, 3.5% to 15.7%, and 0% to 0.3% for the four classes. Based on these comparisons, the ESHO-net model delivers excellent performance.

3.3.2. Optimize Resnet-50 Network with Other Optimization Algorithms

The proposed evaluation metrics (accuracy, sensitivity, precision, recall) were used to compare ESHO-net with other optimization algorithms (SHO, SSHO, SINSHO, WOA, GA, PSO, GWO) applied to optimize the network. The confusion matrix comparison is shown in Figure 11 below:
SHO, SSHO, and SINSHO, together with four optimization algorithms with strong search ability (WOA, GA, PSO, and GWO), were applied to optimize ResNet-50, yielding the confusion matrices in Figure 11. Specifically, the ESHO algorithm improves accuracy by 0.3% to 2.9%. Across the four classes (Blight, Common Rust, Gray Leaf Spot, and Healthy), ESHO improves sensitivity, precision, and recall relative to the other optimization algorithms. For Blight, Gray Leaf Spot, and Healthy, sensitivity increases by −0.7% to 2.6%, precision by −0.5% to 0.5%, and recall by −0.6% to 7.3%. For Common Rust, sensitivity increases by −0.3% to 0.6%, precision by 0.3% to 1.9%, and recall by −2.3% to 6.4%. Considering all indicators and the confusion matrices, ESHO-net provides better classification and optimization performance for the ResNet-50 network than the other optimization algorithms.

4. Conclusions

This study demonstrated the strong potential of the enhanced sea horse optimization algorithm (ESHO) and its optimized ResNet-50 model (ESHO-net) in agricultural image recognition, especially in maize leaf disease and jade fungus image classification. The SHO algorithm is enhanced with Tent chaotic mapping and the sine–cosine algorithm to balance search behavior and to address the excessive randomness and the imbalance between exploration and exploitation in the original algorithm. The greatest advantage of ESHO lies in the adaptive adjustment of ResNet-50 hyperparameters, which significantly reduces the burden of manual parameter tuning.
Specifically, Experiment 1 verifies that the ESHO algorithm outperforms the original SHO algorithm and its single-strategy variants on several performance metrics. In Experiment 2, ESHO-net shows superior classification accuracy on the jade fungus image dataset, surpassing multiple comparison neural network models optimized by classical and swarm intelligence algorithms. In addition, the results of Experiment 3 show that ESHO-net is significantly superior to other state-of-the-art models in accuracy, sensitivity, and recall on a corn disease dataset of 3221 images, achieving an accuracy of 96.7% and a loss rate as low as 3.3%.
Although these findings confirm the practical value of ESHO-net for image recognition tasks, the limitations of this study should also be noted, such as the limited range of disease types and the diversity of data sources. Future work will address these limitations and strengthen the generalization ability and practical applicability of the model by expanding the collection of disease types and multisource data. This study provides a solid foundation for further use of the sea horse optimization algorithm to advance intelligent image recognition in agriculture, and indicates broad application prospects in intelligent and precision agriculture.

Author Contributions

Conceptualization, S.Q.; methodology, S.Q.; software, S.Q.; validation, S.Q.; formal analysis, N.L.; investigation, N.L.; resources, Y.X.; data curation, X.H.; writing—original draft preparation, N.L.; writing—review and editing, N.L.; visualization, N.L.; supervision, Z.L.; project administration, Z.L.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of the Department of Education of Jilin Province, grant number JJKH20210331KJ, and the "14th Five-Year Plan" National Key Research and Development Program, grant number 2023YFD1201602-1.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Li, X.S.; Li, J.Y.; Yang, H.C.; Wang, Y.R.; Gao, S.C. Population interaction network in representative differential evolution algorithms: Power-law outperforms Poisson distribution. Phys. A Stat. Mech. Its Appl. 2022, 603, 127764.
2. Fan, X.P.; Guan, Z.B. VGNet: A Lightweight Intelligent Learning Method for Corn Diseases Recognition. Agriculture 2023, 13, 1606.
3. Dai, D.K.; Xia, P.W.; Zhu, Z.Y.; Che, H.L. MTDL-EPDCLD: A Multi-Task Deep-Learning-Based System for Enhanced Precision Detection and Diagnosis of Corn Leaf Diseases. Plants 2023, 12, 2433.
4. Zeng, W.H.; Li, H.D.; Hu, G.S.; Liang, D. Lightweight dense-scale network (LDSNet) for corn leaf disease identification. Comput. Electron. Agric. 2022, 197, 106943.
5. Emambocus, B.A.S.; Jasser, M.B.; Amphawan, A. A Survey on the Optimization of Artificial Neural Networks Using Swarm Intelligence Algorithms. IEEE Access 2023, 11, 1280–1294.
6. Bahaa, A.; Sayed, A.; Elfangary, L.; Fahmy, H. A novel hybrid optimization enabled robust CNN algorithm for an IoT network intrusion detection approach. PLoS ONE 2022, 17, e0278493.
7. Paharia, N.; Jadon, R.S.; Gupta, S.K. Optimization of convolutional neural network hyperparameters using improved competitive gray wolf optimizer for recognition of static signs of Indian Sign Language. J. Electron. Imaging 2023, 32, 023042.
8. Wang, C.X.; Shi, T.T.; Han, D.N. Adaptive Dimensional Gaussian Mutation of PSO-Optimized Convolutional Neural Network Hyperparameters. Appl. Sci. 2023, 13, 4254.
9. Xiong, C.Y.; Mo, H.W.; Fan, J.S.; Ren, W.C.; Pei, H.; Zhang, Y.H.; Ma, Z.W.; Wang, W.Y.; Huang, J. Physiological and Molecular Characteristics of Southern Leaf Blight Resistance in Sweet Corn Inbred Lines. Int. J. Mol. Sci. 2022, 23, 10236.
10. Liu, H.; Guo, F.F.; Chen, X.L.; Wu, B.M. Temporal Progress and Spatial Patterns of Northern Corn Leaf Blight in Corn Fields in China. Phytopathology 2022, 112, 1936–1945.
11. Ahangar, M.A.; Wani, S.H.; Dar, Z.A.; Roohi, J.; Mohiddin, F.; Bansal, M.; Choudhary, M.; Aggarwal, S.K.; Waza, S.A.; Dar, K.A.; et al. Distribution, Etiology, Molecular Genetics and Management Perspectives of Northern Corn Leaf Blight of Maize (Zea mays L.). Phyton-Int. J. Exp. Bot. 2022, 91, 2111–2133.
12. Iseghohi, I.; Abe, A.; Meseka, S.; Mengesha, W.; Gedil, M.; Job, A.; Menkir, A. Reactions of provitamin-A-enriched maize to foliar diseases under field conditions in Nigeria. Cereal Res. Commun. 2023.
13. Micca-Ramerez, M.V.; Andrada, N.R. Damage function of common rust corn, Puccinia sorghi, applicable in the semiarid area. Phytopathology 2022, 112, 10.
14. Holan, K.; Whitham, S.A. Long-read draft genome assembly of Puccinia sorghi, the common rust pathogen of maize. Phytopathology 2022, 112, 79.
15. Nikzainalalam, N.; Copeland, D.; Wiggins, M.; Telenko, D.E.P.; Wise, K.A.; Jacobs, J.L.; Chilvers, M. Fungicide sensitivity of Cercospora spp., the causal agent of grey leaf spot disease on corn (Zea mays). Phytopathology 2022, 112, 109.
16. Luis, J.M.S.; Nicolli, C.P.; Duffeck, M.R.; Robertson, A.E.; Smith, D.L.; Allen, T.; Bissonnette, K.; Sjarpe, D.; Check, J.C.; Chilvers, M.; et al. Validation of binary logistic regression models for assessing the risk for gray leaf spot of maize prior to planting. Phytopathology 2022, 112, 38.
17. Luis, J.M.S.; Duffeck, M.R.; Frey, T.S.; Nicolli, C.P.; Robertson, A.E.; Smith, D.L.; Allen, T.; Bissonnette, K.; Chilvers, M.; Check, J.C.; et al. Vertical progress of gray leaf spot of maize from an in-field source of inoculum as influenced by environment and hybrid resistance. Phytopathology 2022, 112, 126.
18. Wen, L.; Li, X.Y.; Gao, L. A transfer convolutional neural network for fault diagnosis based on ResNet-50. Neural Comput. Appl. 2020, 32, 6111–6124.
19. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 2023, 53, 11833–11860.
20. Zhao, B.X.; Zhu, J.Z.; Hu, Y.B.; Liu, Q.M.; Liu, Y. Mapping Landslide Sensitivity Based on Machine Learning: A Case Study in Ankang City, Shaanxi Province, China. Geofluids 2022, 2022, 2058442.
21. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
22. Huang, B.; Liu, J.H.; Zhang, Q.; Liu, K.; Li, K.; Liao, X.Y. Identification and Classification of Aluminum Scrap Grades Based on the Resnet18 Model. Appl. Sci. 2022, 12, 11133.
23. Fu, Y.S.; Song, J.; Xie, F.X.; Bai, Y.; Zheng, X.; Gao, P.; Wang, Z.T.; Xie, S.Q. Circular Fruit and Vegetable Classification Based on Optimized GoogLeNet. IEEE Access 2021, 9, 113599–113611.
24. Wang, C.; Chen, D.L.; Hao, L.; Liu, X.B.; Zeng, Y.; Chen, J.W.; Zhang, G.K. Pulmonary Image Classification Based on Inception-v3 Transfer Learning Model. IEEE Access 2019, 7, 146533–146541.
25. Zhao, C.; Shuai, R.J.; Ma, L.; Liu, W.J.; Hu, D.; Wu, M.L. Dermoscopy Image Classification Based on StyleGAN and DenseNet201. IEEE Access 2021, 9, 8659–8679.
26. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
27. Drezner, Z.; Drezner, T.D. Biologically Inspired Parent Selection in Genetic Algorithms. Ann. Oper. Res. 2020, 287, 161–183.
28. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
29. Wang, Z.D.; Xie, H.M. Wireless Sensor Network Deployment of 3D Surface Based on Enhanced Grey Wolf Optimizer. IEEE Access 2020, 8, 57229–57251.
Figure 1. FScan2000 Display of collection devices.
Figure 2. The three corn diseases and Health data sets show: (a) Blight data set, (b) Common-Rust data set, (c) Grey leaf spot data set, (d) Health data set.
Figure 3. Image of Tent chaotic map function.
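Figure 3 depicts the Tent chaotic map used to initialize the sea horse population. A minimal sketch of the iteration follows; the standard form with breakpoint 0.5 is assumed here, since the paper's exact parameterization is not restated in this section:

```python
def tent_sequence(x0, n):
    """Generate n values of the Tent chaotic map starting from x0 in (0, 1).

    Standard form with breakpoint 0.5 (an assumption; some papers use
    other breakpoints): x_{k+1} = 2*x_k if x_k < 0.5 else 2*(1 - x_k).
    """
    xs = []
    x = x0
    for _ in range(n):
        x = 2 * x if x < 0.5 else 2 * (1 - x)
        xs.append(x)
    return xs

# Chaotic initialization spreads individuals over [0, 1] more evenly
# than a small number of plain uniform samples tends to.
print(tent_sequence(0.3, 5))
```

Each initial sea horse position coordinate can be drawn from such a sequence and then scaled into the hyperparameter search bounds.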
Figure 4. ESHO hyperparameter optimization flowchart.
Figure 5. CEC2017 function test comparison diagram. (a) Function 1; (b) Function 2; (c) Function 4; (d) Function 17; (e) Function 23; (f) Function 27.
Figure 6. The optimal parameter confusion matrix is selected for the jade fungus grading data set.
Figure 7. Comparison of Traditional Deep Learning Confusion Matrices (a) AlexNet Confusion matrix; (b) DenseNet201 Confusion matrix; (c) GoogleNet Confusion matrix; (d) inception V3 Confusion matrix; (e) ResNet 18 Confusion matrix; (f) ResNet 50 Confusion matrix; (g) VGG 16 Confusion matrix.
Figure 8. Other optimization algorithms were used to optimize the confusion matrix comparison of Resnet-50 network (a) SHO-net Confusion matrix; (b) SSHO-net Confusion matrix; (c) SINSHO-net Confusion matrix; (d) WOA-net Confusion matrix; (e) GA-net Confusion matrix; (f) PSO-net Confusion matrix; (g) GWO-net Confusion matrix.
Figure 9. The optimal parameter confusion matrix is selected for the Maize disease classification dataset.
Figure 10. Comparison of Traditional Deep Learning Confusion Matrices (a) AlexNet Confusion matrix; (b) DenseNet Confusion matrix; (c) GoogleNet Confusion matrix; (d) inception V3 Confusion matrix; (e) ResNet 18 Confusion matrix.
Figure 11. Other optimization algorithms were used to optimize the confusion matrix comparison of Resnet-50 network (a) SHO-net Confusion matrix; (b) SSHO-net Confusion matrix; (c) SINSHO-net Confusion matrix; (d) WOA-net Confusion matrix; (e) GA-net Confusion matrix; (f) PSO-net Confusion matrix; (g) GWO-net Confusion matrix.
Table 1. Display of jade fungus dataset and classification criteria.

| Data (Jade Fungus) | Level One | Level Two | Level Three | Disqualified |
|---|---|---|---|---|
| Number | 1 | 2 | 3 | 4 |
| Ear color | White to light yellow | Beige white to light yellow | Light yellow to beige | — |
| Number of ear pieces | Single | Single | Single or multiple | Multiple |
| Ear size | 2~3 | 3~4 | 4~5 | >5 |
| Ear shape | Complete and uniform | More complete and uniform | More complete and uniform | Incomplete |
| Ear condition | Healthy | Contains broken ears | Contains broken ears | Contains infestation |
| Image Num | 320 | 408 | 210 | 360 |
Table 2. Evaluation criteria formulas.

| Evaluate | Formula |
|---|---|
| Accuracy | Accuracy = (TP + TN) / (TP + TN + FP + FN) |
| Sensitivity | TPR = TP / (TP + FN) |
| Precision | Precision = TP / (TP + FP) |
| Recall | Recall = TP / (TP + FN) |
TP (True Positives): the sample is positive and the prediction is positive. FP (False Positives): the sample is negative but the prediction is positive. TN (True Negatives): the sample is negative and the prediction is negative. FN (False Negatives): the sample is positive but the prediction is negative.
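The formulas in Table 2 can be computed directly from the confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp, fp, tn, fn):
    """Per-class metrics from confusion-matrix counts (as in Table 2).

    Note that under these binary definitions, sensitivity (TPR) and
    recall share the same formula, which is why the result tables
    report identical values for the two.
    """
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # TPR
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # identical to sensitivity
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "precision": precision, "recall": recall}

# Illustrative counts for one class (not from the paper's data)
m = classification_metrics(tp=90, fp=10, tn=85, fn=15)
print(m)
```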
Table 3. Representation of the two optimizations in SHO.

| Algorithm | S | SIN |
|---|---|---|
| SHO | 0 | 0 |
| SSHO | 1 | 0 |
| SINSHO | 0 | 1 |
| ESHO | 1 | 1 |
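Table 3 encodes the two enhancements as binary switches: S for the Tent-chaos component and SIN for the sine–cosine component. A deliberately simplified, hypothetical sketch of how the four variants could be instantiated from those switches follows; the real SHO and SCA update equations are more involved than this toy one-dimensional rule:

```python
import math
import random

def make_variant(use_tent, use_sincos):
    """Return a toy 1-D position-update rule for the (S, SIN) switches of
    Table 3: (0,0)=SHO, (1,0)=SSHO, (0,1)=SINSHO, (1,1)=ESHO.
    Hypothetical simplification, not the paper's exact equations."""
    def step(x, best, t, t_max, rng):
        r1 = 2 * (1 - t / t_max)                        # shrinking amplitude
        if use_sincos:                                  # SCA-style move
            r2 = rng.uniform(0, 2 * math.pi)
            move = (r1 * math.sin(r2) * abs(best - x) if rng.random() < 0.5
                    else r1 * math.cos(r2) * abs(best - x))
        else:                                           # plain stochastic drift
            move = r1 * rng.uniform(-1, 1) * (best - x)
        x_new = x + move
        if use_tent:                                    # small chaotic perturbation
            c = rng.random()
            c = 2 * c if c < 0.5 else 2 * (1 - c)       # one Tent-map step
            x_new += 0.01 * (c - 0.5)
        return x_new
    return step

rng = random.Random(0)
esho_step = make_variant(use_tent=True, use_sincos=True)
x = 0.2
for t in range(10):
    x = esho_step(x, best=0.8, t=t, t_max=10, rng=rng)
print(x)
```

The point of the switch layout is that the ablation in Table 4 isolates each component: SSHO and SINSHO each enable one enhancement, while ESHO enables both.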
Table 4. Comparison of CEC2017 test function algorithms.

| Fun | | SHO | SSHO | SINSHO | ESHO |
|---|---|---|---|---|---|
| F1 | Best | 1.15 × 10^10 | 1.01 × 10^10 | 1.26 × 10^10 | 1.01 × 10^10 |
| | worst | 3.08 × 10^10 | 3.30 × 10^10 | 3.37 × 10^10 | 2.68 × 10^10 |
| | mean | 1.94 × 10^10 | 2.07 × 10^10 | 1.92 × 10^10 | 1.81 × 10^10 |
| | std | 5.37 × 10^9 | 5.47 × 10^9 | 4.96 × 10^9 | 4.32 × 10^9 |
| F2 | Best | 2.87 × 10^27 | 9.74 × 10^26 | 3.09 × 10^25 | 4.27 × 10^24 |
| | worst | 9.61 × 10^38 | 3.89 × 10^38 | 9.78 × 10^36 | 7.57 × 10^36 |
| | mean | 3.31 × 10^37 | 2.46 × 10^37 | 4.55 × 10^35 | 3.92 × 10^35 |
| | std | 1.75 × 10^38 | 7.81 × 10^37 | 1.82 × 10^36 | 1.41 × 10^36 |
| F3 | Best | 52,212.41 | 53,015.29 | 48,580.26 | 52,613.81 |
| | worst | 83,056.46 | 82,353.02 | 86,771.12 | 78,841.13 |
| | mean | 68,990.98 | 70,028.33 | 66,705.07 | 65,567.97 |
| | std | 7876.64 | 7256.565 | 8976.78 | 6903.744 |
| F4 | Best | 1054.55 | 1500.277 | 1268.575 | 836.9427 |
| | worst | 6747.084 | 6912.596 | 7586.876 | 6463.786 |
| | mean | 3259.558 | 4022.42 | 2928.848 | 2541.938 |
| | std | 1525.745 | 1535.826 | 1579.358 | 1350.051 |
| F5 | Best | 690.1524 | 706.2802 | 713.8169 | 709.8417 |
| | worst | 801.9082 | 852.4711 | 804.9284 | 843.369 |
| | mean | 756.447 | 759.936 | 750.273 | 750.5114 |
| | std | 28.965 | 32.90426 | 25.95798 | 26.68688 |
| F6 | Best | 6.43 × 10^2 | 6.40 × 10^2 | 6.41 × 10^2 | 6.31 × 10^2 |
| | worst | 6.60 × 10^2 | 6.60 × 10^2 | 6.65 × 10^2 | 6.65 × 10^2 |
| | mean | 6.53 × 10^2 | 6.52 × 10^2 | 6.53 × 10^2 | 6.51 × 10^2 |
| | std | 4.39 × 10^0 | 5.99 × 10^0 | 5.99 × 10^0 | 6.32 × 10^0 |
| F7 | Best | 9.18 × 10^2 | 9.66 × 10^2 | 9.55 × 10^2 | 9.37 × 10^2 |
| | worst | 1.05 × 10^3 | 1.08 × 10^3 | 1.06 × 10^3 | 1.06 × 10^3 |
| | mean | 1.01 × 10^3 | 9.99 × 10^2 | 9.95 × 10^2 | 9.95 × 10^2 |
| | std | 3.09 × 10^1 | 2.56 × 10^1 | 2.30 × 10^1 | 2.67 × 10^1 |
| F8 | Best | 9.18 × 10^2 | 9.66 × 10^2 | 9.55 × 10^2 | 9.37 × 10^2 |
| | worst | 1.05 × 10^3 | 1.08 × 10^3 | 1.06 × 10^3 | 1.06 × 10^3 |
| | mean | 1006.89 | 999.0511 | 995.0073 | 994.866 |
| | std | 30.87526 | 25.63718 | 22.99354 | 26.69634 |
| F9 | Best | 5045.912 | 4770.397 | 4051.609 | 4124.663 |
| | worst | 7286.883 | 9052.588 | 6716.671 | 6892.722 |
| | mean | 6038.505 | 6174.653 | 5558.307 | 5405.359 |
| | std | 560.4052 | 895.5512 | 697.2742 | 695.639 |
| F10 | Best | 4816.358 | 4234.36 | 5170.172 | 4167.356 |
| | worst | 7052.398 | 6951.179 | 7235.78 | 6503.908 |
| | mean | 5985.535 | 5759.261 | 5885.718 | 5673.887 |
| | std | 519.5401 | 545.7817 | 499.8241 | 612.4502 |
| F11 | Best | 1518.007 | 1713.981 | 1513.699 | 1561.659 |
| | worst | 8127.989 | 7158.733 | 7545.37 | 5901.66 |
| | mean | 3472.514 | 3218.312 | 3695.645 | 3402.712 |
| | std | 1504.513 | 1248.817 | 1537.379 | 1151.42 |
| F12 | Best | 1.09 × 10^8 | 1.32 × 10^8 | 1.69 × 10^8 | 1.28 × 10^8 |
| | worst | 7.53 × 10^9 | 6.77 × 10^9 | 7.66 × 10^9 | 4.86 × 10^9 |
| | mean | 2 × 10^9 | 2.36 × 10^9 | 1.93 × 10^9 | 2 × 10^9 |
| | std | 1.82 × 10^9 | 2.03 × 10^9 | 1.89 × 10^9 | 1.13 × 10^9 |
| F13 | Best | 1,936,199 | 5,324,252 | 4,639,137 | 4,718,375 |
| | worst | 7.62 × 10^9 | 1.04 × 10^10 | 7.04 × 10^8 | 5.1 × 10^9 |
| | mean | 9.9 × 10^8 | 7.21 × 10^8 | 1.78 × 10^8 | 6 × 10^8 |
| | std | 2.02 × 10^9 | 1.96 × 10^9 | 1.73 × 10^8 | 1.2 × 10^9 |
| F14 | Best | 68,110.24 | 130,880 | 98,109.13 | 70,302.43 |
| | worst | 3,129,325 | 2,211,560 | 2,555,339 | 1,107,680 |
| | mean | 922,027.1 | 1,037,686 | 976,467.4 | 517,943.9 |
| | std | 649,074.7 | 607,791.7 | 624,504.3 | 295,845.4 |
| F15 | Best | 14,784.28 | 17,108.77 | 11,294.54 | 11,057.22 |
| | worst | 6,578,499 | 9,345,190 | 20,656,181 | 58,669,887 |
| | mean | 942,711.7 | 1,513,059 | 2,608,761 | 4,129,592 |
| | std | 1,405,698 | 2,038,891 | 4,679,572 | 11,314,354 |
| F16 | Best | 2369.129 | 2429.833 | 2473.326 | 2471.599 |
| | worst | 3599.571 | 4159.49 | 3831.684 | 4158.597 |
| | mean | 3108.508 | 3167.448 | 3130.635 | 3188.239 |
| | std | 266.2169 | 422.0315 | 318.3059 | 375.3002 |
| F17 | Best | 1944.424 | 1839.123 | 1959.663 | 1828.769 |
| | worst | 2834.808 | 2963.481 | 2740.47 | 2803.459 |
| | mean | 2356.188 | 2293.307 | 2323.588 | 2353.132 |
| | std | 251.8807 | 285.3271 | 216.0364 | 219.267 |
| F18 | Best | 596,877.9 | 448,618.8 | 139,801.9 | 128,147.2 |
| | worst | 24,946,311 | 21,730,056 | 13,947,137 | 19,664,615 |
| | mean | 3,270,441 | 4,479,111 | 2,715,288 | 2,996,348 |
| | std | 4,749,170 | 5,229,032 | 2,816,730 | 3,984,972 |
| F19 | Best | 32,669.16 | 23,875.63 | 9838.216 | 6813.085 |
| | worst | 5.28 × 10^8 | 1.23 × 10^8 | 3.86 × 10^8 | 1.31 × 10^8 |
| | mean | 31,618,101 | 5,287,557 | 17,247,149 | 8,906,206 |
| | std | 99,909,099 | 22,234,090 | 72,483,988 | 26,703,308 |
| F20 | Best | 2304.26 | 2295.288 | 2300.131 | 2246.822 |
| | worst | 3019.55 | 3029.424 | 2908.584 | 2795.154 |
| | mean | 2593.264 | 2628.703 | 2515.603 | 2505.816 |
| | std | 209.162 | 182.0024 | 167.7344 | 146.8245 |
| F21 | Best | 2474.796 | 2463.452 | 2473.199 | 2467.6 |
| | worst | 2621.73 | 2579.178 | 2639.677 | 2598.871 |
| | mean | 2525.397 | 2523.414 | 2524.254 | 2525.216 |
| | std | 32.47812 | 30.24221 | 36.24246 | 32.04138 |
| F22 | Best | 3383.561 | 3770.807 | 3657.703 | 3822.016 |
| | worst | 8627.898 | 8504.043 | 9044.294 | 8403.451 |
| | mean | 6208.513 | 6220.16 | 5837.821 | 5750.331 |
| | std | 1477.657 | 1411.245 | 1808.969 | 1427.724 |
| F23 | Best | 2962.408 | 2945.802 | 2917.354 | 2879.052 |
| | worst | 3226.973 | 3201.826 | 3093.162 | 3170.076 |
| | mean | 3054.79 | 3048.868 | 3015.775 | 3008.702 |
| | std | 64.01824 | 71.73463 | 45.61353 | 72.37846 |
| F24 | Best | 3214.782 | 3246.572 | 3198.056 | 3184.568 |
| | worst | 3447.956 | 3481.337 | 3421.383 | 3397.219 |
| | mean | 3339.036 | 3353.138 | 3281.536 | 3285.132 |
| | std | 56.71183 | 56.07424 | 56.82005 | 49.41599 |
| F25 | Best | 3168.9 | 3161.506 | 3159.44 | 3132.425 |
| | worst | 3909.32 | 4297.002 | 4290.112 | 3919.242 |
| | mean | 3436.999 | 3521.027 | 3410.432 | 3406.443 |
| | std | 195.7024 | 272.8187 | 264.2817 | 213.8743 |
| F26 | Best | 6052.081 | 5424.125 | 6308.89 | 5125.343 |
| | worst | 8937.705 | 9404.311 | 8614.525 | 8872.829 |
| | mean | 7553.396 | 7795.786 | 7448.687 | 7287.09 |
| | std | 771.6484 | 919.546 | 603.431 | 882.8359 |
| F27 | Best | 3399.312 | 3357.454 | 3319.294 | 3370.905 |
| | worst | 3734.859 | 3834.588 | 4053.098 | 3770.368 |
| | mean | 3545.823 | 3572.432 | 3489.797 | 3492.418 |
| | std | 91.95908 | 115.2077 | 137.8223 | 85.95479 |
| F28 | Best | 3985.444 | 3698.788 | 3804.833 | 3699.801 |
| | worst | 5267.373 | 5215.807 | 5433.59 | 4982.299 |
| | mean | 4517.781 | 4390.196 | 4430.446 | 4288.284 |
| | std | 380.2863 | 387.744 | 479.7648 | 382.0062 |
| F29 | Best | 3854.939 | 3837.67 | 3877.058 | 4156.992 |
| | worst | 5259.497 | 5131.16 | 5073.191 | 5077.449 |
| | mean | 4505.4 | 4477.765 | 4533.303 | 4523.774 |
| | std | 364.2282 | 277.1826 | 282.0671 | 250.438 |
| F30 | Best | 3,313,653 | 2,518,304 | 1,023,277 | 2,937,596 |
| | worst | 8.65 × 10^8 | 44,619,145 | 8.6 × 10^8 | 1.08 × 10^8 |
| | mean | 48,992,562 | 18,326,447 | 43,953,905 | 18,719,231 |
| | std | 1.55 × 10^8 | 13,137,887 | 1.55 × 10^8 | 21,574,223 |
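Each block of Table 4 summarizes repeated independent runs of one algorithm on one CEC2017 function. A minimal sketch of that summary step (the number of runs is an assumption; CEC2017 protocols commonly use 30 or 51 independent runs):

```python
import statistics

def summarize_runs(final_errors):
    """Best/worst/mean/std summary of final objective values over
    independent runs, matching the row layout of Table 4."""
    return {
        "Best": min(final_errors),
        "worst": max(final_errors),
        "mean": statistics.mean(final_errors),
        "std": statistics.stdev(final_errors),   # sample standard deviation
    }

# Hypothetical final errors of a few runs on one test function
print(summarize_runs([1.15e10, 3.08e10, 1.94e10, 2.1e10]))
```

Lower values are better throughout the table, since CEC2017 functions are minimized.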
Table 5. Optimal parameters for classification dataset of jade fungus.

| Parameter | SGDM |
|---|---|
| Momentum | 0.9 |
| Initial learning rate | 0.003855 |
| Maximum epoch | 32 |
| Validation frequency | 33 |
Table 6. Optimal parameters for maize disease dataset.

| Parameter | SGDM |
|---|---|
| Momentum | 0.5 |
| Initial learning rate | 0.03943 |
| Maximum epoch | 32 |
| Validation frequency | 33 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
