Article

A Lightweight Deep Learning Model for Forecasting the Fishing Ground of Purpleback Flying Squid (Sthenoteuthis oualaniensis) in the Northwest Indian Ocean

1 Key Laboratory of Fisheries Remote Sensing, Ministry of Agriculture and Rural Affairs, East China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Shanghai 200090, China
2 Laoshan Laboratory, Qingdao 266237, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2025, 15(3), 1219; https://doi.org/10.3390/app15031219
Submission received: 16 December 2024 / Revised: 20 January 2025 / Accepted: 23 January 2025 / Published: 24 January 2025

Abstract

The purpleback flying squid (Sthenoteuthis oualaniensis) is an economically significant cephalopod species in the Northwest Indian Ocean, and predicting its fishing grounds can provide a crucial foundation for fishery management and production. In this research, we collected data from China’s light-purse seine fishery in the Northwest Indian Ocean from 2016 to 2020 to train and validate the AlexNet and VGG11 models. We designed a data partitioning method (DPM) that divides the training set under three scenarios, namely DPM-S1, DPM-S2, and DPM-S3. DPM-S1 was first employed to select the base model (BM); the optimal BM was then lightweighted to obtain the optimal model (OM). The OM, named AlexNetMini, is one-third the size of the BM-AlexNet model. Our results also showed the following: (1) the F1-scores across DPM-S1, -S2, and -S3 were 0.6957, 0.7505, and 0.7430 for AlexNet and 0.6992, 0.7495, and 0.7486 for AlexNetMini, indicating that the two models exhibited comparable predictive performance; (2) the optimal dropout values for the AlexNetMini model were 0 and 0.2, and the optimal training set proportion was 0.8; (3) AlexNetMini yielded comparable outcomes with DPM-S2 and DPM-S3, and since training with DPM-S3 was shorter, DPM-S3 was selected as the preferred data partitioning method. These findings indicated that the lightweight fishing ground prediction model, AlexNetMini, outperformed the original AlexNet model in terms of efficiency while matching its accuracy. Our study of lightweight methods for deep learning models provides a reference for enhancing the usability of deep learning in fisheries.

1. Introduction

The purpleback flying squid (Sthenoteuthis oualaniensis) is the primary economic and commercial species targeted by the high seas light-purse seine fishery in the Northwest Indian Ocean. It is also widely distributed in the subtropical waters of the Pacific Ocean [1], with global resources estimated at approximately 10 million tons [2]. China began exploring the purpleback flying squid resource in the Northwest Indian Ocean in 2003 [2,3], and commercial light-purse seine fishing vessels began operations in this region in 2014. The investigation showed that the catch per unit effort (CPUE) of this species exhibited an increasing trend from 2016 to 2019. However, there was a significant decline in 2020 [4]. Two major factors likely contributed to this decline: a sharp increase in fishing pressure, perhaps due to more intensive fishing activities, and climate change, which can disrupt marine ecosystems and species’ availability [5]. Therefore, accurately predicting the fishing grounds of the purpleback flying squid in the Northwest Indian Ocean is essential for ensuring sustainable fishery production.
The purpleback flying squid, with its short life cycle, is a crucial warm-water pelagic cephalopod that, like other cephalopods, can be significantly affected by sea surface temperature (SST) [6,7]. Research on cephalopods has indicated that temperature influences various biological and ecological parameters, including growth length, sex ratio, body size, and population density [8]. However, the effects of SST on cephalopods differ among the incubation, juvenile, and adult stages. Waluda et al. [9] developed a prediction model based on SST and concluded that SST has a significant effect on the aggregation behavior of juvenile Illex argentinus and that SST during the incubation period is negatively correlated with the catch of I. argentinus in the following season. Likewise, Hatfield [10] demonstrated that temperature differences during the early growth of Loligo gahi result in significant differences in the body size of adult squid. In addition, seawater temperature anomalies due to climatic change also significantly affect the distribution of cephalopods. Yu et al. [5] studied the effects of the El Niño index and regional water temperature on Ommastrephes bartramii in the Northwest Pacific Ocean and Dosidicus gigas in the Southeast Pacific Ocean, concluding that the El Niño phenomenon can shift the favorable SST area and the optimal oceanic fronts, resulting in abnormal distributions of the two squid species. Therefore, studies of SST and climate-induced SST anomalies are conducive to a deeper and more comprehensive knowledge of cephalopod resources and can provide an important reference for fishery resource management and the construction of fishing ground prediction models.
Machine learning (ML) is a scientific discipline that focuses on algorithms that learn from data. It has been extensively applied in fisheries to predict fishing vessel behavior or CPUE [11,12]. ML models are generally regarded as possessing high accuracy and efficiency. For instance, Shen et al. [13] utilized SST and sea surface height (SSH) to develop a prediction model based on the habitat index, achieving an average accuracy of 63%. Similarly, Zhang et al. [14] created a prediction model using Bayesian analysis, which yielded an accuracy of approximately 65%. Although traditional machine learning methods generally offer high training speeds and strong interpretability, their simple model structures make it difficult to accommodate large-scale fisheries and environmental datasets, so the prediction accuracy of these models tends to be relatively low [11,12]. In contrast, deep learning can extract potential features from vast amounts of data, revealing hidden patterns and high-order feature information [15]. Consequently, deep learning has been increasingly applied in the field of fisheries. For instance, Zhu et al. [16], Yuan et al. [17], and Han et al. [18] employed deep learning methods to predict fishing grounds with high accuracy [13,14,15,19]. Notably, Zhu et al. [16] found that the test accuracy of the AlexNet model was 8.5% to 20.6% higher than that of the random forest model.
While deep learning offers numerous advantages in fishery applications, it is typically characterized by a complex structure, a greater number of parameters, and longer training times compared to traditional machine learning models. To reduce training time, model size, or the number of parameters, researchers have implemented various strategies, such as incorporating depth-separable convolutional layers [17,18,20], developing new deep neural networks [21], or enhancing existing networks [22]. These efforts have collectively improved model usability. However, such work on the development and application of fishing ground forecasting models remains rare. The objective of this study was to employ deep learning methods for predicting fishing grounds by (1) reducing the number of convolutional kernels in the base model (BM) to decrease the training time; (2) scientifically predicting the spatial distribution of the purpleback flying squid and optimizing the utilization of this resource; and (3) exploring lightweight methods for deep learning models to provide a reference for enhancing the usability of deep learning in fisheries.

2. Materials and Methods

2.1. Sources and Processing of Fishery Data

Fishery data were collected from the catch statistics of China’s fishing vessels in the Northwest Indian Ocean between 2016 and 2021. The dataset included the vessel name, date of fishing operations (year, month, day), number of nets, operation location (latitude and longitude), and species of catches. The operations were light-purse seine fishing in the high seas of the Northwest Indian Ocean (40–80° E, 5° S–35° N) from January to May and September to December during the years 2016 to 2021. The spatial distribution of the average catch per unit effort (CPUE) of purpleback flying squid in the Northwest Indian Ocean is illustrated in Figure 1.
During our data analysis, we encountered several types of anomalies. Abnormal catch data included records with negative reported yields, which were removed to ensure the accuracy of the analysis. Temporal anomalies, screened in Excel, were entries whose recorded year fell outside the operational years of our study; such out-of-range time values could skew our temporal analysis, so we excluded them. Spatial anomalies were data points suggesting fishing activities on land; to uphold the integrity of our spatial analysis, these were removed using ArcGIS Pro [4]. The nominal CPUE was calculated to establish a standard for determining whether a location qualifies as a central fishing ground. Subsequently, the fishery data were gridded at a spatial resolution of 0.5° × 0.5°, with each cell constituting a fishing zone. The nominal CPUE, measured in t/net, was calculated as follows:
$$\mathrm{CPUE} = \frac{catch}{effort} \tag{1}$$
where catch (t) is the total weight of the purpleback flying squid in the current fishing zone, and effort (net) is the total number of nets operated by all fishing vessels in the same fishing zone.
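For illustration, the gridding and labeling procedure can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code; the column names lon, lat, catch_t, and nets are hypothetical placeholders for the fields listed above.

```python
import numpy as np
import pandas as pd

def grid_nominal_cpue(df: pd.DataFrame, cell: float = 0.5) -> pd.DataFrame:
    """Aggregate catch and effort onto a 0.5 x 0.5 degree grid and compute CPUE (t/net)."""
    df = df.copy()
    # Snap each record to the lower-left corner of its 0.5-degree fishing zone.
    df["zone_lon"] = np.floor(df["lon"] / cell) * cell
    df["zone_lat"] = np.floor(df["lat"] / cell) * cell
    zones = (df.groupby(["zone_lon", "zone_lat"])
               .agg(catch=("catch_t", "sum"), effort=("nets", "sum"))
               .reset_index())
    zones["cpue"] = zones["catch"] / zones["effort"]
    return zones

# Example: label fishing grounds (1) vs. non-fishing grounds (0) by the CPUE median,
# as described in the text below.
# zones = grid_nominal_cpue(records)
# zones["label"] = (zones["cpue"] >= zones["cpue"].median()).astype(int)
```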
Three distinct scenarios were utilized to segment the dataset for the training samples, with the months of June to August being excluded due to the influence of the monsoon season (Table 1). The median value of the CPUE was employed as a criterion to categorize the dataset into fishing grounds, designated as 1, and non-fishing grounds, designated as 0. A summary of the experimental group numbers, the models employed, and the datasets was provided for comparative analyses (Table 2).

2.2. Generation of 4-Channel Input Factors Involving Spatiotemporal and Environmental Factors

The study utilized Level 3 sea surface temperature (SST) data from 2016 to 2021, obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) operated by the National Aeronautics and Space Administration (NASA) (URL: https://oceandata.sci.gsfc.nasa.gov/l3/, accessed on 5 May 2024), with a spatial resolution of 9 km and a temporal resolution of 8 days. The mapped SST values are provided in netCDF format.
The mapped SST data underwent several processing steps, including localization, interpolation, data enhancement, and cropping. Then, in line with reference [16], a 4-channel dataset was constructed by integrating the SST with longitude, latitude, and time, and each channel was converted and unified into an n × n matrix, as illustrated in Figure 2.
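A minimal sketch of this channel-stacking step is given below, assuming the SST field has already been interpolated (as detailed next) and cropped to an n × n window; any normalization details are not specified in the text and are omitted here.

```python
import numpy as np

def make_4channel(sst: np.ndarray, lons: np.ndarray, lats: np.ndarray,
                  day_of_year: int) -> np.ndarray:
    """Stack SST, longitude, latitude, and time into a (4, n, n) array."""
    n = sst.shape[0]
    lon_grid, lat_grid = np.meshgrid(lons, lats)           # broadcast coordinates to n x n
    time_grid = np.full((n, n), day_of_year, dtype=float)  # constant time channel
    return np.stack([sst, lon_grid, lat_grid, time_grid])  # shape (4, n, n)
```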
The missing values in the SST matrix were interpolated by inverse distance weighting based on Euclidean distances, as given in Equation (2):
$$\hat{Z}(S_0) = \sum_{k=1}^{N} \lambda_k Z(S_k) \tag{2}$$
where $\hat{Z}(S_0)$ is the predicted value of SST at point $S_0$, $\lambda_k$ is the weight of the k-th neighboring sampling point used in the prediction, and $Z(S_k)$ is the attribute value of the k-th adjacent sampling point.
The formula utilized to calculate the weight λk is as follows:
$$\lambda_k = \frac{d_k^{-p}}{\sum_{n=1}^{N} d_n^{-p}} \tag{3}$$
where $d_k$ is the Euclidean distance between the predicted point and the k-th neighboring point, and $p$ is the power exponent, usually taken as 2.0.
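Equations (2) and (3) amount to a standard inverse-distance-weighting fill, which can be sketched as follows. This is a few-line illustration under the definitions above, not the authors' implementation; coincident points with zero distance would need special handling.

```python
import numpy as np

def idw_fill(values: np.ndarray, dists: np.ndarray, p: float = 2.0) -> float:
    """Estimate Z(S0) from N neighbors with inverse p-th-power distance weights."""
    w = dists ** (-p)           # numerators of lambda_k, Equation (3)
    w = w / w.sum()             # normalize so the weights sum to 1
    return float(np.sum(w * values))  # Equation (2)

# Usage: three neighbors at known grid points (x, y, sst); target S0 = (0.25, 0.25).
pts = np.array([[0.0, 0.5, 27.1], [0.5, 0.0, 27.6], [0.5, 0.5, 27.4]])
d = np.hypot(pts[:, 0] - 0.25, pts[:, 1] - 0.25)  # Euclidean distances to S0
sst_hat = idw_fill(pts[:, 2], d)
```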

2.3. The Structures of Deep Learning Models

In this research, two deep learning architectures, VGG11 and AlexNet, were chosen for the analysis. The VGG11 model is distinguished by its configuration of 5 convolutional groups (8 convolutional layers in total) and 3 fully connected layers. The network has a uniform structure, employing convolutional kernels with a size of 3 × 3 and pooling layers with a size of 2 × 2 following each group of convolutions. The design principle of utilizing small convolutional kernels in conjunction with multiple convolutional layers is intended to increase the depth of the neural network. Conversely, the foundational architecture of the AlexNet model consists of 3 convolutional groups and 3 fully connected layers, with each convolutional group followed by a pooling layer with a size of 3 × 3 and a stride of 2. The structural designs of both models are depicted in Figure 3. In addition to the core components, which encompass convolutional layers, pooling layers, fully connected layers, and activation functions, this study integrated several supplementary elements aimed at enhancing model performance. Specifically, to reduce the possibility of overfitting, facilitate parameter tuning, and expedite training and convergence, a dropout layer was incorporated after the fully connected layers. Moreover, to mitigate gradient explosion and vanishing gradients, a batch normalization layer was added after each convolutional layer and fully connected layer.
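The description above can be summarized in a PyTorch-style sketch of the AlexNet-type base model. This is an illustrative reconstruction, not the authors' released code: the 4-channel input, the placement of batch normalization and dropout, and the 11 × 11 / 5 × 5 / 3 × 3 kernel sizes follow the text, while the filter counts, strides, and the 64 × 64 input size are assumptions.

```python
import torch
import torch.nn as nn

class AlexNetStyle(nn.Module):
    def __init__(self, n_classes: int = 2, dropout: float = 0.2, width: int = 96):
        super().__init__()
        self.features = nn.Sequential(
            # Group 1: large 11x11 kernels capture broad temperature-field structure.
            nn.Conv2d(4, width, kernel_size=11, stride=2, padding=5),
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # Group 2: 5x5 kernels.
            nn.Conv2d(width, width * 3, kernel_size=5, padding=2),
            nn.BatchNorm2d(width * 3), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # Group 3: stacked 3x3 kernels.
            nn.Conv2d(width * 3, width * 4, kernel_size=3, padding=1),
            nn.BatchNorm2d(width * 4), nn.ReLU(),
            nn.Conv2d(width * 4, width * 3, kernel_size=3, padding=1),
            nn.BatchNorm2d(width * 3), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(512), nn.BatchNorm1d(512), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(512, 512), nn.BatchNorm1d(512), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(512, n_classes),  # Softmax is applied by the loss / at inference.
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```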
These two models are both convolutional neural network models, and the specific formulas for the convolution operation are delineated as follows:
$$Y_{\mathrm{conv}} = f\left(\sum_{j=0}^{J-1}\sum_{i=0}^{I-1} X_{m+i,\,n+j}\, W_{ij} + b\right) \tag{4}$$
where $X$ denotes a matrix with dimensions $M \times N$, with $m$ restricted to the interval $(0, M-1)$ and $n$ confined to the interval $(0, N-1)$; the constants $I$ and $J$ represent the dimensions of the convolution kernel $W$; $f(\cdot)$ indicates the activation function; and $b$ denotes the additive bias (offset). The precise formula of the max-pooling function employed by the model, for a pooling window of size 2 × 2, is as follows:
$$f_{\mathrm{pool}} = \max\left(x_{m,n},\ x_{m+1,n},\ x_{m,n+1},\ x_{m+1,n+1}\right) \tag{5}$$
where $f_{\mathrm{pool}}$ represents the outcome of applying maximum pooling.
Additionally, the Rectified Linear Unit (ReLU) is employed as the activation function, which serves to decrease the computational load and reduce the phenomenon of gradient vanishing and overfitting. The ReLU function can also speed up the operation of convolutional neural networks. Its expression is presented as follows:
$$f(x) = \max(0, x) \tag{6}$$
where x is the input of a neuron.
The Softmax function is mainly used to convert the output of neurons (usually a vector of real numbers) into a probability distribution for multi-class classification problems, facilitating the understanding and comparison of classification results. In this study, the output of the fully connected layer is a two-dimensional vector of predicted results for high- and low-catch fishing areas, which the Softmax function converts into 2 probability values:
$$P(y_j \mid x) = \frac{e^{f(y_j)}}{\sum_{k=1}^{n} e^{f(y_k)}} \tag{7}$$
where $n$ is set to 2 as the number of classes, and $P(y_j \mid x)$ is the normalized probability of correctly classifying the label $y_j$ given the 4-channel input factor $x$.
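A toy numpy walk-through of Equations (4)–(7), using an arbitrary 4 × 4 input and a single 2 × 2 kernel, illustrates how these operations compose; all values here are made up for demonstration.

```python
import numpy as np

X = np.arange(16, dtype=float).reshape(4, 4)   # toy input matrix
W = np.array([[1.0, 0.0], [0.0, -1.0]])        # 2x2 convolution kernel
b = 0.5

relu = lambda z: np.maximum(0.0, z)            # Equation (6)

# Equation (4): valid convolution (cross-correlation, as in CNN practice), followed by ReLU.
Y = np.array([[relu(np.sum(X[m:m + 2, n:n + 2] * W) + b) for n in range(3)]
              for m in range(3)])

# Equation (5): 2x2 max pooling applied to the upper-left window.
f_pool = Y[:2, :2].max()

# Equation (7): softmax over a 2-class output vector; the result sums to 1.
logits = np.array([1.3, -0.4])
probs = np.exp(logits) / np.exp(logits).sum()
```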

2.4. Evaluation Indicators of Models

The validation dataset was used to evaluate the model training results, and the evaluation metric is the accuracy of the validation dataset, as shown in Equation (8).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{8}$$
where TP is the number of true high-catch fishing grounds predicted to be high-catch fishing grounds; TN is the number of true low-catch fishing grounds predicted to be low-catch fishing grounds; FP is the number of true low-catch fishing grounds predicted to be high-catch fishing grounds; and FN is the number of true high-catch fishing grounds predicted to be low-catch fishing grounds.
To conduct a more comprehensive evaluation of the models, this study also calculated the precision and recall of the model outputs. Additionally, the F1-score is a measure of the combined precision and recall, which can be used to test the effect of the model’s actual application [18]. The formulas are expressed in Equations (9), (10), and (11), respectively.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{9}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{10}$$
$$F1\text{-}score = \frac{2TP}{2TP + FP + FN} \tag{11}$$
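Equations (8)–(11) can be computed directly from the confusion counts, as in the following sketch (binary labels, with 1 denoting a high-catch fishing ground; guards against empty classes are omitted for brevity):

```python
import numpy as np

def confusion_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),  # Equation (8)
        "precision": tp / (tp + fp),                  # Equation (9)
        "recall": tp / (tp + fn),                     # Equation (10)
        "f1": 2 * tp / (2 * tp + fp + fn),            # Equation (11)
    }
```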

2.5. Comparative Experiments Between AlexNet and VGG11

The same hyperparameters were employed to compare AlexNet and VGG11 in this study. Because preliminary experiments showed that the dropout layers had an obvious impact on the models, we set up five experimental little groups (ELGs) with dropout values of 0, 0.2, 0.4, 0.6, and 0.8 (hereafter named 0ELG, 0.2ELG, and so on). Meanwhile, the batch size was 64 throughout the training process, the number of epochs was 800, and the learning rate was 0.001. To avoid overfitting and boost the model’s generalization ability, the dataset was partitioned as follows: 70% of the samples from the years 2016 to 2020 were randomly selected and allocated to the training set, the remaining 30% were reserved as the validation set, and the data from 2021 were used as the test set. Taking the accuracy and loss of the AlexNet and VGG11 models as the evaluation criteria, the superior model was chosen to serve as the base model (BM).
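The comparative setup can be summarized in the following PyTorch-style sketch. It is illustrative only: the optimizer is not stated in the text, so Adam is an assumption, and the 2016–2020 dataset object and model factory are placeholders.

```python
import torch
from torch.utils.data import DataLoader, random_split

def run_elgs(dataset_2016_2020, model_fn, device: str = "cpu"):
    # 70/30 random split of the 2016-2020 samples; 2021 is held out as the test set.
    n_train = int(0.7 * len(dataset_2016_2020))
    train_set, val_set = random_split(
        dataset_2016_2020, [n_train, len(dataset_2016_2020) - n_train])
    for dropout in (0.0, 0.2, 0.4, 0.6, 0.8):            # 0ELG, 0.2ELG, ...
        model = model_fn(dropout=dropout).to(device)
        opt = torch.optim.Adam(model.parameters(), lr=0.001)  # optimizer assumed
        loss_fn = torch.nn.CrossEntropyLoss()            # applies the softmax internally
        loader = DataLoader(train_set, batch_size=64, shuffle=True)
        for epoch in range(800):                         # fixed epoch count from the text
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
        # ... evaluate on val_set and record the accuracy and loss for this ELG ...
```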

2.6. Lightweight Process of the Model

The number of convolutional kernels was tuned against the BM structure by employing five ELGs. The dropout layer for every ELG took the values of 0, 0.2, 0.4, 0.6, and 0.8 to derive the accuracy and loss, and the superior model structure was selected as the optimal model (OM). To comprehensively compare the performance of the base model (BM) and the optimal model (OM), five hyperparameters were considered: the dropout, the proportion of the training set (TSP), the learning rate, the number of epochs, and the batch size. The first three were treated as variables: the dropout values were 0, 0.2, 0.4, 0.6, and 0.8; the TSP values were 0.5, 0.6, 0.7, 0.8, and 0.9; and the learning rates were 0.0005 and 0.001. The last two were kept constant, with the batch size fixed at 64 and the number of epochs set to 800. Following the completion of model training, the F1-scores were obtained to facilitate a comparative analysis between the OM and BM, aiming to assess the applicability of the OM in predicting the fishing grounds of the purpleback flying squid.
Box plots were drawn from the F1-score results. Each box plot was called an experimental group, and each group contained 40 boxes, collectively referred to as experimental units in this study; across all groups, there were 640 experimental units in total. The key distinction between the experimental units within an experimental group lies in the variation of the model’s hyperparameters. The upper quartile (Q3), lower quartile (Q1), median (Q2), upper limit, and lower limit of each box were derived. The difference between the upper and lower limits of each experimental unit can be regarded as a measure of the model’s stability, and it was computed for all experimental units. Subsequently, the median of these differences across all 640 experimental units was determined. If a unit’s difference is less than this median, the model is considered relatively stable; otherwise, it is considered relatively unstable.
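The stability criterion can be sketched as follows, assuming the conventional 1.5 × IQR whisker definition, which the text does not state explicitly:

```python
import numpy as np

def box_range(f1_scores) -> float:
    """Whisker range (upper limit minus lower limit) of a standard box plot."""
    data = np.asarray(f1_scores, dtype=float)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lower = data[data >= q1 - 1.5 * iqr].min()  # lowest point within the lower whisker
    upper = data[data <= q3 + 1.5 * iqr].max()  # highest point within the upper whisker
    return upper - lower

def stability_flags(units):
    """True where a unit's range falls below the median range over all units."""
    ranges = np.array([box_range(u) for u in units])
    return ranges < np.median(ranges)
```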

3. Results

3.1. Evaluation and Comparison of the Forecasting Abilities of Two Base Models

3.1.1. Comparison of Test Dataset Accuracy Between the VGG11 and AlexNet Models

According to Figure 4A, the accuracy of the VGG11 model was high only for 0ELG, i.e., when the dropout was set to 0; the accuracy for 0ELG became stable after about 400 iterations, whereas the accuracies for the other ELGs were low or unstable. When the accuracy curves stabilized, the mean accuracy values for the five ELGs were 0.7508, 0.7110, 0.5238, 0.5099, and 0.4863, respectively.
According to Figure 4B, the accuracies stabilized in all ELGs after about 300 iterations, and the mean accuracy values for the five ELGs were 0.7378, 0.7434, 0.7377, 0.7326, and 0.7165, respectively. Comparing each ELG, for 0ELG the VGG11 model exhibited an average accuracy that surpassed that of the AlexNet model by a margin of 0.013; for the 0.2, 0.4, 0.6, and 0.8 ELGs, the average accuracy of AlexNet exceeded that of VGG11 by 0.0334, 0.2139, 0.2227, and 0.2302, respectively. As shown in Figure 4, the accuracy curve of the AlexNet model was more stable than that of the VGG11 model. Considering the accuracy curves and mean accuracies, it was concluded that the AlexNet model was superior to the VGG11 model.

3.1.2. Comparison of Loss Curve Results for the VGG11 and AlexNet Models

For the five ELGs, the average loss values for the VGG11 model were 5.73 × 10^3, 1.65 × 10^7, 1.02 × 10^11, 3.38 × 10^12, and 8.4 × 10^6, respectively (Figure 5A). In contrast, the average loss values for the AlexNet model were 5.31 × 10^−1, 7.54 × 10^−1, 5.71 × 10^−1, 3.03 × 10^3, and 9.34 × 10^2, respectively (Figure 5B). When comparing the average loss values of the two models, the AlexNet model consistently outperformed the VGG11 model in every scenario.

3.2. Lightweight Experiment of the Model

Since increasing the number of convolutional kernels lengthens the computation time of deep learning models, this experiment first reduced the number of convolutional kernels to lightweight the model. The objective of this strategy was to build a model that could reach a relatively stable state with fewer training iterations. In this experiment, two reduced variants of the base model (BM) were constructed, retaining 1/3 of the AlexNet model's convolutional filters (AM3) and 1/6 of its filters (AM6), respectively.
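In terms of the AlexNet-style sketch from Section 2.3, AM3 and AM6 correspond to dividing the network width by 3 and 6, as in the following illustration; the base width of 96 and all layer sizes are assumptions, not the authors' exact configuration.

```python
import torch  # AlexNetStyle is the sketch class defined in Section 2.3

def make_variant(divisor: int, dropout: float = 0.2, base_width: int = 96):
    # divisor=1 -> BM (full width); divisor=3 -> AM3 (AlexNetMini); divisor=6 -> AM6
    return AlexNetStyle(dropout=dropout, width=base_width // divisor)

# Comparing parameter counts (the lazy linear layer must see one batch first):
bm, am3 = make_variant(1), make_variant(3)
for m in (bm, am3):
    m(torch.zeros(2, 4, 64, 64))  # materialize lazy layers with a dummy batch
print(sum(p.numel() for p in bm.parameters()),
      sum(p.numel() for p in am3.parameters()))
```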
The results showed that the mean accuracy values of AM3 were higher than those of AM6 in all ELGs (Figure 6). The mean accuracy values of AM3 (Figure 6A) were 0.7522, 0.7504, 0.7645, 0.7090, and 0.7483, while those of AM6 (Figure 6B) were 0.7367, 0.5, 0.7626, 0.4992, and 0.4999. After a comprehensive comparison, it was concluded that although AM6 had fewer convolutional kernels, it was more unstable and less accurate than AM3. Therefore, the structure of AM3 was selected as the AlexNetMini.

3.3. Comparison of the Predictive Performance of AlexNet and AlexNetMini Models

As illustrated in Table 3, in Scenario 1, the mean F1-score for AlexNetMini exceeded that of AlexNet by 0.0035. Conversely, in Scenario 2, AlexNet demonstrated a mean F1-score that was 0.001 greater than that of AlexNetMini. In Scenario 3, the average F1-score for AlexNetMini surpassed that of AlexNet by 0.0056. Consequently, Scenario 3, which involved dataset division based on a cluster analysis of the gravity centers of catches, emerged as the most effective segmentation scheme for training data.
Furthermore, Figure 7 indicated that as the training set proportion (TSP) for AlexNet increased from 0.5 to 0.8, the numbers of experimental units with higher F1-scores (represented by blue dots) were 5, 8, 2, and 15, respectively, while the numbers of relatively stable experimental units (indicated by orange dots) were 25, 32, 27, and 30. For AlexNetMini, the numbers of experimental units with relatively higher F1-scores were 0, 0, 1, and 29, while the numbers of relatively stable experimental units were 37, 30, 26, and 33. Considering these metrics, we proposed that the optimal TSP value be set at 0.8.
In the AlexNet model, when the learning rates were set at 0.0005 and 0.001, the numbers of experimental units achieving a higher F1-score were 14 and 16, respectively. Additionally, the numbers of relatively stable experimental units were recorded as 62 and 52 for the respective learning rates. In the case of the AlexNetMini model, the numbers of experimental units with higher F1-scores remained consistent at 14 and 16 for the different learning rates, while the numbers of relatively stable experimental units were noted as 61 and 65, respectively. These findings indicated that the learning rate exerted minimal influence on both the F1-score and the stability of the models.
Furthermore, the analysis revealed that there were 128 relatively stable experimental units in the AlexNetMini model compared to 112 in the AlexNet model, suggesting that AlexNetMini demonstrated greater stability compared to its counterpart.
According to Figure 7, both AlexNetMini and AlexNet each exhibited five experimental units (indicated by yellow dots) achieving the highest F1-score when the dropout values were set at 0 and 0.2. However, as the dropout values increased to 0.4, 0.6, and 0.8, the numbers of experimental units attaining the highest F1-score declined to 2, 0, and 0, respectively. Consequently, superior performance can be achieved at dropout values of 0 and 0.2.

4. Discussion

4.1. Rationality of Choosing SST as the Only Environmental Input Variable of the Model

The present study employed the AlexNetMini model, utilizing SST, longitude, latitude, and temporal data to forecast the fishing grounds of purpleback flying squid in the Northwest Indian Ocean. Previous investigations have clearly demonstrated a significant correlation between the distribution of purpleback flying squid fishing grounds and SST in both the Northwest Indian Ocean and the South China Sea [6,7]. This correlation provides a strong rationale for incorporating SST as an input variable in relevant models. It is important to note, however, that the distribution of the purpleback flying squid is not solely determined by SST: some studies [6,8] have suggested that other environmental factors, such as sea surface salinity and chlorophyll-a concentration, also play significant roles. A trade-off emerges when considering integrating multiple environmental variables into the model. Incorporating additional variables increases the number of channels and, in turn, the number of parameters in the trained model, which can potentially endow the model with higher accuracy. Conversely, using fewer variables reduces the number of model parameters; although this simplifies the model structure and cuts computational costs, it may also limit the amount of information the model can learn, ultimately decreasing model accuracy.
The results of our study appeared to be quite promising. They indicated that by utilizing only SST data, we could achieve satisfactory predictive outcomes for forecasting the fishing grounds of the purpleback flying squid. This finding not only validated our approach but also offered a potential pathway for more efficient and targeted research in this area. We may further explore the combination of SST with other key environmental factors, potentially discovering a more balanced method to incorporate multiple variables without significantly increasing training time and model complexity in the future.

4.2. Comparison of the Application Effects of the AlexNet and VGG11 Models

The AlexNet and VGG11 models perform differently in different domains, and the better model can be decided by comparing their metrics. For example, Wang et al. [23] found that the AlexNet model far outperformed the VGG11 model in terms of accuracy: in identifying automotive tire defects, the AlexNet model achieved an average accuracy rate of 71.40%, while the VGG11 model only reached 37.58%. In the identification of COVID-19, the AlexNet model’s accuracy, recall, precision, and F1-score were comprehensively higher than those of the VGG11 model [24]. There are also studies reporting that the VGG11 model is better than AlexNet. Li et al. [25] argued that the AlexNet model has a simple structure and poor robustness in face perception, while the VGG11 model performs better. Other studies have shown that the AlexNet and VGG11 models yield similar recognition results but, based on time considerations, ultimately chose the AlexNet model. Cao et al. [26] compared the prediction performance of the AlexNet and VGG11 models in fish recognition and found little difference in accuracy, F1-score, and other indices between the two, but the AlexNet model had a shorter running time. Yang et al. [27] obtained similar results in a study of marine pollution image classification. In this study, the AlexNet model was more stable than the VGG11 model, with higher accuracy and a shorter training time. Therefore, AlexNet is more suitable for predicting purpleback flying squid fishing grounds in the Northwest Indian Ocean.
The primary distinction between the two types of models discussed in this paper is the size of the convolutional kernels. In AlexNet, the first two layers use relatively large convolutional kernels: the kernel size of the first layer is 11 × 11, and that of the second layer is 5 × 5. In contrast, the VGG11 model employs uniform 3 × 3 kernels throughout the whole network. In terms of training performance, the AlexNet model outperformed the VGG11 model. To date, we have found no research concerning the influence of convolutional kernel size on fishing ground prediction models. In many applications of CNN models, a sizeable convolutional kernel is usually split into several small convolutional kernels [8,26]. However, according to the effective receptive field (ERF) theory [28], the ERF increases linearly with the kernel size and sub-linearly with the depth. Ding et al. [29] also demonstrated that using large convolutional kernels, instead of splitting them into several small ones, results in a larger ERF and dramatically improves CNN performance. This may be one of the reasons for the better performance of the AlexNet model compared to the VGG11 model in this study. In addition, some studies have shown that a higher recognition rate can also be obtained by adjusting the kernel size according to the image characteristics. Yu et al. [30] found that initially applying a larger convolution kernel is advantageous for extracting contour features, followed by a smaller convolution kernel to refine detailed features, an approach suggested to enhance the recognition rate in leaf identification tasks. In a study of hand-drawn sketch recognition, a satisfactory recognition rate was likewise obtained by adjusting the kernel size according to the characteristics of sketches and using a larger first-layer convolutional kernel instead of a smaller one [31]. In the present study, the temperature fields of the fishing grounds inhabited by the purpleback flying squid also constitute relatively large-scale features. The AlexNetMini model finally selected in this paper is an improvement of the AlexNet model, which applies a large convolution kernel followed by small convolution kernels. This combination of kernel structures not only extracted the large-scale environmental features but also captured the detailed features of the temperature field, and it therefore achieved better recognition of the fishing ground. In future research, the exploration of the ERF can be strengthened to deepen the understanding of the relationship between the ERF and fishing grounds, which can more effectively improve the interpretability of the model.

4.3. The Construction and Lightweight of the Deep Learning Fishing Ground Prediction Model

Deep learning, recognized as a prominent approach within the realm of data mining, demonstrates a strong capacity for model fitting and is particularly effective in addressing challenges associated with large-scale datasets [32]. It has been applied in many fields [6,22,30], including fishing ground prediction. Armas et al. [33] constructed a model using SST, SSS, and MLD to predict the potential fishing grounds of Engraulis ringens in northern Chilean waters with commendable performance. Zhu [15] established a four-channel AlexNet utilizing SST together with time, latitude, and longitude to predict the fishing grounds of Ommastrephes bartramii, achieving notable results with an F1-score of 73.1%. Yuan et al. [17,20] predicted the South Pacific Thunnus alalunga fishing grounds based on feature interaction with convolutional neural networks and bimodal deep learning methods, and both models achieved high prediction rates. Xiao [34] developed a fishing ground prediction model for Scomber japonicus by means of a deep learning method; in a comprehensive comparison, the VGG16 model was concluded to be more practical. Xie et al. [32] used a U-Net model to effectively solve the problem of predicting O. bartramii fishing grounds in the Northwest Pacific, with a satisfactory F1-score of 0.89. Han et al. [18] used self-constructed 2DCNN and 3DCNN models, with an F1-score of 0.76 for the central fishing grounds and 0.74 for the non-central fishing grounds. These findings offered novel insights into the application of deep learning techniques for predicting fishing grounds. However, our experimental results, along with the fishing ground prediction accuracies obtained in the past, have never exceeded 90%. This limitation may be attributed to the insufficient comprehensiveness and accuracy of the collected distant-water fishery data. Moving forward, it is essential to optimize our data collection strategies and implement a rigorous data review process to ensure the accuracy, completeness, and consistency of the information gathered. Additionally, we should enhance international cooperation in fishery data collection to acquire more extensive datasets for deep learning model construction.
Lightweighting deep learning models can reduce training time and training cost; it has been applied in many fields and has made great progress. Paszke et al. [21] argued that real-time semantic segmentation capability is very important in mobile applications and proposed a new deep neural network, ENet, which dramatically reduces the number of network parameters and the computation. Zhang et al. [35], Ma et al. [36], and Howard et al. [37] added depth-separable convolutional layers to their models, effectively reducing model computation. Jin et al. [22] proposed a lightweight model, the SG-YOLOv5s model based on YOLOv5s, to address under-vehicle hazmat detection, cutting the model’s volume and parameter count by about 28%. Song et al. [38] proposed the SimCC-ShuffleNetV2 model to address the slow detection speed and complex network structure of deep learning in cow keypoint detection, realizing accurate and efficient detection of cows. Fan et al. [39] used EfficientNet’s backbone network to replace the YOLOv5s backbone layer and added the SPPF feature fusion module; these measures significantly decreased the number of model parameters, facilitated deployment, improved detection accuracy, and provided a reference for follow-up studies.
To the best of our knowledge, there appears to be a lack of research focused on lightweight deep learning models for predicting fishing grounds. This study successfully achieved a lightweight version of the AlexNet model by decreasing the number of convolutional kernels. Consequently, the revised model’s parameter count was reduced to one-third of that of the original model. This modification not only significantly decreased the number of parameters and the computational time but also maintained an accuracy level comparable to that of the original one. These findings indicated that lightweight deep learning models hold considerable potential for advancement in the domain of fishing ground predictions.

4.4. Limitations and Shortcomings

One limitation of our research is the relatively small sample size. Researchers are generally advised to use a sufficiently large sample size, because a larger sample leads to greater statistical power and a higher chance of detecting true effects, thereby achieving scientific and statistical significance [40,41]. Owing to the difficulty of collecting purpleback flying squid data at sea, we were only able to obtain a limited number of samples, which may limit the generalizability of our study. Future studies could expand the sample size to increase the robustness of the results. In addition, as a first exploration of deep learning lightweighting for fishing ground forecasts, we only reduced the number of convolutional kernels of the AlexNet network to decrease the model size. In future research, we intend to draw insights from approaches such as depth-separable convolutional layers, ENet, and SimCC-ShuffleNetV2, with the objective of further minimizing both the model size and the computational time.
Additionally, this research focused primarily on improving the performance of the model itself and its applicability in practical scenarios. Our limited resources and efforts were concentrated on the lightweight optimization of the deep learning prediction model to ensure that it can quickly and accurately provide prediction results at the forefront of fishery production. Although visual prediction maps of fishing grounds are highly valuable, this study did not include their production in order to maintain its research focus. In the future, once the lightweight objective for the deep learning prediction model is achieved and the technology advances further, we plan to incorporate visualization functions into the research scope to provide a more comprehensive and user-friendly fishery prediction service.
Despite these limitations, our research has yielded significant insights into the lightweight modeling of deep learning for fishing ground prediction. We anticipate that our findings will enhance the existing body of knowledge in this domain. Subsequent investigations may expand upon our work by addressing the identified limitations and pursuing novel research directions.

5. Conclusions

This study involved the development of two convolutional neural network models utilizing SST, longitude, latitude, and temporal data. A comparative analysis was undertaken to assess the performance and stability of these models. Following this, an experiment was conducted to evaluate the lightweight model. The outcomes of this study yielded the subsequent results and conclusions:
(1) The AlexNetMini model, a lightweight version one-third the size of the original AlexNet model, was developed while maintaining comparable predictive performance.
(2) The optimal dropout rates identified for the AlexNetMini model were 0 and 0.2, with an optimal training set proportion (TSP) of 0.8.
(3) The average F1-score for the AlexNetMini model was 0.7486 when utilizing DPM-S3 and 0.7495 with DPM-S2. Ultimately, Scenario 3, as determined through a cluster analysis of fishing ground centers, was selected as the most effective division of the training dataset.
In conclusion, the AlexNetMini model demonstrated an F1-score comparable to that of the AlexNet model while exhibiting a reduced size. Consequently, it was deemed appropriate for the practical prediction of fishing grounds for the purpleback flying squid in the high seas of the Northwest Indian Ocean. Furthermore, there is potential for applying deep learning models to various fish species to establish a more precise fishery ground prediction system. Looking ahead, we intend to optimize the utilization of big data in fisheries and contribute to the advancement of a high-precision, intelligent fishing ground prediction system.

Author Contributions

Conceptualization, S.Z., J.C. and F.T.; methodology, J.C., S.Z. and H.H.; data curation, J.C., H.H., F.T. and X.C.; writing—review and editing, S.Z., J.C., H.H. and X.C.; visualization, J.C. and Y.S.; validation, Y.S.; funding acquisition, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Laoshan Laboratory, grant number LSKJ202201804.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting the results of this research are not publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Wang, Y.; Chen, X. World Oceanic Economic Cephalopod Resources and Fishery, 1st ed.; Ocean Press: Beijing, China, 2005; ISBN 7-5027-6299-X.
2. Chen, X.; Qian, W. Study on the resource density distribution of Symplectoteuthis oualaniensis in the northwestern Indian Ocean. J. Shanghai Ocean Univ. 2004, 13, 2118–2223.
3. Chen, X.; Shao, F. Study on the resource characteristics of Symplectoteuthis oualaniensis and their relationships with the sea conditions in the high sea of the northwestern Indian Ocean. Period. Ocean Univ. China 2006, 36, 611–616.
4. Chen, J.; Zhao, G.; Zhang, S.; Cui, X.; Tang, F.; Chen, F.; Han, H. Study on temporal and spatial distribution characteristics of Symplectoteuthis oualaniensis in high seas fishing ground of northwest Indian Ocean. J. Fish. China 2023, accepted.
5. Yu, W.; Chen, X.; Liu, L. Synchronous Variations in Abundance and Distribution of Ommastrephes bartramii and Dosidicus gigas in the Pacific Ocean. J. Ocean Univ. China 2021, 20, 695–705.
6. Wei, J.; Cui, G.; Xuan, W.; Tao, Y.; Su, S.; Zhu, W. Effects of SST and Chl-a on the spatiotemporal distribution of Sthenoteuthis oualaniensis fishing ground in the Northwest Indian Ocean. J. Fish. Sci. China 2022, 29, 388–397.
7. Fan, J.; Yu, W.; Ma, S.; Chen, Z. Spatio-temporal variability of habitat distribution of Sthenoteuthis oualaniensis in South China Sea and its interannual variation. South China Fish. Sci. 2022, 18, 1–9.
8. Chemshirova, I.; Hoving, H.-J.; Arkhipkin, A. Temperature effects on size, maturity, and abundance of the squid Illex argentinus (Cephalopoda, Ommastrephidae) on the Patagonian Shelf. Estuar. Coast. Shelf Sci. 2021, 255, 107343.
9. Waluda, C.M.; Trathan, P.N.; Rodhouse, P.G. Influence of oceanographic variability on recruitment in the Illex argentinus (Cephalopoda: Ommastrephidae) fishery in the South Atlantic. Mar. Ecol. Prog. Ser. 1999, 183, 159–167.
10. Hatfield, E.M.C. Do some like it hot? Temperature as a possible determinant of variability in the growth of the Patagonian squid, Loligo gahi (Cephalopoda: Loliginidae). Fish. Res. 2000, 47, 27–40.
11. Xing, B.; Zhang, L.; Liu, Z.; Sheng, H.; Bi, F.; Xu, J. The Study of Fishing Vessel Behavior Identification Based on AIS Data: A Case Study of the East China Sea. J. Mar. Sci. Eng. 2023, 11, 1093.
12. Xiao, G.; Xu, B.; Zhang, H.; Tang, F.; Chen, F.; Zhu, W. A study on spatial-temporal distribution and marine environmental elements of Symplectoteuthis oualaniensis fishing grounds in outer sea of Arabian Sea. South China Fish. Sci. 2022, 18, 10–19.
13. Shen, Z.; Chen, X.; Wang, J. Forecasting of Bigeye tuna fishing ground in the Eastern Pacific Ocean based on sea surface temperature and sea surface height. Mar. Sci. 2015, 39, 45–51.
14. Zhang, H.; Cui, X.; Fan, W. Predicting system of Chilean jack mackerel fishing grounds based on remote sensing data. 2012, 28, 140–144.
15. Zhu, H. Construction of Fishing Ground Forecast Model of Ommastrephes bartramii in Northwest Pacific Based on Convolutional Neural Network. Master’s Thesis, Shanghai Ocean University, Shanghai, China, 2021.
16. Zhu, H.; Wu, Y.; Tang, F.; Jin, S.; Pei, K.; Cui, X. Construction of fishing ground forecast model of Ommastrephes bartramii using convolutional neural network in the Northwest Pacific. Trans. Chin. Soc. Agric. Eng. 2020, 36, 57+153–160.
17. Yuan, H.; Zhang, S.; Chen, G. Fishery forecasting in the fishing ground based on dual-modal deep learning model. Jiangsu J. Agric. Sci. 2021, 37, 435–442.
18. Han, H.; Yang, C.; Jiang, B.; Shang, C.; Sun, Y.; Zhao, X.; Xiang, D.; Zhang, H.; Shi, Y. Construction of chub mackerel (Scomber japonicus) fishing ground prediction model in the northwestern Pacific Ocean based on deep learning and marine environmental variables. Mar. Pollut. Bull. 2023, 193, 115158.
19. Fan, Y.; Dai, X. Forecasting central fishing ground of Thunnus alalunga in the south Pacific Ocean based on multifactor. In Proceedings of the Abstract Collection of Papers of the 2014 Annual Conference of the Chinese Fisheries Society, Changsha, China, 29 October 2014; p. 374.
20. Yuan, H.; Wang, M.; Liu, H.; Chen, G. Fishing ground prediction model based on feature interaction and convolutional network. Jiangsu J. Agric. Sci. 2021, 37, 1501–1509.
21. Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation. arXiv 2016.
22. Jin, X.; Zhuang, J.; Xu, Z. Lightweight YOLOv5s network-based algorithm for identifying hazardous objects under vehicles. J. Zhejiang Univ. (Eng. Sci.) 2023, 57, 1526+1561.
23. Wang, R.; Guo, Q.; Lu, S.; Zhang, C. Tire Defect Detection Using Fully Convolutional Network. IEEE Access 2017, 7, 43502–43510.
24. Rodrigues, L.; Rodrigues, L.; da Silva, D.; Mari, J.F. Evaluating Convolutional Neural Networks for COVID-19 classification in chest X-ray images. In Proceedings of the Anais do Workshop de Visão Computacional (WVC), Uberlandia, Minas Gerais, Brazil, 7–8 October 2020; pp. 52–57.
25. Li, Y.-F.; Ying, H. Disrupted visual input unveils the computational details of artificial neural networks for face perception. Front. Comput. Neurosci. 2022, 16, 1054421.
26. Cao, Y.; Liu, S.; Wang, M.; Liu, W.; Liu, T.; Cao, L.; Guo, J.; Feng, D.; Zhang, H.; Hassan, S.G.; et al. A Hybrid Method for Identifying the Feeding Behavior of Tilapia. IEEE Access 2024, 12, 76022–76037.
27. Yang, T.; Jia, S.; Zhang, H.; Zhou, M. Research on Image Classification of Marine Pollutants with Convolution Neural Network. In Proceedings of the ICCCS 2018: Cloud Computing and Security, Haikou, China, 26 September 2018; pp. 665–673.
28. Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the Effective Receptive Field in Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2016; Volume 29.
29. Ding, X.; Zhang, X.; Han, J.; Ding, G. Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), New Orleans, LA, USA, 18–24 June 2022; pp. 11963–11975.
30. Yu, H.; Ma, J.; Zhang, Y. Plant leaf recognition model based on two-way convolutional neural network. J. Beijing For. Univ. 2018, 40, 132–137.
31. Wang, F. Research on Sketch Recognition Using Convolutional Neural Networks. Master’s Thesis, Anhui University, Hefei, China, 2016.
32. Xie, M.; Liu, T.; Chen, X. Prediction on fishing ground of Ommastrephes bartramii in Northwest Pacific based on deep learning. J. Fish. China 2022, 48, 119311.
33. Armas, E.; Arancibia, H.; Neira, S. Identification and Forecast of Potential Fishing Grounds for Anchovy (Engraulis ringens) in Northern Chile Using Neural Networks Modeling. Fishes 2022, 7, 204.
34. Xiao, G. Construction and Comparison of Fishing Ground Forecast Model of Chub Mackerel (Scomber japonicus) in Pacific Northwest. Master’s Thesis, Shanghai Ocean University, Shanghai, China, 2022.
35. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, UT, USA, 2018; pp. 6848–6856.
36. Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Computer Vision – ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 122–138.
37. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017.
38. Song, H.; Hua, Z.; Ma, B.; Wen, S.; Kong, X.; Xu, X. Lightweight Keypoint Detection of Dairy Cow Based on SimCC-ShuffleNetV2. Trans. Chin. Soc. Agric. Mach. 2023, 54, 275–281.
39. Fan, T.; Gu, J.; Wang, W.; Zuo, Y.; Ji, C.; Hou, Z.; Lu, B.; Dong, J. Lightweight Honeysuckle Recognition Method Based on Improved YOLOv5s. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2023, 39, 192–200.
40. Armstrong, R.A. Is There a Large Sample Size Problem? Ophthalmic Physiol. Opt. 2019, 39, 129–130.
41. Rajput, D.; Wang, W.-J.; Chen, C.-C. Evaluation of a Decided Sample Size in Machine Learning Applications. BMC Bioinform. 2023, 24, 48.
Figure 1. Distribution of average CPUE of purpleback flying squid in the Northwest Indian Ocean from 2016 to 2021.
Figure 2. Schematic of a 2D matrix of 4 channels, including SST, longitude, latitude, and time.
Figure 3. The network architecture diagram of the AlexNet model (a) and VGG11 model (b).
Figure 4. Test set accuracy of the VGG11 (A) and AlexNet (B) models.
Figure 5. Loss curves of the VGG11 (A) and AlexNet (B) models.
Figure 6. Comparison of validation set accuracy results for AM3 (A) and AM6 (B).
Figure 7. Comparison of F1-score results between AlexNet (left) and AlexNetMini (right).
Table 1. The division of the training datasets under various scenarios.

Scenario Name | Dataset Count | Data Period | Dataset Division Principle
Scenario 1 | 1 | January–December | No division
Scenario 2 | 2 | January–May; September–December | Due to the influence of the summer monsoon, the production data from June to August are extremely limited [4]; the dataset was bisected at the monsoon gap.
Scenario 3 | 3 | February–May; September–November; December–January (next year) | According to the cluster analysis of the gravity centers of catches in reference [4], the dataset was divided into 3 parts.
Table 2. Experimental group codes (from A to L) associated with the cross-combination between models and datasets.

Model | January–May | September–December | February–May | September–November | December–January Next Year | January–December
AlexNet | A | C | E | G | I | K
AlexNetMini | B | D | F | H | J | L
Table 3. A comparative analysis of the F1-scores of AlexNet and AlexNetMini under different scenarios.

Scenario | Dataset | AlexNet F1-Score | AlexNet Average | AlexNetMini F1-Score | AlexNetMini Average | Average of the Two Models
Scenario 1 | 1 | K: 0.6957 | 0.6957 | L: 0.6992 | 0.6992 | 0.6975
Scenario 2 | 1 | A: 0.7297 | 0.7505 | B: 0.7441 | 0.7495 | 0.7369
Scenario 2 | 2 | G: 0.7728 | | H: 0.7640 | | 0.7684
Scenario 3 | 1 | C: 0.7501 | 0.7430 | D: 0.7419 | 0.7486 | 0.7640
Scenario 3 | 2 | E: 0.7495 | | F: 0.7690 | | 0.7593
Scenario 3 | 3 | I: 0.7339 | | J: 0.7386 | | 0.7363