Article

Smart City Aquaculture: AI-Driven Fry Sorting and Identification Model

1 Department of Computer Science & Information Management, Soochow University, Taipei 100006, Taiwan
2 Vossic Technology, New Taipei 235030, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8803; https://doi.org/10.3390/app14198803
Submission received: 21 August 2024 / Revised: 25 September 2024 / Accepted: 27 September 2024 / Published: 30 September 2024
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)

Featured Application

Under the constraints of limited resources in smart cities, fish must be separated by gender at an early stage and reared separately. This study applies AI models to the identification and separation of fingerlings and validates them in a specific aquaculture setting, thereby providing competitive advantages for the sustainable management of aquaculture.

Abstract

The development of smart agriculture has become a critical issue for the future of smart cities, with large-scale management of aquaculture posing numerous challenges. Particularly in the fish farming industry, producing single-sex fingerlings (especially male fingerlings) is crucial for enhancing rearing efficiency and could even provide key support in addressing future global food demands. However, traditional methods of manually selecting the gender of broodfish rely heavily on experienced technicians, are labor-intensive and time-consuming, and present significant bottlenecks in improving production efficiency, thus limiting the capacity and sustainable development potential of fish farms. In response to this situation, this study has developed an intelligent identification system based on the You Only Look Once (YOLO) artificial intelligence (AI) model, specifically designed for analyzing secondary sexual characteristics and gender screening in farmed fish. Through this system, farmers can quickly photograph the fish’s cloaca using a mobile phone, and AI technology is then used to perform real-time gender identification. The study involved two phases of training with different sample sets: in the first phase, the AI model was trained on a single batch of images with varying parameter conditions. In the second phase, additional sample data were introduced to improve generalization. The results of the study show that the system achieved an identification accuracy of over 95% even in complex farming environments, significantly reducing the labor costs and physical strain associated with traditional screening operations and greatly improving the production efficiency of breeding facilities. This research not only has the potential to overcome existing technological bottlenecks but also may become an essential tool for smart aquaculture. As the system continues to be refined, it is expected to be applicable across the entire life cycle management of fish, including gender screening during the growth phase, thereby enabling a more efficient production and management model. This not only provides an opportunity for technological upgrades in the aquaculture industry but also promotes the sustainable development of aquaculture. The smart aquaculture solution proposed in this study demonstrates the immense potential of applying AI technology to the aquaculture industry and offers strong support for global food security and the construction of smart cities.

1. Introduction

The scope of smart cities encompasses various sectors such as manufacturing, transportation, agriculture and aquaculture, retail, and more. With the rapid increase in population, food security is poised to become a global challenge in the future. Recognizing that smart agriculture will be a critical topic within future smart cities, this study focuses on the aquaculture industry, specifically on smart aquaculture.
The mass production and cultivation of farmed fish worldwide follow similar practices. Without the application of specialized technologies during production, fish in ponds can easily continue to reproduce uncontrollably, leading to significant disparities in fish size, slow growth rates, low feed efficiency, and a decline in the harvestable size and quality, all of which result in increased costs and various management challenges. Therefore, the primary goal of hatcheries is to stock ponds with all-male fingerlings at the outset, positioning hatcheries as the key suppliers of large quantities of male fingerlings.
To produce large quantities of single-sex male fingerlings, male and female broodfish must be separated and raised apart to prevent mating and spawning until the appropriate time, when they are paired, allowed to mate and spawn, and potentially hatch the fry. This separation enables concentrated fingerling production, and increasing its frequency can effectively raise the final output of fingerlings. Currently, the gender screening of broodfish in hatcheries relies heavily on the manual labor of experienced technicians, demanding significant physical and visual effort and severely limiting the hours each technician can work and the daily output each can achieve, which in turn constrains the hatchery’s ability to supply fingerlings. Moreover, training technicians to an adequate level of broodfish-screening skill often takes considerable time. As a result, hatchery operations today remain a standard “fully manual operation”, which is unattractive to younger generations and poses a significant barrier to further industry development.
Therefore, this study aims to develop an intelligent identification system for the aquaculture industry, which could become a crucial tool for technical upgrades in breeding operations. Additionally, it is anticipated that as this identification tool becomes more sophisticated, it could be expanded for use in grow-out farms, particularly during the small and medium fish stages in the existing segmented farming model, thereby maximizing commercial benefits and leading to a comprehensive upgrade of the aquaculture industry.
By integrating advanced technological control techniques into the fundamental operations of hatcheries, this approach aims to overcome the bottlenecks in operational efficiency associated with broodfish gender screening and to enhance production capacity. This could position the technology as one of the pioneers in introducing advanced technology to aquaculture, driving new transformations in the industry, attracting younger generations to the field, and ensuring the transmission of aquaculture technology. Ultimately, this would enhance the effectiveness of aquaculture within smart cities through the application of AI (artificial intelligence) technology.

2. Literature Review

The development and application of technological advancements in aquaculture have significantly transformed traditional operations within the industry. Smart aquaculture technologies [1,2,3], such as artificial intelligence (AI), the Internet of Things (IoT), automation technologies, and big data analytics, have not only improved production efficiency but also enhanced the precision and sustainability of management practices [4,5]. These technologies, including automated feeding systems, environmental monitoring, and fish behavior analysis, have markedly reduced the need for manual labor while improving the quality of aquaculture products. These advancements have not only optimized various stages of the production process but also played a crucial role in addressing environmental challenges and labor shortages.
Asche and Smith [6] emphasized the role of technological innovation in fisheries and aquaculture in their study. They highlighted that smart aquaculture technologies, through the introduction of automation and digital technologies, have not only boosted productivity but also improved the sustainability of aquaculture practices. Similarly, Benetti et al. [7] explored the application of technological advancements in cobia fish farming, particularly in water quality monitoring and automated feeding systems. They found that the integration of these technologies not only increased the precision of farming operations but also significantly enhanced the growth rate and yield of fish. The application of these technologies in cobia fish farming not only reduced farming costs but also improved product consistency and market competitiveness.
In traditional breeding operations worldwide, after pairing broodfish to produce fingerlings, farmers commonly use male hormones to treat the newly hatched fingerlings, forcing them to develop into males to ensure the production of single-sex fingerlings [8,9]. However, the use of male hormones raises concerns about environmental pollution, causing significant impacts on the surrounding ecosystem [10]. In response, some hatcheries have adopted hybridization methods to produce single-sex fingerlings, thereby avoiding the reliance on male hormones. Additionally, the acquisition of hormone treatments has become increasingly restricted, with rising costs and declining purity and quality, further complicating the use of male hormones in breeding operations.
Therefore, the fish gender identification tool developed in this study aims to provide immediate contributions to current aquaculture hatcheries. As it matures through the accumulation of sufficient data, it is expected to be further integrated into grow-out farms. Specifically, when juvenile fish grow to a stage where their gender can be identified through secondary sexual characteristics, this tool can be employed to separate male and female fish for rearing. This would allow mass production operations in aquaculture to break free from the dependence on obtaining single-sex fingerlings, achieving high-efficiency production—a potential future market with enormous promise. To achieve this goal, this study explores and analyzes three mainstream market models—SSD (Single Shot MultiBox Detector), MobileNet, and YOLO (You Only Look Once)—and examines their respective advantages and disadvantages.
The Single Shot MultiBox Detector (SSD) has demonstrated excellent performance in visual recognition tasks [10,11]. The primary advantages of SSD include its fast processing speed and high accuracy, particularly when handling large-scale datasets such as PASCAL VOC and COCO.
However, SSD has some limitations. Firstly, its performance may decrease when detecting small objects due to the use of fixed-size and -ratio anchor boxes for predicting object locations [12]. Additionally, the SSD model requires a large amount of labeled data and computational resources for training, which may not align well with resource-constrained platforms where the model is eventually deployed. To address these issues, researchers have introduced several improvements, such as multi-scale and attention mechanisms, to enhance small object detection capabilities and more efficient feature fusion techniques to improve overall performance [12]. Despite some challenges, SSD remains a powerful tool for fast and accurate object detection, making it suitable for applications where speed is essential. For instance, Almudawi et al. [13] utilized the SSD model to improve gesture recognition systems, particularly in medical and educational settings where gestures facilitate communication for diagnosis and treatment. Similarly, Souaidi et al. [14] explored the SSD model’s application in medical image analysis, focusing on the automatic detection of polyps, which demonstrates the SSD model’s potential in medical diagnostics.
The MobileNet model has garnered significant attention due to its efficient visual recognition performance, particularly on mobile and embedded devices. These models utilize depthwise separable convolutions to construct lightweight deep neural networks, effectively balancing latency and accuracy, making them ideal for real-time processing applications. Regarding MobileNet’s performance, it excels in object detection and classification, especially when handling complex images and video data [15]. For instance, MobileNetV2 introduced linear bottlenecks and shortcut connections, which contributed to improved training speed and accuracy. These models maintain accuracy similar to larger models while keeping computational and parameter requirements low.
For example, Elfatimi et al. [16] applied deep learning techniques, particularly the MobileNet architecture, to detect bean leaf diseases. The model’s effectiveness was evaluated by testing it on three different bean leaf image datasets, achieving a high accuracy of over 92% on all three datasets. Huynh et al. [17] proposed a novel lightweight model called MobileNet-SA for sketch classification tasks. This model combines the self-attention mechanism with the lightweight Convolutional Neural Network architecture of MobileNet to enhance sketch classification performance in resource-constrained environments. In another study, transfer learning techniques were employed to predict the severity of traffic accidents [18]. Shapley values were used to analyze the impact of different features on model predictions, aiming to provide more effective traffic safety policy recommendations. The experiments utilized various deep learning models (Multilayer Perceptron, MLP; Convolutional Neural Network, CNN; Long Short-Term Memory, LSTM) and transfer learning models (ResNet, EfficientNetB4, InceptionV3, Xception, MobileNet), with MobileNet achieving the best performance, with a prediction accuracy of 98.17%.
The advantages of MobileNet in visual recognition include high efficiency, low latency, and relatively high accuracy, though it may perform less effectively in identifying small objects in certain situations. These models are particularly well suited for mobile and edge computing devices, capable of handling a wide range of complex visual recognition tasks [19,20].
YOLO (You Only Look Once) is a widely used deep learning algorithm known for its speed and accuracy in real-time object detection, and recent research has applied it across many fields. In intelligent surveillance, YOLO’s advantages in speed and precision make it an ideal choice for smart cities and security systems, and studies such as [21] have demonstrated its efficiency and accuracy in real-time object detection for these applications.
In the field of traffic sign detection and recognition, YOLO has proven crucial for enhancing vehicle safety and the reliability of autonomous driving technologies. These systems can instantly identify and classify traffic signs, assisting drivers or autonomous systems in making better driving decisions. A review paper [22] provides a detailed analysis of the methods and challenges associated with using the YOLO algorithm for traffic sign detection and recognition.
In agriculture, YOLO has been used to detect and classify crop diseases, helping farmers identify and address crop issues promptly, thereby improving crop yield and quality. One study combined MobileNetV3 and YOLOv7 into a new model specifically for detecting pests and diseases in rice crops [23]. Additionally, research combining YOLO with SSD, such as [24], collected a dataset of tomato images taken by robots in greenhouses and conducted benchmark testing and evaluation on several deep learning models, including SSD and YOLO networks. The results showed that SSD MobileNet v2 was the best detection model, with high accuracy and fast inference speed. Moreover, the YOLOv4 Tiny model also performed well, with an inference time of about 5 milliseconds.
Comparative analyses of SSD, MobileNet, and YOLO models [25,26] indicate that each model has its unique strengths and limitations. The SSD model is renowned for its high accuracy and real-time processing capability, making it suitable for video streams and dynamic scenes, though it may be less precise than YOLO when dealing with small objects and has relatively higher computational costs. MobileNet is widely used in mobile and embedded devices due to its lightweight and efficient design, significantly reducing computational load and model size through depthwise separable convolutions, though it may sacrifice precision in some complex detection tasks. YOLO, celebrated for its rapid detection speed, can achieve high-accuracy object detection in a single pass, making it ideal for applications requiring real-time responses, such as traffic monitoring. However, YOLO may have limitations in accuracy when handling overlapping objects or small-sized objects. When choosing the appropriate model, it is crucial to consider the specific requirements of the application. For example, in applications where real-time processing is essential, YOLO might be the better choice.
In recent years, AI, IoT, and edge computing technologies have been widely applied in smart agriculture. Ref. [27] reviewed the application of AI and IoT in smart homes, focusing on how these technologies are integrated into various household devices, and analyzed their value in energy management and enhancing user convenience. This is relevant to smart agriculture, as both rely on sensor networks and real-time data processing. Ref. [28] explored how to optimize the YOLOv8 model for detecting tomatoes, particularly in enhancing features and recognition techniques in complex backgrounds, which closely aligns with our research methodology and serves as a valuable reference for applying algorithms in different scenarios. Pise et al. [29] examined the application of AIoT in the architecture of smart healthcare systems, covering how AI can improve the efficiency of healthcare services. This technical framework can also inspire how environmental monitoring and data analysis can be implemented in smart aquaculture. In [30], Vasconez et al. utilized Convolutional Neural Networks (CNNs) for fruit detection and counting, and provided a detailed comparison of different CNN architectures, which holds significant reference value for AI-based gender identification in aquaculture. Additionally, Ref. [31] introduced feature extraction methods for cocoa bean image classification, demonstrating how AI can be applied to image processing in smart agriculture, making it a suitable supplement to discussions on other application scenarios of AI in agriculture. These references collectively provide a solid technical foundation for this study and demonstrate the feasibility of applying AI technologies in various smart aquaculture environments.
The intrusion detection system proposed by [32] addresses the issue of data imbalance with innovative techniques. By utilizing data normalization, dimensionality reduction (Fisher Discriminant Analysis), and the k-nearest neighbor method, data preparation is enhanced, and instance-based learners are used to detect attack vectors, achieving 99% accuracy and detection rates. This approach provides valuable insights for aquaculture identification projects, particularly in handling imbalanced data and classification challenges. In the study by [33], it is mentioned that IoT systems consist of multiple layers, including the perception layer, network layer, and application layer, each facing different security challenges. To protect these layers, the research proposes the use of encryption technologies, intrusion detection systems, and secure communication protocols to defend against attacks. Moreover, machine learning and deep learning models can enhance system security by analyzing attack patterns in IoT data for real-time anomaly detection. These findings may offer valuable insights for aquaculture identification projects, particularly in ensuring data integrity and system security. AI-driven security models can improve the ability to detect and address potential threats.

3. Materials and Methods

This study focuses on developing an intelligent identification system for detecting sexual characteristics in farmed fish, which can assist hatcheries in quickly identifying the gender of fish and improving the efficiency of sorting operations.
After fish are harvested from the breeding ponds, they directly enter the identification process. However, during this process, various impurities in the water, such as bubbles, excrement, and gravel, often interfere, resulting in images that are not as clean or orderly as factory-produced standard specimens. Additionally, the identification process is affected by varying lighting conditions from morning to evening, leading to significant environmental parameter fluctuations. The unique reproductive characteristics of each fish further complicate the identification process, requiring extensive training data and precise parameter adjustments.
To address these challenges, this study first selected an appropriate AI model and designed an identification workflow. Parameters were then adjusted according to the characteristics of both the AI model and the identification environment. Ultimately, this led to the development of an excellent smart aquaculture solution: an AI-based intelligent fingerling separation and identification model.
The primary objective of this study is to verify the feasibility of the AI gender identification system in a specific aquaculture environment; thus, the error analysis mainly focused on overall error rates and accuracy. Detailed statistical data on low-confidence or misclassification cases were not extensively analyzed, as the emphasis at this stage was on the preliminary validation of the system’s overall performance. However, we observed a decline in recognition performance under certain environmental conditions (such as insufficient lighting or the presence of impurities in the water).
Future research will further investigate the specific types and causes of these recognition errors and adopt more detailed error analysis methods to optimize system performance. This will include classification analysis of low-confidence cases and designing model improvement strategies.
Before finalizing the AI model, this study tested three mainstream models on the market: SSD, MobileNet, and YOLO. The test results are as follows:
  • SSD: During training, the SSD model failed to converge, with the loss not decreasing, meaning that the AI could not effectively learn.
  • MobileNet: The model successfully completed training and could correctly frame the genital area. However, in the test set, 80% of male fish were incorrectly identified as female, indicating that the model’s identification accuracy was low.
  • YOLO: The model successfully completed training and could correctly frame the genital area. In the test set, the identification accuracy for both male and female fish exceeded 80%.
As a result, this study selected YOLO as the training model. The identification method is illustrated in Figure 1 below and is explained as follows:
Data Preparation and Augmentation
  • Data Collection and Augmentation: A large number of annotated images of farmed fish were collected, ensuring coverage of different angles, lighting conditions, and backgrounds. Data augmentation techniques were applied, such as rotation, scaling, and brightness adjustment.
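The study does not name a specific augmentation toolkit. The following minimal sketch, assuming the Albumentations library and a hypothetical image file, illustrates the rotation, scaling, and brightness adjustments described above while keeping the YOLO-format bounding box consistent with the transformed image.

```python
# Minimal augmentation sketch (assumption: Albumentations; file name and box are illustrative).
import albumentations as A
import cv2

augment = A.Compose(
    [
        A.Rotate(limit=180, p=0.5),                               # random rotation
        A.RandomScale(scale_limit=0.2, p=0.5),                    # random scaling
        A.RandomBrightnessContrast(brightness_limit=0.2, p=0.5),  # brightness adjustment
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("fish_cloaca_sample.jpg")   # hypothetical training photo
bboxes = [[0.52, 0.48, 0.10, 0.08]]            # one YOLO-format box around the cloaca
out = augment(image=image, bboxes=bboxes, class_labels=["male"])
aug_image, aug_bboxes = out["image"], out["bboxes"]
```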
Model Selection and Training
  • Pre-trained YOLO Model Selection: A YOLO model pre-trained on a large-scale dataset was chosen to perform better in transfer learning.
  • Model Architecture Modification: The output layer of the model was modified to accommodate new categories (male and female fish). If necessary, the network’s depth or width was adjusted.
  • Implementation of Transfer Learning Strategy: The first few layers of the pre-trained model were frozen, only fine-tuning the last few layers and the newly added output layer. Gradually, more layers were unfrozen for detailed fine-tuning.
Loss Function and Optimizer Selection
  • Loss Function and Optimizer: A suitable loss function for object detection, such as focal loss, was selected, taking into account the issue of class imbalance.
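The paper names focal loss as a candidate for handling class imbalance but does not publish an implementation. The sketch below, assuming PyTorch and illustrative values of alpha and gamma, shows the standard binary focal loss, which down-weights easy, well-classified examples.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard binary focal loss (illustrative sketch, not the authors' code).

    logits: raw scores for the male/female class head, shape (N,).
    targets: ground-truth labels in {0, 1}, shape (N,).
    alpha and gamma are common defaults, not values reported in this study.
    """
    targets = targets.float()
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()  # down-weight easy examples
```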
Model Training and Fine-tuning
  • Training and Fine-tuning: The model was trained and fine-tuned by monitoring the loss function and evaluation metrics.
Model Evaluation
  • Evaluation: The model’s performance was evaluated using overall accuracy, class-specific accuracy (for male and female fish), error rates, and processing time. These simplified evaluation methods allow for a quick assessment of the model’s performance in real-world aquaculture scenarios.
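The paper does not state which YOLO version or training framework was used. The following sketch, assuming the Ultralytics YOLO package with hypothetical dataset and file names, outlines the two-stage transfer-learning workflow described above (freeze early layers, fine-tune the head, then unfreeze for detailed fine-tuning) together with the simplified evaluation step.

```python
from ultralytics import YOLO

# Assumptions: Ultralytics YOLO API; "fish_gender.yaml" is a hypothetical dataset
# config listing the two classes (male, female); hyperparameters are illustrative.
model = YOLO("yolov8n.pt")                     # pre-trained weights for transfer learning

# Stage 1: freeze the first 10 layers and fine-tune the detection head on the new
# classes, with rotation/scaling augmentation and a cosine-annealing LR schedule.
model.train(data="fish_gender.yaml", epochs=50, imgsz=640,
            freeze=10, degrees=180, scale=0.2, cos_lr=True)

# Stage 2: unfreeze all layers for detailed fine-tuning at a lower initial learning rate.
model.train(data="fish_gender.yaml", epochs=30, imgsz=640,
            freeze=0, lr0=1e-4, cos_lr=True)

# Simplified evaluation and single-image inference on a mobile-phone photo.
metrics = model.val(data="fish_gender.yaml")
result = model("cloaca_photo.jpg")[0]          # hypothetical test image
print(result.boxes.cls, result.boxes.conf)     # predicted class ids and confidence scores
```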
YOLO Loss Function
The YOLO loss function is a composite loss function consisting of multiple components, including bounding box loss, object loss, non-object loss, and classification loss. Each part of the loss function measures the difference between the model’s predictions and the ground truth. The purpose of this loss function design is not only to accurately locate the target object’s position but also to ensure that the model can accurately classify the object.
This comprehensive approach ensures that the model is well prepared to handle the challenges of real-world scenarios, such as varying lighting conditions and the presence of environmental noise, while maintaining high accuracy in the gender identification of fish.
L = λcoord × Lbox + λobj × Lobj + λnoobj × Lnoobj + λclass × Lclass
Variable Descriptions:
  • L: total loss;
  • Lbox: bounding box loss;
  • Lobj: object loss;
  • Lnoobj: non-object loss;
  • Lclass: classification loss;
  • λcoord, λobj, λnoobj, λclass: weight coefficients for each component.
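For concreteness, the weighted sum above can be written as a small helper. The default weights below follow the original YOLO paper (λcoord = 5, λnoobj = 0.5) and are illustrative rather than the values used in this study.

```python
def yolo_total_loss(l_box, l_obj, l_noobj, l_class,
                    lambda_coord=5.0, lambda_obj=1.0,
                    lambda_noobj=0.5, lambda_class=1.0):
    """Composite YOLO loss: L = λcoord*Lbox + λobj*Lobj + λnoobj*Lnoobj + λclass*Lclass."""
    return (lambda_coord * l_box + lambda_obj * l_obj
            + lambda_noobj * l_noobj + lambda_class * l_class)
```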
Cosine Annealing learning rate scheduling is a learning rate adjustment strategy used to gradually decrease the learning rate during the training process. The core idea is to allow the learning rate to follow a cosine curve throughout training, starting with a higher learning rate at the beginning and gradually reducing it as training progresses, eventually converging to a smaller value. This approach helps prevent premature convergence, enabling the model to better explore the parameter space during training, thereby achieving improved performance.
ηt = ηmin + 0.5 × (ηmax − ηmin) × (1 + cos(t × π/T))
Variable Descriptions:
  • ηt: Learning rate at time t;
  • ηmin: Minimum learning rate;
  • ηmax: Maximum learning rate;
  • t: Current iteration number;
  • T: Total number of iterations.
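A direct transcription of the schedule, with illustrative minimum and maximum learning rates, shows how the rate starts at ηmax and decays along the cosine curve to ηmin.

```python
import math

def cosine_annealing_lr(t, T, eta_min=1e-5, eta_max=1e-2):
    """Learning rate at iteration t out of T total iterations (values illustrative)."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(t * math.pi / T))

# cosine_annealing_lr(0, 100)   -> 0.01   (starts at eta_max)
# cosine_annealing_lr(50, 100)  -> ~0.005 (halfway down the cosine curve)
# cosine_annealing_lr(100, 100) -> 1e-05  (converges to eta_min)
```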
When evaluating a model, especially in specific application scenarios, using simple and intuitive evaluation methods can more directly reflect the model’s performance. Simplified evaluation methods include calculating overall accuracy, class-specific accuracy, error rate, and processing time. These indicators allow for a quick assessment of the model’s basic performance and its effectiveness in real-world applications without the need for more complex statistical analysis tools. These methods are particularly suitable for rapid and effective performance evaluation when actual samples are available. Since this study involves real-world aquaculture fish data for validation, a simplified evaluation approach is adopted.
  • Overall Accuracy
Accuracy = (Number of correctly classified fish)/(Total number of fish)
  • Class-Specific Accuracy
Male Fish Accuracy = (Number of correctly classified male fish)/(Total number of male fish)
Female Fish Accuracy = (Number of correctly classified female fish)/(Total number of female fish)
  • Error Rate
Error Rate = (Number of incorrectly classified fish)/(Total number of fish)
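These ratios can be collected into a small helper; the function below is an illustrative transcription of the simplified evaluation, not the authors' code.

```python
def simplified_evaluation(correct_male, total_male, correct_female, total_female):
    """Overall accuracy, class-specific accuracy, and error rate from raw counts."""
    total = total_male + total_female
    correct = correct_male + correct_female
    return {
        "overall_accuracy": correct / total,
        "male_accuracy": correct_male / total_male,
        "female_accuracy": correct_female / total_female,
        "error_rate": (total - correct) / total,
    }
```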
In this study, we chose the YOLO model primarily due to its advantages in scenarios requiring real-time processing, offering fast inference speed alongside high accuracy. Although SSD and MobileNet models were also tested, their convergence speed during training and inference time did not meet expectations, so they were not selected as the final model. Since the focus of this research is on validating the AI system’s gender recognition capability in a specific application scenario, our model comparison mainly centered on recognition effectiveness and practical feasibility, rather than detailed performance metrics such as training time and inference time. Future research will supplement these model comparisons under various application environments, including training time, inference speed, and accuracy, to further strengthen the rationale for model selection.

4. Results

While this study demonstrates high efficiency in the gender identification of fingerlings under specific environmental conditions, we acknowledge that broader validation across various conditions (such as different lighting, water quality, and fish species) has not yet been conducted. However, the primary goal of this research was to develop a feasible technological solution for specific aquaculture operational scenarios. To ensure accurate performance evaluation during the early development phase, we opted for controlled experimental conditions to validate the model. Future research will expand to more diverse conditions to test the model’s generalization. For now, the results of this study should be discussed within the scope of the designed experimental framework.
In this study, a relatively small sample size was used for the training and validation of the YOLO model. While a larger dataset could further enhance the model’s generalization ability, YOLO is renowned for its efficiency in real-time object detection and can achieve high accuracy even with smaller datasets. The experimental design was optimized for specific fish species and environmental conditions, allowing for good recognition performance despite the limited sample size. Future research will consider expanding the dataset and implementing more rigorous data augmentation techniques to further improve the model’s robustness and adaptability.
This study adopted a typical hardware configuration as a reference:
  • CPU: Intel Core i7-10700K @ 3.80 GHz;
  • GPU: NVIDIA GeForce RTX 2080 or NVIDIA A100;
  • RAM: 16 GB;
  • Storage: 512 GB SSD;
  • Operating System: Ubuntu 18.04 or Windows 10.
This configuration efficiently handles the inference process of deep learning models and achieved reasonable inference times during the experiments.
This study leverages the recognition module’s ability to perform instant identification, achieving a real-time identification rate of 12 fish per minute with a gender recognition accuracy of over 95%, which significantly reduces the workload of laborers in the aquaculture industry and enhances sorting efficiency. The study focused on Taiwan’s aquaculture industry, using tilapia (Oreochromis spp.) as the experimental subject. The secondary sexual characteristics of the cloaca of male and female tilapia were input into the YOLO AI model. Through annotation, scaling, and parameter adjustments, the study overcame challenges such as diverse external characteristics and suboptimal identification environments (e.g., significant light variations and the presence of bubbles, excrement, and gravel mixed with the fish). Breeders need only use a mobile phone as the identification platform, allowing them to capture images of fish cloacal characteristics in any setting and quickly display AI judgment results. This greatly improves the outlook for tilapia breeding operations, transforming them from traditional manual methods to a more automated operational mode. Table 1 summarizes the data used in the first test, categorized by sample type, source, and content.
First Training Sample Conclusion: The source of image capture significantly impacts the learning ability of the recognition model.
After initial sample evaluation, it became clear that the reproductive traits of different fish vary significantly, and the number of machine learning samples affects accuracy. In addition to the recognition of sexual characteristics, other parameters can be adjusted to improve identification accuracy.
The genital characteristics of farmed fish are quite small, and when the fish are taken out of the breeding pond, bubbles or excrement often contaminate the environment. The visual recognition model frequently misidentifies bubbles or excrement as genital features, leading to incorrect identification results. Therefore, during the first testing phase, fish fin annotation was used for positioning, along with other parameter adjustments for image training, to compare differences in recognition capabilities. The preliminary test results are provided below:
The model was trained using Sample A, which consists of images captured by mobile device A, and tested on Sample B, which consists of images captured by mobile device B. The use of different mobile devices allows us to evaluate the model’s robustness across varying image qualities and conditions, ensuring that the model can generalize well to different capture environments.
In the first test, only Sample A was used for training, and different parameter settings were compared:
The error rate in identifying male fish was significantly higher than that for female fish. By using fin annotation for positioning, the impact of other contaminants was reduced, effectively decreasing the problem of gender misidentification. However, issues persisted, such as low confidence levels or failure to identify the gender at all. It can be inferred that even after positioning, male fish samples remained more challenging to identify. By annotating the fish fins, the recognition rate for male fish can be significantly improved, which is the most critical aspect of the model. This is illustrated in Figure 2.
The validation results for male fish, shown in Figure 3, indicate that under non-rotating conditions the error rate is higher, leading to recognition failures; the test results for female fish likewise exhibited low confidence levels. This suggests that applying angle rotation can enhance recognition accuracy. Likewise, enabling random scaling, as opposed to using fixed sizes, can improve recognition capability and reduce the likelihood of recognition failures. The rotation and scaling experiments for female fish showed similarly positive results; to avoid redundancy, the visualization of the female fish data is not included.
Different orientations and settings from various mobile devices result in varying levels of testing accuracy. When the model trained on Sample A was tested using Sample B, the recognition accuracy failed to meet the standard threshold, highlighting the need to improve the model’s flexibility and generalization.
For Sample A, the results from tests with a smaller sample size outperformed those with a larger sample size. Conversely, for Sample B, the tests with a larger sample size outperformed those with a smaller sample size. This indicates that different photographic environment parameters affect the results depending on the sample size, and various shooting methods and camera settings can lead to variable test outcomes. The imaging results from Sample A were more favorable for recognition.
For Sample B, the fixed-size-mode test results were superior to those from random scaling, which is in stark contrast to the results observed in Sample A. This demonstrates a completely different recognition outcome, as illustrated in Figure 4.
In the second test, a small amount of Sample B data was added as training material to enhance the model’s generalization. Please refer to Table 2 for details. After incorporating these additional samples, the model underwent separate validation and testing.
This table summarizes the data used in the second test, where a small portion of Sample B was included in the training set to enhance the model’s generalization. Validation and testing were performed using data from both Sample A and Sample B.
Although the training sample size for Sample B was relatively small, incorporating Sample B still improved recognition accuracy and reduced the number of recognition errors. This indicates that increasing the sample size would likely result in a significant enhancement of recognition capabilities. Under these conditions, the recognition accuracy for farmed fish reached a high level of 95.76%.
Female Fish Recognition Data:
  • Number of Photos: 495;
  • Recognition Failures: 0;
  • Low-Confidence Recognitions: 6;
  • Gender Misidentifications: 11.
Male Fish Recognition Data:
  • Number of Photos: 873;
  • Recognition Failures: 6;
  • Low-Confidence Recognitions: 24;
  • Gender Misidentifications: 11.
Refer to Figure 5 for a visual representation of these results.
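These counts reproduce the reported accuracies if recognition failures, low-confidence recognitions, and gender misidentifications are all treated as errors (an interpretation inferred from the figures rather than stated explicitly):
Female fish accuracy: (495 − 0 − 6 − 11)/495 = 478/495 ≈ 96.57% (Figure 5a);
Male fish accuracy: (873 − 6 − 24 − 11)/873 = 832/873 ≈ 95.30% (Figure 5b);
Overall accuracy: (478 + 832)/(495 + 873) = 1310/1368 ≈ 95.76%.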
In-depth data exploration reveals that the failure rate for female fish recognition is relatively low, with the majority of errors being related to gender misidentification.
Figure 6 illustrates the distribution of recognition rates when using the AI model to identify female fish. Figure 6a shows the distribution of high recognition rates. Across approximately 500 recognition events, the distribution of the AI model’s recognition rates for female fish is displayed. The horizontal axis represents the event number, and the vertical axis represents the recognition rate (percentage). It can be observed that most of the model’s recognition rates are concentrated in the high range (97.5% to 100%), indicating that the model is very accurate in recognizing female fish in most cases. However, there are some events with lower scores, indicating that the model’s performance is inconsistent in certain situations. Figure 6b shows the distribution of low-recognition events. The low recognition rates vary significantly, ranging from 45% to 75%. This suggests that while the model generally performs well, there are specific instances where its accuracy drops considerably.
Although the recognition failure rate for male fish is relatively higher, the overall recognition accuracy still exceeds 95%. Male fish are more likely to be misidentified as female due to the lower recognition accuracy. The related distribution can be seen in Figure 7.

5. Discussion

The evaluation of this study’s technology is primarily based on the specific application scenario, designed and tested to meet the practical needs of aquaculture. Given the significant differences in environmental conditions, datasets, and testing parameters across various studies, we believe that direct data comparisons may not be consistent. Therefore, we opted not to perform direct data comparisons, but instead focused on the system’s performance under variable lighting and complex environmental conditions. Future research may explore the feasibility of horizontal comparisons of technologies under similar conditions.
This study focuses on the application of smart aquaculture within the framework of smart cities, specifically investigating the potential of an AI-based intelligent fish fingerling separation system for use in aquaculture, particularly in broodfish gender identification and sorting. The findings indicate that, when the YOLO model is properly trained and optimized, it can achieve a recognition accuracy exceeding 95%, even under challenging conditions such as varying lighting and environments mixed with bubbles and excrement. This high level of accuracy demonstrates that the YOLO model is highly suitable for real-time applications in aquaculture. After selecting YOLO as the AI model, we began training with different samples, conducted in two phases. In the first phase, the model was trained using a single batch of images, with various parameter settings applied to test its effectiveness. In the second phase, additional sample training data were introduced to enhance generalization. Based on the validation results, the following conclusions were drawn, and parameters were adjusted accordingly:
A. Model accuracy varies significantly based on the recording device and environment: continuous training and learning are required for the model.
B. Annotation has a significant impact on model accuracy: the best results were obtained by annotating the fish fins, randomly scaling sizes, and avoiding rotation in the modules.
C. There are at least tens of thousands of variations in the shape of male and female fish genitalia.
D. Female fish are easier to identify than male fish.
E. The model’s generalization is insufficient:
  (a) The accuracy of AI identification varies greatly with different mobile devices.
  (b) Most of the failed identifications were due to genital shapes that were not present in the training data.
  (c) The sample size was insufficient.
F. Inconsistent photo formats: many photos did not capture the fish fins.
G. Some photos contained bubbles or excrement, leading to identification failures.
H. The identification accuracy of videos shot on the same day was low, mainly due to poor mobile shooting quality, which caused generalization issues.
I. We should improve model generalization and control for the impact of different shooting environments.
This is particularly important for large-scale fish farms, where traditional manual sorting is not only labor-intensive but also prone to human error. By automating the sorting process, this system significantly reduces reliance on skilled labor, addressing a major bottleneck in the industry. Additionally, the reduction in labor costs and the increase in efficiency brought by this system are expected to substantially improve the economic benefits for aquaculture enterprises.
This study also identified several challenges that require further improvement. One challenge is the model’s sensitivity to variations in image quality, particularly under different photographic equipment and environmental conditions. Our results suggest that enhancing the diversity of the training dataset, especially by including images captured under various conditions and with different devices, can improve the model’s generalization ability. Another challenge is the discrepancy in recognition rates between male and female fish, with female fish generally being easier to identify correctly. This may be due to the more distinct secondary sexual characteristics of female fish. To reduce this discrepancy, further optimization of the model and feature engineering are necessary. Additionally, the study shows that the sample size and the parameters used during training significantly impact the system’s performance. For instance, incorporating samples from different devices and applying data augmentation techniques (such as random scaling and rotation) can enhance the robustness of the model.
The results of this study provide several directions for future research:
  • First, expanding the dataset to cover a wider range of environmental conditions and fish species could help improve the system’s adaptability and accuracy.
  • Second, exploring the integration of other AI models, such as MobileNet or SSD frameworks, might help optimize the balance between model complexity, speed, and accuracy.
This study lays a solid foundation for the development of AI-driven intelligent systems in aquaculture, a technology that has the potential to transform traditional farming practices. By overcoming current challenges and continuously improving the model, this technology can promote the sustainable development of the aquaculture industry and ultimately contribute to global food security and the advancement of smart agriculture.
In this study, we observed that the recognition performance of the model declines when applied across different mobile devices or shooting environments, highlighting the challenge of generalization. This may be due to variations in camera parameters and lighting conditions across devices. The current research focused primarily on validating the model’s performance under specific conditions, and therefore did not explore solutions to these generalization issues. Future research will consider applying domain adaptation techniques and transfer learning methods to improve the model’s generalization capability. For instance, domain-invariant feature extraction could help reduce the model’s sensitivity to different devices and environments, while transfer learning could be used to fine-tune the model for data from specific devices, thereby enhancing the model’s robustness across various scenarios.
While this study primarily focused on the effectiveness of AI technology in aquaculture, we also recognize the potential issues that automation may pose for the labor market and the environment. Automation significantly reduces labor demand and increases production efficiency, but it may also impact traditional labor forces. We recommend providing skill development programs for technical personnel to reduce the negative effects on traditional aquaculture labor. In terms of environmental impact, AI technology enhances precision in aquaculture and lowers environmental burdens. However, the energy consumption and possible ecological effects of its implementation must also be evaluated. Future sustainability research can delve deeper into the environmental and ethical issues related to AI technology in aquaculture and propose feasible management strategies to ensure the long-term sustainability of its application.

6. Conclusions

The smart aquaculture solution developed in this study, centered around the YOLO-based AI model, has demonstrated exceptional performance in fish fingerling separation and identification, particularly in addressing the gender-sorting challenges encountered during aquaculture breeding processes. Compared to traditional manual recognition techniques, this system offers significant advantages in several areas. Traditional recognition technologies are typically applied to the identification of uniform and simple features in objects like standard industrial products. In contrast, the AI system developed in this study can analyze thousands of different reproductive feature patterns, making it more adaptable to handling complex and diverse biological characteristics.
In terms of environmental parameters, the AI model in this study can operate under various outdoor lighting conditions, such as sunlight, shadow, and low light, with an adjustable recognition distance, which enhances its adaptability. Traditional industrial recognition technologies usually operate under fixed indoor lighting and with a set recognition distance; thus, their performance is more limited when dealing with environmental changes. Additionally, the AI system can effectively handle irregular biological samples mixed with impurities and bubbles, whereas traditional technologies require clean surfaces and distinct features for accurate recognition. This capability allows the system developed in this study to maintain high recognition efficiency even under suboptimal environmental and sample conditions, whereas traditional techniques may experience significant performance degradation. In terms of recognition difficulty, the system developed in this study must manage a wide variety of reproductive features and unstable environmental factors, making the recognition process relatively challenging. However, the system’s ability to perform instant recognition at a rate of 12 fish per minute, with an accuracy rate exceeding 95%, reduces the workload of laborers in the aquaculture industry and improves sorting efficiency.
Introducing this system into the aquaculture management of smart cities not only enhances production efficiency and reduces labor costs but also addresses the issue of labor shortages. It has the potential to attract younger generations to the aquaculture industry, promoting sustainable development within the sector. Within the framework of smart cities, the application of such technologies contributes to achieving intelligent agricultural management, enhancing overall urban operational efficiency, and providing critical support in meeting future global food demands.

Author Contributions

Conceptualization, C.-Y.K. and I.-C.C.; methodology, C.-Y.K.; validation, C.-Y.K. and I.-C.C.; formal analysis, C.-Y.K. and I.-C.C.; investigation, C.-Y.K.; data curation, C.-Y.K. and I.-C.C.; writing—original draft preparation, C.-Y.K.; writing—review and editing, C.-Y.K. and I.-C.C.; visualization, C.-Y.K.; supervision, C.-Y.K. and I.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

Appreciation is extended to the Industrial Development Bureau, Ministry of Economic Affairs, Republic of China, for the support provided through the CITD project “Innovation and R&D for Traditional Industries Affected by the COVID-19 Pandemic”. This research outcome was jointly published with Vossic Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

Author I-Chih Chen was employed by the company Vossic Technology, New Taipei 235030, Taiwan. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Chiu, M.-C.; Yan, W.-M.; Bhat, S.A.; Huang, N.-F. Development of smart aquaculture farm management system using IoT and AI-based surrogate models. J. Agric. Food Res. 2022, 9, 100357. [Google Scholar] [CrossRef]
  2. Yves, I.; Madson, G.; Innocent, I.; Claude, H.; Maximillien, N.; Gedeon, B. IOT Monitoring Systems in Fish Farming Case Study: “University of Rwanda Fish Farming and Research Station (Ur-FFRs)”. Eur. J. Technol. 2023, 7, 43–61. [Google Scholar] [CrossRef]
  3. Muhammed, D.; Ahvar, E.; Ahvar, S.; Trocan, M.; Montpetit, M.-J.; Ehsani, R. Artificial Intelligence of Things (AIoT) for smart agriculture: A review of architectures, technologies and solutions. J. Netw. Comput. Appl. 2024, 228, 103905. [Google Scholar] [CrossRef]
  4. Vo, T.T.E.; Ko, H.; Huh, J.-H.; Kim, Y. Overview of Smart Aquaculture System: Focusing on Applications of Machine Learning and Computer Vision. Electronics 2021, 10, 2882. [Google Scholar] [CrossRef]
  5. Taha, M.F.; ElMasry, G.; Gouda, M.; Zhou, L.; Liang, N.; Abdalla, A.; Rousseau, D.; Qiu, Z. Recent Advances of Smart Systems and Internet of Things (IoT) for Aquaponics Automation: A Comprehensive Overview. Chemosensors 2022, 10, 303. [Google Scholar] [CrossRef]
  6. Asche, F.; Smith, M. Induced innovation in fisheries and aquaculture. Food Policy 2018, 76, 1–7. [Google Scholar] [CrossRef]
  7. Benetti, D.; Sardenberg, B.; Hoenig, R.; Welch, A.; Stieglitz, J.; Miralao, S.; Farkas, D.; Brown, P.; Jory, D. Cobia (Rachycentron canadum) hatchery-to-market aquaculture technology: Recent advances at the University of Miami Experimental Hatchery (UMEH). Rev. Bras. Zootec. 2010, 39, 60–67. [Google Scholar] [CrossRef]
  8. Huang, S.; Wu, Y.; Chen, K.; Zhang, X.; Zhao, J.; Luo, Q.; Liu, H.; Wang, F.; Li, K.; Fei, S.; et al. Gene Expression and Epigenetic Modification of Aromatase during Sex Reversal and Gonadal Development in Blotched Snakehead (Channa maculata). Fishes 2023, 8, 129. [Google Scholar] [CrossRef]
  9. El-Greisy, Z.A.; El-Gamal, A.E. Monosex production of tilapia, Oreochromis niloticus using different doses of 17α-methyltestosterone with respect to the degree of sex stability after one year of treatment. Egypt. J. Aquat. Res. 2012, 38, 59–66. [Google Scholar] [CrossRef]
  10. Voorhees, J.M.; Mamer, E.R.J.M.; Schill, D.J.; Adams, M.; Martinez, C.; Barnes, M.E. 17β-Estradiol Can Induce Sex Reversal in Brown Trout. Fishes 2023, 8, 103. [Google Scholar] [CrossRef]
  11. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Computer Vision—ECCV 2016. ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9905. [Google Scholar] [CrossRef]
  12. Zhou, S.; Qiu, J. Enhanced SSD with interactive multi-scale attention features for object detection. Multimed. Tools Appl. 2021, 80, 11539–11556. [Google Scholar] [CrossRef]
  13. Almudawi, N.; Ansar, H.; Alazeb, A.; Aljuaid, H.; Alqahtani, Y.; Algarni, A.; Jalal, A.; Liu, H. Innovative healthcare solutions: Robust hand gesture recognition of daily life routines using 1D CNN. Front. Bioeng. Biotechnol. 2024, 12, 1401803. [Google Scholar] [CrossRef]
  14. Souaidi, M.; Lafraxo, S.; Kerkaou, Z.; El Ansari, M.; Koutti, L. A Multiscale Polyp Detection Approach for GI Tract Images Based on Improved DenseNet and Single-Shot Multibox Detector. Diagnostics 2023, 13, 733. [Google Scholar] [CrossRef]
  15. Dong, K.; Zhou, C.; Yihan, R.; Li, Y. MobileNetV2 Model for Image Classification. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application (ITCA), Guangzhou, China, 18–20 December 2020; pp. 476–480. [Google Scholar] [CrossRef]
  16. Elfatimi, E.; Eryiğit, R.; Shehu, H.A. Impact of datasets on the effectiveness of MobileNet for beans leaf disease detection. Neural Comput. Appl. 2024, 36, 1773–1789. [Google Scholar] [CrossRef]
  17. Huynh, V.T.; Nguyen, T.T.; Nguyen, T.V.; Tran, M.T. MobileNet-SA: Lightweight CNN with Self Attention for Sketch Classification. In Image and Video Technology. PSIVT 2023; Yan, W.Q., Nguyen, M., Nand, P., Li, X., Eds.; Lecture Notes in Computer Science; Springer: Singapore, 2024; Volume 14403. [Google Scholar] [CrossRef]
  18. Aboulola, O. Improving traffic accident severity prediction using MobileNet transfer learning model and SHAP XAI technique. PLoS ONE 2024, 19, e0300640. [Google Scholar] [CrossRef]
  19. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  20. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946. [Google Scholar]
  21. Oguine, K.; Oguine, O.; Bisallah, H. YOLO v3: Visual and Real-Time Object Detection Model for Smart Surveillance Systems(3s). In Proceedings of the 2022 5th Information Technology for Education and Development (ITED), Abuja, Nigeria, 1–3 November 2022; pp. 1–8. [Google Scholar] [CrossRef]
  22. Flores-Calero, M.; Astudillo, C.A.; Guevara, D.; Maza, J.; Lita, B.S.; Defaz, B.; Ante, J.S.; Zabala-Blanco, D.; Armingol Moreno, J.M. Traffic Sign Detection and Recognition Using YOLO Object Detection Algorithm: A Systematic Review. Mathematics 2024, 12, 297. [Google Scholar] [CrossRef]
  23. Jia, L.; Wang, T.; Chen, Y.; Zang, Y.; Li, X.; Shi, H.; Gao, L. MobileNet-CA-YOLO: An Improved YOLOv7 Based on the MobileNetV3 and Attention Mechanism for Rice Pests and Diseases Detection. Agriculture 2023, 13, 1285. [Google Scholar] [CrossRef]
  24. Magalhães, S.A.; Castro, L.; Moreira, G.; dos Santos, F.N.; Cunha, M.; Dias, J.; Moreira, A.P. Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse. Sensors 2021, 21, 3569. [Google Scholar] [CrossRef]
  25. Khan, D.; Waqas, M.; Tahir, M.; Islam, S.; Amin, M.; Ishtiaq, A.; Jan, L.; Latif, J. Revolutionizing Real-Time Object Detection: YOLO and MobileNet SSD Integration. J. Comput. Biomed. Inform. 2023, 6, 41–49. [Google Scholar] [CrossRef]
  26. Khoa Tran, N.D.; Tran Pham, A.K. Comparative Analysis of Image Processing Object Detection Models: SSD MobileNet and YOLO for Guava Application. In Proceedings of the International Conference on Sustainable Energy Technologies. ICSET 2023, Ho Chi Minh City, Vietnam, 10–11 November 2023; Todor, D., Kumar, S., Choi, S.B., Nguyen-Xuan, H., Nguyen, Q.H., Trung Bui, T., Eds.; Green Energy and Technology; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
  27. Sepasgozar, S.; Karimi, R.; Farahzadi, L.; Moezzi, F.; Shirowzhan, S.; Ebrahimzadeh, S.M.; Hui, F.; Aye, L. A Systematic Content Review of Artificial Intelligence and the Internet of Things Applications in Smart Home. Appl. Sci. 2020, 10, 3074. [Google Scholar] [CrossRef]
  28. Yang, G.; Wang, J.; Nie, Z.; Yang, H.; Yu, S. A Lightweight YOLOv8 Tomato Detection Algorithm Combining Feature Enhancement and Attention. Agronomy 2023, 13, 1824. [Google Scholar] [CrossRef]
  29. Pise, A.; Yoon, B.; Singh, S. Enabling Ambient Intelligence of Things (AIoT) healthcare system architectures. Comput. Commun. 2023, 198, 186–194. [Google Scholar] [CrossRef]
  30. Vasconez, J.P.; Delpiano, J.; Vougioukas, S.; Auat Cheein, F. Comparison of convolutional neural networks in fruit detection and counting: A comprehensive evaluation. Comput. Electron. Agric. 2020, 173, 105348. [Google Scholar] [CrossRef]
  31. Adhitya, Y.; Prakosa, S.W.; Köppen, M.; Leu, J.-S. Feature Extraction for Cocoa Bean Digital Image Classification Prediction for Smart Farming Application. Agronomy 2020, 10, 1642. [Google Scholar] [CrossRef]
  32. Ali, B.; Ullah, I.; Khan, I. ICS-IDS: Application of big data analysis in AI-based intrusion detection systems to identify cyberattacks in ICS networks. J. Supercomput. 2023, 80, 7876–7905. [Google Scholar] [CrossRef]
  33. Haq, I.; Ullah, I. Analysis of IoT Security Challenges and Its Solutions Using Artificial Intelligence. Brain Sci. 2023, 13, 683. [Google Scholar] [CrossRef]
Figure 1. Research methodology framework.
Figure 2. Results for Sample A with and without fish fin annotation: (a) Sample A male fish test, no fin annotation; (b) Sample A female fish test, no fin annotation; (c) Sample A male fish test, with fin annotation; (d) Sample A female fish test, with fin annotation.
Figure 3. Comparison results for Sample A male fish: no fin annotation, angle rotation, and random scaling: (a) Sample A male fish test, random 180° rotation; (b) Sample A male fish test, random scaling; (c) Sample A male fish test, no rotation; (d) Sample A male fish test, fixed size.
Figure 4. Test results for Sample B imaging: (a) male fish test with fixed image size; (b) female fish test with fixed image size; (c) male fish test with random image scaling; (d) female fish test with random image scaling.
Figure 5. Overall sample recognition accuracy: (a) female fish recognition accuracy: 96.57%; (b) male fish recognition accuracy: 95.30%.
Figure 6. Female fish recognition rate distribution charts: (a) female fish high-recognition-rate distribution; (b) female fish low-recognition-rate distribution.
Figure 7. Male fish recognition rate distribution charts: (a) male fish high-recognition-rate distribution; (b) male fish low-recognition-rate distribution.
Table 1. First test sample data.

Sample Type       | Sample Source | Content
Training Sample   | A Phone       | Male Fish: 133 images; Female Fish: 119 images
Validation Sample | A Phone       | Male Fish: 21 images; Female Fish: 21 images
Test Sample       | A Phone       | Male Fish: 20 images; Female Fish: 20 images
Table 2. Sample data for the second test.

Sample Type       | Sample Source | Sample Content
Training Sample   | A Phone       | Male Fish: 133 images; Female Fish: 119 images
                  | B Phone       | Male Fish: 26 images; Female Fish: 36 images
Validation Sample | A Phone       | Male Fish: 21 images; Female Fish: 21 images
                  | B Phone       | Male Fish: 6 images; Female Fish: 9 images
Test Sample       | A Phone       | Male Fish: 20 images; Female Fish: 20 images
                  | B Phone       | Male Fish: 20 images; Female Fish: 20 images

