Review

Deep Learning for Sustainable Agriculture: A Systematic Review on Applications in Lettuce Cultivation

1 International College Beijing, China Agricultural University, 17 Qinghua East Road, Haidian, Beijing 100083, China
2 College of Engineering, China Agricultural University, 17 Qinghua East Road, Haidian, Beijing 100083, China
3 School of Integrated Circuits, Guangdong University of Technology, Guangzhou 510006, China
4 National Innovation Center for Digital Fishery, China Agricultural University, Beijing 100083, China
5 College of Information and Electrical Engineering, China Agricultural University, 17 Qinghua East Road, Haidian, Beijing 100083, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Sustainability 2025, 17(7), 3190; https://doi.org/10.3390/su17073190
Submission received: 17 March 2025 / Revised: 2 April 2025 / Accepted: 3 April 2025 / Published: 3 April 2025
(This article belongs to the Section Sustainable Agriculture)

Abstract
Lettuce, a vital economic crop, benefits significantly from intelligent advancements in its production, which are crucial for sustainable agriculture. Deep learning, a core technology in smart agriculture, has revolutionized the lettuce industry through powerful computer vision techniques like convolutional neural networks (CNNs) and YOLO-based models. This review systematically examines deep learning applications in lettuce production, including pest and disease diagnosis, precision spraying, pesticide residue detection, crop condition monitoring, growth stage classification, yield prediction, weed management, and irrigation and fertilization management. Despite these contributions, several critical challenges persist, including limited model generalizability in dynamic settings, high computational requirements, and a shortage of carefully annotated datasets. Addressing these challenges is essential for improving the efficiency, adaptability, and sustainability of deep learning-driven solutions in lettuce production. By enhancing resource efficiency, reducing chemical inputs, and optimizing cultivation practices, deep learning contributes to the broader goal of sustainable agriculture. This review explores research progress, optimization strategies, and future directions to strengthen deep learning’s role in fostering intelligent and sustainable lettuce farming.

1. Introduction

Lettuce is one of the most widely cultivated leafy vegetables globally, with a long history of cultivation dating back to ancient Egypt [1]. It is rich in dietary fiber, iron, and vitamin C, contributing to human health, while its bioactive compounds have been shown to possess anti-inflammatory, cholesterol-lowering, and anti-diabetic properties [2], enhancing its value in functional food and medicinal applications [3]. In recent years, the global demand for lettuce has steadily increased due to its short growth cycle, high yield, and nutritional benefits [4], establishing it as a crop of significant economic importance [5]. Over the past few decades, global lettuce production has shown an overall upward trend, with its distribution and growth across continents illustrated in Figure 1.
As illustrated in Figure 1, global lettuce production from 1993 to 2022 exhibited cyclical fluctuations superimposed on an overall growth trajectory. The data reveal three distinct phases: (1) rapid expansion (1993–2000) with production surging from 15 to 35 million metric tons; (2) a consolidation period (2000–2010) featuring volatility (35→25→35 million tons); and (3) renewed growth post-2010, albeit at a moderated pace. Regionally, Asia emerged as the dominant producer after 2005, contributing 47% of global output by 2022, while North America maintained stable production at 28–30% share. Notably, European production declined from 31% to 18% of the global total during this period, likely due to shifting agricultural priorities. It can be concluded that lettuce has become a crop of great socio-economic importance.
With the continuous growth of the global population and the increasing scarcity of natural resources, agricultural production faces unprecedented sustainability challenges, which, to some extent, impact global food security and efforts to alleviate hunger. As an essential vegetable in human diets [5], lettuce production is influenced by various environmental and biological factors, posing significant challenges to both yield and quality. Pest infestations are a major threat to lettuce growth. Insects such as aphids, thrips, and beet armyworms [6,7] cause direct damage to plants, reducing their quality and yield. Additionally, pathogen infections further exacerbate crop losses, with common diseases including downy mildew, lettuce mosaic virus, and tomato spotted wilt virus [8,9,10], which can lead to widespread plant decline or even total crop failure. Beyond biological factors, cultivation management and environmental conditions play a critical role in lettuce growth. Improper fertilization can lead to nutrient imbalances, affecting growth rates and quality [11]; inadequate light regulation may reduce photosynthetic efficiency, thereby limiting biomass accumulation [12]; and fluctuations in temperature and humidity [13] can intensify physiological stress, weakening the plant’s resilience to adverse conditions. To enhance lettuce production efficiency and promote sustainable agriculture, it is imperative to implement scientific agricultural management strategies and optimize resource utilization. These approaches will enable more efficient lettuce cultivation while minimizing environmental impact and ensuring the long-term stability of agricultural production.
However, as agricultural production scales up and the demand for precision agriculture increases, traditional manual methods and conventional automation technologies are becoming increasingly inadequate for handling the complexities of modern farming. These limitations hinder efficiency and sustainability in agricultural production [14]. Against this backdrop, the rapid advancement of deep learning has provided new momentum for sustainable agriculture.
In summary, the integration of deep learning technology offers innovative solutions for the sustainable development of lettuce production. It not only enhances production efficiency and optimizes resource utilization but also mitigates environmental pollution, thereby contributing to the stability of agricultural ecosystems. This review systematically examines the applications of deep learning technology in lettuce cultivation, with the content structured as follows: Section 2 introduces the fundamental concepts of deep learning, categorizes its primary techniques, and discusses potential challenges in its application to lettuce production. Section 3 provides a comprehensive review of deep learning technology applications across various aspects of lettuce farming, including pest and disease control, crop monitoring, and field management. Section 4 explores the advantages of deep learning methods in lettuce production while addressing existing technical and practical challenges, along with future development directions. Finally, Section 5 summarizes the key findings and discusses the broader prospects of deep learning in advancing sustainable agriculture.

2. Overview of Deep Learning

2.1. Overview of Deep Learning Techniques

In recent years, deep learning has demonstrated exceptional performance across various domains [15,16,17,18] and has been progressively integrated into different aspects of agricultural production, significantly enhancing precision and intelligence in farm management. Specifically, leveraging its powerful data processing and pattern recognition capabilities, deep learning has achieved remarkable results in multiple agricultural applications. These include pest and disease detection [19,20], precise weed identification and classification [5,21,22], crop recognition and localization [23], soil quality assessment [24], crop yield prediction [25], climate and natural disaster forecasting [26,27], and the development of intelligent irrigation systems [28].
Moreover, deep learning has played a crucial role in various crop production processes. For instance, Mathew et al. [29] deployed the DSC-TransNet model on a handheld GPU for classifying plant leaf diseases and pests. Ali et al. [30] integrated RNN and LSTM algorithms to identify drought stress in crops. Liu et al. [31] combined a computer vision system (CVS) with a deep neural network to enable rapid identification of chrysanthemum tea. Huo et al. [32] utilized images synthesized with StyleGAN3 and a Vision Transformer to classify the growth stages of tomato plants. Han et al. [33] proposed a model named CAEDLSTM, successfully achieving automated soil moisture prediction. These advancements have significantly contributed to agricultural production by mitigating pest and disease outbreaks, reducing pesticide and fertilizer usage, and optimizing water resource management. Consequently, they have effectively promoted sustainable agriculture while maintaining ecological balance. Therefore, further optimization and broader adoption of deep learning technologies are essential for enhancing agricultural productivity, minimizing environmental impact, and advancing the development of intelligent farming systems.
The successful application of deep learning in other crop production systems suggests its immense potential in lettuce cultivation. For lettuce image recognition tasks, deep learning models, with their strong nonlinear fitting capabilities, can effectively process complex phenotypic features while enhancing recognition accuracy and stability [34]. For instance, in lettuce growth monitoring and yield prediction, researchers have employed linear motion rails and cameras to automatically capture top-view images of lettuce. These images are then processed using deep learning models such as CNNs and multilayer perceptrons (MLPs) for feature extraction [35]. This approach enables precise analysis of lettuce growth conditions and yield estimation, providing crucial insights for precision agricultural management. Moreover, deep learning can be deeply integrated with hardware systems to facilitate automated lettuce sorting. Specifically, the DeepLabv3+ model can be utilized for crop image segmentation and feature extraction, with the extracted information processed by a control system to generate operation commands, thereby enabling robotic arms to execute precise sorting tasks [36]. The introduction of this automated process reduces human intervention, lowers labor costs, and optimizes lettuce supply chain management, ultimately improving agricultural productivity. As deep learning continues to advance, its applications in lettuce cultivation and management are expected to expand further, driving intelligent agriculture toward greater efficiency and sustainability.
Since Guo et al. [37] achieved a breakthrough in large-scale image classification using deep learning in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), deep learning has rapidly become a focal point of artificial intelligence research. It has led to significant advancements across various domains, including computer vision, natural language processing, and bioinformatics. At its core, deep learning leverages multi-layer neural network architectures to automatically learn data features. The term “deep” refers to the presence of multiple hidden layers that progressively extract higher-level feature representations from raw data. Compared to traditional machine learning methods, deep learning enables hierarchical feature extraction, modeling information in a progressive manner—from low-level edges and textures to high-level object categories and semantic concepts. By applying nonlinear transformations, deep learning maps input data into complex feature spaces and, through large-scale data training and computational power, learns and optimizes patterns automatically [14,38]. This powerful feature extraction and modeling capability has opened vast opportunities for deep learning in agricultural production, particularly in tasks such as crop identification, pest and disease detection, and yield prediction.
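The layer-by-layer nonlinear mapping described above can be illustrated with a minimal NumPy sketch of a multilayer perceptron forward pass; the layer sizes and random weights below are arbitrary assumptions for demonstration, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Pass inputs through successive hidden layers, each applying a
    linear map followed by a nonlinear activation, so that later layers
    operate on progressively higher-level feature representations."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                    # hidden-layer features
    return h @ weights[-1] + biases[-1]        # final linear read-out

# Toy network: 8 input features -> two hidden layers of 16 -> 3 classes
sizes = [8, 16, 16, 3]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.standard_normal((4, 8))                # batch of 4 samples
logits = forward(x, weights, biases)
print(logits.shape)                            # (4, 3)
```

In a real network, the weights would be learned from data by gradient descent; the sketch only shows how stacked nonlinear transformations map raw inputs into a task-specific output space.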
In the agricultural sector, deep learning technology has been widely applied in crop growth monitoring, pest and disease detection, environmental factor prediction, and intelligent irrigation. It has played a crucial role in enhancing agricultural productivity, optimizing resource utilization, and promoting sustainable development. However, despite its great potential, the application of deep learning in agriculture still faces numerous challenges and limitations. A systematic exploration of deep learning applications in agriculture reveals several obstacles and challenges in achieving sustainable lettuce production, including the following:
  • High energy consumption and carbon emissions: In recent years, the training scale of deep learning models (e.g., Transformer, BERT, GPT) has grown exponentially, leading to a sharp increase in computational costs and energy consumption. This trend accelerates the depletion of non-renewable energy sources and poses a threat to future energy security. For instance, training a BERT model generates approximately 1438 pounds of CO2 emissions, equivalent to the carbon footprint of a car traveling 1000 miles [39]. These emissions contribute to air pollution and global warming, contradicting the principles of sustainable agriculture. To achieve sustainable applications of deep learning in agriculture, model architectures and training strategies need to be optimized to reduce computational energy consumption and minimize environmental impact. Lightweight networks (e.g., MobileNet, EfficientNet) can reduce computational complexity and enable efficient operation on low-computing-power devices. Knowledge distillation techniques utilize large models to guide the learning of small models, reducing resource requirements while maintaining performance. Model pruning reduces computation by removing redundant parameters and increases inference speed. Combined with low-power hardware (e.g., Raspberry Pi, NVIDIA Jetson), efficient local computation can be realized in field environments, reducing the energy consumed by cloud transmission. Future research should combine these approaches to promote energy-efficient applications of deep learning in agriculture and improve its sustainability and scalability.
  • Computational resource demands and accessibility issues: The training of deep learning models heavily relies on high-performance computing resources such as GPUs and TPUs. However, many agricultural research institutions and farmers, particularly those in developing countries, struggle to afford such costly hardware. Additionally, the centralization of computing resources (e.g., cloud-based data centers) may further exacerbate regional disparities in agricultural technology development, allowing technologically advanced regions to dominate while limiting access for underdeveloped areas. To address the challenges of deep learning applications in resource-constrained environments, advances in algorithms, model optimization, and computational architectures are needed. Developing algorithms with low computational cost reduces complexity and improves adaptability, for example through efficient network structures based on sparse representations or self-attention mechanisms. Model parameter optimization involves not only pruning and quantization but can also be combined with dynamic computation mechanisms that allow the model to adjust its computational budget according to task demands. Edge computing can be combined with distributed inference to allocate computational tasks across multiple devices and achieve load balancing, while adaptive data sampling and transmission strategies reduce bandwidth requirements and improve real-time response capabilities in agricultural environments.
  • Risk of declining crop diversity: The stability of agricultural ecosystems largely depends on biodiversity. Studies have shown that biodiversity can enhance lettuce productivity through the complementary effect and reduce disease transmission risks via the dilution effect [40]. However, deep learning-driven agricultural practices may prioritize optimizing high-yield or high-profit cultivars while neglecting crop diversity, thus promoting monoculture farming. This practice can reduce the resilience of farmland ecosystems, making crops more susceptible to diseases and extreme climate conditions, ultimately threatening agricultural sustainability. Therefore, when applying deep learning to optimize lettuce production, it is crucial to consider ecosystem stability and incorporate intelligent optimization techniques, such as multi-objective optimization algorithms, to encourage the cultivation of diverse lettuce varieties, thereby enhancing long-term sustainability and resilience in agricultural systems.
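The model pruning strategy mentioned above can be made concrete with a minimal NumPy sketch of unstructured magnitude pruning, which zeroes out the weights with the smallest absolute values; the 90% sparsity level and layer size are illustrative assumptions, not values from the cited literature:

```python
import numpy as np

def prune_by_magnitude(W, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute values: a simple unstructured magnitude-pruning heuristic."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    mask = np.abs(W) > threshold
    return W * mask

rng = np.random.default_rng(42)
W = rng.standard_normal((64, 64))          # one dense layer's weights
W_pruned = prune_by_magnitude(W, sparsity=0.9)

kept = np.count_nonzero(W_pruned) / W.size
print(f"fraction of weights kept: {kept:.2f}")   # roughly 0.10
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and the resulting sparse weights only reduce energy use on hardware or runtimes that exploit sparsity.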

2.2. Overview of Common Deep Learning Methods in Agriculture

Based on different learning paradigms, this study adopts a taxonomy-based classification to categorize deep learning techniques applied in sustainable agriculture into three main types: supervised or discriminative learning, unsupervised or generative learning, and hybrid learning [41]. Each type of deep learning method exhibits varying potential in agricultural production, as summarized in Table 1.

2.2.1. Discriminative Learning

Discriminative learning is primarily used for supervised learning tasks, where the core objective is to establish a mapping relationship between input data and corresponding labels for classification, regression, or sequence prediction. This learning paradigm focuses on learning decision boundaries within data to optimize model prediction performance. In recent years, discriminative learning methods have been widely applied in agriculture, playing a significant role in crop health monitoring, lettuce growth stage recognition, pest and disease detection, and environmental parameter prediction. Common discriminative learning methods include:
  • Convolutional Neural Networks (CNN) [42]: CNNs are widely utilized in image classification, object detection, and semantic segmentation due to their superior feature extraction capabilities. In agricultural production, CNNs have been extensively applied to crop health monitoring, lettuce growth stage recognition, pest and disease detection, and plant phenotyping [43,44,45].
  • Recurrent Neural Networks (RNN) and their variants (Long Short-Term Memory, LSTM; Gated Recurrent Unit, GRU) [46]: RNNs are well-suited for handling sequential data, such as environmental parameter prediction and meteorological condition analysis. However, traditional RNNs suffer from the vanishing gradient problem, limiting their ability to model long-term dependencies. To address this, LSTM and GRU were introduced to enhance long-range dependency learning. For example, LSTM can be used to predict temperature, humidity, and light variation in lettuce cultivation environments, aiding in optimized cultivation management strategies.
  • Deep Neural Networks (DNN) [47]: DNNs, characterized by their hierarchical structure and strong representational capabilities, can automatically extract complex features and process high-dimensional nonlinear data. Their advantages include powerful learning ability, good generalization performance, and adaptability to large-scale data, making them widely applicable in image recognition, speech processing, and intelligent control [48]. Through end-to-end learning and integration with various techniques, DNNs can be applied in agriculture for automated decision-making tasks such as soil quality assessment and crop yield prediction.
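The feature-extraction operation at the heart of a CNN can be sketched as a plain "valid" cross-correlation; the Sobel kernel and the synthetic boundary image below are illustrative stand-ins for a learned filter and a real leaf image:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' cross-correlation: slide the kernel over the image
    and sum the elementwise products at each position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Vertical-edge detector applied to a synthetic dark/light boundary,
# a toy analogue of a leaf edge in an image
image = np.zeros((8, 8))
image[:, 4:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d_valid(image, sobel_x)
print(response.shape)        # (6, 6)
print(response[0, 2])        # strong response (4.0) at the boundary
```

A CNN learns many such kernels from data and stacks convolution layers with nonlinearities and pooling, but each layer's core computation is this sliding-window operation.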

2.2.2. Generative Learning

Generative learning focuses on modeling the underlying distribution of data to generate new samples or augment datasets, thereby improving the generalization ability of deep learning models [49]. Unlike discriminative learning, generative learning not only learns data features but also creates new instances, making it particularly suitable for agricultural applications with limited data availability. The following are common generative learning methods and their applications in agriculture:
  • Generative Adversarial Networks (GAN) [50]: GANs consist of a generator and a discriminator, which compete during training to produce high-quality synthetic data. In agriculture, GANs can be used to generate synthetic crop disease images, enhancing datasets and improving the robustness and generalization capability of pest and disease identification models.
  • Variational Autoencoders (VAE) [51]: VAEs generate new data through probabilistic modeling and are commonly applied in crop phenotyping data augmentation. For instance, in lettuce phenotype research, VAEs can generate virtual images of different growth stages, thereby improving the generalization ability of deep learning models.
  • Deep Belief Networks (DBN) [52] and Restricted Boltzmann Machine (RBM) [53]: These methods are primarily used for unsupervised feature learning and have potential applications in soil composition analysis and unsupervised crop disease classification.
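The sampling step that makes VAE training differentiable, known as the reparameterization trick, can be sketched in a few lines of NumPy; the batch size and latent dimensionality below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: sample z = mu + sigma * eps with
    eps ~ N(0, I), so the sampling step stays differentiable with
    respect to the encoder outputs (mu, log_var)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Pretend an encoder mapped a batch of 5 images to a 2-D latent space
mu = np.zeros((5, 2))
log_var = np.zeros((5, 2))     # i.e., sigma = 1

z = reparameterize(mu, log_var)
print(z.shape)                 # (5, 2)

# KL divergence of N(mu, sigma^2) from the standard normal prior;
# zero here because the posterior already matches the prior
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
print(kl)                      # 0.0
```

During training, the KL term regularizes the latent space while a reconstruction loss drives the decoder; new synthetic samples are then generated by decoding draws from the prior.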

2.2.3. Hybrid Learning

Hybrid learning integrates the strengths of supervised learning and unsupervised learning to enhance model learning capabilities, improve data utilization efficiency, and boost generalization performance. In agriculture, hybrid learning methods are particularly useful for multimodal data fusion, feature learning and optimization, and complex task modeling, making them well-suited for lettuce growth monitoring, pest and disease detection, and yield prediction. The following are representative hybrid learning methods and their applications in agriculture:
  • CNN + LSTM hybrid model: CNN extracts image features, while LSTM processes temporal sequences. In lettuce production, this approach enables the integration of image data and environmental sensor data for growth monitoring and yield prediction.
  • Autoencoder (AE) + CNN: The autoencoder performs dimensionality reduction and feature extraction, while CNN handles classification tasks. For instance, in autonomous farm management, this method can be applied in intelligent monitoring systems for real-time surveillance of lettuce cultivation areas.
  • Transformer architecture: In recent years, Transformer models, based on self-attention mechanisms, have demonstrated outstanding performance in computer vision tasks. For example, Vision Transformer has been applied in lettuce growth stage recognition and pest and disease diagnosis [20], providing higher classification accuracy.
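The self-attention operation underlying Transformer models can be sketched directly from its defining formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; the sequence length and dimensions below are arbitrary (e.g., six image patches from a lettuce photo in a Vision Transformer setting):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token
    sequence X: each output token is a weighted mix of all values."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
seq_len, d_model, d_k = 6, 16, 8
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))

out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape)                              # (6, 8)
print(attn.sum(axis=1))                       # all ones
```

Full Transformers add multiple heads, residual connections, layer normalization, and position information, but this weighted mixing of all tokens is what lets the model relate distant image regions in a single layer.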

3. The Applications of Deep Learning in Lettuce Cultivation

To systematically review the applications of deep learning techniques in lettuce cultivation, this review adopts the following categorization:
  • Pest and disease control:
    (a) Pest and disease diagnosis;
    (b) Precision spraying;
    (c) Pesticide residue detection.
  • Crop monitoring:
    (a) Condition monitoring;
    (b) Classification of growth stages;
    (c) Yield prediction.
  • Field management:
    (a) Weed management;
    (b) Irrigation and fertilization management.
A total of three application-type parent classes and eight application-type subclasses are elaborated. The application classification of deep learning techniques in lettuce cultivation is shown in Figure 2.

3.1. Pest and Disease Control

3.1.1. Pest and Disease Diagnosis

Traditional machine learning methods for plant pest and disease detection typically rely on handcrafted features, such as color, texture, and shape. However, these manually designed features often lack stability in complex environments and under varying disease characteristics, limiting model generalization and making it difficult to adapt to disease variations across different cultivation settings. As a result, traditional approaches struggle to achieve efficient and accurate disease detection, particularly in scenarios with diverse disease types, unstable lighting conditions, and varying leaf morphologies. In contrast, deep learning enables automatic feature learning, extracting multi-level feature representations directly from data, thereby handling large-scale and high-dimensional agricultural images more effectively. In terms of recognition accuracy, deep learning models learn more discriminative features, reducing the dependence on handcrafted feature engineering and significantly improving pest and disease detection accuracy. Additionally, their strong adaptability allows them to better handle variations in lighting conditions, camera angles, and leaf orientations, ensuring stable detection performance across diverse environments. Moreover, deep learning enhances model robustness through techniques such as data augmentation, transfer learning, and adversarial training, enabling better resilience to noise, occlusion, and background variations in real-world agricultural settings [20]. These advantages make deep learning particularly effective in improving recognition accuracy, enhancing model generalization, and ensuring stable performance in complex environments for plant pest and disease detection. 
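A minimal sketch of the geometric data augmentation mentioned above is shown below, using flips and 90-degree rotations on a synthetic single-channel patch; real pipelines would typically also vary color, scale, brightness, and noise:

```python
import numpy as np

def augment(image):
    """Generate simple geometric variants of one labeled image:
    horizontal and vertical flips plus 90/180/270-degree rotations.
    Such variants expose a model to different camera angles and leaf
    orientations without collecting new field images."""
    variants = [image,
                np.fliplr(image),      # horizontal flip
                np.flipud(image)]      # vertical flip
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

# Synthetic 4x4 "leaf patch" stand-in for a real training image
image = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(image)
print(len(augmented))                  # 6 variants from one image
```

Each variant inherits the original label, so the effective training set grows severalfold at negligible cost, which is one reason augmentation improves robustness to viewpoint changes.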
Figure 3 illustrates examples of lettuce pest and disease samples, showcasing variations in morphology, color, and leaf tissue damage severity, further highlighting the importance and application value of deep learning in automated pest and disease detection [54,55,56,57].
Lettuce tip-burn stress is a physiological disorder that severely affects lettuce quality, characterized by leaf edge scorch and necrosis, which, in severe cases, can lead to significant yield loss [58]. To enhance the detection and prediction of this condition, deep learning has made significant progress in lettuce disease diagnosis in recent years. Hamidon et al. [54] compared three mainstream single-stage object detection models—CenterNet, YOLOv4, and YOLOv5—for tip-burn detection. Their results demonstrated that YOLOv5 achieved the best performance on the training dataset, effectively localizing tip-burn-affected areas in lettuce leaves. Additionally, by optimizing the dataset and model parameters, the study further improved the effectiveness of the proposed method for detecting tip-burn stress in lettuce.
In addition to object detection methods, hyperspectral imaging technology has been employed for the early prediction of lettuce diseases. Ban et al. [59] integrated hyperspectral imaging with partial least squares regression (PLS), random forest (RF), and convolutional neural network (CNN) algorithms to construct a regression model for predicting downy mildew in lettuce. Among these models, CNN achieved the highest coefficient of determination (R2) at 0.963, outperforming PLS (0.857) and RF (0.910), demonstrating the superior performance of deep learning in disease prediction. For cloud-based agricultural diagnostic systems, Abbasi et al. [60] utilized pretrained ResNet-50 and YOLOv5s for disease detection, effectively reducing training time and enhancing model performance. Additionally, they developed a cloud-based crop diagnostic system, achieving a disease identification accuracy of 95.83%. Furthermore, Ali et al. [61] integrated CNN, VGG16, and MobileNet models, leveraging image processing techniques for the automated identification of four vegetable diseases. Among these, lettuce disease identification achieved the highest accuracy, with the CNN model reaching 100% accuracy, further validating the effectiveness of deep learning in lettuce disease classification.
For automated pest and disease detection and classification in lettuce cultivation, Barcenilla et al. [62] employed a CNN-based image recognition model to automatically identify and classify various lettuce pests and diseases. The model was trained and evaluated using a 10-fold cross-validation approach, ensuring robust performance assessment. Their experimental results demonstrated that the model achieved an accuracy of 95.72%, with a precision of 97.03%, a recall of 95.12%, and an F1-score of 95.84%. These metrics indicate the model’s strong capability to distinguish between different pest and disease categories, thereby providing a reliable tool for early detection and management in smart farming systems. Building on advancements in deep learning, Wang et al. [63] proposed the YOLO-EfficientNet method, which combines the real-time object detection capabilities of YOLOv8n with the high-accuracy classification performance of EfficientNet-v2s for hydroponic lettuce disease identification. Their experimental setup involved training YOLOv8n to detect diseased lettuce regions, followed by EfficientNet-v2s performing fine-grained classification on the detected areas. The results showed that YOLOv8n achieved an object detection accuracy approaching 99%, effectively localizing disease-affected regions with high confidence. Meanwhile, EfficientNet-v2s attained a validation accuracy of 95.78% and an F1-score of 96.18%, demonstrating its ability to accurately classify different disease types. These findings highlight the potential of integrating detection and classification models to enhance automated disease diagnosis in controlled-environment agriculture.
These studies clearly demonstrate the extensive application potential of deep learning in lettuce pest and disease detection. Deep learning not only improves disease identification accuracy but also enables early warning and precise disease management, effectively reducing crop losses and increasing lettuce yield. However, since deep learning models typically rely on large-scale, high-quality labeled datasets, the lack of comprehensive lettuce disease databases remains a significant limitation to their widespread application. Existing agricultural image datasets often have limited coverage, making it difficult to meet the generalization requirements for different growing environments and disease types. Notably, in our previous research [20], we proposed a self-supervised pre-training method based on Vision Transformer (CRE framework) for plant pest and disease classification. We also developed GPID-22, a dataset encompassing 22 plant species, 199 categories, and a total of 205,371 images. Unfortunately, lettuce images were not included in this dataset. In future research, we plan to expand the existing dataset to incorporate lettuce images and apply the pretrained model to lettuce pest and disease detection. The dataset directory and plant pest and disease classification model from this study are illustrated in Figure 4. Additionally, Zhou et al. [64] proposed the LQ-GCN model, which significantly enhances overlapping community detection performance by integrating local modularity with optimized GCN architecture. This approach can be an effective tool for community structure modeling in large-scale agricultural knowledge graphs, such as plant growth networks or pest dissemination networks.

3.1.2. Precision Spraying

Precision spraying technology, powered by deep learning-based image recognition, enables accurate detection and localization of crop diseases, facilitating targeted pesticide application. This approach helps reduce pesticide overuse, minimize environmental pollution, and promote the sustainable development of agricultural ecosystems. In recent years, researchers have integrated deep learning, spectral analysis, and intelligent decision-making methods to develop more efficient and environmentally friendly pesticide application technologies, aiming to increase crop yield while reducing excessive chemical pesticide use. Bari et al. [65] employed deep transfer learning to develop a high-efficiency pest identification and pesticide recommendation system, where the VGG16 model achieved the highest performance in pest recognition, with an accuracy of 99%, significantly outperforming ResNet50mx1. This system not only assists farmers in rapid pest identification but also intelligently recommends appropriate pesticides, enhancing crop yield while minimizing unnecessary pesticide use.
Additionally, Hu et al. [66] proposed LettuceTrack, a precision spraying system based on YOLOv5 and multi-object tracking technology. This system integrates feature extraction and data association algorithms to achieve high-precision target detection and tracking, ensuring that each lettuce plant is sprayed only once. This approach effectively reduces pesticide waste and prevents over-application. The overall workflow of this method is shown in Figure 5. Intelligent recognition and real-time tracking technologies enable the spraying system to precisely cover target plants, enhancing pesticide utilization efficiency while minimizing environmental pollution caused by overlapping sprays.
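The spray-once guarantee in systems like LettuceTrack hinges on data association: detections in successive frames must be linked to persistent plant identities so a plant is never treated twice. As a simplified illustration only (the actual system uses YOLOv5 detections with a learned appearance and motion association, not this reduction), a greedy nearest-centroid tracker combined with a sprayed-ID set can sketch the idea:

```python
import math

def assign_ids(tracks, detections, max_dist=50.0):
    """Greedy nearest-centroid data association (illustrative, not LettuceTrack's actual algorithm).

    tracks: dict of id -> (x, y) last known centroid
    detections: list of (x, y) centroids from the current frame
    Returns the updated tracks dict and one id per detection.
    """
    next_id = max(tracks, default=-1) + 1
    ids = []
    unmatched = dict(tracks)
    for det in detections:
        best, best_d = None, max_dist
        for tid, pos in unmatched.items():
            d = math.dist(det, pos)
            if d < best_d:
                best, best_d = tid, d
        if best is None:              # no nearby track: a new plant entered the view
            best = next_id
            next_id += 1
        else:
            del unmatched[best]       # each track matches at most one detection
        tracks[best] = det
        ids.append(best)
    return tracks, ids

def spray_once(frames):
    """Trigger the sprayer exactly once per tracked plant across all frames."""
    tracks, sprayed, actions = {}, set(), []
    for detections in frames:
        tracks, ids = assign_ids(tracks, detections)
        for tid in ids:
            if tid not in sprayed:    # skip plants already treated in an earlier frame
                sprayed.add(tid)
                actions.append(tid)
    return actions
```

In this sketch, two plants drifting across three frames are each sprayed once, and a plant appearing late still gets treated; the sprayed-ID set is what prevents overlapping applications as the vehicle advances.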
These research advancements have successfully reduced pesticide usage, minimized agricultural environmental pollution, and enhanced the sustainability of agricultural production. However, precision spraying also introduces the challenge of pesticide residue, which not only affects lettuce quality but also poses potential health risks to humans. The accumulation of pesticides on leaf surfaces and their residual presence in soil and water bodies may lead to food safety concerns and ecological degradation. Therefore, optimizing precision spraying strategies, reducing pesticide residues, and ensuring food safety remain critical challenges in precision pesticide application. Future research directions could include the development of low-residue pesticides, intelligent dosage regulation, application of nano-pesticide technology, and post-spraying pesticide degradation detection to further enhance the safety and sustainability of precision pesticide application.

3.1.3. Pesticide Residue Detection

In lettuce production, trace element detection technologies have been continuously evolving, with a growing shift toward deep learning, offering new research directions for pesticide residue detection in precision spraying. Traditional elemental analysis methods primarily rely on spectroscopic detection and chromatographic analysis, which are based on physical and chemical techniques. However, integrating machine learning and deep learning has the potential to enhance automation and data processing capabilities, enabling more efficient and precise pesticide residue assessment. Maione et al. [67] applied SVM and linear discriminant analysis (LDA) to analyze the elemental composition of lettuce, improving detection accuracy. However, while SVM performs well on small sample datasets, it has high computational complexity, particularly when using nonlinear kernel functions, which reduces model interpretability. In contrast, LDA offers better interpretability but is less effective in handling nonlinear data.
Compared to traditional machine learning methods, deep learning is increasingly applied in trace element detection, demonstrating higher prediction accuracy, generalization ability, and robustness. In recent years, researchers have integrated spectral imaging, DNN, and autoencoders to develop efficient trace element detection methods, meeting the agricultural demand for high-throughput and non-destructive testing. Wu et al. [68] combined near-infrared transmission spectroscopy with DBN to propose a rapid, non-destructive pesticide residue detection method for lettuce leaves. Experimental results showed that the DBN-SVM hybrid model outperformed traditional classification methods, achieving 95% accuracy on the test set, and demonstrating the potential of combining deep learning and spectral analysis for pesticide residue detection. For lettuce trace element deficiency detection, Lu et al. [69] found that the Random Forest algorithm achieved 97.6% accuracy, whereas deep learning models exceeded 99.5%, further highlighting deep learning’s advantages in agricultural data analysis. Zhou et al. [70] introduced a deep learning regression method combining visible-near infrared hyperspectral imaging with stacked autoencoders (SAE) and least squares support vector regression (LSSVR), successfully enabling rapid prediction of cadmium (Cd) residues in lettuce leaves. In subsequent research [71], the same team further incorporated wavelet transform (WT) and stacked convolutional autoencoders (SCAE) for lead (Pb) detection, although Cd prediction performance declined slightly (Rp2 decreased by 0.0168 and Rp by 0.116), indicating room for improvement. Additionally, Sun et al. [72] utilized infrared spectroscopy to identify the characteristic wavelength (709 nm) for pesticide residue detection and employed CNNs for model training, enabling rapid detection of pesticide residues on lettuce leaves with improved accuracy and efficiency.
The targeted image extraction process during the characteristic wavelength spectral analysis is illustrated in Figure 6.
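For intuition, identifying a characteristic wavelength such as the 709 nm band reported by Sun et al. [72] amounts to finding the spectral band whose reflectance covaries most strongly with the measured residue levels. The sketch below is a simplified, hypothetical illustration of that idea using Pearson correlation (the cited studies use full spectroscopic calibration pipelines, not this reduction; bands are assumed non-constant across samples):

```python
def pearson(x, y):
    """Pearson correlation coefficient; assumes neither series is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def characteristic_wavelength(wavelengths, spectra, residues):
    """Pick the band whose reflectance correlates most strongly with residue level.

    spectra: one list of per-wavelength reflectances per sample.
    Returns the selected wavelength and the absolute correlation it achieved.
    """
    best_w, best_r = None, 0.0
    for j, w in enumerate(wavelengths):
        band = [s[j] for s in spectra]            # reflectance of band j across samples
        r = abs(pearson(band, residues))
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r
```

On toy data where the middle band tracks residue concentration exactly, the function returns that band with correlation 1.0; real calibration would of course validate the selection on held-out samples.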
A systematic and comprehensive review reveals that integrating deep learning with other technologies significantly enhances precision detection and prediction capabilities in agriculture. Among deep learning models, YOLO demonstrates exceptional application potential in complex agricultural environments due to its high accuracy, real-time performance, lightweight structure, and strong generalization ability. Table 2 summarizes the application of deep learning in pest and disease control. However, limited agricultural data resources for lettuce impose several challenges on deep learning applications in lettuce pest and disease detection, trace element analysis, and precision spraying. These challenges include data availability, environmental adaptability, economic costs, technology adoption barriers, model robustness, and other key issues:
  • Data scarcity: A major bottleneck for deep learning applications in lettuce is the lack of comprehensive datasets. Unlike staple crops such as rice and wheat, lettuce—an economic crop—has received less research attention, resulting in fragmented and limited annotated datasets. This scarcity restricts model training capabilities and generalization performance.
  • Environmental adaptability: The diverse ecological conditions, pest and disease types, and cultivation practices across regions hinder data integration, limiting model adaptability in cross-regional applications.
  • Economic costs: High implementation costs remain a key challenge for intelligent precision agriculture. Although deep learning models offer high detection accuracy, their deployment in real-world agricultural settings requires high-performance computing resources (GPU/TPU), sensors, drones, or automated systems, which are often unaffordable for small and medium-sized farms.
  • Technical barriers: The specialized nature of deep learning technology creates a significant adoption hurdle. From data collection and model training to system deployment, implementing deep learning requires advanced technical expertise, yet most agricultural producers lack the necessary AI knowledge and skills, restricting its practical applications.
  • Impact of environmental factors: The complexity of environmental variables poses a major challenge for deep learning models. Lettuce growth is influenced by light, temperature, humidity, soil nutrients, and nutrient solution concentrations, and fluctuations in these variables can destabilize model predictions, reducing model robustness across different growing conditions.
  • Pesticide residue accumulation: Although precision spraying technology reduces pesticide usage, residue accumulation remains a critical concern. Further research is needed to optimize spraying strategies, minimize harmful element absorption by crops, and develop intelligent monitoring and control systems to ensure food safety and environmental sustainability.

3.2. Crop Monitoring

3.2.1. Condition Monitoring

Nutrient detection in lettuce crops has been a research focus for many years. Traditional methods primarily rely on spectral analysis and machine learning techniques for elemental content measurement. For example, Gao et al. [73] utilized spectroscopy combined with machine learning-based regression models, including the linear regression model PLS and the nonlinear regression model ELM, along with genetic algorithm-based GA-siPLS, to measure nitrogen content in lettuce. Although ELM performs well in handling nonlinear relationships, its parameter selection relies on a trial-and-error approach, increasing optimization complexity. In contrast, deep learning leverages end-to-end learning and adaptive capabilities to rapidly optimize model performance and enhance accuracy [14].
Sikati et al. [74] optimized the YOLOv8 network and proposed the YOLO-NPK model, replacing the backbone network with VGG16 and incorporating depthwise separable convolutions, achieving 99% accuracy with FLOPs below 10 G and a latency of just 64.1 milliseconds. Ahsan et al. [75] applied deep learning models (VGG16, VGG19, and a plain CNN) to analyze nitrogen (N) concentrations in hydroponic lettuce, capturing images of four lettuce varieties and combining RGB imaging with deep learning to assess growth performance and nutrient concentration. With data augmentation, VGG16 and VGG19 achieved 87.5% to 100% accuracy in identifying the four varieties and their nitrogen levels, and VGG16 outperformed the plain CNN, reaching 88% to 100% accuracy in variety classification and fertilization level detection. Yu et al. [76] integrated hyperspectral data and temporal phenotypic data with Inception networks, residual networks, and attention mechanisms for feature extraction, using RNNs to process time-series data, thereby improving lettuce quality assessment under water stress conditions. These findings further validate the potential of integrating computer vision, deep learning, and robotic systems for real-time monitoring of lettuce growth and nutrient levels, offering the accuracy and efficiency needed to support intelligent agriculture.
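The efficiency gains behind lightweight models such as YOLO-NPK come largely from depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mixing step. A back-of-the-envelope multiply-accumulate count illustrates the saving (stride 1 and "same" padding assumed; the layer sizes below are illustrative, not taken from [74]):

```python
def conv_flops(h, w, cin, cout, k):
    """Multiply-accumulates of a standard k x k convolution on an h x w feature map."""
    return h * w * cout * cin * k * k

def depthwise_separable_flops(h, w, cin, cout, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    depthwise = h * w * cin * k * k      # one spatial filter per input channel
    pointwise = h * w * cin * cout       # 1 x 1 conv mixes channels
    return depthwise + pointwise
```

The ratio works out to 1/cout + 1/k², so for a 3×3 kernel on a wide layer (e.g., 56×56×128 → 256 channels) the separable form needs roughly a ninth of the operations, which is exactly the kind of reduction that keeps FLOPs under budget on edge hardware.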
Hamidon et al. [77] employed deep learning models (CenterNet, YOLOv5, YOLOv7, and Faster R-CNN) for lettuce seedling defect detection under varying lighting conditions. The experimental results demonstrated that YOLOv7 achieved the highest mAP (97.2%), highlighting the great potential of deep learning-driven automated seedling systems in indoor farming. The YOLO-based defect detection process using bounding boxes is illustrated in Figure 7. Automated detection of defective lettuce seedlings enhances nursery management efficiency in indoor agriculture. Additionally, Clave et al. [78] developed a mobile application that captures lettuce images and employs a lightweight CNN model for rapid health assessment and detection of nitrogen or potassium deficiencies, achieving an accuracy of 81%, making it suitable for mobile devices. These studies demonstrate that precise detection techniques contribute to efficient and sustainable lettuce farming, facilitating early diagnosis of nutrient deficiencies, improving crop yield, and promoting sustainable agricultural practices.

3.2.2. Classification of Growth Stages

In recent years, the application of deep learning in lettuce growth stage classification has demonstrated significant advantages, outperforming traditional machine learning methods. Studies have shown that, compared to traditional approaches that rely on destructive sampling measurements, deep learning enables non-destructive growth assessment by analyzing digital images of lettuce, aligning with the principles of sustainable agriculture [79,80]. Moreover, deep learning models exhibit strong adaptability across different lettuce varieties and demonstrate robust cross-season generalization, whereas traditional machine learning methods rely on manually crafted features, resulting in weaker generalization capabilities. The experimental results further indicate that in lettuce growth prediction tasks, deep learning methods achieve higher coefficients of determination (R2) and lower normalized root mean square error (NRMSE), demonstrating superior accuracy in growth-related feature estimation.
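The metrics cited throughout these growth-prediction studies, the coefficient of determination (R2) and the normalized root mean square error (NRMSE), are straightforward to compute. Note that NRMSE normalization conventions vary between papers; the sketch below assumes range normalization, while some studies normalize by the mean instead:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - residual sum of squares / total sum of squares."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def nrmse(y_true, y_pred):
    """Root mean square error normalized by the observed range (one common convention)."""
    n = len(y_true)
    rmse = (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
    return rmse / (max(y_true) - min(y_true))
```

A perfect prediction yields R2 = 1 and NRMSE = 0; higher R2 together with lower NRMSE is what the deep learning models in these comparisons consistently achieve over handcrafted-feature baselines.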
Accurate detection and prediction of lettuce growth stages play a crucial role in optimizing resource allocation, reducing waste, and promoting sustainable agriculture. Malabanan et al. [81] employed YOLOv10 and DETR models to classify four lettuce growth stages (early stage, heading stage, leaf stage, and harvesting stage). Based on a dataset of 173 images, the experimental results demonstrated that YOLOv10 outperformed DETR in classification accuracy. Zhang et al. [82] integrated the YOLO model with a channel attention mechanism and an adaptive spatial feature fusion module, proposing an improved YOLOX model to replace manual observation and enable automated identification of key growth stages across multiple lettuce varieties. This model achieved a mAP of 99.04%, demonstrating high precision in lettuce growth stage recognition.
In lettuce phenotypic analysis, Yu et al. [83] utilized the AUNet model for image segmentation and extracted 45 phenotypic indicators, including geometric, color, and texture features. These phenotypic parameters effectively reflect lettuce growth status at different stages and reveal its dynamic responses to water and nitrogen stress conditions. Chang et al. [84] applied the U-Net deep learning model for lettuce image segmentation, using the Jaccard index to evaluate segmentation accuracy, achieving a median score of 0.88. This approach successfully enabled growth pattern prediction and detection for lettuce. Additionally, Ojo et al. [85] employed the DeepLabV3+ network with MobileNetv2 as the backbone, enabling precise segmentation of lettuce images for phenotypic parameter prediction and resource optimization.
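The Jaccard index used by Chang et al. [84] to score segmentation quality is simply the intersection-over-union of the predicted and ground-truth masks. A minimal reference implementation on binary masks (lists of 0/1 rows):

```python
def jaccard_index(mask_a, mask_b):
    """Intersection over union of two same-sized binary masks."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a & b    # pixel labeled foreground in both masks
            union += a | b    # pixel labeled foreground in either mask
    return inter / union if union else 1.0   # two empty masks agree perfectly
```

A median score of 0.88, as reported in that study, means the typical predicted lettuce mask shares 88% of its combined foreground area with the ground truth.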
Additionally, predicting the optimal harvesting time for lettuce is critical for market supply chain management, as it not only enhances economic efficiency but also promotes sustainable agricultural development. Hou et al. [86] proposed an improved Mask R-CNN model for phenotypic estimation of optimal lettuce harvesting time. This approach replaces the ResNet backbone with RepVGG and incorporates a new phenotypic branch, enabling end-to-end prediction from images to phenotypic parameters, significantly improving estimation accuracy.

3.2.3. Yield Prediction

In recent years, deep learning applications in agriculture have been continuously optimized to enhance prediction accuracy and computational efficiency, leading to significant advancements in crop yield prediction [87,88]. These developments have enabled deep learning-based lettuce yield prediction to minimize manual intervention, contributing to sustainable agricultural practices. Lin et al. [89] proposed a multi-branch deep learning model that integrates color, depth, and geometric features from RGB-D images. Using a U-Net network, the model segments lettuce leaves and extracts their geometric features, which are then processed by a multi-branch regression network for fresh weight prediction. Notably, Xu et al. [90] later improved this approach by adopting a single-structure network to reduce computational complexity and expanding the dataset from 286 to 486 samples to enhance model generalization. Additionally, they calibrated RGB data and used a Kinect 2.0 device to optimize depth image fusion. The experimental results showed that this model increased the overall coefficient of determination (R2) by 0.0221, reduced the normalized root mean square error (NRMSE) by 0.0427, and lowered the mean absolute percentage error (MAPE) to 8.47%, meeting soft sensing standards. Similarly, Sun et al. [88] developed an image-based method for cotton boll counting and yield prediction, estimating fiber yield by calculating the number of cotton bolls in field images. Their study established a linear regression model (R2 = 0.53) between boll count and yield, validated with field data, achieving a prediction error of 8.92% and a root mean square error (RMSE) of 99 g. Despite some error margins, the findings demonstrate that image-based boll counting can serve as an effective tool for yield estimation, aiding breeders and farmers in optimizing cotton management and improving yield predictions. From the same research group, Tan et al. [91] proposed a cotton yield prediction method based on UAV imagery. Although these studies have not yet been directly applied to lettuce cultivation, they underscore the strong potential of deep learning, which can be transferred and applied to lettuce production in the future.
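The count-to-yield relationship in Sun et al. [88] is an ordinary least-squares line; the same one-variable regression could, for instance, relate detected lettuce head counts or segmented leaf areas to fresh weight. A minimal OLS fit for illustration (the cited study's data are not reproduced here):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept
```

With such a model fitted on image-derived counts, yield for a new plot is a single multiply-add; the quality of the estimate then rests entirely on how reliably the detector counts plants under occlusion.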
Early prediction of lettuce quality traits is crucial for optimizing breeding strategies and enhancing agricultural production efficiency, with hyperspectral imaging technology playing an increasingly important role in this field. Yu et al. [92] developed two end-to-end deep learning models, Deep2D and DeepFC, to predict soluble solid content (SSC) and pH values from the spectral reflectance of lettuce canopies, enabling phenotypic trait evaluation. Building on this, the same research team proposed a deep learning model that integrates hyperspectral data with temporal phenotypic information [76]. By incorporating a recurrent neural network (RNN) to process time-series data, they significantly improved the accuracy of lettuce quality prediction under water stress conditions. Additionally, Ye et al. [93] combined a CNN network with a spectral attention module, demonstrating superior performance in chlorophyll prediction. Their method achieved an average coefficient of determination (R2) of 0.746 and a root mean square error (RMSE) of 2.018, outperforming traditional machine learning approaches.
In lettuce counting and field distribution visualization, the integration of computer vision and deep learning has demonstrated exceptional application potential. Bauer et al. [94] employed a CNN model to automatically identify lettuce plants, achieving 98% accuracy. This enabled the counting of lettuce plants of various sizes and the visualization of field layouts. Building on this, the same research team developed an open-source analysis platform named Air-Surf-Lettuce [95], which enables automated lettuce counting and field analysis. Machefer et al. [96] enhanced Faster R-CNN by incorporating Mask R-CNN branches and applying transfer learning strategies to fine-tune parameters for drone imagery, enabling efficient lettuce counting. Zhang et al. [97] optimized YOLOv5s using the ShuffleNetv2 lightweight strategy, achieving superior performance in mAP, recall, and precision, while maintaining a model size of only 3.18 MB and a processing time of just 1.0 ms, demonstrating high computational efficiency and strong application value.
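Detection-based counting under heavy plant overlap relies on non-maximum suppression (NMS) to collapse multiple overlapping boxes into one detection per plant. A minimal greedy NMS sketch (the 0.5 IoU threshold is a common default, not a value reported by the cited works):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_plants(detections, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box for each plant.

    detections: list of (score, box) pairs; returns the kept boxes,
    whose count is the plant count.
    """
    kept = []
    for score, box in sorted(detections, reverse=True):   # highest confidence first
        if all(iou(box, k) < iou_thr for k in kept):      # suppress near-duplicates
            kept.append(box)
    return kept
```

Two boxes covering the same lettuce head collapse to one, while a distant plant survives, which is how detector output becomes a usable count for field distribution maps.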
In summary, the application of deep learning in crop yield prediction primarily relies on computer vision technology, largely due to the large-scale nature of crop cultivation and significant plant occlusion issues. Deep learning methods effectively handle overlapping instances in images, making them well-suited for agricultural environments. For example, the YOLOv5s-ShuffleNetv2 model has demonstrated outstanding detection performance in small, large, and densely packed target scenarios. Table 3 summarizes the application of deep learning in crop monitoring. However, complex and dynamic cultivation environments continue to pose major challenges to the generalization ability of deep learning models, primarily in the following aspects:
  • Environmental variations affecting image quality: Fluctuations in light intensity may lead to brightness variations in lettuce images, reducing the detection accuracy of target recognition models.
  • External factors causing image blur: Biotic and abiotic factors in agricultural fields, such as insect activity and strong winds causing leaf movement, may result in blurry images, thereby reducing model stability and robustness.
  • Diversity of cultivation environments: Environmental conditions in different growing regions may significantly differ from the training dataset, leading to a decline in model generalization ability and affecting prediction accuracy.
Therefore, to enhance the applicability of deep learning models in agricultural settings, future research should focus on data augmentation, domain adaptation techniques, and multimodal data fusion to improve model adaptability to complex environments and enhance the accuracy of crop growth stage classification and yield prediction.
Table 3. Deep learning applications in crop monitoring.
Research Direction: Condition Monitoring
Main Methodology: Traditional spectral analysis + machine learning (PLS, ELM, GA-siPLS); deep learning (YOLO-NPK, VGG16, VGG19, Inception, ResNet, RNN)
Key Results:
  • YOLO-NPK: 99% accuracy, FLOPs < 10 G, latency 64.1 ms
  • VGG16/VGG19: 87.5–100% accuracy
  • RNN: combining spectral and time-series data to improve lettuce quality assessment under water stress
References: [73,74,75,76,77,78]
Research Direction: Growth Stage Classification
Main Methodology: YOLOv10, DETR; YOLOXs (attention mechanism + adaptive spatial feature fusion); AUNet, U-Net, DeepLabV3+
Key Results:
  • YOLOv10 > DETR (higher classification accuracy)
  • YOLOXs: 99.04% mAP
  • AUNet: 45 phenotypic parameters extracted to analyze growth dynamics
  • U-Net: Jaccard index 0.88
References: [79,80,81,82,83,84,85,86]
Research Direction: Yield Prediction
Main Methodology: Multi-branch deep learning (U-Net + RGB-D features + regression); RNN (time-series data fusion); CNN + spectral attention mechanism
Key Results:
  • Single-structure RGB-D network: R2 improved by 0.0221, NRMSE reduced by 0.0427, MAPE lowered to 8.47%
  • CNN + spectral attention: R2 = 0.746, RMSE = 2.018 for chlorophyll prediction
  • YOLOv5s-ShuffleNetv2: 3.18 MB model, 1.0 ms processing time for lettuce counting
References: [87,88,89,90,91,92,93,94,95,96,97]

3.3. Field Management

3.3.1. Weed Management

Field weeds exert significant competitive pressure on lettuce growth and are a key factor contributing to crop yield reduction [5,98]. Inefficient and unscientific weed management practices not only lower crop productivity [99] but also disrupt ecological balance and compromise the stability of agroecosystems. Additionally, over-reliance on chemical herbicides can lead to environmental pollution and accelerate the degradation of farmland ecosystems. Therefore, to achieve sustainable agriculture, it is crucial to optimize weed management strategies and minimize excessive herbicide use [100,101]. In this context, exploring precise and environmentally sustainable weed control methods, such as intelligent weeding robots, biological control strategies, and integrated weed management (IWM), is essential for enhancing agricultural productivity while ensuring ecological conservation [102].
In recent years, deep learning technology has achieved significant advancements in weed detection and precision weeding. Osorio et al. [103] successfully implemented automatic weed detection in lettuce fields by integrating SVM, YOLOv3, Mask R-CNN, and the Normalized Difference Vegetation Index (NDVI), laying the foundation for precision weed management. Subsequently, Zhang et al. [104] applied the YOLOv5x model to lettuce weed detection and precision weeding. Similarly, Hu et al. [105] enhanced YOLOv7-L by incorporating Efficient Channel Attention and Coordinate Attention mechanisms, while integrating ELAN-B3 and DownC modules to improve detection accuracy and computational efficiency. More recently, Zhao et al. [5] applied YOLOv8n to precision weeding in lettuce, demonstrating superior performance across multiple evaluation metrics, including detection accuracy, recall, and mAP50. Furthermore, their study integrated the improved model with a pneumatic servo mowing system, enabling highly efficient automated weeding. Building on Zhao et al.’s research [5], Wang et al. [106] developed a more efficient lettuce and weed detection algorithm and conducted field experiments.
Weeding robot technology continues to advance, enhancing precision weed recognition and operational efficiency. Hu et al. [105] proposed a lightweight YOLO model and integrated it with a mechanical-laser cooperative weeding robot, developing a new field weed severity classification algorithm. Jiang et al. [98] designed the SPH-YOLOv5x model for precision weeding, replacing the SPPF module with SPPFCSPC and incorporating a more powerful CBAM attention mechanism, further improving detection accuracy. Zhang et al. [107] compared traditional machine learning and deep learning for lettuce weed detection, demonstrating that deep learning models outperform traditional methods in both accuracy and robustness. Notably, Raja et al. [108] introduced a robot vision-based automated lettuce weeding system, employing a geometric appearance detection algorithm for real-time crop identification, which then guides a mechanical weeding blade for non-chemical weed removal. The system utilizes UV illumination and multi-angle mirror imaging to accurately distinguish lettuce plants even in complex weed environments, achieving a 97.8% crop detection accuracy and an 83% weed removal rate in field experiments. This approach reduces reliance on manual and chemical weeding, promoting sustainable lettuce cultivation and improving production efficiency. Furthermore, Xiang et al. [109] developed an enhanced YOLOv5-based all-directional intelligent lettuce weeding machine, incorporating a color correction module and a lightweight feature extraction network to improve crop detection accuracy under varying field lighting conditions. The machine employs a “separation-closure” strategy to precisely avoid lettuce seedlings while effectively removing intra-row and inter-row weeds. Field experiments across four different locations demonstrated a 96.87% weeding rate, 1.19% crop damage rate, and 0.34% weed regrowth rate. 
These findings highlight the broad application potential of robotic weeding systems in precision agriculture, effectively reducing chemical herbicide use and supporting sustainable agricultural development.
Overall, with the continuous optimization of deep learning models, the intelligence level of lettuce field management has steadily improved, providing technological support for sustainable agriculture. An example of deep learning-driven lettuce weeding equipment is shown in Figure 8.

3.3.2. Irrigation and Fertilization Management

Precision irrigation and fertilization are essential for efficient lettuce cultivation, as intelligent management strategies not only enhance water and nutrient utilization efficiency and reduce energy consumption but also minimize waste in agricultural production, thereby promoting sustainable agriculture [110,111]. In modern agricultural systems, irrigation and fertilization typically integrate software and hardware, where software systems collect environmental data via sensors and transmit control commands to hardware devices, enabling precise regulation [112,113].
In recent years, smart irrigation and fertilization technologies have rapidly evolved, incorporating Internet of Things (IoT), machine learning, and remote sensing solutions. For instance, Jarrar et al. [114] developed an IoT-based solar-powered intelligent irrigation system, while Li et al. [110] proposed a strawberry irrigation strategy using the K-means clustering algorithm. Additionally, Sudkaew et al. [115] tested a variable fertilization robot that adjusts spray volume based on vegetation indices, further advancing precision fertilization techniques.
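The sensor-to-actuator loop these systems implement can be reduced to a simple rule-based controller for illustration. The thresholds and function below are purely hypothetical placeholders, not values or interfaces from the cited systems (which rely on learned models and richer sensing rather than fixed rules):

```python
def irrigation_command(soil_moisture, stress_prob, low=0.30, high=0.60):
    """Toy irrigation decision rule (illustrative thresholds only).

    soil_moisture: volumetric water fraction from a soil sensor (0-1)
    stress_prob: water-stress probability from a vision model (0-1)
    Returns the command a controller would forward to the valve hardware.
    """
    if soil_moisture < low or stress_prob > 0.8:
        return "IRRIGATE"    # soil is dry, or the model sees visible stress
    if soil_moisture > high:
        return "HOLD"        # saturated: avoid waterlogging and runoff
    return "MONITOR"         # in range: keep sampling, no actuation
```

Even this toy version shows the design point of such systems: fusing a cheap continuous sensor with an occasional model-based stress estimate lets the controller irrigate early under stress without overwatering when the soil is already wet.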
With the advancement of artificial intelligence, deep learning has been increasingly applied in smart irrigation systems, providing technological support for precision management in lettuce cultivation. Moraitis et al. [116] integrated the Faster R-CNN-Inception-V2 network with a motor-driven system and sensors, enabling precision irrigation for lettuce and improving water resource utilization efficiency. Similarly, Chang et al. [117] employed a deep learning-based Line Generation Method to optimize the autonomous navigation of mobile robots in lettuce fields, enhancing the automation of irrigation operations.
The key to precision irrigation lies in the real-time monitoring and accurate assessment of drought and water stress. Flores et al. [118] developed a MobileNetV2-SVM-based deep learning vision system for efficient detection of lettuce water stress, integrating Raspberry Pi 4B and Arduino for irrigation control. Hao et al. [119] utilized MFC-CNN to predict full moisture content (FMC) and equivalent water thickness (EWT) in lettuce canopies. Concepcion II et al. [120] combined thermal imaging with visible light features and used deep learning to estimate lettuce water stress levels. Their three YOLOv4 models were applied to detect black water bands, field edges, and crops in images. Notably, to ensure smooth crop recognition without occlusion or unidentifiable areas, no masking operations were applied to crop detection. Additionally, Wolter-Salas et al. [121] integrated infrared thermal imaging technology with the YOLOv8 model, achieving precise detection of drought stress in lettuce, thereby improving the accuracy of crop water status assessment under extreme environmental conditions.
Table 4 summarizes the application of deep learning in field management. Although deep learning technology has introduced intelligent advancements in field management, several challenges still limit its potential in sustainable agriculture, necessitating further optimization strategies:
  • Limited generalization ability in weed recognition: The high diversity of weed species and variability in growth environments present challenges for deep learning models. Some weeds closely resemble lettuce in appearance, reducing model generalization in complex environments.
  • High computational resource requirements: Despite achieving high speed and accuracy in object detection tasks, deep learning models demand significant computational resources, leading to high application costs in agricultural production and imposing an additional burden on farms [122]. Future research should focus on developing lighter models to improve real-time detection performance and reduce hardware costs.
  • Herbicide resistance: Prolonged use of the same herbicide can lead to weed resistance, reducing weed control effectiveness. Therefore, integrating biological control, crop rotation, and novel herbicide strategies is essential to mitigate resistance risks.
  • Ecological impact of herbicides: Certain herbicides can negatively affect the environment and biodiversity, hindering the ecological sustainability of lettuce farming. To address this, more environmentally friendly weeding techniques, such as laser-based or mechanical methods, should be developed to minimize chemical herbicide use.
  • Economic feasibility challenges: The high cost of smart irrigation and fertilization systems, including sensors, control systems, and automation hardware, makes it difficult for small farms to afford installation and maintenance, limiting economic sustainability. Future research should explore low-cost, high-efficiency solutions to facilitate the widespread adoption of intelligent agricultural technologies.
Table 4. Deep learning applications in field management.
Research Direction: Weed Management
Main Methodology: SVM, YOLOv3, Mask R-CNN, NDVI; YOLOv5x; YOLOv7-L (Efficient Channel Attention, Coordinate Attention, ELAN-B3, DownC modules); SPH-YOLOv5x (SPPF replaced by SPPFCSPC, CBAM attention mechanism); geometric appearance detection + UV illumination + multi-angle mirror imaging; YOLOv5-based system with color correction and a lightweight feature extraction network
Key Results:
  • Achieved automatic weed detection in lettuce fields, laying the foundation for precision weed management
  • Improved detection accuracy and computational efficiency for weed identification
  • Developed a new field weed severity classification algorithm, improving weeding efficiency
  • Enhanced detection accuracy and real-time weed identification
  • Achieved 97.8% crop detection accuracy and 83% weed removal rate in field tests
  • 96.87% weeding rate, 1.19% crop damage rate, and 0.34% weed regrowth rate
References: [98,103,104,105,108,109]
Research Direction: Irrigation and Fertilization Management
Main Methodology: Solar-powered IoT irrigation; K-means clustering for irrigation scheduling; Faster R-CNN-Inception-V2 with motor-driven irrigation systems; deep learning-based Line Generation Method; MobileNetV2-SVM vision system; MFC-CNN for moisture content prediction; thermal imaging with YOLOv4; infrared thermal imaging with YOLOv8
Key Results:
  • Optimized water resource utilization and enhanced irrigation efficiency
  • Improved automation and precision in lettuce irrigation
  • Enabled real-time lettuce water stress assessment and accurate drought detection
  • Achieved precise drought stress detection under extreme conditions
References: [110,114,116,117,118,119,120,121]

4. Discussion

4.1. Advantages of Deep Learning in Lettuce Cultivation

The rapid advancement of deep learning has not only driven innovation in agricultural technology but also contributed to agricultural sustainability. Agricultural sustainability can be evaluated from three perspectives: economic, ecological, and energy sustainability [123,124,125]. Deep learning offers innovative solutions in all three areas, enhancing the intelligence level of agricultural production.
In economic sustainability, deep learning significantly reduces labor costs and improves efficiency compared to traditional machine learning methods, owing to its powerful algorithmic capabilities. Traditional image classification tasks rely on manual feature extraction, which requires task-specific feature design, making the process complex and labor-intensive. In contrast, CNN-based deep learning methods can directly process raw images, automatically learning complex feature representations without manual feature engineering, thereby accelerating processing speed and reducing human resource consumption [126]. In ecological sustainability, traditional machine learning methods typically handle single-task classification problems, making it difficult to perform multiple tasks simultaneously. Deep learning, however, supports multi-task learning, enabling object detection, classification, and segmentation within a single framework [127]. This capability is particularly beneficial for plant health monitoring and pest identification, as it reduces excessive crop monitoring and intervention, thereby minimizing the ecological impact on farmland. Moreover, deep learning-based plant recognition and classification do not rely on task-specific feature processors, which simplifies data preprocessing, shortens overall analysis time [128], and eliminates the limitations of manual feature extraction in traditional methods [129]. In energy sustainability, deep learning-driven agricultural automation technologies—such as drone monitoring, intelligent spraying systems, and autonomous weeding robots—improve resource utilization efficiency. For instance, smart irrigation systems integrate deep learning algorithms with sensor data to enable precision irrigation, reducing water wastage. Similarly, autonomous weeding robots can accurately identify weeds, reducing the need for chemical herbicides, and thereby lowering environmental pollution.
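As an illustration of this contrast, a minimal NumPy sketch (not drawn from any cited study) shows that a CNN layer is essentially a convolution whose kernel is learned from data rather than designed by hand; the `conv2d` helper, the Sobel filter, and the random image patch below are all illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-crafted pipeline: a fixed Sobel kernel encodes the designer's prior
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# Learned pipeline: the kernel starts random and is updated by backpropagation
rng = np.random.default_rng(0)
learned_kernel = rng.normal(scale=0.1, size=(3, 3))

image = rng.random((8, 8))                # stand-in for a raw lettuce image patch
edges = conv2d(image, sobel_x)            # manual feature engineering
features = conv2d(image, learned_kernel)  # feature learned from data
```

The only structural difference between the two pipelines is where the kernel values come from, which is exactly why CNNs remove the manual feature-engineering step.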
Compared to traditional machine learning methods, deep learning demonstrates higher accuracy and superior performance across multiple tasks. Studies have shown that deep learning models consistently outperform traditional machine learning approaches in classification and prediction accuracy [130]. This advantage primarily stems from the hierarchical structure of deep learning models, which enables them to learn higher-level feature representations through successive neural layers and extract information directly from raw data using end-to-end learning, thereby enhancing model generalization and precision [131]. In contrast, traditional machine learning methods face challenges in scalability, adaptability, and data utilization. For instance, when handling large-scale datasets, traditional methods may experience performance degradation, leading to a decline in algorithm accuracy. Additionally, data redundancy and noise can introduce model instability and inconsistencies, negatively affecting prediction accuracy [132]. In agriculture, the YOLO model has gained widespread attention due to its outstanding accuracy and recall rate, making it a preferred choice for agricultural vision-based detection tasks [77]. For example, YOLO has been applied in crop disease detection to enable precision spraying, thereby reducing pesticide usage and mitigating the ecological impact of chemical pesticides [133,134]. By integrating deep learning-driven precision agriculture technologies, agricultural production efficiency can be significantly improved while minimizing environmental degradation, contributing to energy and ecological sustainability. In the future, lightweight network optimization, transfer learning, and multimodal data fusion are expected to further enhance deep learning’s role in intelligent agricultural management, providing robust technological support for global sustainable agriculture development.
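The accuracy and recall figures cited for YOLO-style detectors are computed from true/false positives and missed detections, with an Intersection-over-Union (IoU) criterion used to decide whether a predicted box matches a ground-truth box. A minimal sketch with hypothetical counts and box coordinates:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN), as used to score detectors."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical field test: 95 weeds detected correctly, 5 false alarms, 10 missed
p, r = precision_recall(tp=95, fp=5, fn=10)
```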

4.2. Challenges of Deep Learning in Lettuce Cultivation

Although deep learning has achieved remarkable progress in lettuce pest detection and irrigation management, several challenges remain in practical applications. CNN models can automate preprocessing steps, reducing manual intervention, and leverage open-source frameworks such as Keras and TensorFlow for training, improving pest detection accuracy and handling complex computations [135]. However, deep learning models still face significant limitations. Lettuce exhibits high genetic diversity, encompassing local and modern cultivars with distinct cultivation methods and substantial genetic variation. Given the global distribution of lettuce production, the variation in pest and disease types across climates reduces the generalization ability of deep learning models [136]. To address this problem, future research should construct large-scale, multi-environment, multi-modal lettuce databases covering data sources such as RGB, hyperspectral, and LiDAR imagery to improve model generalization. In addition, data augmentation, synthetic data (e.g., GAN-generated images), and transfer learning can alleviate data insufficiency, and the open sharing of standardized datasets should be encouraged to promote cross-institution collaboration and improve data quality and applicability. Additionally, lettuce is cultivated in both hydroponic and conventional farming systems and is highly sensitive to environmental fluctuations, meaning that models trained on a single dataset may fail under diverse growing conditions [137]. Furthermore, prolonged use of the same pesticides may drive resistance development in lettuce pests, potentially disrupting farmland ecosystems [138]. From a technological application perspective, current deep learning-integrated agricultural hardware primarily comprises drones and spraying robots, yet their high costs make them inaccessible to small-scale farms [139,140].
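The data augmentation mentioned above can be sketched with a few NumPy transforms; the specific operations and image size here are illustrative assumptions, not the pipeline of any cited study:

```python
import numpy as np

def augment(image, rng):
    """Return simple geometric/photometric variants of one labeled image."""
    variants = [
        np.fliplr(image),                                   # horizontal flip
        np.flipud(image),                                   # vertical flip
        np.rot90(image),                                    # 90-degree rotation
        np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0),   # brightness jitter
    ]
    return variants

rng = np.random.default_rng(42)
img = rng.random((64, 64, 3))   # stand-in for a normalized RGB lettuce image
augmented = augment(img, rng)   # four extra training samples from one original
```

Because the label (e.g., "diseased leaf") is invariant to these transforms, each original image yields several additional training samples at no labeling cost.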
Additionally, large-scale agricultural machinery is commonly used in lettuce farming, and in mixed farming environments, the presence of biological noise may compromise the robustness of deep learning models, reducing their stability in detection and control tasks. Moreover, deep learning models require extensive training before fertilization and irrigation applications, leading to increased consumption of fertilizers and water, which not only raises production costs but also contradicts economic and energy sustainability principles. Although deep learning technology has revolutionized lettuce production management, challenges related to data generalization, environmental adaptability, equipment costs, and resource consumption must be overcome to ensure its long-term feasibility and successful implementation in agricultural production.
Although deep learning has demonstrated significant advantages in lettuce yield prediction and weed management, its application in agriculture remains challenging due to the unstructured nature of agricultural environments and the variability of abiotic factors. First, yield estimation using deep learning does not always correspond to actual harvested yield, as different lettuce varieties exhibit significant variations in appearance quality during storage, affecting prediction accuracy [141]. Second, deep learning models heavily rely on large-scale image datasets for training [142,143]; however, abiotic factors such as rainfall and strong winds can degrade image quality. For instance, water droplets on camera lenses or crop movement due to wind can blur images, reducing the stability and reliability of research outcomes. Integrating multi-spectral or LiDAR data could potentially enhance model robustness by providing additional depth and spectral information, thereby mitigating the adverse effects of environmental disturbances. Moreover, deep learning-based crop growth prediction requires considerable technical expertise, posing adoption challenges for many farmers and limiting its widespread application in agriculture. More critically, the training process of deep learning models is often energy-intensive [144], which conflicts with the energy-efficient principles of sustainable agriculture. To address these challenges, hybrid models—such as combining CNNs with attention mechanisms—could improve feature extraction and enhance model adaptability to complex agricultural environments. Therefore, further algorithmic optimizations, including multimodal data fusion and hybrid architectures, are necessary to reduce computational resource consumption while improving the accuracy and robustness of deep learning applications in agricultural production.
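A hybrid CNN-plus-attention design of the kind suggested above can be illustrated with a squeeze-and-excitation-style channel attention block; this NumPy sketch uses randomly initialized weights purely for shape-level illustration and is not the architecture of any cited work:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention on an (H, W, C) map."""
    squeezed = feature_map.mean(axis=(0, 1))         # global average pool -> (C,)
    hidden = np.maximum(0.0, squeezed @ w1)          # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate in (0, 1)
    return feature_map * weights                     # reweight each channel

rng = np.random.default_rng(0)
C, r = 16, 4                                         # channels, reduction ratio
fmap = rng.random((8, 8, C))                         # stand-in CNN feature map
w1 = rng.normal(scale=0.1, size=(C, C // r))
w2 = rng.normal(scale=0.1, size=(C // r, C))
out = channel_attention(fmap, w1, w2)
```

The gate lets the network amplify informative channels (e.g., those responding to leaf texture) and suppress channels dominated by background clutter, which is why attention mechanisms help in cluttered field imagery.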
In lettuce weed identification and removal, the primary challenges stem from environmental factors. The widespread use of chemical herbicides may contribute to ecological degradation, which not only violates the principles of ecological sustainability in agriculture but also threatens farmland biodiversity. Consequently, the development of mechanical weeding technologies is crucial to reduce dependence on chemical herbicides. Additionally, variations in regional growing conditions lead to significant differences in weed morphology, with some weeds closely resembling lettuce in shape, increasing the risk of misclassification in deep learning-based detection. To enhance the generalization ability of deep learning models, large-scale and diverse datasets are required for training, enabling better adaptation to different environments and weed species. This, in turn, can further advance the intelligence level of lettuce cultivation management.
A major challenge of deep learning in lettuce cultivation management lies in the wide planting range, diverse environmental conditions, and numerous varieties. However, the lack of effective integration among different data platforms hinders data sharing and unified utilization. This data fragmentation not only affects the systematic study of lettuce growth characteristics but also limits the adaptability of deep learning models across different growing environments. Additionally, while lettuce is highly valued for its nutritional benefits and market demand, its yield, storage characteristics, and cultivation scale are far less extensive than staple crops such as wheat, resulting in a relative scarcity of lettuce growth data. Since deep learning models rely on large-scale datasets for pretraining to learn rich feature representations and provide robust initialization parameters, data insufficiency becomes a key factor restricting model generalization [145]. The current lack of large-scale, integrated databases on lettuce growth significantly limits deep learning models’ adaptability to various growing environments. Therefore, future research should prioritize the development of large-scale, multi-environment, and multi-variety comprehensive lettuce databases to enhance the generalization capability of deep learning models. An alternative solution is to optimize deep learning algorithms to enable efficient learning from small datasets, employing techniques such as Transfer Learning and Self-Supervised Learning to reduce data dependence. However, these approaches often come with higher computational costs and increased energy consumption, which conflict with the energy conservation principles of sustainable agriculture. Thus, future research must strike a balance between large-scale data infrastructure development and model optimization, ensuring the sustainable application of deep learning in agriculture.

4.3. Future Perspectives

The future development of deep learning must address multiple challenges, not only by enhancing algorithmic capability and accuracy but also by proposing improvements tailored to the specific difficulties of lettuce-related applications. To accelerate model convergence and improve performance, future research can employ adaptive optimization algorithms such as Adam and RMSprop and fine-tune models pretrained on large datasets (e.g., ImageNet), thereby enhancing generalization on new tasks. Additionally, given the unique requirements of agricultural applications, researchers can design task-specific loss functions, such as Focal Loss, to address class imbalance. To prevent overfitting and improve robustness, regularization techniques (e.g., L1/L2 weight penalties or Dropout) can be applied to the input data or hidden layers. Furthermore, future studies may explore integrating wavelet interpolation coupling techniques into deep learning models [146,147,148] to enhance feature extraction and further improve performance in agricultural tasks.
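Focal Loss, mentioned above for class imbalance, can be sketched in a few lines; the probabilities below are hypothetical, and the default gamma and alpha values follow common practice rather than any cited study:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights well-classified examples via (1-p_t)^gamma."""
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, None))

# An easy positive (p = 0.95) contributes far less than a hard one (p = 0.30),
# so training focuses on rare or difficult classes such as scarce pest images
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```

Setting gamma = 0 recovers ordinary alpha-weighted cross-entropy, so the modulating factor is the only mechanism that rebalances easy versus hard examples.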
The lack of sufficient lettuce growth data and the absence of cross-regional integrated data platforms severely restrict the accuracy and generalization ability of deep learning models. Future research should focus on expanding the scale of lettuce growth data collection and integrating datasets from different regions to facilitate the development of comprehensive lettuce growth data platforms (e.g., lettuceDB [149] and lettuceGBD [150]). This would enhance deep learning model training efficiency and generalization performance. Additionally, future studies could explore the incorporation of diffusion models [151,152] to generate synthetic data or augment existing lettuce datasets, thereby improving model robustness and generalization in lettuce cultivation applications and addressing current issues related to data scarcity and low dataset quality.
The future development of deep learning should emphasize stronger integration with hardware, as interdisciplinary collaboration can facilitate the fusion of software and hardware, making deep learning models more adaptable to the practical needs of agricultural environments. Existing research has demonstrated that various hardware technologies can be effectively integrated with deep learning, including:
  • Sensor Fusion: The integration of RGB, multispectral, thermal imaging, and SAR high-resolution sensors can enhance lettuce phenotypic prediction, soil analysis, and yield estimation, while also contributing to the development of fully autonomous farms [92,153,154,155,156]. Although multimodal data fusion—such as integrating hyperspectral imaging with IoT-based sensors—has been proposed as a promising direction, several technical challenges must be addressed to enable real-time decision-making. First, synchronizing heterogeneous data streams from different sensors remains a major obstacle, as variations in data acquisition rates and environmental conditions can introduce temporal misalignment. Second, interpreting fused data requires advanced deep learning models capable of extracting relevant features while filtering out noise from multimodal inputs. Third, computational efficiency is a critical concern, particularly for edge computing applications, where processing power is limited. Moreover, optical field optimization strategies offer innovative solutions for precision laser manipulation, enhanced machine vision, and multimodal sensor fusion in intelligent agricultural equipment, especially in complex environments where adaptive navigation and real-time visual feedback systems are essential [157].
  • Drone Technology: Drones equipped with deep learning algorithms and sensors enable intelligent pesticide spraying and fertilization, reducing manual intervention and minimizing agriculture’s ecological impact [158].
  • Weeding Robots: Deep learning-powered mechanical and laser weeding robots can decrease reliance on chemical herbicides, thereby reducing ecological pollution [5,106].
  • Irrigation and Fertilization Robots: Compared to traditional agricultural machinery, deep learning-enabled smart irrigation and fertilization robots possess precision sensing, autonomous decision-making, and intelligent control capabilities, allowing them to adapt to complex agricultural environments and perform highly efficient precision operations [159].
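The timestamp-synchronization obstacle noted under sensor fusion can be illustrated with a nearest-neighbor alignment sketch; the sensor rates, `max_gap` tolerance, and temperature values below are illustrative assumptions:

```python
import numpy as np

def align_nearest(t_ref, t_other, values_other, max_gap=0.05):
    """Match each reference timestamp to the nearest sample of a slower stream.

    Returns aligned values, with NaN where no sample lies within max_gap seconds.
    """
    idx = np.searchsorted(t_other, t_ref)
    idx = np.clip(idx, 1, len(t_other) - 1)
    left, right = t_other[idx - 1], t_other[idx]
    nearest = np.where(t_ref - left <= right - t_ref, idx - 1, idx)
    gap = np.abs(t_other[nearest] - t_ref)
    aligned = values_other[nearest].astype(float)
    aligned[gap > max_gap] = np.nan
    return aligned

# Hypothetical rates: 30 Hz RGB frames vs. a 5 Hz thermal stream
t_rgb = np.arange(0.0, 1.0, 1 / 30)
t_thermal = np.arange(0.0, 1.0, 1 / 5)
thermal_vals = 20.0 + np.arange(len(t_thermal))  # stand-in temperature readings
aligned = align_nearest(t_rgb, t_thermal, thermal_vals)
```

The NaN entries make the temporal misalignment explicit: most RGB frames have no thermal sample close enough to fuse, which is exactly the synchronization gap a multimodal model must handle.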
Despite its potential, deep learning in agriculture still faces high training costs and energy consumption, particularly because of expensive GPU and CPU hardware requirements that small farms can rarely afford [160]. Future research should prioritize lightweight models and low-cost hardware solutions to reduce expenses and increase the accessibility of deep learning technology in agriculture. Existing studies have already explored such approaches. For instance, Ukaegbu et al. [161] developed a lightweight drone-based herbicide spraying system using a Raspberry Pi 3 as the embedded platform, combined with a CNN model and low-cost sensors (e.g., the Pi Camera), significantly reducing hardware costs. However, deploying lightweight models on edge devices such as the Raspberry Pi requires a trade-off between computational efficiency and prediction accuracy, and low-complexity models may suffer performance degradation. In addition, existing research lacks real-time performance evaluation in agricultural scenarios; standardized benchmarks should be established to quantify the inference speed, power consumption, and accuracy of different models under field conditions. Future research could further investigate hardware accelerators (e.g., Intel Movidius [162]) to decrease inference time and integrate advanced deep learning algorithms to improve detection efficiency. Additionally, lightweight deep learning models such as MobileNet, EfficientNet, and ShuffleNet [163] can effectively reduce computational costs and energy consumption while remaining deployable on devices such as smartphones and microcontrollers, supporting sustainable energy and economic development in agriculture. For instance, Wang et al. [164] integrated EfficientNet with MobileNetV1 and DenseNet-121, improving lettuce height prediction accuracy. Similarly, Adianggiali et al. [165] designed a CNN model based on the MobileNetV2 architecture, achieving successful classification of nutrient deficiencies in hydroponic lettuce.
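The parameter savings behind lightweight architectures such as MobileNet come largely from depthwise separable convolutions; a back-of-the-envelope sketch (channel counts chosen purely for illustration):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """MobileNet-style factorization: depthwise k x k conv + pointwise 1 x 1 conv."""
    return k * k * c_in + c_in * c_out

# Example layer: 64 input channels, 128 output channels, 3 x 3 kernels
standard = conv_params(64, 128, 3)                    # 73,728 weights
lightweight = depthwise_separable_params(64, 128, 3)  # 8,768 weights
reduction = standard / lightweight                    # roughly 8x fewer parameters
```

This order-of-magnitude reduction in weights (and the corresponding drop in multiply-accumulate operations) is what makes deployment on smartphones and microcontrollers feasible.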
Although existing research on lightweight models remains limited, future efforts should focus on advancing the application of lightweight deep learning models in agriculture to enhance their role in the sustainable development of lettuce farming. Moreover, deep learning has already been widely applied in the management of other crops, providing valuable case studies for future research [14,20,166,167,168,169]. By leveraging these successful implementations and integrating transfer learning [170], established models can be adapted and applied to lettuce cultivation, improving model adaptability and scalability. Furthermore, future studies could explore the integration of digital twin technology into lettuce cultivation to further enhance the level of intelligent management [171].

5. Conclusions

Lettuce has significant global economic value due to its nutritional and medicinal properties, with rising production demand. Deep learning enhances cultivation quality and efficiency, making it a key research area in modern agriculture.
Deep learning includes discriminative, generative, and hybrid learning, all widely applied in lettuce research. Compared to traditional methods, it reduces labor costs and processing time while improving accuracy. For example, YOLO and CNN models combined with hyperspectral imaging enable real-time pest detection, while deep learning aids in nutrient deficiency identification for precision fertilization. Integration with IoT and sensors allows automated water and fertilizer management through intelligent irrigation systems.
However, challenges remain. First, diverse climates, soils, and farming methods limit model generalization, requiring larger datasets and transfer learning. Second, high hardware costs and energy consumption hinder adoption, especially for small farms. Future research should develop lightweight models and affordable agricultural hardware to improve accessibility.
Additionally, pesticide residue and resistance issues challenge deep learning applications. Overreliance on chemical pesticides harms sustainability. Future efforts should optimize models for real-world conditions and explore sustainable strategies such as precision spraying, mechanical weeding, and biological control to reduce pesticide dependence and support eco-friendly agriculture.

Author Contributions

Conceptualization, Y.N., R.-F.W. and H.W.; methodology, Y.N., R.-F.W. and H.W.; investigation, Y.-M.Q. and Y.-H.T.; resources, Y.N., R.-F.W. and H.W.; data curation, Y.-M.Q.; writing—original draft preparation, Y.-M.Q., Y.-H.T. and T.L.; writing—review and editing, Y.N., R.-F.W. and H.W.; visualization, T.L.; supervision, Y.N., R.-F.W. and H.W.; project administration, Y.N., R.-F.W. and H.W.; funding acquisition, Y.N., R.-F.W. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Agricultural High-Quality Development Key Common Technology Research and Development Special Program (21327401D), the Guangdong Basic and Applied Basic Research Foundation (2023A1515110319), Science and Technology Projects in Guangzhou (2025A04J3757), and the Key Science and Technology Program of Henan Province (242102210171).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lindqvist, K. On the Origin of Cultivated Lettuce. Hereditas 1960, 46, 319–350. [Google Scholar] [CrossRef]
  2. Kim, M.J.; Moon, Y.; Tou, J.C.; Mou, B.; Waterland, N.L. Nutritional Value, Bioactive Compounds and Health Benefits of Lettuce (Lactuca sativa L.). J. Food Compos. Anal. 2016, 49, 19–34. [Google Scholar] [CrossRef]
  3. Lei, L. Lettuce-Manufactured Pharmaceuticals. Nat. Plants 2019, 5, 646. [Google Scholar] [CrossRef]
  4. Shatilov, M.; Razin, A.; Ivanova, M. Analysis of the World Lettuce Market. IOP Conf. Ser. Earth Environ. Sci. 2019, 395, 012053. [Google Scholar] [CrossRef]
  5. Zhao, C.-T.; Wang, R.-F.; Tu, Y.-H.; Pang, X.-X.; Su, W.-H. Automatic Lettuce Weed Detection and Classification Based on Optimized Convolutional Neural Networks for Robotic Weed Control. Agronomy 2024, 14, 2838. [Google Scholar] [CrossRef]
  6. da Silva, T.M.; Cividanes, F.J.; Salles, F.A.; Pacífico Manfrim Perticarrari, A.L.; Zambon da Cunha, S.B.; Monteiro dos Santos-Cividanes, T. Insect Pests and Natural Enemies Associated with Lettuce Lactuca sativa L. (Asteraceae) in an Aquaponics System. Sci. Rep. 2024, 14, 14947. [Google Scholar] [CrossRef]
  7. Embaby, E.-S.M.; Lotfy, D.E.-S. Ecological Studies on Cabbage Pests. J. Agric. Technol. 2015, 11, 1145–1160. [Google Scholar]
  8. Simko, I.; Atallah, A.J.; Ochoa, O.E.; Antonise, R.; Galeano, C.H.; Truco, M.J.; Michelmore, R.W. Identification of QTLs Conferring Resistance to Downy Mildew in Legacy Cultivars of Lettuce. Sci. Rep. 2013, 3, 2875. [Google Scholar] [CrossRef]
  9. German-Retana, S.; Walter, J.; Le Gall, O. Lettuce Mosaic Virus: From Pathogen Diversity to Host Interactors. Mol. Plant Pathol. 2008, 9, 127–136. [Google Scholar] [CrossRef]
  10. Kamberoglu, M.; Alan, B. Occurrence of Tomato Spotted Wilt Virus in Lettuce in Cukurova Region of Turkey. Int. J. Agric. Biol. 2011, 13, 431–434. [Google Scholar]
  11. Hong, J.; Xu, F.; Chen, G.; Huang, X.; Wang, S.; Du, L.; Ding, G. Evaluation of the Effects of Nitrogen, Phosphorus, and Potassium Applications on the Growth, Yield, and Quality of Lettuce (Lactuca sativa L.). Agronomy 2022, 12, 2477. [Google Scholar] [CrossRef]
  12. Shen, Y.Z.; Guo, S.S.; Ai, W.D.; Tang, Y.K. Effects of Illuminants and Illumination Time on Lettuce Growth, Yield and Nutritional Quality in a Controlled Environment. Life Sci. Space Res. 2014, 2, 38–42. [Google Scholar] [CrossRef]
  13. Ojeda, A.; Moreno, G.; Martínez, O. Effects of Environmental Factors on the Morphometric Characteristics of Cultivated Lettuce (Lactuca sativa L.). Agron. Colomb. 2012, 30, 351–358. [Google Scholar]
  14. Wang, R.-F.; Su, W.-H. The Application of Deep Learning in the Whole Potato Production Chain: A Comprehensive Review. Agriculture 2024, 14, 1225. [Google Scholar] [CrossRef]
  15. Zhou, G.; Wang, R.-F. The Heterogeneous Network Community Detection Model Based on Self-Attention. Symmetry 2025, 17, 432. [Google Scholar] [CrossRef]
  16. Pan, C.-H.; Qu, Y.; Yao, Y.; Wang, M.-J.-S. HybridGNN: A Self-Supervised Graph Neural Network for Efficient Maximum Matching in Bipartite Graphs. Symmetry 2024, 16, 1631. [Google Scholar] [CrossRef]
  17. Tu, Y.-H.; Wang, R.-F.; Su, W.-H. Active Disturbance Rejection Control—New Trends in Agricultural Cybernetics in the Future: A Comprehensive Review. Machines 2025, 13, 111. [Google Scholar] [CrossRef]
  18. Camalan, S.; Cui, K.; Pauca, V.P.; Alqahtani, S.; Silman, M.; Chan, R.; Plemmons, R.J.; Dethier, E.N.; Fernandez, L.E.; Lutz, D.A. Change Detection of Amazonian Alluvial Gold Mining Using Deep Learning and Sentinel-2 Imagery. Remote Sens. 2022, 14, 1746. [Google Scholar] [CrossRef]
  19. Latif, G.; Abdelhamid, S.E.; Mallouhy, R.E.; Alghazo, J.; Kazimi, Z.A. Deep Learning Utilization in Agriculture: Detection of Rice Plant Diseases Using an Improved CNN Model. Plants 2022, 11, 2230. [Google Scholar] [CrossRef]
  20. Wang, Z.; Wang, R.; Wang, M.; Lai, T.; Zhang, M. Self-Supervised Transformer-Based Pre-Training Method with General Plant Infection Dataset. In Proceedings of the Pattern Recognition and Computer Vision, Urumqi, China, 18–20 October 2024; Lin, Z., Cheng, M.-M., He, R., Ubul, K., Silamu, W., Zha, H., Zhou, J., Liu, C.-L., Eds.; Springer Nature: Singapore, 2025; pp. 189–202. [Google Scholar]
  21. Nasiri, A.; Omid, M.; Taheri-Garavand, A.; Jafari, A. Deep Learning-Based Precision Agriculture through Weed Recognition in Sugar Beet Fields. Sustain. Comput. Inform. Syst. 2022, 35, 100759. [Google Scholar]
  22. Su, D.; Kong, H.; Qiao, Y.; Sukkarieh, S. Data Augmentation for Deep Learning Based Semantic Segmentation and Crop-Weed Classification in Agricultural Robotics. Comput. Electron. Agric. 2021, 190, 106418. [Google Scholar]
  23. Zhao, Z.; Yin, C.; Guo, Z.; Zhang, J.; Chen, Q.; Gu, Z. Research on Apple Recognition and Localization Method Based on Deep Learning. Agronomy 2025, 15, 413. [Google Scholar] [CrossRef]
  24. Escorcia-Gutierrez, J.; Gamarra, M.; Soto-Diaz, R.; Pérez, M.; Madera, N.; Mansour, R.F. Intelligent Agricultural Modelling of Soil Nutrients and pH Classification Using Ensemble Deep Learning Techniques. Agriculture 2022, 12, 977. [Google Scholar] [CrossRef]
  25. Li, F.; Bai, J.; Zhang, M.; Zhang, R. Yield Estimation of High-Density Cotton Fields Using Low-Altitude UAV Imaging and Deep Learning. Plant Methods 2022, 18, 55. [Google Scholar] [PubMed]
  26. El-Habil, B.Y.; Abu-Naser, S.S. Global Climate Prediction Using Deep Learning. J. Theor. Appl. Inf. Technol. 2022, 100, 4824–4838. [Google Scholar]
  27. Prodhan, F.A.; Zhang, J.; Yao, F.; Shi, L.; Pangali Sharma, T.P.; Zhang, D.; Cao, D.; Zheng, M.; Ahmed, N.; Mohana, H.P. Deep Learning for Monitoring Agricultural Drought in South Asia Using Remote Sensing Data. Remote Sens. 2021, 13, 1715. [Google Scholar] [CrossRef]
  28. Sami, M.; Khan, S.Q.; Khurram, M.; Farooq, M.U.; Anjum, R.; Aziz, S.; Qureshi, R.; Sadak, F. A Deep Learning-Based Sensor Modeling for Smart Irrigation System. Agronomy 2022, 12, 212. [Google Scholar] [CrossRef]
  29. Mathew, M.P.; Elayidom, S.; Jagathy Raj, V.; Abubeker, K. Development of a Handheld GPU-Assisted DSC-TransNet Model for the Real-Time Classification of Plant Leaf Disease Using Deep Learning Approach. Sci. Rep. 2025, 15, 3579. [Google Scholar]
  30. Ali, T.; Rehman, S.U.; Ali, S.; Mahmood, K.; Obregon, S.A.; Iglesias, R.C.; Khurshaid, T.; Ashraf, I. Smart Agriculture: Utilizing Machine Learning and Deep Learning for Drought Stress Identification in Crops. Sci. Rep. 2024, 14, 30062. [Google Scholar]
  31. Liu, C.; Lu, W.; Gao, B.; Kimura, H.; Li, Y.; Wang, J. Rapid Identification of Chrysanthemum Teas by Computer Vision and Deep Learning. Food Sci. Nutr. 2020, 8, 1968–1977. [Google Scholar]
  32. Huo, Y.; Liu, Y.; He, P.; Hu, L.; Gao, W.; Gu, L. Identifying Tomato Growth Stages in Protected Agriculture with StyleGAN3–Synthetic Images and Vision Transformer. Agriculture 2025, 15, 120. [Google Scholar] [CrossRef]
  33. Han, J.; Hong, J.; Chen, X.; Wang, J.; Zhu, J.; Li, X.; Yan, Y.; Li, Q. Integrating Convolutional Attention and Encoder–Decoder Long Short-Term Memory for Enhanced Soil Moisture Prediction. Water 2024, 16, 3481. [Google Scholar] [CrossRef]
  34. Cynthia, E.P.; Ismanto, E.; Arifandy, M.I.; Sarbaini, S.; Nazaruddin, N.; Manuhutu, M.A.; Akbar, M.A.; Abdiyanto. Convolutional Neural Network and Deep Learning Approach for Image Detection and Identification. J. Phys. Conf. Ser. 2022, 2394, 012019. [Google Scholar]
  35. Kim, J.-S.G.; Moon, S.; Park, J.; Kim, T.; Chung, S. Development of a Machine Vision-Based Weight Prediction System of Butterhead Lettuce (Lactuca sativa L.) Using Deep Learning Models for Industrial Plant Factory. Front. Plant Sci. 2024, 15, 1365266. [Google Scholar]
  36. Wu, Z.; Yang, R.; Gao, F.; Wang, W.; Fu, L.; Li, R. Segmentation of Abnormal Leaves of Hydroponic Lettuce Based on DeepLabV3+ for Robotic Sorting. Comput. Electron. Agric. 2021, 190, 106443. [Google Scholar]
  37. Guo, H.; Woodruff, A.; Yadav, A. Improving Lives of Indebted Farmers Using Deep Learning: Predicting Agricultural Produce Prices Using Convolutional Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13294–13299. [Google Scholar]
  38. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 8, 53. [Google Scholar]
  39. Wu, C.-J.; Raghavendra, R.; Gupta, U.; Acun, B.; Ardalani, N.; Maeng, K.; Chang, G.; Aga, F.; Huang, J.; Bai, C.; et al. Sustainable Ai: Environmental Implications, Challenges and Opportunities. Proc. Mach. Learn. Syst. 2022, 4, 795–813. [Google Scholar]
  40. Cappelli, S.L.; Domeignoz-Horta, L.A.; Loaiza, V.; Laine, A.-L. Plant Biodiversity Promotes Sustainable Agriculture Directly and via Belowground Effects. Trends Plant Sci. 2022, 27, 674–687. [Google Scholar]
  41. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar]
  42. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  43. Deng, L. A Tutorial Survey of Architectures, Algorithms, and Applications for Deep Learning. APSIPA Trans. Signal Inf. Process. 2014, 3, e2. [Google Scholar] [CrossRef]
  44. Jiang, Y.; Li, C.; Paterson, A.H.; Robertson, J.S. DeepSeedling: Deep Convolutional Network and Kalman Filter for Plant Seedling Detection and Counting in the Field. Plant Methods 2019, 15, 141. [Google Scholar] [CrossRef] [PubMed]
  45. Tan, C.; Li, C.; He, D.; Song, H. Anchor-Free Deep Convolutional Neural Network for Tracking and Counting Cotton Seedlings and Flowers. Comput. Electron. Agric. 2023, 215, 108359. [Google Scholar] [CrossRef]
  46. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  47. Huang, Y.; Sun, S.; Duan, X.; Chen, Z. A Study on Deep Neural Networks Framework. In Proceedings of the 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 3–5 October 2016; pp. 1519–1522. [Google Scholar]
  48. Liu, N.; Dai, F.; Chai, X.; Liu, G.; Wu, X.; Huang, B. Review of Collaborative Inference in Edge Intelligence: Emphasis on DNN Partition. In Proceedings of the 2024 IEEE Cyber Science and Technology Congress (CyberSciTech), Boracay Island, Philippines, 5–8 November 2024; pp. 15–22. [Google Scholar]
  49. Liu, D.; Li, Z.; Wu, Z.; Li, C. Digital Twin/MARS-CycleGAN: Enhancing Sim-to-Real Crop/Row Detection for MARS Phenotyping Robot Using Synthetic Images. J. Field Robot. 2024. [Google Scholar] [CrossRef]
  50. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  51. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
  52. Hua, Y.; Guo, J.; Zhao, H. Deep Belief Networks and Deep Learning. In Proceedings of the 2015 International Conference on Intelligent Computing and Internet of Things, Harbin, China, 17–18 January 2015; pp. 1–4. [Google Scholar]
  53. Zhang, N.; Ding, S.; Zhang, J.; Xue, Y. An Overview on Restricted Boltzmann Machines. Neurocomputing 2018, 275, 1186–1199. [Google Scholar] [CrossRef]
  54. Hamidon, M.H.; Ahamed, T. Detection of Tip-Burn Stress on Lettuce Grown in an Indoor Environment Using Deep Learning Algorithms. Sensors 2022, 22, 7251. [Google Scholar] [CrossRef]
  55. Macioszek, V.K.; Marciniak, P.; Kononowicz, A.K. Impact of Sclerotinia Sclerotiorum Infection on Lettuce (Lactuca sativa L.) Survival and Phenolics Content—A Case Study in a Horticulture Farm in Poland. Pathogens 2023, 12, 1416. [Google Scholar] [CrossRef]
  56. Tang, Y.; Du, M.; Li, Z.; Yu, L.; Lan, G.; Ding, S.; Farooq, T.; He, Z.; She, X. Identification and Genome Characterization of Begomovirus and Satellite Molecules Associated with Lettuce (Lactuca sativa L.) Leaf Curl Disease. Plants 2025, 14, 782. [Google Scholar] [CrossRef] [PubMed]
  57. PlantVillage. Available online: https://plantvillage.psu.edu/ (accessed on 31 March 2025).
  58. Wissemeier, A.H.; Zühlke, G. Relation between Climatic Variables, Growth and the Incidence of Tipburn in Field-Grown Lettuce as Evaluated by Simple, Partial and Multiple Regression Analysis. Sci. Hortic. 2002, 93, 193–204. [Google Scholar] [CrossRef]
  59. Ban, S.; Tian, M.; Hu, D.; Xu, M.; Yuan, T.; Zheng, X.; Li, L.; Wei, S. Evaluation and Early Detection of Downy Mildew of Lettuce Using Hyperspectral Imagery. Agriculture 2025, 15, 444. [Google Scholar] [CrossRef]
  60. Abbasi, R.; Martinez, P.; Ahmad, R. Crop Diagnostic System: A Robust Disease Detection and Management System for Leafy Green Crops Grown in an Aquaponics Facility. Artif. Intell. Agric. 2023, 10, 1–12. [Google Scholar]
  61. Ali, A.M.; Słowik, A.; Hezam, I.M.; Abdel-Basset, M. Sustainable Smart System for Vegetables Plant Disease Detection: Four Vegetable Case Studies. Comput. Electron. Agric. 2024, 227, 109672. [Google Scholar]
  62. Barcenilla, J.A.G.; Maderazo, C.V. Identifying Common Pest and Disease of Lettuce Plants Using Convolutional Neural Network. In Proceedings of the 2023 2nd International Conference on Futuristic Technologies (INCOFT), Belagavi, India, 24–26 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  63. Wang, Y.; Wu, M.; Shen, Y. Identifying the Growth Status of Hydroponic Lettuce Based on YOLO-EfficientNet. Plants 2024, 13, 372. [Google Scholar] [CrossRef]
  64. Zhou, G.; Wang, R.-F.; Cui, K. A Local Perspective-Based Model for Overlapping Community Detection. arXiv 2025, arXiv:2503.21558. [Google Scholar]
  65. Bari, P.; Ragha, L. Optimizing Pesticide Decisions with Deep Transfer Learning by Recognizing Crop Pest. In Proceedings of the 2023 International Conference on New Frontiers in Communication, Automation, Management and Security (ICCAMS), Bangalore, India, 27–28 October 2023; IEEE: Piscataway, NJ, USA, 2023; Volume 1, pp. 1–6. [Google Scholar]
  66. Hu, N.; Su, D.; Wang, S.; Nyamsuren, P.; Qiao, Y.; Jiang, Y.; Cai, Y. LettuceTrack: Detection and Tracking of Lettuce for Robotic Precision Spray in Agriculture. Front. Plant Sci. 2022, 13, 1003243. [Google Scholar]
  67. Maione, C.; Araujo, E.M.; dos Santos-Araujo, S.N.; Boim, A.G.F.; Barbosa, R.M.; Alleoni, L.R.F. Determining the Geographical Origin of Lettuce with Data Mining Applied to Micronutrients and Soil Properties. Sci. Agric. 2021, 79, e20200011. [Google Scholar]
  68. Wu, M.; Sun, J.; Lu, B.; Ge, X.; Zhou, X.; Zou, M. Application of Deep Brief Network in Transmission Spectroscopy Detection of Pesticide Residues in Lettuce Leaves. J. Food Process Eng. 2019, 42, e13005. [Google Scholar] [CrossRef]
  69. Lu, J.; Peng, K.; Wang, Q.; Sun, C. Lettuce Plant Trace-Element-Deficiency Symptom Identification via Machine Vision Methods. Agriculture 2023, 13, 1614. [Google Scholar] [CrossRef]
  70. Zhou, X.; Sun, J.; Tian, Y.; Chen, Q.; Wu, X.; Hang, Y. A Deep Learning Based Regression Method on Hyperspectral Data for Rapid Prediction of Cadmium Residue in Lettuce Leaves. Chemom. Intell. Lab. Syst. 2020, 200, 103996. [Google Scholar]
  71. Zhou, X.; Sun, J.; Tian, Y.; Lu, B.; Hang, Y.; Chen, Q. Hyperspectral Technique Combined with Deep Learning Algorithm for Detection of Compound Heavy Metals in Lettuce. Food Chem. 2020, 321, 126503. [Google Scholar] [CrossRef]
  72. Sun, L.; Cui, X.; Fan, X.; Suo, X.; Fan, B.; Zhang, X. Automatic Detection of Pesticide Residues on the Surface of Lettuce Leaves Using Images of Feature Wavelengths Spectrum. Front. Plant Sci. 2023, 13, 929999. [Google Scholar]
  73. Gao, H.; Mao, H.; Zhang, X. Determination of Lettuce Nitrogen Content Using Spectroscopy with Efficient Wavelength Selection and Extreme Learning Machine. Zemdirb.-Agric. 2015, 102, 51–58. [Google Scholar] [CrossRef]
  74. Sikati, J.; Nouaze, J.C. YOLO-NPK: A Lightweight Deep Network for Lettuce Nutrient Deficiency Classification Based on Improved YOLOv8 Nano. Eng. Proc. 2023, 58, 31. [Google Scholar] [CrossRef]
  75. Ahsan, M.; Eshkabilov, S.; Cemek, B.; Küçüktopcu, E.; Lee, C.W.; Simsek, H. Deep Learning Models to Determine Nutrient Concentration in Hydroponically Grown Lettuce Cultivars (Lactuca sativa L.). Sustainability 2022, 14, 416. [Google Scholar] [CrossRef]
  76. Yu, S.; Fan, J.; Lu, X.; Wen, W.; Shao, S.; Liang, D.; Yang, X.; Guo, X.; Zhao, C. Deep Learning Models Based on Hyperspectral Data and Time-Series Phenotypes for Predicting Quality Attributes in Lettuces under Water Stress. Comput. Electron. Agric. 2023, 211, 108034. [Google Scholar] [CrossRef]
  77. Hamidon, M.H.; Ahamed, T. Detection of Defective Lettuce Seedlings Grown in an Indoor Environment under Different Lighting Conditions Using Deep Learning Algorithms. Sensors 2023, 23, 5790. [Google Scholar] [CrossRef]
  78. Clave, J.; Formales, K.P.; Godoy, G.S.; Macatangay, A.P.; Pedrasa, J.R. Mobile Detection of Macronutrient Deficiencies in Lettuce Plants Using Convolutional Neural Network. In Proceedings of the TENCON 2024—2024 IEEE Region 10 Conference (TENCON), Singapore, 1–4 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1377–1380. [Google Scholar]
  79. Nagano, S.; Moriyuki, S.; Wakamori, K.; Mineno, H.; Fukuda, H. Leaf-Movement-Based Growth Prediction Model Using Optical Flow Analysis and Machine Learning in Plant Factory. Front. Plant Sci. 2019, 10, 227. [Google Scholar] [CrossRef]
  80. Zhang, L.; Xu, Z.; Xu, D.; Ma, J.; Chen, Y.; Fu, Z. Growth Monitoring of Greenhouse Lettuce Based on a Convolutional Neural Network. Hortic. Res. 2020, 7, 124. [Google Scholar]
  81. Malabanan, J.A.B.; Buenventura, V.A.N.; Domondon, J.Y.F.; Canada, L.A.; Rosales, M.A. Growth Stage Classification on Lettuce Cultivars Using Deep Learning Models. In Proceedings of the 2024 IEEE International Conference on Imaging Systems and Techniques (IST), Tokyo, Japan, 14–16 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  82. Zhang, P.; Li, D. CBAM+ ASFF-YOLOXs: An Improved YOLOXs for Guiding Agronomic Operation Based on the Identification of Key Growth Stages of Lettuce. Comput. Electron. Agric. 2022, 203, 107491. [Google Scholar]
  83. Yu, H.; Dong, M.; Zhao, R.; Zhang, L.; Sui, Y. Research on Precise Phenotype Identification and Growth Prediction of Lettuce Based on Deep Learning. Environ. Res. 2024, 252, 118845. [Google Scholar] [PubMed]
  84. Chang, S.; Lee, U.; Hong, M.J.; Jo, Y.D.; Kim, J.-B. Lettuce Growth Pattern Analysis Using U-Net Pre-Trained with Arabidopsis. Agriculture 2021, 11, 890. [Google Scholar] [CrossRef]
  85. Ojo, M.O.; Zahid, A.; Masabni, J.G. Estimating Hydroponic Lettuce Phenotypic Parameters for Efficient Resource Allocation. Comput. Electron. Agric. 2024, 218, 108642. [Google Scholar]
  86. Hou, L.; Zhu, Y.; Wei, N.; Liu, Z.; You, J.; Zhou, J.; Zhang, J. Study on Utilizing Mask R-CNN for Phenotypic Estimation of Lettuce’s Growth Status and Optimal Harvest Timing. Agronomy 2024, 14, 1271. [Google Scholar] [CrossRef]
  87. Kaur, N.; Snider, J.L.; Paterson, A.H.; Virk, G.; Parkash, V.; Roberts, P.; Li, C. Genotypic Variation in Functional Contributors to Yield for a Diverse Collection of Field-Grown Cotton. Crop Sci. 2024, 64, 1846–1861. [Google Scholar] [CrossRef]
  88. Sun, S.; Li, C.; Paterson, A.H.; Chee, P.W.; Robertson, J.S. Image Processing Algorithms for Infield Single Cotton Boll Counting and Yield Prediction. Comput. Electron. Agric. 2019, 166, 104976. [Google Scholar] [CrossRef]
  89. Lin, Z.; Fu, R.; Ren, G.; Zhong, R.; Ying, Y.; Lin, T. Automatic Monitoring of Lettuce Fresh Weight by Multi-Modal Fusion Based Deep Learning. Front. Plant Sci. 2022, 13, 980581. [Google Scholar]
  90. Xu, D.; Chen, J.; Li, B.; Ma, J. Improving Lettuce Fresh Weight Estimation Accuracy through RGB-D Fusion. Agronomy 2023, 13, 2617. [Google Scholar] [CrossRef]
  91. Tan, C.; Sun, J.; Song, H.; Li, C. A Customized Density Map Model and Segment Anything Model for Cotton Boll Number, Size, and Yield Prediction in Aerial Images. Comput. Electron. Agric. 2025, 232, 110065. [Google Scholar] [CrossRef]
  92. Yu, S.; Fan, J.; Lu, X.; Wen, W.; Shao, S.; Guo, X.; Zhao, C. Hyperspectral Technique Combined with Deep Learning Algorithm for Prediction of Phenotyping Traits in Lettuce. Front. Plant Sci. 2022, 13, 927832. [Google Scholar]
  93. Ye, Z.; Tan, X.; Dai, M.; Chen, X.; Zhong, Y.; Zhang, Y.; Ruan, Y.; Kong, D. A Hyperspectral Deep Learning Attention Model for Predicting Lettuce Chlorophyll Content. Plant Methods 2024, 20, 22. [Google Scholar]
  94. Bauer, A.; Bostrom, A.G.; Ball, J.; Applegate, C.; Cheng, T.; Laycock, S.; Rojas, S.M.; Kirwan, J.; Zhou, J. Combining Computer Vision and Deep Learning to Enable Ultra-Scale Aerial Phenotyping and Precision Agriculture: A Case Study of Lettuce Production. Hortic. Res. 2019, 6, 70. [Google Scholar] [PubMed]
  95. Bauer, A.; Bostrom, A.G.; Ball, J.; Applegate, C.; Cheng, T.; Laycock, S.; Rojas, S.M.; Kirwan, J.; Zhou, J. AirSurf-Lettuce: An Aerial Image Analysis Platform for Ultra-Scale Field Phenotyping and Precision Agriculture Using Computer Vision and Deep Learning. bioRxiv 2019. [Google Scholar] [CrossRef]
  96. Machefer, M.; Lemarchand, F.; Bonnefond, V.; Hitchins, A.; Sidiropoulos, P. Mask R-CNN Refitting Strategy for Plant Counting and Sizing in UAV Imagery. Remote Sens. 2020, 12, 3015. [Google Scholar] [CrossRef]
  97. Zhang, P.; Li, D. Automatic Counting of Lettuce Using an Improved YOLOv5s with Multiple Lightweight Strategies. Expert Syst. Appl. 2023, 226, 120220. [Google Scholar]
  98. Jiang, B.; Zhang, J.-L.; Su, W.-H.; Hu, R. A SPH-YOLOv5x-Based Automatic System for Intra-Row Weed Control in Lettuce. Agronomy 2023, 13, 2915. [Google Scholar] [CrossRef]
  99. Oerke, E.-C. Crop Losses to Pests. J. Agric. Sci. 2006, 144, 31–43. [Google Scholar] [CrossRef]
  100. Perotti, V.E.; Larran, A.S.; Palmieri, V.E.; Martinatto, A.K.; Permingeat, H.R. Herbicide Resistant Weeds: A Call to Integrate Conventional Agricultural Practices, Molecular Biology Knowledge and New Technologies. Plant Sci. 2020, 290, 110255. [Google Scholar] [CrossRef]
  101. Dai, X.; Xu, Y.; Zheng, J.; Song, H. Analysis of the Variability of Pesticide Concentration Downstream of Inline Mixers for Direct Nozzle Injection Systems. Biosyst. Eng. 2019, 180, 59–69. [Google Scholar] [CrossRef]
  102. Bhowmik, P.C. Weed Biology: Importance to Weed Management. Weed Sci. 1997, 45, 349–356. [Google Scholar]
  103. Osorio, K.; Puerto, A.; Pedraza, C.; Jamaica, D.; Rodríguez, L. A Deep Learning Approach for Weed Detection in Lettuce Crops Using Multispectral Images. AgriEngineering 2020, 2, 471–488. [Google Scholar] [CrossRef]
  104. Zhang, J.-L.; Su, W.-H.; Zhang, H.-Y.; Peng, Y. SE-YOLOv5x: An Optimized Model Based on Transfer Learning and Visual Attention Mechanism for Identifying and Localizing Weeds and Vegetables. Agronomy 2022, 12, 2061. [Google Scholar] [CrossRef]
  105. Hu, R.; Su, W.-H.; Li, J.-L.; Peng, Y. Real-Time Lettuce-Weed Localization and Weed Severity Classification Based on Lightweight YOLO Convolutional Neural Networks for Intelligent Intra-Row Weed Control. Comput. Electron. Agric. 2024, 226, 109404. [Google Scholar]
  106. Wang, R.-F.; Tu, Y.-H.; Chen, Z.-Q.; Zhao, C.-T.; Su, W.-H. A LettPoint-YOLOv11l Based Intelligent Robot for Precision Intra-Row Weed Control in Lettuce. 2025. Available online: https://ssrn.com/abstract=5162748 (accessed on 2 April 2025).
  107. Zhang, L.; Zhang, Z.; Wu, C.; Sun, L. Segmentation Algorithm for Overlap Recognition of Seedling Lettuce and Weeds Based on SVM and Image Blocking. Comput. Electron. Agric. 2022, 201, 107284. [Google Scholar]
  108. Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-Time Robotic Weed Knife Control System for Tomato and Lettuce Based on Geometric Appearance of Plant Labels. Biosyst. Eng. 2020, 194, 152–164. [Google Scholar] [CrossRef]
  109. Xiang, M.; Gao, X.; Wang, G.; Qi, J.; Qu, M.; Ma, Z.; Chen, X.; Zhou, Z.; Song, K. An Application Oriented All-Round Intelligent Weeding Machine with Enhanced YOLOv5. Biosyst. Eng. 2024, 248, 269–282. [Google Scholar] [CrossRef]
  110. Li, L.; Wang, H.; Wu, Y.; Chen, S.; Wang, H.; Sigrimis, N.A. Investigation of Strawberry Irrigation Strategy Based on K-Means Clustering Algorithm. Trans. Chin. Soc. Agric. Mach. 2020, 51, 295–302. [Google Scholar]
  111. Li, L.; Li, J.; Wang, H.; Georgieva, T.; Ferentinos, K.; Arvanitis, K.; Sygrimis, N. Sustainable Energy Management of Solar Greenhouses Using Open Weather Data on MACQU Platform. Int. J. Agric. Biol. Eng. 2018, 11, 74–82. [Google Scholar]
  112. Yuan, H.; Cheng, M.; Pang, S.; Li, L.; Wang, H.; Sigrims, N.A. Construction and Performance Experiment of Integrated Water and Fertilization Irrigation Recycling System. Trans. Chin. Soc. Agric. Eng. 2014, 30, 72–78. [Google Scholar]
  113. Wang, H.; Fu, Q.; Meng, F.; Mei, S.; Wang, J.; Li, L. Optimal Design and Experiment of Fertilizer EC Regulation Based on Subsection Control Algorithm of Fuzzy and PI. Trans. Chin. Soc. Agric. Eng. 2016, 32, 110–116. [Google Scholar]
  114. Jarrar, E.; Hasan, A.R.; Alimari, A.; Saleh, M. Water and Fertilizer Use Efficiency of Lettuce Plants Cultivated in Soilless Conditions under Different Irrigation Systems. Desalination Water Treat. 2022, 275, 184–195. [Google Scholar]
  115. Sudkaew, N.; Tantrairatn, S. Foliar Fertilizer Robot for Raised Bed Greenhouse Using NDVI Image Processing System. In Proceedings of the 2021 25th International Computer Science and Engineering Conference (ICSEC), Chiang Rai, Thailand, 18–20 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 222–227. [Google Scholar]
  116. Moraitis, M.; Vaiopoulos, K.; Balafoutis, A.T. Design and Implementation of an Urban Farming Robot. Micromachines 2022, 13, 250. [Google Scholar] [CrossRef] [PubMed]
  117. Chang, C.-L.; Chen, H.-W. Straight-Line Generation Approach Using Deep Learning for Mobile Robot Guidance in Lettuce Fields. In Proceedings of the 2023 9th International Conference on Applied System Innovation (ICASI), Chiba, Japan, 21–25 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 187–189. [Google Scholar]
  118. Flores, E.J.C.; Gonzaga, J.A.; Augusto, G.L.; Chua, J.A.T.; Lim, L.A.G. Deep Learning-Based Vision System for Water Stress Classification of Lettuce in Pot Cultivation. In Proceedings of the 2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Coron, Philippines, 19–23 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  119. Hao, X.; Jia, J.; Gao, W.; Guo, X.; Zhang, W.; Zheng, L.; Wang, M. MFC-CNN: An Automatic Grading Scheme for Light Stress Levels of Lettuce (Lactuca sativa L.) Leaves. Comput. Electron. Agric. 2020, 179, 105847. [Google Scholar]
  120. Concepcion, R., II; Lauguico, S.; Almero, V.J.; Dadios, E.; Bandala, A.; Sybingco, E. Lettuce Leaf Water Stress Estimation Based on Thermo-Visible Signatures Using Recurrent Neural Network Optimized by Evolutionary Strategy. In Proceedings of the 2020 IEEE 8th R10 Humanitarian Technology Conference (R10-HTC), Kuching, Malaysia, 1–3 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  121. Wolter-Salas, S.; Canessa, P.; Campos-Vargas, R.; Opazo, M.C.; Sepulveda, R.V.; Aguayo, D. WS-YOLO: An Agronomical and Computer Vision-Based Framework to Detect Drought Stress in Lettuce Seedlings Using IR Imaging and YOLOv8. In Proceedings of the International Conference on Advanced Research in Technologies, Information, Innovation and Sustainability, Madrid, Spain, 18–20 October 2023; Springer: Cham, Switzerland, 2023; pp. 339–351. [Google Scholar]
  122. Teshome, F.T.; Bayabil, H.K.; Schaffer, B.; Ampatzidis, Y.; Hoogenboom, G. Improving Soil Moisture Prediction with Deep Learning and Machine Learning Models. Comput. Electron. Agric. 2024, 226, 109414. [Google Scholar]
  123. Reganold, J.P.; Papendick, R.I.; Parr, J.F. Sustainable Agriculture. Sci. Am. 1990, 262, 112–121. [Google Scholar]
  124. Velten, S.; Leventon, J.; Jager, N.; Newig, J. What Is Sustainable Agriculture? A Systematic Review. Sustainability 2015, 7, 7833–7865. [Google Scholar] [CrossRef]
  125. Robertson, G.P. A Sustainable Agriculture? Daedalus 2015, 144, 76–89. [Google Scholar]
  126. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar]
  127. Liu, J.; Wang, X. Plant Diseases and Pests Detection Based on Deep Learning: A Review. Plant Methods 2021, 17, 22. [Google Scholar] [CrossRef] [PubMed]
  128. Grinblat, G.L.; Uzal, L.C.; Larese, M.G.; Granitto, P.M. Deep Learning for Plant Identification Using Vein Morphological Patterns. Comput. Electron. Agric. 2016, 127, 418–424. [Google Scholar] [CrossRef]
  129. Kang, J.; Zhang, Y.; Liu, X.; Cheng, Z. Hyperspectral Image Classification Using Spectral–Spatial Double-Branch Attention Mechanism. Remote Sens. 2024, 16, 193. [Google Scholar] [CrossRef]
  130. Tan, L.; Lu, J.; Jiang, H. Tomato Leaf Diseases Classification Based on Leaf Images: A Comparison between Classical Machine Learning and Deep Learning Methods. AgriEngineering 2021, 3, 542–558. [Google Scholar] [CrossRef]
  131. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in Vegetation Remote Sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  132. Cravero, A.; Sepúlveda, S. Use and Adaptations of Machine Learning in Big Data—Applications in Real Cases in Agriculture. Electronics 2021, 10, 552. [Google Scholar] [CrossRef]
  133. Li, J.; Qiao, Y.; Liu, S.; Zhang, J.; Yang, Z.; Wang, M. An Improved YOLOv5-Based Vegetable Disease Detection Method. Comput. Electron. Agric. 2022, 202, 107345. [Google Scholar] [CrossRef]
  134. Wang, H.; Shang, S.; Wang, D.; He, X.; Feng, K.; Zhu, H. Plant Disease Detection and Classification Method Based on the Optimized Lightweight YOLOv5 Model. Agriculture 2022, 12, 931. [Google Scholar] [CrossRef]
  135. Rajendiran, G.; Rethnaraj, J.; Malaisamy, J. Enhanced CNN Model for Lettuce Disease Identification in Indoor Aeroponic Vertical Farming Systems. In Proceedings of the 2024 4th International Conference on Sustainable Expert Systems (ICSES), Kaski, Nepal, 15–17 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1407–1412. [Google Scholar]
  136. Missio, J.C.; Rivera, A.; Figàs, M.R.; Casanova, C.; Camí, B.; Soler, S.; Simó, J. A Comparison of Landraces vs. Modern Varieties of Lettuce in Organic Farming during the Winter in the Mediterranean Area: An Approach Considering the Viewpoints of Breeders, Consumers, and Farmers. Front. Plant Sci. 2018, 9, 1491. [Google Scholar] [CrossRef]
  137. Lages Barbosa, G.; Almeida Gadelha, F.D.; Kublik, N.; Proctor, A.; Reichelm, L.; Weissinger, E.; Wohlleb, G.M.; Halden, R.U. Comparison of Land, Water, and Energy Requirements of Lettuce Grown Using Hydroponic vs. Conventional Agricultural Methods. Int. J. Environ. Res. Public. Health 2015, 12, 6879–6891. [Google Scholar] [CrossRef]
  138. Hassan Mhya, D.; Mohammed, A. Pesticides’ Impact on the Nutritious and Bioactive Molecules of Green Leafy Vegetables: Spinach and Lettuce. J. Soil Sci. Plant Nutr. 2025, 1–17. [Google Scholar] [CrossRef]
  139. Huang, Y.-Y.; Li, Z.-W.; Yang, C.-H.; Huang, Y.-M. Automatic Path Planning for Spraying Drones Based on Deep Q-Learning. J. Internet Technol. 2023, 24, 565–575. [Google Scholar]
  140. Wang, X.; Wang, S.; Peng, F.; Su, J. Design and Research of an Intelligent Pesticide Spraying Robot. In Proceedings of the 2023 IEEE 7th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 15–17 September 2023; IEEE: Piscataway, NJ, USA, 2023; Volume 7, pp. 1907–1911. [Google Scholar]
  141. Martínez-Ispizua, E.; Calatayud, Á.; Marsal, J.I.; Basile, F.; Cannata, C.; Abdelkhalik, A.; Soler, S.; Valcárcel, J.V.; Martínez-Cuenca, M.-R. Postharvest Changes in the Nutritional Properties of Commercial and Traditional Lettuce Varieties in Relation with Overall Visual Quality. Agronomy 2022, 12, 403. [Google Scholar] [CrossRef]
  142. Picon, A.; San-Emeterio, M.G.; Bereciartua-Perez, A.; Klukas, C.; Eggers, T.; Navarra-Mestre, R. Deep Learning-Based Segmentation of Multiple Species of Weeds and Corn Crop Using Synthetic and Real Image Datasets. Comput. Electron. Agric. 2022, 194, 106719. [Google Scholar] [CrossRef]
  143. Gang, M.-S.; Kim, H.-J.; Kim, D.-W. Estimation of Greenhouse Lettuce Growth Indices Based on a Two-Stage CNN Using RGB-D Images. Sensors 2022, 22, 5499. [Google Scholar] [CrossRef]
  144. Guillén, M.A.; Llanes, A.; Imbernón, B.; Martínez-España, R.; Bueno-Crespo, A.; Cano, J.-C.; Cecilia, J.M. Performance Evaluation of Edge-Computing Platforms for the Prediction of Low Temperatures in Agriculture Using Deep Learning. J. Supercomput. 2021, 77, 818–840. [Google Scholar]
  145. Reyes, A.K.; Caicedo, J.C.; Camargo, J.E. Fine-Tuning Deep Convolutional Networks for Plant Recognition. In Proceedings of the Working Notes of CLEF 2015—Conference and Labs of the Evaluation Forum, Toulouse, France, 8–11 September 2015; Volume 1391. [Google Scholar]
  146. Wang, H.; Liu, J.; Liu, L.; Zhao, M.; Mei, S. Coupling Technology of OpenSURF and Shannon-Cosine Wavelet Interpolation for Locust Slice Images Inpainting. Comput. Electron. Agric. 2022, 198, 107110. [Google Scholar]
  147. Wang, H.; Mei, S.-L. Shannon Wavelet Precision Integration Method for Pathologic Onion Image Segmentation Based on Homotopy Perturbation Technology. Math. Probl. Eng. 2014, 2014, 601841. [Google Scholar]
  148. Wang, H.; Zhang, X.; Mei, S. Shannon-Cosine Wavelet Precise Integration Method for Locust Slice Image Mixed Denoising. Math. Probl. Eng. 2020, 2020, 4989735. [Google Scholar]
  149. Zhou, W.; Yang, T.; Zeng, L.; Chen, J.; Wang, Y.; Guo, X.; You, L.; Liu, Y.; Du, W.; Yang, F.; et al. LettuceDB: An Integrated Multi-Omics Database for Cultivated Lettuce. Database 2024, 2024, baae018. [Google Scholar]
  150. Guo, Z.; Li, B.; Du, J.; Shen, F.; Zhao, Y.; Deng, Y.; Kuang, Z.; Tao, Y.; Wan, M.; Lu, X.; et al. LettuceGDB: The Community Database for Lettuce Genetics and Omics. Plant Commun. 2023, 4, 100425. [Google Scholar] [PubMed]
  151. Cui, K.; Li, R.; Polk, S.L.; Lin, Y.; Zhang, H.; Murphy, J.M.; Plemmons, R.J.; Chan, R.H. Superpixel-Based and Spatially-Regularized Diffusion Learning for Unsupervised Hyperspectral Image Clustering. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4405818. [Google Scholar]
  152. Polk, S.L.; Cui, K.; Chan, A.H.; Coomes, D.A.; Plemmons, R.J.; Murphy, J.M. Unsupervised Diffusion and Volume Maximization-Based Clustering of Hyperspectral Images. Remote Sens. 2023, 15, 1053. [Google Scholar] [CrossRef]
  153. Shang, C.; Yang, F.; Huang, D.; Lyu, W. Data-Driven Soft Sensor Development Based on Deep Learning Technique. J. Process Control 2014, 24, 223–233. [Google Scholar]
  154. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  155. Hou, L.; Zhu, Y.; Wang, M.; Wei, N.; Dong, J.; Tao, Y.; Zhou, J.; Zhang, J. Multimodal Data Fusion for Precise Lettuce Phenotype Estimation Using Deep Learning Algorithms. Plants 2024, 13, 3217. [Google Scholar] [CrossRef]
  156. Martinez-Nolasco, C.; Padilla-Medina, J.A.; Nolasco, J.J.M.; Guevara-Gonzalez, R.G.; Barranco-Gutiérrez, A.I.; Diaz-Carmona, J.J. Non-Invasive Monitoring of the Thermal and Morphometric Characteristics of Lettuce Grown in an Aeroponic System through Multispectral Image System. Appl. Sci. 2022, 12, 6540. [Google Scholar] [CrossRef]
  157. Li, Z.; Sun, C.; Wang, H.; Wang, R.-F. Hybrid Optimization of Phase Masks: Integrating Non-Iterative Methods with Simulated Annealing and Validation via Tomographic Measurements. Symmetry 2025, 17, 530. [Google Scholar] [CrossRef]
  158. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep Learning Techniques to Classify Agricultural Crops through UAV Imagery: A Review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar]
  159. Zhao, C.; Fan, B.; Li, J.; Feng, Q. Agricultural Robots: Technology Progress, Challenges and Trends. Smart Agric. 2023, 5, 1–15. [Google Scholar]
  160. Albahar, M. A Survey on Deep Learning and Its Impact on Agriculture: Challenges and Opportunities. Agriculture 2023, 13, 540. [Google Scholar] [CrossRef]
  161. Ukaegbu, U.F.; Tartibu, L.K.; Okwu, M.O.; Olayode, I.O. Development of a Light-Weight Unmanned Aerial Vehicle for Precision Agriculture. Sensors 2021, 21, 4417. [Google Scholar] [CrossRef]
  162. Ukaegbu, U.; Tartibu, L.; Okwu, M. An Overview of Deep Learning Hardware Accelerators in Smart Agricultural Applications. In Proceedings of the 31st Annual Southern African Institute for Industrial Engineering Conference, Virtual, 5–7 October 2020. [Google Scholar]
  163. Liu, H.-I.; Galindo, M.; Xie, H.; Wong, L.-K.; Shuai, H.-H.; Li, Y.-H.; Cheng, W.-H. Lightweight Deep Learning for Resource-Constrained Environments: A Survey. ACM Comput. Surv. 2024, 56, 267. [Google Scholar] [CrossRef]
  164. Wang, M.; Guo, X.; Zhong, Y.; Feng, Y.; Zhao, M. Extracting the Height of Lettuce by Using Neural Networks of Image Recognition in Deep Learning. Authorea Prepr. 2022. Available online: https://d197for5662m48.cloudfront.net/documents/publicationstatus/105790/preprint_pdf/e6fc785d5b7ce07b655625a991504e49.pdf (accessed on 2 April 2025).
  165. Adianggiali, A.; Irawati, I.D.; Hadiyoso, S.; Latip, R. Classification of Nutrient Deficiencies Based on Leaf Image in Hydroponic Lettuce Using MobileNet Architecture. ELKOMIKA J. Tek. Energi Elektr. Tek. Telekomun. Tek. Elektron. 2023, 11, 958. [Google Scholar] [CrossRef]
  166. Cui, K.; Zhu, R.; Wang, M.; Tang, W.; Larsen, G.D.; Pauca, V.P.; Alqahtani, S.; Yang, F.; Segurado, D.; Lutz, D.; et al. Detection and Geographic Localization of Natural Objects in the Wild: A Case Study on Palms. arXiv 2025, arXiv:2502.13023. [Google Scholar]
  167. Wan, S.; Zhao, K.; Lu, Z.; Li, J.; Lu, T.; Wang, H. A Modularized IoT Monitoring System with Edge-Computing for Aquaponics. Sensors 2022, 22, 9260. [Google Scholar] [CrossRef]
  168. Li, Z.; Xu, R.; Li, C.; Munoz, P.; Takeda, F.; Leme, B. In-Field Blueberry Fruit Phenotyping with a MARS-PhenoBot and Customized BerryNet. Comput. Electron. Agric. 2025, 232, 110057. [Google Scholar] [CrossRef]
  169. Jiang, L.; Li, C.; Fu, L. Apple Tree Architectural Trait Phenotyping with Organ-Level Instance Segmentation from Point Cloud. Comput. Electron. Agric. 2025, 229, 109708. [Google Scholar] [CrossRef]
  170. Cui, K.; Shao, Z.; Larsen, G.; Pauca, V.; Alqahtani, S.; Segurado, D.; Pinheiro, J.; Wang, M.; Lutz, D.; Plemmons, R.; et al. PalmProbNet: A Probabilistic Approach to Understanding Palm Distributions in Ecuadorian Tropical Forest via Transfer Learning. In Proceedings of the 2024 ACM Southeast Conference, Marietta, GA, USA, 18–20 April 2024; pp. 272–277. [Google Scholar]
  171. Ding, H.; Zhao, L.; Yan, J.; Feng, H.-Y. Implementation of Digital Twin in Actual Production: Intelligent Assembly Paradigm for Large-Scale Industrial Equipment. Machines 2023, 11, 1031. [Google Scholar] [CrossRef]
Figure 1. Global production of lettuce from 1993 to 2022. (Data source: FAO dataset, last accessed on 12 March 2025).
Figure 2. The applications of deep learning in various processes of lettuce cultivation.
Figure 3. Samples of lettuce disease and pest: (a) a sample of lettuce tip-burn stress [54]; (b) samples of sclerotinia drop (sclerotinia rot) [55]; (c) samples of lettuce leaf curl disease [56]; (d) a sample of European brown snail [57]; (e) a sample of gray garden slug [57].
Figure 4. (a) Taxonomy of the GPID-22 dataset; (b) structural diagram of CRE framework [20].
Figure 5. Overview of LettuceTrack [66]: (A) overview of precision spraying robot; (B) detection flow of LettuceTrack.
Figure 6. Targeted image extraction process during the characteristic wavelength spectrum [72].
Figure 7. The proposed bounding box-based YOLO model prediction process for detecting defective seedlings [77].
Figure 8. Instances of deep learning-powered lettuce weeding devices: (a) conveyor belt lettuce intra-row weeding system [5]; (b) diagram of mechanical-laser collaborative intra-row weeding robot identifying and removing weeds in lettuce field [105]; (c) overview of UC Davis intra-row weeding robot [108]; (d) overview of all-round intelligent weeding machine [109].
Table 1. Classification of deep learning techniques and their application in agriculture.
| Type | Methods | Agricultural Applications |
|---|---|---|
| Supervised or discriminative learning | CNN | Crop health monitoring, lettuce growth stage recognition, and pest and disease detection. |
| | RNN, LSTM, GRU | Environmental parameter prediction (e.g., temperature, humidity, light variation). |
| | DNN | Soil quality assessment and crop yield prediction. |
| Unsupervised or generative learning | GAN | Synthetic crop disease image generation for dataset augmentation. |
| | VAE | Virtual crop growth stage image generation to enhance deep learning model generalization. |
| | DBN, RBM | Soil composition analysis and unsupervised pest and disease classification. |
| Hybrid learning | CNN + LSTM | Integration of image and environmental data for lettuce growth monitoring and yield prediction. |
| | AE + CNN | Autonomous farm monitoring for real-time tracking of lettuce cultivation areas. |
| | Transformer | High-precision classification for lettuce growth stage recognition. |
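The CNN + LSTM hybrid in Table 1 — per-observation image features fused with environmental readings, then aggregated over time for yield prediction — can be illustrated with a minimal NumPy sketch. All shapes, weights, the 3×3 kernel, and the two-feature pooling are illustrative assumptions for exposition, not the architectures of the reviewed models:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(img, kernel):
    """Valid 2-D cross-correlation plus global mean/max pooling:
    a tiny stand-in for a CNN feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.array([out.mean(), out.max()])  # 2 pooled features

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; gates stacked as [input, forget, cell, output]."""
    n = h.size
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    g = np.tanh(z[2 * n:3 * n])           # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * n:]))      # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# 5 daily observations: a 16x16 "canopy image" plus 3 environment readings
T, hidden = 5, 4
kernel = rng.standard_normal((3, 3))
x_dim = 2 + 3  # 2 image features + 3 environmental values
W = rng.standard_normal((4 * hidden, x_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
w_out = rng.standard_normal(hidden) * 0.1

h, c = np.zeros(hidden), np.zeros(hidden)
for _ in range(T):
    img = rng.standard_normal((16, 16))   # placeholder canopy image
    env = rng.standard_normal(3)          # e.g., temperature, humidity, light
    x = np.concatenate([conv_features(img, kernel), env])
    h, c = lstm_step(x, h, c, W, U, b)

yield_pred = float(w_out @ h)  # scalar yield estimate from final hidden state
print(round(yield_pred, 4))
```

In practice the convolutional extractor and recurrent layers would be trained jointly in a framework such as PyTorch; the point here is only the data flow — spatial features and environmental covariates merged per time step, with the sequence summarized by the recurrent state.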
Table 2. Deep learning applications in pest and disease control.
| Research Direction | Main Methodology | Key Results | References |
|---|---|---|---|
| Pest and Disease Diagnosis | YOLOv5, YOLOv8, EfficientNet-v2s; CNN, VGG16, MobileNet; transfer learning; data augmentation | YOLOv5 performs best in lettuce leaf scorch disease detection; CNN achieves 100% accuracy in identifying four vegetable diseases; YOLO-EfficientNet combines object detection and fine-grained classification with a 96.18% F1-score | [54,58,60,61,62,63] |
| Precision Spraying | YOLOv5 + multi-target tracking (LettuceTrack); VGG16 combining disease identification with pesticide recommendation | YOLOv5 accurately tracks lettuce plants to avoid repeat spraying; VGG16 identifies pests with 99% accuracy and optimizes pesticide recommendations | [65,66] |
| Pesticide Residue Detection | SVM, LDA; NIR spectroscopy + deep belief networks (DBN); visible–NIR hyperspectral imaging + SAE-LSSVR; CNN-based residue detection model | DBN-SVM combined model achieves 95.00% detection accuracy on the test set; deep learning model achieves over 99.5% accuracy in lettuce elemental deficiency detection; hyperspectral + SCAE predicts Pb residues despite slightly reduced Cd prediction performance | [67,68,69,70,71,72] |
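The precision-spraying result in Table 2 — tracking lettuce plants across frames so no plant is sprayed twice — hinges on assigning persistent identities to detections. A minimal sketch of that idea, using a greedy IoU matcher in plain Python (the box coordinates, threshold, and class names are illustrative assumptions; LettuceTrack itself couples YOLOv5 with a dedicated multi-object tracker):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

class PlantTracker:
    """Greedy IoU matcher: a detection overlapping a known track keeps that
    track's ID; unmatched detections open new tracks (new, unsprayed plants)."""
    def __init__(self, thresh=0.3):
        self.tracks = {}      # track id -> last seen box
        self.sprayed = set()  # ids already treated
        self.next_id = 0
        self.thresh = thresh

    def update(self, detections):
        ids = []
        for box in detections:
            best, best_iou = None, self.thresh
            for tid, tbox in self.tracks.items():
                score = iou(box, tbox)
                if score > best_iou:
                    best, best_iou = tid, score
            if best is None:              # no track overlaps: new plant
                best = self.next_id
                self.next_id += 1
            self.tracks[best] = box
            ids.append(best)
        return ids

    def to_spray(self, ids):
        """Return only ids not yet sprayed, then mark them as treated."""
        fresh = [i for i in ids if i not in self.sprayed]
        self.sprayed.update(fresh)
        return fresh

tracker = PlantTracker()
frame1 = [(0, 0, 10, 10), (20, 0, 30, 10)]           # two plants detected
frame2 = [(1, 0, 11, 10), (20, 1, 30, 11), (40, 0, 50, 10)]  # robot advances
print(tracker.to_spray(tracker.update(frame1)))  # [0, 1]
print(tracker.to_spray(tracker.update(frame2)))  # [2]
```

The second frame re-detects the first two plants with slightly shifted boxes; the matcher recognizes them and only the newly visible plant (id 2) is queued for spraying. A production tracker would additionally use motion prediction and one-to-one assignment (e.g., Hungarian matching) to avoid the double-match pitfall of this greedy sketch.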

Share and Cite

Qin, Y.-M.; Tu, Y.-H.; Li, T.; Ni, Y.; Wang, R.-F.; Wang, H. Deep Learning for Sustainable Agriculture: A Systematic Review on Applications in Lettuce Cultivation. Sustainability 2025, 17, 3190. https://doi.org/10.3390/su17073190

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
