AI-Based Sensors and Sensing Systems for Smart Agriculture

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 67551

Special Issue Editor


Prof. Dr. Wonsuk (Daniel) Lee
Guest Editor
Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL 32611-0570, USA
Interests: precision agriculture; artificial intelligence; sensor development; machine vision/image processing; GNSS/GIS; variable rate technology; yield mapping; machine systems design; instrumentation; remote sensing; NIR spectroscopy; farm automation

Special Issue Information

Dear Colleagues,

Smart agriculture has been widely adopted for profitable and sustainable crop and animal production. One of its most important aspects is detecting the status and properties of soil, crops, and livestock so that proper and timely management practices can be implemented.

Numerous sensors and sensing systems have played a major role in smart agriculture, for example, GNSS receivers for positioning, various imaging systems for crop yield estimation and phenotyping, IoT devices for soil moisture and pH, soil electrical conductivity sensors, optical sensors for measuring soil and crop reflectance, pressure and flow sensors for variable rate application, LiDAR for canopy volume and autonomous navigation, weather stations, and wireless sensor networks.

Recently, artificial intelligence has rapidly been adopted for many smart agricultural applications such as yield prediction, disease and pest detection, spot sprayers for weeds, remote sensing of crop quality, phenotyping, and genotyping. AI technologies help analyze big data generated from crop and animal production, make useful inferences, and provide valuable decision support to farmers. 

This Special Issue aims to showcase excellent implementations of AI-based sensors and sensing systems for smart agricultural applications and to provide opportunities for researchers to publish their work related to the topic. Articles that address any sensors and sensing systems implemented with AI and applied to smart crop and animal production are welcome.

This Special Issue welcomes original research articles and reviews. Research areas may include (but are not limited to) the following topics:

  • Crop yield estimation and prediction;
  • Detection of nutrient status, water stress, diseases, and insect damage;
  • Smart irrigation;
  • Autonomous navigation;
  • Precision planting;
  • Weed management;
  • Phenotyping and genotyping;
  • Postharvest quality evaluation;
  • Machine vision applications;
  • Remote sensing applications;
  • Robotic operations;
  • Animal health monitoring and welfare;
  • Automated milking and feeding;
  • Manure management;
  • Predictive crop and animal data analytics.

I look forward to receiving your contributions.

Prof. Dr. Wonsuk (Daniel) Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Animal welfare
  • Crop quality assessment
  • Decision support system
  • Drones
  • IoT
  • Harvesting
  • Phenotyping
  • Robots
  • Wireless sensor network
  • Yield prediction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (20 papers)


Research

19 pages, 21627 KiB  
Article
Agricultural Robot-Centered Recognition of Early-Developmental Pest Stage Based on Deep Learning: A Case Study on Fall Armyworm (Spodoptera frugiperda)
by Hammed Obasekore, Mohamed Fanni, Sabah Mohamed Ahmed, Victor Parque and Bo-Yeong Kang
Sensors 2023, 23(6), 3147; https://doi.org/10.3390/s23063147 - 15 Mar 2023
Cited by 8 | Viewed by 3097
Abstract
Accurately detecting early developmental stages of insect pests (larvae) from off-the-shelf stereo camera sensor data using deep learning holds several benefits for farmers, from simple robot configuration to early neutralization of this less agile but more disastrous stage. Machine vision technology has advanced from bulk spraying to precise dosage to direct application on infected crops. However, these solutions primarily focus on adult pests and post-infestation stages. This study proposes using a front-pointing red-green-blue (RGB) stereo camera mounted on a robot to identify pest larvae using deep learning. The camera feeds data into our deep-learning algorithms, which were evaluated across eight ImageNet pre-trained models. The combination of an insect classifier and a detector replicates peripheral and foveal line-of-sight vision, respectively, on our custom pest larvae dataset. This enables a trade-off between the robot's smooth operation and localization precision for the captured pest, which first appears in the farsighted section. The nearsighted part then utilizes our faster region-based convolutional neural network (Faster R-CNN)-based pest detector to localize precisely. Simulating the robot dynamics using CoppeliaSim and MATLAB/Simulink with the Deep Learning Toolbox demonstrated the feasibility of the proposed system. Our deep-learning classifier and detector achieved 99% accuracy and a mean average precision of 0.84, respectively.
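
The coarse-to-fine strategy described above, a lightweight classifier watching the far view and a detector localizing only when larvae are suspected, can be sketched with off-the-shelf PyTorch components. This is a minimal illustration of the idea, not the authors' trained system; the class count, input size, and confidence threshold are assumptions.

```python
import torch
from torchvision import models
from torchvision.transforms.functional import resize

# "Peripheral" classifier (larva present/absent) and "foveal" Faster R-CNN
# detector; untrained here, standing in for the paper's fine-tuned models.
classifier = models.resnet18(num_classes=2).eval()
detector = models.detection.fasterrcnn_resnet50_fpn(num_classes=2).eval()

def process_frame(frame):  # frame: float tensor (3, H, W) in [0, 1]
    coarse = resize(frame, [224, 224])                 # cheap far-view pass
    with torch.no_grad():
        if classifier(coarse.unsqueeze(0)).argmax(1).item() == 1:
            det = detector([frame])[0]                 # precise near-view pass
            keep = det["scores"] > 0.5                 # assumed threshold
            return det["boxes"][keep]
    return None

boxes = process_frame(torch.rand(3, 480, 640))
```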

19 pages, 6529 KiB  
Article
Classification of Fluorescently Labelled Maize Kernels Using Convolutional Neural Networks
by Zilong Wang, Ben Guan, Wenbo Tang, Suowei Wu, Xuejie Ma, Hao Niu, Xiangyuan Wan and Yong Zang
Sensors 2023, 23(5), 2840; https://doi.org/10.3390/s23052840 - 6 Mar 2023
Cited by 3 | Viewed by 2193
Abstract
Accurate real-time classification of fluorescently labelled maize kernels is important for the industrial application of advanced maize breeding techniques. It is therefore necessary to develop a real-time classification device and recognition algorithm for fluorescently labelled maize kernels. In this study, a machine vision (MV) system capable of identifying fluorescent maize kernels in real time was designed using a fluorescent protein excitation light source and a filter to achieve optimal detection. A high-precision method for identifying fluorescent maize kernels based on a YOLOv5s convolutional neural network (CNN) was developed. The kernel sorting performance of the improved YOLOv5s model, as well as other YOLO models, was analysed and compared. The results show that using a yellow LED excitation light source combined with an industrial camera filter with a central wavelength of 645 nm achieves the best recognition of fluorescent maize kernels. The improved YOLOv5s algorithm increases the recognition accuracy of fluorescent maize kernels to 96%. This study provides a feasible technical solution for the high-precision, real-time classification of fluorescent maize kernels and has universal technical value for the efficient identification and classification of various fluorescently labelled plant seeds.
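
As a rough sketch of how a trained YOLOv5s model might be applied to kernel images, the snippet below loads custom weights through torch.hub; the weights file, image name, and class mapping are hypothetical.

```python
import torch

# 'kernels.pt' stands in for YOLOv5s weights fine-tuned on maize kernel images.
model = torch.hub.load("ultralytics/yolov5", "custom", path="kernels.pt")
results = model("kernel_tray.jpg")       # hypothetical image from the MV system
det = results.xyxy[0]                    # [x1, y1, x2, y2, confidence, class]
fluorescent = det[det[:, 5] == 0]        # assuming class 0 = fluorescent
print(f"{len(fluorescent)} fluorescent kernels detected")
```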

17 pages, 2296 KiB  
Article
Precision Agriculture Using Soil Sensor Driven Machine Learning for Smart Strawberry Production
by Rania Elashmawy and Ismail Uysal
Sensors 2023, 23(4), 2247; https://doi.org/10.3390/s23042247 - 16 Feb 2023
Cited by 10 | Viewed by 3712
Abstract
Ubiquitous sensor networks collecting real-time data have been adopted in many industrial settings. This paper describes the second stage of an end-to-end system integrating modern hardware and software tools for precise monitoring and control of soil conditions. In the proposed framework, data are collected by a sensor network distributed in the soil of a commercial strawberry farm to infer the ultimate physicochemical characteristics of the fruit at the point of harvest around the sensor locations. Empirical and statistical models are jointly investigated in the form of neural networks and Gaussian process regression models to predict the most significant physicochemical qualities of strawberries. Color, for instance, either by itself or when combined with the soluble solids content (sweetness), can be predicted within as little as 9% and 14% of their expected range of values, respectively. This level of accuracy will ultimately enable the implementation of the next phase in controlling the soil conditions, where data-driven quality and resource-use trade-offs can be realized for sustainable and high-quality strawberry production.
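
A Gaussian process regression model of the kind investigated here can be set up in a few lines with scikit-learn; the synthetic features and target below are placeholders for the farm's soil measurements and fruit quality labels.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

# Stand-in data: soil features per location (e.g., moisture, EC, temperature)
# and a fruit quality target at harvest (e.g., a color index).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
mean, std = gpr.predict(X_te, return_std=True)  # predictive mean + uncertainty
```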

20 pages, 4834 KiB  
Article
Y-Net: Identification of Typical Diseases of Corn Leaves Using a 3D-2D Hybrid CNN Model Combined with a Hyperspectral Image Band Selection Module
by Yinjiang Jia, Yaoyao Shi, Jiaqi Luo and Hongmin Sun
Sensors 2023, 23(3), 1494; https://doi.org/10.3390/s23031494 - 29 Jan 2023
Cited by 8 | Viewed by 2782
Abstract
Corn diseases are one of the significant constraints to high-quality corn production, and accurate identification of corn diseases is of great importance for precise disease control. Corn anthracnose and brown spot are typical corn diseases whose early symptoms are similar and easily misidentified by the naked eye. To address this problem, this paper proposes a three-dimensional-two-dimensional (3D-2D) hybrid convolutional neural network (CNN) model combined with a band selection module, based on hyperspectral image data, which combines band selection, an attention mechanism, spatial-spectral feature extraction, and classification into a unified optimization process. The model first feeds hyperspectral images to both the band selection module and the attention mechanism module and then sums the outputs of the two modules as inputs to a 3D-2D hybrid CNN, resulting in a Y-shaped architecture named Y-Net. The results show that the spectral bands selected by Y-Net's band selection module achieve more reliable classification performance than traditional feature selection methods. Y-Net obtained the best classification accuracy compared with support vector machines, one-dimensional (1D) CNNs, and two-dimensional (2D) CNNs. After pruning, the trained Y-Net's model size was reduced to one-third of the original while the accuracy reached 98.34%. These results can provide new ideas and references for disease identification in corn and other crops.
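
The band selection idea, learning per-band weights end-to-end so that informative wavelengths dominate, can be illustrated with a tiny PyTorch module; the softmax weighting below is one plausible reading of such a module, not Y-Net's exact design.

```python
import torch
import torch.nn as nn

class BandSelection(nn.Module):
    """Learnable per-band weights over B spectral bands (a sketch of the idea)."""
    def __init__(self, bands):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(bands))

    def forward(self, x):              # x: (N, B, H, W) hyperspectral cube
        w = torch.softmax(self.weights, dim=0)
        return x * w.view(1, -1, 1, 1)

cube = torch.randn(4, 100, 64, 64)     # toy 100-band cube
selected = BandSelection(100)(cube)    # re-weighted bands feed the 3D-2D CNN
```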

19 pages, 7796 KiB  
Article
A Soft Sensor to Estimate the Opening of Greenhouse Vents Based on an LSTM-RNN Neural Network
by Mounir Guesbaya, Francisco García-Mañas, Francisco Rodríguez and Hassina Megherbi
Sensors 2023, 23(3), 1250; https://doi.org/10.3390/s23031250 - 21 Jan 2023
Cited by 8 | Viewed by 2560
Abstract
In greenhouses, sensors are needed to measure the variables of interest. They help farmers and allow automatic controllers to determine control actions to regulate the environmental conditions that favor crop growth. This paper focuses on the problem of the lack of monitoring and control systems in traditional Mediterranean greenhouses. In such greenhouses, most farmers manually operate the opening of the vents to regulate the temperature during the daytime. Therefore, the state of vent opening is not recorded because control systems are not usually installed due to economic reasons. The solution presented in this paper consists of developing a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) as a soft sensor to estimate vent opening using the measurements of different inside and outside greenhouse climate variables as input data. A dataset from a traditional greenhouse located in Almería (Spain) was used. The data were processed and analyzed to study the relationships between the measured climate variables and the state of vent opening, both statistically (using correlation coefficients) and graphically (with regression analysis). The dataset (with 81 recorded days) was then used to train, validate, and test a set of candidate LSTM-based networks for the soft sensor. The results show that the developed soft sensor can estimate the actual opening of the vents with a mean absolute error of 4.45%, which encourages integrating the soft sensor as part of decision support systems for farmers and using it to calculate other essential variables, such as greenhouse ventilation rate.
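
A minimal sketch of such an LSTM soft sensor in PyTorch follows; the six input variables and hidden size are assumptions standing in for the paper's selected climate measurements.

```python
import torch
import torch.nn as nn

class VentSoftSensor(nn.Module):
    """LSTM regressor: climate-variable sequences -> vent opening (%)."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # estimate at the last time step

# e.g., inside/outside temperature, humidity, radiation, wind speed/direction
model = VentSoftSensor(n_features=6)
opening = model(torch.randn(8, 30, 6))  # (8, 1), a percentage after scaling
```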

20 pages, 2502 KiB  
Article
On-Device Intelligence for Malfunction Detection of Water Pump Equipment in Agricultural Premises: Feasibility and Experimentation
by Dimitrios Loukatos, Maria Kondoyanni, Gerasimos Alexopoulos, Chrysanthos Maraveas and Konstantinos G. Arvanitis
Sensors 2023, 23(2), 839; https://doi.org/10.3390/s23020839 - 11 Jan 2023
Cited by 8 | Viewed by 2673
Abstract
The digital transformation of agriculture is a promising necessity for tackling the increasing nutritional needs on Earth and the degradation of natural resources. Toward this end, innovative electronic components and accompanying software can be exploited to detect malfunctions in typical agricultural equipment, such as water pumps, thereby preventing potential failures and water and economic losses. In this context, this article highlights the steps for adding intelligence to sensors installed on pumps in order to intercept and deliver malfunction alerts, based on cheap in situ microcontrollers, sensors, and radios, together with easy-to-use software tools. This involves efficient data gathering, neural network model training, generation, optimization, and execution procedures, which are further facilitated by an experimental platform for generating diverse disturbances of the water pump operation. The best-performing variant of the malfunction detection model achieves an accuracy of about 93% based on vibration data. The implemented system follows the on-device intelligence approach, which decentralizes processing and networking tasks, thereby simplifying installation and reducing overall costs. In addition to the necessary implementation variants and details, a characteristic set of evaluation results is presented, as well as directions for future exploitation.
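
On-device deployment of a small malfunction classifier typically means training in a full framework and exporting a quantized model for the microcontroller. The sketch below uses Keras and the TensorFlow Lite converter with random stand-in vibration windows; the feature length, architecture, and file name are illustrative.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: windows of vibration magnitudes -> normal (0) / fault (1).
X = np.random.rand(512, 128).astype("float32")
y = (X.mean(axis=1) > 0.5).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for the MCU
open("pump_fault.tflite", "wb").write(converter.convert())
```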

11 pages, 2144 KiB  
Article
An IoT-Based Data-Driven Real-Time Monitoring System for Control of Heavy Metals to Ensure Optimal Lettuce Growth in Hydroponic Set-Ups
by Sambandh Bhusan Dhal, Shikhadri Mahanta, Jonathan Gumero, Nick O’Sullivan, Morayo Soetan, Julia Louis, Krishna Chaitanya Gadepally, Snehadri Mahanta, John Lusher and Stavros Kalafatis
Sensors 2023, 23(1), 451; https://doi.org/10.3390/s23010451 - 1 Jan 2023
Cited by 17 | Viewed by 4005
Abstract
Heavy metal concentrations that must be maintained in aquaponic environments for plant growth have been a source of concern for many decades, as heavy metals cannot be completely eliminated in a commercial set-up. Our goal was to create a low-cost real-time smart sensing and actuation system for controlling heavy metal concentrations in aquaponic solutions. Our solution entails sensing the nutrient concentrations in the hydroponic solution, specifically calcium, sulfate, and phosphate, and sending them to a machine learning (ML) model hosted on an Android application. The ML algorithm used was a Linear Support Vector Machine (Linear-SVM) trained on the top three nutrient predictors, chosen after applying a pipeline of feature selection methods (a pairwise correlation matrix, ExtraTreesClassifier, and XGBoost classifier) to a dataset recorded from three aquaponic farms in South-East Texas. The ML algorithm was then hosted on a cloud platform to output the maximum tolerable levels of iron, copper, and zinc in real time, using the concentrations of phosphorus, calcium, and sulfur as inputs, and was controlled using an array of dispensing and detecting equipment in a closed-loop system.
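
The described pipeline, ranking nutrient predictors with tree-based importances and training a linear SVM on the top features, can be approximated with scikit-learn as follows; the synthetic data stand in for the farm recordings.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Stand-in readings: [Ca, S, P] concentrations -> tolerable-level class.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.0, -0.5, 0.3]) > 0).astype(int)

# Feature ranking step, as in the described pipeline (ExtraTrees importances).
importances = ExtraTreesClassifier(random_state=0).fit(X, y).feature_importances_
top = np.argsort(importances)[::-1][:3]   # keep the top predictors

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X[:, top], y)
```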

12 pages, 4560 KiB  
Article
Imaging and Deep Learning Based Approach to Leaf Wetness Detection in Strawberry
by Arth M. Patel, Won Suk Lee and Natalia A. Peres
Sensors 2022, 22(21), 8558; https://doi.org/10.3390/s22218558 - 7 Nov 2022
Cited by 4 | Viewed by 2168
Abstract
The Strawberry Advisory System (SAS) is a tool developed to help Florida strawberry growers determine the risk of common fungal diseases and the need for fungicide applications. Leaf wetness duration (LWD) is one of the important parameters in SAS disease risk modeling. By accurately measuring the LWD, disease risk can be better assessed, leading to less fungicide use and more economic benefits to the farmers. This research aimed to develop and test a more accurate leaf wetness detection system than traditional leaf wetness sensors. In this research, a leaf wetness detection system was developed and tested using color imaging of a reference surface and a convolutional neural network (CNN), which is one of the artificial-intelligence-based learning methods. The system was placed at two separate field locations during the 2021–2022 strawberry-growing season. The results from the developed system were compared against manual observation to determine the accuracy of the system. It was found that the AI- and imaging-based system had high accuracy in detecting wetness on a reference surface. The developed system can be used in SAS for determining accurate disease risks and fungicide recommendations for strawberry production and allows the expansion of the system to multiple locations.
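
A wet/dry classifier for reference-surface photos could be built by fine-tuning a small pretrained CNN, roughly as below; the backbone choice, two-class setup, and training details are illustrative, not the authors' exact network.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet18 fine-tuned as a wet/dry classifier for reference-surface crops.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: dry, wet

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)           # stand-in surface images
labels = torch.tensor([0, 1, 1, 0])            # stand-in wet/dry labels
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```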

14 pages, 4170 KiB  
Article
Correlation Study between the Organic Compounds and Ripening Stages of Oil Palm Fruitlets Based on the Raman Spectra
by Muhammad Haziq Imran Md Azmi, Fazida Hanim Hashim, Aqilah Baseri Huddin and Mohd Shaiful Sajab
Sensors 2022, 22(18), 7091; https://doi.org/10.3390/s22187091 - 19 Sep 2022
Cited by 6 | Viewed by 2404
Abstract
The degree of maturity of oil palm fresh fruit bunches (FFB) at the time of harvest heavily affects oil production, which is expressed in the oil extraction rate (OER). Oil palm fruit could be harvested at optimum maturity, maximizing oil yield, if a rapid, non-intrusive, and accurate method were available to determine its level of maturity. This study demonstrates the potential of Raman spectroscopy for determining the maturity of oil palm fruitlets. A ripeness classification algorithm was developed using machine learning, with the components of organic compounds such as β-carotene and amino acids as parameters to distinguish the ripeness of the fruit. In this study, the spectra of 47 oil palm fruitlets from three ripeness levels (under ripe, ripe, and over ripe) were examined. To classify the oil palm fruitlets into the three maturity categories, the extracted features were tested with 31 machine learning models. The Medium KNN, Weighted KNN, and Trilayered Neural Network classifiers achieved the highest overall accuracy of 90.9% using four significant features extracted from the peaks as predictors. In conclusion, Raman spectroscopy may offer a precise and efficient means of evaluating the maturity level of oil palm fruitlets.
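
A weighted KNN classifier of the kind reported (distance-weighted neighbors) can be reproduced in scikit-learn; the feature matrix below is a random stand-in for the four peak-derived Raman features, and the neighbor count is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: four peak features per fruitlet, three ripeness classes.
rng = np.random.default_rng(2)
X = rng.normal(size=(47, 4))
y = rng.integers(0, 3, size=47)   # 0 = under ripe, 1 = ripe, 2 = over ripe

knn = KNeighborsClassifier(n_neighbors=5, weights="distance")  # weighted KNN
scores = cross_val_score(knn, X, y, cv=5)
print(scores.mean())
```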

16 pages, 1409 KiB  
Article
High Precision Classification of Resting and Eating Behaviors of Cattle by Using a Collar-Fitted Triaxial Accelerometer Sensor
by Kim Margarette Corpuz Nogoy, Sun-il Chon, Ji-hwan Park, Saraswathi Sivamani, Dong-Hoon Lee and Seong Ho Choi
Sensors 2022, 22(16), 5961; https://doi.org/10.3390/s22165961 - 9 Aug 2022
Cited by 13 | Viewed by 2723
Abstract
Cattle are less active than humans. Hence, it was hypothesized in this study that transmitting acceleration signals at a 1 min sampling interval to reduce storage load has the potential to improve the performance of motion sensors without affecting the precision of behavior classification. The behavior classification performance, in terms of precision, sensitivity, and F1-score, of the 1 min serial datasets segmented in 3, 4, and 5 min window sizes was determined for nine algorithms. The collar-fitted triaxial accelerometer sensor was attached on the right side of the neck of two fattening Korean steers (age: 20 months), and the steers were observed for 6 h on day one, 10 h on day two, and 7 h on day three. The acceleration signals and visual observations were time synchronized and analyzed accordingly. Resting behavior was most correctly classified using the combination of a 4 min window size and the long short-term memory (LSTM) algorithm, which resulted in 89% precision, 81% sensitivity, and an 85% F1-score. High classification performance (79% precision, 88% sensitivity, and 83% F1-score) was also obtained in classifying eating behavior using the same method (4 min window size and an LSTM algorithm). The most poorly classified behavior was active behavior. This study showed that a collar-fitted triaxial sensor measuring 1 min serial signals can be used as a tool for detecting the resting and eating behaviors of cattle with high precision by segmenting the acceleration signals in a 4 min window size and using the LSTM classification algorithm.
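
The core recipe, segmenting 1 min triaxial samples into 4 min windows and classifying each window with an LSTM, looks roughly like this in PyTorch; the layer sizes and the random signal are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(signal, size=4):
    """Segment 1 min triaxial samples into non-overlapping size-minute windows."""
    n = len(signal) // size
    return np.stack([signal[i * size:(i + 1) * size] for i in range(n)])

acc = np.random.rand(600, 3).astype("float32")   # 10 h of 1 min (x, y, z) samples
windows = torch.from_numpy(make_windows(acc))    # (150, 4, 3)

lstm = nn.LSTM(input_size=3, hidden_size=32, batch_first=True)
head = nn.Linear(32, 3)                          # resting / eating / active
out, _ = lstm(windows)
logits = head(out[:, -1])                        # one prediction per window
```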

14 pages, 4767 KiB  
Article
Development of a Lightweight Crop Disease Image Identification Model Based on Attentional Feature Fusion
by Zekai Cheng, Meifang Liu, Rong Qian, Rongqing Huang and Wei Dong
Sensors 2022, 22(15), 5550; https://doi.org/10.3390/s22155550 - 25 Jul 2022
Cited by 4 | Viewed by 2327
Abstract
Crop diseases are one of the important factors affecting crop yield and quality and are also an important research target in the field of agriculture. In order to quickly and accurately identify crop diseases, help farmers control crop diseases in time, and reduce crop losses, and inspired by the application of convolutional neural networks in image identification, we propose a lightweight crop disease image identification model based on attentional feature fusion named DSGIResNet_AFF, which introduces self-built lightweight residual blocks, inverted residual blocks, and attentional feature fusion modules on the basis of ResNet18. We apply the model to the identification of rice and corn diseases, and the results show the effectiveness of the model on a real dataset. Additionally, the model is compared with other convolutional neural networks (AlexNet, VGG16, ShuffleNetV2, MobileNetV2, MobileNetV3-Small, and MobileNetV3-Large), and the experimental results show that the accuracy, sensitivity, F1-score, and AUC of the proposed DSGIResNet_AFF are 98.30%, 98.23%, 98.24%, and 99.97%, respectively, which are better than those of the other network models, while the complexity of the model is significantly reduced (compared with the base model ResNet18, the number of parameters is reduced by 94.10% and the floating-point operations (FLOPs) are reduced by 86.13%). DSGIResNet_AFF can be applied to mobile devices and become a useful tool for identifying crop diseases.

16 pages, 8692 KiB  
Article
Learning-Based Slip Detection for Robotic Fruit Grasping and Manipulation under Leaf Interference
by Hongyu Zhou, Jinhui Xiao, Hanwen Kang, Xing Wang, Wesley Au and Chao Chen
Sensors 2022, 22(15), 5483; https://doi.org/10.3390/s22155483 - 22 Jul 2022
Cited by 14 | Viewed by 3109
Abstract
Robotic harvesting research has seen significant achievements in the past decade, with breakthroughs in machine vision, robot manipulation, autonomous navigation, and mapping. However, the missing capability of obstacle handling during the grasping process has severely reduced harvest success rates and limited the overall performance of robotic harvesting. This work focuses on detecting and handling slip caused by leaf interference, proposing solutions to robotic grasping in an unstructured environment. Through analysis of the motion and force of fruit grasping under leaf interference, the connection between object slip caused by leaf interference and inadequate harvest performance is identified for the first time in the literature. A learning-based perception and manipulation method is proposed to detect slip that causes problematic grasps of objects, allowing the robot to react in a timely manner. Our results indicate that the proposed algorithm detects grasp slip with an accuracy of 94%. The proposed sensing-based manipulation demonstrates great potential in robotic fruit harvesting and could be extended to other pick-and-place applications.

29 pages, 37628 KiB  
Article
deepNIR: Datasets for Generating Synthetic NIR Images and Improved Fruit Detection System Using Deep Learning Techniques
by Inkyu Sa, Jong Yoon Lim, Ho Seok Ahn and Bruce MacDonald
Sensors 2022, 22(13), 4721; https://doi.org/10.3390/s22134721 - 22 Jun 2022
Cited by 15 | Viewed by 5031
Abstract
This paper presents datasets utilised for synthetic near-infrared (NIR) image generation and bounding-box level fruit detection systems. A high-quality dataset is one of the essential building blocks that can lead to success in model generalisation and the deployment of data-driven deep neural networks. In particular, synthetic data generation tasks often require more training samples than other supervised approaches. Therefore, in this paper, we share NIR+RGB datasets re-processed from two public datasets (nirscene and SEN12MS), expand our previous study, deepFruits, and present our novel NIR+RGB sweet pepper (capsicum) dataset. We oversampled the original nirscene dataset at 10, 100, 200, and 400 ratios, yielding a total of 127 k pairs of images. From the SEN12MS satellite multispectral dataset, we selected the Summer (45 k) and All seasons (180 k) subsets and applied a simple yet important conversion: digital number (DN) to pixel value conversion followed by image standardisation. Our sweet pepper dataset consists of 1615 pairs of NIR+RGB images collected from commercial farms. We quantitatively and qualitatively demonstrate that these NIR+RGB datasets are sufficient for synthetic NIR image generation, achieving Fréchet inception distances (FIDs) of 11.36, 26.53, and 40.15 for the nirscene1, SEN12MS, and sweet pepper datasets, respectively. In addition, we release manual annotations of 11 fruit bounding-box datasets that can be exported in various formats using a cloud service. Four newly added fruits (blueberry, cherry, kiwi, and wheat) extend the seven presented in our previous deepFruits project (apple, avocado, capsicum, mango, orange, rockmelon, and strawberry) to 11 bounding-box datasets in total. The dataset contains 162 k bounding-box instances and is ready to use from a cloud service. For evaluation of the dataset, a YOLOv5 single-stage detector was exploited, reporting mean average precision (mAP[0.5:0.95]) results ranging from 0.49 to 0.812. We hope these datasets are useful and serve as a baseline for future studies.
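
FID scores like those reported can be computed with torchmetrics (which wraps an InceptionV3 feature extractor and requires the torchmetrics[image] extra); the random uint8 batches below stand in for real and synthetic NIR images replicated to three channels.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID between real and synthetic NIR images (uint8 tensors, NCHW).
fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 255, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 255, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())  # lower = closer to the real-image distribution
```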

18 pages, 5523 KiB  
Article
Research on an Improved Segmentation Recognition Algorithm of Overlapping Agaricus bisporus
by Shuzhen Yang, Bowen Ni, Wanhe Du and Tao Yu
Sensors 2022, 22(10), 3946; https://doi.org/10.3390/s22103946 - 23 May 2022
Cited by 13 | Viewed by 2518
Abstract
The accurate identification of overlapping Agaricus bisporus in a factory environment is one of the challenges faced by automated picking. In order to better segment the complex adhesion between mushrooms, this paper proposes a segmentation recognition algorithm for overlapping Agaricus bisporus. The algorithm calculates a global gradient threshold and divides the image according to edge gradient features to obtain a binary image. The binary image is then filtered and morphologically processed, the contour of the overlapping Agaricus bisporus area is obtained by Canny edge detection, the convex hull and concave areas are extracted for polygon simplification, and vertices are extracted using Harris corner detection to determine the segmentation points. After splitting the contour into fragments at the segmentation points, a branch definition algorithm merges and groups all contours belonging to the same mushroom. Finally, a least squares ellipse fitting algorithm and a minimum distance circle fitting algorithm reconstruct the outline of each Agaricus bisporus, yielding the information required for picking. The experimental results show that this method effectively overcomes the influence of uneven illumination during image acquisition and adapts well to complex planting environments. The recognition rate for overlapping Agaricus bisporus is more than 96%, and the average coordinate deviation rate of the algorithm is less than 1.59%.
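
Several stages of this classical pipeline map directly onto OpenCV calls; the sketch below covers gradient-based binarization, morphological clean-up, Canny contours, and ellipse fitting, with the image path and thresholds as assumptions (the fragment-grouping step is omitted).

```python
import cv2
import numpy as np

img = cv2.imread("mushroom_bed.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

# Gradient-based binarization (Otsu threshold), then morphological clean-up.
grad = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
_, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

edges = cv2.Canny(binary, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if len(c) >= 5:                    # fitEllipse needs at least 5 points
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        # (cx, cy) and max(w, h) / 2 approximate the cap centre and radius
```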

17 pages, 23940 KiB  
Article
Supervised and Weakly Supervised Deep Learning for Segmentation and Counting of Cotton Bolls Using Proximal Imagery
by Shrinidhi Adke, Changying Li, Khaled M. Rasheed and Frederick W. Maier
Sensors 2022, 22(10), 3688; https://doi.org/10.3390/s22103688 - 12 May 2022
Cited by 12 | Viewed by 2737
Abstract
The total boll count from a plant is one of the most important phenotypic traits for cotton breeding and is also an important factor for growers to estimate the final yield. With the recent advances in deep learning, many supervised learning approaches have been implemented to perform phenotypic trait measurement from images for various crops, but few studies have been conducted to count cotton bolls from field images. Supervised learning models require a vast number of annotated images for training, which has become a bottleneck for machine learning model development. The goal of this study was to develop both fully supervised and weakly supervised deep learning models to segment and count cotton bolls from proximal imagery. A total of 290 RGB images of cotton plants from both potted (indoor and outdoor) and in-field settings were taken by consumer-grade cameras, and the raw images were divided into 4350 image tiles for model training and testing. Two supervised models (Mask R-CNN and S-Count) and two weakly supervised approaches (WS-Count and CountSeg) were compared in terms of boll count accuracy and annotation costs. The results revealed that the weakly supervised counting approaches performed well, with RMSE values of 1.826 and 1.284 for WS-Count and CountSeg, respectively, whereas the fully supervised models achieved RMSE values of 1.181 and 1.175 for S-Count and Mask R-CNN, respectively, when the number of bolls in an image patch was less than 10. In terms of data annotation costs, the weakly supervised approaches were at least 10 times more cost efficient than the supervised approach for boll counting. In the future, the deep learning models developed in this study can be extended to other plant organs, such as main stalks, nodes, and primary and secondary branches. Both the supervised and weakly supervised deep learning models for boll counting with low-cost RGB images can be used by cotton breeders, physiologists, and growers alike to improve crop breeding and yield estimation.

19 pages, 6904 KiB  
Article
Sugarcane Nitrogen Concentration and Irrigation Level Prediction Based on UAV Multispectral Imagery
by Xiuhua Li, Yuxuan Ba, Muqing Zhang, Mengling Nong, Ce Yang and Shimin Zhang
Sensors 2022, 22(7), 2711; https://doi.org/10.3390/s22072711 - 1 Apr 2022
Cited by 14 | Viewed by 3165
Abstract
Sugarcane is the main industrial crop for sugar production, and its growth status is closely related to fertilizer, water, and light input. Unmanned aerial vehicle (UAV)-based multispectral imagery is widely used for high-throughput phenotyping, since it can rapidly predict crop vigor at field scale. This study focused on the potential of drone multispectral images in predicting canopy nitrogen concentration (CNC) and irrigation levels for sugarcane. An experiment was carried out in a sugarcane field with three irrigation levels and five fertilizer levels. Multispectral images were acquired at an altitude of 40 m during the elongation stage. Partial least squares (PLS), backpropagation neural network (BPNN), and extreme learning machine (ELM) models were adopted to establish CNC prediction models based on various combinations of band reflectance and vegetation indices. The simple ratio pigment index (SRPI), normalized pigment chlorophyll index (NPCI), and normalized green-blue difference index (NGBDI) were selected as model inputs due to their higher grey relational degree with the CNC and lower correlation with one another. The PLS model based on the five-band reflectance and the three vegetation indices achieved the best accuracy (Rv = 0.79, RMSEv = 0.11). Support vector machine (SVM) and BPNN models were then used to classify the irrigation levels based on five spectral features that had high correlations with irrigation levels. SVM reached the higher accuracy of 80.6%. The results of this study demonstrate that high-resolution multispectral images can provide effective information for CNC prediction and irrigation level recognition for sugarcane.
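
A PLS model over band reflectances and vegetation indices is compact in scikit-learn; the eight synthetic predictors below stand in for the five bands plus SRPI, NPCI, and NGBDI, and the component count is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Stand-in inputs: five band reflectances + three vegetation indices;
# target: canopy nitrogen concentration (CNC).
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=4)
pls.fit(X_tr, y_tr)
print(pls.score(X_te, y_te))  # R^2 on held-out plots
```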

18 pages, 10604 KiB  
Article
An Automated, Clip-Type, Small Internet of Things Camera-Based Tomato Flower and Fruit Monitoring and Harvest Prediction System
by Unseok Lee, Md Parvez Islam, Nobuo Kochi, Kenichi Tokuda, Yuka Nakano, Hiroki Naito, Yasushi Kawasaki, Tomohiko Ota, Tomomi Sugiyama and Dong-Hyuk Ahn
Sensors 2022, 22(7), 2456; https://doi.org/10.3390/s22072456 - 23 Mar 2022
Cited by 16 | Viewed by 4592
Abstract
Automated crop monitoring using image analysis is commonly used in horticulture. Image-processing technologies have been used in several studies to monitor growth, determine harvest time, and estimate yield. However, accurately monitoring flowers and fruits, in addition to tracking their movements, is difficult because of their location on an individual plant among a cluster of plants. In this study, an automated clip-type Internet of Things (IoT) camera-based growth monitoring and harvest date prediction system was proposed and designed for tomato cultivation. Multiple clip-type IoT cameras were installed on trusses inside a greenhouse, and the growth of tomato flowers and fruits was monitored using deep learning-based blooming flower and immature fruit detection. In addition, the harvest date was calculated using these data and the temperatures inside the greenhouse. Our system was tested over three months. Harvest dates measured using our system were comparable with manually recorded data. These results suggest that the system can accurately detect anthesis and the number of immature fruits and predict the harvest date within an error range of ±2.03 days for tomato plants. This system can be used to support crop growth management in greenhouses.
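
Although the paper's exact harvest-date model is not given here, combining detected anthesis dates with greenhouse temperature suggests a thermal-time calculation along these lines; the base temperature and maturity threshold below are illustrative assumptions only.

```python
# A thermal-time sketch: accumulate growing degree days (GDD) from detected
# anthesis until an assumed maturity threshold is reached.
BASE_T = 10.0         # assumed base temperature, deg C (not from the paper)
MATURITY_GDD = 900.0  # assumed thermal time from anthesis to harvest

def predict_harvest_day(daily_mean_temps):
    gdd = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        gdd += max(t - BASE_T, 0.0)
        if gdd >= MATURITY_GDD:
            return day  # days after detected anthesis
    return None

print(predict_harvest_day([24.0] * 80))  # ~65 days at a constant 24 deg C
```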

17 pages, 24844 KiB  
Article
Traction Performance Evaluation of the Electric All-Wheel-Drive Tractor
by Seung-Yun Baek, Seung-Min Baek, Hyeon-Ho Jeon, Wan-Soo Kim, Yeon-Soo Kim, Tae-Yong Sim, Kyu-Hong Choi, Soon-Jung Hong, Hyunggun Kim and Yong-Joo Kim
Sensors 2022, 22(3), 785; https://doi.org/10.3390/s22030785 - 20 Jan 2022
Cited by 22 | Viewed by 4885
Abstract
This study aims to design, develop, and evaluate the traction performance of an electric all-wheel-drive (AWD) tractor based on its power transmission and electric systems. The power transmission system includes the electric motors, helical gear reducers, planetary gear reducers, and tires. The electric system consists of a battery pack and a charging system; an engine-generator and charger are installed to supply electric energy in emergency situations. The load measurement system consists of analog (current) and digital (battery voltage and electric motor rotational speed) components using a controller area network (CAN) bus. A traction test of the electric AWD tractor was performed by towing a test vehicle. The output torques of the tractor motors during the traction test were calculated using the current and torque curves provided by the motor manufacturer. The agricultural work performance was verified by comparing the torque and rotational speed (T-N) curve of the motor with the reduction ratio applied. Traction was calculated using the torque and specifications of the wheel, and traction performance was evaluated using tractive efficiency (TE) and dynamic ratio (DR). By comparing the electric AWD tractor's agricultural and traction performance with that of a conventional tractor, the results suggest a direction for improving electric drive systems in agricultural research.
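
Tractive efficiency in such a test is conventionally the ratio of drawbar power to axle power; a small helper makes the calculation concrete, with the example numbers purely illustrative.

```python
import math

def tractive_efficiency(drawbar_pull_n, speed_ms, axle_torque_nm, wheel_rpm):
    """TE = drawbar power / axle power (standard definition; values illustrative)."""
    drawbar_power = drawbar_pull_n * speed_ms                    # W
    axle_power = axle_torque_nm * wheel_rpm * 2.0 * math.pi / 60.0  # W
    return drawbar_power / axle_power

# e.g., 12 kN pull at 1.5 m/s with 2.4 kN·m total axle torque at 80 rpm
print(tractive_efficiency(12_000, 1.5, 2_400, 80))  # ~0.90
```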

18 pages, 3029 KiB  
Article
Monocular Depth Estimation with Self-Supervised Learning for Vineyard Unmanned Agricultural Vehicle
by Xue-Zhi Cui, Quan Feng, Shu-Zhi Wang and Jian-Hua Zhang
Sensors 2022, 22(3), 721; https://doi.org/10.3390/s22030721 - 18 Jan 2022
Cited by 7 | Viewed by 3527
Abstract
To find an economical solution for inferring the depth of the surrounding environment of unmanned agricultural vehicles (UAVs), a lightweight depth estimation model called MonoDA, based on a convolutional neural network, is proposed. A series of sequential frames from monocular videos are used to train the model. The model is composed of two subnetworks: a depth estimation subnetwork and a pose estimation subnetwork. The former is a modified version of U-Net that reduces the number of bridges, while the latter takes EfficientNet-B0 as its backbone to extract the features of sequential frames and predict the pose transformations between frames. A self-supervised strategy is adopted during training, meaning that depth labels are not needed; instead, the adjacent frames in the image sequence and the reprojection relation of the pose are used to train the model. The subnetworks' outputs (depth map and pose relation) are used to reconstruct the input frame, a self-supervised loss between the reconstructed input and the original input is calculated, and the loss is employed to update the parameters of the two subnetworks through the backward pass. Several experiments were conducted to evaluate the model's performance, and the results show that MonoDA has competitive accuracy on the KITTI raw dataset as well as our vineyard dataset. Our method is also insensitive to color. On the computing platform of our UAV's environment perception system, an NVIDIA Jetson TX2, the model runs at 18.92 FPS. In summary, our approach provides an economical solution for depth estimation using monocular cameras, achieving a good trade-off between accuracy and speed, and can serve as a novel auxiliary depth detection paradigm for UAVs.
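
The self-supervised objective, penalizing the difference between the reprojected (reconstructed) frame and the original, is commonly an L1 term plus a structural term; the sketch below uses a local-mean comparison as a cheap stand-in for full SSIM and may differ from MonoDA's exact loss.

```python
import torch
import torch.nn.functional as F

def photometric_loss(reconstructed, target, alpha=0.85):
    """L1 + simplified structural term between warped source and target frames.
    A sketch of the usual self-supervised objective, not MonoDA's exact loss."""
    l1 = (reconstructed - target).abs().mean()
    # Local-mean comparison as a cheap stand-in for full SSIM.
    mu_r = F.avg_pool2d(reconstructed, 3, 1, 1)
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    struct = (mu_r - mu_t).abs().mean()
    return alpha * struct + (1 - alpha) * l1

loss = photometric_loss(torch.rand(1, 3, 192, 640), torch.rand(1, 3, 192, 640))
```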

17 pages, 52502 KiB  
Article
Posture Detection of Individual Pigs Based on Lightweight Convolution Neural Networks and Efficient Channel-Wise Attention
by Yizhi Luo, Zhixiong Zeng, Huazhong Lu and Enli Lv
Sensors 2021, 21(24), 8369; https://doi.org/10.3390/s21248369 - 15 Dec 2021
Cited by 19 | Viewed by 3995
Abstract
In this paper, a lightweight channel-wise attention model is proposed for the real-time detection of five representative pig postures: standing, lying on the belly, lying on the side, sitting, and mounting. An optimized compressed block with a symmetrical structure is proposed based on model structure and parameter statistics, and efficient channel attention modules are used as a channel-wise mechanism to improve the model architecture. The results show that the algorithm's average precision in detecting standing, lying on the belly, lying on the side, sitting, and mounting is 97.7%, 95.2%, 95.7%, 87.5%, and 84.1%, respectively, and the inference speed is around 63 ms per posture image (CPU = i7, RAM = 8 GB). Compared with state-of-the-art models (ResNet50, Darknet53, CSPDarknet53, MobileNetV3-Large, and MobileNetV3-Small), the proposed model has fewer parameters and lower computational complexity. The statistical results of the postures (with continuous 24 h monitoring) show that some pigs eat in the early morning and that feeding peaks after new feed is provided, information that reflects the health of the pig herd for farmers.
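
The efficient channel attention (ECA) module referenced here is small enough to sketch in full: a global average pool followed by a 1D convolution across channels and a sigmoid gate; the kernel size below is the common default, not necessarily the paper's setting.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: 1D conv over channel descriptors (sketch)."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                         # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                    # global average pool -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # local cross-channel interaction
        return x * torch.sigmoid(y).view(x.size(0), -1, 1, 1)

feat = torch.randn(2, 64, 32, 32)
out = ECA()(feat)   # same shape, channel-reweighted
```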
