Review

Computer Vision in Self-Steering Tractors

by Eleni Vrochidou 1, Dimitrios Oustadakis 2, Axios Kefalas 2 and George A. Papakostas 2,*

1 Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
2 MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
* Author to whom correspondence should be addressed.
Machines 2022, 10(2), 129; https://doi.org/10.3390/machines10020129
Submission received: 7 January 2022 / Revised: 6 February 2022 / Accepted: 9 February 2022 / Published: 11 February 2022

Abstract

Automatic navigation of agricultural machinery is an important aspect of Smart Farming. Intelligent agricultural machinery applications increasingly rely on machine vision algorithms to guarantee enhanced in-field navigation accuracy by precisely locating the crop lines and mapping the navigation routes of vehicles in real time. This work presents an overview of vision-based tractor systems. More specifically, this work deals with (1) the system architecture, (2) the safety of usage, (3) the most commonly faced navigation errors, (4) the navigation control system of tractors and (5) state-of-the-art image processing algorithms for in-field navigation route mapping. In recent research, stereovision systems emerge as superior to monocular systems for real-time in-field navigation, demonstrating higher stability and control accuracy, especially in extensive crops such as cotton, sunflower, maize, etc. A detailed overview is provided for each topic with illustrative examples that focus on specific agricultural applications. Several computer vision algorithms based on different optical sensors have been developed for autonomous navigation in structured or semi-structured environments, such as orchards, yet are affected by illumination variations. The usage of multispectral imaging can overcome the encountered limitations of noise in images and successfully extract navigation paths in orchards by using a combination of the trees’ foliage with the background of the sky. Concisely, this work reviews the current status of self-steering agricultural vehicles and presents all basic guidelines for adapting computer vision in autonomous in-field navigation.

1. Introduction

Crop monitoring can lead to profitable decisions if properly managed. Recent advances in data analysis and management are turning agricultural data into the key elements for critical decision-making in favor of farmers. In-field acquired sensory data can be used as efficient information for effective resource management towards maximum production and sustainability [1]. Cloud computing has been subsequently developed to handle the unprecedented volume of acquired data, known as Big Data, creating new prospects for data-intensive techniques in the agricultural domain [2]. Data-based farm management, combined with robotics and the integration of Artificial Intelligence (AI) techniques, paves the way for the next generation of agriculture, namely Agriculture 5.0.
Agriculture 4.0, also known as Digital or Smart Farming, incorporates precision agriculture principles and data processing to assist farmers’ operational decisions [2,3]. Smart farming provides a practical and systematic tool that aims to detect unforeseen problems that are hard to notice either due to the lack of experienced workers or due to large-scale farms that are difficult to surveil. Going one step further, Agriculture 5.0 incorporates robotics and AI algorithms into already existing data-driven farms [1,4], implying autonomous decision systems and unmanned operations. The concept of Agriculture 5.0 along a crop management cycle is illustrated in Figure 1. The crop management system starts from the crop. Spatial measurements of crops imply on-the-go in-field monitoring platforms. The platforms collect data from the crop, soil and environment through remote sensing and provide spatial inputs to the decision system. AI algorithms are employed for effective real-time decision-making and action, driven by the decision system, occurs as a reaction to the sensory feedback. The process is repeated throughout the crop’s life cycle. The advent of robots [5] and non-invasive sensors with their simultaneous reduction in size [6], emerging digital technologies such as remote sensing [7], the Internet of Things (IoT) [8] and Cloud Computing [9], support the process.
On-the-go monitoring platforms are mounted on agricultural vehicles. Smart sensors can provide conventional agricultural vehicles, such as tractors, with adequate self-awareness and extend them into self-steering vehicles with built-in intelligence, able to act autonomously in the field. Therefore, the agricultural vehicles of Agriculture 5.0 are either self-steering tractors or autonomous robots [10].
Agricultural machinery, such as tractors, is meant to operate for many hours in large areas and perform repetitive tasks. The automatic navigation of agricultural vehicles can ensure the high intensity of automation of cultivation tasks, the enhanced precision of navigation between crop structures, an increase in operation safety and a decrease in human labor and operation costs. Autonomous navigation systems have been employed toward the mechanization of different agricultural tasks [10]: weeding, harvesting, spraying, planting, etc. Autonomy is obtained by sensing the environment. In general, automated navigation of agricultural tractors can be achieved by using either local positioning information or global positioning information [11]. Local information refers to the relative position of the tractor with respect to the crops, provided by sensors mounted on the tractor, such as vision sensors (cameras), laser scanners, ultrasonic sensors, odometers, Inertial Measurement Units (IMU), gyroscopes, digital compasses, etc. Global information refers to the absolute position of the tractor in the field, provided by the Global Positioning System (GPS).
This work focuses on the use of local positioning information obtained by vision sensors toward self-steering tractors. The scope of the present research is to provide an overview of vision-based, self-steering tractor systems. The main contributions of this work are: (1) to highlight the degree of integration of computer vision in the field of tractors, identifying the usefulness of this technology in specific functions and applications, (2) to augment the knowledge on agricultural vision-based navigation methods, (3) to provide evidence on related trends and challenges and (4) to extend the knowledge on vision-based machinery, covering aspects such as architecture, sensors, algorithms and safety issues. This research aims to prove the feasibility of machine vision applications in the targeted problem of agricultural machinery navigation and extend the provided knowledge to other contexts in favor of the broader research community, e.g., towards autonomous navigation for off-road vehicles, etc.
Towards this end, this work reviews the following aspects: (1) the system architecture, (2) the safety of usage, (3) the most commonly faced navigation errors, (4) the navigation control system of vision-based, self-steering tractors and (5) state-of-the-art image processing algorithms for in-field navigation route mapping. The remainder of the paper focuses on the five aforementioned categories. A detailed overview is provided for each category with illustrative examples that focus on specific agricultural applications. Finally, a summary of the most important conclusions of the reviewed literature is presented.

2. Evolution of Vision-Based Self-Steering Tractors

The rapid development of computers, electronic sensors and computing technologies in the 1980s motivated interest in autonomous vehicle guidance systems. A number of guidance technologies have been proposed [12,13]: ultrasonic, optical, mechanical, etc. Since the early 1990s, GPS systems have been used widely as relatively newly introduced and accurate guiding sensors in numerous agricultural applications towards fully autonomous navigation [14]. However, the high cost of reliable GPS sensors made them prohibitive to use in agricultural navigation applications. Machine vision technologies based on optical local sensors could be alternatively used to guide agricultural vehicles when crop row structures can be observed. Then, the camera system could determine the relative position of the machinery in relation to the crop rows and guide the vehicle between them to perform field operations. Local features could help to fine-tune the trajectory of the vehicle on-site. The latter is the main reason why most of the existing studies on vision-based guided tractors focus on structured fields that are characterized by crop rows. A number of image processing methodologies have been suggested to define the guidance path from crop row images; yet only a limited number of vision-based guidance systems have been developed for real in-field applications [15].
Machine vision was first introduced for the automatic navigation of tractors and combines in the 1980s. In 1987, Reid and Searcy [16] developed a dynamic thresholding technique to extract path information from field images. The same authors, later in the same year [17], proposed a variation of their previous work. The guidance signal was computed by the same algorithm. Additionally, the distribution of the crop-background was estimated by a bimodal Gaussian distribution function, and run-length encoding was employed for locating the center points of row crop canopy shapes in thresholded images. Billingsley and Schoenfisch [18] designed a vision guidance system to steer a tractor relative to crop rows. The system could detect the end of the row and warn the driver to turn the tractor. The tractor could automatically acquire its track in the next row. The system was further optimized later by changes in technology; however, the fundamental principles of their previous research have remained the same [19]. Pinto and Reid [20] proposed a heading angle and offset determination using principal component analysis in order to visually guide a tractor. The task was addressed as a pose recognition problem where a pose was defined by the combination of heading angle and offset. In [21], Benson et al. developed a machine vision algorithm for crop edge detection. The algorithm was integrated into a tractor for automated harvest to locate the field boundaries for guidance. The same authors, in [22], automated a maize harvest with a combine vision-based steering system based on fuzzy logic.
In [23], three machine vision guidance algorithms were developed to mimic the perceptive process of a human operator towards automated harvest, both in the day and at night, reporting accuracies equivalent to a GPS. In [24], a machine vision system was developed for an agricultural small-grain combine harvester. The proposed algorithm used a monochromatic camera to separate the uncut crop rows from the background and to calculate a guidance signal. Keicher and Seufert [25] developed an automatic guidance system for mechanical weeding in crop rows based on a digital image processing system combined with a specific controller and a proportional hydraulic valve. Åstrand and Baerveldt performed extensive research on the vision-based guidance of tractors and developed robust image processing algorithms integrated with agricultural tractors to detect the position of crop rows [26]. Søgaard and Olsen [27] developed a method to guide a tractor with respect to the crop rows. The method was based on color images of the field surface. Láng [28] proposed an automatic steering control system for a plantation tractor based on the direction and distance of the camera to the stems of the plants. Kise et al. [29] presented a row-detection algorithm for a stereovision-based agricultural machinery guidance system. The algorithm used functions for stereo-image processing, extracted elevation maps and determined navigation points. In [30], Tillett and Hague proposed a computer vision guidance system for cereals that was mounted on a hoe tractor. In subsequent work [31], they presented a method for locating crop rows in images and tested it for the guidance of a mechanical hoe in winter wheat. Later, they extended the operating range of their tracking system to sugar beets [32]. Subramanian et al. [33] tested machine vision for the guidance of a tractor in a citrus grove alleyway and compared it to a laser radar. Both approaches for path tracking performed similarly. An automatic steering rice transplanter based on image-processing self-guidance was presented by Misao and Karahashi [34]. The steering system used a video camera zoom system. Han et al. [35] developed a guidance directrix planner for controlling an agricultural vehicle; the directrix was converted to the desired steering wheel angle during navigation. In [36], Okamoto et al. presented an automatic guidance system based on a crop row sensor consisting of a charge-coupled device (CCD) camera and an image processing algorithm, implemented for the autonomous guidance of a weeding cultivator.
Autonomous tractor steering is the most established among agricultural navigation technologies; self-steering tractors have already been commercialized for about two decades [12,13]. Commercial tractor navigation techniques involve a fusion of sensors and are not based solely on machine vision; therefore, they are not in the scope of this research.
Although vision-based tractor navigation systems have been developed, their commercial application is still in its early stages, due to problems affecting their reliability, as reported subsequently. However, relevant research reveals the potential of vision-based automatic guidance in agricultural machinery; thus, the next decade is expected to be crucial for vision-based self-steering tractors to revolutionize the agricultural sector. A revolution is also expected from the newest trend in agriculture: agricultural robots, namely Agrobots, which aspire to replace tractors. Agrobots can navigate autonomously in fields based on the same principles and sensors and can work at crop scale with precision and dexterity [5]. However, compared to tractors, an Agrobot is a sensitive, high-cost tool that can perform specific tasks. In contrast, a tractor is very durable and sturdy, can operate under adverse weather conditions and is versatile since it allows for the flexibility to adapt to a multitude of tools (topping tools, lawnmowers, sprayers, etc.) for a variety of tasks. Therefore, tractors are key pieces of equipment for all farms, from small to commercial scale, and at present, there is no intention to replace them but to upgrade them in terms of navigational autonomy.

3. Safety Issues

Most of the injuries related to agricultural activities are connected to the use of agricultural tractors [37,38]. The latter is attributed to the following reasons: (1) the large number of small farms lacking expert equipment and operators, (2) the wide range of agricultural tasks in need of machinery contribution, (3) the engagement of the same operators for all the different tasks, which require both the adaptation of different tools to the tractors and different handling, (4) the seasonal work associated with changes in the field per season in addition to the constant alteration of workspaces that do not allow the user to get acquainted with the environment, (5) the use of outdated machinery not complying with safety regulations and (6) the use of obsolete sensors that have not been updated to their more recent improved versions with better technical specifications and performance.
In order for self-steering tractors to fully act autonomously, the autonomous steering system needs to be safer and more precise than any human operator. Therefore, the study of tractor safety issues can help the design of safer systems towards complete navigation autonomy. However, self-steering tractors are intended only to provide steering aid to the human operator rather than to replace them. Driving for hours across vast farms is attention-intensive and tedious. The tractor needs to autonomously navigate through crop lines, while the human operator is present to respond to emergencies regarding navigation troubleshooting and to perform additional agricultural operations, e.g., pruning, spraying, etc. [11]. Many researchers investigate tractor safety issues. Their focus is mainly on issues related to technical features such as vibrations [39], rollover protection systems (ROPS) [40], ergonomic design with respect to the operator’s position [41], etc. Safety issues are also related to the operators’ skills and attitudes [42]. Feasible solutions for monitoring mechanical hazards suggest devices to monitor the status of a tractor’s components [43]. Other researchers investigated the augmentation of the visibility of the human operator [44], while advanced solutions such as virtual reality (VR) for intuitive tractor navigation have also been proposed [45]. Since error prevention is not always possible, it is common for the burden to fall on systems that monitor and report system malfunctions in a timely manner. Warning and alert systems [46], as well as emergency notification systems in case of accidents [47], have been developed to keep the operator awake and situationally aware. In general, situational awareness while operating agricultural machinery in complex and dynamic environments, such as fields, is critical. By using design and practical interventions, farmers’ situational awareness can be supported and enhanced and, thus, prevent fatal incidents [48]. Figure 2 depicts the most common reasons leading to tractor safety issues [37].
When it comes to self-steering tractors, safety is closely related to the reliability of the steering system architecture, in terms of both hardware and software. The latter depends on the degree of autonomy of the system; yet, even for steering systems of the same degree of autonomy, architectures may significantly differ. A typical sensory-based autonomous navigation system consists of (1) a sensory-based perception system, (2) an algorithm-based decision system and (3) an actuator-based activation system. Errors can occur in all three parts of the system. Therefore, the most common errors can be either perception errors, decision errors or activation errors.
Activation errors include the instability of the electrohydraulic control system, the side slip of the tractor when turning at high speed, the real-time control of the vehicle, steering on sloping ground, the operating speed, etc. Decision errors mainly emanate from misjudging slopes. Perception errors are attributed to the measurements of the sensory system. In the case of self-steering tractor systems based on machine vision, all perception errors result from the image processing unit and methods [49]. Image acquisition and processing in real time with simultaneous decision making needs to be fast and accurate. The trade-off between the speed and accuracy of detection algorithms in dynamic environments such as fields is a challenge. In-field automated guidance is identified as crop row guidance. Crop conditions affect the system’s performance to such an extent that it cannot function properly if the crop is not clearly detected. Therefore, crop rows need to be distinguishable under varying environmental conditions. Missing plants, small plants of different stages of growth or with different densities of leaves and weeds are the most common problems for identifying crop row structures. Weeds are highly similar in features to certain kinds of small crops, e.g., sugar beets, sharing the same green color, size and shape. Therefore, the vision algorithms need to be robust for crops at all stages of growth and tolerant to weeds [26]. Another disturbance is due to lighting conditions; changes in the brightness of the field may affect the algorithms. Moreover, direct sunlight may cause shadows from the tractor, leading to poor detection results [32].
The perception system includes optical sensors. The quality and positioning of the sensors are crucial, since proximal sensing can derive comprehensive data. A camera mounted on the cab has a wider view than one mounted on the front of the tractor. The height of crops can also restrict the crop row detection ability of the system. Tractors are ground vehicles; therefore, acquired sensory data is at crop scale and can be characterized by increased accuracy and high resolution. When the data is of such high quality, environmental conditions such as lighting and shadowing may severely deteriorate the accuracy of the system [50]. Figure 3 summarizes the main issues related to errors of the visual perception systems of self-steering tractors.
According to the above, vision-based steering, although flexible, can be affected by in-field factors. Multi-sensory systems that fuse the information from a variety of sensors can significantly increase the steering accuracy [49]. This is the main reason why there are no navigation systems in the recent literature that rely solely on vision. Future automated guidance systems will mainly rely on multi-sensory fusing techniques. However, fields remain complicated and unstable environments, which directs future research in artificial intelligence and machine learning towards self-learning and self-adapting guidance systems.

4. Self-Steering Tractors’ System Architecture

In what follows, the basic modeling of self-steering tractors is presented. A vision-based system architecture is provided and key elements essential for performing autonomous navigation operations are reviewed.

4.1. Basic Modeling

In order to develop autonomous driving machinery, a cyclic flow of information is required; it is known as the sense-perceive-plan-act (SPPA) cycle [51]. The SPPA cycle connects sensing, perceiving, planning and acting through a closed-loop relation; sensors collect (sense) physical information, the information is received and interpreted (perceive), feasible trajectories for navigation are selected (plan) and the tractor is controlled to follow the selected trajectory (act). Figure 4 illustrates the basic modeling of self-steering tractors.
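The closed-loop structure of the SPPA cycle can be sketched in a few lines of code. The following Python fragment is a minimal illustration only; the component interfaces (capture, detect, plan, follow) are hypothetical placeholders and are not taken from any of the cited works.

class SPPALoop:
    """Minimal sketch of the sense-perceive-plan-act (SPPA) cycle."""

    def __init__(self, camera, perception, planner, steering):
        self.camera = camera          # sense: optical sensor
        self.perception = perception  # perceive: crop-row detection
        self.planner = planner        # plan: trajectory selection
        self.steering = steering      # act: steering actuator

    def step(self):
        frame = self.camera.capture()              # sense
        crop_rows = self.perception.detect(frame)  # perceive
        trajectory = self.planner.plan(crop_rows)  # plan
        self.steering.follow(trajectory)           # act

    def run(self):
        # The cycle repeats for as long as the vehicle operates in the field.
        while True:
            self.step()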
In order to automate the guidance of tractors, two basic elements need to be combined: basic machinery and cognitive driving intelligence (CDI). CDI needs to be integrated into both hardware and software for the navigation and control of the platform. Navigation includes localization, mapping and path planning, while control includes all regulating steering parameters, e.g., steering rate and angle, speed, etc. CDI is made possible by using sensory data from navigation and localization sensors, algorithms for path planning and software for steering control. The basic machinery refers to the tractor where the CDI will be applied. Based on the above and in relation to the SPPA cycle, the basic elements for the automated steering of tractors are sensors for object detection, localization and mapping [52,53,54], path planning algorithms [55], path tracking and steering control [56]. Table 1 includes a list of the aforementioned basic elements for an autonomous self-steering tractor. Most commonly used sensors and algorithms are also included in Table 1.

4.2. Vision-Based Architecture

A fundamental vision-based architecture for self-steering tractors is presented in [29]. Figure 5 illustrates the flow diagram of the proposed vision-based navigation system.
The main sensor of the architecture is an optical sensor. The optical sensor captures images that are processed by a computer (PC), which also receives real-time kinematic GPS (RTK-GPS) information and extracts the steering signal. The steering signal is fed to the tractor control unit (TCU) that generates a pulse width modulation (PWM) signal to automate steering. The closed loop of the steering actuator comprises an electrohydraulic steering valve and a wheel angle sensor. The system prototype of Figure 5 was installed on a commercial tractor, and a series of self-steering tests were conducted to evaluate the system in the field. Results reported a root mean square (RMS) error of lateral deviation of less than 0.05 m on straight and curved rows for speeds of up to 3.0 m/s.
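The low-level actuation loop of such an architecture can be illustrated with a simple proportional law mapping the wheel-angle error to a PWM command for the electrohydraulic valve. This is only a sketch; the gain, the saturation limits and the function name are illustrative assumptions and do not reproduce the controller of [29].

def pwm_duty_cycle(target_angle_rad, measured_angle_rad, k_p=2.0, max_duty=1.0):
    """Illustrative proportional law for the tractor control unit (TCU):
    the wheel-angle error (target minus feedback from the wheel angle sensor)
    is mapped to a signed PWM duty cycle driving the electrohydraulic valve.
    Gain and limits are assumptions, not values from the cited system."""
    error = target_angle_rad - measured_angle_rad
    duty = k_p * error
    # Saturate to the admissible command range of the valve driver.
    return max(-max_duty, min(max_duty, duty))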

4.3. Path Tracking Control System

The basic design of a path-tracking control system comprises three main subsystems, as depicted in Figure 6: image detection, tracking and steering control. Acquired images from an optical sensor, i.e., a camera, are sent to a computer for real-time processing. The center of the crop row line is identified, and the navigation path is extracted. The system uses a feedback sensory signal for the proportional steering control of the electrohydraulic valve of the vehicle for adaptive path tracking [57].
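As an illustration of how a detected crop-row centerline can be turned into a steering command, the sketch below combines the lateral offset and the heading error into a target steering angle with a simple proportional law. The gains and limits are assumptions for illustration and are not taken from [57].

import math

def steering_angle(lateral_offset_m, heading_error_rad,
                   k_offset=0.5, k_heading=1.2,
                   max_angle_rad=math.radians(35)):
    """Hypothetical proportional path-tracking law: the lateral offset from
    the crop-row centerline and the heading error are weighted, summed into
    a target front-wheel steering angle and then saturated."""
    angle = k_offset * lateral_offset_m + k_heading * heading_error_rad
    return max(-max_angle_rad, min(max_angle_rad, angle))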

4.4. Basic Sensors

Sensors record physical data from the environment and convert them into digital measurements that can be processed by the system. Determining the exact position of sensors on a tractor presupposes knowledge of both the operation of each sensor (field of view, resolution, range, etc.) and the geometry of the tractor, so that by being placed in the appropriate position onboard the vehicle, the sensor can perform to its maximum [58]. Navigation sensors can be either object sensors or pose sensors. Object sensors are used for the detection and identification of objects in the surrounding environment, while pose sensors are used for the localization of the tractor. Both categories can include active sensors, i.e., sensors that emit energy in order to measure, such as LiDAR, radar or ultrasonic sensors, or passive sensors, e.g., optical sensors, GNSS receivers, etc.
Sensory fusion can enhance navigation accuracy. The selection of appropriate sensors is based upon a number of factors, such as the sampling rate, the field of view, the reported accuracy, the range, the cost and the overall complexity of the final system. A vision-based system usually combines sensory data from cameras with data acquired from LiDAR, RADAR scanners, ultrasonic sensors, GPS and IMU.
Cameras capture 2D images by collecting light reflected on 3D objects. Images from different perspectives can be combined to reconstruct the geometry of the 3D navigation scenery. Image acquisition, however, is subject to the noise applied by the dynamically changing environmental conditions such as weather and lighting [59]. Thus, a fusion of sensors is required. LiDAR sensors can provide accurate models of the 3D navigation scene and, therefore, are used in autonomous navigation applications for depth perception. LiDAR sensors emit a laser light, which travels until it bounces off objects and returns to the LiDAR. The system measures the travel time of the light to calculate distance, resulting in an elevation map of the surrounding environment. Radars are also used for autonomous driving applications [60]. Radars transmit an electromagnetic wave and analyze its reflections, deriving radar measurements such as range and radial velocity. Similar to radars, ultrasonic sensors calculate the object-source distance by measuring the time between the transmission of an ultrasonic signal and its reception by the receiver. Ultrasonic sensors are commonly used to autonomously locate and navigate a vehicle [61]. GPS and IMU are additional widely used sensors for autonomous navigation systems. GNSS can provide the geographic coordinates and time information to a GPS receiver anywhere on the planet as long as there is an unobstructed line of sight to at least four GPS satellites. The main disadvantage of GPS is that it sometimes fails to be accurate due to obstacles blocking the signals, such as buildings, trees or intense atmospheric conditions. Therefore, GPS is usually fused with IMU measurements to ensure signal coverage and precise position tracking. An IMU combines multiple sensors, such as a gyroscope, accelerometer, digital compass, magnetometer, etc. When fused with a high-speed GNSS receiver and combined with sophisticated algorithms, reliable navigation and orientation can be delivered.
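The time-of-flight principle shared by LiDAR and ultrasonic sensors reduces to a single relation: the one-way distance is half of the wave speed multiplied by the round-trip travel time. A minimal sketch:

SPEED_OF_LIGHT_M_S = 299_792_458.0   # LiDAR pulses
SPEED_OF_SOUND_M_S = 343.0           # ultrasonic pulses in air at about 20 °C

def time_of_flight_distance(round_trip_time_s, wave_speed_m_s):
    """One-way range from a time-of-flight measurement: the pulse travels to
    the object and back, so distance = 0.5 * wave speed * travel time."""
    return 0.5 * wave_speed_m_s * round_trip_time_s

print(time_of_flight_distance(200e-9, SPEED_OF_LIGHT_M_S))  # LiDAR: ~30 m
print(time_of_flight_distance(0.01, SPEED_OF_SOUND_M_S))    # ultrasonic: ~1.7 m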

5. Vision-Based Navigation

Vision-based navigation can be performed by using monocular vision, binocular vision or multi-vision, depending on the number of visual sensors and by using appropriate image processing algorithms.

5.1. Monocular Vision Methods

Monocular vision is widely used for navigation purposes in agricultural machinery [20]. Essentially, the problem of visual in-field navigation is about detecting crop lines and obstacles on the pathway in between the crop lines. In [62], a monocular vision system was proposed to guide a tractor; the vehicle captured images while moving through the crop rows and corrected the steering angle by identifying the heading and offset errors from the line. Results indicated acceptable performance, with a 0.024 m maximum error of position identification in the offset and 1.5° in the attitude angle for a 0.25 m/s navigation speed. In [19], the proposed monocular vision system was able to automatically drive a tractor for a 35 s trial at a speed of 1 m/s with an accuracy of 0.020 m. In [24], a monocular vision system was developed to guide a tractor at a maximum velocity of 1.3 m/s with an overall accuracy of 0.050 m, in day and night navigation trials. The monocular vision system in [63] resulted in steering performance comparable to steering by human operators, with an accuracy of 0.050 m at 0.16, 0.5 and 1.11 m/s. The same research team, in [64], developed a monocular vision guiding system that could navigate for a 125-m run by keeping a stable distance of 10 cm from the left side of the crop row in varying environmental illumination with two different speeds: 1.33 m/s and 3.58 m/s. In both cases, for 70 trials, the robotic system successfully completed 95% of the trials, with a standard deviation (SD) from the predetermined route identical to that of a human driver. Monocular sensing for crop line tracking was also used in [65]; 95% of rows were segmented correctly over a distance of 5 km at a maximum speed of 1.94 m/s. In [36], the proposed monocular vision system achieved a root mean square (RMS) offset error and a heading error between the camera and the crop row of less than 0.030 m and 0.3°, respectively. The robotic monocular vision-based sprayer of [66] reported an average error of 0.010 m inside a straight plant path and 0.011 m and 0.078 m before and after a 90° turn, respectively. The system introduced in [67] relied on monocular vision to guide a vehicle inside artificial crop rows with a speed of 1.5 m/s and accuracy of ±0.020 m. In [68], the proposed system was able to navigate by using only monocular vision in different environmental illuminations. The reported mean of the lateral deviation in a straight line was 0.018 m, while the minimum deviation in curves reached 0.161 m. Guidance accuracy was similar to that of guiding with an RTK GPS sensor. A monocular vision-based method to track the direction and lateral offset of crop rows was presented in [69]. The method could perform in different cultivations without modifications. The minimum RMS error for open-loop experiments was 0.034 m, while for closed-loop experiments, it was 0.028 m.
Monocular vision, when compared to binocular vision, simplifies the hardware, but it needs to be coupled with more complex algorithms in order to function with adequate accuracy [70]. Towards this end, many algorithms have been developed, focusing mainly on: (1) expert systems, (2) image processing, (3) crop-row segmentation and (4) path determination.
Expert systems are based on human knowledge. Two basic approaches have been considered: one that uses only images and one that builds a map of the trajectory. The first approach relies on images extracted from the predetermined navigation route; the vehicle drives in the specified path as it is captured from one image to another by computing its relative position from the current image and moving accordingly [71,72,73,74]. This approach avoids reconstructing the entire navigation scene and defines the environment from overlapping images. The second approach builds a map of the environment a priori, resulting in faster and more accurate localization and navigation. On the one hand, the latter is time-consuming; on the other hand, the process is carried out offline, before use. In addition, fusion with appropriate sensors such as GPS can provide global coordinates for the localization of the vehicle [68,75]. Simultaneous Localization and Mapping (SLAM) combined with monocular vision has also been considered [76]; however, a small landmark database is required for real-time navigation responses.
Image processing algorithms are essential for effective navigation to deal with weeds, shadowing and other noise that affects in-field acquired images. To this end, monocular vision systems use near infrared (NIR) cameras, grayscale cameras or filters to determine the optical properties of crops that are strongly related to their physical properties, such as greenness [77]. The latter can help segmentation tasks to detect crop rows by discriminating between green and non-green features in a scene or gray levels of soil [26] in order to deal with light changes, weed noise [78], etc.
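A common greenness cue of this kind is the excess-green index (ExG = 2G − R − B). The sketch below thresholds ExG to obtain a vegetation mask; the fixed threshold value is an assumption for illustration, and none of the reviewed systems is claimed to use exactly this code.

import numpy as np

def excess_green_mask(bgr_image, threshold=20):
    """Segment vegetation with the excess-green index (ExG = 2G - R - B).
    The input is assumed to be an 8-bit BGR image; the threshold is illustrative."""
    b = bgr_image[:, :, 0].astype(np.int16)
    g = bgr_image[:, :, 1].astype(np.int16)
    r = bgr_image[:, :, 2].astype(np.int16)
    exg = 2 * g - r - b
    return (exg > threshold).astype(np.uint8) * 255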
Navigation methods based on crop-row segmentation focus on detecting multiple crop rows and determining their exact position so as to define the navigation pathway in between them [79,80]. Alternatively, methods for direct path determination can be applied. A typical method to determine pathways is the Hough transform; yet, it is sensitive to discontinuities and needs considerable computational time [81]. Variations of the Hough transform, such as adaptive Hough transform [82], intrinsic blob analysis [83] and curve fitting [84], are introduced to deal with the reported defects.
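A minimal example of Hough-based row detection, assuming a binary vegetation mask such as the one sketched above, is given below. It uses OpenCV's probabilistic Hough transform; all parameter values are illustrative and would need tuning per crop and camera geometry.

import cv2
import numpy as np

def detect_crop_row_lines(vegetation_mask):
    """Detect candidate crop-row line segments in a binary vegetation mask
    with the probabilistic Hough transform. Returns (x1, y1, x2, y2) tuples."""
    edges = cv2.Canny(vegetation_mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=30)
    return [] if lines is None else [tuple(l[0]) for l in lines]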

5.2. Binocular Vision Methods

Binocular vision combines two monocular cameras simultaneously so that each camera contributes to a single common perception. Information acquired by a binocular vision system can be used to define the exact location of objects in a scene. Compared to monocular vision, binocular vision can provide better overall depth, distance measurements and 3D viewing details; therefore, it is more resistant to varying illuminations and more accurate in locating regions of interest [59]. Binocular vision systems are used for the autonomous navigation of agricultural vehicles. In [85], a low-cost binocular vision system is proposed for the automatic driving of an agricultural machine. The results indicated a mean deviation between the actual middle of the road and the traveled trajectory of 0.031 m, 0.069 m and 0.105 m, for straight, multi-curvature and undulating roads, respectively. An adaptive binocular vision-based algorithm was proposed in [86]. Experiments on S-type and O-type paths resulted in an absolute mean of turning angle of 0.7° and an absolute standard deviation of 1.5° for navigation speeds less than 0.5 m/s. In [59], a navigation algorithm based on binocular vision is proposed, resulting in a correct detection rate greater than 92.78% for the average deviation angle, an absolute average value less than 1.05° and an average standard deviation less than 3.66° in paths without turnrows. A tractor path-tracking control system based on binocular vision is presented in [57]. In-field experiments indicated a mean absolute deviation of course angle of 0.95° and a standard deviation of 1.26°.
Binocular vision-based algorithms for autonomous vehicle navigation in agriculture focus mainly on: (1) obstacle detection, (2) 3D scene reconstruction and (3) crop-row detection.
Obstacle detection methods are critical for the safety of in-field automated operations of agricultural machinery. Binocular vision can provide the depth information of obstacles in an agricultural scene by stereo matching; thus, its application in obstacle detection attracts growing attention [87]. One binocular vision approach is based on an inverse perspective transformation and the selection of non-zero disparity zones [88]. This method is effective when applied on flat surfaces. Other approaches use the plane-line projection characteristics of the UV-disparity, where the height and width of obstacles are acquired from the height of vertical line segments in V-disparity maps and the length of horizontal line segments in U-disparity maps, respectively [89]. These methods can detect simple obstacles in structured environments. The most common approach is based on binocular stereo matching [90]. However, the stereo matching of in-field images is time-consuming and not very precise. In order to enhance the precision of stereo matching, motion analysis for object tracking can be considered. Moreover, the processing time could be reduced by considering fewer points than the entire 3D reconstruction of a field scene.
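To make the UV-disparity idea concrete, the sketch below builds a V-disparity image (one disparity histogram per image row) from a dense disparity map; the ground plane then appears as a slanted line and vertical obstacles as near-vertical segments. This is a simplified illustration, not the implementation of [89].

import numpy as np

def v_disparity(disparity_map, max_disparity=128):
    """V-disparity image: for every image row, a histogram of the (integer)
    disparity values occurring in that row. Invalid pixels are assumed to
    be marked with negative values."""
    rows = disparity_map.shape[0]
    v_disp = np.zeros((rows, max_disparity), dtype=np.int32)
    for v in range(rows):
        d = disparity_map[v]
        d = d[(d >= 0) & (d < max_disparity)].astype(np.int64)
        v_disp[v] = np.bincount(d, minlength=max_disparity)
    return v_disp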
The 3D reconstruction of a field scene can determine all surrounding environments accurately with adequate detail [85]. Binocular vision-based methods can provide 3D field maps even in unstructured and complex environments [91]. An accurate disparity map can help agricultural machinery to navigate safely in the fields [92]. However, stereo matching methods are preferable for 3D scene reconstruction due to reduced processing times, making them more flexible for real-time applications.
Crop-row detection algorithms based on binocular vision are effective when applied to fields where the crops are significantly higher than the weeds [93], due to complex in-field features that obstruct quick and accurate stereo matching. Crop rows are traditionally detected by Hough transform or by horizontal strip segmentation, but these methods do not fully exploit binocular vision techniques [59]. Binocular vision combined with a pure pursuit path-tracking algorithm can provide reliable information to navigate tractors in the fields [57]. The continuous advancements in image processing and automatic control can guarantee accurate real-time information about the surrounding environment for automatic vehicle control in the near future.

Classification of Stereovision Methods

Stereovision analysis consists of the following basic steps: (1) image acquisition, (2) the modeling of the camera, (3) feature extraction, (4) stereo matching, (5) determination of depth and (6) interpolation. Stereo matching, i.e., the identification of pixels in two images that correspond to the same 3D point in the scene, is the most important step of the process. In order to resolve the stereo matching problem, a set of constraints is applied: epipolar geometry, similarity, uniqueness and smoothness [94]. Epipolar geometry defines the correspondence between two pixels in stereo images by relating 3D objects to their 2D projection. The similarity constraint matches pixels with similar properties. Uniqueness defines the existence of a unique match between two pixels in stereo images, apart from occlusions. Finally, the smoothness constraint determines a smooth change in neighboring disparity values, apart from discontinuities resulting from sharp edges.
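Step (5), the determination of depth, follows directly from the geometry of a rectified camera pair: the depth of a matched point is the focal length times the baseline divided by the disparity. A one-line sketch, with illustrative example values:

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair, with the focal length
    f in pixels, the baseline B in metres and the disparity d in pixels.
    Only valid for d > 0."""
    return focal_length_px * baseline_m / disparity_px

# Example: f = 800 px, B = 0.12 m, d = 16 px gives Z = 6.0 m.
print(depth_from_disparity(16, 800, 0.12))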
Stereo matching methods can be local, global or semi-global [95]. Local methods achieve matching on a local window. Challenges are due to regions with repetitive or low textures that introduce ambiguities. Global methods result in disparity maps with high accuracy, but they are computationally expensive due to the calculation of the disparity of every pixel in an image by optimizing a global energy function. Semi-global methods are introduced to balance disparity maps’ estimation accuracy and computational time by performing the optimization of the global energy function on part of the entire image. Vision-based disparity estimation algorithms comprise the following basic steps that formulate stereo matching as a multistage optimization problem: (1) computation of cost, (2) aggregation of cost, (3) optimization of disparity and (4) refinement of disparity. The speed and accuracy of disparity estimation are equally important and are taken into consideration for the overall performance evaluation of stereo vision algorithms. Research focuses on reducing computational complexity while achieving better disparity estimation accuracy. The latter is the greater challenge when developing a stereovision algorithm [96]. Traditional stereo matching algorithms are mainly software-based implementations of global and local methods of generating disparity maps [97]. The ability to deliver stereo matching in real time by using parallel processing or additional hardware in less processing time paved the way for new research in the field. Recently, due to the advancement of convolutional neural networks, stereo matching is treated as a deep learning task [98].
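Semi-global matching is readily available in standard libraries; the sketch below uses OpenCV's StereoSGBM as one concrete instance of the cost computation, aggregation, optimization and refinement stages. Parameter values are illustrative and are not tuned for field imagery.

import cv2

def semi_global_disparity(left_gray, right_gray, max_disparity=128):
    """Semi-global matching with OpenCV's StereoSGBM. numDisparities must be
    divisible by 16; all parameters below are illustrative choices."""
    block = 7
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disparity,
        blockSize=block,
        P1=8 * block * block,    # smoothness penalty for small disparity changes
        P2=32 * block * block,   # smoothness penalty for large disparity changes
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype("float32") / 16.0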

5.3. Multi-Vision Methods

Multi-vision systems include input images of multiple vision sensors. In [99], images from three cameras and GPS data were fused to generate a localization system. The use of multiple cameras reduced the complexity of the system, since the final image was extracted by image stitching without the need to rotate the cameras to capture multiple viewing angles. Compared to conventional stereo vision systems, the proposed multi-camera system provided better viewing angles and was four times faster. Trinocular vision and odometry were used in [100], while in [101] a multi-vision method was introduced for multi-sensor surveillance and long-distance localization.
A comparative table (Table 2) of all vision-based navigation algorithms for self-steering tractors reviewed in this work is subsequently provided. Details regarding the utilized vision systems and the performance of the algorithms are included in the same table. It should be noted that the methods included in Table 2 use cameras as their main navigation sensor. Additional sensors included in Table 2 are auxiliary or only used to enable comparison between sensors and methods. Moreover, only the methods that have been tested for navigation and localization purposes are considered and included in Table 2. Many vision-based navigation algorithms have been developed, their performance in detecting crop rows has been investigated and their potential use in vehicle guidance has been evaluated [102,103], yet only some of them have actually been integrated into agricultural vehicles and tested in real-life navigation applications.

6. Discussion

The autonomous navigation of agricultural machinery is an important aspect of smart farming and has been widely used in several agricultural practices for sustainable high yield production and in-field automation. Computer vision has been integrated into agricultural machinery to guarantee enhanced navigation accuracy in real in-field conditions. Figure 7 illustrates the categories of agricultural machinery to which navigation algorithms have been integrated, according to the bibliography of Table 2. The actual problem that the use of computer vision aims to overcome is the exact localization of crop lines, the mapping of the navigation routes and its real-time correction.
Navigation control systems are based on monocular vision and, more recently, on binocular vision or multi-vision systems. Optical sensors provide images to effective image processing algorithms. The extracted data from image processing are then fused with additional data from other on-board sensors. All processing is completed in real time on a computer mounted to the autonomous vehicle.
According to the reviewed bibliography, multi-vision and stereovision systems emerge as superior to monocular vision systems and meet the agricultural requirements for navigating vehicles in the fields. However, monocular systems are more often found in the literature, according to Table 2. The latter can be better visualized in Figure 8. This is due to the simpler system design, the comparatively low cost of monocular vision sensors and the less complex image processing algorithms that accompany them. Additionally, although the use of one camera is affected by environmental noise, navigation results still reach acceptable levels of accuracy, allowing monocular vision systems to drive a tractor safely.
In particular, binocular vision systems demonstrate higher stability and control accuracy, allowing for automatic control of the navigation path between the crop lines, especially for large crops such as cotton, sunflower, maize, etc. Indicatively, an algorithm [59] for detecting crop rows based on binocular vision combines image pre-processing, stereo matching and centerline detection of multiple rows. The method first converts the stereoscopic image to grayscale by using the improved 2G-R-B greyscale transformation. Then, the Harris corner point detector is employed to extract the candidates for stereo matching. The 3D coordinates of crop rows are calculated with stereo matching by the disparities between the binocular images. Finally, the crop lines are determined by using the normalized sum of absolute difference for matching (NSAD) metric and the random sample consensus method (RANSAC) for the optimization of disparity. The results demonstrated the efficiency of the algorithm in dealing with various visual noises such as lighting, shadows, weeds and density of crop rows. Moreover, results revealed the satisfactory speed and accuracy of the algorithm, especially when the camera was mounted at an appropriate height on the vehicle and when the crops were significantly higher than the weeds. The latter is more common in orchards.
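The early stages of such a pipeline (a 2G-R-B-style greyscale conversion followed by corner extraction as stereo-matching candidates) can be sketched as follows. This is a loose illustration, not the implementation of [59]; OpenCV's goodFeaturesToTrack with the Harris response is used here and all parameters are assumptions.

import numpy as np
import cv2

def crop_row_corner_candidates(bgr_image, max_corners=500):
    """2G-R-B-style greyscale transform followed by Harris-based corner
    extraction; the returned points are candidates for stereo matching."""
    b = bgr_image[:, :, 0].astype(np.float32)
    g = bgr_image[:, :, 1].astype(np.float32)
    r = bgr_image[:, :, 2].astype(np.float32)
    grey = cv2.normalize(2 * g - r - b, None, 0, 255, cv2.NORM_MINMAX)
    grey = grey.astype(np.uint8)
    corners = cv2.goodFeaturesToTrack(grey, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5,
                                      useHarrisDetector=True, k=0.04)
    return grey, corners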
In the case of vision-based autonomous navigation in orchards, several algorithms have been developed that take advantage of terrestrial structures such as tree trunks and foliage. Many approaches combine data from two or more sensors in order to detect and locate objects, while in recent years RGB-D, Kinect and other sensors have been widely used [104]. Even though methods with sensor fusion perform satisfactorily in orchards, they are challenged by shading, the changing angle of the sun, the chromatic similarity of crops, the visibility of tree trunks in the adjacent rows, etc.
A recent methodology [105] provided a solution to the above challenges by combining the tree foliage with the background sky instead of tree trunks and the ground; by looking up instead of down, the influence of environmental factors decreased considerably. The method used a multispectral camera mounted on the front of an agricultural vehicle to capture images and a computer to process them in real time. The captured image was cropped and the lower part was used to extract the green color plane, which provided greater contrast between the foliage and the sky. Simple thresholding was applied to derive the path plane of the agricultural vehicle. Finally, after filtering out the noise, the centroid of the path plane was obtained. Results revealed the potential of this original approach to successfully guide agricultural vehicles in orchards.
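A rough sketch of this look-up strategy for a single colour frame is given below: the green plane is thresholded to separate canopy from sky, the mask is median-filtered to remove noise, and the centroid of the remaining path plane serves as the steering reference. The use of an RGB green channel and Otsu thresholding here are assumptions; the original work reports a multispectral camera and simple thresholding.

import cv2

def sky_path_centroid(bgr_image, threshold=None):
    """Threshold the green plane to separate foliage from sky and return the
    centroid (x, y) of the bright 'path plane' region, or None if empty."""
    green = bgr_image[:, :, 1]  # assumes an 8-bit BGR frame
    if threshold is None:
        _, path_plane = cv2.threshold(green, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        _, path_plane = cv2.threshold(green, threshold, 255, cv2.THRESH_BINARY)
    path_plane = cv2.medianBlur(path_plane, 5)   # filter out noise
    m = cv2.moments(path_plane, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])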
Future work may include state-of-the-art sensors and algorithms for newly introduced vision-based system architectures. Regarding image processing algorithms, there are several issues to be addressed. First, algorithms must balance the processing of large amounts of data from the environment with low processing times so that decisions can be made in real time. Complicated and multiple input data may provide a better understanding of the surroundings, yet result in increased processing time, obstructing the real-time control and actions of the tractor. Second, optimal navigation accuracies are achieved with multi-sensor systems; vision, although flexible, is affected by environmental noise. A variety of sensors can lead to advanced accuracies in navigation, mapping and the position estimation of tractors. Therefore, future work must focus on fusion techniques that are able to deal efficiently with data from multiple sensors. Factors such as the variation of the vehicle’s speed, irregular terrain and varying controllers should also be considered. Figure 9 summarizes the range of maximum navigation speeds at which the best performance of the algorithms in Table 2 was obtained. It seems that the optimal speed for a tractor is between 1 and 2 m/s, which is fast enough for sustainable automation and sufficient for the data processing of the autonomous navigation algorithms. However, it should be noted that speeds over 2 m/s are usually used in cultivations with tall (e.g., corn) or dense plants (e.g., cotton), where the crop line is theoretically detected more easily.
Effective control algorithms are very important in dynamic outdoor environments. When autonomous navigation is employed in structured or semi-structured environments where disturbances are known or can be easily predicted, simulations and expert knowledge can be used efficiently to control a guiding system. However, a complicated environment introduces many uncertainties into the system. For this reason, future work should concentrate on control algorithms that are able to self-learn from and self-adapt to the environment. Finally, a future investigation should focus further on in-field testing for different types of crops at different growing stages and under varying environmental conditions.
The referenced literature revealed that autonomous navigation systems of agricultural vehicles have been in the spotlight for decades, paving the way for the sustainable agriculture of the future. However, research needs to be perpetual in order to face challenges and overcome all reported limitations.

7. Conclusions

Data-driven agriculture combined with appropriate sensory systems, including artificial intelligence methods, paves the way for the sustainable agriculture of the future. To this end, a vision-based system is capable of providing precise navigation information to autonomously guide a tractor to follow crop rows. Thus, the burden of the monotonous crop-row-following tasks will fall on the self-steering system and the operator could be engaged in maneuvering and other tasks, thus increasing the guiding performance, the working efficiency and the overall safety of farming operations.
The present work demonstrated the feasibility of machine vision systems in self-steering tractors with respect to the following issues: vision-based tractors’ system architecture, the safety of usage and navigation errors, the navigation control system of vision-based self-steering tractors and state-of-the-art image processing algorithms for in-field navigation route mapping. Research revealed the potential of machine vision systems to autonomously navigate agricultural machinery in open fields in the future.
The aim of this work is to augment the knowledge on agricultural vision-based navigation methods and provide evidence of trends and challenges in a systematic manner. The reviewed methods included in this work could be used for future studies to extend the knowledge of vision-based machinery architecture, sensors, algorithms and safety issues. The provided knowledge could be extended from agricultural navigation tasks to other contexts where the use of autonomously guided vehicles will be the focus of interest.

Author Contributions

Conceptualization, G.A.P.; methodology, D.O., A.K. and E.V.; investigation, A.K., D.O. and E.V.; resources, A.K., D.O. and E.V.; writing—original draft preparation, E.V., D.O. and A.K.; writing—review and editing, E.V. and G.A.P.; visualization, E.V.; supervision, G.A.P.; project administration, G.A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work was supported by the MPhil program “Advanced Technologies in Informatics and Computers”, hosted by the Department of Computer Science, International Hellenic University, Greece.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saiz-Rubio, V.; Rovira-Más, F. From Smart Farming towards Agriculture 5.0: A Review on Crop Data Management. Agronomy 2020, 10, 207. [Google Scholar] [CrossRef] [Green Version]
  2. Lokers, R.; Knapen, R.; Janssen, S.; van Randen, Y.; Jansen, J. Analysis of Big Data technologies for use in agro-environmental science. Environ. Model. Softw. 2016, 84, 494–504. [Google Scholar] [CrossRef] [Green Version]
  3. De Clercq, M.; Vats, A.; Biel, A. Agriculture 4.0: The future of farming technology. In Proceedings of the World Government Summit, Dubai, United Arab Emirates, 11–13 February 2018; pp. 11–13. [Google Scholar]
  4. Martos, V.; Ahmad, A.; Cartujo, P.; Ordoñez, J. Ensuring Agricultural Sustainability through Remote Sensing in the Era of Agriculture 5.0. Appl. Sci. 2021, 11, 5911. [Google Scholar] [CrossRef]
  5. Sparrow, R.; Howard, M. Robots in agriculture: Prospects, impacts, ethics, and policy. Precis. Agric. 2021, 22, 818–833. [Google Scholar] [CrossRef]
  6. Aqeel-ur-Rehman; Abbasi, A.Z.; Islam, N.; Shaikh, Z.A. A review of wireless sensors and networks’ applications in agriculture. Comput. Stand. Interfaces 2014, 36, 263–270. [Google Scholar] [CrossRef]
  7. Shanmugapriya, P.; Rathika, S.; Ramesh, T.; Janaki, P. Applications of Remote Sensing in Agriculture-A Review. Int. J. Curr. Microbiol. Appl. Sci. 2019, 8, 2270–2283. [Google Scholar] [CrossRef]
  8. Fan, J.; Zhang, Y.; Wen, W.; Gu, S.; Lu, X.; Guo, X. The future of Internet of Things in agriculture: Plant high-throughput phenotypic platform. J. Clean. Prod. 2021, 280, 123651. [Google Scholar] [CrossRef]
  9. Wolfert, S.; Ge, L.; Verdouw, C.; Bogaardt, M.-J. Big Data in Smart Farming—A review. Agric. Syst. 2017, 153, 69–80. [Google Scholar] [CrossRef]
  10. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. A Review of Autonomous Navigation Systems in Agricultural Environments. In Proceedings of the Society for Engineering in Agriculture Conference: Innovative Agricultural Technologies for a Sustainable Future, Barton, WA, Australia, 22–25 September 2013. [Google Scholar]
  11. Rovira-Más, F.; Zhang, Q.; Reid, J.F.; Will, J.D. Machine Vision Based Automated Tractor Guidance. Int. J. Smart Eng. Syst. Des. 2003, 5, 467–480. [Google Scholar] [CrossRef]
  12. Thomasson, J.A.; Baillie, C.P.; Antille, D.L.; Lobsey, C.R.; McCarthy, C.L. Autonomous Technologies in Agricultural Equipment: A Review of the State of the Art. In Proceedings of the 2019 Agricultural Equipment Technology Conference, Louisville, KY, USA, 11–13 February 2019; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2019; pp. 1–17. [Google Scholar]
  13. Baillie, C.P.; Lobsey, C.R.; Antille, D.L.; McCarthy, C.L.; Thomasson, J.A. A review of the state of the art in agricultural automation. Part III: Agricultural machinery navigation systems. In Proceedings of the 2018 Detroit, Michigan, 29 July–1 August 2018; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2018; p. 1. [Google Scholar]
  14. Schmidt, G.T. GPS Based Navigation Systems in Difficult Environments. Gyroscopy Navig. 2019, 10, 41–53. [Google Scholar] [CrossRef]
  15. Wilson, J. Guidance of agricultural vehicles—A historical perspective. Comput. Electron. Agric. 2000, 25, 3–9. [Google Scholar] [CrossRef]
  16. Reid, J.; Searcy, S. Vision-based guidance of an agriculture tractor. IEEE Control Syst. Mag. 1987, 7, 39–43. [Google Scholar] [CrossRef]
  17. Reid, J.F.; Searcy, S.W. Automatic Tractor Guidance with Computer Vision. In SAE Technical Papers; SAE International: Warrendale, PA, USA, 1987. [Google Scholar]
  18. Billingsley, J.; Schoenfisch, M. Vision-guidance of agricultural vehicles. Auton. Robots 1995, 2, 65–76. [Google Scholar] [CrossRef]
  19. Billingsley, J.; Schoenfisch, M. The successful development of a vision guidance system for agriculture. Comput. Electron. Agric. 1997, 16, 147–163. [Google Scholar] [CrossRef]
  20. Pinto, F.A.C.; Reid, J.F. Heading angle and offset determination using principal component analysis. In Proceedings of the ASAE Paper, Disney’s Coronado Springs, Orlando, FL, USA, 12–16 July 1998; p. 983113. [Google Scholar]
  21. Benson, E.R.; Reid, J.F.; Zhang, Q.; Pinto, F.A.C. An adaptive fuzzy crop edge detection method for machine vision. In Proceedings of the 2000 ASAE Annual International Meeting, Technical Papers: Engineering Solutions for a New Century, Milwaukee, WI, USA, 9–12 July 2000; pp. 49085–49659. [Google Scholar]
  22. Benson, E.R.; Reid, J.F.; Zhang, Q. Development of an automated combine guidance system. In Proceedings of the 2000 ASAE Annual International Meeting, Technical Papers: Engineering Solutions for a New Century, Milwaukee, WI, USA, 9–12 July 2000; pp. 1–11. [Google Scholar]
  23. Benson, E.R.; Reid, J.F.; Zhang, Q. Machine Vision Based Steering System for Agricultural Combines. In Proceedings of the 2001 Sacramento, Sacramento, CA, USA, 29 July–1 August 2001; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2001. [Google Scholar]
  24. Benson, E.R.; Reid, J.F.; Zhang, Q. Machine vision-based guidance system for an agricultural small-grain harvester. Trans. ASAE 2003, 46, 1255–1264. [Google Scholar] [CrossRef]
  25. Keicher, R.; Seufert, H. Automatic guidance for agricultural vehicles in Europe. Comput. Electron. Agric. 2000, 25, 169–194. [Google Scholar] [CrossRef]
  26. Åstrand, B.; Baerveldt, A.-J. A vision based row-following system for agricultural field machinery. Mechatronics 2005, 15, 251–269. [Google Scholar] [CrossRef]
  27. Søgaard, H.T.; Olsen, H.J. Crop row detection for cereal grain. In Precision Agriculture ’99; Sheffield Academic Press: Sheffield, UK, 1999; pp. 181–190. ISBN 1841270423. [Google Scholar]
  28. Láng, Z. Image processing based automatic steering control in plantation. VDI Ber. 1998, 1449, 93–98. [Google Scholar]
  29. Kise, M.; Zhang, Q.; Rovira Más, F. A Stereovision-Based Crop Row Detection Method for Tractor-automated Guidance. Biosyst. Eng. 2005, 90, 357–367. [Google Scholar] [CrossRef]
  30. Tillett, N.D.; Hague, T. Computer-Vision-based Hoe Guidance for Cereals—An Initial Trial. J. Agric. Eng. Res. 1999, 74, 225–236. [Google Scholar] [CrossRef]
  31. Hague, T.; Tillett, N.D. A bandpass filter-based approach to crop row location and tracking. Mechatronics 2001, 11, 1–12. [Google Scholar] [CrossRef]
  32. Tillett, N.D.; Hague, T.; Miles, S.J. Inter-row vision guidance for mechanical weed control in sugar beet. Comput. Electron. Agric. 2002, 33, 163–177. [Google Scholar] [CrossRef]
  33. Subramanian, V.; Burks, T.F.; Arroyo, A.A. Development of machine vision and laser radar based autonomous vehicle guidance systems for citrus grove navigation. Comput. Electron. Agric. 2006, 53, 130–143. [Google Scholar] [CrossRef]
  34. Misao, Y.; Karahashi, M. An image processing based automatic steering rice transplanter (II). In Proceedings of the 2000 ASAE Annual International Meeting, Technical Papers: Engineering Solutions for a New Century, Milwaukee, WI, USA, 9–12 July 2000; pp. 1–5. [Google Scholar]
  35. Han, S.; Dickson, M.A.; Ni, B.; Reid, J.F.; Zhang, Q. A Robust Procedure to Obtain a Guidance Directrix for Vision-Based Vehicle Guidance Systems. In Proceedings of the Automation Technology for Off-Road Equipment Proceedings of the 2002 Conference, Chicago, IL, USA, 26–27 July 2002; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2013; p. 317. [Google Scholar]
  36. Okamoto, H.; Hamada, K.; Kataoka, T.; Terawaki, M.; Hata, S. Automatic Guidance System with Crop Row Sensor. In Proceedings of the Automation Technology for Off-Road Equipment Proceedings of the 2002 Conference, Chicago, IL, USA, 26–27 July 2002; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2013; p. 307. [Google Scholar]
  37. Fargnoli, M.; Lombardi, M. Safety Vision of Agricultural Tractors: An Engineering Perspective Based on Recent Studies (2009–2019). Safety 2019, 6, 1. [Google Scholar] [CrossRef] [Green Version]
  38. Kumar, A.; Varghese, M.; Mohan, D. Equipment-related injuries in agriculture: An international perspective. Inj. Control Saf. Promot. 2000, 7, 175–186. [Google Scholar] [CrossRef]
  39. Vallone, M.; Bono, F.; Quendler, E.; Febo, P.; Catania, P. Risk exposure to vibration and noise in the use of agricultural track-laying tractors. Ann. Agric. Environ. Med. 2016, 23, 591–597. [Google Scholar] [CrossRef] [Green Version]
  40. Irwin, A.; Poots, J. Investigation of UK Farmer Go/No-Go Decisions in Response to Tractor-Based Risk Scenarios. J. Agromedicine 2018, 23, 154–165. [Google Scholar] [CrossRef]
  41. Jamshidi, N.; Abdollahi, S.M.; Maleki, A. A survey on the actuating force on brake and clutch pedal controls in agricultural tractor in use in Iran. Polish Ann. Med. 2016, 23, 113–117. [Google Scholar] [CrossRef]
  42. Fargnoli, M.; Lombardi, M.; Puri, D. Applying Hierarchical Task Analysis to Depict Human Safety Errors during Pesticide Use in Vineyard Cultivation. Agriculture 2019, 9, 158. [Google Scholar] [CrossRef] [Green Version]
  43. Bo, H.; Liang, W.; Yuefeng, D.; Zhenghe, S.; Enrong, M.; Zhongxiang, Z. Design and Experiment on Integrated Proportional Control Valve of Automatic Steering System. IFAC Pap. 2018, 51, 389–396. [Google Scholar] [CrossRef]
  44. Franceschetti, B.; Rondelli, V.; Ciuffoli, A. Comparing the influence of Roll-Over Protective Structure type on tractor lateral stability. Saf. Sci. 2019, 115, 42–50. [Google Scholar] [CrossRef]
  45. Kaizu, Y.; Choi, J. Development of a Tractor Navigation System Using Augmented Reality. Eng. Agric. Environ. Food 2012, 5, 96–101. [Google Scholar] [CrossRef]
  46. Ehlers, S.G.; Field, W.E.; Ess, D.R. Methods of Collecting and Analyzing Rearward Visibility Data for Agricultural Machinery: Hazard and/or Object Detectability. J. Agric. Saf. Health 2017, 23, 39–53. [Google Scholar] [PubMed]
  47. Liu, B.; Koc, A.B. Field Tests of a Tractor Rollover Detection and Emergency Notification System. J. Agric. Saf. Health 2015, 21, 113–127. [Google Scholar]
  48. Irwin, A.; Caruso, L.; Tone, I. Thinking Ahead of the Tractor: Driver Safety and Situation Awareness. J. Agromedicine 2019, 24, 288–297. [Google Scholar] [CrossRef]
  49. Liu, B.; Liu, G.; Wu, X. Research on Machine Vision Based Agricultural Automatic Guidance Systems. In Computer and Computing Technologies in Agriculture; Springer: Boston, MA, USA, 2008; Volume I, pp. 659–666. ISBN 9780387772509. [Google Scholar]
  50. Lameski, P.; Zdravevski, E.; Kulakov, A. Review of Automated Weed Control Approaches: An Environmental Impact Perspective. In Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 940, pp. 132–147. ISBN 9783030008246. [Google Scholar]
  51. Rowduru, S.; Kumar, N.; Kumar, A. A critical review on automation of steering mechanism of load haul dump machine. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2020, 234, 160–182. [Google Scholar] [CrossRef]
  52. Eddine Hadji, S.; Kazi, S.; Howe Hing, T.; Mohamed Ali, M.S. A Review: Simultaneous Localization and Mapping Algorithms. J. Teknol. 2015, 73. [Google Scholar] [CrossRef] [Green Version]
  53. Rodríguez Flórez, S.A.; Frémont, V.; Bonnifait, P.; Cherfaoui, V. Multi-modal object detection and localization for high integrity driving assistance. Mach. Vis. Appl. 2014, 25, 583–598. [Google Scholar] [CrossRef] [Green Version]
  54. Jha, H.; Lodhi, V.; Chakravarty, D. Object Detection and Identification Using Vision and Radar Data Fusion System for Ground-Based Navigation. In Proceedings of the 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 7–8 March 2019; pp. 590–593. [Google Scholar]
  55. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J.E. A Survey of Path Planning Algorithms for Mobile Robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
  56. Ge, J.; Pei, H.; Yao, D.; Zhang, Y. A robust path tracking algorithm for connected and automated vehicles under i-VICS. Transp. Res. Interdiscip. Perspect. 2021, 9, 100314. [Google Scholar] [CrossRef]
  57. Zhang, S.; Wang, Y.; Zhu, Z.; Li, Z.; Du, Y.; Mao, E. Tractor path tracking control based on binocular vision. Inf. Process. Agric. 2018, 5, 422–432. [Google Scholar] [CrossRef]
  58. Pajares, G.; García-Santillán, I.; Campos, Y.; Montalvo, M.; Guerrero, J.; Emmi, L.; Romeo, J.; Guijarro, M.; Gonzalez-de-Santos, P. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide. J. Imaging 2016, 2, 34. [Google Scholar] [CrossRef] [Green Version]
  59. Zhai, Z.; Zhu, Z.; Du, Y.; Song, Z.; Mao, E. Multi-crop-row detection algorithm based on binocular vision. Biosyst. Eng. 2016, 150, 89–103. [Google Scholar] [CrossRef]
  60. Schouten, G.; Steckel, J. A Biomimetic Radar System for Autonomous Navigation. IEEE Trans. Robot. 2019, 35, 539–548. [Google Scholar] [CrossRef]
  61. Wang, R.; Chen, L.; Wang, J.; Zhang, P.; Tan, Q.; Pan, D. Research on autonomous navigation of mobile robot based on multi ultrasonic sensor fusion. In Proceedings of the 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 14–16 December 2018; pp. 720–725. [Google Scholar]
  62. Torii, T.; Takamizawa, A.; Okamoto, T.; Imou, K. Crop Row Tracking by an Autonomous Vehicle Using Machine Vision (part 2): Field test using an autonomous tractor. J. Jpn. Soc. Agric. Mach. 2000, 62, 37–42. [Google Scholar]
  63. Fehr, B.W.; Garish, J.B. Vision-guided row-crop follower. Appl. Eng. Agric. 1995, 11, 613–620. [Google Scholar] [CrossRef]
  64. Gerrish, J.B.; Fehr, B.W.; Van Ee, G.R.; Welch, D.P. Self-steering tractor guided by computer-vision. Appl. Eng. Agric. 1997, 13, 559–563. [Google Scholar] [CrossRef]
  65. Fitzpatrick, K.; Pahnos, D.; Pype, W.V. Robot windrower is first unmanned harvester. Ind. Robot. Int. J. 1997, 24, 342–348. [Google Scholar] [CrossRef]
  66. Younse, P.; Burks, T. Intersection Detection and Navigation for an Autonomous Greenhouse Sprayer using Machine Vision. In Proceedings of the 2005 Tampa, Tampa, FL, USA, 17–20 July 2005; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2005; p. 1. [Google Scholar]
  67. Hague, T.; Tillett, N.D. Navigation and control of an autonomous horticultural robot. Mechatronics 1996, 6, 165–180. [Google Scholar] [CrossRef]
  68. Royer, E.; Bom, J.; Dhome, M.; Thuilot, B.; Lhuillier, M.; Marmoiton, F. Outdoor autonomous navigation using monocular vision. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 1253–1258. [Google Scholar]
  69. English, A.; Ross, P.; Ball, D.; Corke, P. Vision based guidance for robot navigation in agriculture. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1693–1698. [Google Scholar]
  70. da Silva, S.P.P.; Almeida, J.S.; Ohata, E.F.; Rodrigues, J.J.P.C.; de Albuquerque, V.H.C.; Reboucas Filho, P.P. Monocular Vision Aided Depth Map from RGB Images to Estimate of Localization and Support to Navigation of Mobile Robots. IEEE Sens. J. 2020, 20, 12040–12048. [Google Scholar] [CrossRef]
  71. Ohno, T.; Ohya, A.; Yuta, S. Autonomous navigation for mobile robots referring pre-recorded image sequence. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IROS ’96, Osaka, Japan, 8 November 1996; Volume 2, pp. 672–679. [Google Scholar]
  72. Remazeilles, A.; Chaumette, F.; Gros, P. Robot motion control from a visual memory. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Volume 5, pp. 4695–4700. [Google Scholar]
  73. Montalvo, M.; Pajares, G.; Guerrero, J.M.; Romeo, J.; Guijarro, M.; Ribeiro, A.; Ruz, J.J.; Cruz, J.M. Automatic detection of crop rows in maize fields with high weeds pressure. Expert Syst. Appl. 2012, 39, 11889–11897. [Google Scholar] [CrossRef] [Green Version]
  74. Guerrero, J.M.; Guijarro, M.; Montalvo, M.; Romeo, J.; Emmi, L.; Ribeiro, A.; Pajares, G. Automatic expert system based on images for accuracy crop row detection in maize fields. Expert Syst. Appl. 2013, 40, 656–664. [Google Scholar] [CrossRef] [Green Version]
  75. Kidono, K.; Miura, J.; Shirai, Y. Autonomous visual navigation of a mobile robot using a human-guided experience. Rob. Auton. Syst. 2002, 40, 121–130. [Google Scholar] [CrossRef]
  76. Davison, A.J. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 2, pp. 1403–1410. [Google Scholar]
  77. Vrochidou, E.; Bazinas, C.; Manios, M.; Papakostas, G.A.; Pachidis, T.P.; Kaburlasos, V.G. Machine Vision for Ripeness Estimation in Viticulture Automation. Horticulturae 2021, 7, 282. [Google Scholar] [CrossRef]
  78. Meng, Q.; Qiu, R.; He, J.; Zhang, M.; Ma, X.; Liu, G. Development of agricultural implement system based on machine vision and fuzzy control. Comput. Electron. Agric. 2015, 112, 128–138. [Google Scholar] [CrossRef]
  79. Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346. [Google Scholar] [CrossRef] [Green Version]
  80. Jiang, G.; Wang, Z.; Liu, H. Automatic detection of crop rows based on multi-ROIs. Expert Syst. Appl. 2015, 42, 2429–2441. [Google Scholar] [CrossRef]
  81. Fernandes, L.A.F.; Oliveira, M.M. Real-time line detection through an improved Hough transform voting scheme. Pattern Recognit. 2008, 41, 299–314. [Google Scholar] [CrossRef]
  82. Leemans, V.; Destain, M.-F. Line cluster detection using a variant of the Hough transform for culture row localisation. Image Vis. Comput. 2006, 24, 541–550. [Google Scholar] [CrossRef] [Green Version]
  83. Fontaine, V.; Crowe, T.G. Development of line-detection algorithms for local positioning in densely seeded crops. Can. Biosyst. Eng. 2006, 48, 19–29. [Google Scholar]
  84. Zhang, L.; Grift, T.E. A New Approach to Crop-Row Detection in Corn. In Proceedings of the 2010 Pittsburgh, Pittsburgh, PA, USA, 20–23 June 2010; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2010; p. 1. [Google Scholar]
  85. Li, Y.; Wang, X.; Liu, D. 3D Autonomous Navigation Line Extraction for Field Roads Based on Binocular Vision. J. Sens. 2019, 2019, 1–16. [Google Scholar] [CrossRef] [Green Version]
  86. Zhang, Z.; Li, P.; Zhao, S.; Lv, Z.; Du, F.; An, Y. An Adaptive Vision Navigation Algorithm in Agricultural IoT System for Smart Agricultural Robots. Comput. Mater. Contin. 2020, 66, 1043–1056. [Google Scholar] [CrossRef]
  87. Wang, Q.; Meng, Z.; Liu, H. Review on Application of Binocular Vision Technology in Field Obstacle Detection. IOP Conf. Ser. Mater. Sci. Eng. 2020, 806, 012025. [Google Scholar] [CrossRef]
  88. Zhang, T.; Li, H.; Chen, D.; Huang, P.; Zhuang, X. Agricultural vehicle path tracking navigation system based on information fusion of multi-source sensor. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2015, 46, 37–42. [Google Scholar]
  89. Dairi, A.; Harrou, F.; Senouci, M.; Sun, Y. Unsupervised obstacle detection in driving environments using deep-learning-based stereovision. Rob. Auton. Syst. 2018, 100, 287–301. [Google Scholar] [CrossRef] [Green Version]
  90. Ji, Y.; Li, S.; Peng, C.; Xu, H.; Cao, R.; Zhang, M. Obstacle detection and recognition in farmland based on fusion point cloud data. Comput. Electron. Agric. 2021, 189, 106409. [Google Scholar] [CrossRef]
  91. Ann, N.Q.; Achmad, M.S.H.; Bayuaji, L.; Daud, M.R.; Pebrianti, D. Study on 3D scene reconstruction in robot navigation using stereo vision. In Proceedings of the 2016 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Selangor, Malaysia, 22 October 2016; pp. 72–77. [Google Scholar]
  92. Song, D.; Jiang, Q.; Sun, W.; Yao, L. A Survey: Stereo Based Navigation for Mobile Binocular Robots. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1035–1046. ISBN 9783642373732. [Google Scholar]
  93. Zhu, Z.X.; He, Y.; Zhai, Z.Q.; Liu, J.Y.; Mao, E.R. Research on Cotton Row Detection Algorithm Based on Binocular Vision. Appl. Mech. Mater. 2014, 670–671, 1222–1227. [Google Scholar] [CrossRef]
  94. Herrera, P.J.; Pajares, G.; Guijarro, M.; Ruz, J.J.; Cruz, J.M. A Stereovision Matching Strategy for Images Captured with Fish-Eye Lenses in Forest Environments. Sensors 2011, 11, 1756–1783. [Google Scholar] [CrossRef] [Green Version]
  95. Zhang, X.; Dai, H.; Sun, H.; Zheng, N. Algorithm and VLSI Architecture Co-Design on Efficient Semi-Global Stereo Matching. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4390–4403. [Google Scholar] [CrossRef]
  96. Kumari, D.; Kaur, K. A Survey on Stereo Matching Techniques for 3D Vision in Image Processing. Int. J. Eng. Manuf. 2016, 6, 40–49. [Google Scholar] [CrossRef] [Green Version]
  97. Hamzah, R.A.; Ibrahim, H. Literature Survey on Stereo Vision Disparity Map Algorithms. J. Sens. 2016, 2016, 1–23. [Google Scholar] [CrossRef] [Green Version]
  98. Luo, W.; Schwing, A.G.; Urtasun, R. Efficient Deep Learning for Stereo Matching. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5695–5703. [Google Scholar]
  99. Son, J.; Kim, S.; Sohn, K. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments. Expert Syst. Appl. 2015, 42, 8830–8839. [Google Scholar] [CrossRef]
  100. Se, S.; Lowe, D.; Little, J. Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks. Int. J. Rob. Res. 2002, 21, 735–758. [Google Scholar] [CrossRef]
  101. Guang-lin, H.; Li, L. The Multi-vision Method for Localization Using Modified Hough Transform. In Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, Los Angeles, CA, USA, 31 March–2 April 2009; pp. 34–37. [Google Scholar]
  102. Rabab, S.; Badenhorst, P.; Chen, Y.-P.P.; Daetwyler, H.D. A template-free machine vision-based crop row detection algorithm. Precis. Agric. 2021, 22, 124–153. [Google Scholar] [CrossRef]
  103. Zhang, Y.; Yang, H.; Liu, Y.; Yu, N.; Liu, X.; Pei, H. Camera Calibration Algorithm for Tractor Vision Navigation. In Proceedings of the 2020 3rd International Conference on E-Business, Information Management and Computer Science, Wuhan, China, 5–6 December 2020; ACM: New York, NY, USA, 2020; pp. 459–463. [Google Scholar]
  104. Corno, M.; Furioli, S.; Cesana, P.; Savaresi, S.M. Adaptive Ultrasound-Based Tractor Localization for Semi-Autonomous Vineyard Operations. Agronomy 2021, 11, 287. [Google Scholar] [CrossRef]
  105. Radcliffe, J.; Cox, J.; Bulanon, D.M. Machine vision for orchard navigation. Comput. Ind. 2018, 98, 165–171. [Google Scholar] [CrossRef]
Figure 1. The concept of Agriculture 5.0 along a crop management cycle.
Figure 2. The most common reasons leading to tractor safety issues.
Figure 3. The main issues related to errors of the visual perception systems of self-steering tractors.
Figure 4. A basic modeling of self-steering tractors.
Figure 5. A fundamental flow of the vision-based navigation systems of tractors.
Figure 6. The basic design of a path tracking control system.
Figure 7. Types of agricultural machinery that use vision-based navigation methods.
Figure 8. The distribution of tractor navigation literature based on the type of vision system.
Figure 9. The ranges of maximum navigation speeds.
Table 1. An indicative list of basic elements for vision-based autonomous self-steering tractors.
Sensors | Object Detection | Localization and Mapping | Path Planning | Path Tracking and Steering Control
--- | --- | --- | --- | ---
Optical sensors | Convolutional neural network (CNN) | Simultaneous localization and mapping (SLAM) | A* algorithm | Proportional integral derivative (PID)
Radar | Region-based fully convolutional network (R-FCN) | Dead reckoning | Dijkstra Algorithm | Stanley controller
Laser scanner | Fast R-CNN | Curb localization | D* Algorithm | Pure pursuit algorithm
Light detection and ranging (LiDAR) | Faster R-CNN | Visual object detection | Rapidly Exploring Random Trees (RRT) | Fuzzy logic controller
Ultrasonic | Histogram of oriented gradients (HOG) | Particle filter | Genetic Algorithm (GA) | Neural Networks
Global positioning system (GPS) | Single shot detector (SSD) | Extended Kalman filter | Ant Colony Algorithm | H-infinity controller
Global navigation satellite system (GNSS) | You only look once (YOLO) | Covariance intersection | Firefly Algorithm | Model predictive controller
Inertial measurement unit (IMU) | | | |
Odometry | | | |
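To make the path tracking and steering control column of Table 1 concrete, the following minimal sketch shows how a pure pursuit controller turns the position of a look-ahead point on a detected crop line into a front-wheel steering command under a simple bicycle-model assumption. It is an indicative example written for this review, not an implementation taken from any of the cited systems; the function and parameter names are illustrative.

```python
import math

def pure_pursuit_steering(x, y, heading, target, wheelbase):
    """Return a front-wheel steering angle (rad) that drives the vehicle
    toward a look-ahead point on the reference path (pure pursuit geometry)."""
    tx, ty = target
    # Angle between the vehicle heading and the line to the look-ahead point
    alpha = math.atan2(ty - y, tx - x) - heading
    # Wrap alpha to [-pi, pi] so large heading values remain well behaved
    alpha = (alpha + math.pi) % (2.0 * math.pi) - math.pi
    # Look-ahead distance from the rear axle to the target point
    lookahead = math.hypot(tx - x, ty - y)
    # Bicycle-model pure pursuit law: delta = atan(2 L sin(alpha) / Ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Example: tractor at the origin heading along +x, wheelbase 2.5 m (assumed),
# look-ahead point 4 m ahead and 0.5 m to the left of the crop-row centerline.
delta = pure_pursuit_steering(0.0, 0.0, 0.0, (4.0, 0.5), wheelbase=2.5)
print(f"steering command: {math.degrees(delta):.1f} deg")
```

In practice, the look-ahead distance is usually scaled with vehicle speed, and the resulting steering command is passed to a lower-level actuator loop such as the PID controller also listed in Table 1.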
Table 2. A comparative table of vision-based navigation algorithms applied to agricultural machinery.
Ref. | Vision System | Visual Sensors | Additional Sensors | Image Processing Algorithm | Application | Performance
--- | --- | --- | --- | --- | --- | ---
[62] | Monocular | SONY-CCD-TR55, focal length 0.011 m | N/A | HSI to detect different colored crops, Horizontal Scanning Method and Least Squares Method to detect the boundary of the crop row | Tested on a 4-wheel drive tractor in a Komatsuna field on 3 rows 20 m long at 0.2 m intervals of 1.7 m and 0.7 m width | 0.024 m maximum error in the offset and 1.5° in the attitude angle for 0.25 m/s navigation speed
[19] | Monocular | Creative Technology Ltd. 'Video Blaster' 3rd revision | N/A | Level-adjustment thresholding to discriminate rows from gaps, an averaging technique using a viewport to locate rows, regression analysis to fit the best line | Tested on a CASE 7140 tractor model to target a white tape 15 mm wide on a 35-s run | 0.020 m accuracy at a speed of 1 m/s
[24] | Monocular | Cohu 2100 monochromatic camera | Trimble 4400 RTK GPS | Histogram-based segmentation, line-by-line low-pass filtering, blob analysis | Tested on a Case 2188 combine harvester, in-lab and on a typical Illinois field with an average yield of 8.61 t/ha | 0.050 m overall accuracy in day and night navigation trials at a speed of 1.3 m/s
[63] | Monocular | Metal-oxide semiconductor-type color camera Hitachi VK-C3400A | N/A | A contrast algorithm applied on a color histogram to detect rows, image scanning with intensity threshold to locate the left edge of rows, and a color discrimination algorithm applied to color histograms, average color intensity calculation, image scanning to locate both edges of rows | Tested on a 15 kW General Electric ElecTrac lawn tractor, in-lab and on a Michigan State University corn farm for 1.5 m | 0.050 m best performance at testing speeds of 0.16, 0.5 and 1.11 m/s
[64] | Monocular | Three-element video camera Model WVD5100 by Panasonic, with a vertically polarizing filter in front of the lens | N/A | Calibration to determine target reference color from average color intensity of histograms for crop row detection, image scanning to locate the row | Tested on a J.I. Case-IH 7110 tractor for a 125-m run by keeping a stable distance of 10 cm from the left side of a corn crop row in varying environmental illumination for 70 trials | 95% of the trials reported SD from the predetermined route identical to that of a human driver with speeds of 1.33 and 3.58 m/s
[65] | Monocular | Sony CCD color camera | Radar, laser, encoders, potentiometers, limit switches, INS, GPS | Intensity segmentation and color segmentation to detect and track the crop cut line | Tested on a standard self-propelled New Holland Model 2550 windrower over 5 km in El Centro, California | 95% of rows were segmented correctly at a speed of 1.94 m/s
[36] | Monocular | CCD color camera | N/A | Ground coordinate image transformation, R and G histogram image intensity integration to extract crop areas | Tested on a weeding machine, Nichinoki Seiko Inc. NAK-5, and a tractor at a farm of Hokkaido University over 150 m | 0.030 m RMS offset error and 0.3° heading error between camera and crop row for speeds of up to 1.3 m/s
[66] | Monocular | Sony FCB-EX7805 CCD camera | N/A | Path segmentation using color thresholding, image scanning to determine path edges, least squares and RANSAC to find best-fit lines, intersection detection | Tested on a Singh sprayer in a greenhouse for a tape path and a plant path of 61 cm width, for both a straight path and a 90° turn | 0.010 m average error inside a straight plant path and 0.011 m before/0.078 m after a 90° turn at a speed of 0.2 m/s
[67] | Monocular | Not specified | Encoders | Infrared images to heighten contrast between plants and soil, amplitude threshold to identify the area of plants, Hough transform to determine the position of rows | Tested on a commercially manufactured tractor for use on horticultural plots inside artificial crop rows | ±0.020 m peak offset error with a speed of 1.5 m/s
[68] | Monocular | A camera equipped with a fish-eye lens with a 130° field of view | N/A | 3D map reconstruction, key frame selection, camera motion computation, hierarchical bundle adjustment | Tested on an experimental electric vehicle, Cycab, on a 127 m trajectory in sunny and cloudy weather | 0.018 m mean lateral deviation in straight line, 0.161 m minimum deviation in curves
[69] | Monocular | Microsoft LifeCam Cinema web camera | IMU (CH Robotics UM6), RTK-GPS/INS (Novatel FlexPack with Tactical Grade IMU) | Lens distortion and down-sample correction, image stabilization, warp of image into overhead view, estimation of dominant parallel texture, image skewing for heading correction, frame template generation | Tested on a John Deere Gator TE electric utility vehicle for spraying weeds in wheat and sorghum stubble fields in Emerald, Australia, during day and night | 0.034 m minimum RMS error for open-loop experiments, 0.028 m for closed-loop experiments
[26] | Monocular | COHU CCD gray-scale camera with a near-infrared filter 780 nm and 8.5 mm focal length lens | N/A | Opening operation on the image to get an intensity-independent gray-level image, thresholding to derive a binary image, perspective transform of world to image coordinates, Hough transform to find crop boxes | Tested on an inter-row cultivator tractor and a mobile robot for 5.5 km in a sugar beet field | 0.027 m SD for the position of the tractor and 0.023 m for the robot
[78] | Monocular | Color video camera | GPS receiver | Thresholding of the H component histogram of the HSI color model to discriminate plants, vertical projection method to detect the number and position of crop rows, linear scanning method to detect crop lines | Tested on a tractor in a corn field in Shang Zhuang under varying illuminations | 0.027 m maximum average error at speeds of 0.6, 1.0 and 1.4 m/s
[104] | Monocular | GoPro Hero 3+ with a near infrared filter at 750 nm | N/A | Lower part of image cropped, green plane extraction from cropped image, thresholding to extract path plane, filtering to remove noise, centroid of path plane determination | Tested on a GEARs Surface Mobility Platform in a laboratory setting and in a peach orchard | 0.023 m RMS in-lab, 0.021 m RMS in-field
[16] | Monocular | Camera with a near infrared optical filter 850 nm and 100 nm bandwidth and an auto-iris lens | N/A | Image intensity distribution, thresholding based on the distribution of image pixels, segmentation of guidance rows | Tested on a Nebraska Ford tractor in a cotton field with straight lines at the Texas agricultural experiment station | 0.35 to 0.55 m heading error
[17] | Monocular | MOS camera with near infrared Oriel model number 57700 SSO-nm narrow-band pass filter and a Computar APC 1:1.3 25 mm auto-iris lens | N/A | Image subsampling to evaluate the intensity of sample size, Bayes thresholding and class mixture coefficient calculation, RLE to identify edge points of crop rows, heuristic detection algorithm to determine parameters of lines belonging to crop rows | Tested on a Ford 7710 tractor in cotton, sorghum and soybean crop rows | −0.65 m to 0.90 m heading error, 0.0 m to −0.27 m offset error, with speeds of 0.88, 1.55 and 2.13 m/s
[30] | Monocular | Pulnix TM500 charge coupled device (CCD) video camera | N/A | Thresholding, feature extraction | Tested on a tractor-mounted steerage hoe for cereals by Garford Farm Machinery at speeds up to 1.66 m/s | 0.013 m standard error in hoe position, independent of speed
[18] | Monocular | Video Blaster camera interface | N/A | Regression analysis to estimate the best-fit line through a row of blobs within a frame, thresholding for brightness adjustment | Tested on a Case Maxxum 100 horsepower tractor for 100 m in a cotton field | 0.5 m initial displacement error from the row for speeds of 1 m/s and 5 m/s, settling at 0.050 m after 20 m
[31] | Monocular | Standard CCD camera sensitive in the near infra-red | N/A | Bandpass filtering to extract image intensity due to crop rows | Tested on a mechanical hoe in winter wheat at Silsoe, UK | 0.0156 m positional error at a speed of 1.6 m/s
[32] | Monocular | Standard monochrome CCD with a near infrared bandpass filter and 4.8 mm focal length | N/A | Saturation detection in histograms of pixel intensities, image thresholding, Kalman filtering to extract row features | Tested on a hoe of 6 m span of standard design by Garford Farm Machinery Ltd. for 110 m | 0.016 m SD in lateral error and ±0.010 m mean bias at 1.6 m/s
[33] | Monocular | Sony FCB-EX780S "block" single CCD analog color video camera | Lidar SICK LMS 200, DGPS receiver | Segmentation algorithm, adaptive thresholding, morphological operations | Tested on a John Deere 6410 tractor in a citrus grove on a 22 m straight path and a 17 m curve | 0.028 m average error in curved path at a speed of 3.1 m/s
[59] | Binocular | Bumblebee2 parallel binocular camera BB2-08S2C-38 (baseline 120 mm, focal length 3.8 mm, horizontal field of view 66°) | N/A | Improved 2G-R-B grayscale transformation, Harris corner point detection, rank NSAD region matching, RANSAC refining of disparity, location of 3D position, crop row classification, centerline pathway determination | Tested on a manual four-wheel trolley in cotton fields on cloudy and sunny days with a speed between 1.0 and 2.0 m/s | 92.78% detection rate for average deviation angle, 1.05° absolute average value, 3.66° average SD
[29] | Binocular | STH-MD1 (Videre Design, CA) stereo camera | RTK-GPS | Disparity map computing, 3D point reconstruction, C-to-V transformation, elevation map creation, median filtering, navigation point determination | Tested on a John Deere 7700 tractor in soya bean fields | 0.050 m RMS error of lateral deviation on straight and curved rows at speeds up to 3.0 m/s
[86] | Binocular | Bumblebee2 binocular vision system Model BB2-03S2C-60 | N/A | SURF for feature extraction and matching to obtain feature pairs, confidence density image construction by integrating the enhanced elevation image and the corresponding binarized crop row image | Tested on a smart agricultural robot manufactured in Shanghai, China, on S-type and O-type in-lab simulated crop plant leaf paths | 0.7° absolute mean of turning angle and 1.5° absolute SD for speeds less than 0.5 m/s
[57] | Binocular | Bumblebee 2 by Point Grey | N/A | Excess green minus excess red function to transform RGB to greyscale, smallest uni-value segment assimilating nucleus detector to detect the contour of crop rows, stereo matching with Census transform to calculate the disparity of corner points of crop rows | Tested on a Revo Leopard TG1254 tractor in a cotton leaf line with a crop distance of 0.60 cm, row spacing of 1.20 m, and 0.25 m minimum and 0.50 m maximum height of leaves at 1.2 m/s | 0.95° mean absolute deviation of course angle/1.26° SD, 0.040 m mean absolute deviation of lateral position/0.049 m SD, 2.99° mean absolute deviation of front wheel angle/0.036 m SD
[85] | Binocular | RER-720P2 CAM-90 by RERVISION Technology Co., Ltd. (Shenzhen, China) | RTK-GPS | Threshold segmentation, RGB to HSV color space conversion, HSV channel segmentation, Otsu threshold segmentation of the V component and morphological filtering, point operation of S and V components and Otsu threshold segmentation for shadow processing | Tested on an experimental autonomous carrier on a field road 1.2 m wide with changes in altitude and curvature | 0.031 m, 0.069 m and 0.105 m mean deviation for straight, multi-curvature and undulating roads, respectively, at 2 m/s
[21] | Multi-vision | Two COHU 2100 series monochrome cameras with 800 nm narrow band NIR filters | N/A | Full image processing, vertical transition image reduction and adaptive fuzzy linear regression | Tested on a Case 2188 Axial-Flow combine in a corn field | Not reported (the algorithm satisfactorily guided the combine)
[22] | Multi-vision | Two COHU 2100 series monochrome cameras with 800 nm filters and 3 mm focal length lenses | AFS beacon GPS | Two-class K-means threshold segmentation, run length encoding to simplify the segmented image, classification, transition detection, sequential linear regression, adaptive fuzzy evaluation | Tested on a Case 2188 Axial-Flow combine in a corn field | Not reported (the system successfully harvested corn autonomously at speeds of up to 2.66 m/s)
[23] | Multi-vision | Three cameras | Trimble (Sunnyvale, CA) Ag122 beacon GPS, Trimble 4400 RTK GPS receiver | Adaptive segmentation based on histograms of image intensity, row-by-row low-pass filtering, blob analysis by run length encoding, linear regression to track the center inter-row | Tested on a Case 2188 rotary combine in a corn field of 4.6 ha at day and night for 14 runs with a maximum speed between 0.8 m/s and 1.3 m/s | 0.006 m overall accuracy with 0.133 m SD, 0.003 m average daytime accuracy and 0.133 m SD, −0.024 m average nighttime accuracy and 0.129 m SD
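Several of the monocular pipelines summarized in Table 2 follow the same broad pattern of vegetation-index transformation, thresholding, morphological filtering and line fitting. The sketch below illustrates one such pattern, combining an excess-green transformation, Otsu thresholding and a probabilistic Hough transform; it is a generic example assembled for this review, not the algorithm of any single cited work, and the parameter values and file name are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_crop_rows(bgr_image):
    """Return candidate crop-row line segments from a field image.

    Pipeline: excess-green vegetation index -> Otsu thresholding ->
    morphological opening -> probabilistic Hough transform.
    Parameter values are illustrative, not tuned for any specific crop.
    """
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    # Excess green index (ExG = 2G - R - B) highlights vegetation pixels
    exg = 2.0 * g - r - b
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method picks a global threshold separating plants from soil
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small weed and noise blobs
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Probabilistic Hough transform fits straight segments to the row pixels
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=30)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# Usage with a hypothetical frame captured by the on-board camera
frame = cv2.imread("field_frame.jpg")
if frame is not None:
    segments = detect_crop_rows(frame)
    print(f"{len(segments)} candidate row segments")
```

The detected segments are typically clustered by orientation and intercept, and the centerline between adjacent rows is then passed to the guidance controller as the navigation path.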
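For the binocular systems in Table 2, a common underlying step is the conversion of a disparity map into metric depth before crop rows or obstacles are located in 3D. The following sketch shows this step with OpenCV's semi-global block matcher; the matcher settings, focal length and baseline are hypothetical placeholders rather than values taken from the cited studies.

```python
import cv2
import numpy as np

def disparity_to_depth(left_gray, right_gray, focal_px, baseline_m):
    """Compute a dense depth map (metres) from a rectified stereo pair.

    Semi-global block matching is one of several matchers used in stereo
    systems; the parameters below are illustrative, not tuned.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)
    # OpenCV returns disparities as 16-bit fixed point scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mark invalid / occluded pixels
    # Pinhole stereo geometry: Z = f * B / d
    return focal_px * baseline_m / disparity

# Example call with a 0.12 m baseline (as reported for the Bumblebee2 in
# Table 2) and a hypothetical focal length of 800 px from calibration:
# depth = disparity_to_depth(left_gray, right_gray, focal_px=800.0,
#                            baseline_m=0.12)
```

The resulting depth map can then feed the elevation-map or 3D point-cloud stages described for the binocular entries of Table 2.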
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
