Sensors · Article · Open Access

29 December 2022

Hyperspectral Imaging for Mobile Robot Navigation

1 Institute of Automatic Control and Robotics, Warsaw University of Technology, 02-525 Warsaw, Poland
2 Łukasiewicz Research Network—Industrial Research Institute for Automation and Measurements PIAP, 02-486 Warsaw, Poland
* Author to whom correspondence should be addressed.
This article belongs to the Section Physical Sensors

Abstract

The article presents the application of a hyperspectral camera in mobile robot navigation. Hyperspectral cameras are imaging systems that capture a wide range of the electromagnetic spectrum. This allows them to detect a broader range of colors and features than traditional cameras and to perceive the environment more accurately. Several surface types, such as mud, can be challenging to detect with an RGB camera. In our system, the hyperspectral camera is used for ground recognition (e.g., grass, bumpy road, asphalt). Traditional global path planning methods take the shortest path length as the optimization objective. We propose an improved A* algorithm to generate a collision-free path. Semantic information makes it possible to plan a feasible and safe path in a complex off-road environment, taking traveling time as the optimization objective. We present experimental results for data collected in a natural environment. An important novelty of this paper is the use of a modified nearest neighbor method for hyperspectral data analysis, combined with the use of the resulting data for path planning in the same work. The nearest neighbor method allows us to adapt the robotic system much faster than neural networks would. As our system is continuously evolving, we intend to examine the performance of the vehicle on various road surfaces, which is why we sought a classification system that does not require a prolonged learning process. We aim to demonstrate that incorporating a hyperspectral camera can not only enhance route planning but also aid in determining parameters such as speed and acceleration.

1. Introduction

Over the last decades, we have observed that the market for autonomous cars and transport within industrial plants is developing in many countries.
Research on this type of autonomous system at the Łukasiewicz Research Network—Industrial Research Institute for Automation and Measurements (PIAP) has been conducted since 2018, when the ATENA project was launched.
Autonomous systems for terrain UGV (unmanned ground vehicle) platforms with a follow-the-leader function were developed as part of research and development work for defense and security funded by the National Center for Research and Development in Poland under the program Future Technologies for Defense in the Competition of Young Scientists. The goal of the ATENA project was to develop an autonomous system for unmanned ground vehicles operating in rough, unknown terrain. The ATENA system builds a model of the environment based on a point cloud. The system was tested in the field using two off-road cars converted into UGVs by Łukasiewicz-PIAP through the addition of its drive-by-wire system. These cars are shown in Figure 1.
Figure 1. Łukasiewicz-PIAP ATENA demonstrator of technology.
The traversability estimation algorithm used in ATENA transforms the point cloud into a 2.5D grid-based map. A sample 3D point cloud is shown in Figure 2. The main problems occur when the ATENA system plans paths in fields covered by vegetation.
Figure 2. Łukasiewicz-PIAP ATENA demonstrator of technology in obtaining 3D data.
An autonomous system based on geometric rules is prone to misclassifying isolated vegetation as a rigid obstacle that cannot be passed. This problem is well known in the field of autonomous driving in rough terrain and is one of the crucial problems to solve for the special and combat use of UGVs. An example of unstructured terrain is shown in Figure 3.
Figure 3. Łukasiewicz-PIAP ATENA demonstrator of technology in unstructured terrain.
Autonomous systems for special applications are among the most innovative and essential aspects of today's use of UGVs in modern warfare because UGVs can keep soldiers out of the danger zone. Good maneuverability in rough terrain is key to survivability on the battlefield. To move in rough terrain, an autonomous system must be equipped with a traversability estimation algorithm. The scope of this work concerns hyperspectral imaging for UGV use and adding this functionality to the autonomous systems of Łukasiewicz-PIAP UGVs, which will give a market advantage in this field. Hyperspectral imaging can improve the ability of UGVs to plan a safe path in rough terrain. Based on this sensor, it is possible to classify the materials of the terrain ahead of the UGV, and this information can be used to modify the cost of traversing the terrain. The approach can be extended to classify obstacles in the field as traversable. Humans perform this classification automatically: when driving on a dirt road, a driver automatically decides that isolated tall vegetation can be driven over. Usually, the driver keeps to the road even if the path over the grass is shorter, and an autonomous algorithm must reproduce this behavior. This paper describes a path planning method that captures this behavior through traversability estimation based on hyperspectral imaging.

ATENA Navigation System

The ATENA vehicle is equipped with a set of sensors: three Velodyne VLP-16 LIDAR sensors (16 layers, 100 m range), five Basler acA1920-48gc cameras (50 frames per second at 2.3 MP resolution), an Xsens MTi-G-710 IMU, and a Cubert Q285 hyperspectral camera. The specifications of the hyperspectral camera are shown in Figure 4.
Figure 4. The image and table with properties of the hyperspectral camera Cubert Q285.
Based on LIDAR measurements, the ATENA system builds a grid-based map of the environment. This part of our research is described in [1]. In the classic approach, the cells are divided into two groups: occupied and obstacle-free. In our system, a semantic label is attached to each obstacle-free cell. This label represents the type of ground and is derived from the hyperspectral camera data.
A modified A* algorithm is then used for collision-free path planning. The algorithm takes the type of surface into account, which allows us to consider various optimization criteria in the path planning process: fuel consumption, route length, and travel time.
This paper is organized as follows: following the introduction is the section discussing related work. In Section 3, the method of ground recognition is explained; this section contains experimental results illustrating the advantages of our approach. Section 4 presents the influence of hyperspectral imaging in the collision-free path planning method considering the cost of driving over different terrain. The article concludes with a summary and bibliography.

3. The Methodology

Our approach uses a supervised learning method to classify the ground based on hyperspectral camera images.
The algorithm consists of the following steps:
  • Data assembling,
  • Data reduction,
  • Normalization and choosing the metric,
  • Creating models of the classes (learning phase).

3.1. Data Assembling

The hyperspectral images can be represented as a set of triples:
$\{(x_{ij}, y_{ij}, \lambda_{ij})\}$
where x and y represent the 2D spatial dimensions of the scene, and λ represents the spectral dimension. The parameter λ_ij can be used during the segmentation process.
Figure 5 presents the result of potted plant image segmentation based on spectral responses. The image is represented as a grid of cells, and the average value of λ is computed for each cell. Two cells are assumed to belong to the same class if the corresponding values of λ are close to each other.
Figure 5. The image taken by a Cubert Q285 camera (a), and the result of segmentation (b).
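The cell-wise segmentation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cube layout (an H × W × bands NumPy array), the cell size, and the Euclidean tolerance `tol` on mean spectra are all our assumptions.

```python
import numpy as np

def segment_by_mean_spectrum(cube, cell=4, tol=0.1):
    """Group square cells of a hyperspectral cube by mean spectral response.

    cube: H x W x B array (B spectral bands).  Two cells are assigned
    the same class when their mean spectra lie within `tol` of each
    other (Euclidean distance) -- a simple reading of the paper's
    'values of lambda are near each other' criterion.
    """
    h, w, _ = cube.shape
    class_means = []                      # one reference spectrum per class
    out = np.zeros((h // cell, w // cell), dtype=int)
    for i in range(h // cell):
        for j in range(w // cell):
            # Mean spectrum of this square cell.
            m = cube[i*cell:(i+1)*cell, j*cell:(j+1)*cell, :].mean(axis=(0, 1))
            # Assign the first existing class whose reference is close enough.
            for k, ref in enumerate(class_means):
                if np.linalg.norm(m - ref) < tol:
                    out[i, j] = k
                    break
            else:
                class_means.append(m)     # otherwise open a new class
                out[i, j] = len(class_means) - 1
    return out
```

The greedy first-match assignment keeps the sketch short; a production version would likely merge classes or cluster the cell spectra instead.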
Data for traversability estimation were collected as the robot traveled in different light and weather conditions. The vehicle moved on asphalt, a bumpy road, a forest road, and grass. Figure 6 shows images of each type of road.
Figure 6. Different types of roads and corresponding spectral response: (a) dirt road, (b) forest road (land), (c) bumpy/mixed road (with ruts).
The photos are divided into the square cells shown in Figure 6 (red grid). A spectral distribution is created for each cell, and a label (class) is assigned.

3.2. Data Reduction

The distributions of the spectral response of areas belonging to the same class and similar values are averaged using the kernel density estimation (KDE) technique [36].
The figure presents the idea behind the method. On the left are images obtained from the hyperspectral camera. The squares indicate selected road parts for which wavelength distributions were created. The plots on the right show the spectral distributions; the solid line marks the result of averaging. In our approach, the spectral distribution (λ) is treated as a feature vector and is used during classification.
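A minimal sketch of KDE-based averaging of class spectra, assuming SciPy's `gaussian_kde` with band intensities used as sample weights on the wavelength axis; the bandwidth factor and the final rescaling to [0, 1] are our assumptions, not details from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def average_spectra_kde(spectra, wavelengths, bandwidth=0.15):
    """Average several per-cell spectral distributions of one class
    into a single smooth feature vector.

    spectra: list of 1-D intensity arrays (one per cell).
    wavelengths: 1-D array of band centres, same length as each spectrum.
    The averaged intensities act as weights over the wavelength axis,
    and the KDE smooths the result into one representative curve.
    """
    stacked = np.mean(np.vstack(spectra), axis=0)        # plain average first
    kde = gaussian_kde(wavelengths, weights=stacked, bw_method=bandwidth)
    smoothed = kde(wavelengths)                          # evaluate on band grid
    return smoothed / smoothed.max()                     # scale to [0, 1]
```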
To avoid the influence of lighting conditions, the values of λ have been normalized according to the formula:
$r_i^N = \frac{r_i}{r_{max} - r_{min}}$
where r_i is the intensity of the ith wavelength.
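The normalization above takes one line in NumPy; the vectorized form is our choice of presentation.

```python
import numpy as np

def normalize_spectrum(r):
    """Range-normalize a spectral response: divide each intensity by the
    span r_max - r_min, removing the overall brightness scale introduced
    by lighting conditions (per the formula above)."""
    r = np.asarray(r, dtype=float)
    return r / (r.max() - r.min())
```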
During the experiments, it turned out that the same type of area (grass) has different values of λ for different seasons (Figure 7). Therefore, some classes are represented by several vectors of features.
Figure 7. Grass at a different time of year: (a) grass in May, (b) grass in October.
To classify an unknown area, we need to find the nearest area with a known class membership [37]. For this purpose, we have to define a metric to compare the distance between spectral distributions. In our algorithm, we adopted methods of comparing histograms [38].

3.3. Choosing the Proper Distance Function

The choice of the appropriate distance function significantly affects the classification result. We adopted methods that are used to compare histograms: the correlation between histograms (Equation (3)), the product of histograms (Equation (5)), and the chi-square distance (Equation (6)). It is assumed that x_i and y_i are the corresponding values of the histogram pair (X, Y).
$d_c(X,Y) = \frac{\sum_{i=1}^{n}(x_i - \hat{x})(y_i - \hat{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \hat{x})^2 \sum_{i=1}^{n}(y_i - \hat{y})^2}}$
where:
$\hat{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$
$d_c(X,Y)$ — correlation between histograms.
$d(X,Y) = \sum_{i=1}^{n} \min(x_i, y_i)$
$d(X,Y)$ — product of histograms.
$d_{\chi^2}(X,Y) = \sum_{i=1}^{n} \frac{(x_i - y_i)^2}{x_i + y_i}$
$d_{\chi^2}(X,Y)$ — chi-square method.
The distance between two different classes should be as large as possible. The metric values for elements belonging to the same class should be close to zero. The experiments show that the best results were obtained for the chi-square function.
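The three histogram comparison functions can be sketched in NumPy as follows; the small `eps` guard against empty bins in the chi-square distance is our addition.

```python
import numpy as np

def correlation_distance(x, y):
    """Correlation between histograms, Equation (3): 1 for identical shapes."""
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))

def intersection_distance(x, y):
    """'Product of histograms' (histogram intersection), Equation (5)."""
    return np.sum(np.minimum(x, y))

def chi_square_distance(x, y):
    """Chi-square distance, Equation (6) -- the best-performing metric in
    the paper's experiments.  eps avoids division by zero on empty bins."""
    eps = 1e-12
    return np.sum((x - y) ** 2 / (x + y + eps))
```

Note that the three functions behave differently: correlation and intersection grow with similarity, while chi-square shrinks toward zero for matching histograms, which is why the text asks for near-zero values within a class.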

3.4. Classification Phase

In our approach, the nearest-neighbor supervised learning algorithm is used. The examples in the dataset have labels assigned to them, and their classes are known (Section 3.2). Figure 8 shows the idea behind the method. The algorithm consists of the following steps: (1) calculating the distance between a test example and dataset examples, and (2) finding the minimum distance and corresponding class label. The label is the result of the classification.
Figure 8. Classification diagram.
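A sketch of the 1-nearest-neighbor step, assuming class models are stored as a label-to-feature-vectors mapping (a class may keep several seasonal vectors, as noted in Section 3.2); the names `models` and `distance` are illustrative, not from the paper.

```python
import numpy as np

def classify(spectrum, models, distance):
    """Return the label of the model vector nearest to the test spectrum.

    spectrum: normalized feature vector of the unknown area.
    models:   dict mapping class label -> list of feature vectors.
    distance: callable (a, b) -> float, e.g. the chi-square distance.
    """
    best_label, best_d = None, np.inf
    for label, vectors in models.items():
        for v in vectors:
            d = distance(spectrum, v)
            if d < best_d:                 # keep the minimum distance
                best_label, best_d = label, d
    return best_label
```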

3.5. Experimental Results

Classification accuracy was determined by adopting a chi-square metric. Figure 9 shows the confusion matrix (in percent). The matrix compares the actual class values with those predicted by the learning model. Vegetated terrain is not confused with other terrains.
Figure 9. Confusion matrix for terrain classification (%).
After classifying the cells of the area, semantic context should be taken into account. For example, if a cell of an image is classified as grass but is surrounded by cells classified as road, the classification result is changed. This resembles the method of n-nearest neighbors [37].
Figure 10 shows the result of the classification. Gray marks parts of the asphalt road, and green marks the grass. Context is taken into account in the classification process, i.e., if a single fragment is surrounded by fragments belonging to another class, its label is changed.
Figure 10. The segmentation results, (a) original image, (b) image after segmentation.
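The contextual correction can be sketched as a simple post-processing pass: a cell whose eight neighbors all share a different class takes that class. Treating only fully surrounded interior cells, and leaving border cells unchanged, are simplifying assumptions of this sketch.

```python
import numpy as np

def apply_context(labels):
    """Relabel isolated cells: if all 8 neighbours of an interior cell
    belong to one class different from the cell's own, adopt it."""
    out = labels.copy()
    h, w = labels.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            block = labels[i-1:i+2, j-1:j+2]
            neigh = np.delete(block.ravel(), 4)       # drop the centre cell
            values, counts = np.unique(neigh, return_counts=True)
            if counts.max() == 8 and values[counts.argmax()] != labels[i, j]:
                out[i, j] = values[counts.argmax()]   # isolated cell: relabel
    return out
```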

4. Application of Surface Recognition in Collision-Free Path Planning

In this article, we suggest using the diffusion method for path planning [1,39]. The main advantage of the approach is that we can easily consider the cost of driving over different terrains.
The map of the environment is represented as a grid of cells (array map). Obstacle-free cells represent the possible robot positions (states). Two states are distinguished: the initial robot position ( c R ) and the goal position ( c G ). In classical path planning systems, we divide cells into two classes: free from obstacles and occupied. For 2.5D maps, class membership is determined by thresholding. A cell is classified as occupied if the observed height exceeds a certain threshold and is free of obstacles otherwise. Figure 11 presents the map built by the ATENA navigation system [1]. Red fragments represent areas occupied by obstacles, green—free of obstacles, white—unexplored. The algorithm consists of the following steps:
Figure 11. The map of the environment: (a) the map, (b) the path.
  • A diffusion map (array v) is initialized in the first step of the path planning algorithm. A large value is assigned to the cell representing the goal position ( c G ), and the value 0.0 is assigned to all other cells.
  • For each unoccupied cell ( map_ij ), the value ( v_ij ) is calculated according to the formula:
    $v_{ij} = \max_{v_{kl} \in N_{ij}} \left( v_{kl} - dist(c_{kl}, c_{ij}) \right)$
    where N_ij is the neighborhood of cell ij and dist is the distance between cells. This process continues until stability is established.
  • The path (list of cells) is generated during the next step. The first cell represents the initial robot’s position. The next one is indicated by the neighbor of ( c R ) with a maximum value of v k l . The process continues until the cell with the maximum function value is reached. Figure 11b represents the path generated by the algorithm. A black line indicates the planned path. R—the robot’s initial position, G—the target’s location.
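The three steps above can be sketched as follows, assuming an 8-connected grid with Euclidean step distances and a fixed large goal value; this is an illustrative diffusion/wavefront implementation, not the ATENA code.

```python
import numpy as np

FREE, OBSTACLE = 0, 1

def diffuse(grid, goal, big=1000.0):
    """Value diffusion: the goal gets a large value; every free cell
    repeatedly takes max over neighbours of (neighbour value - step
    distance), until the map stops changing."""
    v = np.zeros(grid.shape)
    v[goal] = big
    changed = True
    while changed:
        changed = False
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                if grid[i, j] == OBSTACLE or (i, j) == goal:
                    continue
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (di or dj) and 0 <= ni < grid.shape[0] \
                                      and 0 <= nj < grid.shape[1]:
                            cand = v[ni, nj] - np.hypot(di, dj)
                            if cand > v[i, j]:
                                v[i, j] = cand
                                changed = True
    return v

def extract_path(v, start, goal):
    """Greedy ascent on the diffusion map from start to goal."""
    path, cur = [start], start
    while cur != goal:
        i, j = cur
        neigh = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if (di or dj) and 0 <= i + di < v.shape[0]
                              and 0 <= j + dj < v.shape[1]]
        cur = max(neigh, key=lambda c: v[c])   # climb toward the goal value
        path.append(cur)
    return path
```

Because v strictly increases toward the goal, the greedy ascent cannot cycle on a reachable map; obstacle cells keep the initial value 0.0 and are never preferred while a free neighbor exists.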
In the case of outdoor robots, we look for the path with the shortest travel time. The travel time (robot speed) can be related to the type of ground, distance to obstacles, etc. To solve this problem, we have to modify the map of the environment and the diffusion method. The map of the environment represents the cost function. The value depends on the type of ground detected, thanks to hyperspectral imaging. In the case of a perfectly smooth terrain (asphalt), the value of this parameter equals 0.0. In the case of uneven terrain, the value is increased proportionally to the increase in movement cost (time).
Figure 12 presents the map if the type of ground is considered. Red fragments represent areas occupied by obstacles, green—grass, gray—asphalt, white—unexplored.
Figure 12. The map of the environment if the type of ground is recognized: (a) the map, (b) the path.
Equation (7) is modified as follows:
$v_{ij} = \max_{v_{kl} \in N_{ij}} \left( v_{kl} - dist(c_{kl}, c_{ij}) \cdot map_{kl} \right)$
Figure 12b represents the path generated by the algorithm. A black line indicates the planned path. R—the robot’s initial position, G—the target’s location.
The path’s start and end are exactly the same as in Figure 11, but because the cost of travel for different terrains is taken into account, the paths differ significantly. Most of the second path runs on asphalt.
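A sketch of the cost-aware update, with two assumptions worth flagging: we scale the step distance by (1 + cost) rather than by the raw terrain cost, so that asphalt (cost 0.0) keeps its geometric step cost instead of becoming free to traverse, and we read the cost of the cell being updated, while the equation indexes the neighboring cell.

```python
import numpy as np

FREE, OBSTACLE = 0, 1

def diffuse_with_cost(grid, cost, goal, big=1000.0):
    """Cost-aware value diffusion: the step cost is the geometric
    distance scaled by (1 + terrain cost), so rough ground is
    proportionally more expensive than smooth asphalt."""
    v = np.zeros(grid.shape)
    v[goal] = big
    changed = True
    while changed:
        changed = False
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                if grid[i, j] == OBSTACLE or (i, j) == goal:
                    continue
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (di or dj) and 0 <= ni < grid.shape[0] \
                                      and 0 <= nj < grid.shape[1]:
                            # Entering a costly cell penalizes the step.
                            cand = v[ni, nj] - np.hypot(di, dj) * (1.0 + cost[i, j])
                            if cand > v[i, j]:
                                v[i, j] = cand
                                changed = True
    return v
```

On a map with a cheap asphalt row next to an expensive grass row, the diffusion values along the asphalt come out higher, so the greedy path extraction keeps to the asphalt as in Figure 12b.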

Discussion

We want to show the advantages of our system by comparing it with the algorithm presented in [40]. The paper proposes a hierarchical path-planning method for a wheeled robot. It is assumed that the vehicle moves in a partially known uneven terrain, and the 3D map stores elevation data, such as the terrain’s slope, step, and unevenness. The A* algorithm is used for global path planning. The Q-learning method is employed to avoid locally detected obstacles.
In our opinion, geometric information about the surface is insufficient when the robot moves on difficult, rugged, uneven terrain. An area defined as impassable based on the elevation map (for example, high tufts of grass) may be accessible to the robot. Without knowledge of the ground structure, it is difficult to determine whether the robot can drive up a given hill. A hill covered by asphalt is more accessible than a hill covered by mud; both hills could have the same geometry, but the cost of passing them through is different. Additionally, hyperspectral imaging can give information that will change the permissible speed of the robot on a given surface. Similarly, in the case of the Q-learning algorithm, it is worth taking into account semantic information about obstacles.

5. Conclusions and Future Works

In this paper, we presented the use of a hyperspectral camera in the navigation of a mobile robot. An effective surface classification method has been developed. An important novelty of this paper is the use of a modified nearest neighbor method. Thanks to this solution, we can quickly teach the system and add or remove surface classes. This feature is essential while the system is being developed and modified. Unlike neural networks, the algorithm is transparent and non-parametric. Transparency of classification is vital in security systems; in the case of a CNN, a misleading or confusing image context can cause the model to make incorrect predictions.

Experiments carried out in a natural environment have shown the effectiveness of the proposed solution. It has been shown that information about the surface type can easily be included in path planning and in determining the permissible speeds of the vehicle. In the future, we plan to expand the system by adding new classes of surface types. We want to use the surface type information in global path planning and in controlling the robot, e.g., determining the allowable values of linear or angular acceleration. Recognition of surfaces with the help of a camera has been described in the literature, but we have not found an article in which information from a hyperspectral camera was used to determine the cost of driving on a given surface and the permissible speeds.

Our experience shows that a hyperspectral camera is currently not cost-effective for a robot moving in a structured environment. On a robot moving in difficult, uneven terrain, however, it can significantly improve the system's efficiency and increase the safety of the mission. In the current version of the system, the vehicle's dynamics are taken into account only to a minimal extent. The mass of the vehicle, the coefficient of friction, and/or torque limitations can make a particular surface impassable for the robot. The corresponding grid-based map cells are then marked as occupied by obstacles, and the cost of travel depends on the distance from obstacles. In the future, we will adapt the methods described in [41,42] to our path planning algorithm to allow the robot to move at higher velocities and to consider the dynamics to a broader extent.
We plan to expand the system by adding a robot self-location module, in which semantic information will be used in addition to metric information.
In the articles discussed in Section 2, robots used hyperspectral cameras to carry out specific tasks, such as plant irrigation tests, not for path planning, so the system proposed in this work can be successfully used in robots that already have an HS camera installed for other purposes. The price of hyperspectral cameras is still high, but it is constantly falling; ten years ago, the price of 3D laser rangefinders was also high, and now they are standard equipment for mobile robots. In a natural, unstructured environment, the use of information obtained from a hyperspectral camera can improve not only the efficiency but also the safety of the system.

Author Contributions

Methodology and writing, K.J., B.S., R.W. and J.R.; software for path planning, B.S.; hyperspectral images and data preparation, R.W.; classification method, K.J.; related work, J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The source code and data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duszak, P.; Siemiatkowska, B.; Więckowski, R. Hexagonal Grid-based Framework for Mobile Robot Navigation. Remote Sens. 2021, 13, 4216.
  2. Tomsett, C.; Leyland, J. Development and Testing of a UAV Laser Scanner and Multispectral Camera System for Eco-geomorphic Applications. Sensors 2021, 21, 7719.
  3. Aasen, H.; Burkart, A.; Bolten, A.; Bareth, G. Generating 3D Hyperspectral Information with Lightweight UAV Snapshot Cameras for Vegetation Monitoring: From Camera Calibration to Quality Assurance. ISPRS J. Photogramm. Remote Sens. 2015, 108, 245–259.
  4. Yue, J.; Yang, G.; Li, C.; Li, Z.; Wang, Y.; Feng, H.; Xu, B. Estimation of Winter Wheat Above-ground Biomass Using Unmanned Aerial Vehicle-based Snapshot Hyperspectral Sensor and Crop Height Improved Models. Remote Sens. 2017, 9, 708.
  5. Liu, M.; Zhang, Z.; Liu, X.; Yao, J.; Du, T.; Ma, Y.; Shi, L. Discriminant Analysis of the Damage Degree Caused by Pine Shoot Beetle to Yunnan Pine Using UAV-based Hyperspectral Images. Forests 2020, 11, 1–22.
  6. Teague, J.; Megson-Smith, D.A.; Allen, M.J.; Day, J.C.C.; Scott, T.B. A Review of Current and New Optical Techniques for Coral Monitoring. Oceans 2022, 3, 30–45.
  7. Cortesi, I.; Masiero, A.; Tucci, G.; Topouzelis, K. UAV-Based River Plastic Detection with a Multispectral Camera. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B3-2022, 855–861.
  8. Mei, A.; Zampetti, E.; Mascio, P.D.; Fontinovo, G.; Papa, P.; D’Andrea, A. ROADS-Rover for Bituminous Pavement Distress Survey: An Unmanned Ground Vehicle (UGV) Prototype for Pavement Distress Evaluation. Sensors 2022, 22, 3414.
  9. Ilehag, R.; Leitloff, J.; Weinmann, M.; Schenk, A. Urban Material Classification Using Spectral and Textural Features Retrieved from Autoencoders. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 5, 25–32.
  10. Bajić, M.; Bajić, M. Modeling and Simulation of Very High Spatial Resolution UXOs and Landmines in a Hyperspectral Scene for UAV Survey. Remote Sens. 2021, 13, 837.
  11. Gorsevski, P.V.; Gessler, P.E. The Design and the Development of a Hyperspectral and Multispectral Airborne Mapping System. ISPRS J. Photogramm. Remote Sens. 2009, 64, 184–192.
  12. Xiang, H.; Tian, L. Method for Automatic Georeferencing Aerial Remote Sensing (RS) Images from an Unmanned Aerial Vehicle (UAV) Platform. Biosyst. Eng. 2011, 108, 104–113.
  13. Beauvisage, A.; Ahiska, K.; Aouf, N. Robust Multispectral Visual-Inertial Navigation with Visual Odometry Failure Recovery. IEEE Trans. Intell. Transp. Syst. 2022, 23, 9089–9101.
  14. Andreou, C.; Karathanassi, V.; Kolokoussis, P. Investigation of Hyperspectral Remote Sensing for Mapping Asphalt Road Conditions. Int. J. Remote Sens. 2011, 32, 6315–6333.
  15. Knaeps, E.; Sterckx, S.; Strackx, G.; Mijnendonckx, J.; Moshtaghi, M.; Garaba, S.P.; Meire, D. Hyperspectral-reflectance Dataset of Dry, Wet and Submerged Marine Litter. Earth Syst. Sci. Data 2021, 13, 713–730.
  16. Pozo, S.D.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Felipe-García, B. Vicarious Radiometric Calibration of a Multispectral Camera on Board an Unmanned Aerial System. Remote Sens. 2014, 6, 1918–1937.
  17. Guo, Y.; Senthilnath, J.; Wu, W.; Zhang, X.; Zeng, Z.; Huang, H. Radiometric Calibration for Multispectral Camera of Different Imaging Conditions Mounted on a UAV Platform. Sustainability 2019, 11, 978.
  18. Krishna, M.B.; Porwal, A. Hyperspectral Image Processing and Analysis. Curr. Sci. 2015, 108, 833–841.
  19. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent Advances in Techniques for Hyperspectral Image Processing. Remote Sens. Environ. 2009, 113, S110–S122.
  20. Kozinov, I.A.; Maltsev, G.N. Development and Processing of Hyperspectral Images in Optical–electronic Remote Sensing Systems. Opt. Spectrosc. 2016, 121, 934–946.
  21. Elfes, A. Using Occupancy Grids for Mobile Robot Perception and Navigation. Computer 1989, 22, 46–57.
  22. Reinoso, O.; Payá, L. Special Issue on Mobile Robots Navigation. Appl. Sci. 2020, 10, 1317.
  23. Belter, D.; Skrzypczyński, P. Rough Terrain Mapping and Classification for Foothold Selection in a Walking Robot. J. Field Robot. 2011, 28, 497–528.
  24. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots, 2nd ed.; Intelligent Robotics and Autonomous Agents Series; MIT Press: Cambridge, MA, USA, 2011.
  25. Sevastopoulos, C.; Konstantopoulos, S. A Survey of Traversability Estimation for Mobile Robots. IEEE Access 2022, 10, 1.
  26. Thoresen, M.; Nielsen, N.H.; Mathiassen, K.; Pettersen, K.Y. Path Planning for UGVs Based on Traversability Hybrid A*. IEEE Robot. Autom. Lett. 2021, 6, 1216–1223.
  27. Ho, K.; Peynot, T.; Sukkarieh, S. Nonparametric Traversability Estimation in Partially Occluded and Deformable Terrain. J. Field Robot. 2016, 33, 1131–1158.
  28. Waibel, G.G.; Low, T.; Nass, M.; Howard, D.; Bandyopadhyay, T.; Borges, P.V.K. How Rough Is the Path? Terrain Traversability Estimation for Local and Global Path Planning. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16462–16473.
  29. Sevastopoulos, C.; Oikonomou, K.M.; Konstantopoulos, S. Improving Traversability Estimation Through Autonomous Robot Experimentation. Comput. Vis. Syst. 2019, 11754, 175–184.
  30. Liyanage, D.C.; Hudjakov, R.; Tamre, M. Hyperspectral Imaging Methods Improve RGB Image Semantic Segmentation of Unstructured Terrains. In Proceedings of the 2020 International Conference Mechatronic Systems and Materials (MSM), Bialystok, Poland, 1–3 July 2020; pp. 1–5.
  31. Winkens, C.; Sattler, F.; Paulus, D. Hyperspectral Terrain Classification for Ground Vehicles. In Proceedings of the 12th International Conference on Computer Vision Theory and Applications (VISAPP), Porto, Portugal, 27 February–1 March 2017.
  32. Winkens, C.; Kobelt, V.; Paulus, D. Robust Features for Snapshot Hyperspectral Terrain-Classification. Comput. Anal. Images Patterns 2017, 10424, 16–27.
  33. Basterretxea, K.; Martínez, V.; Echanobe, J.; Gutiérrez-Zaballa, J.; Campo, I.D. HSI-Drive: A Dataset for the Research of Hyperspectral Image Processing Applied to Autonomous Driving Systems. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021; pp. 866–873.
  34. Xu, R.; Li, C. A Review of High-Throughput Field Phenotyping Systems: Focusing on Ground Robots. Plant Phenomics 2022, 2022, 1–20.
  35. Ravankar, A.; Ravankar, A.A.; Rawankar, A.; Hoshino, Y. Autonomous and Safe Navigation of Mobile Robots in Vineyard with Smooth Collision Avoidance. Agriculture 2021, 11, 954.
  36. Agarwal, N.; Aluru, N.R. A Data-driven Stochastic Collocation Approach for Uncertainty Quantification in MEMS. Int. J. Numer. Methods Eng. 2010, 83, 575–597.
  37. Russell, S.J.; Norvig, P.; Davis, E.; Hall, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall Series in Artificial Intelligence; Prentice Hall: Hoboken, NJ, USA, 2010.
  38. Cha, S.-H.; Srihari, S.N. On Measuring the Distance between Histograms. Pattern Recognit. 2002, 35, 1355–1370.
  39. Marcin, C.; Barbara, S.; Wojciech, S.; Sławomir, S. Energy Efficient UAV Flight Control Method in an Environment with Obstacles and Gusts of Wind. Energies 2022, 15, 3730.
  40. Zhang, B.; Li, G.; Zheng, Q.; Bai, X.; Ding, Y.; Khan, A. Path Planning for Wheeled Mobile Robot in Partially Known Uneven Terrain. Sensors 2022, 22, 5217.
  41. Hua, C.; Niu, R.; Yu, B.; Zheng, X.; Bai, R.; Zhang, S. A Global Path Planning Method for Unmanned Ground Vehicles in Off-Road Environments Based on Mobility Prediction. Machines 2022, 10, 375.
  42. Guo, D.; Wang, J.; Zhao, J.B.; Sun, F.; Gao, S.; Li, C.D.; Li, M.H.; Li, C.C. A Vehicle Path Planning Method Based on a Dynamic Traffic Network That Considers Fuel Consumption and Emissions. Sci. Total Environ. 2019, 663, 935–943.
