Article

Developing Forest Road Recognition Technology Using Deep Learning-Based Image Processing

Hyeon-Seung Lee, Gyun-Hyung Kim, Hong Sik Ju, Ho-Seong Mun, Jae-Heun Oh and Beom-Soo Shin

1 Forest Technology and Management Research Center, National Institute of Forest Science, Pocheon 11187, Republic of Korea
2 Department of Biosystems Engineering, Kangwon National University, 1 Kangwondaehak-gil, Chuncheon 24341, Republic of Korea
3 Interdisciplinary Program in Smart Agriculture, Graduate School, Kangwon National University, 1 Kangwondaehak-gil, Chuncheon 24341, Republic of Korea
* Authors to whom correspondence should be addressed.
Forests 2024, 15(8), 1469; https://doi.org/10.3390/f15081469
Submission received: 9 July 2024 / Revised: 25 July 2024 / Accepted: 19 August 2024 / Published: 21 August 2024

Abstract

This study develops forest road recognition technology using deep learning-based image processing to support the advancement of autonomous driving technology for forestry machinery. Images were collected while driving a tracked forwarder along approximately 1.2 km of forest roads. A total of 633 images were acquired, with 533 used for the training and validation sets, and the remaining 100 for the test set. The YOLOv8 segmentation technique was employed as the deep learning model, leveraging transfer learning to reduce training time and improve model performance. The evaluation demonstrates strong model performance with a precision of 0.966, a recall of 0.917, an F1 score of 0.941, and a mean average precision (mAP) of 0.963. Additionally, an image-based algorithm is developed to extract the center from the forest road areas detected by YOLOv8 segmentation. This algorithm detects the coordinates of the road edges through RGB filtering, grayscale conversion, binarization, and histogram analysis, subsequently calculating the center of the road from these coordinates. This study demonstrates the feasibility of autonomous forestry machines and emphasizes the critical need to develop forest road recognition technology that functions in diverse environments. The results can serve as important foundational data for the future development of image processing-based autonomous forestry machines.

1. Introduction

The efficient and sustainable management of forest resources is essential for wood production, a major economic activity worldwide [1,2,3,4,5,6]. The forestry industry is experiencing rapid technological advancements, including the introduction of autonomous machinery [7,8]. While these autonomous machines can reduce workforce requirements and improve efficiency, the complex road environments in forest areas pose significant challenges to their operation [9]. Forest road detection technology has emerged as a key solution to address this issue [10,11].
Forest road detection technology supports the safe and efficient operation of autonomous forestry machines under various environmental conditions [12]. Forest roads are characterized by a variety of obstacles, such as trees, leaves, rocks, and animals, as well as sudden terrain changes [13]. Consequently, the accurate detection and analysis of road conditions are crucial [14,15]. These capabilities play an important role in planning optimal routes, ensuring safe movement, and performing tasks effectively. Furthermore, forest road detection technology can greatly improve the efficiency of wood production. For example, it can reduce work time and fuel consumption by monitoring road conditions in real time and enabling logging trucks and other forestry machines to move along optimized routes within the forest area [16]. Not only does it help lower overall production costs, but real-time road condition monitoring also contributes to enhancing work speed and accuracy, preventing mechanical damage to vehicles, and reducing maintenance costs [17].
Environmental protection and sustainable forest management are paramount considerations in modern forestry [18]. In this context, forest road detection technology emerges as a pivotal element that can revolutionize wood production and the operation of autonomous forestry machines. This technology holds immense potential to enhance work efficiency, reduce transportation costs, safeguard worker safety, and contribute to the preservation of forest ecosystems [19].
Reviewing previous studies on the development of autonomous forestry machinery and forest road recognition technology, Ringdahl et al. [20] conducted a study on a forwarder capable of autonomous driving along pre-programmed routes using high-precision GPS and gyroscopes. Leidenfrost et al. [21] developed an autonomous driving system for forest roads based on a compass, ultrasonic sensors, and stereoscopic cameras. Fleischmann et al. [22] presented a new approach to forest road detection using stereo vision. Nakagomi et al. [12] developed technology to recognize forest road surfaces using LiDAR-SLAM and U-Net. Lei et al. [11] developed technology combining image processing and 2D LiDAR detection to detect and recognize irregular roads in forest environments, using the improved SEEDS SVM method for rapid image classification and recognition.
As these studies demonstrate, active research is underway on autonomous driving technologies for forest roads and forest environments. Most of these previous studies used sensors such as LiDAR and GPS to develop autonomous driving systems. However, the accuracy and reliability of sensors exposed to the outdoors are strongly affected by environmental and meteorological conditions [23,24]. To achieve autonomous driving on forest roads, the system must recognize the road automatically, irrespective of changes in the surrounding environment. Detecting forest roads is nevertheless quite challenging, owing to the diverse color changes driven by environmental conditions (light, season, soil) and the typically irregular shape of the roads [25]. To overcome these challenges, researchers are increasingly turning to deep learning-based image processing, which excels at extracting irregularly shaped objects such as vehicles and people. In this study, we developed technology to automatically recognize forest roads using deep learning.

2. Materials and Methods

2.1. Image Collection

The study area is located in a forest road section near Jikdong-ri, Soheul-eup, Pocheon-si, Gyeonggi-do, South Korea (37°45′45.975″ N, 127°10′31.687″ E). Videos were collected while driving a tracked forwarder back and forth along a 1.2 km road section (Figure 1). Three cameras (DFK 36CR0234-I67, The Imaging Source, Bremen, Germany) were installed to obtain simultaneous video recordings from the left, right, and center viewpoints (Figure 2). These recordings captured the same driving route under various weather conditions. Each drive lasted approximately 20 min, resulting in a total of 120 min of video footage across six drives. Since the shape and brightness of the forest road in the captured images can change with the intensity and incidence angle of the light source, footage was recorded under various ambient light conditions, in the morning and in the afternoon, at three different levels of cloudiness (Table 1), treating average cloud cover as the meteorological variable. Because wood harvesting is not performed in rainy or snowy weather for safety reasons, the experiments varied only the light source and the average cloud cover, which affect the angle of incidence. Frames were extracted from the videos at regular intervals, ensuring that images did not overlap during training. A total of 633 images were extracted from the videos recorded from the three directions. The extracted images were saved in JPEG format at a resolution of 1920 × 1080.
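As a rough illustration of this frame-extraction step, the following Python/OpenCV sketch saves every Nth frame of a recording as a JPEG. The 100-frame interval and file names are assumptions for illustration only; the exact sampling interval used in this study is not reported above.

```python
import cv2
import os

def extract_frames(video_path: str, out_dir: str, interval: int = 100) -> int:
    """Save every `interval`-th frame of `video_path` as a JPEG in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % interval == 0:
            # Frames were recorded at 1920 x 1080 and saved as JPEG.
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g., extract_frames("center_camera_run1.mp4", "frames/center")
```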

2.2. Image Dataset Composition and Annotation

The image size was adjusted to 640 × 480 pixels to facilitate the training and testing of the model. Of the 633 images captured under natural conditions, 533 were randomly selected and adjusted to meet the input requirements of the YOLOv8 model. The dataset was split into training (80%), validation (10%), and test (10%) sets. Once deep learning training was completed, the test set was used to evaluate the model's performance on images captured under different conditions. To create mask images following the shape of forest roads, polygonal regions were manually annotated using the visual geometry group (VGG) Image Annotator program [26]. Figure 3a,b shows examples of manual masking along the contour of the forest road in the image.
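As an illustration of how such annotations can be prepared for training, the sketch below converts VIA polygon annotations into the normalized "class x1 y1 x2 y2 …" text format that YOLOv8 segmentation consumes. It assumes the VIA 2.x JSON export layout; the file names and single-class index are illustrative assumptions, not the authors' own conversion code.

```python
import json

IMG_W, IMG_H = 640, 480   # images were resized to 640 x 480 pixels
ROAD_CLASS = 0            # single "forest road" class (assumption)

with open("via_annotations.json") as f:   # hypothetical VIA 2.x export file
    via = json.load(f)

for entry in via.values():                # one entry per annotated image
    lines = []
    for region in entry["regions"]:
        shape = region["shape_attributes"]
        coords = []
        for x, y in zip(shape["all_points_x"], shape["all_points_y"]):
            coords += [x / IMG_W, y / IMG_H]          # normalize to [0, 1]
        lines.append(f"{ROAD_CLASS} " + " ".join(f"{c:.6f}" for c in coords))
    label_path = entry["filename"].rsplit(".", 1)[0] + ".txt"
    with open(label_path, "w") as out:                # one label file per image
        out.write("\n".join(lines))
```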

2.3. YOLOv8 Segmentation

In general, deep learning-based object detection techniques integrate a classification step (to determine meaningful objects within the image) and a localization step (to estimate the location of classified objects). Widely used techniques include You Only Look Once (YOLO) and regions with convolutional neural network (R-CNN) features. Since Redmon et al. [27] first proposed "You Only Look Once: Unified, Real-Time Object Detection", various YOLO-based algorithms have been developed, including YOLOv4, YOLT, and the latest version, YOLOv8. YOLOv8 shares the same architecture as YOLOv6 but incorporates significant improvements, such as a new neural network design that utilizes both a feature pyramid network (FPN) and a path aggregation network (PAN), along with a new labeling scheme, offering a variety of enhancements over previous versions [28]. YOLOv8 uses the modified CSPDarknet53 backbone as its feature extractor (Figure 4). The CSPLayer used in YOLOv5 has been replaced with the C2f module, and the spatial pyramid pooling fast (SPPF) layer accelerates computation by pooling image features into a fixed-size map. Each convolution applies batch normalization (BN) and SiLU activation. The head module of YOLOv8 is divided into three branches, each independently responsible for a specific task: objectness, classification, and regression [29].

2.4. Deep Learning Training

In this study, the modified CSPDarknet53 was employed as the base network for feature extraction [30]. Deep learning architectures of this depth (255 layers) pose significant challenges: the number of layers, activation functions, and hyperparameters must be reconsidered for each new problem (model), leading to extended training times. To address these limitations and reduce the number of forest road images required to train the entire network, transfer learning was applied using the Microsoft Common Objects in Context (MS-COCO) dataset, which comprises 80 object categories and 200,000 images.
To fine-tune the weights for forest road detection, only the weights of the RPN, classifier, and mask generation parts of the network were trained. The training model received color images resized to 640 × 480 pixels, with bounding boxes and masks annotated around each forest road. The learning rate, weight decay, momentum, batch size, and number of epochs were set to 0.001, 0.0005, 0.937, 64, and 300, respectively. These hyperparameters were chosen to achieve optimal performance given the model's accuracy and the maximum resources of the hardware used for deep learning training. Table 2 presents the specifications of this hardware.
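A minimal sketch of this training configuration, assuming the Ultralytics YOLOv8 Python API and COCO-pretrained segmentation weights, is shown below. The model size (yolov8n-seg) and the dataset YAML path are assumptions; the hyperparameters are those reported above.

```python
from ultralytics import YOLO

# Start from MS-COCO pretrained segmentation weights (transfer learning).
model = YOLO("yolov8n-seg.pt")            # model size is an assumption

model.train(
    data="forest_road.yaml",              # assumed dataset config: paths + 1 class
    epochs=300,
    batch=64,
    lr0=0.001,                            # initial learning rate
    weight_decay=0.0005,
    momentum=0.937,
    imgsz=640,                            # input images resized to 640 px
)
```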

2.5. Deep Learning Evaluation Methods

To evaluate the forest road detection performance, we used precision, recall, F1 score, and mean average precision (mAP) as evaluation metrics. Precision is the ratio of true positives to the total number of instances predicted as positive by the model. Recall is the ratio of true positives to the total number of actual positive instances. In forest road detection, higher precision indicates higher detection accuracy, while higher recall indicates higher detection efficiency. The F1 score, the harmonic mean of precision and recall, provides an overall measure of detection performance. For object recognition tasks, mAP is commonly used as an evaluation metric, calculated by averaging the precision values for each class. For the evaluation, we randomly selected 60 images. In this study, an object is considered detected when the intersection over union (IoU) between the predicted bounding box and the ground truth bounding box is greater than or equal to 0.5. An IoU threshold of 0.5 (IoU ≥ 0.5) means that the two bounding boxes overlap by at least 50%, indicating a significant match. Precision, recall, and F1 score were calculated using Equations (1) to (3) at IoU ≥ 0.5. Similarly, the mAP in Equation (4) was calculated at IoU ≥ 0.5, meaning that only objects with an IoU of 0.5 or higher were considered detected.
$$\text{Precision} = \frac{T_p}{T_p + F_p} \tag{1}$$

$$\text{Recall} = \frac{T_p}{T_p + F_n} \tag{2}$$

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}$$

$$\text{mAP} = \frac{1}{N} \sum_{i=1}^{N} AP_i \tag{4}$$

where
  • $T_p$ = the number of true positives (correct detections);
  • $T_n$ = the number of true negatives (correct rejections);
  • $F_p$ = the number of false positives (false detections);
  • $F_n$ = the number of false negatives (misses);
  • $N$ = the number of classes or categories; and
  • $AP_i$ = the average precision (AP) for class $i$.
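For concreteness, Equations (1)–(4) can be written as the following minimal Python sketch for the single forest road class; the counts in the usage example are illustrative, not results from this study.

```python
def precision(tp: int, fp: int) -> float:
    """Equation (1): fraction of predicted positives that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Equation (2): fraction of actual positives that are detected."""
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    """Equation (3): harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def mean_average_precision(ap_per_class: list[float]) -> float:
    """Equation (4): mean of per-class average precision (here N = 1)."""
    return sum(ap_per_class) / len(ap_per_class)

# Illustrative counts at IoU >= 0.5 (not values from this study):
p = precision(tp=110, fp=4)
r = recall(tp=110, fn=10)
print(p, r, f1_score(p, r))
```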

2.6. Image-Based Forest Road Center Extraction Algorithm

YOLOv8 segmentation was used to separately extract regions containing masked forest roads. An image post-processing algorithm was then added to compute the center of the extracted forest roads. This center extraction algorithm employs functions provided by Python-based OpenCV, as detailed in Figure 5.
The regions detected as forest roads by YOLOv8 segmentation were color masked and then filtered through an RGB color filter. Subsequently, grayscale conversion was applied to create a binary image. The binarized image, as shown in Figure 6, was used to detect the start and end points of the left and right edges of the forest road through a histogram. To visually represent the endpoints of the extracted forest road, a green line was drawn, and the central values of the left and right coordinates were marked in yellow. The red line indicates the center of the camera’s horizontal axis, which corresponds to the center of the forestry machine.
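A condensed sketch of this post-processing chain (Figure 5), assuming the YOLOv8 mask is rendered as a pure-blue overlay in a BGR image, is shown below. The color thresholds are illustrative assumptions; in practice they would be tuned to the actual mask color produced by the segmentation output.

```python
import cv2
import numpy as np

def road_center(masked_bgr: np.ndarray) -> int:
    """Return the X coordinate of the road center in a mask-overlaid image."""
    # 1) RGB filter: keep only pixels close to the blue mask color (assumed thresholds).
    b, g, r = cv2.split(masked_bgr)
    blue = (b > 150) & (g < 100) & (r < 100)
    filtered = np.zeros_like(masked_bgr)
    filtered[blue] = masked_bgr[blue]

    # 2) Grayscale conversion and 3) binarization.
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)

    # 4) Histogram: sum of white pixels in each image column (along the X axis).
    hist = binary.sum(axis=0)
    cols = np.where(hist > 0)[0]
    left, right = int(cols[0]), int(cols[-1])   # left and right road edge coordinates

    # The road center is taken as the midpoint between the two edges.
    return (left + right) // 2
```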
Using this method, the center of a forest road can be extracted by identifying the edges in an image recognized as a forest road by deep learning-based image processing. This center is necessary for calculating the lateral offset required for autonomous driving. The developed image-based forest road center extraction algorithm was validated using real videos of forest roads filmed while driving on straight sections and around corners curving left and right. The field applicability of the algorithm was assessed using real videos of driving on forest roads filmed in environments not used in the deep learning training.

3. Results

3.1. Results of the Trained YOLOv8-Based Model

The training loss values of the YOLOv8 model across different epochs are plotted in Figure 7a,b. Overall, the loss values show a consistent downward trend, indicating that the model’s performance progressively improved during training. Convergence to low and stable loss values across all loss types further demonstrates the model’s effective learning from the datasets.
Figure 8 shows an example image in which YOLOv8 segmentation successfully detects and segments a forest road. The blue-colored area depicts the identified forest road segment. Only segmentation results for objects that meet the threshold of IoU ≥ 0.5 are output.
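For reference, a hedged sketch of this inference step with the Ultralytics API is given below; the weights path and image name are assumptions.

```python
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")   # assumed weights path
results = model("test_image.jpg", conf=0.5)          # keep detections above 0.5

for result in results:
    if result.masks is not None:
        for polygon in result.masks.xy:              # one polygon per road segment
            print(polygon.shape)                     # (num_points, 2) pixel coords
```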
Table 3 presents the classification performance metrics for the trained model, showing excellent results with a precision of 0.966, a recall of 0.917, an F1-score of 0.941, and an mAP of 0.963.

3.2. Results of the Image-Based Forest Road Center Extraction Algorithm

The areas recognized as forest roads were separately extracted using the image-based forest road center extraction algorithm, and the road edge lines were detected using a histogram. Figure 9b shows the road area blue-masked on the original image in Figure 9a. The masked image was then processed with an RGB filter, changing only the blue areas to white and the rest to black (Figure 9c). This binarized image is used to generate a histogram representing the sum of the pixels in each column along the X-axis. Due to perspective, the road typically appears trapezoidal in the image, and the coordinates of the left and right ends of the trapezoid's base are calculated and extracted.
The midpoint of these edge coordinates is taken as the extracted center of the forest road. The difference between a reference center, such as the center of the vehicle, and the extracted road center can be converted into the vehicle's lateral offset and used as a steering reference for autonomous driving. Figure 9e illustrates the final image output by the image-based forest road center extraction algorithm.
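A minimal sketch of this offset computation, assuming the camera's horizontal center coincides with the machine's centerline, is given below; normalizing the offset into a steering ratio is one plausible convention, not the authors' specification.

```python
IMG_W = 640   # processed image width in pixels

def lateral_offset(road_center_x: int, image_width: int = IMG_W) -> int:
    """Pixel offset between the extracted road center and the vehicle centerline."""
    return road_center_x - image_width // 2      # positive: road center lies to the right

def steering_ratio(road_center_x: int, image_width: int = IMG_W) -> float:
    """Normalize the offset to [-1, 1] as a steering reference (assumed convention)."""
    return lateral_offset(road_center_x, image_width) / (image_width / 2)
```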

3.3. Verification of Image-Based Forest Road Center-Extraction Algorithm

Figure 10 illustrates the results of applying the image-based forest road center-extraction algorithm to images captured while driving on forest roads, including straight sections and corners curving left and right. Despite using images captured under conditions different from those used for deep learning model training, the algorithm successfully recognized these road features and correctly extracted the road center (Figure 10a–c).
However, the algorithm encountered difficulties when the forest road had forks or multiple paths. In these situations, the algorithm recognized all visible paths in the image, leading to challenges in extracting a single center (Figure 10d). Additionally, roads with ambiguous shapes or objects resembling roads can cause misidentification, as seen in Figure 10e,f.
To reduce such recognition errors, the deep learning model's performance can be improved by providing additional training data that includes images of various forest roads and environments (e.g., different weather conditions and lighting variations). Alternatively, fusion with additional sensors, such as GPS and an IMU, can be employed to overcome the limitations of image processing alone and enhance the reliability of autonomous driving systems.

4. Discussion

This paper developed road recognition technology using deep learning-based image processing to identify forest roads and extract their boundaries, with the aim of introducing autonomous forestry machinery. The trained deep learning model demonstrated high accuracy and reliability, with a precision of 0.966, a recall of 0.917, an F1-score of 0.941, and an mAP of 0.963. An algorithm was additionally developed to extract the center of the forest road detected by deep learning; it detects the coordinates of both road edges and extracts the center from these coordinates. Forestry machinery equipped with autonomous driving systems offers numerous advantages, such as reducing labor and preventing accidents, which has spurred active research and development of such systems. Ringdahl et al. [20] developed an autonomous driving system for forwarders using a gyroscope and DGPS. Leidenfrost et al. [21] proposed a method for autonomous driving on forest paths without GPS using a compass, ultrasonic sensors, and stereoscopic cameras. Nakagomi et al. [12] developed technology to recognize the surface of forest roads by integrating LiDAR-SLAM and U-Net. Giusti et al. [31] demonstrated the possibility of autonomous UAV flight along a forest trail using DNN-based image classification. Çalışkan and Sevim [13] used a deep learning (CNN) approach to extract forest paths from high-resolution orthoimages; of the four models compared, the ResNet-50-based model performed best, with an accuracy of 98.49%. Since forestry machinery typically travels along designated forest roads, most autonomous driving technologies either utilize the coordinates of the forest road or detect the road itself. Although deep learning-based image processing is generally more robust than conventional methods, it can still misrecognize shapes such as wheel tracks and vines (Figure 10e,f). To overcome these limitations, the deep learning model's performance should be improved by capturing additional forest road images. Additionally, there were instances, as shown in Figure 10d, in which two forest roads were both recognized at a fork. An algorithm is needed to decide which path to take, based on path planning assisted by additional devices such as GPS or a predefined map.
Further research is needed to enhance the stability of the autonomous driving system by integrating sensors such as GPS and LiDAR. This integration can help overcome the limitations of image processing technology and achieve more reliable autonomous driving in the forestry sector.

5. Conclusions

This study focused on developing forest road recognition technology for autonomous forestry machines using deep learning-based image processing techniques. This was achieved by developing an algorithm that applies the YOLOv8 segmentation model to recognize roads and extract their center. This model demonstrated high accuracy and reliability, with a precision of 0.966, a recall of 0.917, an F1 score of 0.941, and an mAP of 0.963. Additionally, an image-based algorithm was developed to extract the center from the forest road areas detected by the YOLOv8 segmentation model. This algorithm involves detecting the coordinates of both road edge lines through RGB filtering, grayscale conversion, binarization, and histogram analysis. The road center is then calculated based on these coordinates.
The results of this study demonstrate the feasibility of autonomous forestry machines and highlight the importance of developing forest road recognition technology for various forest environments. The technology presented here serves as a crucial foundation for improving the autonomous driving capabilities of forestry machines in the future. Future research directions include collecting additional data to improve the deep learning model’s performance in recognizing forest roads under diverse environmental conditions. Additionally, research is needed to explore sensor fusion with GPS and LiDAR to address the limitations of image processing techniques.
This study has laid an important technological foundation for the development of autonomous forestry machines. This technology is expected to significantly contribute to sustainable forest management and enhance the efficiency of forestry machinery.

Author Contributions

Conceptualization, J.-H.O. and H.-S.L.; methodology, H.-S.L.; software, H.-S.L.; validation, B.-S.S. and J.-H.O.; investigation, G.-H.K. and H.S.J.; resources, H.-S.L. and H.-S.M.; data curation, H.-S.L.; writing—original draft preparation, H.-S.L.; writing—review and editing, J.-H.O. and B.-S.S.; visualization, H.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was conducted with the support of the R&D Program for Forest Science Technology [grant number 2023475A00-2325-BB01] provided by the Korea Forest Service (Korea Forestry Promotion Institute).

Data Availability Statement

Please contact the corresponding author for data requests.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Marchi, E.; Chung, W.; Visser, R.; Abbas, D.; Nordfjell, T.; Mederski, P.S.; McEwan, A.; Brink, M.; Laschi, A. Sustainable Forest Operations (SFO): A New Paradigm in a Changing World and Climate. Sci. Total Environ. 2018, 634, 1385–1397. [Google Scholar] [CrossRef] [PubMed]
  2. Higman, S.; Mayer, J.; Bajo, E.; Judd, N.; Nusbaum, R. Manual de Silvicultura Sostenible Una Guía Práctica Para Administradores de Bosques Tropicales Sobre La Implementación de Nuevos Estándares; Earthscan Publ.: London, UK, 2005; ISBN 9781844071180. [Google Scholar]
  3. Klenner, W.; Arsenault, A.; Brockerhoff, E.G.; Vyse, A. Biodiversity in Forest Ecosystems and Landscapes: A Conference to Discuss Future Directions in Biodiversity Management for Sustainable Forestry. For. Ecol. Manag. 2009, 258, S1–S4. [Google Scholar] [CrossRef]
  4. Karlsson, L.; Bergsten, U.; Ulvcrona, T.; Elfving, B. Long-Term Effects on Growth and Yield of Corridor Thinning in Young Pinus Sylvestris Stands. Scand. J. For. Res. 2013, 28, 28–37. [Google Scholar] [CrossRef]
  5. Schweier, J.; Magagnotti, N.; Labelle, E.R.; Athanassiadis, D. Sustainability Impact Assessment of Forest Operations: A Review. Curr. For. Rep. 2019, 5, 101–113. [Google Scholar] [CrossRef]
  6. Schulze, E.D.; Bouriaud, O.; Irslinger, R.; Valentini, R. The Role of Wood Harvest from Sustainably Managed Forests in the Carbon Cycle. Ann. For. Sci. 2022, 79, 17. [Google Scholar] [CrossRef]
  7. Billingsley, J.; Visala, A.; Dunn, M. Robotics in Agriculture and Forestry. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1065–1077. [Google Scholar] [CrossRef]
  8. Hellström, T.; Lärkeryd, P.; Nordfjell, T.; Ringdahl, O. Autonomous Forest Vehicles: Historic, Envisioned, and State-of-the-Art. Int. J. For. Eng. 2009, 20, 31–38. [Google Scholar] [CrossRef]
  9. Visser, R.; Obi, O.F. Automation and Robotics in Forest Harvesting Operations: Identifying near-Term Opportunities. Croat. J. For. Eng. 2021, 42, 13–24. [Google Scholar] [CrossRef]
  10. La Hera, P.; Mendoza-Trejo, O.; Lindroos, O.; Lideskog, H.; Lindbäck, T.; Latif, S.; Li, S.; Karlberg, M. Exploring the Feasibility of Autonomous Forestry Operations: Results from the First Experimental Unmanned Machine. J. Field Robot. 2024, 41, 942–965. [Google Scholar] [CrossRef]
  11. Lei, G.; Yao, R.; Zhao, Y.; Zheng, Y. Detection and Modeling of Unstructured Roads in Forest Areas Based on Visual-2D Lidar Data Fusion. Forests 2021, 12, 820. [Google Scholar] [CrossRef]
  12. Nakagomi, H.; Fuse, Y.; Nagata, Y.; Hosaka, H.; Miyamoto, H.; Yokozuka, M.; Kamimura, A.; Watanabe, H.; Tanzawa, T.; Kotani, S. Forest Road Surface Detection Using LiDAR-SLAM and U-Net. In Proceedings of the 2021 IEEE/SICE International Symposium on System Integration, SII 2021, Iwaki, Fukushima, Japan, 11–14 January 2021; pp. 727–732. [Google Scholar] [CrossRef]
  13. Çalışkan, E.; Sevim, Y. Forest Road Detection Using Deep Learning Models. Geocarto Int. 2022, 37, 5875–5890. [Google Scholar] [CrossRef]
  14. Buján, S.; Guerra-Hernández, J.; González-Ferreiro, E.; Miranda, D. Forest Road Detection Using LiDAR Data and Hybrid Classification. Remote Sens. 2021, 13, 393. [Google Scholar] [CrossRef]
  15. Dabbiru, L.; Sharma, S.; Goodin, C.; Ozier, S.; Hudson, C.R.; Carruth, D.W.; Doude, M.; Mason, G.; Ball, J.E. Traversability Mapping in Off-Road Environment Using Semantic Segmentation. In Proceedings of the Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021, Online, 12–17 April 2021; Volume 11748, pp. 78–83. [Google Scholar] [CrossRef]
  16. Guimarães, P.P.; Arce, J.E.; Lopes, E.d.S.; Fiedler, N.C.; Robert, R.C.G.; Seixas, F. Analysis of Fuel Consumption Sensitivity in Forestry Road Transport. Floresta 2019, 49, 155–162. [Google Scholar] [CrossRef]
  17. Venanzi, R.; Latterini, F.; Civitarese, V.; Picchio, R. Recent Applications of Smart Technologies for Monitoring the Sustainability of Forest Operations. Forests 2023, 14, 1503. [Google Scholar] [CrossRef]
  18. Yakovenko, N.; Guseynova, N.; Kolotushkin, A. A Forest Management System Based on Sustainable Development. BIO Web Conf. 2024, 93, 01008. [Google Scholar] [CrossRef]
  19. Abdelsalam, A.; Happonen, A.; Karha, K.; Kapitonov, A.; Porras, J. Towards Autonomous Vehicles and Machinery in Mill Yards of the Forest Industry: Technologies and Proposals for Autonomous Vehicle Operations. IEEE Access 2022, 10, 88234–88250. [Google Scholar] [CrossRef]
  20. Hellström, T.; Johansson, T.; Ringdahl, O. Development of an Autonomous Forest Machine for Path Tracking. In Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2006; Volume 25, pp. 603–614. [Google Scholar] [CrossRef]
  21. Leidenfrost, H.T.; Tate, T.T.; Canning, J.R.; Anderson, M.J.; Soule, T.; Edwards, D.B.; Frenzel, J.F. Autonomous navigation of forest trails by an industrial-size robot. Trans. ASABE 2013, 56, 1273–1290. [Google Scholar]
  22. Fleischmann, P.; Kneip, J.; Berns, K. An Adaptive Detection Approach for Autonomous Forest Path Following Using Stereo Vision. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision, ICARCV 2016, Phuket, Thailand, 13–15 November 2016. [Google Scholar] [CrossRef]
  23. Etezadi, H.; Eshkabilov, S. A Comprehensive Overview of Control Algorithms, Sensors, Actuators, and Communication Tools of Autonomous All-Terrain Vehicles in Agriculture. Agriculture 2024, 14, 163. [Google Scholar] [CrossRef]
  24. Guimarães, N.; Pádua, L.; Marques, P.; Silva, N.; Peres, E.; Sousa, J.J. Forestry Remote Sensing from Unmanned Aerial Vehicles: A Review Focusing on the Data, Processing and Potentialities. Remote Sens. 2020, 12, 1046. [Google Scholar] [CrossRef]
  25. Archana, R.; Jeevaraj, P.S.E. Deep Learning Models for Digital Image Processing: A Review. Artif. Intell. Rev. 2024, 57, 11. [Google Scholar] [CrossRef]
  26. Dutta, A.; Zisserman, A. The VIA Annotation Software for Images, Audio and Video. In Proceedings of the MM 2019-Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 2276–2279. [Google Scholar] [CrossRef]
  27. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  28. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  29. What Is YOLOv8? The Ultimate Guide. 2024. Available online: https://blog.roboflow.com/whats-new-in-yolov8/ (accessed on 1 July 2024).
  30. Wang, C.Y.; Mark Liao, H.Y.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A New Backbone That Can Enhance Learning Capability of CNN. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 1571–1580. [Google Scholar] [CrossRef]
  31. Giusti, A.; Guzzi, J.; Cireşan, D.C.; He, F.-L.; Rodríguez, J.P.; Fontana, F.; Faessler, M.; Forster, C.; Schmidhuber, J.; Di Caro, G.; et al. A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots. IEEE Robot. Autom. Lett. 2015, 1, 661–667. [Google Scholar] [CrossRef]
Figure 1. Location of study site and tested forest road route.
Figure 2. Vision cameras for image data collection.
Figure 3. Visualization of mask image.
Figure 4. YOLOv8 architecture [29].
Figure 5. Flow chart of the image-based forest road center-extraction algorithm.
Figure 6. Binary image histogram.
Figure 7. (a) YOLOv8: bounding box, segmentation, classification, and distribution focal loss (dfl_loss) for training dataset. (b) YOLOv8: bounding box, segmentation, classification, and distribution focal loss (dfl_loss) for validation dataset.
Figure 8. Forest road detection and segmentation using YOLOv8 in test image.
Figure 9. Image-based forest road center extraction results.
Figure 10. Recognition results by algorithm on actual forest roads.
Table 1. Meteorological conditions on image collection dates.

Image Collection Date | Temperature (°C) | Humidity (%) | Average Cloud Cover (1/10)
11 September 2023 | 26.2 | 76 | 7
13 September 2023 | 25.9 | 91 | 3
16 October 2023 | 25.9 | 62 | 0
Table 2. Hardware specifications used for deep learning training.

Hardware | Type
CPU | AMD Ryzen 7 3700X @ 3.6 GHz × 16
Memory | 64 GB
GPU | NVIDIA GeForce RTX 2080 Ti × 2
Table 3. Performance evaluation of YOLOv8 approaches on forest road.

Approach | Precision | Recall | F1-Score | mAP
YOLOv8 | 0.966 | 0.917 | 0.941 | 0.963
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
