Article

Robust Lane Detection Algorithm for Autonomous Trucks in Container Terminals

1 Division of Logistics, Korea Maritime and Ocean University, Busan 49112, Republic of Korea
2 Division of Mechanical Engineering, Korea Maritime and Ocean University, Busan 49112, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(4), 731; https://doi.org/10.3390/jmse11040731
Submission received: 19 February 2023 / Revised: 19 March 2023 / Accepted: 27 March 2023 / Published: 27 March 2023
(This article belongs to the Section Coastal Engineering)

Abstract

Container terminal automation offers many potential benefits, such as increased productivity, reduced cost, and improved safety, and autonomous trucks can make container transport more efficient. A novel lane detection method is proposed using score-based generative modeling through stochastic differential equations for image-to-image translation. Image processing techniques are combined with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and a Genetic Algorithm (GA) to ensure fast and accurate lane positioning. The resulting lane detection method can deal with the complicated detection problems that arise in realistic road scenarios. The proposed method is validated on a dataset collected from port terminals under different environmental conditions, and its robustness to stochastic noise is also tested.

1. Introduction

Global trade carried by sea transportation accounted for around 80% of total volume, with a handling capacity of approximately 160 million Twenty-foot Equivalent Units (TEUs), in the maritime containerization market in 2021 [1]. As sea freight is highly cost-effective for any cargo size, maritime shipping remains the backbone of global trade and the economy. Container terminals must be effectively constructed to meet the increasing expectations and demands of shipping companies for reliable port services. Seaports are critical nodes in the network for economic and social development, as they empower trade and support supply chain networks. They have recently evolved into comprehensive transportation hubs that are essential for connecting railways, roads, and airports. Transforming a port into a smart port has become one of the most practical strategies for offering today's intelligent port platform. Smart ports are characterized by three primary target areas: automation, sustainability, and collaboration; among these, automation has exploded with the implementation of Industry 4.0 technologies such as big data, Artificial Intelligence (AI), the Internet of Things (IoT), etc. [2]. Increased digitization can help port authorities prepare for volatility. One major operation in the terminals is handling the various containers transported by yard trucks. The Automated Container Truck (ACT) can bring significant benefits and is crucial to port automation. In this field, a fully automated container transport system using a roll-on/roll-off method was proposed for connecting a seaport and a hinterland port [3]. While hybrid technology significantly enhances port performance and productivity, optimal design methods help exploit the potential of automated technology to the fullest. In addition, the possible options for employing advanced technology in automated terminals include driver assistance, remote control, and autonomous driving. In particular, improving lane marking detection and classification is a crucial technology in developing autonomous logistics and Intelligent Transportation Systems (ITS). This technology enables the precise estimation of a vehicle's lateral position and provides valuable traffic information to the ACT, making more informed driving possible. As a result, it has garnered significant interest among numerous researchers. For example, Bosch is working on creating cost-effective yet practical products for autonomous driving in structured environments; its technology combines a camera and mmW radar to provide Forward Collision Warning (FCW) or Lane Departure Warning (LDW) [4]. Advanced Driver-Assistance Systems (ADAS) focus on realizing lane detection and transverse obstacle recognition. These systems aim to enhance safety during the operation of forklifts and other industrial trucks [5,6].
Through real-time lane detection and a tracking system, the position and azimuth deviation of an ACT are generated to provide inputs to its control system. Compared with magnetic nails and LiDAR-based positioning and navigation techniques [7], vision-based lane deviation detection offers great potential due to its high accuracy, low cost, and rich visual content. To date, vision-based methods have been extensively studied and validated in urban road scenarios [8,9,10]. The core of lane detection research comprises detection accuracy and robust, efficient tracking. In addition to image processing techniques, deep learning-based lane detection and tracking methods have received considerable attention as a result of their promising applications [11]. Two Convolutional Neural Networks (CNNs) have been developed for complex traffic scenes [10]: the first CNN detects the existence of road markings and determines their geometric characteristics, while the other is employed for structural prediction. Another study introduced a dual-view CNN for lane detection and claimed that the method was more reliable than the state-of-the-art methods of the time [12]. Unfortunately, most studies require large-scale datasets, such as the CULane or TuSimple datasets, to train their network models [13,14,15]. These open-source datasets are mostly related to urban road scenarios and might not help detect lanes on the roads used by the ACT in container ports. Collecting a lane dataset containing numerous accurately labeled samples from realistic road scenarios can be costly. However, lane detection and tracking methods based on combinations of image processing techniques do not require large-scale datasets [16,17]. Moreover, the strong structural constraints of lanes in container yard scenes can help vision methods achieve better detection and tracking results.
In this study, a robust lane detection method is developed to deal with complicated detection problems in realistic road scenarios. A generative model utilizing image processing, machine learning, and optimization techniques is implemented to locate and reconstruct the lanes on the roads in seaport terminals. The proposed strategy efficiently supports driver assistance, remote monitoring, and self-driving vehicles in port terminals for handling containers.
The remainder of this study is organized as follows. Section 2 discusses the related works, and the problem description in the port terminal is presented in Section 3. Section 4 offers the details of the lane detection method. Finally, Section 5 and Section 6 present the experimental results and the conclusions of the paper, respectively.

2. Related Work

Over the past few years, many researchers have proposed various techniques for detecting and tracking lanes in images and videos using computer vision to assist autonomous driving. One of the earlier approaches to lane detection was based on a combination of the Generalized Hough Transform (GHT) and the Kalman filter. This approach was first proposed in [18], providing resilience to noise and reducing the computational load. While this method is well suited to straight-line detection, it is less effective at detecting curved lanes. Researchers have proposed methods based on curve fitting techniques to address the problem of curved lane detection. For example, a popular approach is to fit a polynomial curve to the lane markings using techniques such as least-squares curve fitting or Random Sample Consensus (RANSAC) [19]. While this method effectively detects curved lanes, it can still be sensitive to noise and does not work well in complex scenarios. Another popular approach to lane detection is based on deep learning algorithms. In this method, a lightweight UNet is trained to classify pixels or regions in an image as either belonging to a lane or not [20]. This approach achieves state-of-the-art results on various benchmark datasets. In addition, researchers have also proposed methods based on lane structural analysis and CNNs [21], in which the characteristics of local waveforms were employed to overcome detection issues related to changing or curved lanes. The studies mentioned above share a common focus on lane detection for public or private roads. A lane detection and tracking method was proposed for Rubber-Tired Gantry cranes (RTG) [22] to deal with the specific lanes in port terminals. Instead of a deep learning model, the RTG approach employed traditional image processing techniques for autonomous driving in a container yard.
Based on the above analysis, lane detection is an active research topic, and various approaches have recently been proposed for advanced driver-assistance systems. Each approach has advantages and limitations, and the choice of method depends on the specific requirements of a particular system or application. This study presents a novel lane detection scheme integrating traditional image processing and Score-based Generative Modeling with Differential Equations (SGMDE). In addition, various road conditions in container terminals are considered to improve the detection ability and performance.

3. Problem Description

Figure 1 illustrates the practical setting of the lanes on the roads under various environmental conditions. The port facilities have been in operation for many years; degradation of the infrastructure is evident, and maintenance activities rarely occur at the terminals. Most seaports operate around the clock in order to meet global demand and improve supply chain productivity. Maintaining lanes and roads forces the port terminal to stop operating for a certain period, resulting in a decline in port throughput and handling volume. Even during infrastructure maintenance, port authorities must guarantee the shipping time, order schedule, and throughput to meet their strategic goals. In addition, other port operations may be hindered by the blockade required for maintenance activities, disrupting the smooth running of port operations. As a result, the lane colors may become washed out or faded over the course of port operations, as shown in Figure 1. This creates many challenges for container truck drivers and for capturing images to be used in lane identification processing, which supports the ACT framework in moving towards higher levels of port automation. Therefore, it is essential to reconstruct the lanes and roads on the monitor screen of the container trucks to assist the driver or to support the remote-control system of ACT operations. Autonomous trucks must perform flawlessly in realistic road scenarios (sunny, rainy, smoggy, etc.) in any climate. Shade on sunny days could also cause incidents during cargo handling operations. To address these environmental challenges, this study proposes a solution using a generative model based on a deep learning neural network. Moreover, stochastic noise is injected during every training process to increase the accuracy of the generative model. The proposed model can then efficiently deal with many environmental scenarios during cargo handling operations in the terminals.

4. Robust Lane Detection Model

This study uses a stochastic generative model and image processing techniques to realize a robust lane detection model. Machine learning and optimization techniques are employed to implement lane detection and tracking for autonomous trucks in a container yard. In particular, the generative model is introduced to solve the complex detection problems of lanes with smudges, occlusions, and breakages. The whole lane detection process contains two main stages: pre-processing and lane positioning. In image pre-processing, the model must deal with various environmental conditions in the terminal images; the SGMDE technique is used to translate degraded images into cleaner ones to enhance the accuracy of the lane detection model. Finally, the lane positioning method is demonstrated using DBSCAN clustering and GA optimization.

4.1. Transforming Images

As illustrated in Figure 2, the generative model has several advantages over existing models in image processing. The SGMDE performs image-to-image translation based on the idea introduced in [23]. The diffusion model contains a forward process (data → noise) and a reverse process (noise → data).
This model is one of the most sophisticated generative models for creating high-resolution images. In the forward process, an image is perturbed with multiple scales of Gaussian noise. A stochastic differential equation is employed to inject noise into the images in each iteration, moving from $x_0$ to $x_T$. As illustrated in Figure 3, $x_0$ is sampled from the original (complex) data distribution and $x_T$ is pure noise; as $T \to \infty$, $x_T$ approaches an isotropic Gaussian distribution. In the reverse process, a stochastic differential equation transforms the prior distribution back into the data distribution by removing noise, moving from $x_T$ to $x_0$.
The image conversion from $x_{t-1}$ to $x_t$ in the forward process is expressed by the following distribution:
$$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)$$
where $x_t$ is a new latent variable representing an image at time step $t$; $\sqrt{1-\beta_t}\,x_{t-1}$ is the mean of the normal distribution for each time step; and $\beta_t I$ is the diagonal covariance matrix for the multi-dimensional case, with identity matrix $I$. The normal (Gaussian) distribution is used to describe the distributions of the factors in the images. The reparametrized process is described using a Taylor expansion, as follows:
$$x_t = \sqrt{1-\beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\mathcal{N}(0, I) = \sqrt{1-\beta(t)\Delta t}\,x_{t-1} + \sqrt{\beta(t)\Delta t}\,\mathcal{N}(0, I) \approx x_{t-1} - \frac{\beta(t)\Delta t}{2}\,x_{t-1} + \sqrt{\beta(t)\Delta t}\,\mathcal{N}(0, I), \qquad \beta_t := \beta(t)\Delta t$$
where $\Delta t$ is the time step. Stochastic differential equation models describe randomly varying systems. The Itô stochastic differential equation is given by:
$$dx_t = -\tfrac{1}{2}\beta(t)\,x_t\,dt + \sqrt{\beta(t)}\,d\omega_t$$
where $\omega_t$ denotes the standard Wiener process as a random variable; $d\omega_t$ can be regarded as infinitesimal white noise; and $\beta(t)$ is the diffusion coefficient. The solution of a stochastic differential equation is a collection of random variables $\{x_t\}_{t\in[0,T]}$, in which the step sizes are controlled by a variance schedule. Any diffusion process can be described as the solution of a stochastic differential equation. Injecting stochastic noise into the image is intended to cover various conditions of the input images and thus enhance the robustness of the lane detection method. In the reverse process, the image conversion from $x_t$ to $x_{t-1}$ is expressed as follows:
$$p(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \beta I\right)$$

$$dx_t = \left[-\tfrac{1}{2}\beta(t)\,x_t - \beta(t)\,\nabla_{x_t}\log q_t(x_t)\right]dt + \sqrt{\beta(t)}\,d\bar{\omega}_t$$
where $\nabla_{x_t}$ denotes the gradient with respect to $x_t$. Denoising score matching, in which a model learns the score function, is realized by:
$$\min_\theta\ \mathbb{E}_{t\sim\mathcal{U}(0,T)}\,\mathbb{E}_{x_0\sim q_0(x_0)}\,\mathbb{E}_{x_t\sim q_t(x_t\mid x_0)}\left\| s_\theta(x_t, t) - \nabla_{x_t}\log q_t(x_t\mid x_0) \right\|_2^2$$
where $\|\cdot\|_2$ denotes the $L_2$ norm (Euclidean distance) and $\mathcal{U}(0,T)$ is the uniform distribution over the time interval. The score-based model $s_\theta$ estimates the score function by training a neural network. Reparametrized sampling is given by:
$$x_t = \gamma_t x_0 + \sigma_t \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I)$$
To compute the reverse differential equation, one needs to estimate the score function, i.e., the gradient of the log probability density function. The score function of the data distribution $q_t$ is calculated as follows:
$$\nabla_{x_t}\log q_t(x_t\mid x_0) = -\nabla_{x_t}\frac{\left\|x_t - \gamma_t x_0\right\|_2^2}{2\sigma_t^2} = -\frac{x_t - \gamma_t x_0}{\sigma_t^2} = -\frac{\gamma_t x_0 + \sigma_t\varepsilon - \gamma_t x_0}{\sigma_t^2} = -\frac{\varepsilon}{\sigma_t}$$
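As a quick numerical check (not part of the original paper), the closed-form score above can be verified with finite differences; the values of $\gamma_t$ and $\sigma_t$ below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative values for the perturbation kernel q_t(x_t | x_0) = N(gamma_t * x0, sigma_t^2 I);
# gamma_t and sigma_t are placeholders, not values from the paper.
rng = np.random.default_rng(0)
gamma_t, sigma_t = 0.8, 0.6
x0 = rng.normal(size=4)
eps = rng.normal(size=4)
xt = gamma_t * x0 + sigma_t * eps                 # reparametrized sample

# Analytic score of the Gaussian kernel evaluated at xt
score_analytic = -(xt - gamma_t * x0) / sigma_t**2

# Finite-difference gradient of log q_t(x_t | x_0) (up to an additive constant)
def log_q(x):
    return -0.5 * np.sum((x - gamma_t * x0) ** 2) / sigma_t**2

h = 1e-5
score_numeric = np.array([
    (log_q(xt + h * np.eye(4)[i]) - log_q(xt - h * np.eye(4)[i])) / (2 * h)
    for i in range(4)
])

assert np.allclose(score_analytic, -eps / sigma_t)          # equals -eps / sigma_t
assert np.allclose(score_analytic, score_numeric, atol=1e-4)
```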
In the reverse process, a Neural Ordinary Differential Equation (NODE) [24] approximates the time-dependent score-based model. A NODE is a deep learning model that uses a neural network to parameterize the derivative of the hidden state (Figure 4). This process follows the solution of an ordinary differential equation:
$$dx = \left[f(x, t) - \tfrac{1}{2}\,g(t)^2\,\nabla_x \log p_t(x)\right]dt$$
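For intuition, once a score estimate is available, this probability flow ODE can be integrated backwards in time with a standard solver. The sketch below is a minimal illustration, assuming the VP-SDE coefficients $f(x,t)=-\tfrac{1}{2}\beta(t)x$ and $g(t)=\sqrt{\beta(t)}$, a linear variance schedule with assumed values, and the exact conditional score of a single known sample in place of the trained network $s_\theta$; it is not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative linear variance schedule beta(t) and its integral; beta_min / beta_max are assumptions.
beta_min, beta_max, T = 0.1, 20.0, 1.0
beta = lambda t: beta_min + (beta_max - beta_min) * t / T
int_beta = lambda t: beta_min * t + 0.5 * (beta_max - beta_min) * t**2 / T
gamma = lambda t: np.exp(-0.5 * int_beta(t))      # mean coefficient of q_t(x_t | x_0)
sigma2 = lambda t: 1.0 - gamma(t) ** 2            # variance of q_t(x_t | x_0)

# Toy "data": a single known vector x0. Its exact conditional score stands in
# for the trained score network s_theta(x_t, t).
rng = np.random.default_rng(0)
x0 = rng.normal(size=8)
score = lambda x, t: -(x - gamma(t) * x0) / sigma2(t)

def probability_flow_rhs(t, x):
    # dx/dt = f(x, t) - 0.5 * g(t)^2 * score(x, t), with f = -0.5*beta(t)*x and g = sqrt(beta(t))
    return -0.5 * beta(t) * x - 0.5 * beta(t) * score(x, t)

# Integrate the ODE backwards in time: from pure noise at t = T towards data at t ~ 0.
x_T = rng.normal(size=8)                          # sample from the Gaussian prior
sol = solve_ivp(probability_flow_rhs, (T, 1e-3), x_T, rtol=1e-6, atol=1e-6)
print(np.abs(sol.y[:, -1] - x0).max())            # small: the noise sample is mapped back near x0
```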
The loss function of the reverse process is given by:
$$\min_\theta\ \mathbb{E}_{t\sim\mathcal{U}(0,T)}\,\mathbb{E}_{x_0\sim q_0(x_0)}\,\mathbb{E}_{x_t\sim q_t(x_t\mid x_0)}\,\frac{1}{\sigma_t^2}\left\|\varepsilon - \varepsilon_\theta(x_t, t)\right\|_2^2$$
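In practice, this objective amounts to drawing a time step, perturbing a clean image with the reparametrization $x_t=\gamma_t x_0+\sigma_t\varepsilon$, and regressing the injected noise. The PyTorch sketch below shows one training step under these assumptions; the tiny convolutional network, the linear variance schedule, and the image size are placeholders rather than the actual SGMDE architecture.

```python
import torch
import torch.nn as nn

# Placeholder noise-prediction network; the paper uses a U-Net-style score model.
class TinyEpsNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x_t, t):
        # Broadcast the (normalized) time step as an extra input channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

def diffusion_loss(model, x0, T=1.0, beta_min=0.1, beta_max=20.0):
    """Weighted denoising score matching loss in its epsilon-prediction form."""
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device) * (T - 1e-3) + 1e-3     # t ~ U(0, T), avoiding sigma ~ 0
    int_beta = beta_min * t + 0.5 * (beta_max - beta_min) * t**2 / T
    gamma = torch.exp(-0.5 * int_beta).view(-1, 1, 1, 1)        # mean coefficient gamma_t
    sigma = torch.sqrt(1.0 - gamma**2)                           # std sigma_t of the kernel
    eps = torch.randn_like(x0)
    x_t = gamma * x0 + sigma * eps                               # reparametrized sample
    eps_hat = model(x_t, t)
    return ((eps - eps_hat) ** 2 / sigma**2).mean()              # 1 / sigma_t^2 weighting

model = TinyEpsNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x0 = torch.rand(16, 3, 64, 64)                                   # dummy batch of images
loss = diffusion_loss(model, x0)
loss.backward()
opt.step()
```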
The test setup of the pre-processing stage is illustrated in Figure 5.

4.2. Lane Positioning

The lane positioning accuracy can be improved by utilizing a color-based comparison principle and extracting a Region of Interest (RoI) in the image that focuses on the lane area while disregarding the areas outside the lane. A grayscale sliding method is then employed to enhance the accuracy of straight-line detection. Subsequently, noise removal, DBSCAN clustering, and GA optimization are utilized to determine the lane position. Figure 6 shows the flowchart of the vision-based lane positioning.

4.2.1. Extract the Region of Interest and Slide of a Grayscale Image

The pre-processing stage provides a better input image for lane positioning. The input image is first obtained by translating the noisy images of shaded or wet roads, and RoI segmentation is then applied. The RoI is employed to extract the area where the lane is located from the original image, reducing interference from the background environment and improving detection efficiency. To meet this requirement, the lanes must remain within the RoI image even as the distance between lanes changes during operation. The RoI is narrowed down to focus primarily on the lanes. Given that the images captured from the container yard truck include numerous non-essential elements (such as container blocks and cranes), it is crucial to eliminate these to improve the speed and accuracy of the lane detection method. The cropped image is then converted to grayscale, and the test results are presented in Figure 7.
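A minimal OpenCV sketch of the RoI extraction and grayscale conversion is given below; the trapezoidal polygon coordinates and the input file name are hypothetical, since the true RoI depends on the camera mounting on the yard truck.

```python
import cv2
import numpy as np

def extract_roi_and_gray(image_bgr):
    """Mask everything outside an illustrative trapezoidal RoI and convert to grayscale."""
    h, w = image_bgr.shape[:2]
    # Hypothetical RoI covering the lower, central part of the frame where the lanes lie.
    roi_polygon = np.array([[
        (int(0.05 * w), h), (int(0.40 * w), int(0.55 * h)),
        (int(0.60 * w), int(0.55 * h)), (int(0.95 * w), h),
    ]], dtype=np.int32)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, roi_polygon, 255)
    roi = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    return cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

gray = extract_roi_and_gray(cv2.imread("terminal_frame.jpg"))  # hypothetical file name
```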
A grayscale-sliding method is proposed to find the threshold for the binary conversion process. After the RoI is extracted from the image, a horizontal line is slid across the image in ascending order of pixel rows, and the grayscale values along it are retrieved. An example of five lines in the sliding process is illustrated in Figure 8. As shown in Figure 9, the grayscale values obtained from the five sample lines are plotted. Among these, the fourth and fifth lines exhibit some similarities, from which the threshold value for converting the image into a binary format is determined.
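The sketch below illustrates one plausible implementation of the sliding-line thresholding, in which the grayscale profiles of a few horizontal sample lines are combined through a high percentile to set the binary threshold; the sampling positions and the percentile are assumptions, not the authors' exact rule.

```python
import cv2
import numpy as np

def threshold_by_sliding_lines(gray, n_lines=5, percentile=90):
    """Sample horizontal lines of the grayscale RoI and derive a binary threshold."""
    h, _ = gray.shape
    # Horizontal sample lines taken at increasing heights inside the RoI (assumption).
    rows = np.linspace(int(0.6 * h), h - 1, n_lines).astype(int)
    profiles = gray[rows, :]                        # grayscale values along each line
    # Lane pixels are bright relative to the asphalt, so a high percentile of the
    # sampled values is taken as the threshold (assumption).
    thresh = np.percentile(profiles[profiles > 0], percentile)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return binary

binary = threshold_by_sliding_lines(gray)           # 'gray' from the previous sketch
```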

4.2.2. Noise Removing

After conversion, the binary image may still contain noise. As shown in Figure 10, this noise disturbs the performance of the lane positioning method. Erosion followed by dilation is employed to remove the noise. The noise removal process is essential for easier positioning of the lane location. After applying the filtering technique, nearly 30% of the binary image is cut off. The process suppresses the noise so that the following steps can extract the straight lines.
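This step corresponds to a morphological opening (erosion followed by dilation); a short OpenCV sketch is shown below with an assumed 3 × 3 kernel and iteration count.

```python
import cv2
import numpy as np

def erode_then_dilate(binary, ksize=3, iterations=2):
    """Suppress small noise blobs while preserving the elongated lane markings."""
    kernel = np.ones((ksize, ksize), dtype=np.uint8)
    eroded = cv2.erode(binary, kernel, iterations=iterations)    # removes isolated specks
    dilated = cv2.dilate(eroded, kernel, iterations=iterations)  # restores lane thickness
    return dilated

denoised = erode_then_dilate(binary)                # 'binary' from the previous sketch
```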

4.2.3. DBSCAN Clustering

Clustering algorithms are used to find similarities or dissimilarities among data points. When data points are represented in space, regions of high density are typically separated by regions of low density. Based on this property, the DBSCAN clustering algorithm is implemented as an unsupervised clustering machine learning model [25]. This algorithm treats clusters as high-density regions separated by low-density regions and can detect clusters and noise of arbitrary shape in the location data. In lane positioning, the DBSCAN algorithm separates the noise and the lane clusters in the binary image after filtering. Although every parameter influences the algorithm in specific ways, two main parameters play essential roles in DBSCAN, which is a base algorithm for density-based clustering. The first parameter, Min_point, is the threshold on the minimum number of data points grouped together to define a high-density epsilon neighborhood; Min_point does not include the center point. The other parameter, ε, is the distance used to determine whether a point belongs to the neighborhood of a core point P. The set of neighboring points N is given by:
$$N_\varepsilon(P) = \left\{\, Q \in D : d(P, Q) \le \varepsilon \,\right\}$$
where D is the set of all the data points, and Q is a point belonging to D.
Figure 11 illustrates an example of employing the DBSCAN clustering with the Min_point = 3 and the distance parameter ε . After applying the clustering technique, five clusters are found from the binary data point. The clusters represented in blue, orange, and green denote the lane locations, which will be optimized using an algorithm to improve the lane positioning. The remaining clusters, indicated by white and flamingo colors, correspond to roadside containers. The optimization algorithm guarantees convergence when all data points have been visited, ensuring optimal lane positioning.
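A compact sketch of the clustering step with scikit-learn's DBSCAN is given below, grouping the foreground pixels of the denoised binary image into candidate lane clusters. The parameter values mirror those reported in Section 5 (Min_point = 5, ε = 20); note that scikit-learn's min_samples counts the core point itself, and keeping only the largest clusters as lane candidates is an assumption.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_lane_pixels(binary, eps=20, min_points=5, keep=3):
    """Cluster foreground pixel coordinates and keep the largest clusters as lane candidates."""
    ys, xs = np.nonzero(binary)                       # coordinates of white pixels
    points = np.column_stack([xs, ys]).astype(float)
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)

    clusters = [points[labels == label] for label in set(labels) - {-1}]  # -1 marks noise
    # Assumption: the largest clusters correspond to lanes; smaller ones are
    # roadside containers or residual noise.
    clusters.sort(key=len, reverse=True)
    return clusters[:keep]

lane_clusters = cluster_lane_pixels(denoised)         # 'denoised' from the previous sketch
```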

4.2.4. Genetic Algorithm

A Genetic Algorithm (GA) is a metaheuristic optimization technique motivated by the evolutionary processes of natural selection and natural genetics [26]. This adaptive method is used to find the best solutions to optimization problems. The GA generates an initial set of solutions and uses the rules of genetics and natural selection to create new solutions until an optimal solution is found or a stopping condition is met. For lane reconstruction in the container terminals, the GA is implemented to find the optimal lane for each cluster resulting from DBSCAN. The target is to find the parameters of the straight lines in the terminal scene, as follows:
The gene structure consists of the three parameters (a, b, and c) shown in Figure 12. To evaluate candidate solutions, the Ordinary Least Squares Method (OLSM) is employed to formulate the objective function of the GA. The OLSM is an optimization method for selecting the best fit to a set of data. The objective function is expressed as follows:
$$\min J = \sum_{i=1}^{n} \frac{\left(a x_i + b y_i + c\right)^2}{a^2 + b^2}$$
The pseudo-code for finding the optimal lane is presented in Algorithm 1. In the first step, a new method of selecting the initial pair of genes is proposed to reduce the number of iterations and to enhance the test results. From the cluster data, four random points are chosen and combined using the formula given in Figure 13. This yields two gene sequences (a, b, c) for the first generation.
Algorithm 1. Lane positioning based on the Genetic Algorithm
Input: Cluster data from the DBSCAN clustering.
   For each cluster in range (Num_clusters):
     Select four random points in the cluster
     From the four random points, derive two gene sequences (a, b, c)
     For i in range (Max_iteration):
       Select a pair of parent genes (roulette wheel selection)
       Crossover the two genes
       Mutation
       Evaluate the objective function using Formula (12)
Output: Optimal gene (a, b, c) for each cluster.
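A runnable sketch of Algorithm 1 for a single cluster is given below, using the OLSM objective of Formula (12) as the fitness. The population size, mutation rate, crossover scheme, and the way the four random points are turned into the initial (a, b, c) genes (a line through each pair of points) are assumptions rather than the authors' exact settings; the best gene is tracked outside the population as a simple form of elitism.

```python
import numpy as np

rng = np.random.default_rng(0)

def line_through(p, q):
    """Gene (a, b, c) of the line a*x + b*y + c = 0 passing through points p and q."""
    a, b = q[1] - p[1], p[0] - q[0]
    return np.array([a, b, -(a * p[0] + b * p[1])])

def olsm_cost(gene, pts):
    """Objective of Formula (12): sum of squared distances from the cluster points to the line."""
    a, b, c = gene
    return np.sum((a * pts[:, 0] + b * pts[:, 1] + c) ** 2) / (a**2 + b**2 + 1e-12)

def fit_line_with_ga(pts, pop_size=40, max_iter=500, mut_rate=0.2):
    # Initial genes from two pairs of the four random points (Figure 13), plus noisy copies.
    p = pts[rng.choice(len(pts), 4, replace=False)]
    seeds = [line_through(p[0], p[1]), line_through(p[2], p[3])]
    pop = np.array([s + rng.normal(scale=0.1, size=3) for s in seeds for _ in range(pop_size // 2)])

    best = min(pop, key=lambda g: olsm_cost(g, pts))
    for _ in range(max_iter):
        cost = np.array([olsm_cost(g, pts) for g in pop])
        fitness = 1.0 / (1.0 + cost)
        prob = fitness / fitness.sum()                                   # roulette wheel selection
        parents = pop[rng.choice(pop_size, size=(pop_size, 2), p=prob)]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]   # arithmetic crossover
        mutate = rng.random(pop_size) < mut_rate
        children[mutate] += rng.normal(scale=0.05, size=(mutate.sum(), 3))  # mutation
        pop = children
        cand = min(pop, key=lambda g: olsm_cost(g, pts))
        if olsm_cost(cand, pts) < olsm_cost(best, pts):
            best = cand
    return best                                                          # optimal gene (a, b, c)

lines = [fit_line_with_ga(c) for c in lane_clusters]   # clusters from the DBSCAN sketch
```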

5. Test Results and Discussion

The experiments were conducted using a Python program running on a PC with the Windows 10 operating system, an AMD Ryzen 5 5600G processor (6 cores/12 threads) with a base clock speed of 3.9 GHz, 16 GB of RAM, and an NVIDIA GeForce RTX 3060 graphics card. The effectiveness is evaluated in various scenarios and compared to a range of lane detection techniques. For training the SGMDE algorithm, over 1100 lane images of roads under different environmental conditions are used, as shown in Figure 14. The dataset is divided into three portions: 70% of the data serves as the training input, 20% is used as reference data, and the remaining 10% is reserved for testing. After the SGMDE is trained, its performance is tested by applying the lane positioning method to other images. Many image sequences are obtained under various environmental conditions, such as breakage, sunshine, shade, and wet lanes. Among the various image formats available, JPEG is used; the images have a resolution of 1366 × 720 pixels. The test results are recorded as colored lines superimposed on the input images. The number of frames with successful recognition is then counted, and the recognition rate is calculated, as shown in Table 1.
Various testing conditions are applied in the Python program to assess the suitability of low-resolution images and to evaluate the performance of the detection method. These conditions include scenarios such as breakage, sunlight, shade, and wet scenes. A batch size of 16 and 100 epochs are used in the training process. The DBSCAN clustering and GA parameters are set as follows: Min_point = 5, ε = 20, and Max_iteration = 500. The final results of the reconstructed lanes in the terminals are shown in Figure 15. Under various environmental conditions, the experiments demonstrate the reliability and robustness of the proposed lane detection algorithm.
Based on the test results listed in Table 1, the overall accuracy exceeds 93%, demonstrating the recognition ability of the proposed model. This detection method can be employed in automated port applications. There are some failure cases during testing; these errors come from the poor environmental conditions in the dataset used to train the SGMDE algorithm.

6. Conclusions

This study proposes an AI-based image generation scheme using a generative diffusion method for lane detection by autonomous yard trucks. The robust recognition of lanes has become a hot topic in industry and academia. The diffusion model considers the effects of stochastically injected noise on image processing to implement the generative model. This article employs machine learning and optimization techniques to improve the performance and accuracy of the lane detection approach. An SGMDE is used for image-to-image translation, and image processing techniques are combined with DBSCAN clustering and GA optimization for lane reconstruction. The proposed method is introduced for reconstructing road lanes with smudges and breakages that develop over operating time. The experimental results show that the proposed method efficiently detects the lanes on the roads in various environments using a real terminal dataset. A detection rate of over 93% is obtained, illustrating that the model meets the requirements of ACT operation in the port terminal. Robust lane detection helps drivers and autonomous container vehicles operate in the terminals more efficiently.
Nevertheless, every study has limitations to some extent. A limitation of this study is that the detection ability decreases when one of the two lanes is entirely in shadow. In addition, the methodology should be refined to cover curved lanes and to recognize road instruction signs in seaport terminals. More sophisticated techniques should be devised to detect obstacles, humans, or containers, thus enhancing safety and efficiency in terminal operations. Instead of an RGB camera, a stereo camera could be employed to collect depth data in real time for measuring the distance to objects, improving the reliability of the autonomous system.

Author Contributions

Conceptualization, N.Q.V. and S.-S.Y.; Data curation, N.Q.V. and H.-S.K.; Investigation, L.N.B.L. and H.-S.K.; Software, N.Q.V. and L.N.B.L.; Writing—original draft, N.Q.V. and H.-S.K.; Methodology, L.N.B.L. and S.-S.Y.; Writing—review and editing, N.Q.V. and H.-S.K.; Supervision, H.-S.K. and S.-S.Y.; Visualization, N.Q.V. and L.N.B.L.; Funding acquisition: H.-S.K. and S.-S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Korea Institute of Marine Science & Technology Promotion (KIMST) funded by the Ministry of Oceans and Fisheries, Korea (20220573).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All of the data generated or analyzed during this study are included in this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Review of Maritime Transport 2021. Available online: https://unctad.org/system/files/official-document/rmt2021_en_0.pdf (accessed on 19 February 2023).
2. Heikkilä, M.; Saarni, J.; Saurama, A. Innovation in Smart Ports: Future Directions of Digitalization in Container Ports. J. Mar. Sci. Eng. 2022, 10, 1925.
3. Hur, S.H.; Lee, C.; Roh, H.S.; Park, S.; Choi, Y. Design and Simulation of a New Intermodal Automated Container Transport System (ACTS) Considering Different Operation Scenarios of Container Terminals. J. Mar. Sci. Eng. 2020, 8, 233.
4. Bimbraw, K. Autonomous cars: Past, present and future: A review of the developments in the last century, the present scenario and the expected future of autonomous vehicle technology. In Proceedings of the ICINCO 2015—12th International Conference on Informatics in Control, Automation and Robotics, Colmar, France, 21–23 July 2015; Volume 1.
5. Rebelle, J.; Mistrot, P.; Poirot, R. Development and validation of a numerical model for predicting forklift truck tip-over. Veh. Syst. Dyn. 2009, 47, 771–804.
6. Martini, A.; Bonelli, G.P.; Rivola, A. Virtual testing of counterbalance forklift trucks: Implementation and experimental validation of a numerical multibody model. Machines 2020, 8, 26.
7. Ogawa, T.; Takagi, K. Lane recognition using on-vehicle LIDAR. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006; pp. 540–545.
8. Yim, Y.U.; Oh, S.Y. Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2003, 4, 219–225.
9. Tan, H.; Zhou, Y.; Zhu, Y.; Yao, D.; Li, K. A novel curve lane detection based on Improved River Flow and RANSA. In Proceedings of the 2014 17th IEEE International Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014.
10. Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and Gabor filter. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 59–64.
11. Tang, J.; Li, S.; Liu, P. A review of lane detection methods based on deep learning. Pattern Recognit. 2021, 111, 107623.
12. He, B.; Ai, R.; Yan, Y.; Lang, X. Accurate and robust lane detection based on Dual-View Convolutional Neutral Network. In Proceedings of the IEEE Intelligent Vehicles Symposium, Gothenburg, Sweden, 19–22 June 2016; pp. 1041–1046.
13. Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust lane detection from continuous driving scenes using deep neural networks. IEEE Trans. Veh. Technol. 2020, 69, 41–54.
14. Neven, D.; de Brabandere, B.; Georgoulis, S.; Proesmans, M.; van Gool, L. Towards End-to-End Lane Detection: An Instance Segmentation Approach. In Proceedings of the IEEE Intelligent Vehicles Symposium, Changshu, China, 26–30 June 2018; pp. 286–291.
15. Xiao, D.; Zhuo, L.; Li, J.; Li, J. Structure-prior deep neural network for lane detection. J. Vis. Commun. Image Represent. 2021, 81, 103373.
16. Muthalagu, R.; Bolimera, A.; Kalaichelvi, V. Lane detection technique based on perspective transformation and histogram analysis for self-driving cars. Comput. Electr. Eng. 2020, 85, 106653.
17. Huang, Y.; Li, Y.; Hu, X.; Ci, W. Lane detection based on inverse perspective transformation and Kalman filter. KSII Trans. Internet Inf. Syst. 2018, 12, 643–661.
18. Voisin, V.; Avila, M.; Emile, B.; Begot, S.; Bardet, J.C. Road markings detection and tracking using Hough Transform and Kalman filter. Lect. Notes Comput. Sci. 2005, 3708, 76–83.
19. Waykole, S.; Shiwakoti, N.; Stasinopoulos, P. Review on lane detection and tracking algorithms of advanced driver assistance system. Sustainability 2021, 13, 11417.
20. Lee, D.H.; Liu, J.L. End-to-End Deep Learning of Lane Detection and Path Prediction for Real-Time Autonomous Driving. Signal Image Video Process. 2021, 17, 199–205.
21. Ye, Y.Y.; Hao, X.L.; Chen, H.J. Lane detection method based on lane structural analysis and CNNs. IET Intell. Transp. Syst. 2018, 12, 513–520.
22. Feng, Y.; Li, J.Y. Robust Lane Detection and Tracking for Autonomous Driving of Rubber-Tired Gantry Cranes in a Container Yard. In Proceedings of the 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), Mexico City, Mexico, 20–24 August 2022; pp. 1729–1734.
23. Song, Y.; Sohl-Dickstein, J.N.; Kingma, D.P.; Kumar, A.; Ermon, S.; Poole, B. Score-Based Generative Modeling through Stochastic Differential Equations. arXiv 2020, arXiv:2011.13456.
24. Chen, T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D.K. Neural Ordinary Differential Equations. arXiv 2018, arXiv:1806.07366.
25. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, Oregon, 2–4 August 1996. Available online: http://www.cs.ecu.edu/~dingq/CSCI6905/readings/DBSCAN.pdf (accessed on 13 August 2022).
26. Mirjalili, S. Genetic Algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; Mirjalili, S., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 43–55.
Figure 1. Environmental conditions of lanes on the roads in a container port (breakage, sunshine, shade, wet lane, etc.).
Figure 2. Workflow of the generative model.
Figure 3. Illustration of the general diffusion process.
Figure 4. Computation graph of the latent ordinary differential equation model.
Figure 5. Test setup of pre-processing image stage.
Figure 6. Block diagram of the lane positioning process.
Figure 7. The first processes of the lane positioning method: (a) input image, (b) extract RoI, (c) grayscale conversion.
Figure 8. Example of five lines for sliding grayscale images.
Figure 9. The grayscale value of five lines: (a) first line; (b) second line; (c) third line; (d) fourth line; (e) fifth line.
Figure 10. The test results of the noise-removing process: (a) binary image; (b) erosion followed by dilation.
Figure 11. Schematic illustration of DBSCAN clustering.
Figure 12. Genetic operations of GA application.
Figure 13. Ordinary least square method.
Figure 14. Dataset images in container terminals.
Figure 15. The test results of the proposed lane detection method.
Table 1. Experimental results of the proposed model.

Lane Type    No. Images    Failed    Proportion of Lane Detection (%)    Average Processing Time (s)
Sunshine     220           8         96.4                                0.063
Breakage     407           28        93.1                                0.058
Shade        297           23        92.3                                0.061
Wet          176           15        91.5                                0.082
Average                              93.3                                0.066
