Review

A Review of Applications of Artificial Intelligence in Heavy Duty Trucks

by
Sasanka Katreddi
1,
Sujan Kasani
2 and
Arvind Thiruvengadam
3,*
1
Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26505, USA
2
Intel Corporation, Chandler, AZ 85226, USA
3
Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV 26505, USA
*
Author to whom correspondence should be addressed.
Energies 2022, 15(20), 7457; https://doi.org/10.3390/en15207457
Submission received: 16 September 2022 / Revised: 1 October 2022 / Accepted: 8 October 2022 / Published: 11 October 2022
(This article belongs to the Special Issue Internet of Vehicles for Intelligent Transportation System)

Abstract:
Due to the increasing use of automobiles, the transportation industry faces challenges of increased emissions, driver safety concerns, growing travel demand, etc. Hence, automotive manufacturers are building vehicles that produce fewer emissions, are fuel-efficient, and protect drivers. Artificial intelligence has taken a major leap recently and provides unprecedented opportunities to enhance performance, including in the automotive and transportation sectors. Artificial intelligence shows promising results in the trucking industry for increasing productivity, sustainability, reliability, and safety. Compared to passenger vehicles, heavy-duty vehicles pose additional challenges because of their larger dimensions and weight, and their dynamics require attention during operation. Data collected from vehicles can be used for emission and fuel consumption testing, as drive cycle data represent real-world operating characteristics of heavy-duty vehicles and their vocational use. Understanding the activity profiles of heavy-duty vehicles is important for freight companies to meet fuel consumption and emission standards, prevent unwanted downtime, and ensure driver safety. Utilizing the large volumes of data now collected, together with advanced computational methods such as artificial intelligence, can yield insights in less time and without on-road testing. However, both data availability and the application of data analysis/machine learning methods to heavy-duty vehicles leave room for improvement in areas such as autonomous trucks, connected vehicles, predictive maintenance, and fault diagnosis. This paper presents a review of work on artificial intelligence, recent advancements, and research challenges in the trucking industry.
Different applications of artificial intelligence in heavy-duty trucks, such as fuel consumption prediction, emissions estimation, self-driving technology, and predictive maintenance using various machine learning and deep learning methods, are discussed.

1. Introduction

Industry 4.0 is transforming industries with technologies such as artificial intelligence (AI), whose roots reach back to the early algorithms of the first half of the twentieth century. A turning point came in 1950 with Turing's paper Computing Machinery and Intelligence, which proposed a game to examine a machine's capability of thinking. This, together with work such as the Hodgkin–Huxley model of the neuron, led to the founding of AI as a field at the Dartmouth Conference in 1956. The introduction of the perceptron [1] in 1957 heightened interest in models that learn from data, and Arthur Samuel's investigation of machine learning (ML) for the game of checkers [2] in 1959 popularized machine learning. However, criticism of the field's progress led to cuts in government research funding in the 1970s, a period referred to as the first AI winter. Interest revived in the 1980s with expert systems, rule-based programs that encode the knowledge of human experts. Geoffrey Hinton's published work on back-propagation [3] had a significant impact on AI and made it the most important algorithm for deep learning (DL): it allows multi-layer models to learn complex patterns in data. At the time, however, back-propagation did not scale to large datasets and faced many practical problems. A second AI winter followed in the late 1980s and early 1990s, as expert systems came to be seen as slow and expensive. In the early 1990s, growing volumes of data enabled intelligent agents for online shopping, news, and web browsing. During this period, natural language processing (NLP), which had stalled in the 1960s, saw success with new ML algorithms, especially statistical models, and advanced computational power, eventually leading to digital assistants such as Siri (by Apple) and Alexa that can place phone calls, manage schedules, and more. AI algorithms began to be integrated into larger systems through the 1990s and 2000s.
In 1997, IBM's Deep Blue became the first machine to defeat the reigning world chess champion. GPUs and fast-processing computers, available by 1999, had a significant effect on deep learning.
In recent years, AI has seen success in industries such as banking, healthcare, manufacturing, agriculture, transportation, automotive, and many more. The potential of AI in the automotive industry is the major driving factor for the growth of automotive artificial intelligence. Customer preference for advanced new features, such as driver assistance and self-driving, has propelled the use of AI in the automotive industry. AI is being used in every phase of the automotive industry, from autonomous driving to manufacturing, supply chain, production, and driver safety. Growth in the transportation sector, and the trucking industry in particular, has leaped with the global adoption of AI. Government regulations on fuel consumption and emissions, safety concerns, and a shortage of drivers create a need for cleaner and more efficient transportation.
As per the United States Environmental Protection Agency (USEPA) [4], about 32% of on-road NOx emissions and 23% of GHG emissions are produced by heavy-duty trucks, impacting the climate and people's health. As per the National Highway Traffic Safety Administration (NHTSA) report in 2020, 76% of fatal crashes involved large trucks. Heavy-duty vehicles use about 18% of the energy and 17% of the petroleum used in the United States [5]. As per reports [6], predictive maintenance can save about 8–12% of maintenance costs. Even small improvements in transportation emissions and fuel economy, reductions in maintenance time, and gains in driver safety can have outsized effects and significant global impacts on climate and natural resources. Thanks to advanced technologies, a wealth of data is now collected from vehicles. Analyzing these data using emerging technologies such as artificial intelligence helps draw key insights into heavy-duty transportation without on-road testing of vehicles.
Therefore, in this paper, an effort has been made to bring together the existing studies on the application of AI in heavy-duty trucks and identify gaps in the current research. Various studies from the literature on fuel consumption/efficiency, emissions, self-driving and truck platooning, and predictive maintenance are presented. The studies use various AI techniques that help identify patterns and make predictions and decisions. However, the feasibility and versatility of data-driven methods and artificial neural networks are unclear in some applications, such as turbo machines and electrical submersible pumps [7]. Likewise, given the challenges of implementing artificial intelligence in the trucking industry and the lack of surveys on AI studies in heavy-duty trucks, this paper focuses specifically on the applications of AI in heavy-duty trucks. Truck platooning, which is considered partially autonomous, is a major next step in truck transportation that can be achieved through AI techniques. Increased traffic congestion and accidents, stringent regulations for emissions and fuel consumption, a lack of truck drivers, and safety concerns have accelerated the application of AI in the trucking industry. Fleet management companies are already adopting AI technologies such as machine learning, deep learning, computer vision, and natural language processing, discussed in the following sections, to observe the performance of trucks, which can help in cost management, reducing downtime, analyzing truck performance, etc. AI helps fleet owners make predictions based on patterns in historical data, enable vehicle-to-vehicle (V2V) communication, identify driver behavior, and select the route with the lowest fuel consumption.

2. Artificial Intelligence

Artificial intelligence (AI) was described by John McCarthy as "the science and engineering of making intelligent machines". AI makes it possible for machines to learn from experience and adapt their behavior to new inputs, much as humans do. The ideal characteristic of AI, which combines computer science and datasets, is the ability to rationalize and take actions autonomously. AI algorithms are designed to make decisions, often based on real-time data. Weak (narrow) AI is task-specific, whereas strong AI encompasses general AI and super AI. General AI can understand and reason about its environment and make decisions accordingly. Super AI is the highest level of AI, in which systems make decisions in unknown environments under uncertainty. The domains of AI shown in Figure 1 can be used to solve real-world problems the way humans do.

2.1. Machine Learning

Machine learning (ML) is a process whereby "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" [9]. Machine learning uses statistical methods to learn and progressively improve without being explicitly programmed. It is categorized into four approaches based on how the algorithms learn [10]: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. The different machine learning types are discussed below and shown in Figure 2.
  • Supervised learning maps an input to a corresponding labeled output to generate a model. The model predicts the responses to new data samples. The algorithm learns by finding the patterns from observations, making predictions, and adjusting the error until high accuracy is obtained. Supervised learning methods are used for classification, which is determining the category of new data points based on the observed previous data, and regression, which is predicting or forecasting by understanding the relationship between variables.
  • Unsupervised learning learns from input data without any output labels. The algorithm finds patterns in the input data, yielding features that characterize each sample, and groups the data into clusters. Unsupervised learning is used for clustering, which groups data with similar patterns while keeping the clusters as distinct from one another as possible, and dimensionality reduction, which reduces the number of input variables while retaining the required information.
  • Semi-supervised learning contains a small amount of labeled data with a large amount of unlabeled data during training. This technique can be used to label unlabeled data. This type of learning can be used for classification and clustering tasks.
  • Reinforcement learning learns by interacting with an environment, which acts as a teacher providing feedback. The algorithm is provided with a set of actions, parameters, and outputs, and the system is rewarded for correct output and penalized for incorrect output according to defined rules.
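As a minimal illustration of the first two categories, the sketch below fits a supervised regressor on labeled synthetic data and then clusters the same inputs without labels; the data and the meaning of the features are fabricated purely for demonstration, using scikit-learn.

```python
# Supervised vs. unsupervised learning on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised: labeled pairs (two input features -> continuous target)
X = rng.uniform(0, 1, size=(200, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.01, 200)
reg = LinearRegression().fit(X, y)       # learns the input-output mapping
print(reg.coef_)                         # approximately [3.0, 1.5]

# Unsupervised: no labels, group the same samples by similarity
clu = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(len(set(clu.labels_)))             # data grouped into 3 clusters
```

The regressor recovers the coefficients used to generate the labels, while the clustering model partitions the inputs with no labels at all, which is the essential distinction between the two approaches.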

2.2. Deep Learning

Deep learning (DL) is a machine learning technique that uses artificial neural networks inspired by the structure of the human brain [12]. The raw data are passed through multiple layers of connected neurons that are capable of learning representations from the data; the more hidden layers, the deeper the network becomes. The best results are achieved by minimizing a loss through adjusting weights and biases. The history of deep learning dates to 1943, when Walter Pitts and Warren McCulloch created a computer model based on the human brain. The second AI winter, from 1985 to the 1990s, affected research on neural networks and deep learning. In 1999, when GPU-equipped computers became available, neural networks started to compete with support vector machines (SVMs). Deep learning mainly handles complex mappings from input to output and requires large datasets and high computational power [13].
Multi-layer perceptron (MLP), or a fully connected neural network, is a neural network with at least three layers, an input layer, one or more hidden layers, and an output layer, that performs non-linear mapping of inputs to outputs. Each layer has multiple computational units called neurons/perceptrons [1], shown in Figure 3. The neuron takes the input, multiplies it by a weight, adds a bias, and applies a non-linear activation function. The output layer performs the classification or prediction. Thus, the output is given by Y = f(∑_{i=1}^{n} w_i x_i + b), where Y is the output, f denotes a non-linear function, w is the weight vector, b is the bias, and x is the input vector.
An algorithm called back-propagation [3] is used to determine the weights and biases. The algorithm calculates the gradients of the error function with respect to the weights, and these gradients are propagated backward, allowing efficient computation of the gradient at each layer. MLPs are generally feed-forward neural networks, in which connections between neurons run in one direction (forward) from input to output without forming any loops. With many hidden layers, back-propagation can suffer from exploding or vanishing gradients; hence, deeper fully connected neural networks (FCNN) [15] (Figure 4a) with advanced activation functions and optimizers came into existence. The term deep here indicates the number of hidden layers, which largely determines the capability to model non-linear inputs. A convolutional neural network (CNN) [16] (Figure 4b) is a neural network with convolutional blocks used especially for image and video data. The success of deep CNNs on the ImageNet challenge in 2012 made CNNs one of the most powerful tools for solving computer vision problems. The objective of CNNs is to capture high-level features in complex data, such as images and videos, using parameter sharing through kernels, equivariant representations, and a smaller number of connections. Pooling layers are used to reduce the dimensions of the generated feature maps; common types include max pooling and average pooling [17]. The abstract representations are flattened and passed to a fully connected layer that outputs prediction probabilities. However, CNNs work well with spatial data and cannot store temporal information from sequential data. Hence, the recurrent neural network (RNN) [3], which has memory to store information from time-series data, was introduced.
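The neuron equation and a single back-propagation update can be sketched in a few lines of NumPy; the inputs, weights, target, and learning rate below are arbitrary illustrative values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass of a single neuron: y = f(w·x + b)
x = np.array([0.5, -1.0, 2.0])    # input vector
w = np.array([0.1, 0.4, -0.2])    # weight vector
b = 0.05                          # bias
y = sigmoid(w @ x + b)

# One back-propagation step for squared error L = (y - t)^2 / 2.
# dL/dw = (y - t) * f'(z) * x, where f'(z) = y * (1 - y) for the sigmoid.
t = 1.0                           # target output
grad_w = (y - t) * y * (1 - y) * x
grad_b = (y - t) * y * (1 - y)
lr = 0.5                          # learning rate
w_new = w - lr * grad_w           # gradient-descent update
b_new = b - lr * grad_b
```

After the update, the neuron's output moves toward the target, which is exactly the error-reduction behavior that back-propagation repeats layer by layer in a deep network.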
An RNN is an ANN whose connections between neurons form a directed graph along a temporal sequence; an internal state lets it process input sequences of varying length, making RNNs suitable for text processing and speech recognition. The state units of an RNN store information from the past; the mechanism is shown (Figure 4c) with the recurrence unfolded. RNNs could not handle long-term dependencies [18], making way for long short-term memory (LSTM) [19]. An LSTM cell (Figure 4d) contains three gates: the forget gate determines whether information from the previous timestamp is to be remembered; the input gate learns new information from the input to the cell; and the output gate passes information from the current timestamp to the next. However, LSTM requires substantial computational power. To address this, another type of RNN, the gated recurrent unit (GRU) [20], was introduced. GRUs (Figure 4e) are gating mechanisms in RNNs that mitigate the vanishing gradient problem, using an update gate and a reset gate to decide which information should be passed on. Autoencoders (AE) [21] (Figure 4f) are an unsupervised learning technique that uses neural networks for representation learning. The autoencoder reconstructs the input using an encoder-decoder architecture: a bottleneck layer represents the latent space of the input, and the decoder reconstructs the input from the latent features. Applications of autoencoders include dimensionality reduction and denoising.
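The three-gate LSTM mechanism described above can be sketched as one forward step in NumPy; the dimensions and random weight initialization here are arbitrary and purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM cell step with randomly initialized (untrained) weights.
rng = np.random.default_rng(1)
n_in, n_hid = 4, 3
x_t = rng.normal(size=n_in)        # current input
h_prev = np.zeros(n_hid)           # previous hidden state
c_prev = np.zeros(n_hid)           # previous cell state

# One weight matrix and bias per gate: forget, input, output, candidate
W = {g: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for g in "fiog"}
b = {g: np.zeros(n_hid) for g in "fiog"}
z = np.concatenate([x_t, h_prev])  # stacked input and previous hidden state

f_t = sigmoid(W["f"] @ z + b["f"]) # forget gate: keep old memory?
i_t = sigmoid(W["i"] @ z + b["i"]) # input gate: accept new information?
g_t = np.tanh(W["g"] @ z + b["g"]) # candidate cell update
o_t = sigmoid(W["o"] @ z + b["o"]) # output gate: expose memory?

c_t = f_t * c_prev + i_t * g_t     # new cell state
h_t = o_t * np.tanh(c_t)           # new hidden state
```

Each gate outputs values in (0, 1), so it acts as a soft switch on the cell memory; in practice the weights are learned with back-propagation through time rather than drawn at random.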
Various machine learning algorithms and deep learning models use supervised, unsupervised, and reinforcement learning techniques to make vehicles more intelligent. Supervised learning techniques, such as classification, regression, detection, and segmentation, and unsupervised learning techniques, such as clustering, dimensionality reduction, and association rules, are used to model the data collected from various sensors in vehicles to analyze the factors affecting fuel consumption, emissions, fault detection, maintenance, understanding the driver’s behavior, automated driver assistance systems, and self-driving [27,28,29,30]. Semi-supervised learning, deep learning, and reinforcement learning techniques are widely used in autonomous vehicles, as they require making decisions based on the environment, object detection, path planning, and decision-making control [31,32,33,34,35]. The extension of these artificial techniques to heavy-duty trucks is discussed in Section 4. The choice of learning type depends on the type of input data.

3. Datasets

Datasets are key to the progress of research in the field of machine learning. However, obtaining annotated data is not an easy task, especially for problems such as object detection and segmentation. The introduction of labeled datasets such as ImageNet [36], PascalVOC [37], and Microsoft COCO [38], each with thousands of images, led to a breakthrough in the field of computer vision, and these remain among the most popular datasets to date. Many benchmark datasets [39,40,41,42,43,44,45] have been introduced to reduce the gap between real-world and laboratory data for scenarios such as autonomous driving, motion estimation, recognition, reconstruction, and tracking. A few state-of-the-art datasets, with their website URLs, are presented in Table 1. However, datasets of real-world on-road fuel consumption, emissions, or maintenance data are rarely available publicly for research, which is a major limitation. Most of the studies in the literature were performed on datasets collected from trucks manufactured by the same company or on data specific to the task being analyzed.

4. Applications

4.1. Fuel Consumption/Economy

Fuel consumption is one of the most important aspects of vehicles, especially fleet/heavy-duty vehicles. The Corporate Average Fuel Economy (CAFE) standards of the National Highway Traffic Safety Administration (NHTSA) regulate fuel economy standards for vehicles. Fuel economy is a key factor in the overall operational cost of vehicles, especially heavy-duty trucks, so increasing fuel efficiency and reducing fuel consumption can yield significant savings for transportation companies. Several studies have modeled fuel consumption/fuel efficiency using statistical and other approaches. Predicting fuel efficiency can help with fleet management and with diagnostics in cases of high fuel consumption. Physics-based and statistical modeling approaches are time-consuming and less accurate than machine learning methods. Several studies have therefore predicted vehicle fuel consumption using machine learning and deep learning techniques [55,56,57,58,59,60,61].
Perrotta et al. [62] applied three machine learning techniques, namely support vector machine (SVM), random forest (RF), and artificial neural network (ANN), to estimate fuel consumption in trucks based on telematics and Highways Agency Pavement Management System (HAPMS) data. The parameters used in their modeling were gross vehicle weight, road gradient, vehicle speed, average acceleration, % start torque, % end torque, engine revs at the start of the record, gear used, cruise control, radius of curvature of the road, road roughness as longitudinal profile variance (LPV) at 3, 10, and 30 m wavelengths, and road surface macrotexture; they achieved root mean square errors of 5.12, 4.64, and 4.88 L/100 km (liters of fuel per 100 km) for SVM, RF, and ANN, respectively. The comparison indicates that the best performance was achieved with random forest, with an RMSE of 4.64 L/100 km and an R2 of 0.87, although SVM and ANN also predicted with good accuracy. The work is limited to features related to the engine, vehicle, and road; considering other parameters, such as climate and driver behavior, could improve the results. Katreddi et al. [63] predicted the fuel consumption of heavy-duty trucks from engine load (%), engine speed (rpm), and vehicle speed (km/h) using a feed-forward neural network with back-propagation. The model predicts the average fuel consumption of the truck given these input parameters. The predicted fuel consumption over distance was compared with other machine learning techniques, linear regression and random forest, and the MLP achieved the best performance, with an RMSE of 0.0025 L (fuel consumed in liters). The data used in this study were collected at WVU CAFEE using a PEMS device.
The study is limited to the effect of a very few features, but ones that are easily obtained, unlike studies that require large collections of potentially noisy sensor data. Its shortcomings are the omission of external factors such as climate and GPS information and its confinement to a single truck, which could affect fuel consumption significantly in some cases, even though the trip covered various stages of engine operation. Additional parameters, and neural networks with different numbers of hidden layers, could also have been compared. A similar study predicted the fuel consumed by mining dump trucks based on payload, loading time, idling while loaded, loaded travel time, empty travel time, and idling while empty using an ANN [64]. The study analyzed data from 5001 cycles of haulage operations using a feed-forward neural network (6-9-9-1) with back-propagation. The results revealed that the idle time of dump trucks significantly impacts fuel consumption, addressing the gap of unnecessary fuel consumption and emissions at idle. Consideration of idle energy consumption and emissions is important for vocational trucks, such as school buses and dump trucks, that make frequent stops. Another study involving mining trucks was done by Soofastaei et al. [65]. Haulage vehicles are designed to perform well with heavy loads and with greater road grade and resistance. An ANN was used to find the correlation between fuel consumption and the input parameters of truck load, truck speed, and total haul road resistance. A genetic algorithm (GA) was then applied to optimize fuel consumption based on the input parameters and a fitness function created by the ANN. The study used a large dataset, which generalizes the model well and could give good predictions on unseen data.
Identifying the range of values for gross vehicle weight and truck speed can help manage fuel efficiently. Bodell et al. [66] compared the performance of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN), ANN (MLP) with Adam, and ANN (MLP) with SGD (stochastic gradient descent) on simulated and operational data, considering road slopes and driver profiles. For the simulated data, the ANN with Adam performed better than the other methods, with a mean square error of 0.026 L/100 km, whereas for operational data both ANN variants (MLP with SGD and MLP with Adam) achieved a mean square error of 2.939 L/100 km. This work was limited to machine learning models and, due to computational constraints, was not extended to deep learning methods. Fuel consumption in heavy-duty trucks with combustion engines is affected by the engine's operating points. Hence, fuel consumption modeling from the engine parameters of speed, torque, and fuel consumption at different operating points was performed by Wysocki et al. [67]. Their work evaluated polynomial regression, k-nearest neighbor (KNN), and an artificial neural network (ANN) on the collected exploitation data and observed that the ANN trained on 8 input variables (engine speed and torque at the initial, 500th, 1000th, and 1500th millisecond) performed best with less training data. The sensitivity to training data size was evaluated for the various models, which is helpful since machine learning models depend on the amount of and variation in data, but the study is limited to combustion engines. A neural network model that predicts average fuel consumption in heavy vehicles over distance windows rather than over time was presented by Schoen et al. [68].
The model is a feed-forward neural network trained on the input parameters: number of stops, time stopped, average moving speed, characteristic acceleration, aerodynamic speed squared, change in kinetic energy, and change in potential energy. The model was evaluated with distance windows of 1 km, 2 km, and 5 km; the 1 km window performed best, with an accuracy of 0.91 and an RMSE of 0.0132 L/100 km for fuel consumption prediction. The window size is data-dependent and application-specific, which is considered a limitation of the work. In [69], an Explainable Boosting Machine (EBM) was used to measure the impact of actionable factors on fuel consumption using data collected from different vehicles (cars to trucks), and an algorithm was proposed to generate explanations of the relationship between fuel consumption and fuel factors in the trained models. The data used in that work are independent of fuel type and driving behavior, which might be considered a limitation, and the analysis of hybrid vehicle fuel consumption is left as future work. All these studies help manufacturers and fleet companies identify the various factors affecting fuel consumption/fuel efficiency for diagnostic purposes and cost management.
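The model families recurring in these studies (linear regression, random forest, and an MLP) can be compared in a few lines of scikit-learn. The sketch below uses entirely synthetic data; the feature names only mirror typical inputs such as engine load, engine speed, and vehicle speed, and the resulting numbers bear no relation to the cited results.

```python
# Illustrative comparison of model families on fabricated fuel-rate data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0, 100, n),     # "engine load" (%)
    rng.uniform(600, 2000, n),  # "engine speed" (rpm)
    rng.uniform(0, 110, n),     # "vehicle speed" (km/h)
])
# Synthetic fuel-rate target with mild non-linearity and noise
y = 0.02 * X[:, 0] + 0.001 * X[:, 1] + 0.01 * X[:, 2] ** 1.1 \
    + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear_regression": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    # Scaling the features first helps the MLP converge
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32),
                                      max_iter=2000, random_state=0)),
}
rmse = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse[name] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse[name]:.3f}")
```

On real drive-cycle data the ranking of these families varies with the features and the amount of data available, which is exactly why the studies above report side-by-side comparisons.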

4.2. Emission Estimation

As emissions regulations for transportation tighten, automotive industries, and the heavy-duty truck sector in particular, are focusing on reducing emissions. Emission estimation helps in developing emission inventories and setting standards for environmental protection. Due to the challenges of physics-based models, companies are using data-driven approaches for estimating emissions and taking the necessary actions to reduce them. Previous studies on the analysis and estimation of vehicle emissions, such as carbon monoxide (CO), carbon dioxide (CO2) [70], nitrogen oxides (NOx) [71], hydrocarbons (HC), and particulate matter (PM) [72,73], using machine learning have proved the ability of AI in emissions studies [74,75,76,77,78,79,80,81]. These studies have motivated the use of artificial intelligence techniques in analyzing and estimating emissions in heavy-duty trucks [82].
Pillai et al. [83] modeled and predicted engine-out and tailpipe nitrogen oxide (NOx) in heavy-duty vehicles using deep neural networks (DNN). Four DNNs performing supervised regression for estimating engine-out and tailpipe NOx were developed using data collected from engine dynamometer and chassis dynamometer testing as input. It was determined that high-accuracy models can be developed from a minimal set of significant engine and after-treatment input parameters, such as SCR inlet and outlet temperature, engine-out NOx, and exhaust mass flow rate. Engine-out NOx showed good prediction accuracy, with R2 = 0.99, whereas tailpipe NOx had a prediction accuracy of R2 = 0.92; the comparisons of actual and predicted values used randomly selected input data. Extending this work to on-road testing data might be more accurate, as it would consider telemetric data. An ANN with the Levenberg–Marquardt (LM) training algorithm was used by Mohammadhassani et al. [84] for NOx emission prediction of heavy-duty diesel engines. The model took engine speed, air intake temperature, and fuel mass as inputs and achieved an R2 of 0.89 on the test data. Considering only the engine operating parameters of specific engine types limits the study, as emissions also depend on factors such as vocation type, fuel type, age, and road conditions. A super learner [85] model based on random forest, XGBoost, LightGBM, and CatBoost was proposed by Wei et al. [86] for predicting CO2 and NOx emissions. A one-level model (Level 1 super learner regression) was employed for CO2 prediction and a two-level model (Level 1 super learner regression, Level 2 super learner classification) for NOx prediction. The super learner achieved R2 values of 0.94 and 0.84 for CO2 and NOx emissions, respectively, with comparisons of actual and predicted emissions against other methods.
This study focused on onboard test data and was able to predict well even for significantly different emission levels; a single model that predicts both CO2 and NOx could be considered. Prediction of emissions from diesel engines, which dominate heavy-duty vehicles, was studied by Yu et al. [87] using CEEMDAN-LSTM. The CEEMDAN algorithm extracts subseries of the NOx emission data at different frequencies, and an LSTM neural network is then trained on the subseries. The performance of CEEMDAN-LSTM was compared with other models: random forest, support vector regression, XGBoost, LSTM, CEEMDAN-RF, CEEMDAN-SVR, and CEEMDAN-XGBoost. The CEEMDAN decomposition smoothed sudden changes in the data and improved the accuracy of the LSTM, giving CEEMDAN-LSTM the best performance among the methods in the paper, with an RMSE of 46.11 ppm and an R2 of 0.98. However, this work did not consider the effect of GIS parameters on emission estimation. The sensor data collected every second contain substantial noise and were smoothed before being fed into the LSTM, helping to stabilize model performance, a step omitted in most other studies. Prediction of emissions from HDVs under various scenarios using AI has been well studied, and many fleet management companies are adopting these methods to identify sensor faults, support quality planning, and track how emissions change with vehicle age.
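A stacked ensemble in the spirit of the super learner above can be sketched with scikit-learn's StackingRegressor. The cited work combines RF, XGBoost, LightGBM, and CatBoost base learners; this illustration substitutes scikit-learn estimators and synthetic data for portability, so it shows the stacking idea only, not the published model.

```python
# Minimal "super learner"-style stacked ensemble on synthetic data.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400
X = rng.uniform(0, 1, size=(n, 5))   # five fabricated operating features
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] \
    + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RidgeCV(),  # level-1 learner combines base predictions
)
stack.fit(X_tr, y_tr)
print(f"stacked R^2 on held-out data: {stack.score(X_te, y_te):.3f}")
```

The level-1 learner is trained on cross-validated predictions of the base learners, which is what lets the ensemble weight each base model by its out-of-fold accuracy rather than its training fit.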

4.3. Self-Driving and Truck Platooning

Industry 4.0 technologies, such as deep neural networks, have led to the development of autonomous/self-driving vehicles. As the level of autonomy increases, the capability of the vehicle without human control increases. Level 0 constitutes fully human-directed vehicles. Level 1 and Level 2 vehicles provide driver assistance, such as lane assistance and cruise control. Level 3 vehicles have environmental detection capability and make informed decisions, with human override. Level 4 offers highly autonomous driving in which human interaction is not required in most scenarios but may still be needed. Level 5 vehicles are fully automated and do not require human interaction. Previous works on lane assistance [88,89,90,91,92], pedestrian detection [93,94,95,96,97,98], vehicle detection [99,100,101], object detection [102], traffic sign recognition [103,104,105], self-driving [106,107,108], and determination of turning radius and lateral acceleration in cargo [109] have shown great success in autonomous cars. A platooning-based video-information-sharing Internet of Things framework has been proposed to enhance the safety and stability of autonomous vehicles [110]. However, autonomous trucks remain a challenge. With the success of autonomous cars, many companies are now focusing on autonomous trucks. Autonomous vehicles can be safer than human-controlled vehicles, avoiding human errors such as collisions caused by inclement weather or driver behavior.
The underlying technology for autonomous cars and trucks is similar; however, trucks need to sense conditions further in advance. The current level of autonomy in trucks is at the stage of truck platooning and is expected to reach Level 4 by 2024 [111]. In truck platooning, trucks travel together, connected by a computer and an automated driving system. The trailing vehicles adapt and react based on the lead vehicle's actions, resulting in semi-autonomous trucks. Truck platooning studies span several areas: fuel consumption in truck platoons [112,113], energy efficiency [114], speed control and control design, communication methods, and interaction for autonomous driving. Various studies have been performed on vehicle platooning, such as the prediction of drag force [115] and the effect of surrounding traffic behavior using machine learning. A deep deterministic policy gradient (DDPG)-based proportional-integral-derivative (PID) neural network structure has been developed for vehicular platoon control [116]. This method uses reinforcement learning to find an optimal control strategy that accounts for collision avoidance, maintaining relative position, and host-vehicle responses. The DDPG network consists of an actor network with 1 input layer, 2 hidden layers with 150 and 100 neurons, and an output layer with ReLU activation, and a critic network with 2 input layers, 3 hidden layers with 150, 200, and 100 neurons, and 1 output layer. The method offers higher interpretability and stability than the traditional DRL approach, achieving a maximum speed error 0.02–0.08 m/s lower and a maximum distance error 0.77 m lower than the conventional PID controller.
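As a shape-level sketch of the actor and critic networks described in [116], the following Python code wires up the stated layer widths with randomly initialised weights. The state/action dimensions, the initialisation scheme, and the example state are assumptions for illustration; no training loop is shown.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dense(n_in, n_out, rng):
    """He-initialised weight matrix and zero bias for one layer."""
    return rng.normal(0, np.sqrt(2 / n_in), (n_in, n_out)), np.zeros(n_out)

rng = np.random.default_rng(42)
STATE_DIM, ACTION_DIM = 4, 1   # assumed: e.g., gap error, speed error, ...

# Actor: state -> 150 -> 100 -> action, ReLU activations as stated in [116].
a1 = dense(STATE_DIM, 150, rng)
a2 = dense(150, 100, rng)
a3 = dense(100, ACTION_DIM, rng)

def actor(state):
    h = relu(state @ a1[0] + a1[1])
    h = relu(h @ a2[0] + a2[1])
    return relu(h @ a3[0] + a3[1])

# Critic: (state, action) -> 150 -> 200 -> 100 -> scalar Q-value.
c1 = dense(STATE_DIM + ACTION_DIM, 150, rng)
c2 = dense(150, 200, rng)
c3 = dense(200, 100, rng)
c4 = dense(100, 1, rng)

def critic(state, action):
    h = relu(np.concatenate([state, action]) @ c1[0] + c1[1])
    h = relu(h @ c2[0] + c2[1])
    h = relu(h @ c3[0] + c3[1])
    return (h @ c4[0] + c4[1])[0]

state = np.array([0.5, -0.1, 0.0, 1.2])   # hypothetical platoon state
action = actor(state)
print(action.shape, critic(state, action))
```

In DDPG the actor maps the platoon state to a control action while the critic scores that state-action pair; during training the critic's gradient would shape the actor's weights, which is what lets the learned policy refine a PID-style controller.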
A CNN- and LiDAR-based obstacle detection model with bird's-eye-view (BEV) map generation was proposed in [117]. The raw LiDAR point clouds were preprocessed by merging continuous frames and eliminating the ground. Different CNN models, namely YOLOv3-tiny, YOLOv3-tiny_3l, XNor, HetConv, and Stride-YOLO, were trained on three different LiDAR projection maps (BEV maps of LiDAR point clouds): (1) c1f3g1: H-Map (1 channel) combining three successive frames with ground elimination; (2) c3f3g0: HDD-Map (3 channels) combining three successive frames without ground elimination; (3) c3f3g1: HDD-Map (3 channels) combining three successive frames with ground elimination. An intelligent self-driving truck system consisting of (1) a real-world traffic simulation, (2) a high-fidelity truck model that mimics real truck responses, and (3) an intelligent planning module with a multi-mode trajectory planner was introduced by Wang et al. [118]. The realistic traffic simulator contains a mapped road network, the traffic controller, and vehicle meta-information. The high-fidelity truck model reproduces a real truck's kinematics and powertrain system and is trained with machine learning approaches. Reinforcement learning is used for decision-making and trajectory planning. However, this work focuses mainly on highway performance and is still a long way from Level 4 autonomy. Self-driving provides safety against reckless driving and makes decisions faster than humans. With more advanced technology, however, come risks such as technical errors, software attacks, and sensor failures in inclement weather. There is substantial scope for improvement in this area, as autonomous trucks built on machine learning/computer vision techniques are still in the initial phase of hitting the roads. Autonomous trucks will require more sensors (LiDAR/radar) and more computational power than autonomous cars.
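The c1f3g1-style BEV projection (a one-channel height map built from three merged frames with the ground removed) can be illustrated with a small Python sketch. The grid resolution, extent, ground threshold, and synthetic point clouds below are assumed values for illustration, not the settings of [117].

```python
import numpy as np

def bev_height_map(frames, ground_z=0.2, res=0.5, extent=20.0):
    """Build a 1-channel bird's-eye-view height map (an "H-Map") from
    several LiDAR frames: merge the frames, drop near-ground points,
    then keep the maximum height per grid cell. This mirrors c1f3g1:
    1 channel, 3 merged frames, ground eliminated."""
    points = np.vstack(frames)                 # merge successive frames
    points = points[points[:, 2] > ground_z]   # eliminate the ground
    n = int(2 * extent / res)
    grid = np.zeros((n, n))
    ix = ((points[:, 0] + extent) / res).astype(int)
    iy = ((points[:, 1] + extent) / res).astype(int)
    keep = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    for x, y, z in zip(ix[keep], iy[keep], points[keep, 2]):
        grid[y, x] = max(grid[y, x], z)
    return grid

rng = np.random.default_rng(1)
# Three synthetic frames: mostly ground (z near 0) plus one tall obstacle.
frames = [np.column_stack([rng.uniform(-20, 20, 1000),
                           rng.uniform(-20, 20, 1000),
                           rng.uniform(0.0, 0.1, 1000)]) for _ in range(3)]
obstacle = np.array([[5.0, 3.0, 1.8], [5.2, 3.1, 1.5]])
frames[0] = np.vstack([frames[0], obstacle])

hmap = bev_height_map(frames)
print(hmap.shape, hmap.max())
```

After ground elimination only the obstacle survives in the map, which is why this projection makes obstacles stand out to a 2D CNN detector.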

4.4. Predictive Maintenance and Onboard Diagnostics

Maintenance is important in fleet management for improving vehicle reliability and uptime. The evolution of Maintenance 4.0 has enabled industries to adopt data-driven approaches, shifting the paradigm from Reactive Maintenance (RM) to Preventive Maintenance (PM) to Predictive Maintenance (PdM) [119]. Predictive maintenance uses continuous monitoring to determine when maintenance is required. PdM draws on historical data, statistical inference methods, and machine learning techniques for the early detection of failures [120]. Several studies have applied machine learning to the predictive maintenance of automobiles [121,122,123].
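The core PdM idea, acting on a degradation trend before the hard failure occurs, can be shown with a minimal control-chart-style sketch in Python. The synthetic temperature trace, baseline length, and deviation threshold are all assumptions for illustration; real PdM pipelines use richer features and learned models.

```python
import numpy as np

def early_warning(signal, baseline_n=100, k=5.0):
    """Return the index of the first sample deviating more than k standard
    deviations from a healthy baseline, or None if none does. This captures
    the essence of PdM: flag the drift early, instead of reacting after
    the component has already failed."""
    base = signal[:baseline_n]
    mu, sigma = base.mean(), base.std()
    for i in range(baseline_n, len(signal)):
        if abs(signal[i] - mu) > k * sigma:
            return i
    return None

rng = np.random.default_rng(7)
healthy = rng.normal(70.0, 1.0, 300)          # e.g., a component temperature
faulty = healthy.copy()
faulty[200:] += np.linspace(0.0, 25.0, 100)   # slow degradation from index 200

print(early_warning(healthy), early_warning(faulty))
```

The healthy trace never crosses the threshold, while the drifting trace is flagged soon after degradation begins and long before the full 25-degree excursion, which is the lead time that lets maintenance be scheduled instead of forced.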
Predictive maintenance in fleets is highly important for preventing vehicle downtime, which can lead to large losses, especially for delivery trucks and trucks carrying goods. Prytz et al. [124] used data mining techniques on logged vehicle data from trucks for the predictive maintenance of compressor faults. The dataset was constructed from Volvo's logged vehicle data (LVD), vehicle data administration (VDA), and vehicle service records (VSR). The supervised machine learning algorithms KNN, C5.0, and random forest were used to evaluate the data, and the study concluded that logged vehicle data are a feasible basis for predictive maintenance. This study was restricted to Volvo's data, however, and different manufacturers adopt different engines or systems, which affects vehicle maintenance. The Synthetic Minority Over-sampling Technique (SMOTE) was used to balance the data, the importance of keeping training and test data independent was evaluated, and the effect of the prediction horizon (PH), defined as the period of interest for the maintenance classification, was assessed. The authors focused on data mining and machine learning techniques; artificial neural networks would be another choice, since the data mining approach requires substantial knowledge of the data and of data preprocessing. In [125], a parallel stacked autoencoder was used to obtain low-dimensional representations of the massive amount of high-dimensional logged vehicle data collected from Volvo's heavy-duty trucks, and the embeddings were used to predict the remaining useful life (RUL). This study is very useful for deploying less computationally expensive models on vehicles: the stacked autoencoders improved performance by 6.31% with 99.7% data reduction and by 23.03% with 86.99% data reduction. Scheduling maintenance becomes easier, and downtime is reduced, if the time between failures (TBF) can be estimated from historical data. Chen et al. [126] modeled maintenance data collected from a fleet company using a deep neural network (DNN) to predict the TBF. The maintenance data included parameters such as the number of times the engine had undergone maintenance, vehicle age, cumulative mileage at failure, vehicle model, model year, registration date, vehicle type, workstation, and operating area. The DNN was trained on these input features, and its performance was compared with Bayesian regression, k-nearest neighbors, and decision tree algorithms; the DNN had the lowest root mean square error, 366.73 days using historical maintenance data alone and 363.07 days when the maintenance data were combined with GIS data comprising rainfall, days of rainfall >1 mm, maximum temperature, minimum temperature, and days of air frost during December and February. This work processed the nominal maintenance data with an autoencoder to obtain a low-dimensional, robust representation, which was concatenated with the remaining historical features and the GIS data and passed to the DNN for training. The weights of the neural network were then analyzed to determine the effect of the GIS parameters on the output. This is interesting work, but it requires knowledge of the company's maintenance practices. Considering the vehicle and fuel type could further improve the results, as diesel engines require more maintenance than alternative-fuel engines. Sun et al. [127] proposed onboard predictive maintenance with machine learning and deep learning models for malfunction prediction and root cause analysis. Their work involved multiple steps. The first step identifies data with a high probability of failure, collected using a majority voting method and the diagnostic trouble codes (DTC), and compares multiple machine learning algorithms such as the naïve Bayes classifier, decision trees, support vector machines (SVM), and nearest centroid.
When an abnormality is detected, the time-series data are recorded, and the final step performs sensor-data-level analysis under the assumption that the sensor signals are dependent. A convolutional neural network is trained to reconstruct the expected behavior of each sensor, so that a deviation between prediction and measurement can be identified as a malfunction; abnormal sensor combinations are then mapped to root causes. The CNN learned information from the time-series data better than the ML algorithms. This technique is inexpensive because data are recorded only when an abnormality occurs, but the sensor signal selection was based on individual domain knowledge and on the dependence assumption, and a small deviation in recorded values may not show an immediate impact in the histograms. Rengaswamy et al. [128] studied the effect of a dynamically weighted loss and focal loss in neural networks for prognostics and health management of gas turbine engines and the air pressure system in heavy-duty trucks. Multiple models, including a feed-forward neural network, a 1D convolutional neural network, a bidirectional gated recurrent unit, and a long short-term memory network, were evaluated on the Scania truck dataset, and classification improved with the dynamically weighted loss function. The proposed weighted loss uses the weight variable D given by D(f(x), y) = |f(x) − y|² if |f(x) − y| < C, and |f(x) − y| otherwise. The weighted loss function yielded statistically significant improvements in all models for remaining useful life (RUL) prediction of gas turbines and anomaly detection in the air pressure system of heavy-duty trucks. The learning process with the weighted loss depends on the weight of the learning error, giving more weight to samples with larger errors and preventing the neural network from biasing its predictions.
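The weight variable D from [128] can be written directly in Python; the cut-off C = 1 and the sample prediction/target values below are illustrative assumptions.

```python
import numpy as np

def dynamic_weight(pred, target, C=1.0):
    """Weight variable D from [128]: squared error below the cut-off C,
    absolute error at or above it, so large-error samples keep a strong
    but non-exploding contribution to the loss."""
    err = np.abs(np.asarray(pred) - np.asarray(target))
    return np.where(err < C, err ** 2, err)

preds = np.array([0.9, 1.5, 4.0])
targets = np.ones(3)
print(dynamic_weight(preds, targets))  # -> [0.01 0.25 3.  ]
```

Squaring shrinks sub-threshold errors (0.1 becomes 0.01) while large errors pass through linearly, so the loss focuses learning on the hard, high-error samples without letting outliers dominate quadratically.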
Freight companies are benefiting from predictive maintenance based on historical data. Predictive maintenance reduces vehicle downtime, lowers maintenance costs, and ensures safety by preventing sudden failures. Existing studies, however, are limited to predicting remaining useful life (RUL), time between maintenance (TBM), failures in individual components, abnormalities in sensor behavior, etc. There are very few studies on estimating maintenance cost or identifying the parameters that drive it; such work could help customers choose a vehicle based on its expected maintenance cost.

5. Conclusions

The high level of digitization has changed every industry drastically over the past few years. Artificial intelligence is a cutting-edge technology that imitates human intelligence, and its potential to mimic human actions is driving its adoption in the automotive industry. IoT and cloud technology have enabled the processing of large volumes of data, paving the way for intelligent vehicles. The need for AI in the automotive industry, fueled by increasing demand for new features, the incorporation of new technologies, and a shortage of truck drivers, has driven rapid progress over the past decade. Newer cars come with intelligent decision-making systems, improved safety, driver assistance, fuel efficiency, and lower emissions. AI is becoming an essential part of automotive manufacturing, the supply chain, and the vehicles themselves for self-driving, expanding the automotive industry. Companies adopting AI-based technologies and solutions can gain a significant advantage in the coming years. AI, IoT, and machine learning are changing the way people think about vehicles and are extending these capabilities to heavy-duty vehicles.
In this literature review, applications of AI in heavy-duty vehicles were introduced by discussing data analysis/machine learning techniques that can improve fuel efficiency and predict fuel consumption, predict emissions, identify abnormalities in vehicle performance, enable predictive maintenance, estimate remaining useful life (RUL) and time between maintenance (TBM), and support truck platooning for self-driving. Insightful research efforts in each of these areas have been reviewed; industries adopt these technologies based on the research available. Many research and review papers exist on the automotive field, AI in autonomous vehicles, AI in passenger vehicles, AI in predictive maintenance, etc. This comprehensive literature review focuses specifically on applying modern artificial intelligence technologies to heavy-duty vehicles, as freight transportation is a major contributor to climate change, health impacts, pollution, and the economy. Even small improvements in fuel efficiency, emissions, and downtime prevention can save freight companies significant money and reduce environmental impact. Although electric, hybrid, and autonomous vehicles are major trends that rely on computer vision and decision-making, heavy-duty trucks still have a long way to go to reach that point. Data availability is the main requirement for machine learning/deep learning techniques. Collecting data from vehicles using different fuel types, such as diesel, natural gas, and propane, would enable new studies comparing the fuel consumption, emissions, and maintenance of alternative-fuel and diesel vehicles. Reinforcement learning, which improves performance by interacting with the environment, could be the future of automotive AI.
More focus is needed on improving the performance of machine learning models, as a false positive prediction can be very costly, especially in scenarios such as predictive maintenance. A major challenge in applying machine learning is the limited availability of public datasets; such datasets would support studies on fuel consumption, emissions, and predictive maintenance of heavy-duty trucks, and on comparing fuel types based on truck activity, especially for vocational trucks.

Author Contributions

Conceptualization, S.K. (Sasanka Katreddi); investigation, S.K. (Sasanka Katreddi) and S.K. (Sujan Kasani); data curation, S.K. (Sasanka Katreddi); writing—original draft preparation, S.K. (Sasanka Katreddi); writing—review and editing, S.K. (Sasanka Katreddi) and S.K. (Sujan Kasani); visualization, S.K. (Sasanka Katreddi); supervision, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

  81. Azeez, O.; Pradhan, B.; Shafri, H.; Shukla, N.; Rizeei, H. Modeling of CO Emissions from Traffic Vehicles Using Artificial Neural Networks. Appl. Sci. 2019, 9, 313. [Google Scholar] [CrossRef] [Green Version]
  82. Khurana, S.; Saxena, S.; Jain, S.; Dixit, A. Predictive Modeling of Engine Emissions Using Machine Learning: A Review. Mater. Today Proc. 2021, 38, 280–284. [Google Scholar] [CrossRef]
  83. Pillai, R.; Triantopoulos, V.; Berahas, A.S.; Brusstar, M.; Sun, R.; Nevius, T.; Boehman, A.L. Modeling and Predicting Heavy-Duty Vehicle Engine-Out and Tailpipe Nitrogen Oxide (NOx) Emissions Using Deep Learning. Front. Mech. Eng. 2022, 8, 840310. [Google Scholar] [CrossRef] [PubMed]
  84. Mohammadhassani, J.; Khalilarya, S.; Solimanpur, M.; Dadvand, A. Prediction of NOx Emissions from a Direct Injection Diesel Engine Using Artificial Neural Network. Model. Simul. Eng. 2012, 2012, e830365. [Google Scholar] [CrossRef] [Green Version]
  85. van der Laan, M.J.; Polley, E.C.; Hubbard, A.E. Super Learner. Stat. Appl. Genet. Mol. Biol. 2007, 6, 25. [Google Scholar] [CrossRef] [PubMed]
  86. Wei, N.; Zhang, Q.; Zhang, Y.; Jin, J.; Chang, J.; Yang, Z.; Ma, C.; Jia, Z.; Ren, C.; Wu, L.; et al. Super-Learner Model Realizes the Transient Prediction of CO2 and NOx of Diesel Trucks: Model Development, Evaluation and Interpretation. Environ. Int. 2022, 158, 106977. [Google Scholar] [CrossRef]
  87. Yu, Y.; Wang, Y.; Li, J.; Fu, M.; Shah, A.N.; He, C. A Novel Deep Learning Approach to Predict the Instantaneous NOx Emissions From Diesel Engine. IEEE Access 2021, 9, 11002–11013. [Google Scholar] [CrossRef]
  88. Wang, Q.; Zhuang, W.; Wang, L.; Ju, F. Lane Keeping Assist for an Autonomous Vehicle Based on Deep Reinforcement Learning; SAE International: Warrendale, PA, USA, 2020. [Google Scholar]
  89. Wei, Z.; Wang, C.; Hao, P.; Barth, M.J. Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3108–3113. [Google Scholar] [CrossRef] [Green Version]
  90. Mahajan, V.; Katrakazas, C.; Antoniou, C. Prediction of Lane-Changing Maneuvers with Automatic Labeling and Deep Learning. Transp. Res. Rec. 2020, 2674, 336–347. [Google Scholar] [CrossRef]
  91. Karthikeyan, M.; Sathiamoorthy, S.; Vasudevan, M. Lane Keep Assist System for an Autonomous Vehicle Using Support Vector Machine Learning Algorithm. In Innovative Data Communication Technologies and Application; Raj, J.S., Bashar, A., Ramson, S.R.J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 101–108. [Google Scholar]
  92. Gao, J.; Yi, J.; Zhu, H.; Murphey, Y.L. A Personalized Lane-Changing Model for Advanced Driver Assistance System Based on Deep Learning and Spatial-Temporal Modeling. SAE Int. J. Transp. Saf. 2019, 7, 163–174. [Google Scholar]
  93. Navarro, P.J.; Fernández, C.; Borraz, R.; Alonso, D. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data. Sensors 2016, 17, 18. [Google Scholar] [CrossRef] [Green Version]
  94. Islam, M.M.; Newaz, A.A.R.; Karimoddini, A. A Pedestrian Detection and Tracking Framework for Autonomous Cars: Efficient Fusion of Camera and LiDAR Data. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 1287–1292. [Google Scholar] [CrossRef]
  95. Cao, J.; Song, C.; Peng, S.; Song, S.; Zhang, X.; Shao, Y.; Xiao, F. Pedestrian Detection Algorithm for Intelligent Vehicles in Complex Scenarios. Sensors 2020, 20, 3646. [Google Scholar] [CrossRef]
  96. Angelova, A.; Krizhevsky, A.; Vanhoucke, V. Pedestrian Detection with a Large-Field-Of-View Deep Network. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 704–711. [Google Scholar]
  97. Ortiz Castelló, V.; del Tejo Catalá, O.; Salvador Igual, I.; Perez-Cortes, J.-C. Real-Time on-Board Pedestrian Detection Using Generic Single-Stage Algorithms and on-Road Databases. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420929175. [Google Scholar] [CrossRef]
  98. Zhao, L. Stereo- and Neural Network-Based Pedestrian Detection. IEEE Trans. Intell. Transp. Syst. 2000, 1, 148–154. [Google Scholar] [CrossRef] [Green Version]
  99. Herunde, H.; Singh, A.; Deshpande, H.; Shetty, P. Detection of Pedestrian and Different Types of Vehicles Using Image Processing. Int. J. Res. Ind. Eng. 2020, 9, 99–113. [Google Scholar] [CrossRef]
  100. Galvao, L.G.; Abbod, M.; Kalganova, T.; Palade, V.; Huda, M.N. Pedestrian and Vehicle Detection in Autonomous Vehicle Perception Systems—A Review. Sensors 2021, 21, 7267. [Google Scholar] [CrossRef]
  101. Song, H.; Liang, H.; Li, H.; Dai, Z.; Yun, X. Vision-Based Vehicle Detection and Counting System Using Deep Learning in Highway Scenes. Eur. Transp. Res. Rev. 2019, 11, 51. [Google Scholar] [CrossRef] [Green Version]
  102. Gupta, A.; Anpalagan, A.; Guan, L.; Khwaja, A.S. Deep Learning for Object Detection and Scene Perception in Self-Driving Cars: Survey, Challenges, and Open Issues. Array 2021, 10, 100057. [Google Scholar] [CrossRef]
  103. Mu, G.; Xinyu, Z.; Deyi, L.; Tianlei, Z.; Lifeng, A. Traffic Light Detection and Recognition for Autonomous Vehicles. J. China Univ. Posts Telecommun. 2015, 22, 50–56. [Google Scholar] [CrossRef]
  104. Swetha, S.; Sivakumar, P. SSLA Based Traffic Sign and Lane Detection for Autonomous Cars. In Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 766–771. [Google Scholar] [CrossRef]
  105. Li, Z.; Zeng, Q.; Liu, Y.; Liu, J.; Li, L. An Improved Traffic Lights Recognition Algorithm for Autonomous Driving in Complex Scenarios. Int. J. Distrib. Sens. Netw. 2021, 17, 15501477211018374. [Google Scholar] [CrossRef]
  106. Atakishiyev, S.; Salameh, M.; Yao, H.; Goebel, R. Explainable Artificial Intelligence for Autonomous Driving: An Overview and Guide for Future Research Directions. arXiv 2022, arXiv:2112.11561. [Google Scholar]
  107. Lugano, G. Virtual Assistants and Self-Driving Cars. In Proceedings of the 15th International Conference on ITS Telecommunications (ITST), Warsaw, Poland, 29–31 May 2017; pp. 1–5. [Google Scholar] [CrossRef]
  108. Cunneen, M.; Mullins, M.; Murphy, F. Autonomous Vehicles and Embedded Artificial Intelligence: The Challenges of Framing Machine Driving Decisions. Appl. Artif. Intell. 2019, 33, 706–731. [Google Scholar] [CrossRef] [Green Version]
  109. Jagelčák, J.; Gnap, J.; Kuba, O.; Frnda, J.; Kostrzewski, M. Determination of Turning Radius and Lateral Acceleration of Vehicle by GNSS/INS Sensor. Sensors 2022, 22, 2298. [Google Scholar] [CrossRef]
  110. Zhou, Z.; Akhtar, Z.; Man, K.L.; Siddique, K. A Deep Learning Platooning-Based Video Information-Sharing Internet of Things Framework for Autonomous Driving Systems. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719883133. [Google Scholar] [CrossRef]
  111. This Year, Autonomous Trucks Will Take to the Road with No One on Board. Available online: https://spectrum.ieee.org/this-year-autonomous-trucks-will-take-to-the-road-with-no-one-on-board (accessed on 30 May 2022).
  112. Tsugawa, S.; Jeschke, S.; Shladover, S.E. A Review of Truck Platooning Projects for Energy Savings. IEEE Trans. Intell. Veh. 2016, 1, 68–77. [Google Scholar] [CrossRef]
  113. Song, M.; Chen, F.; Ma, X. Organization of Autonomous Truck Platoon Considering Energy Saving and Pavement Fatigue. Transp. Res. Part D Transp. Environ. 2021, 90, 102667. [Google Scholar] [CrossRef]
  114. Tsugawa, S.; Kato, S.; Aoki, K. An Automated Truck Platoon for Energy Saving. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 4109–4114. [Google Scholar] [CrossRef]
  115. Jaffar, F.; Farid, T.; Sajid, M.; Ayaz, Y.; Khan, M.J. Prediction of Drag Force on Vehicles in a Platoon Configuration Using Machine Learning. IEEE Access 2020, 8, 201823–201834. [Google Scholar] [CrossRef]
  116. Yang, J.; Peng, W.; Sun, C. A Learning Control Method of Automated Vehicle Platoon at Straight Path with DDPG-Based PID. Electronics 2021, 10, 2580. [Google Scholar] [CrossRef]
  117. Zhang, C.; Ouyang, Z.; Ren, L.; Liu, Y. Low-Cost LiDAR-Based Vehicle Detection for Self-Driving Container Trucks at Seaport. In Collaborative Computing: Networking, Applications and Worksharing; Gao, H., Wang, X., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 451–466. [Google Scholar]
  118. Wang, D.; Gao, L.; Lan, Z.; Li, W.; Ren, J.; Zhang, J.; Zhang, P.; Zhou, P.; Wang, S.; Pan, J.; et al. An Intelligent Self-Driving Truck System for Highway Transportation. Front. Neurorobot. 2022, 16, 843026. [Google Scholar] [CrossRef]
  119. Ran, Y.; Zhou, X.; Lin, P.; Wen, Y.; Deng, R. A Survey of Predictive Maintenance: Systems, Purposes and Approaches. arXiv 2019, arXiv:1912.07383. [Google Scholar]
  120. Carvalho, T.P.; Soares, F.A.A.M.N.; Vita, R.; da Francisco, R.P.; Basto, J.P.; Alcalá, S.G.S. A Systematic Literature Review of Machine Learning Methods Applied to Predictive Maintenance. Comput. Ind. Eng. 2019, 137, 106024. [Google Scholar] [CrossRef]
  121. Theissler, A.; Pérez-Velázquez, J.; Kettelgerdes, M.; Elger, G. Predictive Maintenance Enabled by Machine Learning: Use Cases and Challenges in the Automotive Industry. Reliab. Eng. Syst. Saf. 2021, 215, 107864. [Google Scholar] [CrossRef]
  122. Chaudhuri, A. Predictive Maintenance for Industrial IoT of Vehicle Fleets Using Hierarchical Modified Fuzzy Support Vector Machine. arXiv 2018, arXiv:1806.09612. [Google Scholar]
  123. Arena, F.; Collotta, M.; Luca, L.; Ruggieri, M.; Termine, F. Predictive Maintenance in the Automotive Sector: A Literature Review. Math. Comput. Appl. 2021, 27, 2. [Google Scholar] [CrossRef]
  124. Prytz, R.; Nowaczyk, S.; Rögnvaldsson, T.; Byttner, S. Predicting the Need for Vehicle Compressor Repairs Using Maintenance Records and Logged Vehicle Data. Eng. Appl. Artif. Intell. 2015, 41, 139–150. [Google Scholar] [CrossRef] [Green Version]
  125. Revanur, V.; Ayibiowu, A.; Rahat, M.; Khoshkangini, R. Embeddings Based Parallel Stacked Autoencoder Approach for Dimensionality Reduction and Predictive Maintenance of Vehicles. In IoT Streams for Data-Driven Predictive Maintenance and IoT, Edge, and Mobile for Embedded Machine Learning; Gama, J., Pashami, S., Bifet, A., Sayed-Mouchawe, M., Fröning, H., Pernkopf, F., Schiele, G., Blott, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 127–141. [Google Scholar]
  126. Chen, C.; Liu, Y.; Sun, X.; Cairano-Gilfedder, C.D.; Titmus, S. Automobile Maintenance Prediction Using Deep Learning with GIS Data. Procedia CIRP 2019, 81, 447–452. [Google Scholar] [CrossRef]
  127. Sun, Y.; Xu, Z.; Zhang, T. On-Board Predictive Maintenance with Machine Learning. SAE Tech. Pap. 2019, 1, 1048. [Google Scholar] [CrossRef]
  128. Rengasamy, D.; Jafari, M.; Rothwell, B.; Chen, X.; Figueredo, G.P. Deep Learning with Dynamically Weighted Loss Function for Sensor-Based Prognostics and Health Management. Sensors 2020, 20, 723. [Google Scholar] [CrossRef]
Figure 1. Subfields of Artificial Intelligence (adapted from [8]).
Figure 2. Machine Learning Types [11].
Figure 3. Perceptron (adapted from [14]).
Figure 4. The architecture of different neural networks. (a) Fully Connected Neural Network (FCNN) (adapted from [14]), (b) Convolutional Neural Network (CNN) [22], (c) Recurrent Neural Network (RNN) [23], (d) Long Short−Term Memory (LSTM) [24], (e) Gated Recurrent Units (GRU) [25], (f) Autoencoders (AE) [26].
Table 1. Datasets for Autonomous Driving, Motion Estimation/Recognition, and Tracking.
Dataset | Scenario | URL
ImageNet [36] | Object Detection, Semantic Segmentation | https://www.image-net.org (accessed on 10 August 2022)
Pascal VOC [37] | Object Detection, Semantic Segmentation | http://host.robots.ox.ac.uk/pascal/VOC/databases.html (accessed on 10 August 2022)
Microsoft COCO [38] | Object Detection, Semantic Segmentation | https://cocodataset.org (accessed on 10 August 2022)
Cityscapes [40] | Autonomous Driving, Object Detection, Semantic Segmentation | https://www.cityscapes-dataset.com/ (accessed on 14 August 2022)
Waymo Open Dataset [46] | Autonomous Driving, Object Detection, Semantic Segmentation, Tracking | https://waymo.com/open/ (accessed on 10 August 2022)
KITTI [39] | Autonomous Driving, Stereo, Reconstruction, Optical Flow, Object Detection, Semantic Segmentation, Road Detection, Lane Detection, Tracking | http://www.cvlibs.net/datasets/kitti/ (accessed on 14 August 2022)
Virtual KITTI [47] | Autonomous Driving, Stereo, Reconstruction, Optical Flow, Object Detection, Semantic Segmentation, Tracking | https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/ (accessed on 10 August 2022)
ApolloScape [48] | Autonomous Driving, Object Detection, Semantic Segmentation, Lane Detection, Tracking | http://apolloscape.auto/ (accessed on 10 August 2022)
HCI Benchmark [41] | Autonomous Driving, Optical Flow | —
EuroCity Persons Dataset [49] | Autonomous Driving, Object Detection | https://eurocity-dataset.tudelft.nl/ (accessed on 14 August 2022)
Mapillary [42] | Autonomous Driving, Semantic Segmentation | https://www.mapillary.com/datasets (accessed on 14 August 2022)
NuScenes [50] | Autonomous Driving, Object Detection, Semantic Segmentation | https://www.nuscenes.org/ (accessed on 14 August 2022)
Berkeley DeepDrive [51] | Autonomous Driving, Object Detection, Semantic Segmentation, Road Detection, Lane Detection | https://bdd-data.berkeley.edu/ (accessed on 14 August 2022)
German Traffic Sign Recognition [52]/Detection Benchmark [53] | Autonomous Driving, Object Detection, Traffic Sign Detection, Semantic Segmentation, Road Detection, Lane Detection | —
Tsinghua-Tencent 100K [54] | Autonomous Driving, Object Detection, Traffic Sign Detection, Semantic Segmentation, Road Detection, Lane Detection | https://cg.cs.tsinghua.edu.cn/traffic-sign/ (accessed on 20 August 2022)
Caltech Lanes Dataset [43] | Autonomous Driving, Lane Detection | https://mldta.com/dataset/caltech-lanes-dataset/ (accessed on 20 August 2022)
VPGNET Dataset [44] | Autonomous Driving, Lane Detection | —
Argoverse [45] | Autonomous Driving, Tracking | https://www.argoverse.org/ (accessed on 20 August 2022)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

