Figure 1.
Proposed framework for secure and sustainable green mobility, illustrating the interaction among IoT data collection, multimodal data fusion, machine learning algorithms, optimization techniques, and security mechanisms, culminating in improved mobility and sustainability.
Figure 2.
Illustration of the multimodal data fusion process in the proposed framework. Input data sources, including traffic flow data (METR-LA), simulated GPS trajectories, and real-time weather data (OpenWeatherMap API), are integrated in the data fusion engine. The outputs include traffic predictions (via LSTM) and optimized routes (via RL optimization), demonstrating the critical role of data fusion in achieving efficiency and adaptability.
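For concreteness, a minimal sketch of the alignment step behind this fusion is given below. The file names and columns are hypothetical (the paper does not publish its schema); the idea is simply to resample each stream to the 5 min resolution of the traffic data before feature-level fusion.

```python
import pandas as pd

# Hypothetical file names and columns; the study's exact schema is not given.
traffic = pd.read_csv("metr_la_speeds.csv", parse_dates=["timestamp"]).set_index("timestamp")
weather = pd.read_csv("weather_hourly.csv", parse_dates=["timestamp"]).set_index("timestamp")
gps = pd.read_csv("synthetic_gps.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Bring every stream to the 5 min resolution of the traffic data.
weather_5m = weather.resample("5min").ffill()          # hold the last hourly reading
gps_5m = gps.resample("5min").mean(numeric_only=True)  # aggregate per-second GPS fixes

# Feature-level fusion: one row per 5 min window with all modalities side by side.
fused = traffic.join(weather_5m, how="inner").join(gps_5m, how="inner")
print(fused.head())
```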
Figure 3.
LSTM architecture used for congestion prediction. This model processes traffic flow data, weather conditions, and historical patterns to capture temporal dependencies and predict traffic flow for specific road segments.
Figure 4.
Reinforcement learning-based route optimization workflow. The system uses a Deep Q-Network (DQN) agent to dynamically select optimal routes based on traffic conditions, energy efficiency, and reward mechanisms, with a continuous feedback loop for real-time updates.
Figure 5.
The convergence of cumulative rewards during reinforcement learning training. The upward trend indicates the agent’s improved decision-making capabilities, demonstrating its ability to optimize routes effectively through iterative learning.
Figure 6.
Convergence of the training and validation loss for the LSTM model during congestion prediction. The declining trend in both losses over 50 epochs demonstrates stable training and reliable generalization.
Figure 7.
Comparison of average travel times across baseline routing methods and the proposed optimization framework. The optimized framework achieves a 20% reduction in travel time by dynamically adapting to real-time traffic conditions. Error bars indicate the variability observed within each simulation scenario.
Figure 8.
Traffic congestion levels visualized as heatmaps before and after applying the proposed optimization framework. The red heatmap indicates high congestion prior to optimization, while the green heatmap highlights the improved traffic flow, particularly during peak hours.
Figure 9.
Cumulative rewards achieved by the RL agent during training. The consistent upward trend over 100 episodes reflects the agent's ability to learn optimal routing strategies and adapt to dynamic traffic conditions.
Figure 10.
Average energy consumption per kilometer across the baseline and optimized scenarios. The optimized framework achieves a 15% reduction in energy consumption through smoother traffic flow and route optimization. Error bars represent simulation variability.
Figure 11.
Comparison of CO₂ emissions per kilometer between baseline routing and the optimized framework. The framework achieves a 10% reduction in emissions, attributed to improved traffic flow and reduced idling times.
Figure 12.
Comparative sustainability contributions of baseline methods versus the proposed framework. The chart highlights the reductions in travel time, energy consumption, and CO₂ emissions achieved by the proposed framework.
Figure 13.
Modular applications of the proposed framework. The diagram illustrates how the core IoT and AI framework can be extended to diverse applications, including traffic management, energy optimization, emergency services, logistics, and public transit coordination.
Table 1.
Summary of the datasets used for the case study on real-time traffic optimization, highlighting the key characteristics, sources, and features of each dataset.
| Dataset | Source | Type | Features | Size | Time Period | Resolution |
|---|---|---|---|---|---|---|
| METR-LA | Public dataset | Traffic Flow Data | Speed, Traffic Volume | ~1.5M data points | March 2012–June 2012 | 5 min intervals |
| PEMS-BAY | Public dataset | Traffic Flow Data | Speed, Traffic Volume | ~2M data points | January 2017–December 2017 | 5 min intervals |
| Berlin Open Mobility Data | Public dataset | Traffic and Mobility Data | Speed, Congestion, Multimodal Data | API-based | Real-time and historical | Variable |
| OpenWeatherMap | API-based | Weather Data | Temperature, Precipitation, Wind | Depends on API requests | Real-time and historical | Hourly or custom intervals |
| Synthetic GPS Data | Simulated in-house | GPS Trajectories | Vehicle Location, Routes | ~50K simulated trajectories | Custom simulation period | Per-second intervals |
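As a hedged sketch of data access, the snippet below assumes the widely used HDF5 distribution of METR-LA (the file name `metr-la.h5` is conventional, not quoted from the paper) and queries OpenWeatherMap's public current-weather endpoint with a placeholder key and coordinates.

```python
import pandas as pd
import requests

# METR-LA is commonly distributed as an HDF5 file of sensor speeds at 5 min
# intervals; the file name here is an assumption about that distribution.
speeds = pd.read_hdf("metr-la.h5")  # index: timestamps, columns: sensor IDs
print(speeds.index.min(), speeds.index.max(), speeds.shape)

# OpenWeatherMap current-weather endpoint (requires a free API key).
API_KEY = "YOUR_API_KEY"            # placeholder
resp = requests.get(
    "https://api.openweathermap.org/data/2.5/weather",
    params={"lat": 34.05, "lon": -118.24, "units": "metric", "appid": API_KEY},
    timeout=10,
)
resp.raise_for_status()
obs = resp.json()
print(obs["main"]["temp"], obs["wind"]["speed"])
```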
Table 2.
Consolidated performance impact of multimodal fusion strategies and RL optimization.
| Category | No Fusion (Baseline) | Feature-Level Fusion (Kalman Filtering + PCA) | Deep Learning-Based Fusion |
|---|---|---|---|
| Traffic Prediction Accuracy (%) | 78.5 | 84.1 | 91.3 |
| Route Optimization Improvement (%) | 12.3 | 18.5 | 26.7 |
| Energy Efficiency Gains (%) | 6.7 | 10.2 | 14.9 |
| AI Generalization Accuracy, New City (%) | 73.2 | 79.5 | 87.1 |
| RL Policy Adaptation Score (0–100) | 45.3 | 67.9 | 88.5 |
| Battery Longevity Improvement (%) | 0.0 | 14.5 | 39.1 |
| Energy Cost Reduction (%) | 0.0 | 4.3 | 12.8 |
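The feature-level column combines Kalman filtering with PCA. A self-contained sketch of that pipeline is shown below; the noise parameters, sensor counts, and data are illustrative, not the study's.

```python
import numpy as np
from sklearn.decomposition import PCA

def kalman_smooth(z, q=1e-3, r=0.25):
    """Minimal 1-D Kalman filter for a noisy speed series (illustrative q, r)."""
    x, p = z[0], 1.0
    out = np.empty_like(z, dtype=float)
    for i, zi in enumerate(z):
        p = p + q             # predict: process noise inflates the variance
        k = p / (p + r)       # Kalman gain
        x = x + k * (zi - x)  # update with the new measurement
        p = (1 - k) * p
        out[i] = x
    return out

rng = np.random.default_rng(0)
speeds = 60 + rng.normal(0, 5, size=(288, 20))   # 288 x 5 min steps, 20 sensors
smoothed = np.column_stack([kalman_smooth(speeds[:, j]) for j in range(20)])
weather = rng.normal(size=(288, 3))              # e.g., temperature, precipitation, wind

fused = np.hstack([smoothed, weather])           # feature-level fusion
compact = PCA(n_components=8).fit_transform(fused)  # compress before the predictor
print(compact.shape)                             # (288, 8)
```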
Table 3.
Parameters of LSTM model for congestion prediction.
| Parameter | Value | Explanation |
|---|---|---|
| Input Features | Traffic speed, weather data | Multimodal data sources providing critical information on road conditions and environmental factors. |
| Time Steps | 12 (for 1 h prediction) | The number of previous time intervals (12 × 5 min) used to predict the next time step. |
| LSTM Layers | 2 | Stacked architecture to enhance the model's capacity for learning complex patterns over time. |
| Neurons per Layer | 64 | Number of units in each LSTM layer, controlling the model's ability to capture data relationships. |
| Dropout Rate | 0.2 | Regularization method to reduce overfitting by randomly deactivating 20% of neurons during training. |
| Loss Function | Mean Squared Error (MSE) | Measures the average squared difference between predicted and actual values; lower MSE indicates better performance. |
| Optimizer | Adam | Adaptive optimizer that adjusts learning rates dynamically for faster convergence. |
| Learning Rate | 0.001 | Step size at each iteration of gradient descent, determining the model's convergence rate. |
| Batch Size | 32 | Number of samples processed together in a single forward and backward pass during training. |
| Epochs | 50 | Number of complete iterations through the entire training dataset. |
| Training/Validation/Testing Split | 70%/15%/15% | Proportions used to allocate the dataset for training, validating, and testing the model. |
| Evaluation Metrics | RMSE, MAE, R² | RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error) assess prediction accuracy; R² measures explained variance. |
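Under these parameters, the model can be reproduced almost directly in Keras. The sketch below mirrors Table 3; the input feature count is an assumption, since the exact number of input channels is not stated.

```python
import tensorflow as tf

# Stacked LSTM per Table 3: 12 time steps, 2 LSTM layers x 64 units,
# dropout 0.2, Adam (lr = 0.001), MSE loss, 50 epochs, batch size 32.
N_FEATURES = 5  # assumed channel count (speed + weather features); not stated exactly

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),  # predicted traffic speed for the next 5 min step
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="mse",
    metrics=[tf.keras.metrics.RootMeanSquaredError(), "mae"],
)
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=32)
model.summary()
```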
Table 4.
Parameters of the DQN model for route optimization.
| Parameter | Value | Explanation |
|---|---|---|
| State Representation | Traffic congestion levels, energy usage | Encodes the traffic network's current state, including vehicle density and energy consumption metrics. |
| Action Space | 10 possible route options | Defines the set of possible decisions (alternative routes) the agent can take at each step. |
| Reward Function | Negative travel time; positive for energy-efficient routes | Encourages actions that minimize travel time and energy usage; penalizes congestion-prone choices. |
| Neural Network Architecture | 3 layers, 128 neurons/layer | The architecture of the DQN, with fully connected layers enabling the agent to learn action-value mappings. |
| Training Episodes | 10,000 | Total number of simulation runs in which the agent learns by interacting with the environment. |
| Steps per Episode | 50 | Number of sequential decisions (actions) the agent makes within a single training episode. |
| Exploration Strategy | Epsilon-greedy, ε = 1 → 0.01 | Balances random exploration of new actions with exploitation of the best-known strategies, gradually shifting toward exploitation. |
| Discount Factor (γ) | 0.99 | Ensures that the agent considers long-term rewards, prioritizing future outcomes over immediate ones. |
| Optimizer | Adam | Optimizer used to adjust the weights of the neural network via gradient descent. |
| Learning Rate | 0.0005 | Determines how much the model parameters are updated in response to the calculated error at each step. |
| Evaluation Metrics | Average reward per episode, travel time reduction | Measures the agent's effectiveness by monitoring cumulative rewards and percentage reductions in travel time. |
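A corresponding Keras sketch of the Q-network and epsilon-greedy policy implied by Table 4 follows. The state dimension is an assumption: the paper specifies the state contents (congestion levels, energy usage) but not their encoding.

```python
import numpy as np
import tensorflow as tf

# Q-network per Table 4: 3 hidden layers x 128 neurons, 10 route actions,
# Adam with learning rate 0.0005. STATE_DIM is an assumed encoding size.
STATE_DIM, N_ACTIONS = 32, 10

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_ACTIONS),  # one Q-value per candidate route
])
q_net.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005), loss="mse")

def select_action(state: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy: explore with probability epsilon, else pick the best route."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(N_ACTIONS))
    q_values = q_net.predict(state[None, :], verbose=0)[0]
    return int(np.argmax(q_values))

# Epsilon anneals from 1.0 to 0.01 over 10,000 episodes (Table 4); multiplicative
# decay per episode is one common schedule, used here as an assumption.
decay = (0.01 / 1.0) ** (1.0 / 10_000)
epsilon = 1.0
state = np.zeros(STATE_DIM, dtype=np.float32)
print(select_action(state, epsilon))
```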
Table 5.
Comparative analysis of traffic optimization frameworks.
| Metric | Baseline Methods | Proposed Framework | Improvement (%) |
|---|---|---|---|
| Travel Time (min) | 100 | 80 | 20 |
| Energy Consumption (kJ/km) | 8.0 | 6.8 | 15 |
| CO₂ Emissions (g/km) | 200 | 180 | 10 |
| Congestion Index | 0.7 | 0.4 | 43 |
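The Improvement column follows directly from (baseline − optimized) / baseline, as the short check below confirms, including the rounded 43% for the congestion index.

```python
# Improvement (%) = (baseline - optimized) / baseline * 100, per Table 5.
metrics = {
    "Travel Time (min)": (100, 80),
    "Energy Consumption (kJ/km)": (8.0, 6.8),
    "CO2 Emissions (g/km)": (200, 180),
    "Congestion Index": (0.7, 0.4),
}
for name, (base, opt) in metrics.items():
    print(f"{name}: {(base - opt) / base:.0%}")  # 20%, 15%, 10%, 43%
```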
Table 6.
Predictive performance of the LSTM congestion forecasting model under varying traffic conditions and data fusion strategies. Deep learning-based fusion consistently yields the highest accuracy, while the model maintains generalizability across congestion levels and cities.
| Scenario | RMSE | MAE | R² |
|---|---|---|---|
| General (LSTM) | 2.14 | 1.45 | 0.87 |
| Low Congestion | 1.85 | 1.12 | 0.91 |
| High Congestion | 2.65 | 1.98 | 0.79 |
| Cross-City Generalization | 2.33 | 1.62 | 0.83 |
| Fusion–None | 2.71 | 1.86 | 0.74 |
| Fusion–Feature-Level | 2.19 | 1.37 | 0.85 |
| Fusion–Deep Learning | 1.78 | 1.05 | 0.91 |
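The three metrics can be computed with scikit-learn as sketched below; the arrays are made-up placeholders, not values from the study.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def report(y_true, y_pred):
    """RMSE, MAE, and R² as used in Table 6."""
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mae = mean_absolute_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    return rmse, mae, r2

y_true = np.array([55.0, 48.2, 61.3, 40.7])  # observed speeds (illustrative)
y_pred = np.array([53.1, 49.0, 59.8, 43.0])  # model forecasts (illustrative)
print("RMSE=%.2f MAE=%.2f R2=%.2f" % report(y_true, y_pred))
```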
Table 7.
Comparative performance of reinforcement learning-based optimization policies. The adaptive RL model outperforms both the baseline and static policies in travel time reduction, energy efficiency, and reward score, demonstrating its ability to learn dynamic routing strategies in real time.
| Policy Type | Avg Travel Time (min) | Energy Consumption (kJ/km) | Avg Reward | Time Reduction (%) | Energy Gain (%) |
|---|---|---|---|---|---|
| Baseline | 100 | 8.0 | – | – | – |
| Static RL | 90 | 7.4 | 45.6 | 10 | 7.5 |
| Adaptive RL | 80 | 6.8 | 63.2 | 20 | 15 |
Table 8.
Effect of real-world operational constraints on optimization performance and energy cost savings. Although security and privacy features introduce minor efficiency trade-offs, they maintain system viability and align with regulatory requirements. Cloud deployment improves optimization but may increase latency.
| Scenario | Optimization Score Impact | Energy Cost Savings (%) | Comment |
|---|---|---|---|
| Dynamic Energy Pricing | −3.5% | 12.3 | Slight loss in performance but notable cost gains |
| Anomaly Detection Active | −4.2% | 11.7 | Anomaly detection adds load but maintains security |
| Privacy Protection (Edge-only) | −6.0% | 9.5 | Privacy measures reduce optimization slightly |
| Cloud-Based Deployment | +2.8% | 13.8 | Cloud model yields higher efficiency but more latency |