Search Results (3,193)

Search Parameters:
Keywords = time-gated

1911 KB  
Article
Predicting Urban Traffic Under Extreme Weather by Deep Learning Method with Disaster Knowledge
by Jiting Tang, Yuyao Zhu, Saini Yang and Carlo Jaeger
Appl. Sci. 2025, 15(17), 9848; https://doi.org/10.3390/app15179848 - 8 Sep 2025
Abstract
Meteorological and climatological trends are surely changing the way urban infrastructure systems need to be operated and maintained. Urban road traffic fluctuates more significantly under the interference of strong wind–rain weather, especially during tropical cyclones. Deep learning-based methods have significantly improved the accuracy of traffic prediction under extreme weather, but their robustness still has much room for improvement. As the frequency of extreme weather events increases due to climate change, accurately predicting spatiotemporal patterns of urban road traffic is crucial for a resilient transportation system. The compounding effects of the hazards, environments, and urban road network determine the spatiotemporal distribution of urban road traffic during an extreme weather event. In this paper, a novel Knowledge-driven Attribute-Augmented Attention Spatiotemporal Graph Convolutional Network (KA3STGCN) framework is proposed to predict urban road traffic under compound hazards. We design a disaster-knowledge attribute-augmented unit to enhance the model’s ability to perceive real-time hazard intensity and road vulnerability. The attribute-augmented unit includes the dynamic hazard attributes and static environment attributes besides the road traffic information. In addition, we improve feature extraction by combining Graph Convolutional Network, Gated Recurrent Unit, and the attention mechanism. A real-world dataset in Shenzhen City, China, was employed to validate the proposed framework. The findings show that the prediction accuracy of traffic speed can be significantly increased by 12.16%~31.67% with disaster information supplemented, and the framework performs robustly on different road vulnerabilities and hazard intensities. The framework can be migrated to other regions and disaster scenarios in order to strengthen city resilience. Full article
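The core mechanism this abstract describes — concatenating dynamic hazard and static environment attributes onto each road segment's traffic features before a graph-convolutional GRU update — can be sketched in a few lines of NumPy. This is a hedged illustration only, not the authors' KA3STGCN code; all names, shapes, and the stacking convention are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normalized_adjacency(A):
    # Standard GCN propagation rule: symmetrically normalize A + I.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def attribute_augmented_gru_step(A_norm, traffic, hazard, env, h_prev, p):
    """One spatiotemporal step: augment per-road traffic features with
    dynamic hazard and static environment attributes, diffuse them over
    the road graph, then update a GRU hidden state."""
    feats = np.concatenate([traffic, hazard, env], axis=1)  # (n_roads, f)
    g = A_norm @ feats                                      # one-hop graph conv
    z = sigmoid(g @ p["Wz"] + h_prev @ p["Uz"])             # update gate
    r = sigmoid(g @ p["Wr"] + h_prev @ p["Ur"])             # reset gate
    h_cand = np.tanh(g @ p["Wh"] + (r * h_prev) @ p["Uh"])  # candidate state
    return (1.0 - z) * h_prev + z * h_cand
```

The attention mechanism the paper adds on top would reweight these per-step states before the prediction head; it is omitted here for brevity.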
20 pages, 2671 KB  
Article
Multivariate Time Series Anomaly Detection Based on Inverted Transformer with Multivariate Memory Gate
by Yuan Ma, Weiwei Liu, Changming Xu, Luyi Bai, Ende Zhang and Junwei Wang
Entropy 2025, 27(9), 939; https://doi.org/10.3390/e27090939 - 8 Sep 2025
Abstract
In the industrial IoT, it is vital to detect anomalies in multivariate time series, yet it faces numerous challenges, including highly imbalanced datasets, complex and high-dimensional data, and large disparities across variables. Despite the recent surge in proposals for deep learning-based methods, these approaches typically treat the multivariate data at each point in time as a unique token, weakening the personalized features and dependency relationships between variables. As a result, their performance tends to degrade under highly imbalanced conditions, and reconstruction-based models are prone to overfitting abnormal patterns, leading to excessive reconstruction of anomalous inputs. In this paper, we propose ITMMG, an inverted Transformer with a multivariate memory gate. ITMMG employs an inverted token embedding strategy and multivariate memory to capture deep dependencies among variables and the normal patterns of individual variables. The experimental results obtained demonstrate that the proposed method exhibits superior performance in terms of detection accuracy and robustness compared with existing baseline methods across a range of standard time series anomaly detection datasets. This significantly reduces the probability of misclassifying anomalous samples during reconstruction. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

17 pages, 3666 KB  
Article
Efficient Retinal Vessel Segmentation with 78K Parameters
by Zhigao Zeng, Jiakai Liu, Xianming Huang, Kaixi Luo, Xinpan Yuan and Yanhui Zhu
J. Imaging 2025, 11(9), 306; https://doi.org/10.3390/jimaging11090306 - 8 Sep 2025
Abstract
Retinal vessel segmentation is critical for early diagnosis of diabetic retinopathy, yet existing deep models often compromise accuracy for complexity. We propose DSAE-Net, a lightweight dual-stage network that addresses this challenge by (1) introducing a Parameterized Cascaded W-shaped Architecture enabling progressive feature refinement with only 1% of the parameters of a standard U-Net; (2) designing a novel Skeleton Distance Loss (SDL) that overcomes boundary loss limitations by leveraging vessel skeletons to handle severe class imbalance; (3) developing a Cross-modal Fusion Attention (CMFA) module combining group convolutions and dynamic weighting to effectively expand receptive fields; and (4) proposing Coordinate Attention Gates (CAGs) to optimize skip connections via directional feature reweighting. Evaluated extensively on DRIVE, CHASE_DB1, HRF, and STARE datasets, DSAE-Net significantly reduces computational complexity while outperforming state-of-the-art lightweight models in segmentation accuracy. Its efficiency and robustness make DSAE-Net particularly suitable for real-time diagnostics in resource-constrained clinical settings. Full article
(This article belongs to the Section Image and Video Processing)

27 pages, 730 KB  
Article
Alleviating the Communication Bottleneck in Neuromorphic Computing with Custom-Designed Spiking Neural Networks
by James S. Plank, Charles P. Rizzo, Bryson Gullett, Keegan E. M. Dent and Catherine D. Schuman
J. Low Power Electron. Appl. 2025, 15(3), 50; https://doi.org/10.3390/jlpea15030050 - 8 Sep 2025
Abstract
For most, if not all, AI-accelerated hardware, communication with the agent is expensive and heavily bottlenecks the hardware performance. This omnipresent hardware restriction is also found in neuromorphic computing: a novel style of computing that involves deploying spiking neural networks to specialized hardware to achieve low size, weight, and power (SWaP) compute. In neuromorphic computing, spike trains, times, and values are used to communicate information to, from, and within the spiking neural network. Input data, in order to be presented to a spiking neural network, must first be encoded as spikes. After processing the data, spikes are communicated by the network that represent some classification or decision that must be processed by decoder logic. In this paper, we first present principles for interconverting between spike trains, times, and values using custom-designed spiking subnetworks. Specifically, we present seven networks that encompass the 15 conversion scenarios between these encodings. We then perform three case studies where we either custom design a novel network or augment existing neural networks with these conversion subnetworks to vastly improve their communication performance with the outside world. We employ a classic space vs. time tradeoff by pushing spike data encoding and decoding techniques into the network mesh (increasing space) in order to minimize intra- and extranetwork communication time. This results in a classification inference speedup of 23× and a control inference speedup of 4.3× on field-programmable gate array hardware. Full article
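The spike/value interconversion principle this abstract builds on can be illustrated with the simplest such encoding, rate coding: a value becomes a spike count over a time window, and decoding inverts the count. A sketch under assumed conventions (binary spike train, linear code) — not one of the paper's seven conversion subnetworks.

```python
import numpy as np

def value_to_spike_train(value, n_steps, v_max):
    # Rate coding: a value in [0, v_max] becomes a binary spike train
    # whose spike count over n_steps is proportional to the value.
    n_spikes = int(round(value / v_max * n_steps))
    train = np.zeros(n_steps, dtype=int)
    if n_spikes > 0:
        # Spread the spikes as evenly as possible across the window.
        idx = np.linspace(0, n_steps - 1, n_spikes).round().astype(int)
        train[idx] = 1
    return train

def spike_train_to_value(train, v_max):
    # Decode by counting spikes (the inverse of the encoder above).
    return train.sum() / len(train) * v_max
```

Pushing such encoders and decoders into the network mesh, as the paper does, trades neurons (space) for fewer off-chip spike exchanges (time).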
(This article belongs to the Special Issue Neuromorphic Computing for Edge Applications)

23 pages, 10200 KB  
Article
Real-Time Driver State Detection Using mmWave Radar: A Spatiotemporal Fusion Network for Behavior Monitoring on Edge Platforms
by Shih-Pang Tseng, Wun-Yang Wu, Jhing-Fa Wang and Dawei Tao
Electronics 2025, 14(17), 3556; https://doi.org/10.3390/electronics14173556 - 7 Sep 2025
Abstract
Fatigue and distracted driving are among the leading causes of traffic accidents, highlighting the importance of developing efficient and non-intrusive driver monitoring systems. Traditional camera-based methods are often limited by lighting variations, occlusions, and privacy concerns. In contrast, millimeter-wave (mmWave) radar offers a non-contact, privacy-preserving, and environment-robust solution, providing a forward-looking alternative. This study introduces a novel deep learning model, RTSFN (radar-based temporal-spatial fusion network), which simultaneously analyzes the temporal motion changes and spatial posture features of the driver. RTSFN incorporates a cross-gated fusion mechanism that dynamically integrates multi-modal information, enhancing feature complementarity and stabilizing behavior recognition. Experimental results show that RTSFN effectively detects dangerous driving states with an average F1 score of 94% and recognizes specific high-risk behaviors with an average F1 score of 97% and can run in real-time on edge devices such as the NVIDIA Jetson Orin Nano, demonstrating its strong potential for deployment in intelligent transportation and in-vehicle safety systems. Full article

23 pages, 836 KB  
Article
Energy Consumption and Carbon Emissions of Compressed Earth Blocks Stabilized with Recycled Cement
by Alessandra Ranesi, Ricardo Cruz, Vitor Sousa and José Alexandre Bogas
Materials 2025, 18(17), 4194; https://doi.org/10.3390/ma18174194 - 6 Sep 2025
Abstract
Driven by the pursuit of more sustainable materials, earth construction has seen renewed interest in recent years. However, chemical stabilization is often required to ensure adequate water resistance. While recycled cement from concrete waste (RCC) has recently emerged as a more sustainable alternative to ordinary Portland cement (OPC) for soil stabilization, its environmental impact remains unassessed. A hybrid model, built on collected data and direct simulations, was implemented to estimate energy and carbon emissions of compressed earth blocks (CEBs) stabilized with RCC as a partial or total replacement of OPC. Four operational scenarios were assessed in a cradle-to-gate approach, evaluating the environmental impact per CEB unit, and normalizing it to the CEB compressive strength. OPC CEBs showed up to 9 times higher energy consumption (2.46 vs. 0.24 MJ/CEB) and about 35 times higher carbon emissions (0.438 vs. 0.012 kgCO2/CEB) than UCEBs. However, replacing OPC with RCC reduced energy consumption by up to 8% and carbon emissions by up to 64%. Although RCC CEBs showed lower mechanical strength, resulting in higher energy consumption when normalized to compressive strength, carbon emissions remained up to 48% lower compared to OPC CEBs. RCC emerged as a more sustainable alternative to OPC for earth stabilization, while also improving the mechanical strength and durability of UCEBs. Full article

14 pages, 2677 KB  
Article
Spatial Monitoring of I/O Interconnection Nets in Flip-Chip Packages
by Emmanuel Bender, Moshe Sitbon, Tsuriel Avraham and Michael Gerasimov
Electronics 2025, 14(17), 3549; https://doi.org/10.3390/electronics14173549 - 6 Sep 2025
Abstract
Here, we introduce a novel method for the real-time spatial monitoring of I/O interconnection nets in flip-chip packages. Resistance changes in 39 I/O nets are observed simultaneously to produce a spatial profile of the relative degradations of the solder ball joints, interconnection lines, and transistor gates. Location-specific time-to-failure (TTF) profiles are generated from the degradation data to show the impact of the I/O nets in the context of their placement on the chip. The system succeeds in formulating a clear trend of resistance increase even in relatively mild constant temperature stress conditions. Test results at four temperatures from 80 °C to 120 °C show a dominant degradation pattern strongly influenced by BTI aging, demonstrating an acute vulnerability in the pass gates to voltage and temperature stress. The proposed compact spatial monitor solution can be integrated into virtually all chip orientations. The outcome of this study can assist in foreseeing system vulnerabilities in a large spectrum of packaging and advanced packaging orientations in field applications. Full article
(This article belongs to the Special Issue Advances in Hardware Security Research)

23 pages, 1292 KB  
Article
Hardware Validation for Semi-Coherent Transmission Security
by Michael Fletcher, Jason McGinthy and Alan J. Michaels
Information 2025, 16(9), 773; https://doi.org/10.3390/info16090773 - 5 Sep 2025
Abstract
The rapid growth of Internet-connected devices integrating into our everyday lives has no end in sight. As more devices and sensor networks are manufactured, security tends to be a low priority. However, the security of these devices is critical, and many current research topics are looking at the composition of simpler techniques to increase overall security in these low-power commercial devices. Transmission security (TRANSEC) methods are one option for physical-layer security and are a critical area of research with the increasing reliance on the Internet of Things (IoT); most such devices use standard low-power Time-division multiple access (TDMA) or frequency-division multiple access (FDMA) protocols susceptible to reverse engineering. This paper provides a hardware validation of previously proposed techniques for the intentional injection of noise into the phase mapping process of a spread spectrum signal used within a receiver-assigned code division multiple access (RA-CDMA) framework, which decreases an eavesdropper’s ability to directly observe the true phase and reverse engineer the associated PRNG output or key and thus the spreading sequence, even at high SNRs. This technique trades a conscious reduction in signal correlation processing for enhanced obfuscation, with a slight hardware resource utilization increase of less than 2% of Adaptive Logic Modules (ALMs), solidifying this work as a low-power technique. This paper presents the candidate method, quantifies the expected performance impact, and incorporates a hardware-based validation on field-programmable gate array (FPGA) platforms using arbitrary-phase phase-shift keying (PSK)-based spread spectrum signals. Full article
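The core TRANSEC idea here — deliberately blurring the phase mapping so the true constellation point is never transmitted, at a quantifiable cost in correlation — can be sketched as follows. Illustrative only: the paper's RA-CDMA spreading and PRNG machinery are not reproduced, and all names are assumptions.

```python
import numpy as np

def jittered_psk(symbols, M, max_jitter, rng):
    # Map M-ary symbols to unit-circle phases, then inject bounded uniform
    # phase noise so an eavesdropper cannot observe the exact phase.
    phases = 2.0 * np.pi * symbols / M
    noise = rng.uniform(-max_jitter, max_jitter, size=symbols.shape)
    return np.exp(1j * (phases + noise))

def expected_correlation(max_jitter):
    # Mean correlation with the clean symbol: E[cos U] = sin(d)/d for
    # U ~ Uniform(-d, d) -- the deliberate correlation-processing loss.
    return np.sin(max_jitter) / max_jitter
```

The intended receiver, knowing the spreading sequence, still despreads against the clean phases and absorbs the bounded loss; the eavesdropper sees only the jittered mapping.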
(This article belongs to the Special Issue Hardware Security and Trust, 2nd Edition)

24 pages, 32280 KB  
Article
Spectral Channel Mixing Transformer with Spectral-Center Attention for Hyperspectral Image Classification
by Zhenming Sun, Hui Liu, Ning Chen, Haina Yang, Jia Li, Chang Liu and Xiaoping Pei
Remote Sens. 2025, 17(17), 3100; https://doi.org/10.3390/rs17173100 - 5 Sep 2025
Abstract
In recent years, the research trend of HSI classification has focused on the innovative integration of deep learning and Transformer architecture to enhance classification performance through multi-scale feature extraction, attention mechanism optimization, and spectral–spatial collaborative modeling. However, due to the excessive computational complexity and the large number of parameters of the Transformer, there is an expansion bottleneck in long sequence tasks, and the collaborative optimization of the algorithm and hardware is required. To better handle this issue, our paper proposes a method which integrates RWKV linear attention with Transformer through a novel TC-Former framework, combining TimeMixFormer and HyperMixFormer architectures. Specifically, TimeMixFormer has optimized the computational complexity through time decay weights and gating design, significantly improving the processing efficiency of long sequences and reducing the computational complexity. HyperMixFormer employs a gated WKV mechanism and dynamic channel weighting, combined with Mish activation and time-shift operations, to optimize computational overhead while achieving efficient cross-channel interaction, significantly enhancing the discriminative representation of spectral features. The pivotal characteristic of the proposed method lies in its innovative integration of linear attention mechanisms, which enhance HSI classification accuracy while achieving lower computational complexity. Evaluation experiments on three public hyperspectral datasets confirm that this framework outperforms the previous state-of-the-art algorithms in classification accuracy. Full article
(This article belongs to the Section Remote Sensing Image Processing)

17 pages, 4556 KB  
Article
Multi-Element Prediction of Soil Nutrients Using Laser-Induced Breakdown Spectroscopy and Interpretable Multi-Output Weight Network
by Xiaolong Li, Liuye Cao, Chengxu Lyu, Zhengyu Tao, Anan Tao, Wenwen Kong and Fei Liu
Chemosensors 2025, 13(9), 336; https://doi.org/10.3390/chemosensors13090336 - 5 Sep 2025
Abstract
Rapid and green detection of soil nutrients is essential for soil fertility and plant growth. However, traditional methods cannot meet the needs of rapid detection, and the reagents easily cause environmental pollution. Hence, we proposed a multivariable output weighting-network (MW-Net) combined with laser-induced breakdown spectroscopy (LIBS) to achieve rapid and green detection for three soil nutrients. For a better spectral signal-to-background ratio (SBR), the two important parameters of delay time and gate width were optimized. Then, the spectral noise was removed by the near-zero standard deviation method. Three common quantitative models were investigated for single-element prediction, which are usually applied in LIBS analysis. Also, multi-element prediction was investigated using MW-Net. The results showed that MW-Net outperformed other models generally with very good quantification for soil total N and K (the determination coefficients in the prediction set (Rp2) of 0.75 and 0.83 and the relative percent difference in the prediction sets (RPD) of 2.05 and 2.43) and excellent indirect determination for soil exchangeable Ca (Rp2 of 0.93 and RPD of 3.91). Finally, the interpretability was realized through feature extraction from MW-Net, indicating its design rationality. The preliminary results indicated that MW-Net combined with LIBS technology could quantify the three soil nutrients simultaneously, improving the detection efficiency, and it could possibly be deployed on a LIBS portable instrument in the future for precision agriculture. Full article
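The signal-to-background ratio (SBR) the authors optimize via delay time and gate width has a conventional definition that is easy to state in code. A sketch assuming a simple peak-height-over-mean-background convention; the window choices are illustrative, not from the paper.

```python
import numpy as np

def signal_to_background(spectrum, peak_window, background_windows):
    # SBR = (peak height - background) / background, a common figure of
    # merit maximized when tuning LIBS delay time and gate width.
    background = np.mean([spectrum[w].mean() for w in background_windows])
    peak = spectrum[peak_window].max()
    return (peak - background) / background
```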
(This article belongs to the Special Issue Application of Laser-Induced Breakdown Spectroscopy, 2nd Edition)

23 pages, 2939 KB  
Article
ADG-SleepNet: A Symmetry-Aware Multi-Scale Dilation-Gated Temporal Convolutional Network with Adaptive Attention for EEG-Based Sleep Staging
by Hai Sun and Zhanfang Zhao
Symmetry 2025, 17(9), 1461; https://doi.org/10.3390/sym17091461 - 5 Sep 2025
Abstract
The increasing demand for portable health monitoring has highlighted the need for automated sleep staging systems that are both accurate and computationally efficient. However, most existing deep learning models for electroencephalogram (EEG)-based sleep staging suffer from parameter redundancy, fixed dilation rates, and limited generalization, restricting their applicability in real-time and resource-constrained scenarios. In this paper, we propose ADG-SleepNet, a novel lightweight symmetry-aware multi-scale dilation-gated temporal convolutional network enhanced with adaptive attention mechanisms for EEG-based sleep staging. ADG-SleepNet features a structurally symmetric, parallel multi-branch architecture utilizing various dilation rates to comprehensively capture multi-scale temporal patterns in EEG signals. The integration of adaptive gating and channel attention mechanisms enables the network to dynamically adjust the contribution of each branch based on input characteristics, effectively breaking architectural symmetry when necessary to prioritize the most discriminative features. Experimental results on the Sleep-EDF-20 and Sleep-EDF-78 datasets demonstrate that ADG-SleepNet achieves accuracy rates of 87.1% and 85.1%, and macro F1 scores of 84.0% and 81.1%, respectively, outperforming several state-of-the-art lightweight models. These findings highlight the strong generalization ability and practical potential of ADG-SleepNet for EEG-based health monitoring applications. Full article
(This article belongs to the Section Computer)

21 pages, 471 KB  
Review
Long Short-Term Memory Networks: A Comprehensive Survey
by Moez Krichen and Alaeddine Mihoub
AI 2025, 6(9), 215; https://doi.org/10.3390/ai6090215 - 5 Sep 2025
Abstract
Long Short-Term Memory (LSTM) networks have revolutionized the field of deep learning, particularly in applications that require the modeling of sequential data. Originally designed to overcome the limitations of traditional recurrent neural networks (RNNs), LSTMs effectively capture long-range dependencies in sequences, making them suitable for a wide array of tasks. This survey aims to provide a comprehensive overview of LSTM architectures, detailing their unique components, such as cell states and gating mechanisms, which facilitate the retention and modulation of information over time. We delve into the various applications of LSTMs across multiple domains, including the following: natural language processing (NLP), where they are employed for language modeling, machine translation, and sentiment analysis; time series analysis, where they play a critical role in forecasting tasks; and speech recognition, significantly enhancing the accuracy of automated systems. By examining these applications, we illustrate the versatility and robustness of LSTMs in handling complex data types. Additionally, we explore several notable variants and improvements of the standard LSTM architecture, such as Bidirectional LSTMs, which enhance context understanding, and Stacked LSTMs, which increase model capacity. We also discuss the integration of attention mechanisms with LSTMs, which have further advanced their performance in various tasks. Despite their strengths, LSTMs face several challenges, including high computational complexity, extensive data requirements, and difficulties in training, which can hinder their practical implementation. This survey addresses these limitations and provides insights into ongoing research aimed at mitigating these issues. In conclusion, we highlight recent advances in LSTM research and propose potential future directions that could lead to enhanced performance and broader applicability of LSTM networks. This survey serves as a foundational resource for researchers and practitioners seeking to understand the current landscape of LSTM technology and its future trajectory. Full article
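For readers new to the architecture surveyed here, the cell state and the three gates reduce to a handful of equations; a minimal NumPy sketch follows (the stacked-weight layout i/f/g/o is an assumed convention).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: input (i), forget (f), output (o) gates and a
    candidate (g), computed from the current input and previous state."""
    z = W @ x + U @ h_prev + b        # stacked pre-activations, 4*H rows
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])               # input gate
    f = sigmoid(z[H:2 * H])           # forget gate
    g = np.tanh(z[2 * H:3 * H])       # candidate cell contents
    o = sigmoid(z[3 * H:4 * H])       # output gate
    c = f * c_prev + i * g            # cell state: gated memory update
    h = o * np.tanh(c)                # hidden state exposed downstream
    return h, c
```

It is exactly this additive cell-state update that lets gradients flow across long sequences, the property the survey credits for LSTMs' long-range dependency modeling.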

24 pages, 2108 KB  
Article
A Deep Learning Approach on Traffic States Prediction of Freeway Weaving Sections Under Adverse Weather Conditions
by Jing Ma, Jiahao Ma, Mingzhe Zeng, Xiaobin Zou, Qiuyuan Luo, Yiming Zhang and Yan Li
Sustainability 2025, 17(17), 7970; https://doi.org/10.3390/su17177970 - 4 Sep 2025
Abstract
Freeway weaving sections’ states under adverse weather exhibit characteristics of randomness, vulnerability, and abruptness. A deep learning-based model is proposed for traffic state identification and prediction, which can be used to formulate proactive management strategies. According to traffic characteristics under adverse weather, a hybrid model combining Random Forest and an improved k-prototypes algorithm is established to redefine traffic states. Traffic state prediction is accomplished using the Weather Spatiotemporal Graph Convolution Network (WSTGCN) model. WSTGCN decomposes flows into spatiotemporal correlation and temporal variation features, which are learned using spectral graph convolutional networks (GCNs). A Time Squeeze-and-Excitation Network (TSENet) is constructed to extract the influence of weather by incorporating the weather feature matrix. The traffic states are then predicted using a Gated Recurrent Unit (GRU). The proposed models were tested using data under rain, fog, and strong wind conditions from 201 weaving sections on China’s G5 and G55 freeways and the U.S. I-5 and I-80 freeways. The results indicated that the freeway weaving sections’ states under adverse weather can be classified into seven categories. Compared with other baseline models, WSTGCN achieved a 3.8–8.0% reduction in Root Mean Square Error, a 1.0–3.2% increase in Equilibrium Coefficient, and a 1.4–3.1% improvement in Accuracy Rate. Full article

15 pages, 395 KB  
Article
Multimodal Transport Optimization from Doorstep to Airport Using Mixed-Integer Linear Programming and Dynamic Programming
by Evangelos D. Spyrou, Vassilios Kappatos, Maria Gkemou and Evangelos Bekiaris
Sustainability 2025, 17(17), 7937; https://doi.org/10.3390/su17177937 - 3 Sep 2025
Abstract
Efficient multimodal transportation from a passenger’s doorstep to the airport is critical for ensuring timely arrivals, reducing travel uncertainty, and optimizing overall travel experience. However, coordinating different modes of transport—such as walking, public transit, ride-hailing, and private vehicles—poses significant challenges due to varying schedules, traffic conditions, and transfer times. Traditional route planning methods often fail to account for real-time disruptions, leading to delays and inefficiencies. As air travel demand grows, optimizing these multimodal routes becomes increasingly important to minimize delays, improve passenger convenience, and enhance transport system resilience. To address this challenge, we propose an optimization framework combining Mixed-Integer Linear Programming (MILP) and Dynamic Programming (DP) to generate optimal travel routes from a passenger’s location to the airport gate. MILP is used to model and optimize multimodal trip decisions, considering time windows, cost constraints, and transfer dependencies. Meanwhile, DP allows for adaptive, real-time adjustments based on changing conditions such as traffic congestion, transit delays, and service availability. By integrating these two techniques, our approach ensures a robust, efficient, and scalable solution for multimodal transport routing, ultimately enhancing reliability and reducing travel time variability. The results demonstrate that the MILP solver converges within 20 iterations, reducing the objective value from 15.2 to 7.1 units with an optimality gap of 8.5%; the DP-based adaptation maintains feasibility under a 2 min disruption; and the multimodal analysis yields a total travel time of 9.0 min with a fare of 3.0 units, where the bus segment accounts for 6.5 min and 2.2 units of the total. 
In the multimodal transport evaluation, DP adaptation reduced cumulative delays by more than half after disruptions, while route selection demonstrated balanced trade-offs between cost and time across walking, bus, and train segments. Full article
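The DP side of the approach described above can be illustrated with a minimal sketch. This is not the authors' implementation: the legs, mode names, and (time, fare) numbers below are illustrative assumptions loosely echoing the abstract's reported figures. The sketch keeps, per trip leg, the best total travel time for each feasible cumulative fare, so the remaining route can be re-planned from any leg onward when a disruption is observed.

```python
# Hypothetical DP sketch for multimodal routing under a fare budget.
# Each leg offers several (mode, time, fare) options; the DP tracks, for
# every reachable cumulative fare, the minimum total time and chosen modes.

def plan_route(legs, fare_budget):
    """legs: list of lists of (mode, time, fare); returns (time, fare, modes) or None."""
    # states: cumulative fare -> (total time, modes chosen so far)
    states = {0.0: (0.0, [])}
    for options in legs:
        nxt = {}
        for fare, (time, modes) in states.items():
            for mode, t, f in options:
                nf = round(fare + f, 2)
                if nf > fare_budget:
                    continue  # prune fare-infeasible branches
                cand = (time + t, modes + [mode])
                if nf not in nxt or cand[0] < nxt[nf][0]:
                    nxt[nf] = cand
        states = nxt
    if not states:
        return None  # no route within the fare budget
    fare, (time, modes) = min(states.items(), key=lambda kv: kv[1][0])
    return time, fare, modes

# Illustrative legs (assumed data, not the paper's): a first-mile choice,
# then a main transit choice whose bus option mirrors the abstract's
# 6.5 min / 2.2 units bus segment.
legs = [
    [("walk", 2.5, 0.0), ("ride-hail", 1.0, 4.0)],
    [("bus", 6.5, 2.2), ("train", 5.0, 3.5)],
]
print(plan_route(legs, fare_budget=3.0))  # → (9.0, 2.2, ['walk', 'bus'])
```

Under a tight budget the cheap walk-plus-bus route wins; relaxing the budget lets the DP trade fare for time (ride-hail plus train), which is the cost/time trade-off the evaluation describes.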
11 pages, 480 KB  
Article
Calcium Hides the Clue: Unraveling the Diagnostic Value of Coronary Calcium Scoring in Cardiac Arrest Survivors
by Ana Margarida Martins, Joana Rigueira, Beatriz Valente Silva, Beatriz Nogueira Garcia, Pedro Alves da Silva, Ana Abrantes, Rui Plácido, Doroteia Silva, Fausto J. Pinto and Ana G. Almeida
J. Pers. Med. 2025, 15(9), 422; https://doi.org/10.3390/jpm15090422 - 3 Sep 2025
Abstract
Introduction: Coronary artery disease remains one of the most prevalent causes of out-of-hospital cardiac arrest (OHCA). Although the benefit of early coronary angiography is well established in patients with ST-segment elevation, the benefit and the timing of performing it in other patients [...] Read more.
Introduction: Coronary artery disease remains one of the most prevalent causes of out-of-hospital cardiac arrest (OHCA). Although the benefit of early coronary angiography is well established in patients with ST-segment elevation, the benefit and the timing of performing it in other patients remain a matter of debate. This is due to the difficulty of identifying those in whom a non-ST-segment elevation infarction is the cause of the OHCA. Coronary artery calcium (CAC) emerges as a reliable predictor of coronary disease and adverse cardiovascular events, detectable even in the non-gated chest computed tomography (CT) scans commonly used in OHCA etiological studies, showcasing potential for streamlined risk assessment and management. Aim: The aim of this study was to evaluate whether CAC in non-gated CT scans performed in OHCA survivors could act as a good predictor of coronary artery disease on coronary angiography. Methods: This is a single-center, retrospective study of OHCA survivors without ST-segment elevation at presentation. We selected patients who underwent both a non-gated chest CT and coronary angiography due to clinical, electrocardiographic (ECG), or echocardiographic suspicion of acute coronary syndrome. An investigator, blinded to the coronary angiography report, evaluated CAC both quantitatively (with the Agatston score) and qualitatively (visual assessment: absent, mild, moderate, or severe). Results: A total of 44 consecutive patients were included: 70% male, mean age 60 ± 13 years. The mean Agatston score was 396 ± 573 AU (Agatston units). Regarding the qualitative assessment, CAC was classified as mild, moderate, and severe in 11%, 25%, and 20% of patients, respectively. Coronary angiography revealed significant coronary lesions in 15 patients (34%), of whom 87% were revascularized (80% underwent PCI and 7% CABG). 
The quantitative CAC assessment accurately predicted the presence of significant lesions on coronary angiography (AUC = 0.90, 95% CI 0.81–0.99, p &lt; 0.001). The presence of moderate or severe CAC on visual assessment also predicted significant lesions on coronary angiography (OR 2.66, 95% CI 1.87–109.71, p = 0.01). There was also a good and significant correlation between the vessel with severe calcification on the CT scan and the culprit vessel identified by coronary angiography. CAC was reported in only 16% of the reviewed CTs, most of them with severe calcification. Conclusion: The assessment of CAC in non-gated chest CT scans proved to be feasible and displayed a robust correlation with the presence, severity, and location of coronary artery disease. Its routine use upfront was shown to be an important complement to CT scan reports, ensuring more precise and personalized OHCA management. Full article
(This article belongs to the Special Issue State of the Art in Cardiac Imaging)
