Search Results (1,193)

Search Parameters:
Keywords = DCNN

27 pages, 6909 KB  
Article
Comparative Analysis of Deep Learning and Traditional Methods for High-Resolution Cropland Extraction with Different Training Data Characteristics
by Dujuan Zhang, Xiufang Zhu, Yaozhong Pan, Hengliang Guo, Qiannan Li and Haitao Wei
Land 2025, 14(10), 2038; https://doi.org/10.3390/land14102038 - 13 Oct 2025
Abstract
High-resolution remote sensing (HRRS) imagery enables the extraction of cropland information with high levels of detail, especially when combined with the impressive performance of deep convolutional neural networks (DCNNs) in understanding these images. Comprehending the factors influencing DCNNs’ performance in HRRS cropland extraction is of considerable importance for practical agricultural monitoring applications. This study investigates the impact of classifier selection and different training data characteristics on the HRRS cropland classification outcomes. Specifically, Gaofen-1 composite images with 2 m spatial resolution are employed for HRRS cropland extraction, and two county-wide regions with distinct agricultural landscapes in Shandong Province, China, are selected as the study areas. The performance of two deep learning (DL) algorithms (UNet and DeepLabv3+) and a traditional classification algorithm, Object-Based Image Analysis with Random Forest (OBIA-RF), is compared. Additionally, the effects of different band combinations, crop growth stages, and class mislabeling on the classification accuracy are evaluated. The results demonstrated that the UNet and DeepLabv3+ models outperformed OBIA-RF in both simple and complex agricultural landscapes, and were insensitive to the changes in band combinations, indicating their ability to learn abstract features and contextual semantic information for HRRS cropland extraction. Moreover, compared with the DL models, OBIA-RF was more sensitive to changes in the temporal characteristics. The performance of all three models was unaffected when the mislabeling error ratio remained below 5%. Beyond this threshold, the performance of all models decreased, with UNet and DeepLabv3+ showing similar performance decline trends and OBIA-RF suffering a more drastic reduction. Furthermore, the DL models exhibited relatively low sensitivity to the patch size of sample blocks and data augmentation. 
These findings can facilitate the design of operational implementations for practical applications.
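The 5% mislabeling threshold reported above can be mimicked in a toy experiment by flipping a fixed fraction of training labels. A minimal stdlib sketch (function and variable names are hypothetical, not the authors' code):

```python
import random

def inject_label_noise(labels, ratio, num_classes, seed=0):
    """Flip `ratio` of the labels to a different, randomly chosen class."""
    rng = random.Random(seed)
    noisy = list(labels)
    n_flip = int(len(labels) * ratio)
    for i in rng.sample(range(len(labels)), n_flip):
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy

clean = [0, 1] * 50          # 100 toy binary labels (cropland vs. background)
noisy = inject_label_noise(clean, ratio=0.05, num_classes=2)
changed = sum(a != b for a, b in zip(clean, noisy))
print(changed)               # 5 labels flipped at the 5% threshold
```

Training the same model on `clean` and on `noisy` at increasing ratios would reproduce the kind of degradation curve the study reports.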

15 pages, 2736 KB  
Article
Exploring the Hyperspectral Response of Quercetin in Anoectochilus roxburghii (Wall.) Lindl. Using Standard Fingerprints and Band-Specific Feature Analysis
by Ziyuan Liu, Haoyuan Ding, Sijia Zhao, Hongzhen Wang and Yiqing Xu
Plants 2025, 14(20), 3141; https://doi.org/10.3390/plants14203141 - 11 Oct 2025
Abstract
Quercetin, a key flavonoid in Anoectochilus roxburghii (Wall.) Lindl., plays an important role in determining the pharmacological value of this medicinal herb. However, traditional methods for quercetin quantification are destructive and time-consuming, limiting their application in real-time quality monitoring. This study investigates the hyperspectral response characteristics of quercetin using near-infrared hyperspectral imaging and establishes a feature-based model to explore its detectability in A. roxburghii leaves. We scanned standard quercetin solutions of known concentration under the same imaging conditions as the leaves to produce a dilution series. Feature-selection methods used included the successive projections algorithm (SPA), Pearson correlation, and competitive adaptive reweighted sampling (CARS). A 1D convolutional neural network (1D-CNN) trained on SPA-selected wavelengths yielded the best prediction performance. These key wavelengths—particularly the 923 nm band—showed strong theoretical and statistical relevance to quercetin’s molecular absorption. When applied to plant leaf spectra, the standard-trained model produced continuous predicted quercetin values that effectively distinguished cultivars with varying flavonoid contents. PCA visualization and ROC-based classification confirmed spectral transferability and potential for functional evaluation. This study demonstrates a non-destructive, spatially resolved, and biochemically interpretable strategy for identifying bioactive markers in plant tissues, offering a methodological basis for future hyperspectral inversion studies and intelligent quality assessment in herbal medicine.
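Of the feature-selection methods listed, Pearson correlation is the simplest to illustrate: score each band by |r| against the concentration series and keep the top-scoring bands. A toy stdlib sketch with made-up numbers (not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# rows = samples, columns = 3 toy wavelength bands; conc = quercetin levels
bands = [[0.10, 0.50, 0.90],
         [0.20, 0.40, 0.70],
         [0.30, 0.60, 0.50],
         [0.40, 0.70, 0.35]]
conc = [1.0, 2.0, 3.0, 4.0]
scores = [abs(pearson([row[j] for row in bands], conc)) for j in range(3)]
best = max(range(3), key=lambda j: scores[j])  # band 0 tracks conc exactly
```

SPA and CARS pursue the same goal (a small, informative wavelength subset) with more elaborate search strategies.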

21 pages, 2372 KB  
Article
IDG-ViolenceNet: A Video Violence Detection Model Integrating Identity-Aware Graphs and 3D-CNN
by Hong Huang and Qingping Jiang
Sensors 2025, 25(20), 6272; https://doi.org/10.3390/s25206272 - 10 Oct 2025
Abstract
Video violence detection plays a crucial role in intelligent surveillance and public safety, yet existing methods still face challenges in modeling complex multi-person interactions. To address this, we propose IDG-ViolenceNet, a dual-stream video violence detection model that integrates identity-aware spatiotemporal graphs with three-dimensional convolutional neural networks (3D-CNN). Specifically, the model utilizes YOLOv11 for high-precision person detection and cross-frame identity tracking, constructing a dynamic spatiotemporal graph that encodes spatial proximity, temporal continuity, and individual identity information. On this basis, a GINEConv branch extracts structured interaction features, while an R3D-18 branch models local spatiotemporal patterns. The two representations are fused in a dedicated module for cross-modal feature integration. Experimental results show that IDG-ViolenceNet achieves accuracies of 97.5%, 99.5%, and 89.4% on the Hockey Fight, Movies Fight, and RWF-2000 datasets, respectively, significantly outperforming state-of-the-art methods. Additionally, ablation studies validate the contributions of key components in improving detection accuracy and robustness.
(This article belongs to the Section Sensing and Imaging)
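The spatial-proximity edges of such an identity-aware graph reduce to a distance test between detected person centres; a toy stdlib sketch (threshold and coordinates are illustrative, not the paper's construction):

```python
def proximity_edges(centers, max_dist):
    """Undirected edges between detections whose centres are close enough."""
    edges = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist:
                edges.append((i, j))
    return edges

# toy person centres in one frame (pixels)
centers = [(10.0, 10.0), (14.0, 13.0), (200.0, 200.0)]
edges = proximity_edges(centers, max_dist=50.0)
print(edges)  # [(0, 1)] — only the two nearby people interact
```

In the full model, such edges would additionally carry temporal-continuity and identity attributes before being consumed by the GINEConv branch.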

13 pages, 2381 KB  
Article
DCNN–Transformer Hybrid Network for Robust Feature Extraction in FMCW LiDAR Ranging
by Wenhao Xu, Pansong Zhang, Guohui Yuan, Shichang Xu, Longfei Li, Junxiang Zhang, Longfei Li, Tianyu Li and Zhuoran Wang
Photonics 2025, 12(10), 995; https://doi.org/10.3390/photonics12100995 - 10 Oct 2025
Abstract
Frequency-Modulated Continuous-Wave (FMCW) Laser Detection and Ranging (LiDAR) systems are widely used due to their high accuracy and resolution. Nevertheless, conventional distance extraction methods often lack robustness in noisy and complex environments. To address this limitation, we propose a deep learning-based signal extraction framework that integrates a Dual Convolutional Neural Network (DCNN) with a Transformer model. The DCNN extracts multi-scale spatial features through multi-layer and pointwise convolutions, while the Transformer employs a self-attention mechanism to capture global temporal dependencies of the beat-frequency signals. The proposed DCNN–Transformer network is evaluated through beat-frequency signal inversion experiments across distances ranging from 3 m to 40 m. The experimental results show that the method achieves a mean absolute error (MAE) of 4.1 mm and a root-mean-square error (RMSE) of 3.08 mm. These results demonstrate that the proposed approach provides stable and accurate predictions, with strong generalization ability and robustness for FMCW LiDAR systems.
(This article belongs to the Section Optical Interaction Science)
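For context, the standard sawtooth-FMCW relation that maps a beat frequency to range is R = c·f_b·T/(2B). The sketch below applies it with illustrative sweep parameters (not the paper's hardware):

```python
C = 299_792_458.0  # speed of light, m/s

def beat_to_range(f_beat_hz, sweep_bandwidth_hz, sweep_time_s):
    """Standard sawtooth-FMCW relation: R = c * f_b * T / (2 * B)."""
    return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

# toy sweep: 1 GHz bandwidth over 1 ms; a 200 kHz beat tone
r = beat_to_range(200e3, 1e9, 1e-3)
print(round(r, 3))  # 29.979 m, inside the paper's 3-40 m span
```

The network in the paper effectively learns to estimate f_b (and hence R) robustly from noisy beat signals rather than reading it off a spectral peak.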

26 pages, 12809 KB  
Article
Coating Thickness Estimation Using a CNN-Enhanced Ultrasound Echo-Based Deconvolution
by Marina Perez-Diego, Upeksha Chathurani Thibbotuwa, Ainhoa Cortés and Andoni Irizar
Sensors 2025, 25(19), 6234; https://doi.org/10.3390/s25196234 - 8 Oct 2025
Abstract
Coating degradation monitoring is increasingly important in offshore industries, where protective layers ensure corrosion prevention and structural integrity. In this context, coating thickness estimation provides critical information. The ultrasound pulse-echo technique is widely used for non-destructive testing (NDT), but closely spaced acoustic interfaces often produce overlapping echoes, which complicates detection and accurate isolation of each layer’s thickness. In this study, analysis of the pulse-echo signal from a coated sample has shown that the front-coating reflection affects each main backwall echo differently; by comparing two consecutive backwall echoes, we can cancel the acquisition system’s impulse response and isolate the propagation path-related information between the echoes. This work introduces an ultrasound echo-based methodology for estimating coating thickness by first obtaining the impulse response of the test medium (reflectivity sequence) through a deconvolution model, developed using two consecutive backwall echoes. This is followed by an enhanced detection of coating layer thickness in the reflectivity function using a 1D convolutional neural network (1D-CNN) trained with synthetic signals obtained from finite-difference time-domain (FDTD) simulations with the k-Wave MATLAB toolbox (v1.4.0). The proposed approach estimates the front-side coating thickness in steel samples coated on both sides, with coating layers ranging from 60 μm to 740 μm applied over 5 mm substrates and under varying coating and steel properties. The minimum detectable thickness corresponds to approximately λ/5 for an 8 MHz ultrasonic transducer. On synthetic signals, where the true coating thickness and speed of sound are known, the model achieves an accuracy of approximately 8 μm. These findings highlight the strong potential of the model for reliably monitoring relative thickness changes across a wide range of coatings in real samples.
(This article belongs to the Special Issue Nondestructive Sensing and Imaging in Ultrasound—Second Edition)
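Once a reflectivity sequence has been recovered by deconvolution, thickness follows from the delay between the two interface spikes via d = v·Δt/2 (two-way travel). A stdlib sketch with an assumed sampling rate and coating sound speed (both hypothetical, not the paper's values):

```python
def thickness_from_reflectivity(refl, fs_hz, v_m_s):
    """Locate the two strongest reflectivity spikes and convert their
    sample spacing into a layer thickness (two-way path -> divide by 2)."""
    idx = sorted(range(len(refl)), key=lambda i: abs(refl[i]), reverse=True)[:2]
    dt = abs(idx[0] - idx[1]) / fs_hz          # inter-spike delay, seconds
    return v_m_s * dt / 2.0

refl = [0.0] * 64
refl[10], refl[22] = 1.0, -0.6                 # coating front/back interfaces
d = thickness_from_reflectivity(refl, fs_hz=100e6, v_m_s=2400.0)
print(d * 1e6)                                 # 144.0 micrometres
```

The paper's 1D-CNN replaces the naive peak-picking step here, which fails once echoes overlap or the reflectivity estimate is noisy.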

21 pages, 4623 KB  
Article
Combining Neural Architecture Search and Weight Reshaping for Optimized Embedded Classifiers in Multisensory Glove
by Hiba Al Youssef, Sara Awada, Mohamad Raad, Maurizio Valle and Ali Ibrahim
Sensors 2025, 25(19), 6142; https://doi.org/10.3390/s25196142 - 4 Oct 2025
Abstract
Intelligent sensing systems are increasingly used in wearable devices, enabling advanced tasks across various application domains including robotics and human–machine interaction. Energy autonomy is in high demand for these systems, despite strict constraints on power, memory and processing resources. To meet these requirements, embedded neural networks must be optimized to achieve a balance between accuracy and efficiency. This paper presents an integrated approach that combines Hardware-Aware Neural Architecture Search (HW-NAS) with optimization techniques—weight reshaping, quantization, and their combination—to develop efficient classifiers for a multisensory glove. HW-NAS automatically derives 1D-CNN models tailored to the NUCLEO-F401RE board, while the additional optimization further reduces model size, memory usage, and latency. Across three datasets, the optimized models not only improve classification accuracy but also deliver an average reduction of 75% in inference time, 69% in flash memory, and more than 45% in RAM compared to NAS-only baselines. These results highlight the effectiveness of integrating NAS with optimization techniques, paving the way towards energy-autonomous wearable systems.
(This article belongs to the Special Issue Feature Papers in Smart Sensing and Intelligent Sensors 2025)
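Of the optimization techniques combined with HW-NAS above, post-training quantization is the easiest to sketch: symmetric per-tensor int8 quantization stores each float weight as scale·q with q in [-127, 127], cutting storage 4x. A generic illustration, not the paper's exact scheme:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [scale * v for v in q]

w = [0.12, -0.3, 0.06, 0.0]                    # toy float32 weights
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))  # bounded by ~scale/2
```

Weight reshaping and quantization compose naturally, which is why the paper evaluates their combination as well.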

20 pages, 5116 KB  
Article
Design of Portable Water Quality Spectral Detector and Study on Nitrogen Estimation Model in Water
by Hongfei Lu, Hao Zhou, Renyong Cao, Delin Shi, Chao Xu, Fangfang Bai, Yang Han, Song Liu, Minye Wang and Bo Zhen
Processes 2025, 13(10), 3161; https://doi.org/10.3390/pr13103161 - 3 Oct 2025
Abstract
A portable spectral detector for water quality assessment was developed, utilizing potassium nitrate and ammonium chloride standard solutions as the subjects of investigation. By preparing solutions with differing concentrations, spectral data ranging from 254 to 1275 nm was collected and subsequently preprocessed using methods such as multiple scattering correction (MSC), Savitzky–Golay filtering (SG), and standardization (SS). Estimation models were constructed employing modeling algorithms including Support Vector Machine-Multilayer Perceptron (SVM-MLP), Support Vector Regression (SVR), random forest (RF), RF-Lasso, and partial least squares regression (PLSR). The research revealed that the primary variation bands for NH4+ and NO3− are concentrated within the 254–550 nm and 950–1275 nm ranges, respectively. For predicting ammonium chloride, the optimal model was found to be the SVM-MLP model, which utilized spectral data reduced to 400 feature bands after SS processing, achieving R2 and RMSE of 0.8876 and 0.0883, respectively. For predicting potassium nitrate, the optimal model was the 1D Convolutional Neural Network (1DCNN) model applied to the full band of spectral data after SS processing, with R2 and RMSE of 0.7758 and 0.1469, respectively. This study offers both theoretical and technical support for the practical implementation of spectral technology in rapid water quality monitoring.
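Multiple scattering correction (MSC), one of the preprocessing steps named above, regresses each spectrum against the mean spectrum and removes the fitted offset and gain. A stdlib sketch on toy spectra (not the study's data):

```python
def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    n_bands = len(spectra[0])
    mean = [sum(s[j] for s in spectra) / len(spectra) for j in range(n_bands)]
    m_bar = sum(mean) / n_bands
    corrected = []
    for s in spectra:
        s_bar = sum(s) / n_bands
        num = sum((m - m_bar) * (x - s_bar) for m, x in zip(mean, s))
        den = sum((m - m_bar) ** 2 for m in mean)
        slope = num / den                      # regression of s on the mean
        intercept = s_bar - slope * m_bar
        corrected.append([(x - intercept) / slope for x in s])
    return corrected

# two spectra that are scaled/offset copies of the same underlying shape
base = [0.1, 0.4, 0.9, 0.4, 0.1]
spectra = [base, [2 * x + 0.05 for x in base]]
out = msc(spectra)                             # both collapse onto the mean
```

After correction both toy spectra coincide, which is exactly the scatter-removal effect MSC is used for before model fitting.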

18 pages, 3177 KB  
Article
Ground Type Classification for Hexapod Robots Using Foot-Mounted Force Sensors
by Yong Liu, Rui Sun, Xianguo Tuo, Tiantao Sun and Tao Huang
Machines 2025, 13(10), 900; https://doi.org/10.3390/machines13100900 - 1 Oct 2025
Abstract
In field exploration, disaster rescue, and complex terrain operations, the accuracy of ground type recognition directly affects the walking stability and task execution efficiency of legged robots. To address the problem of terrain recognition in complex ground environments, this paper proposes a high-precision classification method based on single-leg triaxial force signals. The method first employs a one-dimensional convolutional neural network (1D-CNN) module to extract local temporal features, then introduces a long short-term memory (LSTM) network to model long-term and short-term dependencies during ground contact, and incorporates a convolutional block attention module (CBAM) to adaptively enhance the feature responses of critical channels and time steps, thereby improving discriminative capability. In addition, an improved whale optimization algorithm (iBWOA) is adopted to automatically perform global search and optimization of key hyperparameters, including the number of convolution kernels, the number of LSTM units, and the dropout rate, to achieve the optimal training configuration. Experimental results demonstrate that the proposed method achieves excellent classification performance on five typical ground types—grass, cement, gravel, soil, and sand—under varying slope and force conditions, with an overall classification accuracy of 96.94%. Notably, it maintains high recognition accuracy even between ground types with similar contact mechanical properties, such as soil vs. grass and gravel vs. sand. This study provides a reliable perception foundation and technical support for terrain-adaptive control and motion strategy optimization of legged robots in real-world environments.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
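The 1D-CNN stage above slides learned kernels along the force signal to pick out local temporal features; with a hand-picked edge kernel the core operation reduces to a few lines (toy signal, not the robot's data):

```python
def conv1d_valid(signal, kernel):
    """'Valid' 1D convolution (cross-correlation form, as in a 1D-CNN layer)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

# toy vertical-force channel: a step at the moment of ground contact
fz = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
edge_kernel = [-1.0, 0.0, 1.0]          # responds to the contact onset
feat = relu(conv1d_valid(fz, edge_kernel))
print(feat)                             # [0.0, 1.0, 1.0, 0.0]
```

In the full model the kernel weights are learned, and the resulting feature maps feed the LSTM and CBAM stages.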

21 pages, 4285 KB  
Article
Spatiotemporal Modeling and Intelligent Recognition of Sow Estrus Behavior for Precision Livestock Farming
by Kaidong Lei, Bugao Li, Hua Yang, Hao Wang, Di Wang and Benhai Xiong
Animals 2025, 15(19), 2868; https://doi.org/10.3390/ani15192868 - 30 Sep 2025
Abstract
Accurate recognition of estrus behavior in sows is of great importance for achieving scientific breeding management, improving reproductive efficiency, and reducing labor costs in modern pig farms. However, due to the evident spatiotemporal continuity, stage-specific changes, and ambiguous category boundaries of estrus behaviors, traditional methods based on static images or manual observation suffer from low efficiency and high misjudgment rates in practical applications. To address these issues, this study follows a video-based behavior recognition approach and designs three deep learning model structures: CNN + LSTM (Convolutional Neural Network combined with Long Short-Term Memory), 3D-CNN (Three-Dimensional Convolutional Neural Network), and CNN + TCN (Convolutional Neural Network combined with Temporal Convolutional Network), aiming to achieve high-precision recognition and classification of four key behaviors (SOB, SOC, SOS, SOW) during the estrus process in sows. In terms of data processing, a sliding window strategy was adopted to slice the annotated video sequences, constructing image sequence samples with uniform length. The training, validation, and test sets were divided in a 6:2:2 ratio, ensuring balanced distribution of behavior categories. During model training and evaluation, a systematic comparative analysis was conducted from multiple aspects, including loss function variation (Loss), accuracy, precision, recall, F1-score, confusion matrix, and ROC-AUC curves. Experimental results show that the CNN + TCN model performed best overall, with validation accuracy exceeding 0.98, F1-score approaching 1.0, and an average AUC value of 0.9988, demonstrating excellent recognition accuracy and generalization ability. The 3D-CNN model performed well in recognizing short-term dynamic behaviors (such as SOC), achieving a validation F1-score of 0.91 and an AUC of 0.770, making it suitable for high-frequency, short-duration behavior recognition. The CNN + LSTM model exhibited good robustness in handling long-duration static behaviors (such as SOB and SOS), with a validation accuracy of 0.99 and an AUC of 0.9965. In addition, this study further developed an intelligent recognition system with front-end visualization, result feedback, and user interaction functions, enabling local deployment and real-time application of the model in farming environments, thus providing practical technical support for the digitalization and intelligentization of reproductive management in large-scale pig farms.
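The sliding-window slicing and 6:2:2 split described above can be sketched directly (window length and stride are illustrative, not the study's settings):

```python
def sliding_windows(frames, win, stride):
    """Slice a frame sequence into fixed-length clips."""
    return [frames[i:i + win]
            for i in range(0, len(frames) - win + 1, stride)]

def split_622(samples):
    """6:2:2 train/val/test split, preserving order."""
    n = len(samples)
    n_train, n_val = int(n * 0.6), int(n * 0.2)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

frames = list(range(100))                  # stand-in for annotated video frames
clips = sliding_windows(frames, win=16, stride=8)
train, val, test = split_622(clips)
print(len(clips), len(train), len(val), len(test))  # 11 6 2 3
```

With 11 clips the integer split lands on 6/2/3; in practice the split would also be stratified so each behavior class stays balanced, as the abstract notes.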

28 pages, 6039 KB  
Article
Detection and Classification of Unhealthy Heartbeats Using Deep Learning Techniques
by Abdullah M. Albarrak, Raneem Alharbi and Ibrahim A. Ibrahim
Sensors 2025, 25(19), 5976; https://doi.org/10.3390/s25195976 - 26 Sep 2025
Abstract
Arrhythmias are a common and potentially life-threatening category of cardiac disorders, making accurate and early detection crucial for improving clinical outcomes. Electrocardiograms are widely used to monitor heart rhythms, yet their manual interpretation remains prone to inconsistencies due to the complexity of the signals. This research investigates the effectiveness of machine learning and deep learning techniques for automated arrhythmia classification using ECG signals from the MIT-BIH dataset. We compared Gradient Boosting Machine (GBM) and Multilayer Perceptron (MLP) as traditional machine learning models with a hybrid deep learning model combining one-dimensional convolutional neural networks (1D-CNNs) and long short-term memory (LSTM) networks. Furthermore, the Grey Wolf Optimizer (GWO) was utilized to automatically optimize the hyperparameters of the 1D-CNN-LSTM model, enhancing its performance. Experimental results show that the proposed 1D-CNN-LSTM model achieved the highest accuracy of 97%, outperforming both classical machine learning and other deep learning baselines. The classification report and confusion matrix confirm the model’s robustness in identifying various arrhythmia types. These findings emphasize the possible benefits of integrating metaheuristic optimization with hybrid deep learning.
(This article belongs to the Special Issue Sensors Technology and Application in ECG Signal Processing)
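The reported accuracy and confusion matrix boil down to simple counting over predicted versus true beat labels; a stdlib sketch with toy classes (illustrative, not the MIT-BIH label set):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(m):
    total = sum(sum(row) for row in m)
    return sum(m[i][i] for i in range(len(m))) / total

# toy beat labels: 0 = normal, 1 = premature beat, 2 = other
y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]
y_pred = [0, 0, 1, 2, 2, 2, 0, 1, 2, 1]
cm = confusion_matrix(y_true, y_pred, 3)
print(accuracy(cm))   # 0.8
```

Off-diagonal cells (here `cm[1][2]` and `cm[0][1]`) localize exactly which arrhythmia classes the model confuses, which is why the paper reports the full matrix alongside accuracy.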

16 pages, 1620 KB  
Article
An Attention-Driven Hybrid Deep Network for Short-Term Electricity Load Forecasting in Smart Grid
by Jinxing Wang, Sihui Xue, Liang Lin, Benying Tan and Huakun Huang
Mathematics 2025, 13(19), 3091; https://doi.org/10.3390/math13193091 - 26 Sep 2025
Abstract
With the large-scale development of smart grids and the integration of renewable energy, the operational complexity and load volatility of power systems have increased significantly, placing higher demands on the accuracy and timeliness of electricity load forecasting. However, existing methods struggle to capture the nonlinear and volatile characteristics of load sequences, often exhibiting insufficient fitting and poor generalization in peak and abrupt change scenarios. To address these challenges, this paper proposes a deep learning model named CGA-LoadNet, which integrates a one-dimensional convolutional neural network (1D-CNN), gated recurrent units (GRUs), and a self-attention mechanism. The model is capable of simultaneously extracting local temporal features and long-term dependencies. To validate its effectiveness, we conducted experiments on a publicly available electricity load dataset. The experimental results demonstrate that CGA-LoadNet significantly outperforms baseline models, achieving the best performance on key metrics with an R2 of 0.993, RMSE of 18.44, MAE of 13.94, and MAPE of 1.72, thereby confirming the effectiveness and practical potential of its architectural design. Overall, CGA-LoadNet more accurately fits actual load curves, particularly in complex regions, such as load peaks and abrupt changes, providing an efficient and robust solution for short-term load forecasting in smart grid scenarios.
(This article belongs to the Special Issue AI, Machine Learning and Optimization)
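Among the metrics quoted, MAPE is the mean of |(y − ŷ)/y| expressed in percent; a quick sketch with toy load values (not the study's dataset):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (y_true must be non-zero)."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

loads = [100.0, 120.0, 80.0, 150.0]      # toy hourly loads (MW)
preds = [102.0, 118.0, 82.0, 147.0]
err_pct = mape(loads, preds)
print(round(err_pct, 3))                 # 2.042
```

A MAPE of 1.72 as reported above therefore means the forecasts deviate from the actual load by under 2% on average.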

23 pages, 11596 KB  
Article
Combined Hyperspectral Imaging with Wavelet Domain Multivariate Feature Fusion Network for Bioactive Compound Prediction of Astragalus membranaceus var. mongholicus
by Suning She, Zhiyun Xiao and Yulong Zhou
Agriculture 2025, 15(19), 2009; https://doi.org/10.3390/agriculture15192009 - 25 Sep 2025
Abstract
The pharmacological quality of Astragalus membranaceus var. mongholicus (AMM) is determined by its bioactive compounds, and developing a rapid prediction method is essential for quality assessment. This study proposes a predictive model for AMM bioactive compounds using hyperspectral imaging (HSI) and wavelet domain multivariate features. The model employs techniques such as the first-order derivative (FD) algorithm and the continuum removal (CR) algorithm for initial feature extraction. Unlike existing models that primarily focus on a single-feature extraction algorithm, the proposed tree-structured feature extraction module based on discrete wavelet transform and one-dimensional convolutional neural network (1D-CNN) integrates FD and CR, enabling robust multivariate feature extraction. Subsequently, the multivariate feature cross-fusion module is introduced to implement multivariate feature interaction, facilitating mutual enhancement between high- and low-frequency features through hierarchical recombination. Additionally, a multi-objective prediction mechanism is proposed to simultaneously predict the contents of flavonoids, saponins, and polysaccharides in AMM, effectively leveraging the enhanced, recombined spectral features. During testing, the model achieved excellent predictive performance with R2 values of 0.981 for flavonoids, 0.992 for saponins, and 0.992 for polysaccharides. The corresponding RMSE values were 0.37, 0.04, and 0.86; RPD values reached 7.30, 10.97, and 11.16; while MAE values were 0.14, 0.02, and 0.38, respectively. These results demonstrate that integrating multivariate features extracted through diverse methods with 1D-CNN enables efficient prediction of AMM bioactive compounds using HSI.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
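The discrete wavelet transform at the heart of the module above splits a spectrum into low-frequency (approximation) and high-frequency (detail) parts. One level of the Haar DWT, used here as a simple stand-in for whichever wavelet basis the authors chose:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT (assumes an even-length input)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

x = [4.0, 4.0, 2.0, 6.0]       # toy reflectance values
low, high = haar_dwt(x)        # smooth trend vs. sharp band-to-band change
```

The paper's cross-fusion module then lets the `low` and `high` branches reinforce one another before the 1D-CNN regression heads.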

34 pages, 11521 KB  
Article
Explainable AI-Driven 1D-CNN with Efficient Wireless Communication System Integration for Multimodal Diabetes Prediction
by Radwa Ahmed Osman
AI 2025, 6(10), 243; https://doi.org/10.3390/ai6100243 - 25 Sep 2025
Abstract
The early detection of diabetes risk and effective management of patient data are critical for avoiding serious consequences and improving treatment success. This research describes a two-part architecture that combines an energy-efficient wireless communication technology with an interpretable deep learning model for diabetes categorization. In Phase 1, a unique wireless communication model is created to assure the accurate transfer of real-time patient data from wearable devices to medical centers. Using Lagrange optimization, the model identifies the best transmission distance and power needs, lowering energy usage while preserving communication dependability. This contribution is especially essential since effective data transport is a necessary condition for continuous monitoring in large-scale healthcare systems. In Phase 2, the transmitted multimodal clinical, genetic, and lifestyle data are evaluated using a one-dimensional Convolutional Neural Network (1D-CNN) with Bayesian hyperparameter tuning. The model beat traditional deep learning architectures like LSTM and GRU. To improve interpretability and clinical acceptance, SHAP and LIME were used to find global and patient-specific predictors. This approach tackles technological and medicinal difficulties by integrating energy-efficient wireless communication with interpretable predictive modeling. The system ensures dependable data transfer, strong predictive performance, and transparent decision support, boosting trust in AI-assisted healthcare and enabling individualized diabetes control.

23 pages, 3115 KB  
Article
Deep Learning-Based Prediction of Multi-Species Leaf Pigment Content Using Hyperspectral Reflectance
by Ziyu Wang and Duanyang Xu
Remote Sens. 2025, 17(19), 3293; https://doi.org/10.3390/rs17193293 - 25 Sep 2025
Viewed by 354
Abstract
Leaf pigment composition and concentration are crucial indicators of plant physiological status, photosynthetic capacity, and overall ecosystem health. While spectroscopy techniques show promise for monitoring vegetation growth, phenology, and stress, accurately estimating leaf pigments remains challenging due to the complex reflectance properties across [...] Read more.
Leaf pigment composition and concentration are crucial indicators of plant physiological status, photosynthetic capacity, and overall ecosystem health. While spectroscopy techniques show promise for monitoring vegetation growth, phenology, and stress, accurately estimating leaf pigments remains challenging due to the complex reflectance properties across diverse tree species. This study introduces a novel approach using a two-dimensional convolutional neural network (2D-CNN) coupled with a genetic algorithm (GA) to predict leaf pigment content, including chlorophyll a and b content (Cab), carotenoid content (Car), and anthocyanin content (Canth). Leaf reflectance and biochemical content measurements taken from 28 tree species were used in this study. The reflectance spectra ranging from 400 nm to 800 nm were encoded as 2D matrices with different sizes to train the 2D-CNN, which was compared with a one-dimensional convolutional neural network (1D-CNN). The results show that the 2D-CNN model (nRMSE = 11.71–31.58%) achieved higher accuracy than the 1D-CNN model (nRMSE = 12.79–55.34%) in predicting leaf pigment contents. For the 2D-CNN models, Cab achieved the best estimation accuracy with an nRMSE value of 11.71% (R2 = 0.92, RMSE = 6.10 µg/cm2), followed by Car (R2 = 0.84, RMSE = 1.03 µg/cm2, nRMSE = 12.29%) and Canth (R2 = 0.89, RMSE = 0.35 µg/cm2, nRMSE = 31.58%). Both 1D-CNN and 2D-CNN models coupled with GA using a subset of the spectrum produced higher prediction accuracy in all pigments than those using the full spectrum. Additionally, the 2D-CNN generalized better than the 1D-CNN. This study highlights the potential of 2D-CNN approaches for accurate prediction of leaf pigment content from spectral reflectance data, offering a promising tool for advanced vegetation monitoring. Full article
(This article belongs to the Section Forest Remote Sensing)
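The abstract says the 400–800 nm reflectance spectra were encoded as 2D matrices to train the 2D-CNN but does not specify the layout. The sketch below shows one plausible encoding, a row-major reshape after linear resampling; the 20×20 target size and the resampling step are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def spectrum_to_matrix(reflectance, shape=(20, 20)):
    """Resample a 1D reflectance spectrum to rows*cols samples and reshape
    it row-major into a 2D matrix usable as a 2D-CNN input channel."""
    r = np.asarray(reflectance, dtype=float)
    n = shape[0] * shape[1]
    x_old = np.linspace(0.0, 1.0, r.size)
    x_new = np.linspace(0.0, 1.0, n)
    resampled = np.interp(x_new, x_old, r)  # linear resampling to n samples
    return resampled.reshape(shape)

# Example: a 401-band spectrum sampled at 1 nm from 400 to 800 nm
spectrum = np.linspace(0.05, 0.5, 401)
img = spectrum_to_matrix(spectrum)
# img.shape == (20, 20); img[0, 0] is the 400 nm band, img[-1, -1] the 800 nm band
```

A 2D layout like this lets convolutional kernels see pairs of bands that are far apart in wavelength as spatial neighbors, which is one motivation for encoding spectra as matrices rather than vectors.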

18 pages, 6280 KB  
Article
Estimation of Compression Depth During CPR Using FMCW Radar with Deep Convolutional Neural Network
by Insoo Choi, Stephen Gyung Won Lee, Hyoun-Joong Kong, Ki Jeong Hong and Youngwook Kim
Sensors 2025, 25(19), 5947; https://doi.org/10.3390/s25195947 - 24 Sep 2025
Abstract
Effective Cardiopulmonary Resuscitation (CPR) requires precise chest compression depth, but current out-of-hospital monitoring technologies face limitations. This study introduces a method using frequency-modulated continuous-wave (FMCW) radar to remotely and accurately monitor chest compressions. FMCW radar captures range, Doppler, and angular data, and we utilize micro-Doppler signatures for detailed motion analysis. By integrating Doppler shifts over time, chest displacement is estimated. We compare a regression model based on maximum Doppler frequency with deep convolutional neural networks (DCNNs) trained on spectrograms generated via short-time Fourier transform (STFT) and the Wigner–Ville distribution (WVD). The regression model achieved a root mean square error (RMSE) of 0.535 cm. The STFT-based DCNN improved accuracy with an RMSE of 0.505 cm, while the WVD-based DCNN achieved the best performance with an RMSE of 0.447 cm, representing an 11.5% improvement over the STFT-based DCNN. These findings highlight the potential of combining FMCW radar and deep learning to provide accurate, real-time chest compression depth measurement during CPR, supporting the development of advanced, non-contact monitoring systems for emergency medical response. Full article
(This article belongs to the Special Issue AI-Enhanced Radar Sensors: Theories and Applications)
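The displacement estimate in the radar abstract comes from integrating Doppler shifts over time: a Doppler shift f_d at wavelength λ corresponds to radial velocity v = f_d·λ/2, and cumulatively summing v over the frame interval gives displacement. The sketch below shows that relation; the 24 GHz carrier and 1 kHz frame rate are illustrative values, not the authors' radar parameters.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def displacement_from_doppler(f_doppler, fs, fc):
    """Integrate a Doppler-frequency track into radial displacement.

    f_doppler : instantaneous Doppler frequencies (Hz), one per frame
    fs        : frame rate of the Doppler track (Hz)
    fc        : radar carrier frequency (Hz)
    """
    lam = C / fc
    velocity = np.asarray(f_doppler, dtype=float) * lam / 2.0  # v = f_d * lambda / 2
    return np.cumsum(velocity) / fs  # discrete-time integration

# Example: a constant 100 Hz Doppler shift at a 24 GHz carrier for 1 s
fs, fc = 1000.0, 24e9
disp = displacement_from_doppler(np.full(1000, 100.0), fs, fc)
# final displacement = 100 * (C / fc) / 2 = 0.625 m
```

Because the depth estimate is an integral, any bias in the extracted Doppler frequency accumulates over a compression cycle, which is one reason learned spectrogram-based estimators can outperform a direct regression on the maximum Doppler frequency.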
