Search Results (13,486)

Search Parameters:
Keywords = deep learning approaches

25 pages, 6271 KB  
Article
Estimating Fractional Land Cover Using Sentinel-2 and Multi-Source Data with Traditional Machine Learning and Deep Learning Approaches
by Sergio Sierra, Rubén Ramo, Marc Padilla, Laura Quirós and Adolfo Cobo
Remote Sens. 2025, 17(19), 3364; https://doi.org/10.3390/rs17193364 (registering DOI) - 4 Oct 2025
Abstract
Land cover mapping is essential for territorial management due to its links with ecological, hydrological, climatic, and socioeconomic processes. Traditional methods assign a discrete class to each pixel, but this study proposes estimating cover fractions with Sentinel-2 imagery (20 m) and AI. We employed the French Land cover from Aerospace ImageRy (FLAIR) dataset (810 km² in France, 19 classes), with labels co-registered with Sentinel-2 to derive precise fractional proportions per pixel. From these references, we generated training sets combining spectral bands, derived indices, and auxiliary data (climatic and temporal variables). Various machine learning models—including XGBoost, three deep neural network (DNN) architectures with different depths, and convolutional neural networks (CNNs)—were trained and evaluated to identify the optimal configuration for fractional cover estimation. Model validation on the test set employed RMSE, MAE, and R² metrics at both pixel level (20 m Sentinel-2) and scene level (100 m FLAIR). The training set integrating spectral bands, vegetation indices, and auxiliary variables yielded the best MAE and RMSE results. Among all models, DNN2 achieved the highest performance, with a pixel-level RMSE of 13.83 and MAE of 5.42, and a scene-level RMSE of 4.94 and MAE of 2.36. This fractional approach paves the way for advanced remote sensing applications, including continuous cover-change monitoring, carbon footprint estimation, and sustainability-oriented territorial planning. Full article
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)
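As an editorial illustration of the fractional-cover idea, the minimal sketch below regresses per-pixel class fractions with a small DNN whose softmax output keeps the 19 fractions non-negative and summing to one, and reports RMSE and MAE. The feature width, network depth, and synthetic data are assumptions, not the paper's DNN2 configuration.

```python
import numpy as np
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 24, 19          # assumed input width and FLAIR class count

class FractionDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, N_CLASSES),
        )

    def forward(self, x):
        # Softmax keeps the predicted fractions non-negative and summing to 1 per pixel.
        return torch.softmax(self.net(x), dim=-1)

def train(model, x, y, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()               # MAE on fractional proportions
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

def evaluate(model, x, y):
    with torch.no_grad():
        err = (model(x) - y).numpy()
    return {"RMSE": float(np.sqrt((err ** 2).mean())), "MAE": float(np.abs(err).mean())}

# Random stand-ins for real pixel features and reference fractions.
x = torch.rand(1000, N_FEATURES)
y = torch.softmax(torch.rand(1000, N_CLASSES), dim=-1)
print(evaluate(train(FractionDNN(), x, y), x, y))
```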
27 pages, 10093 KB  
Article
Estimating Gully Erosion Induced by Heavy Rainfall Events Using Stereoscopic Imagery and UAV LiDAR
by Lu Wang, Yuan Qi, Wenwei Xie, Rui Yang, Xijun Wang, Shengming Zhou, Yanqing Dong and Xihong Lian
Remote Sens. 2025, 17(19), 3363; https://doi.org/10.3390/rs17193363 (registering DOI) - 4 Oct 2025
Abstract
Gully erosion, driven by the interplay of natural processes and human activities, results in severe soil degradation and landscape alteration, yet approaches for accurately quantifying erosion triggered by extreme precipitation using multi-source high-resolution remote sensing remain limited. This study first extracted digital surface models (DSMs) for the years 2014 and 2024 using Ziyuan-3 and GaoFen-7 satellite stereo imagery, respectively. Subsequently, the DSMs were calibrated using high-resolution unmanned aerial vehicle photogrammetry data to enhance elevation accuracy. Based on the corrected DSMs, gully erosion depths from 2014 to 2024 were quantified. Erosion patches were identified through a deep learning framework applied to GaoFen-1 and GaoFen-2 imagery. The analysis further explored the influences of natural processes and anthropogenic activities on elevation changes within the gully erosion watershed. Topographic monitoring in the Sandu River watershed revealed a net elevation loss of 2.6 m over 2014–2024, with erosion depths of up to 8 m in some sub-watersheds. Elevation changes are primarily driven by extreme precipitation-induced erosion alongside human activities, resulting in substantial spatial variability in surface lowering across the watershed. This approach provides a refined assessment of the spatial and temporal evolution of gully erosion, offering valuable insights for soil conservation and sustainable land management strategies in the Loess Plateau region. Full article
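The core quantification step, differencing two co-registered DSMs to obtain erosion depth, can be sketched as follows; the synthetic elevation grids and the 2 m threshold are illustrative assumptions, and the UAV-based calibration is not shown.

```python
import numpy as np

# Synthetic stand-ins for two co-registered elevation grids (meters).
dsm_2014 = np.random.normal(1500.0, 5.0, (500, 500))
dsm_2024 = dsm_2014 - np.abs(np.random.normal(2.0, 1.5, (500, 500)))

change = dsm_2024 - dsm_2014                       # negative values = surface lowering
erosion_depth = np.where(change < 0, -change, 0.0)

print("mean elevation change (m):", change.mean())
print("max erosion depth (m):", erosion_depth.max())
print("share of pixels eroded deeper than 2 m:", (erosion_depth > 2.0).mean())
```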
15 pages, 2358 KB  
Article
Optimized Lung Nodule Classification Using CLAHE-Enhanced CT Imaging and Swin Transformer-Based Deep Feature Extraction
by Dorsaf Hrizi, Khaoula Tbarki and Sadok Elasmi
J. Imaging 2025, 11(10), 346; https://doi.org/10.3390/jimaging11100346 (registering DOI) - 4 Oct 2025
Abstract
Lung cancer remains one of the most lethal cancers globally. Its early detection is vital to improving survival rates. In this work, we propose a hybrid computer-aided diagnosis (CAD) pipeline for lung cancer classification using Computed Tomography (CT) scan images. The proposed CAD pipeline integrates ten image preprocessing techniques, ten pretrained deep learning models for feature extraction (including convolutional neural networks and transformer-based architectures), and four classical machine learning classifiers. Unlike traditional end-to-end deep learning systems, our approach decouples feature extraction from classification, enhancing interpretability and reducing the risk of overfitting. A total of 400 model configurations were evaluated to identify the optimal combination. The proposed approach was evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, which comprises 1018 thoracic CT scans annotated by four thoracic radiologists. For the classification task, the dataset included a total of 6568 images labeled as malignant and 4849 images labeled as benign. Experimental results show that the best-performing pipeline, combining Contrast Limited Adaptive Histogram Equalization, Swin Transformer feature extraction, and eXtreme Gradient Boosting, achieved an accuracy of 95.8%. Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)
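A hedged sketch of the decoupled pipeline is given below: CLAHE contrast enhancement with OpenCV, a pretrained Swin Transformer from timm as a frozen feature extractor, and XGBoost as the classifier. The model name, CLAHE settings, and synthetic slices are assumptions rather than the authors' exact configuration.

```python
import cv2
import numpy as np
import timm
import torch
from xgboost import XGBClassifier

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
backbone = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=0)
backbone.eval()

def extract_features(gray_slice: np.ndarray) -> np.ndarray:
    """CLAHE-enhance one 8-bit CT slice, then pool Swin Transformer features."""
    enhanced = clahe.apply(gray_slice)
    rgb = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2RGB)
    rgb = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        return backbone(tensor).squeeze(0).numpy()

# Synthetic stand-ins for labeled nodule slices (0 = benign, 1 = malignant).
slices = [np.random.randint(0, 256, (256, 256), dtype=np.uint8) for _ in range(32)]
labels = np.random.randint(0, 2, 32)

features = np.stack([extract_features(s) for s in slices])
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```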
25 pages, 1601 KB  
Article
Evaluating Municipal Solid Waste Incineration Through Determining Flame Combustion to Improve Combustion Processes for Environmental Sanitation
by Jian Tang, Xiaoxian Yang, Wei Wang and Jian Rong
Sustainability 2025, 17(19), 8872; https://doi.org/10.3390/su17198872 (registering DOI) - 4 Oct 2025
Abstract
Municipal solid waste (MSW) refers to solid and semi-solid waste generated during human production and daily activities. The process of incinerating such waste, known as municipal solid waste incineration (MSWI), serves as a critical method for reducing waste volume and recovering resources. Automatic online recognition of flame combustion status during MSWI is a key technical approach to ensuring system stability, addressing issues such as high pollution emissions, severe equipment wear, and low operational efficiency. However, when optimized features and hyperparameters are selected manually on the basis of empirical experience, the MSWI flame combustion state recognition model suffers from high time consumption, strong dependency on expertise, and difficulty in adaptively obtaining optimal solutions. To address these challenges, this article proposes a method for constructing a flame combustion state recognition model that is jointly optimized with reinforcement learning (RL), long short-term memory (LSTM), and parallel differential evolution (PDE) algorithms, achieving collaborative optimization of deep features and model hyperparameters. First, the feature selection and hyperparameter optimization problem of the ViT-IDFC combustion state recognition model is transformed into an encoding design and optimization problem for the PDE algorithm. Then, the mutation and selection factors of the PDE algorithm are used as modeling inputs for the LSTM, which predicts the optimal hyperparameters based on PDE outputs. Next, during the PDE-based optimization of the ViT-IDFC model, a policy gradient reinforcement learning method is applied to determine the parameters of the LSTM model. Finally, the optimized combustion state recognition model is obtained by identifying the feature selection parameters and hyperparameters of the ViT-IDFC model. Test results based on an industrial image dataset demonstrate that the proposed optimization algorithm improves the recognition performance of both the left and right grate recognition models, with the left grate achieving a 0.51% increase in recognition accuracy and the right grate a 0.74% increase. Full article
(This article belongs to the Section Waste and Recycling)
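The encoding idea at the heart of the PDE step can be illustrated with a minimal sketch in which a differential-evolution vector jointly carries per-feature selection scores and one hyperparameter, scored by cross-validated accuracy. The LSTM-based hyperparameter prediction and the policy-gradient RL component are not reproduced, and the stand-in classifier and data are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=20, n_informative=6, random_state=0)

def objective(vector):
    mask = vector[:-1] > 0.5                      # first 20 genes: keep/drop each feature
    n_estimators = int(round(vector[-1]))         # last gene: a model hyperparameter
    if not mask.any():
        return 1.0                                # penalize empty feature sets
    clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return 1.0 - acc                              # DE minimizes, so invert accuracy

bounds = [(0.0, 1.0)] * X.shape[1] + [(10, 100)]
result = differential_evolution(objective, bounds, maxiter=5, popsize=6, seed=0)
print("best cross-validated accuracy:", 1.0 - result.fun)
```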
45 pages, 7440 KB  
Review
Integrating Speech Recognition into Intelligent Information Systems: From Statistical Models to Deep Learning
by Chaoji Wu, Yi Pan, Haipan Wu and Lei Ning
Informatics 2025, 12(4), 107; https://doi.org/10.3390/informatics12040107 (registering DOI) - 4 Oct 2025
Abstract
Automatic speech recognition (ASR) has advanced rapidly, evolving from early template-matching systems to modern deep learning frameworks. This review systematically traces ASR’s technological evolution across four phases: the template-based era, statistical modeling approaches, the deep learning revolution, and the emergence of large-scale models under diverse learning paradigms. We analyze core technologies such as hidden Markov models (HMMs), Gaussian mixture models (GMMs), recurrent neural networks (RNNs), and recent architectures including Transformer-based models and Wav2Vec 2.0. Beyond algorithmic development, we examine how ASR integrates into intelligent information systems, analyzing real-world applications in healthcare, education, smart homes, enterprise systems, and automotive domains with attention to deployment considerations and system design. We also address persistent challenges—noise robustness, low-resource adaptation, and deployment efficiency—while exploring emerging solutions such as multimodal fusion, privacy-preserving modeling, and lightweight architectures. Finally, we outline future research directions to guide the development of robust, scalable, and intelligent ASR systems for complex, evolving environments. Full article
(This article belongs to the Section Machine Learning)
21 pages, 2239 KB  
Article
Deep Reinforcement Learning Approach for Traffic Light Control and Transit Priority
by Saeed Mansouryar, Chiara Colombaroni, Natalia Isaenko and Gaetano Fusco
Future Transp. 2025, 5(4), 137; https://doi.org/10.3390/futuretransp5040137 (registering DOI) - 4 Oct 2025
Abstract
This study investigates the use of deep reinforcement learning techniques to improve traffic signal control systems through the integration of deep learning and reinforcement learning approaches. A deep reinforcement learning architecture provides adaptive control through a reinforcement learning interface, with deep learning used to represent traffic queues with respect to signal timings. This has driven recent research, which has reported success with such dynamic approaches. To further explore this success, we apply a deep reinforcement learning algorithm over a grid of 21 interconnected signalized intersections and monitor its effectiveness. Unlike previous research, which often examined isolated or idealized scenarios, our model is applied to the real-world traffic network of Via “Prenestina” in eastern Rome. We utilize the Simulation of Urban MObility (SUMO) platform to simulate and test the model. This study has two main objectives: to ensure the algorithm’s correct implementation in a real traffic network and to assess its impact on public transportation by incorporating an additional priority reward for public transport. The simulation results confirm the model’s effectiveness in optimizing traffic signals and reducing delays for public transport. Full article
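A minimal sketch of the simulation-interaction loop such a controller relies on is shown below: queue states are read from SUMO via TraCI, a phase action is applied per intersection, and the reward adds a transit-priority term. The scenario file name, the priority weight, and the random action stand-in (where a trained DQN policy would act) are assumptions.

```python
import random
import traci

BUS_PRIORITY_WEIGHT = 0.5                               # assumed transit-priority weight

def queue_and_bus_wait(tls_id):
    """Total halted vehicles and accumulated bus waiting time on controlled lanes."""
    halted, bus_wait = 0, 0.0
    for lane in set(traci.trafficlight.getControlledLanes(tls_id)):
        halted += traci.lane.getLastStepHaltingNumber(lane)
        for veh in traci.lane.getLastStepVehicleIDs(lane):
            if traci.vehicle.getVehicleClass(veh) == "bus":
                bus_wait += traci.vehicle.getWaitingTime(veh)
    return halted, bus_wait

traci.start(["sumo", "-c", "prenestina.sumocfg"])       # assumed SUMO scenario file
tls_ids = traci.trafficlight.getIDList()
n_phases = {t: len(traci.trafficlight.getAllProgramLogics(t)[0].phases) for t in tls_ids}
total_reward = 0.0

for step in range(3600):
    if step % 10 == 0:                                  # act every 10 s
        for tls in tls_ids:
            action = random.randrange(n_phases[tls])    # stand-in for a trained DQN policy
            traci.trafficlight.setPhase(tls, action)
    traci.simulationStep()
    for tls in tls_ids:
        halted, bus_wait = queue_and_bus_wait(tls)
        total_reward += -halted - BUS_PRIORITY_WEIGHT * bus_wait   # reward fed to the learner

traci.close()
print("episode reward:", total_reward)
```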
22 pages, 3580 KB  
Article
Edge-AI Enabled Resource Allocation for Federated Learning in Cell-Free Massive MIMO-Based 6G Wireless Networks: A Joint Optimization Perspective
by Chen Yang and Quanrong Fang
Electronics 2025, 14(19), 3938; https://doi.org/10.3390/electronics14193938 (registering DOI) - 4 Oct 2025
Abstract
The advent of sixth-generation (6G) wireless networks and cell-free massive multiple-input multiple-output (MIMO) architectures underscores the need for efficient resource allocation to support federated learning (FL) at the network edge. Existing approaches often treat communication, computation, and learning in isolation, overlooking dynamic heterogeneity and fairness, which leads to degraded performance in large-scale deployments. To address this gap, we propose a joint optimization framework that integrates communication–computation co-design, fairness-aware aggregation, and a hybrid strategy combining convex relaxation with deep reinforcement learning. Extensive experiments on benchmark vision datasets and real-world wireless traces demonstrate that the framework achieves up to 23% higher accuracy, 18% lower latency, and 21% energy savings compared with state-of-the-art baselines. These findings advance joint optimization in FL and demonstrate its scalability for 6G applications. Full article
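The fairness-aware aggregation component can be illustrated with a minimal sketch in which client updates are averaged with weights that blend dataset size and local loss, so under-served clients are not ignored; the blend factor and client statistics are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def aggregate(client_weights, n_samples, local_losses, fairness=0.5):
    """Average client parameter vectors with a size/loss-blended weighting."""
    size_w = np.asarray(n_samples, dtype=float)
    size_w /= size_w.sum()
    loss_w = np.asarray(local_losses, dtype=float)
    loss_w /= loss_w.sum()
    w = (1 - fairness) * size_w + fairness * loss_w   # fairness=0 recovers plain FedAvg
    return np.average(np.stack(client_weights), axis=0, weights=w)

# Usage with three synthetic clients sharing a 4-parameter model.
clients = [np.random.randn(4) for _ in range(3)]
global_update = aggregate(clients, n_samples=[1200, 300, 80], local_losses=[0.2, 0.6, 0.9])
print(global_update)
```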
15 pages, 3389 KB  
Article
Photovoltaic Decomposition Method Based on Multi-Scale Modeling and Multi-Feature Fusion
by Zhiheng Xu, Peidong Chen, Ran Cheng, Yao Duan, Qiang Luo, Huahui Zhang, Zhenning Pan and Wencong Xiao
Energies 2025, 18(19), 5271; https://doi.org/10.3390/en18195271 (registering DOI) - 4 Oct 2025
Abstract
Deep learning-based Non-Intrusive Load Monitoring (NILM) methods have been widely applied to residential load identification. However, photovoltaic (PV) loads exhibit strong non-stationarity, high dependence on weather conditions, and strong coupling with multi-source data, which limit the accuracy and generalization of existing models. To address these challenges, this paper proposes a multi-scale and multi-feature fusion framework for PV disaggregation, consisting of three modules: Multi-Scale Time Series Decomposition (MTD), Multi-Feature Fusion (MFF), and Temporal Attention Decomposition (TAD). These modules jointly capture short-term fluctuations, long-term trends, and deep dependencies across multi-source features. Experiments were conducted on real residential datasets from southern China. Results show that, compared with representative baselines such as SGN-Conv and MAT-Conv, the proposed method reduces MAE by over 60% and SAE by nearly 70% for some users, and it achieves more than 45% error reduction in cross-user tests. These findings demonstrate that the proposed approach significantly enhances both accuracy and generalization in PV load disaggregation. Full article
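A minimal sketch of the multi-scale decomposition idea follows: an aggregate household power series is split into a long-term trend, a mid-scale component, and short-term fluctuations using moving averages with different windows. Window lengths and the synthetic PV-plus-load series are assumptions; the fusion and attention modules are not shown.

```python
import numpy as np
import pandas as pd

t = np.arange(0, 2880)                                            # two days at 1-min resolution
pv = -np.clip(np.sin((t % 1440) / 1440 * np.pi), 0, None) * 3.0   # PV generation offsets load
load = 1.5 + 0.5 * np.random.randn(t.size)
series = pd.Series(load + pv, index=t)                            # net metered power (kW)

trend = series.rolling(window=360, center=True, min_periods=1).mean()        # long-term trend
mid = series.rolling(window=60, center=True, min_periods=1).mean() - trend   # mid-scale component
residual = series - trend - mid                                              # short-term fluctuations

print(pd.DataFrame({"trend": trend, "mid": mid, "residual": residual}).describe())
```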
43 pages, 4746 KB  
Article
The BTC Price Prediction Paradox Through Methodological Pluralism
by Mariya Paskaleva and Ivanka Vasenska
Risks 2025, 13(10), 195; https://doi.org/10.3390/risks13100195 (registering DOI) - 4 Oct 2025
Abstract
Bitcoin’s extreme price volatility presents significant challenges for investors and traders, necessitating accurate predictive models to guide decision-making in cryptocurrency markets. This study compares the performance of machine learning approaches for Bitcoin price prediction, specifically examining XGBoost gradient boosting, Long Short-Term Memory (LSTM), and GARCH-DL neural networks using comprehensive market data spanning December 2013 to May 2025. We employed extensive feature engineering incorporating technical indicators, applied multiple machine learning and deep learning model configurations, including standalone and ensemble approaches, and utilized cross-validation techniques to assess model robustness. Based on the empirical results, the most significant practical implication is that traders and financial institutions should adopt a dual-model approach, deploying XGBoost for directional trading strategies and utilizing LSTM models for applications requiring precise magnitude predictions, due to their superior continuous forecasting performance. This research demonstrates that traditional technical indicators, particularly market capitalization and price extremes, remain highly predictive in algorithmic trading contexts, validating their continued integration into modern cryptocurrency prediction systems. For risk management applications, the attention-based LSTM’s superior risk-adjusted returns, combined with enhanced interpretability, make it particularly valuable for institutional portfolio optimization and regulatory compliance requirements. The findings suggest that ensemble methods offer balanced performance across multiple evaluation criteria, providing a robust foundation for production trading systems where consistent performance is more valuable than optimization for a single metric. These results enable practitioners to make evidence-based decisions about model selection based on their specific trading objectives, whether focused on directional accuracy for signal generation or precision of magnitude for risk assessment and portfolio management. Full article
(This article belongs to the Special Issue Portfolio Theory, Financial Risk Analysis and Applications)
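The directional side of such a dual-model setup can be sketched as below: a few simple technical features from a daily price series feed an XGBoost classifier that predicts next-day direction. The synthetic price series and the feature set are illustrative assumptions, not the study's full feature engineering.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
close = pd.Series(20000 * np.exp(np.cumsum(rng.normal(0, 0.03, 1500))))  # synthetic daily prices

feats = pd.DataFrame({
    "ret_1d": close.pct_change(),
    "ret_7d": close.pct_change(7),
    "sma_ratio": close / close.rolling(20).mean(),
    "volatility": close.pct_change().rolling(14).std(),
})
target = (close.shift(-1) > close).astype(int)        # 1 = next-day price increase

data = pd.concat([feats, target.rename("up")], axis=1).dropna()
split = int(len(data) * 0.8)                          # chronological train/test split
train, test = data.iloc[:split], data.iloc[split:]

clf = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05, eval_metric="logloss")
clf.fit(train[feats.columns], train["up"])
print("directional accuracy:", clf.score(test[feats.columns], test["up"]))
```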
18 pages, 3251 KB  
Article
Classifying Advanced Driver Assistance System (ADAS) Activation from Multimodal Driving Data: A Real-World Study
by Gihun Lee, Kahyun Lee and Jong-Uk Hou
Sensors 2025, 25(19), 6139; https://doi.org/10.3390/s25196139 (registering DOI) - 4 Oct 2025
Abstract
Identifying the activation status of advanced driver assistance systems (ADAS) in real-world driving environments is crucial for safety, responsibility attribution, and accident forensics. Unlike prior studies that primarily rely on simulation-based settings or unsynchronized data, we collected a multimodal dataset comprising synchronized controller area network (CAN)-bus and smartphone-based inertial measurement unit (IMU) signals from drivers on consistent highway sections under both ADAS-enabled and manual modes. Using these data, we developed lightweight classification pipelines based on statistical and deep learning approaches to explore the feasibility of distinguishing ADAS operation. Our analyses revealed systematic behavioral differences between modes, particularly in speed regulation and steering stability, highlighting how ADAS reduces steering variability and stabilizes speed control. Although classification accuracy was moderate, this study provides one of the first data-driven demonstrations of ADAS status detection under naturalistic conditions. Beyond classification, the released dataset enables systematic behavioral analysis and offers a valuable resource for advancing research on driver monitoring, adaptive ADAS algorithms, and accident forensics. Full article
(This article belongs to the Special Issue Applications of Machine Learning in Automotive Engineering)
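A minimal sketch of window-level classification is shown below, assuming synchronized speed and steering-angle traces are available per fixed-length window; the features, window length, classifier, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def window_features(adas_on: bool):
    """Synthetic 60 s window at 10 Hz; ADAS-on windows are smoother by construction."""
    noise = 0.2 if adas_on else 0.6
    speed = 100 + rng.normal(0, noise, 600)           # km/h
    steer = rng.normal(0, noise / 10, 600)            # steering angle (rad)
    return [speed.std(), np.abs(np.diff(speed)).mean(),
            steer.std(), np.abs(np.diff(steer)).mean()]

labels = rng.integers(0, 2, 400).astype(bool)         # 1 = ADAS enabled
X = np.array([window_features(a) for a in labels])
y = labels.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```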
16 pages, 2720 KB  
Article
Shale Oil T2 Spectrum Inversion Method Based on Autoencoder and Fourier Transform
by Jun Zhao, Shixiang Jiao, Li Bai, Bing Xie, Yan Chen, Zhenguan Wu and Shaomin Zhang
Geosciences 2025, 15(10), 387; https://doi.org/10.3390/geosciences15100387 (registering DOI) - 4 Oct 2025
Abstract
Accurate inversion of the T2 spectrum of shale oil reservoir fluids is crucial for reservoir evaluation. However, traditional nuclear magnetic resonance inversion methods face challenges in extracting features from multi-exponential decay signals. This study proposes an inversion method that combines an autoencoder (AE) with the Fourier transform, aiming to enhance the accuracy and stability of T2 spectrum estimation for shale oil reservoirs. The autoencoder is employed to automatically extract deep features from the echo train, while the Fourier transform is used to enhance the frequency-domain features of the multi-exponential decay information. Furthermore, this paper designs a customized weighted loss function based on a self-attention mechanism to focus the model’s learning capability on peak regions, thereby mitigating the negative impact of zero-value regions on model training. Experimental results demonstrate significant improvements in inversion accuracy, noise resistance, and computational efficiency compared to traditional inversion methods. This research provides an efficient and reliable new approach for precise evaluation of the T2 spectrum in shale oil reservoirs. Full article
(This article belongs to the Section Geophysics)
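Two of the core ideas, feeding the network the echo train concatenated with the magnitude of its Fourier transform and weighting the reconstruction loss more heavily in peak regions of the target spectrum, can be sketched as follows; sizes, the synthetic data, and the weighting rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_ECHOES, N_T2_BINS = 500, 64

class FFTEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = N_ECHOES + N_ECHOES // 2 + 1          # echo train + rFFT magnitudes
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, N_T2_BINS), nn.Softplus())

    def forward(self, echoes):
        spectrum = torch.fft.rfft(echoes, dim=-1).abs()   # frequency-domain features
        return self.decoder(self.encoder(torch.cat([echoes, spectrum], dim=-1)))

def peak_weighted_mse(pred, target, peak_weight=5.0):
    # Bins above 20% of each sample's maximum count as "peak" and get extra weight.
    peak_mask = target > 0.2 * target.amax(dim=-1, keepdim=True)
    weights = torch.ones_like(target)
    weights[peak_mask] = peak_weight
    return (weights * (pred - target) ** 2).mean()

# One synthetic training step.
echoes = torch.rand(16, N_ECHOES)
target_t2 = torch.rand(16, N_T2_BINS)
model = FFTEncoderDecoder()
loss = peak_weighted_mse(model(echoes), target_t2)
loss.backward()
print("loss:", float(loss))
```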
28 pages, 1334 KB  
Article
A Scalable Two-Level Deep Reinforcement Learning Framework for Joint WIP Control and Job Sequencing in Flow Shops
by Maria Grazia Marchesano, Guido Guizzi, Valentina Popolo and Anastasiia Rozhok
Appl. Sci. 2025, 15(19), 10705; https://doi.org/10.3390/app151910705 - 3 Oct 2025
Abstract
Effective production control requires aligning strategic planning with real-time execution under dynamic and stochastic conditions. This study proposes a scalable dual-agent Deep Reinforcement Learning (DRL) framework for the joint optimisation of Work-In-Process (WIP) control and job sequencing in flow-shop environments. A strategic DQN agent regulates global WIP to meet throughput targets, while a tactical DQN agent adaptively selects dispatching rules at the machine level on an event-driven basis. Parameter sharing in the tactical agent ensures inherent scalability, overcoming the combinatorial complexity of multi-machine scheduling. The agents coordinate indirectly via a shared simulation environment, learning to balance global stability with local responsiveness. The framework is validated through a discrete-event simulation integrating agent-based modelling, demonstrating consistent performance across multiple production scales (5–15 machines) and process time variabilities. Results show that the approach matches or surpasses analytical benchmarks and outperforms static rule-based strategies, highlighting its robustness, adaptability, and potential as a foundation for future Hierarchical Reinforcement Learning applications in manufacturing. Full article
(This article belongs to the Special Issue Intelligent Manufacturing and Production)
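The tactical side can be illustrated with a minimal sketch in which a single Q-network, shared by all machines, maps a machine-local state to a choice among dispatching rules selected epsilon-greedily; the state features, rule set, and sizes are assumptions, and the strategic WIP agent and training loop are not shown.

```python
import random
import torch
import torch.nn as nn

DISPATCH_RULES = ["FIFO", "SPT", "EDD", "LPT"]
STATE_DIM = 4    # e.g. queue length, current WIP, utilization, throughput gap (assumed features)

# One Q-network shared by every machine gives the tactical agent its scalability.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, len(DISPATCH_RULES)))

def select_rule(machine_state, epsilon=0.1):
    """Epsilon-greedy rule selection with shared parameters across machines."""
    if random.random() < epsilon:
        return random.choice(DISPATCH_RULES)
    with torch.no_grad():
        q_values = q_net(torch.tensor(machine_state, dtype=torch.float32))
    return DISPATCH_RULES[int(q_values.argmax())]

# One decision per machine in a 5-machine line (synthetic local states).
states = [[3, 12, 0.8, -1], [1, 12, 0.6, -1], [5, 12, 0.9, -1], [2, 12, 0.7, -1], [0, 12, 0.4, -1]]
for m, state in enumerate(states):
    print(f"machine {m}: {select_rule(state)}")
```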
17 pages, 10273 KB  
Article
Deep Learning-Based Approach for Automatic Defect Detection in Complex Structures Using PAUT Data
by Kseniia Barshok, Jung-In Choi and Jaesun Lee
Sensors 2025, 25(19), 6128; https://doi.org/10.3390/s25196128 - 3 Oct 2025
Abstract
This paper presents a comprehensive study on automated defect detection in complex structures using phased array ultrasonic testing (PAUT) data, focusing on both traditional signal processing and advanced deep learning methods. As a non-AI baseline, the well-known signal-to-noise ratio algorithm was improved by introducing automatic depth gate calculation using derivative analysis, eliminating the need for manual parameter tuning. Although this method provides robust flaw indication, it struggles with automatic defect detection in highly noisy data or in cases with large pore zones. Considering this, multiple DL architectures—including fully connected networks, convolutional neural networks, and a novel Convolutional Attention Temporal Transformer for Sequences (CATT-S)—are developed and trained on diverse datasets comprising simulated CIVA data and real-world data files from welded and composite specimens. Experimental results show that while the FCN architecture is limited in its ability to model dependencies, the CNN achieves strong performance with a test accuracy of 94.9%, effectively capturing local features from PAUT signals. The CATT-S model, which integrates a convolutional feature extractor with a self-attention mechanism, consistently outperforms the other baselines by effectively modeling both fine-grained signal morphology and long-range inter-beam dependencies. Achieving a remarkable accuracy of 99.4% and a strong F1-score of 0.905 on experimental data, this integrated approach demonstrates significant practical potential for improving the reliability and efficiency of NDT in complex, heterogeneous materials. Full article
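The derivative-based depth gate of the non-AI baseline can be sketched on a single A-scan as follows: the gate bounds come from the steepest rise and fall of the smoothed signal envelope, and the SNR compares the peak inside the gate with the noise outside it. The synthetic A-scan and smoothing window are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

t = np.arange(2000)
ascan = 0.05 * np.random.randn(t.size)
ascan += np.exp(-((t - 800) ** 2) / 400) * np.sin(0.5 * t)     # synthetic flaw echo near sample 800

envelope = np.abs(hilbert(ascan))                              # signal envelope
smooth = np.convolve(envelope, np.ones(50) / 50, mode="same")  # 50-sample smoothing window
slope = np.gradient(smooth)

# Gate bounds from the steepest rise and fall of the smoothed envelope.
gate_lo, gate_hi = sorted((int(np.argmax(slope)), int(np.argmin(slope))))
signal_peak = envelope[gate_lo:gate_hi].max()
noise = np.concatenate([envelope[:gate_lo], envelope[gate_hi:]]).std()
snr_db = 20 * np.log10(signal_peak / noise)
print(f"gate [{gate_lo}, {gate_hi}], SNR = {snr_db:.1f} dB")
```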
38 pages, 2485 KB  
Review
Research Progress of Deep Learning-Based Artificial Intelligence Technology in Pest and Disease Detection and Control
by Yu Wu, Li Chen, Ning Yang and Zongbao Sun
Agriculture 2025, 15(19), 2077; https://doi.org/10.3390/agriculture15192077 - 3 Oct 2025
Abstract
With the rapid advancement of artificial intelligence technology, the widespread application of deep learning in computer vision is driving the transformation of agricultural pest detection and control toward greater intelligence and precision. This paper systematically reviews the evolution of agricultural pest detection and control technologies, with a special focus on the effectiveness of deep-learning-based image recognition methods for pest identification, as well as their integrated applications in drone-based remote sensing, spectral imaging, and Internet of Things sensor systems. Through multimodal data fusion and dynamic prediction, artificial intelligence has significantly improved the response times and accuracy of pest monitoring. On the control side, the development of intelligent prediction and early-warning systems, precision pesticide-application technologies, and smart equipment has advanced the goals of eco-friendly pest management and ecological regulation. However, challenges such as high data-annotation costs, limited model generalization, and constrained computing power on edge devices remain. Moving forward, further exploration of cutting-edge approaches such as self-supervised learning, federated learning, and digital twins will be essential to build more efficient and reliable intelligent control systems, providing robust technical support for sustainable agricultural development. Full article
18 pages, 3114 KB  
Article
A Novel Empirical-Informed Neural Network Method for Vehicle Tire Noise Prediction
by Peisong Dai, Ruxue Dai, Yingqi Yin, Jingjing Wang, Haibo Huang and Weiping Ding
Machines 2025, 13(10), 911; https://doi.org/10.3390/machines13100911 - 2 Oct 2025
Abstract
In the evaluation of vehicle noise, vibration and harshness (NVH) performance, interior noise control is the core consideration. In the early stages of automobile research and development, accurate prediction of road-induced interior noise is very important for optimizing NVH performance and shortening the development cycle. Although data-driven machine learning methods have been widely used in automobile noise research owing to their advantages of not requiring accurate physical modeling, learning from data, and generalization ability, they still face the challenge of insufficient accuracy in capturing key local features, such as peaks, in practical NVH engineering. To address this challenge, this paper introduces a forecasting approach that utilizes an empirical-informed neural network, which integrates a physical mechanism with a data-driven method. By deeply analyzing the transmission path of interior noise, this method embeds acoustic mechanism features such as local peaks and noise correlation into the deep neural network as physical constraints, significantly enhancing the model’s predictive performance. Experimental findings indicate that, in contrast to conventional deep learning techniques, this method develops better generalization capability with limited samples while still maintaining prediction accuracy. In the verification of specific vehicle models, this method shows clear advantages in prediction accuracy and computational efficiency, which verifies its application value in practical engineering. The main contributions of this study are the proposal of an empirical-informed neural network that embeds vibro-acoustic mechanisms into the loss function and the introduction of an adaptive weight strategy to enhance model robustness. Full article
(This article belongs to the Section Vehicle Engineering)
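The loss-design idea can be sketched as below: a data term over the full predicted interior-noise spectrum, a peak term that penalizes errors at assumed local peak frequencies, and an adaptive weight that rebalances the two from their current magnitudes; the peak indices, sizes, and weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

N_FREQ_BINS = 200
PEAK_BINS = torch.tensor([40, 85, 120])          # assumed locations of tire/structure noise peaks

class EmpiricalInformedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, target):
        data_loss = self.mse(pred, target)                       # whole-spectrum fit
        peak_loss = self.mse(pred[:, PEAK_BINS], target[:, PEAK_BINS])  # local-peak constraint
        # Adaptive weight: shift emphasis toward whichever term is currently smaller.
        w = (data_loss / (data_loss + peak_loss + 1e-8)).detach()
        return (1 - w) * data_loss + w * peak_loss

model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, N_FREQ_BINS))
features = torch.rand(8, 32)                     # e.g. road, tire, and operating-condition features
target_spectrum = torch.rand(8, N_FREQ_BINS)
loss = EmpiricalInformedLoss()(model(features), target_spectrum)
loss.backward()
print("loss:", float(loss))
```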