Search Results (1,443)

Search Parameters:
Keywords = hidden layers

17 pages, 1473 KB  
Article
AI-Driven Firmness Prediction of Kiwifruit Using Image-Based Vibration Response Analysis
by Seyedeh Fatemeh Nouri, Saman Abdanan Mehdizadeh and Yiannis Ampatzidis
Sensors 2025, 25(17), 5279; https://doi.org/10.3390/s25175279 - 25 Aug 2025
Abstract
Accurate and non-destructive assessment of fruit firmness is critical for evaluating quality and ripeness, particularly in postharvest handling and supply chain management. This study presents the development of an image-based vibration analysis system for evaluating the firmness of kiwifruit using computer vision and machine learning. In the proposed setup, 120 kiwifruits were subjected to controlled excitation in the frequency range of 200–300 Hz using a vibration motor. A digital camera captured surface displacement over time (for 20 s), enabling the extraction of key dynamic features, namely, the damping coefficient (damping is a measure of a material’s ability to dissipate energy) and natural frequency (the first peak in the frequency spectrum), through image processing techniques. Results showed that firmer fruits exhibited higher natural frequencies and lower damping, while softer, more ripened fruits showed the opposite trend. These vibration-based features were then used as inputs to a feed-forward backpropagation neural network to predict fruit firmness. The neural network consisted of an input layer with two neurons (damping coefficient and natural frequency), a hidden layer with ten neurons, and an output layer representing firmness. The model demonstrated strong predictive performance, with a coefficient of determination (R²) of 0.9951 and a root mean square error (RMSE) of 0.0185, confirming its high accuracy. This study confirms the feasibility of using vibration-induced image data combined with machine learning for non-destructive firmness evaluation. The proposed method provides a reliable and efficient alternative to traditional firmness testing techniques and offers potential for real-time implementation in automated grading and quality control systems for kiwi and other fruit types. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
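As a rough illustration of the 2–10–1 architecture described above, the sketch below trains a small feed-forward backpropagation network on synthetic damping/frequency data. The study's dataset and hyperparameters are not public, so every value here (the synthetic firmness relation, learning rate, iteration count) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's measurements (the real dataset is not public):
# firmer fruit -> higher natural frequency, lower damping coefficient.
n = 120
natural_freq = rng.uniform(200.0, 300.0, n)          # Hz
damping = rng.uniform(0.05, 0.30, n)                 # dimensionless
firmness = 0.004 * natural_freq - 1.5 * damping + rng.normal(0, 0.02, n)

# Min-max scale inputs and target to [0, 1]
X = np.column_stack([damping, natural_freq])
X = (X - X.min(0)) / (X.max(0) - X.min(0))
y = (firmness - firmness.min()) / (firmness.max() - firmness.min())
y = y.reshape(-1, 1)

# 2-10-1 feed-forward network trained by plain batch backpropagation
W1 = rng.normal(0, 0.5, (2, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)

for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                          # dLoss/dpred (constant factor folded into lr)
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ dh / n; gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
r2 = 1 - loss / np.var(y)
print(f"MSE {loss0:.4f} -> {loss:.4f}, R^2 = {r2:.3f}")
```

On this synthetic, nearly linear target the tiny network fits easily; the paper's reported R² of 0.9951 refers to its own measured data, not to this sketch.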
18 pages, 1061 KB  
Article
Using Causality-Driven Graph Representation Learning for APT Attacks Path Identification
by Xiang Cheng, Miaomiao Kuang and Hongyu Yang
Symmetry 2025, 17(9), 1373; https://doi.org/10.3390/sym17091373 - 22 Aug 2025
Abstract
In the cybersecurity attack and defense space, the “attacker” and the “defender” form a dynamic and symmetrical adversarial pair. Their strategy iterations and capability evolutions have long been in a symmetrical game of mutual restraint. From the defender’s side, modern Intrusion Detection Systems (IDSs) are introduced to counter the techniques designed by the attacker (APT attacks). One major challenge faced by IDSs is to identify complex attack paths from a vast provenance graph. By constructing an attack behavior tracking graph, the interactions between system entities can be recorded, but the malicious activities of attackers are often hidden among a large number of normal system operations. Although traditional methods can identify attack behaviors, they focus only on the surface association relationships between entities and ignore the deep causal relationships, which limits the accuracy and interpretability of detection. Existing graph anomaly detection methods usually assign the same weight to all interactions, whereas we propose a Causal Autoencoder for Graph Explanation (CAGE) based on reinforcement learning. This method extracts feature representations from the provenance graph through a graph attention network (GAT), uses Q-learning to dynamically evaluate the causal importance of edges, and highlights key causal paths through a weight-layering strategy. Experiments on three datasets selected from the DARPA TC project indicate that the precision of this method in the anomaly detection task remains above 97% on average, demonstrating excellent accuracy. Moreover, the recall values all exceed 99.5%, confirming its extremely low rate of missed detections. Full article
(This article belongs to the Special Issue Advanced Studies of Symmetry/Asymmetry in Cybersecurity)

23 pages, 8922 KB  
Article
Research on Parameter Prediction Model of S-Shaped Inlet Based on FCM-NDAPSO-RBF Neural Network
by Ye Wei, Lingfei Xiao, Xiaole Zhang, Junyuan Hu and Jie Li
Aerospace 2025, 12(8), 748; https://doi.org/10.3390/aerospace12080748 - 21 Aug 2025
Abstract
To address the inefficiencies of traditional numerical simulations and the high cost of experimental validation in the aerodynamic–stealth integrated design of S-shaped inlets for aero-engines, this study proposes a novel parameter prediction model based on fuzzy C-means (FCM) clustering and a nonlinear dynamic adaptive particle swarm optimization-enhanced radial basis function neural network (NDAPSO-RBFNN). The FCM algorithm is applied to reduce the feature dimensionality of aerodynamic parameters and determine the optimal hidden layer structure of the RBF network using clustering validity indices. Meanwhile, the NDAPSO algorithm introduces a three-stage adaptive inertia weight mechanism to balance global exploration and local exploitation effectively. Simulation results demonstrate that the proposed model significantly improves training efficiency and generalization capability. Specifically, the model achieves a root mean square error (RMSE) of 3.81×10⁻⁸ on the training set and 8.26×10⁻⁸ on the test set, demonstrating robust predictive accuracy. Furthermore, 98.3% of the predicted values fall within the y = x ± 3β confidence interval (β = 1.2×10⁻⁷). Compared with traditional PSO-RBF models, the NDAPSO-RBF network requires fewer iterations, achieves a shorter single-prediction time, and makes fewer calls to the standard-deviation computation. These results indicate that the proposed model not only provides a reliable and efficient surrogate modeling method for complex inlet flow fields but also offers a promising approach for real-time multi-objective aerodynamic–stealth optimization in aerospace applications. Full article
(This article belongs to the Section Aeronautics)
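The abstract names a three-stage adaptive inertia weight mechanism but does not spell out its schedule. The sketch below drops an illustrative piecewise schedule (explore, transition, exploit) into a plain particle swarm optimizer on a toy objective; the stage boundaries, coefficients, and objective are all assumptions, not the paper's NDAPSO:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                 # toy objective standing in for RBF training error
    return np.sum(x ** 2, axis=-1)

def inertia(t, T):
    # Illustrative three-stage schedule: strong exploration, smooth transition,
    # then strong exploitation (the paper's exact nonlinear form is not given here).
    s = t / T
    if s < 0.3:
        return 0.9
    if s < 0.7:
        return 0.9 - 0.5 * (s - 0.3) / 0.4
    return 0.4

n, dim, T = 30, 2, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_val = sphere(pos)
g = pbest[np.argmin(pbest_val)].copy()

for t in range(T):
    w = inertia(t, T)
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = np.clip(w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos), -5, 5)
    pos = pos + vel
    val = sphere(pos)
    better = val < pbest_val
    pbest[better] = pos[better]; pbest_val[better] = val[better]
    g = pbest[np.argmin(pbest_val)].copy()

print("best value:", sphere(g))
```

High inertia early keeps particles roaming the search space; the low final inertia lets the swarm contract onto the best region, which is the exploration/exploitation balance the abstract refers to.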

21 pages, 2424 KB  
Article
Soft Computing Approaches for Predicting Shade-Seeking Behavior in Dairy Cattle Under Heat Stress: A Comparative Study of Random Forests and Neural Networks
by Sergi Sanjuan, Daniel Alexander Méndez, Roger Arnau, J. M. Calabuig, Xabier Díaz de Otálora Aguirre and Fernando Estellés
Mathematics 2025, 13(16), 2662; https://doi.org/10.3390/math13162662 - 19 Aug 2025
Abstract
Heat stress is one of the main welfare and productivity problems faced by dairy cattle in Mediterranean climates. The main objective of this work is to predict heat stress in livestock from shade-seeking behavior captured by computer vision, combined with some climatic features, in a completely non-invasive way. To this end, we evaluate two soft computing algorithms—Random Forests and Neural Networks—clarifying the trade-off between accuracy and interpretability for real-world farm deployment. Data were gathered at a commercial dairy farm in Titaguas (Valencia, Spain) using overhead cameras that counted cows in the shade every 5–10 min during summer 2023. Each record contains the shaded-cow count, ambient temperature, relative humidity, and an exact timestamp. From here, three thermal indices were derived: the current THI, the previous-night mean THI, and the day-time accumulated THI. The resulting dataset covers 75 days and 6907 day-time observations. To evaluate the models’ performance, 5-fold cross-validation is used. The results show that both soft computing models outperform a single Decision Tree baseline. The best Neural Network (3 hidden layers, 16 neurons each, learning rate = 10⁻³) reaches an average RMSE of 14.78, while a Random Forest (10 trees, depth = 5) achieves 14.97 and offers the best interpretability. Daily error distributions reveal a median RMSE of 13.84 and confirm that predictions deviate less than one hour from observed shade-seeking peaks. Although the dataset came from a single farm, the results generalized well within the observed range. However, the models could not accurately predict the exact number of cows in the shade. This suggests the influence of other variables not included in the analysis (such as solar radiation or wind data), which opens the door for future research. Full article
(This article belongs to the Topic Soft Computing and Machine Learning)
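The abstract derives three THI-based features from temperature and humidity but does not state which THI formulation was used. A common NRC-style variant can be computed as follows; both the formula choice and the example records are assumptions for illustration:

```python
def thi(temp_c: float, rh_pct: float) -> float:
    """Temperature-humidity index, NRC-style formulation:
    THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26), T in Celsius, RH in %.
    The abstract does not specify which THI variant the authors used,
    so this is one common choice, not necessarily theirs."""
    t_f = 1.8 * temp_c + 32.0
    return t_f - (0.55 - 0.0055 * rh_pct) * (t_f - 26.0)

# Current THI and a day-time accumulated THI over hypothetical records
records = [(28.0, 60.0), (31.0, 55.0), (33.0, 50.0)]   # (temp degC, RH %)
values = [thi(t, h) for t, h in records]
current_thi = values[-1]
accumulated_thi = sum(values)
print(current_thi, accumulated_thi)
```

The previous-night mean THI from the abstract would simply be the mean of such values over the night-time records.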

34 pages, 4790 KB  
Article
An Explainable Approach to Parkinson’s Diagnosis Using the Contrastive Explanation Method—CEM
by Ipek Balikci Cicek, Zeynep Kucukakcali, Birgul Deniz and Fatma Ebru Algül
Diagnostics 2025, 15(16), 2069; https://doi.org/10.3390/diagnostics15162069 - 18 Aug 2025
Abstract
Background/Objectives: Parkinson’s disease (PD) is a progressive neurodegenerative disorder that requires early and accurate diagnosis. This study aimed to classify individuals with and without PD using volumetric brain MRI data and to improve model interpretability using explainable artificial intelligence (XAI) techniques. Methods: This retrospective study included 79 participants (39 PD patients, 40 controls) recruited at Inonu University Turgut Ozal Medical Center between 2013 and 2025. A deep neural network (DNN) was developed using a multilayer perceptron architecture with six hidden layers and ReLU activation functions. Seventeen volumetric brain features were used as the input. To ensure robust evaluation and prevent overfitting, a stratified five-fold cross-validation was applied, maintaining class balance in each fold. Model transparency was explored using two complementary XAI techniques: the Contrastive Explanation Method (CEM) and Local Interpretable Model-Agnostic Explanations (LIME). CEM highlights features that support or could alter the current classification, while LIME provides instance-based feature attributions. Results: The DNN model achieved high diagnostic performance with 94.1% accuracy, 98.3% specificity, 90.2% sensitivity, and an AUC of 0.97. The CEM analysis suggested that reduced hippocampal volume was a key contributor to PD classification (–0.156 PP), whereas higher volumes in the brainstem and hippocampus were associated with the control class (+0.035 and +0.150 PP, respectively). The LIME results aligned with these findings, revealing consistent feature importance (mean = 0.1945) and faithfulness (0.0269). Comparative analyses showed different volumetric patterns between groups and confirmed the DNN’s superiority over conventional machine learning models such as SVM, logistic regression, KNN, and AdaBoost. 
Conclusions: This study demonstrates that a deep learning model, enhanced with CEM and LIME, can provide both high diagnostic accuracy and interpretable insights for PD classification, supporting the integration of explainable AI in clinical neuroimaging. Full article
(This article belongs to the Special Issue Artificial Intelligence in Brain Diseases)
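The stratified five-fold split described above, which preserves the 39 PD / 40 control balance in each fold, can be sketched from scratch (fold assignment only; the DNN itself is omitted):

```python
import numpy as np

def stratified_kfold_indices(labels, k=5, seed=0):
    """Assign each sample to one of k folds while preserving class balance per fold,
    a from-scratch sketch of the stratified five-fold split the study describes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    fold = np.empty(len(labels), dtype=int)
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        fold[idx] = np.arange(len(idx)) % k   # deal class members round-robin
    return fold

# 39 PD patients (1) and 40 controls (0), matching the study's cohort sizes
labels = np.array([1] * 39 + [0] * 40)
fold = stratified_kfold_indices(labels, k=5)
for f in range(5):
    test = fold == f
    print(f, int(test.sum()), int(labels[test].sum()))   # fold size, PD count
```

Each fold ends up with 8 controls and 7 or 8 PD patients, so every validation split sees roughly the same class ratio as the full cohort.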

25 pages, 4673 KB  
Article
Dynamic Monitoring and Evaluation of Fracture Stimulation Volume Based on Machine Learning
by Xiaodong He, Weibang Wang, Luyao Wang, Jinliang Xie, Chang Li, Lu Chen, Qinzhuo Liao and Shouceng Tian
Processes 2025, 13(8), 2590; https://doi.org/10.3390/pr13082590 - 16 Aug 2025
Abstract
Traditional hydraulic-fracturing models are restricted by low computational efficiency, insufficient field data, and complex physical mechanisms, causing evaluation delays and failing to meet practical engineering needs. To address these challenges, this study develops a dynamic hydraulic-fracturing monitoring method that integrates machine learning with numerical simulation. First, this study uses GOHFER 9.5.6 software to generate 12,000 sets of fracture geometry data and constructs a large dataset for hydraulic fracturing. To improve simulation efficiency, a macro command is combined with Python 3.11 code to automate the simulation workflow, thereby expanding the data samples for the surrogate model. On this basis, a parameter sensitivity analysis is carried out to identify key input parameters, such as reservoir parameters and fracturing fluid properties, that significantly affect fracture geometry. Next, a neural-network surrogate model is established, which takes fracturing geological parameters and pumping parameters as inputs and fracture geometric parameters as outputs. Data are preprocessed using the min–max normalization method. A neural-network structure with two hidden layers is chosen, and the model is trained with the Adam optimizer to improve its predictive accuracy. The experimental results show that the efficiency of automated numerical simulation for hydraulic fracturing is significantly improved. The surrogate model achieved a prediction accuracy of over 90% and a response time of less than 10 s, a substantial efficiency improvement over traditional fracturing models. Through these technical approaches, this study not only enhances the effectiveness of fracturing but also provides a new, efficient, and accurate solution for oilfield fracturing operations. Full article
(This article belongs to the Section Energy Systems)
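The min–max preprocessing step mentioned above can be sketched as a tiny scaler with an inverse transform, which a surrogate model needs to map its normalized outputs back to physical units. The example parameter columns are hypothetical:

```python
import numpy as np

class MinMaxScaler1:
    """Minimal min-max scaler with inverse transform, illustrating the
    preprocessing the surrogate model uses (column meanings are hypothetical)."""
    def fit(self, X):
        self.lo = X.min(axis=0)
        self.span = X.max(axis=0) - self.lo
        return self
    def transform(self, X):
        return (X - self.lo) / self.span
    def inverse_transform(self, Z):
        return Z * self.span + self.lo

# Hypothetical geological/pumping parameters: [Young's modulus (GPa), pump rate (m^3/min)]
X = np.array([[20.0, 8.0], [35.0, 12.0], [50.0, 16.0]])
scaler = MinMaxScaler1().fit(X)
Z = scaler.transform(X)
print(Z)                               # each column mapped to [0, 1]
print(scaler.inverse_transform(Z))     # round-trips back to X
```

Fitting the scaler on the training split only (and reusing it for test data) is the usual discipline, so that the surrogate never sees test-set statistics.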

12 pages, 1443 KB  
Article
Identification of Selected Physical and Mechanical Properties of Cement Composites Modified with Granite Powder Using Neural Networks
by Slawomir Czarnecki
Materials 2025, 18(16), 3838; https://doi.org/10.3390/ma18163838 - 15 Aug 2025
Abstract
This study presents the development of a reliable predictive model for evaluating key physical and mechanical properties of cement-based composites modified with granite powder, a waste byproduct from granite rock cutting. The research addresses the need for more sustainable materials in the concrete industry by exploring the potential of granite powder as a supplementary cementitious material (SCM) to partially replace cement and reduce CO₂ emissions. The experimental program included standardized testing of samples containing up to 30% granite powder, focusing on compressive strength at 7, 28, and 90 days, bonding strength at 28 days, and packing density of the fresh mixture. A multilayer perceptron (MLP) artificial neural network was employed to predict these properties using four input variables: granite powder content, cement content, sand content, and water content. The network architecture, consisting of two hidden layers with 10 and 15 neurons, respectively, was selected as the most suitable for this purpose. The model achieved high predictive performance, with coefficients of determination (R²) exceeding 0.9 and mean absolute percentage errors (MAPE) below 6% for all output variables, demonstrating its robustness and accuracy. The findings confirm that granite powder not only contributes positively to concrete performance over time, but also supports environmental sustainability goals by reducing the carbon footprint associated with cement production. However, the model’s applicability is currently limited to mixtures using granite powder at up to 30% cement replacement. This research highlights the effectiveness of machine learning, specifically neural networks, for solving multi-output problems in concrete technology. The successful implementation of the MLP network in this context may encourage broader adoption of data-driven approaches in the design and optimization of sustainable cementitious composites. Full article
(This article belongs to the Special Issue Advances in Modern Cement-Based Materials for Composite Structures)
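The reported architecture (four inputs, hidden layers of 10 and 15 neurons, five predicted properties) and the MAPE metric can be illustrated with an untrained forward pass. The weights here are random, purely to show the layer shapes, and are in no way the study's trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the accuracy metric reported in the study."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# 4 inputs (granite powder, cement, sand, water content) -> 10 -> 15 -> 5 outputs
# (compressive strength at 7/28/90 d, bonding strength, packing density).
W1, b1 = rng.normal(size=(4, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 15)), np.zeros(15)
W3, b3 = rng.normal(size=(15, 5)), np.zeros(5)

def forward(X):
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h2 @ W3 + b3            # linear output layer for regression

X = rng.random((8, 4))             # 8 hypothetical mixtures
print(forward(X).shape)            # (8, 5): five predicted properties per mixture
```

A multi-output network like this shares its hidden representation across all five targets, which is what makes the single-model, multi-property prediction in the abstract possible.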

36 pages, 9430 KB  
Article
Numerical Method for Internal Structure and Surface Evaluation in Coatings
by Tomas Kačinskas and Saulius Baskutis
Inventions 2025, 10(4), 71; https://doi.org/10.3390/inventions10040071 - 13 Aug 2025
Abstract
This study introduces a MATrix LABoratory (MATLAB, version R2024b, update 1 (24.2.0.2740171))-based automated system for the detection and measurement of indication areas in coated surfaces, enhancing the accuracy and efficiency of quality control processes in metal, polymeric and thermoplastic coatings. The developed code identifies various indication characteristics in the image and provides numerical results, assesses the size and quantity of indications and evaluates conformity to ISO standards. A comprehensive testing method, involving non-destructive penetrant testing (PT) and radiographic testing (RT), allowed for an in-depth analysis of surface and internal porosity across different coating methods, including aluminum-, copper-, polytetrafluoroethylene (PTFE)- and polyether ether ketone (PEEK)-based materials. Initial findings clearly indicated a non-homogeneous surface in the obtained coatings, which were manufactured using different technologies and materials. Whereas researchers using non-destructive testing (NDT) methods typically rely on visual inspection and manual counting, the system under study automates this process. Each sample image is loaded into MATLAB and analyzed using the Image Processing Toolbox, Computer Vision Toolbox, and Statistics and Machine Learning Toolbox. The custom code performs essential tasks such as image conversion, filtering, boundary detection, layering operations and calculations. These processes are integral to rendering images with developed indications according to NDT method requirements, providing a detailed visual and numerical representation of the analysis. RT also validated the observations made through surface indication detection, revealing either the absence of hidden defects or, conversely, internal porosity correlating with surface conditions. 
Matrix and graphical representations were used to facilitate the comparison of test results, highlighting more advanced methods and materials as the superior choice for achieving optimal mechanical and structural integrity. This research contributes to addressing challenges in surface quality assurance, advancing digital transformation in inspection processes and exploring more advanced alternatives to traditional coating technologies and materials. Full article
(This article belongs to the Section Inventions and Innovation in Advanced Manufacturing)
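As a from-scratch stand-in for the counting step described above (the MATLAB toolboxes are not reproduced here), indications in an already-thresholded binary mask can be counted with simple 4-connected region growing; the mask below is a toy example, not real inspection data:

```python
import numpy as np
from collections import deque

def count_indications(mask, min_area=1):
    """Count 4-connected bright regions ("indications") in a binary mask and
    return their areas, a sketch of the labeling/measurement step; thresholding
    the raw image into `mask` is assumed to happen upstream."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    areas = []
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                area, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    areas.append(area)
    return areas

# Two separate pores: a 2x2 blob and a single pixel
mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 1:3] = True
mask[4, 4] = True
print(count_indications(mask))   # [4, 1]
```

Size and count per region are exactly what an ISO-conformity check needs: compare each area (converted to physical units via the pixel scale) against the standard's acceptance limits.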

30 pages, 2261 KB  
Article
Multilayer Perceptron Mapping of Subjective Time Duration onto Mental Imagery Vividness and Underlying Brain Dynamics: A Neural Cognitive Modeling Approach
by Matthew Sheculski and Amedeo D’Angiulli
Mach. Learn. Knowl. Extr. 2025, 7(3), 82; https://doi.org/10.3390/make7030082 - 13 Aug 2025
Abstract
According to a recent experimental phenomenology–information processing theory, the sensory strength, or vividness, of visual mental images self-reported by human observers reflects the intensive variation in subjective time duration during the process of generation of said mental imagery. The primary objective of this study was to test the hypothesis that a biologically plausible essential multilayer perceptron (MLP) architecture can validly map the phenomenological categories of subjective time duration onto levels of subjectively self-reported vividness. A secondary objective was to explore whether this type of neural network cognitive modeling approach can give insight into plausible underlying large-scale brain dynamics. To achieve these objectives, vividness self-reports and reaction times from a previously collected database were reanalyzed using multilayered perceptron network models. The input layer consisted of six levels representing vividness self-reports and a reaction time cofactor. A single hidden layer consisted of three nodes representing the salience, task positive, and default mode networks. The output layer consisted of five levels representing Vittorio Benussi’s subjective time categories. Across different models of networks, Benussi’s subjective time categories (Level 1 = very brief, 2 = brief, 3 = present, 4 = long, 5 = very long) were predicted by visual imagery vividness level 1 (=no image) to 5 (=very vivid) with over 90% success in classification accuracy, precision, recall, and F1-score. This accuracy level was maintained after 5-fold cross validation. Linear regressions, Welch’s t-test for independent coefficients, and Pearson’s correlation analysis were applied to the resulting hidden node weight vectors, obtaining evidence for strong correlation and anticorrelation between nodes. 
This study successfully mapped Benussi’s five levels of subjective time categories onto the activation patterns of a simple MLP, providing a novel computational framework for experimental phenomenology. Our results revealed structured, complex dynamics between the task positive network (TPN), the default mode network (DMN), and the salience network (SN), suggesting that the neural mechanisms underlying temporal consciousness involve flexible network interactions beyond the traditional triple network model. Full article

27 pages, 2893 KB  
Article
Neural Network-Based Estimation of Gear Safety Factors from ISO-Based Simulations
by Moslem Molaie, Antonio Zippo and Francesco Pellicano
Symmetry 2025, 17(8), 1312; https://doi.org/10.3390/sym17081312 - 13 Aug 2025
Abstract
Digital Twins (DTs) have become essential tools for the design, diagnostics, and prognostics of mechanical systems. In gearbox applications, DTs are often built using physics-based simulations guided by ISO standards. However, standards-based approaches may suffer from complexity, licensing limitations, and computational costs. The concept of symmetry is inherent in gear mechanisms, both in geometry and in operational conditions, yet practical applications often face asymmetric load distributions, misalignments, and asymmetric and symmetric nonlinear behaviors. In this study, we propose a hybrid method that integrates data-driven modeling with standard-based simulation to develop efficient and accurate digital twins for gear transmission systems. A digital twin of a spur gear transmission is generated using KISSsoft®, employing ISO standards to compute safety factors across varied geometries and load conditions. An automated MATLAB-KISSsoft® (COM-interface) enables large-scale data generation by systematically varying key input parameters such as torque, pinion speed, and center distance. This dataset is then used to train a neural network (NN) capable of predicting safety factors, with hyperparameter optimization improving the model’s predictive accuracy. Among the tested NN architectures, the model with a single hidden layer yielded the best performance, achieving maximum prediction errors below 0.01 for root and flank safety factors. More complex failure modes such as scuffing and micropitting exhibited higher maximum errors of 0.0833 and 0.0596, respectively, indicating areas for potential model refinement. Comparative analysis shows strong agreement between the NN outputs and KISSsoft® results, especially for root and flank safety factors. Performance is further validated through sensitivity analyses across seven cases, confirming the NN’s reliability as a surrogate model. 
This approach reduces simulation time while preserving accuracy, demonstrating the potential of neural networks to support real-time condition monitoring and predictive maintenance in gearbox systems. Full article
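The surrogate-plus-architecture-selection idea can be sketched cheaply with random hidden features and a least-squares readout in place of full network training. The dataset below is a hypothetical stand-in for the KISSsoft®-generated table, and the "safety factor" target is an invented smooth function, not ISO-computed values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for the generated dataset: inputs are torque (Nm),
# pinion speed (rpm), center distance (mm); the target mimics a smooth
# root safety factor. Real data would come from the MATLAB-KISSsoft interface.
X = rng.uniform([50, 500, 80], [400, 3000, 160], (400, 3))
y = 1.1 + 0.004 * X[:, 2] - 0.0002 * X[:, 1] + 0.3 * np.sin(X[:, 0] / 100)

Xn = (X - X.mean(0)) / X.std(0)
train, test = slice(0, 300), slice(300, 400)

def fit_predict(width):
    # Random hidden layer + least-squares readout: a cheap proxy for training
    # a one-hidden-layer NN, enough to illustrate hidden-width selection.
    W = rng.normal(size=(3, width)); b = rng.normal(size=width)
    H = np.tanh(Xn @ W + b)
    beta, *_ = np.linalg.lstsq(H[train], y[train], rcond=None)
    return H[test] @ beta

errors = {w: np.max(np.abs(fit_predict(w) - y[test])) for w in (2, 8, 32)}
best = min(errors, key=errors.get)
print(errors, "-> chosen width:", best)
```

Comparing maximum test error across candidate widths mirrors, in miniature, the hyperparameter optimization the study uses to pick its single-hidden-layer architecture.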

31 pages, 3266 KB  
Article
Context-Driven Recommendation via Heterogeneous Temporal Modeling and Large Language Model in the Takeout System
by Wei Deng, Dongyi Hu, Zilong Jiang, Peng Zhang and Yong Shi
Systems 2025, 13(8), 682; https://doi.org/10.3390/systems13080682 - 11 Aug 2025
Abstract
On food delivery platforms, user decisions are often driven by dynamic contextual factors such as time, intent, and lifestyle patterns. Traditional context-aware recommender systems struggle to capture such implicit signals, especially when user behavior spans heterogeneous long- and short-term patterns. To address this, we propose a context-driven recommendation framework that integrates a hybrid sequence modeling architecture with a Large Language Model for post hoc reasoning and reranking. Specifically, the solution tackles several key issues: (1) integration of multimodal features to achieve explicit context fusion through a hybrid fusion strategy; (2) introduction of a context capture layer and a context propagation layer to enable effective encoding of implicit contextual states hidden in the heterogeneous long and short term; (3) cross attention mechanisms to facilitate context retrospection, which allows implicit contexts to be uncovered; and (4) leveraging the reasoning capabilities of DeepSeek-R1 as a post-processing step to perform open knowledge-enhanced reranking. Extensive experiments on a real-world dataset show that our approach significantly outperforms strong baselines in both prediction accuracy and Top-K recommendation quality. Case studies further demonstrate the model’s ability to uncover nuanced, implicit contextual cues—such as family roles and holiday-specific behaviors—making it particularly effective for personalized, dynamic recommendations in high-frequency scenes. Full article

22 pages, 7620 KB  
Article
DSTANet: A Lightweight and High-Precision Network for Fine-Grained and Early Identification of Maize Leaf Diseases in Field Environments
by Xinyue Gao, Lili He, Yinchuan Liu, Jiaxin Wu, Yuying Cao, Shoutian Dong and Yinjiang Jia
Sensors 2025, 25(16), 4954; https://doi.org/10.3390/s25164954 - 10 Aug 2025
Abstract
Early and accurate identification of maize diseases is crucial for ensuring sustainable agricultural development. However, existing maize disease identification models face challenges including high inter-class similarity, intra-class variability, and limited capability in identifying early-stage symptoms. To address these limitations, we proposed DSTANet (decomposed spatial token aggregation network), a lightweight and high-performance model for maize leaf disease identification. In this study, we constructed a comprehensive maize leaf image dataset comprising six common disease types and healthy samples, with early and late stages of northern leaf blight and eyespot specifically differentiated. DSTANet employed MobileViT as the backbone architecture, combining the advantages of CNNs for local feature extraction with transformers for global feature modeling. To enhance lesion localization and mitigate interference from complex field backgrounds, DSFM (decomposed spatial fusion module) was introduced. Additionally, the MSTA (multi-scale token aggregator) was designed to leverage hidden-layer feature channels more effectively, improving information flow and preventing gradient vanishing. Experimental results showed that DSTANet achieved an accuracy of 96.11%, precision of 96.17%, recall of 96.11%, and F1-score of 96.14%. With only 1.9M parameters, 0.6 GFLOPs (floating point operations), and an inference speed of 170 images per second, the model meets real-time deployment requirements on edge devices. This study provided a novel and practical approach for fine-grained and early-stage maize disease identification, offering technical support for smart agriculture and precision crop management. Full article
(This article belongs to the Section Smart Agriculture)
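The accuracy, precision, recall, and F1-score reported above are the standard metrics computed from a confusion matrix and macro-averaged over the classes. A minimal sketch of that computation; the 3-class toy matrix below is illustrative, not the paper's data:

```python
import numpy as np

# Toy confusion matrix for a 3-class classifier (rows: true class, cols: predicted).
# Class labels and counts are made up for illustration only.
cm = np.array([
    [8, 1, 1],   # e.g. healthy
    [2, 7, 1],   # e.g. early-stage northern leaf blight
    [0, 1, 9],   # e.g. late-stage northern leaf blight
])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)                  # per-class precision
recall = tp / cm.sum(axis=1)                     # per-class recall
f1 = 2 * precision * recall / (precision + recall)

accuracy = tp.sum() / cm.sum()                   # overall accuracy
macro_f1 = f1.mean()                             # macro-averaged F1
```

With seven classes (six diseases plus healthy), the same code applies to a 7×7 matrix; macro-averaging weights every class equally, which is why near-identical accuracy, precision, and recall values like those reported suggest a fairly balanced test set.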

21 pages, 3338 KB  
Article
Novel Adaptive Intelligent Control System Design
by Worrawat Duanyai, Weon Keun Song, Min-Ho Ka, Dong-Wook Lee and Supun Dissanayaka
Electronics 2025, 14(15), 3157; https://doi.org/10.3390/electronics14153157 - 7 Aug 2025
Viewed by 269
Abstract
A novel adaptive intelligent control system (AICS) with learning-while-controlling capability is developed for a highly nonlinear single-input single-output plant by redesigning the conventional model reference adaptive control (MRAC) framework, originally based on first-order Lyapunov stability, and employing customized neural networks. The AICS has a simple structure consisting of two main subsystems: a meta-learning-triggered mechanism-based physics-informed neural network (MLTM-PINN) for plant identification and a self-tuning neural network controller (STNNC). This structure, featuring the triggered mechanism, balances high controllability against control efficiency. The MLTM-PINN incorporates the following: (I) a single self-supervised physics-informed neural network (PINN) that needs no labelled data, enabling online learning during control; (II) a meta-learning-triggered mechanism to ensure consistent control performance; (III) transfer learning combined with meta-learning for finely tailored initialization and quick adaptation to input changes. To resolve the conflict between streamlining the AICS’s structure and enhancing its controllability, the STNNC functionally integrates the nonlinear controller and adaptation laws of the MRAC system. Three STNNC design scenarios are tested with transfer learning and/or hyperparameter optimization (HPO) using a Gaussian process tailored for Bayesian optimization (GP-BO): (scenario 1) applying transfer learning without HPO; (scenario 2) optimizing the learning rate in combination with transfer learning; and (scenario 3) optimizing both the learning rate and the number of neurons in the hidden layers without transfer learning. Unlike in scenario 1, no quick adaptation effect in the MLTM-PINN is observed in the other scenarios, as they struggle with the dynamic input evolution introduced by the HPO-based STNNC design. Scenario 2 demonstrates the best synergy between controllability (best control response) and efficiency (minimal activation frequency of meta-learning and fewer HPO trials). Full article
(This article belongs to the Special Issue Nonlinear Intelligent Control: Theory, Models, and Applications)
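The conventional first-order Lyapunov-based MRAC framework that the AICS redesigns can be sketched for a scalar plant. Everything below (plant parameters, adaptation gain, reference signal) is a textbook-style illustration, not the paper's plant or controller:

```python
import numpy as np

# Classic Lyapunov-rule MRAC for a first-order plant  dy/dt = a*y + b*u,
# tracking the reference model  dym/dt = -am*ym + am*r.
# Plant values, gains, and the reference are illustrative only.
a, b = 1.0, 1.0           # plant parameters, unknown to the controller
am = 2.0                  # reference-model pole
gamma = 5.0               # adaptation gain
dt, steps = 1e-3, 50000   # explicit Euler integration, 50 s

y = ym = 0.0
th1 = th2 = 0.0           # adaptive feedforward / feedback gains
r = 1.0                   # constant reference command

for _ in range(steps):
    u = th1 * r + th2 * y             # control law
    e = y - ym                        # tracking error
    # Lyapunov-based adaptation laws (sign(b) assumed known and positive);
    # these guarantee e -> 0 for the continuous-time system.
    th1 -= gamma * e * r * dt
    th2 -= gamma * e * y * dt
    # Euler step of plant and reference model
    y += (a * y + b * u) * dt
    ym += (-am * ym + am * r) * dt

final_error = abs(y - ym)
```

The adaptation laws play the role of the MRAC "adaptation laws" that the paper's STNNC absorbs into a neural controller; in the sketch the ideal gains would be th1* = am/b and th2* = -(a + am)/b.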

22 pages, 6288 KB  
Article
The Pontoon Design Optimization of a SWATH Vessel for Resistance Reduction
by Chun-Liang Tan, Chi-Min Wu, Chia-Hao Hsu and Shiu-Wu Chau
J. Mar. Sci. Eng. 2025, 13(8), 1504; https://doi.org/10.3390/jmse13081504 - 5 Aug 2025
Viewed by 285
Abstract
This study applies a deep neural network (DNN) to optimize the 22.5 m pontoon hull form of a small waterplane area twin hull (SWATH) vessel with fin stabilizers, aiming to reduce calm water resistance at a Froude number of 0.8 under even keel conditions. The vessel’s resistance is simplified into three components: pontoon, strut, and fin stabilizer. Four design parameters define the pontoon geometry: fore-body length, aft-body length, fore-body angle, and aft-body angle. Computational fluid dynamics (CFD) simulations using STAR-CCM+ 2302 provide 1400 resistance data points, including fin stabilizer lift and drag forces at varying angles of attack. These are used to train a DNN in MATLAB 2018a with five hidden layers containing six, eight, nine, eight, and seven neurons. K-fold cross-validation ensures model stability and aids in identifying optimal design parameters. The optimized hull has a 7.8 m fore-body, 6.8 m aft-body, 10° fore-body angle, and 35° aft-body angle. It achieves a 2.2% resistance reduction compared to the baseline. The improvement is mainly due to a reduced Munk moment, which lowers the angle of attack needed by the fin stabilizer, thereby reducing drag. The optimized design provides cost-efficient construction and enhanced payload capacity. This study demonstrates the effectiveness of combining CFD and deep learning for hull form optimization. Full article
(This article belongs to the Section Ocean Engineering)
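Two ingredients of the abstract above, the 4-input DNN with hidden layers of six, eight, nine, eight, and seven neurons, and the k-fold index split, can be sketched in a few lines of numpy (k = 5, tanh activations, and the weight initialization are assumptions; the paper trains its DNN in MATLAB 2018a):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths: 4 design parameters in (fore/aft-body lengths and angles),
# the paper's five hidden layers, 1 output (resistance).
sizes = [4, 6, 8, 9, 8, 7, 1]

def init_params(sizes, rng):
    # He-style random initialization (an assumption, not from the paper)
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # tanh hidden layers, linear output layer
    for w, b in params[:-1]:
        x = np.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def kfold_indices(n, k, rng):
    # Shuffle sample indices, then cut into k near-equal validation folds
    return np.array_split(rng.permutation(n), k)

X = rng.random((100, 4))          # placeholder design-parameter samples
folds = kfold_indices(len(X), 5, rng)
params = init_params(sizes, rng)
preds = forward(params, X)
```

In k-fold cross-validation each fold serves once as the validation set while the remaining folds train the network; averaging the per-fold validation MSE is what stabilizes the model-selection step described above.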

24 pages, 4314 KB  
Article
Hyperparameter Optimization of Neural Networks Using Grid Search for Predicting HVAC Heating Coil Performance
by Yosef Jaber, Pasidu Dharmasena, Adam Nassif and Nabil Nassif
Buildings 2025, 15(15), 2753; https://doi.org/10.3390/buildings15152753 - 5 Aug 2025
Viewed by 468
Abstract
Heating, Ventilation, and Air Conditioning (HVAC) systems account for a significant portion of global energy use, yet they are often operated without optimized control strategies. This study explores the application of deep learning to accurately model heating system behavior as a foundation for predictive control and energy-efficient HVAC operation. Experimental data were collected under controlled laboratory conditions, and 288 unique hyperparameter configurations were developed. Each configuration was tested three times, resulting in a total of 864 artificial neural network models. Five key hyperparameters were varied systematically: number of epochs, network size, network shape, learning rate, and optimizer. The best-performing model achieved a mean squared error of 0.469 and featured 17 hidden layers in a left-triangle architecture, trained for 500 epochs with a learning rate of 5 × 10⁻⁵ and the Adam optimizer. The results highlight the importance of hyperparameter tuning in improving model accuracy. Future research should extend the analysis to incorporate cooling operation and real-world building operation data for broader applicability. Full article
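A grid search like the one described is just a Cartesian product over the five hyperparameter value lists. The lists below are illustrative placeholders (the abstract does not enumerate the actual values tested), chosen only so the counts match the reported 288 configurations and 864 total training runs:

```python
from itertools import product

# Hypothetical value lists for the five hyperparameters; only the
# resulting counts (3*4*3*4*2 = 288) are taken from the abstract.
grid = {
    "epochs": [100, 300, 500],
    "network_size": [5, 9, 13, 17],          # number of hidden layers
    "network_shape": ["rectangle", "left_triangle", "right_triangle"],
    "learning_rate": [1e-4, 5e-5, 1e-5, 5e-6],
    "optimizer": ["adam", "sgd"],
}

# Enumerate every configuration as a dict of hyperparameter settings
configs = [dict(zip(grid, values)) for values in product(*grid.values())]

repeats = 3                                  # each configuration trained 3 times
total_runs = len(configs) * repeats
```

Running each configuration several times and averaging, as the study does, separates the effect of the hyperparameters from the run-to-run variance of random weight initialization.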
