Search Results (2,174)

Search Parameters:
Keywords = machine reasoning

34 pages, 3526 KB  
Article
Geometric Reasoning in the Embedding Space
by David Mojžíšek, Jan Hůla, Jiří Janeček, David Herel and Mikoláš Janota
Mach. Learn. Knowl. Extr. 2025, 7(3), 93; https://doi.org/10.3390/make7030093 - 2 Sep 2025
Abstract
While neural networks can solve complex geometric problems, as demonstrated by systems like AlphaGeometry, we have limited understanding of how they internally represent and reason about spatial relationships. In this work, we investigate how neural networks develop internal spatial understanding by training Graph Neural Networks and Transformers to predict point positions on a discrete 2D grid from geometric constraints that describe hidden figures. We show that both models develop interpretable internal representations that mirror the geometric structure of the problems they solve. Specifically, we observe that point embeddings self-organize into 2D grid structures during training, and during inference, the models iteratively construct the hidden geometric figures within their embedding spaces. Our analysis reveals how reasoning complexity correlates with prediction accuracy, and shows that models solve constraints through an iterative refinement process, which might resemble continuous optimization. We also find that Graph Neural Networks prove more suitable than Transformers for this type of structured constraint reasoning and scale more effectively to larger problems. These findings provide initial insights into how neural networks can develop structured understanding and contribute to their interpretability. Full article
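
A minimal sketch of the kind of embedding probe the abstract describes (illustrative only, with synthetic stand-in embeddings rather than a trained model): PCA-project the per-point embeddings and check whether pairwise distances in the projection track true grid distances.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for learned point embeddings: one 64-D vector per cell
# of an 8x8 grid, simulated as a noisy linear image of the grid coordinates.
rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in range(8) for y in range(8)], dtype=float)
mixing = rng.normal(size=(2, 64))
embeddings = grid @ mixing + 0.1 * rng.normal(size=(64, 64))

# If embeddings self-organize into a 2D grid, distances in a 2-component
# PCA projection should correlate strongly with true grid distances.
proj = PCA(n_components=2).fit_transform(embeddings)
d_true = np.linalg.norm(grid[:, None] - grid[None, :], axis=-1).ravel()
d_proj = np.linalg.norm(proj[:, None] - proj[None, :], axis=-1).ravel()
print("distance correlation:", np.corrcoef(d_true, d_proj)[0, 1])
```
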
21 pages, 852 KB  
Article
Classifying XAI Methods to Resolve Conceptual Ambiguity
by Lynda Dib and Laurence Capus
Technologies 2025, 13(9), 390; https://doi.org/10.3390/technologies13090390 - 1 Sep 2025
Abstract
This article provides an in-depth review of the concepts of interpretability and explainability in machine learning, which are two essential pillars for developing transparent, responsible, and trustworthy artificial intelligence (AI) systems. As algorithms become increasingly complex and are deployed in sensitive domains, the need for interpretability has grown. However, the ongoing confusion between interpretability and explainability has hindered the adoption of clear methodological frameworks. To address this conceptual ambiguity, we draw on the formal distinction introduced by Dib, which rigorously separates interpretability from explainability. Based on this foundation, we propose a revised classification of explanatory approaches structured around three complementary axes: intrinsic vs. extrinsic, specific vs. agnostic, and local vs. global. Unlike many existing typologies that are limited to a single dichotomy, our framework provides a unified perspective that facilitates the understanding, comparison, and selection of methods according to their application context. We illustrate these elements through an experiment on the Breast Cancer dataset, where several models are analyzed: some through their intrinsically interpretable characteristics (logistic regression, decision tree) and others using post hoc explainability techniques such as treeinterpreter for random forests. Additionally, the LIME method is applied even to interpretable models to assess the relevance and robustness of the locally generated explanations. This contribution aims to structure the field of explainable AI (XAI) more rigorously, supporting a reasoned, contextualized, and operational use of explanatory methods. Full article
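
A minimal sketch of the experiment's flavor, assuming scikit-learn's bundled breast cancer data and the lime package: a post hoc LIME explanation is generated even for an intrinsically interpretable logistic regression, as the article proposes.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# An intrinsically interpretable model, wrapped with scaling for convergence.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
model.fit(X_tr, y_tr)

# Post hoc local explanation of one test instance, as in the article's setup.
explainer = LimeTabularExplainer(
    X_tr, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
exp = explainer.explain_instance(X_te[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top local feature contributions
```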

36 pages, 8964 KB  
Article
Verified Language Processing with Hybrid Explainability
by Oliver Robert Fox, Giacomo Bergami and Graham Morgan
Electronics 2025, 14(17), 3490; https://doi.org/10.3390/electronics14173490 - 31 Aug 2025
Abstract
The volume and diversity of digital information have led to a growing reliance on Machine Learning (ML) techniques, such as Natural Language Processing (NLP), for interpreting and accessing appropriate data. While vector and graph embeddings represent data for similarity tasks, current state-of-the-art pipelines lack guaranteed explainability, failing to accurately determine similarity for given full texts. These considerations can also be applied to classifiers exploiting generative language models with logical prompts, which fail to correctly distinguish between logical implication, indifference, and inconsistency, despite being explicitly trained to recognise the first two classes. We present a novel pipeline designed for hybrid explainability to address this. Our methodology combines graphs and logic to produce First-Order Logic (FOL) representations, creating machine- and human-readable representations through Montague Grammar (MG). The preliminary results indicate the effectiveness of this approach in accurately capturing full text similarity. To the best of our knowledge, this is the first approach to differentiate between implication, inconsistency, and indifference for text classification tasks. To address the limitations of existing approaches, we use three self-contained datasets annotated for the former classification task to determine the suitability of these approaches in capturing sentence structure equivalence, logical connectives, and spatiotemporal reasoning. We also use these data to compare the proposed method with language models pre-trained for detecting sentence entailment. The results show that the proposed method outperforms state-of-the-art models, indicating that natural language understanding cannot be easily generalised by training over extensive document corpora. This work offers a step toward more transparent and reliable Information Retrieval (IR) from extensive textual data. Full article
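
As a toy illustration of the three-way distinction the abstract targets (not the authors' Montague Grammar pipeline), hand-written FOL premises can be tested for implication, inconsistency, or indifference with NLTK's resolution prover, assuming the nltk package is installed:

```python
from nltk.inference import ResolutionProver
from nltk.sem import Expression

read = Expression.fromstring
prover = ResolutionProver()

# Hand-written FOL premises standing in for MG-derived representations.
premises = [read("all x.(cat(x) -> animal(x))"), read("cat(tom)")]

def relation(hypothesis: str) -> str:
    """Implication if the premises prove h; inconsistency if they prove
    not-h; otherwise indifference."""
    if prover.prove(read(hypothesis), premises):
        return "implication"
    if prover.prove(read(f"-({hypothesis})"), premises):
        return "inconsistency"
    return "indifference"

print(relation("animal(tom)"))       # implication
print(relation("-cat(tom)"))         # inconsistency
print(relation("likes(tom, fish)"))  # indifference
```
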
23 pages, 8316 KB  
Article
Response of Reinforced Concrete Columns Embedded with PET Bottles Under Axial Compression
by Sadiq Al Bayati and Sami W. Tabsh
Sustainability 2025, 17(17), 7825; https://doi.org/10.3390/su17177825 - 30 Aug 2025
Abstract
This study explores the potential use of Polyethylene Terephthalate (PET) plastic bottles as void makers in short reinforced concrete columns under pure axial compression. Such a scheme promotes sustainability by decreasing the consumption of concrete and reducing the pollution that comes with the disposal of PET bottles. The experimental component of this study consisted of testing 16 reinforced concrete columns divided into two groups based on cross-section dimensions. One group contained eight columns 900 mm long with a net cross-sectional area of about 40,000 mm², while the second group contained eight columns 1100 mm long with a net cross-sectional area of about 62,500 mm². The diameter of the void was 100 mm within the small cross-section group and 265 mm within the large cross-section group. The experimental program included pairs of solid and corresponding voided specimens, with consideration of the size of the longitudinal steel reinforcement, lateral tie spacing, and concrete compressive strength. The tests were conducted using a universal testing machine under displacement-controlled loading conditions with the help of strain gauges and Linear Variable Differential Transformers (LVDTs). Analysis of the test results showed that columns embedded with a small void occupying about 30% of the core area exhibited reductions of 9% in ultimate capacity, 14% in initial stiffness, 20% in ductility, and 1% in residual strength. On the other hand, columns containing a large void occupying about 60% of the core area demonstrated reductions of 24% in ultimate capacity, 34% in initial stiffness, and 26% in ductility, although residual strength slightly increased by 5%. The deficiency in structural response in the latter case arises because the void occupies a significant fraction of the concrete core. The theoretical part of this study showed that the ACI 318 code provisions can reasonably predict the uniaxial compressive strength of columns embedded with PET bottles if the void does not occupy more than 30% of the concrete core. This study confirmed that short columns embedded with relatively small voids made from PET bottles and subjected to pure axial compression strike a reasonable balance between sustainability benefits and the accompanying structural performance tradeoff. Full article
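
For reference, the ACI 318 check mentioned above rests on the standard nominal axial strength expression for tied columns; subtracting the void area from the gross section, as below, is an assumption consistent with the study's net-area approach, and the notation is the code's, not the paper's.

```latex
% Nominal axial strength per ACI 318, with the void area A_v removed
% from the gross area A_g; \phi = 0.65 for tied columns.
P_0 = 0.85\, f'_c \left( A_g - A_v - A_{st} \right) + f_y A_{st},
\qquad
\phi P_{n,\mathrm{max}} = 0.80\, \phi P_0
```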

30 pages, 2137 KB  
Review
A SPAR-4-SLR Systematic Review of AI-Based Traffic Congestion Detection: Model Performance Across Diverse Data Types
by Doha Bakir, Khalid Moussaid, Zouhair Chiba, Noreddine Abghour and Amina El omri
Smart Cities 2025, 8(5), 143; https://doi.org/10.3390/smartcities8050143 - 30 Aug 2025
Abstract
Traffic congestion remains a major urban challenge, impacting economic productivity, environmental sustainability, and commuter well-being. This systematic review investigates how artificial intelligence (AI) techniques contribute to detecting traffic congestion. Following the SPAR-4-SLR protocol, we analyzed 44 peer-reviewed studies covering three data categories—spatiotemporal, probe, and hybrid/multimodal—and four AI model types—shallow machine learning (SML), deep learning (DL), probabilistic reasoning (PR), and hybrid approaches. Each model category was evaluated against metrics such as accuracy, the F1-score, computational efficiency, and deployment feasibility. Our findings reveal that SML techniques, particularly decision trees combined with optical flow, are optimal for real-time, low-resource applications. CNN-based DL models excel in handling unstructured and variable environments, while hybrid models offer improved robustness through multimodal data fusion. Although PR methods are less common, they add value when integrated with other paradigms to address uncertainty. This review concludes that no single AI approach is universally the best; rather, model selection should be aligned with the data type, application context, and operational constraints. This study offers actionable guidance for researchers and practitioners aiming to build scalable, context-aware AI systems for intelligent traffic management. Full article
(This article belongs to the Special Issue Cost-Effective Transportation Planning for Smart Cities)

14 pages, 2017 KB  
Article
Multiclass Classification of Coal Gangue Under Different Light Sources and Illumination Intensities
by Chunxia Zhou, Yeshuo Xi, Xiaolu Sun, Weinong Liang, Jiandong Fang, Guanghui Wang and Haijun Zhang
Minerals 2025, 15(9), 921; https://doi.org/10.3390/min15090921 - 29 Aug 2025
Abstract
As a solid mixture discharged during coal production, coal gangue possesses comprehensive utilization potential. Efficient sorting and pre-enrichment through classification are crucial for green mining practices. This study categorizes coal gangue into four types—residual coal (RC), gray gangue (GG), red gangue (RG), and white gangue (WG)—based on their apparent color and utilization properties. The research systematically analyzed how different light sources and illumination intensities affect the visual characteristics of these gangue types. The results indicate that white light sources most accurately reproduce the true coloration and texture features of coal gangue, with optimal textural clarity achieved at moderate illumination levels. Different colored light sources selectively enhance spectral reflectance, and red light significantly improves RG recognition. Support vector machine (SVM)-based classification experiments demonstrate that white light sources achieve optimal performance under moderate illumination (23,000 lux) with Macro-F1 = 0.90, a 15.38% improvement over other conditions. These findings reveal that appropriate matching of light source and illumination intensity can substantially enhance the accuracy of visual recognition of coal gangue, providing valuable optimization guidance for future precise classification applications. Full article
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)
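
A minimal sketch of the SVM classification stage on synthetic stand-in features; real inputs would be color/texture descriptors extracted under a given light source and illumination level.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for image-derived features under one lighting
# condition; labels: 0=RC, 1=GG, 2=RG, 3=WG.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12)) + np.repeat(np.arange(4), 100)[:, None] * 0.8
y = np.repeat(np.arange(4), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)

# Macro-F1 averages the per-class F1 scores equally, as reported in the study.
print("Macro-F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```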

17 pages, 8385 KB  
Article
Flow Field Simulation and Experimental Study of Electrode-Assisted Oscillating Electrical Discharge Machining in the Cf-ZrB2-SiC Micro-Blind Hole
by Chuanyang Ge, Sirui Gong, Junbo He, Kewen Wang, Jiahao Xiu and Zhenlong Wang
Materials 2025, 18(17), 3944; https://doi.org/10.3390/ma18173944 - 22 Aug 2025
Abstract
In the micro-EDM blind-hole machining of Cf-ZrB2-SiC ceramics, defects such as bottom-surface protrusion and machining fillets are often encountered. The implementation of an electrode-assisted oscillating device has proven effective in improving machining outcomes. To unravel the fundamental reasons behind the optimization enabled by this auxiliary oscillating device, this paper presents fluid simulation research that quantitatively compares machining-gap flow field characteristics and debris motion behaviors with and without the assistance of the oscillating device. Firstly, this paper briefly describes the characteristics of Cf-ZrB2-SiC discharge products and the flow field deficiencies of conventional machining, and introduces the working principle of the electrode-assisted oscillation device to establish the background and objectives of the simulation study. Subsequently, simulation models were established for both conventional machining and oscillating machining based on actual processing conditions, and CFD numerical simulations were conducted to compare flow field differences between the two conditions. The results demonstrate that, compared to conventional machining, electrode oscillation not only increases the maximum velocity of the working fluid by nearly 32% but also provides a larger debris accommodation space, effectively preventing secondary discharge. Regarding debris agglomeration, oscillating machining resolves the low-velocity zone issues present in the conventional mode, increasing debris velocity from 0 mm/s to 7.5 mm/s and ensuring continuous debris motion. Furthermore, the discrete phase model (DPM) was used to analyze particle distribution and motion velocities, confirming that vortex effects form within the hole under oscillating conditions. These vortices effectively draw bottom debris outward, preventing local accumulation. Finally, from the perspective of debris distribution, the formation mechanisms of the micro-hole morphology and the tool electrode wear patterns were explained. Full article

14 pages, 3644 KB  
Systematic Review
Artificial Intelligence Models for Predicting Outcomes in Spinal Metastasis: A Systematic Review and Meta-Analysis
by Vivek Sanker, Prachi Dawer, Alexander Thaller, Zhikai Li, Philip Heesen, Srinath Hariharan, Emil O. R. Nordin, Maria Jose Cavagnaro, John Ratliff and Atman Desai
J. Clin. Med. 2025, 14(16), 5885; https://doi.org/10.3390/jcm14165885 - 20 Aug 2025
Abstract
Background: Spinal metastases can cause significant impairment of neurological function and quality of life. Hence, personalized clinical decision-making based on prognosis and likely outcome is desirable. This review assesses the effectiveness of AI in predicting complications and treatment outcomes for patients with spinal metastases. Methods: A thorough search was carried out through the PubMed, Scopus, Web of Science, Embase, and Cochrane databases up until 27 January 2025. Studies that used AI-based models to predict outcomes for adult patients with spinal metastases were included. Three reviewers independently extracted the data, and screening was conducted in accordance with PRISMA principles. AUC results were pooled using a random-effects model, and the PROBAST tool was used to evaluate study quality. Results: Forty-seven articles totaling 25,790 patients were included. For training, internal validation, and external validation, the weighted average AUCs were 0.762, 0.876, and 0.810, respectively. The Skeletal Oncology Research Group machine learning algorithms (SORG-MLAs) were the most frequently externally validated models, consistently producing AUCs > 0.84 for 90-day and 1-year mortality. Radiomics-based models showed promise in preoperative planning, especially for radiation outcomes and hidden blood loss. Most research concentrated on breast, lung, and prostate malignancies, which limits applicability to less common tumors. Conclusions: AI models have shown reasonable accuracy in predicting mortality, ambulatory status, blood loss, and surgical complications in patients with spinal metastases. Wider implementation necessitates additional validation, data standardization, and evaluation of ethical and regulatory frameworks. Future work should concentrate on creating multimodal, hybrid models and assessing their practical applications. Full article
(This article belongs to the Special Issue Recent Advances in Spine Tumor Diagnosis and Treatment)
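
The AUC pooling step is a standard random-effects computation; a sketch of the DerSimonian-Laird estimator with illustrative numbers, not the review's data:

```python
import numpy as np

def dersimonian_laird(theta, var):
    """Random-effects pooled estimate of per-study effects (e.g., AUCs)."""
    theta, var = np.asarray(theta, float), np.asarray(var, float)
    w_fixed = 1.0 / var
    theta_fixed = np.sum(w_fixed * theta) / np.sum(w_fixed)
    q = np.sum(w_fixed * (theta - theta_fixed) ** 2)   # Cochran's Q
    df = len(theta) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w = 1.0 / (var + tau2)                             # random-effects weights
    pooled = np.sum(w * theta) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Illustrative per-study AUCs and sampling variances.
print(dersimonian_laird([0.84, 0.88, 0.79, 0.91],
                        [0.0009, 0.0016, 0.0025, 0.0012]))
```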

25 pages, 2127 KB  
Perspective
Making AI Tutors Empathetic and Conscious: A Needs-Driven Pathway to Synthetic Machine Consciousness
by Earl Woodruff
AI 2025, 6(8), 193; https://doi.org/10.3390/ai6080193 - 19 Aug 2025
Abstract
As large language model (LLM) tutors evolve from scripted helpers into adaptive educational partners, their capacity for self-regulation, ethical decision-making, and internal monitoring will become increasingly critical. This paper introduces the Needs-Driven Consciousness Framework (NDCF) as a novel, integrative architecture that combines Dennett’s multiple drafts model, Damasio’s somatic marker hypothesis, and Tulving’s tripartite memory system into a unified motivational design for synthetic consciousness. The NDCF defines three core regulators, specifically Survive (system stability and safety), Thrive (autonomy, competence, relatedness), and Excel (creativity, ethical reasoning, long-term purpose). In addition, there is a proposed supervisory Protect layer that detects value drift and overrides unsafe behaviours. The core regulators compute internal need satisfaction states and urgency gradients, feeding into a softmax-based control system for context-sensitive action selection. The framework proposes measurable internal signals (e.g., utility gradients, conflict intensity Ω), behavioural signatures (e.g., metacognitive prompts, pedagogical shifts), and three falsifiable predictions for educational AI testbeds. By embedding these layered needs directly into AI governance, the NDCF offers (i) a psychologically and biologically grounded model of emergent machine consciousness, (ii) a practical approach to building empathetic, self-regulating AI tutors, and (iii) a testable platform for comparing competing consciousness theories through implementation. Ultimately, the NDCF provides a path toward the development of AI tutors that are capable of transparent reasoning, dynamic adaptation, and meaningful human-like relationships, while maintaining safety, ethical coherence, and long-term alignment with human well-being. Full article
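
The control step reads as standard softmax action selection over per-regulator urgencies; a schematic sketch in which the regulator names follow the framework but the urgency values and temperature are invented:

```python
import numpy as np

def select_action(urgency, temperature=0.5, rng=np.random.default_rng(0)):
    """Softmax over per-regulator urgency gradients -> action probabilities."""
    z = np.asarray(list(urgency.values())) / temperature
    p = np.exp(z - z.max())   # subtract max for numerical stability
    p /= p.sum()
    names = list(urgency)
    return names[rng.choice(len(names), p=p)], dict(zip(names, p.round(3)))

# Invented urgency values for the three core regulators.
urgency = {"survive": 0.2, "thrive": 1.1, "excel": 0.6}
print(select_action(urgency))  # most probability mass falls on 'thrive'
```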

23 pages, 1553 KB  
Article
Assessing Chatbot Acceptance in Policyholder’s Assistance Through the Integration of Explainable Machine Learning and Importance–Performance Map Analysis
by Jaume Gené-Albesa and Jorge de Andrés-Sánchez
Electronics 2025, 14(16), 3266; https://doi.org/10.3390/electronics14163266 - 17 Aug 2025
Abstract
Companies are increasingly giving more attention to chatbots as an innovative solution to transform the customer service experience, redefining how they interact with users and optimizing their support processes. This study analyzes the acceptance of conversational robots in customer service within the insurance sector, using a conceptual model grounded in well-established information systems adoption frameworks and implemented with a combination of decision tree-based machine learning techniques and importance–performance map analysis (IPMA). The intention to interact with a chatbot is explained by performance expectancy (PE), effort expectancy (EE), social influence (SI), and trust (TR). For the analysis, three machine learning methods are applied: decision tree regression (DTR), random forest (RF), and extreme gradient boosting (XGBoost). While the architecture of DTR provides a highly visual and intuitive explanation of the intention to use chatbots, its generalization through RF and XGBoost enhances the model's explanatory power. The application of Shapley additive explanations (SHAP) to the best-performing model, RF, reveals a hierarchy of relevance among the explanatory variables. We find that TR is the most influential variable. In contrast, PE appears to be the least relevant factor in the acceptance of chatbots. IPMA suggests that SI, TR, and EE all deserve special attention. While the prioritization of TR and EE may be justified by their higher importance, SI stands out as the variable with the lowest performance, indicating the greatest room for improvement. In contrast, PE not only requires less attention; it may even be reasonable to reallocate efforts away from improving PE to enhance the performance of the more critical variables. Full article
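
A minimal sketch of the SHAP step using the shap package on a random forest. The survey responses are simulated so that TR dominates, mirroring the reported ranking; the variable names PE, EE, SI, and TR follow the abstract, and everything else is invented.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for 7-point survey constructs; TR is made the
# strongest driver of intention, mirroring the article's finding.
rng = np.random.default_rng(0)
X = rng.uniform(1, 7, size=(300, 4))  # columns: PE, EE, SI, TR
y = (0.1 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + 0.6 * X[:, 3]
     + 0.2 * rng.normal(size=300))

model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Mean |SHAP| per feature yields the importance ranking discussed above.
for name, imp in zip(["PE", "EE", "SI", "TR"], np.abs(shap_values).mean(0)):
    print(f"{name}: {imp:.3f}")
```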

17 pages, 1234 KB  
Article
Avalanche Hazard Prediction in East Kazakhstan Using Ensemble Machine Learning Algorithms
by Yevgeniy Fedkin, Natalya Denissova, Gulzhan Daumova, Ruslan Chettykbayev and Saule Rakhmetullina
Algorithms 2025, 18(8), 505; https://doi.org/10.3390/a18080505 - 13 Aug 2025
Abstract
The study is devoted to the construction of an avalanche susceptibility map based on ensemble machine learning algorithms (random forest, XGBoost, LightGBM, gradient boosting machines, AdaBoost, NGBoost) for the conditions of the East Kazakhstan region. To train these models, data were collected on avalanche path profiles, meteorological conditions, and historical avalanche events. The quality of the trained machine learning models was assessed using metrics such as accuracy, precision, true positive rate (recall), and F1-score. The obtained metrics indicated that the trained models achieved reasonably accurate forecasting performance (forecast accuracy from 67% to 73.8%). ROC curves were also constructed for each model; the resulting AUCs showed acceptable levels (from 0.57 to 0.73), which also indicated that the presented models could be used to predict avalanche danger. In addition, for each machine learning model, we determined the importance of the indicators used to predict avalanche danger. This analysis showed that the most significant indicators were meteorological data, namely temperature and snow cover level in avalanche paths. Among the indicators characterizing the avalanche path profiles, the most important were the minimum and maximum slope elevations. Thus, within the framework of this study, a reasonably accurate model was built from geospatial and meteorological data that allows potentially dangerous slope areas to be identified. These results can support territorial planning, the design of protective infrastructure, and the development of early warning systems to mitigate avalanche risks. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
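
A sketch of the evaluation loop on synthetic data, with scikit-learn's built-in ensembles standing in for the XGBoost, LightGBM, and NGBoost models named above; the features play the role of avalanche-path, meteorological, and snow-cover indicators.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in: most days have no avalanche event.
X, y = make_classification(n_samples=1000, n_features=12,
                           weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0),
              AdaBoostClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(type(model).__name__,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred):.3f}",
          f"rec={recall_score(y_te, pred):.3f}",
          f"F1={f1_score(y_te, pred):.3f}",
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```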

43 pages, 5258 KB  
Article
Twin Self-Supervised Learning Framework for Glaucoma Diagnosis Using Fundus Images
by Suguna Gnanaprakasam and Rolant Gini John Barnabas
Appl. Syst. Innov. 2025, 8(4), 111; https://doi.org/10.3390/asi8040111 - 11 Aug 2025
Abstract
Glaucoma is a serious eye condition that damages the optic nerve and affects the transmission of visual information to the brain. It is the second leading cause of blindness worldwide. With deep learning, CAD systems have shown promising results in diagnosing glaucoma but mostly rely on small labeled datasets. Annotated fundus image datasets improve deep learning predictions by aiding pattern identification but require extensive curation. In contrast, unlabeled fundus images are more accessible. The proposed method employs a semi-supervised learning approach to utilize both labeled and unlabeled data effectively. It follows traditional supervised training with the generation of pseudo-labels for unlabeled data and incorporates self-supervised techniques that eliminate the need for manual annotation. It uses a twin self-supervised learning approach to improve glaucoma diagnosis by integrating pseudo-labels from one model into another self-supervised model for effective detection. A self-supervised patch-based exemplar CNN generates pseudo-labels in the first stage. These pseudo-labeled data, combined with labeled data, train a convolutional auto-encoder classification model in the second stage to identify glaucoma features. A support vector machine classifier handles the final classification of glaucoma, achieving 98% accuracy and 0.98 AUC on the internal, same-source combined fundus image datasets. The model also generalizes reasonably well to external (fully unseen) data, achieving an AUC of 0.91 on the CRFO dataset and 0.87 on the Papilla dataset. These results demonstrate the method's effectiveness, robustness, and adaptability in addressing the scarcity of labeled fundus data, and can support improved health outcomes. Full article
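
The pseudo-labeling stage can be sketched generically on synthetic data, with a linear model standing in for the paper's patch-based exemplar CNN and auto-encoder classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: a small labeled set and a large unlabeled pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:200], y[:200], X[200:]

# Stage 1: train on labeled data, then pseudo-label confident unlabeled samples.
base = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
proba = base.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.95
X_pseudo, y_pseudo = X_unlab[confident], proba[confident].argmax(axis=1)

# Stage 2: retrain on labeled + pseudo-labeled data (the article feeds these
# into a convolutional auto-encoder classifier; a linear model stands in here).
final = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_lab, X_pseudo]), np.concatenate([y_lab, y_pseudo]))
print("pseudo-labeled:", int(confident.sum()), "of", len(X_unlab))
```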

22 pages, 6453 KB  
Article
Experimental Study on the Microscale Milling Process of DD5 Nickel-Based Single-Crystal Superalloy
by Ying Li, Yadong Gong, Yang Liu, Zhiheng Wang, Junhe Zhao, Zhike Wang and Zelin Xu
Metals 2025, 15(8), 898; https://doi.org/10.3390/met15080898 - 11 Aug 2025
Abstract
Technological advances have expanded the use of single-crystal superalloys in microscale applications—particularly in infrared optics, electronics, and aerospace. Research on the surface quality of micro-milling processes for single-crystal superalloys has therefore become a key factor in expanding their applications. In this paper, the nickel-based single-crystal superalloy DD5 is selected as the test object, and the finite element analysis software ABAQUS 2022 is used to conduct a simulation study of its microscale milling process with reasonable milling parameters. A three-factor, five-level L25(5³) slot-milling orthogonal experiment is conducted to investigate the effects of milling speed, milling depth, and feed rate on milling force and surface quality. The results show that milling depth has the greatest impact on milling force during the micro-milling process, while milling speed has the greatest influence on surface quality. Finally, based on the experimental data, the optimal parameter combination for micro-milling nickel-based single-crystal superalloy DD5 parts is found: a milling speed of 1318.8 mm/s, a milling depth of 12 µm, and a feed rate of 20 µm/s, at which the surface roughness value is at its minimum, indicating the best surface quality. This result has practical guiding significance for machining. Full article
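
Main effects in an L25(5³) design are commonly screened by range analysis of per-level means; a sketch with an invented response standing in for the measured milling force:

```python
import numpy as np
import pandas as pd

# An orthogonal 25-run layout for three 5-level factors: the third column
# is (i + j) mod 5, so every factor pair is balanced.
rows = [(i, j, (i + j) % 5) for i in range(5) for j in range(5)]
runs = pd.DataFrame(rows, columns=["speed", "depth", "feed"])

# Invented response: milling force dominated by depth, echoing the finding.
rng = np.random.default_rng(0)
runs["force"] = (2 + 0.8 * runs["depth"] + 0.2 * runs["speed"]
                 + 0.1 * runs["feed"] + 0.1 * rng.normal(size=25))

# Range analysis: spread of per-level means; a larger range means a
# stronger main effect on the response.
for factor in ["speed", "depth", "feed"]:
    means = runs.groupby(factor)["force"].mean()
    print(factor, "range =", round(means.max() - means.min(), 3))
```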

45 pages, 2170 KB  
Article
EnergiQ: A Prescriptive Large Language Model-Driven Intelligent Platform for Interpreting Appliance Energy Consumption Patterns
by Christoforos Papaioannou, Ioannis Tzitzios, Alexios Papaioannou, Asimina Dimara, Christos-Nikolaos Anagnostopoulos and Stelios Krinidis
Sensors 2025, 25(16), 4911; https://doi.org/10.3390/s25164911 - 8 Aug 2025
Abstract
The increased usage of smart sensors has introduced both opportunities and complexities in managing residential energy consumption. Despite advancements in sensor data analytics and machine learning (ML), existing energy management systems (EMS) remain limited in interpretability, adaptability, and user engagement. This paper presents EnergiQ, an intelligent, end-to-end platform that leverages sensors and Large Language Models (LLMs) to bridge the gap between technical energy analytics and user comprehension. EnergiQ integrates smart plug-based IoT sensing, time-series ML for device profiling and anomaly detection, and an LLM reasoning layer to deliver personalized, natural language feedback. The system employs statistical feature-based XGBoost classifiers for appliance identification and hybrid CNN-LSTM autoencoders for anomaly detection. Through dynamic user feedback loops and instruction-tuned LLMs, EnergiQ generates context-aware, actionable recommendations that enhance energy efficiency and device management. Evaluations demonstrate high appliance classification accuracy (94%) using statistical feature-based XGBoost and effective anomaly detection across varied devices via a CNN-LSTM autoencoder. The LLM layer, instruction-tuned on a domain-specific dataset, achieved over 91% agreement with expert-written energy-saving recommendations in simulated feedback scenarios. By translating complex consumption data into intuitive insights, EnergiQ empowers consumers to engage with energy use more proactively, fostering sustainability and smarter home practices. Full article
(This article belongs to the Section Intelligent Sensors)
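
A minimal sketch of a CNN-LSTM autoencoder anomaly detector in the spirit described above, assuming TensorFlow/Keras and synthetic smart-plug windows; the architecture and threshold rule are illustrative, not EnergiQ's:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in for smart-plug power readings: 500 windows of 64 samples.
rng = np.random.default_rng(0)
normal = (np.sin(np.linspace(0, 8 * np.pi, 64))[None, :, None]
          + 0.05 * rng.normal(size=(500, 64, 1)))

# A small CNN-LSTM autoencoder: convolve, compress, then reconstruct.
model = keras.Sequential([
    keras.Input(shape=(64, 1)),
    layers.Conv1D(16, 5, padding="same", activation="relu"),
    layers.LSTM(8),                            # encoder bottleneck
    layers.RepeatVector(64),
    layers.LSTM(8, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1)),   # decoder output
])
model.compile(optimizer="adam", loss="mse")
model.fit(normal, normal, epochs=5, batch_size=32, verbose=0)

# Flag windows whose reconstruction error exceeds a percentile threshold.
err = np.mean((model.predict(normal, verbose=0) - normal) ** 2, axis=(1, 2))
print("anomaly threshold:", float(np.percentile(err, 99)))
```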

20 pages, 3022 KB  
Article
Development of an Artificial Neural Network-Based Tool for Predicting Failures in Composite Laminate Structures
by Milica Milic Jankovic, Jelena Svorcan and Ivana Atanasovska
Biomimetics 2025, 10(8), 520; https://doi.org/10.3390/biomimetics10080520 - 8 Aug 2025
Abstract
Composite materials are widely used in aerospace, automotive, biomedical, and renewable energy sectors due to their high strength-to-weight ratio and design flexibility. However, their anisotropic and layered nature makes structural analysis and failure prediction challenging. Traditional methods require solving complex interlaminar stress–strain equations, demanding significant computational resources. This paper presents a bio-inspired machine learning approach, based on human reasoning, to accelerate predictions and reduce dependence on computationally intensive Finite Element Analysis (FEA). An artificial neural network model was developed to rapidly estimate key parameters—laminate thickness, total weight, maximum stress, displacement, deformation, and failure criteria—based on stacking sequence and geometry for a desired load case. Although validated using a specific composite beam, the methodology demonstrates potential for broader use in rapid structural assessment, with prediction deviations under 15% compared to FEA results. The time savings are particularly significant—while conventional FEA can take several hours or even days, the ANN model delivers accurate predictions within seconds. The approach significantly reduces computational time while maintaining precision. Moreover, with further refinement, this logic-driven model could be effectively applied to aircraft maintenance, enabling faster decision-making and improved structural reliability assessment. Full article
(This article belongs to the Section Biological Optimisation and Management)
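
The surrogate idea, learning the FEA input-to-output map once so that later queries take milliseconds, can be sketched with a generic multi-output regressor on synthetic data; the paper's actual inputs are stacking sequence and geometry, and its network differs from this stand-in.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for (stacking-sequence, geometry, load) inputs and
# FEA-computed outputs (thickness, weight, max stress, displacement, ...).
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 8))
W = rng.normal(size=(8, 5))
Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(1000, 5))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))
surrogate.fit(X_tr, Y_tr)

# Once trained, predictions take milliseconds versus hours of FEA.
print("R^2 on held-out cases:", round(surrogate.score(X_te, Y_te), 3))
```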
