Search Results (237)

Search Parameters:
Keywords = intelligent product design decision

34 pages, 2873 KB  
Review
Artificial Intelligence Across the Drug Development Lifecycle
by Grigory Demyashkin, Mikhail Parshenkov, Sergey Zyryanov, Alexander Yavorskiy, Petr Shegai and Andrey Kaprin
Med. Sci. 2026, 14(2), 248; https://doi.org/10.3390/medsci14020248 - 10 May 2026
Viewed by 421
Abstract
Artificial intelligence (AI) is becoming a central driver of change across the drug development lifecycle. However, its integration is evolving so rapidly that it remains essential to understand how these technologies are currently positioned within the field. Because reliable access to high-quality (effective and safe) drugs is essential to public health, the pharmaceutical product lifecycle (PPL) offers a coherent framework for evaluating how AI can enhance evidence and data creation across all stages. To understand where AI genuinely adds value, this review examines its contribution across the major stages of the PPL. Rather than treating drug discovery, nonclinical evaluation, clinical research, and post-marketing assessment as separate domains, we view them as a continuous chain of data, where digital technologies enhance different decision points in distinct ways. In early discovery, AI narrows the search space by integrating diverse datasets to prioritize candidates most likely to succeed. Nonclinical models increasingly rely on machine-learning systems designed to improve the human relevance of safety predictions. Within clinical trials, AI supports cohort formation, real-time monitoring, and new analytic strategies that supplement empirical evidence. Case studies from leading pharmaceutical companies illustrate that the most meaningful advances emerge when AI is embedded not as a standalone tool but as part of a broader data strategy that links information across stages. Taken together, current evidence suggests that AI is beginning to transform data generation and integration throughout the PPL. Given the accelerating pace of digital innovation, it is essential for the field to maintain continuous awareness of emerging methodologies and evolving regulatory frameworks to ensure that these technologies are implemented in a reliable, transparent, and scientifically grounded manner. Full article
26 pages, 821 KB  
Systematic Review
Advances in Biosimilars: A Systematic Review of Machine Learning Applications
by Vannessa Duarte and Tomas Gabriel Bas
Pharmaceuticals 2026, 19(5), 745; https://doi.org/10.3390/ph19050745 - 8 May 2026
Viewed by 435
Abstract
Background/Objectives: Biosimilars are medicinal products derived from reference biologics and designed to demonstrate a high degree of similarity in quality, efficacy, safety, and immunogenicity. Machine learning (ML) and other artificial intelligence (AI) methodologies have emerged as important tools in biosimilar research and development. This systematic review identifies ML applications throughout the biosimilar lifecycle while distinguishing them from the broader AI literature and from health technology evaluation, economic, and decision-analytic studies. Methods: Following PRISMA, records were retrieved from Scopus, PubMed, and Web of Science. After applying predefined inclusion and exclusion criteria, 44 original peer-reviewed studies were selected. Only studies that implemented a data-driven ML method for a biosimilar-relevant problem were included. Results: The review mapped AI applications at different stages of biosimilar development and characterized emerging trends and the types of methods used at each stage. Evidence indicates that the most mature empirical ML applications are concentrated in manufacturing optimization and analytical comparability, where supervised learning, ensemble models, and neural networks support process control, glycan or spectral analysis, and similarity assessment. By contrast, biosimilar-specific ML applications in clinical prediction and pharmacovigilance remain comparatively limited. Conclusions: These advances support the mission of biosimilars to provide affordable and high-quality biologic therapies. Using ML, developers can reduce timelines, reduce costs, and strengthen safety and efficacy assessments through the analysis of complex datasets that are difficult to address with traditional approaches. The main contribution of this review is to provide a clearer map of methodological maturity, translational relevance, and future opportunities for data-driven biosimilar development. Full article
31 pages, 9832 KB  
Article
A BIM-Driven Dynamic LCA Framework for Net Carbon Accounting of Buildings: A Case Study in Hot-Summer Region of China
by Qinghe Liu, Shushan Li, Zujun Liu and Hongmei Li
Sustainability 2026, 18(10), 4682; https://doi.org/10.3390/su18104682 - 8 May 2026
Viewed by 149
Abstract
Addressing the prevalent issues of scattered data sources, reliance on multi-software collaboration, and low integration efficiency between Building Information Modeling (BIM) and Life Cycle Assessment (LCA) in current building life cycle carbon emission accounting, this study aims to construct a BIM-driven, data-traceable automated method for building life cycle carbon accounting. This paper proposes a life cycle carbon accounting framework based on Revit secondary development. By defining unified data mapping rules and constructing a scalable localized carbon emission factor database, this framework achieves a seamless workflow from BIM model information extraction and intelligent factor matching to phased accounting and report generation. Taking an office building in Nanning as an empirical case study, the results indicate that the operational stage and the building material production stage are the primary emission sources, accounting for 78.82% and 24.13% of the total emissions, respectively; the transportation stage accounts for 1.68%; the construction stage accounts for 0.40%; and the demolition and recycling stage exhibits negative emissions of –3.53% due to material recovery benefits. The accounting results of the developed plugin exhibit a relative error of 6.67% compared to traditional methods, and the robustness of the accounting framework is verified through uncertainty analysis. Sensitivity analysis further reveals that the grid emission factor, key material factors, and building design service life are the core variables affecting carbon emissions. The contribution of this study lies in proposing an operable and scalable BIM-LCA integrated solution. 
Its practical value resides in providing a real-time data feedback tool for low-carbon optimization during the building design stage, as well as offering a highly transparent methodological reference for carbon accounting in engineering practice, thereby supporting data-driven decision-making in the pursuit of sustainable urban development. Full article
40 pages, 2482 KB  
Review
Agricultural Intelligence: A Technical Review Within the Perception–Decision–Execution Framework
by Shaode Yu, Xinyi Li, Songnan Zhao and Qian Liu
Appl. Syst. Innov. 2026, 9(5), 95; https://doi.org/10.3390/asi9050095 - 30 Apr 2026
Viewed by 880
Abstract
Artificial intelligence (AI) is transforming modern agriculture from experience-driven practices to data-driven production paradigms. To provide an in-depth analysis of AI technologies in intelligent agriculture, we retrieved literature from Web of Science, IEEE Xplore, Google Scholar and Scopus, covering publications from 2015 to 2025, and 85 articles remained after screening 1867 relevant publications. These articles are grouped into three stages from perception, to decision making, to execution (PDE) in a closed-loop framework. At the perception level, we highlight progress in intelligent sensing systems, such as unmanned aerial vehicle (UAV) and multi-modal monitoring platforms, for crop disease and pest detection, growth monitoring and abiotic stress assessment. At the decision making level, integration of heterogeneous data sources, including meteorological records, soil measurements, remote sensing (RS) imagery and market information, supports advanced analytics, such as yield prediction, pest and disease warning, irrigation and fertilization planning, and crop management optimization. At the execution level, agricultural robots equipped with simultaneous localization and mapping (SLAM) and deep reinforcement learning (RL) facilitate precision spraying, autonomous harvesting, and unmanned field operations. Overall, AI technologies demonstrate substantial potential in the PDE pipeline of agricultural production. However, several challenges remain, including heterogeneous data fusion, limited generalization across diverse environments, complex system integration, and high hardware and deployment costs. Future directions are discussed from the perspectives of lightweight model design, cross-platform standardization, enhanced human–machine collaboration, and a deeper integration of emerging AI paradigms to support scalable, robust, and autonomous agricultural intelligence systems. Full article
51 pages, 1153 KB  
Article
Introducing the Edu-GenAI Rubric: A Theory-Informed Tool for Assessing the Educational Value of Large Language Models and AI Media Generators
by Todd Cherner and Mags Donnelly
Educ. Sci. 2026, 16(5), 706; https://doi.org/10.3390/educsci16050706 - 30 Apr 2026
Viewed by 288
Abstract
The rapid proliferation of generative artificial intelligence (GenAI) tools has created an urgent need for instruments to evaluate their educational value as teachers, faculty, administrators, and instructional designers consider adopting them. While rubrics exist to assess mobile applications and virtual reality tools, no comparable instrument has been developed specifically for large language models (LLMs) and AI media generators. The authors reviewed existing evaluation rubrics for edtech and GenAI tools, with edtech meaning digital tools that support ethical teaching to improve student learning and GenAI referring to neural networks that simulate human interactions by contextualizing relevant content based on learning needs. Grounded in Waks’ framework, the resulting Edu-GenAI Rubric comprises multiple dimensions organized into five domains: the Instrumental, Technical, Hedonic, Use, and Beneficial values. Dimensions include accuracy, productivity, personalization, citation, user interface, user experience, sharing, storage, and ethical dimensions encompassing data privacy, data transparency, guardrails, fair use, and algorithmic discrimination. The Edu-GenAI Rubric offers decision-makers a preliminary, theory-informed instrument for evaluating GenAI tools in educational contexts that can be applied to institutional adoption decisions, developer benchmarking, and future research. Full article
25 pages, 5866 KB  
Article
Flexible Job Shop Scheduling Problem Based on Deep Reinforcement Learning Using Dual Attention Network
by Fan Xu, Lang He and Xi Fang
Processes 2026, 14(9), 1419; https://doi.org/10.3390/pr14091419 - 28 Apr 2026
Viewed by 233
Abstract
Industry 4.0 is transforming the way companies manufacture, improve, and distribute products, moving toward fast, intelligent, and flexible manufacturing, which will bring about fundamental changes in enterprises’ production capabilities. The Flexible Job Shop Scheduling Problem (FJSP) allows a single job to be divided into multiple operations, each of which can be processed on multiple machines. Due to the problem’s high flexibility and complexity, traditional scheduling methods have difficulty meeting the needs of dynamic production. Dispatching rules struggle to effectively perceive the global precedence relationships among jobs and the distribution of machine workloads; metaheuristic approaches suffer from slow iterative convergence; existing deep reinforcement learning methods often employ a single policy network to handle both operation sequencing and machine assignment in a coupled manner, which tends to cause training instability and slow convergence. This paper proposes a deep reinforcement learning model that integrates Multi-Proximal Policy Optimization (MPPO) and a Dual Attention Network (DAN) to address the FJSP. The model uses the operation message attention block and machine message attention block of DAN to capture the dependency relationships between operations and the dynamic competitive relationships between machines, respectively, and to extract deep features. At the same time, MPPO designs dual actor networks to handle operation sequencing and machine assignment decisions separately, combined with a centralized critic to optimize the policy. This balances exploration and exploitation and improves training stability. Experiments are conducted on the SD1 and SD2 datasets. In FJSP instances of four scales, the model is compared with PPO-DAN, PPO-HGNN, traditional scheduling rules, and OR-Tools. The results show that the algorithm reduces makespan by up to 4.2% on SD1 and 10.1% on SD2. Moreover, it achieves better performance than traditional scheduling rules.
Its comprehensive performance is superior to that of the comparison methods, verifying its effectiveness and practical application potential in solving the FJSP. Full article
(This article belongs to the Section Automation Control Systems)
39 pages, 1415 KB  
Article
A Blockchain–IoT–ML Framework for Sustainable Vaccine Cold Chain Management in Pharmaceutical Supply Chains
by Ibrahim Mutambik
Systems 2026, 14(5), 467; https://doi.org/10.3390/systems14050467 - 26 Apr 2026
Viewed by 252
Abstract
Ensuring the quality, reliability, and efficiency of cold chain logistics for thermolabile pharmaceutical products, particularly vaccines, remains a critical challenge in global health supply chains. These biologics require stringent temperature control throughout storage, transport, and distribution to preserve their efficacy. Persistent issues such as maintaining product integrity, accurately forecasting vaccine demand, and fostering trust among stakeholders often result in inefficiencies, waste, and public mistrust. This study proposes an intelligent digital management framework specifically designed for vaccine cold chains, integrating blockchain, the Internet of Things (IoT), and machine learning (ML) to address these challenges in a holistic and sustainable manner. The main innovation of the study lies in combining secure traceability, real-time cold chain monitoring, and predictive decision support within a unified vaccine cold chain management framework rather than treating these functions as isolated technological solutions. Using WHO immunization coverage data and vaccine-related review data, the framework supports vaccine demand forecasting through the Informer model and stakeholder trust assessment through BERT-based sentiment analysis. In the sentiment analysis task, the BERT model achieved ~80% accuracy on dominant sentiment classes, with a weighted F1-score of 0.6974, demonstrating strong performance on imbalanced datasets. By minimizing vaccine spoilage and enabling more accurate demand planning, the system reduces excess production and distribution, thus lowering resource consumption, carbon emissions, and financial waste. Moreover, trust-informed analytics support better alignment of supply with actual community needs, fostering equity and resilience in vaccine distribution. 
While this framework has been validated through simulations and experimental evaluation, further real-world testing is needed to assess long-term stability and stakeholder adoption. Nonetheless, it provides a scalable and adaptive foundation for advancing sustainability and transparency in pharmaceutical cold chains. Full article
19 pages, 3530 KB  
Article
A Digital Construction Framework for Prefabricated Steel Structures Based on High-Precision 3D Laser Scanning
by Xianggang Su, Ning Wang, Kunshen Jia, Kun Wang, Jianxin Zhang, Tianqi Yi and Yuanqing Wang
Buildings 2026, 16(9), 1665; https://doi.org/10.3390/buildings16091665 - 23 Apr 2026
Viewed by 238
Abstract
Prefabricated steel structures have been increasingly adopted in modern construction due to their high efficiency, sustainability, and industrialized production. However, their construction quality and efficiency are often compromised by accumulated geometric deviations during fabrication, transportation, assembly, and welding, while traditional construction control and welding processes remain highly dependent on manual measurements and empirical operations. To address these challenges, this study proposes a digital construction framework for prefabricated steel structures, integrating high-precision three-dimensional (3D) laser scanning, Building Information Modeling (BIM), and intelligent welding technologies. First, high-precision 3D laser scanning is employed to capture the as-built geometric information of prefabricated steel components, generating dense point cloud data for construction-stage deviation detection and quantitative comparison with BIM-based design models. Based on deviation analysis, a digital construction control strategy is established to support real-time feedback, error compensation, and assembly adjustment. An engineering case study involving a complex prefabricated steel structure is conducted to validate the proposed framework. The results demonstrate that the integrated digital construction and intelligent welding approach significantly improves assembly accuracy, weld positioning precision, and construction efficiency, while reducing manual intervention and error accumulation. Overall, this study contributes to the body of knowledge by proposing a unified closed-loop digital construction paradigm that integrates geometric perception, deviation-driven decision-making, and intelligent welding execution, thereby bridging the gap between construction control and robotic fabrication in prefabricated steel structures. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
28 pages, 5567 KB  
Article
A Safety-Constrained Multi-Objective Optimization Framework for Autonomous Mining Systems: Statistical Validation in Surface and Underground Environments
by Rajesh Patil and Magnus Löfstrand
Technologies 2026, 14(5), 248; https://doi.org/10.3390/technologies14050248 - 22 Apr 2026
Viewed by 244
Abstract
The incorporation of artificial intelligence, multi-sensor perception, and cyber-physical control into mining operations offers tremendous opportunities for increasing productivity, safety, and sustainability. However, present frameworks focus on discrete subsystems rather than providing a unified, safety-constrained optimization method that has been verified in both surface and underground environments. This paper describes a scalable, hierarchical autonomous mining architecture that incorporates sensor fusion, edge intelligence, fleet coordination, and digital twin-based decision support. It is designed to operate in GNSS-denied conditions and extreme climatic constraints common to Nordic mining environments. A mathematical modeling approach formalizes vehicle dynamics, drilling mechanics, and multi-agent fleet coordination inside a safety-constrained multi-objective optimization formulation. The framework is validated using Monte Carlo simulation with uncertainty measurement, sensitivity analysis, and statistical hypothesis testing. The preliminary results show improvements over a typical baseline, with productivity increasing by approximately 24.3% ± 3.2%, energy consumption decreasing by 12.8% ± 2.5%, and safety risk decreasing by 48.6% ± 4.1%. A sensitivity study identifies localization accuracy, communication delay, and optimization weighting as the primary system performance drivers. The suggested framework serves as a reproducible and transferable reference model for next-generation intelligent mining systems, having direct applications to both industrial deployment and future research in autonomous resource extraction. Full article
(This article belongs to the Section Information and Communication Technologies)
39 pages, 7225 KB  
Article
Enhancing Agri-Food Supply Chain Resilience: A FIT2 Gaussian Fuzzy FUCOM-QFD Framework for Designing Sustainable Controlled-Environment Hydroponic Agriculture Systems
by Biset Toprak and A. Çağrı Tolga
Agriculture 2026, 16(8), 901; https://doi.org/10.3390/agriculture16080901 - 19 Apr 2026
Viewed by 462
Abstract
Vulnerabilities in conventional agri-food supply chains (CAFSCs) necessitate a shift toward resilient, localized production models. Within the Agri-Food 4.0 landscape, urban Controlled-Environment Hydroponic Agriculture (CEHA) systems address these challenges by shortening supply chains and mitigating climate-induced breakdowns. However, structurally aligning Triple Bottom Line (TBL)-oriented stakeholder needs with complex technical specifications remains a critical challenge in sustainable CEHA system design. To address this challenge, the present study proposes a novel framework integrating the Full Consistency Method (FUCOM) and Quality Function Deployment (QFD) within a Finite Interval Type-2 (FIT2) Gaussian fuzzy environment. This approach systematically translates TBL-oriented priorities into precise engineering specifications, mapping 17 stakeholder needs (SNs) to 30 technical design requirements (TDRs) while capturing linguistic uncertainty and hesitation. The findings reveal a clear strategic focus on environmental and social sustainability. Specifically, high product quality, food safety and traceability, consumer acceptance, and minimization of environmental impacts emerge as the primary drivers of CEHA adoption. The QFD translation identifies scalable IoT infrastructure, sensor maintenance and calibration, and AI-enabled decision support as the most critical TDRs. The framework’s reliability and structural robustness were rigorously validated through comprehensive analyses, including Kendall’s W test to confirm expert consensus, alongside a Leave-One-Out (LOO) approach, weight perturbations, and a structural evaluation of TDR intercorrelations. These findings provide a scientifically grounded roadmap for designing sustainable, intelligent urban agricultural systems. 
Ultimately, this framework offers actionable managerial implications for agribusiness stakeholders to bridge strategic TBL-oriented goals with practical engineering, significantly enhancing agri-food supply chain resilience. Full article
(This article belongs to the Special Issue Building Resilience Through Sustainable Agri-Food Supply Chains)
20 pages, 5140 KB  
Article
Is AI an Academic Threat to Reject or a Complementary Tool to Embrace? Case Study of Senior Interior Design Studio in Imam Abdulrahman Bin Faisal University in the Kingdom of Saudi Arabia
by Zeinab Ahmed Abd Elghaffar Elmoghazy, Dalia H. Eldardiry, Sarah Ali Alghamdi and Ayah Hani AlQaysum
Buildings 2026, 16(8), 1589; https://doi.org/10.3390/buildings16081589 - 17 Apr 2026
Viewed by 270
Abstract
Integrating artificial intelligence (AI) into design education is no longer optional; it has become an essential tool for enhancing innovative design and preparing students for data-driven practice and rapid technological acceleration. However, while ignoring AI risks professional irrelevance, adopting it introduces a range of concerns about students’ cognitive skills and brings many drawbacks to the education process: it threatens the attainment of learning outcomes, renders a fair assessment process unachievable, and places academic integrity in a vulnerable position. Using a qualitative case study approach, this research employs semi-structured interviews with 27 senior-year students in the interior design department to gain in-depth academic insights into how AI influenced their design process in their term project and its impact on their cognitive development and decision-making. Instructors’ observations on students’ skills, their pace in the project, and their end-products were documented. This study demonstrates that integrating AI into design education cannot be avoided, making a new paradigm for addressing design education inevitable. Based on the analysis, the paper proposes a conceptual framework outlining key dimensions in teaching and assessment strategies for design education adopting AI, focusing on analysis, critical thinking, reasoning, and process rather than on the end-product and its presentation. Full article
(This article belongs to the Special Issue Emerging Trends in Architecture, Urbanization, and Design)
37 pages, 570 KB  
Review
Autonomous Supply Chains: Integrating Artificial Intelligence, Digital Twins, and Predictive Analytics for Intelligent Decision Systems
by Mohammad Shamsuddoha, Honey Zimmerman, Tasnuba Nasir and Md Najmus Sakib
Information 2026, 17(4), 371; https://doi.org/10.3390/info17040371 - 15 Apr 2026
Viewed by 1112
Abstract
Autonomous supply chains (ASC) are the next generation of digitally empowered logistics and operations systems that can make adaptive, data-driven, and intelligent decisions. Innovations in artificial intelligence (AI), digital twins (DT), and predictive analytics (PA) are transforming traditional supply chains into integrated and interactive networks to detect disruptions, simulate the future, and automatically modify operational decisions. This paper reviews the ASC mechanism and summarizes the increasing literature on the technologies and analytical capabilities available to support intelligent supply chain decision systems. A structured literature review was conducted using Scopus, Web of Science, and Google Scholar, resulting in 52 relevant studies after screening and eligibility assessment. The paper discusses the recent advances in AI-based forecasting, simulation environments using digital twins, data integration using the Internet of Things (IoT), and predictive analytics. These technologies can help an organization gain real-time visibility of the supply chain networks. They improve the precision of demand forecasting, optimize inventory and production planning, and dynamically coordinate logistics operations. Digital twins allow the development of virtual models of supply chain ecosystems, which could be used to test scenarios, analyze risks, and plan strategies. These capabilities combined can be used to create predictive and self-adaptive supply networks capable of being responsive to uncertainty and market volatility. Besides examining the technological foundations, the paper also tracks key challenges related to the move towards autonomous supply chains, such as data governance, system interoperability, cybersecurity risks, algorithm transparency, and the necessity of successful human-AI collaboration in decision-making. 
The synthesis leads to a multi-layered framework that integrates data acquisition, analytics, simulation, and execution for autonomous decision-making in supply chains. The study also outlines future research directions for resilient supply networks, intelligent automation, and adaptive supply chain ecosystems. By integrating existing knowledge on emerging intelligent technologies and how they can be incorporated into supply chain systems, this review contributes to the literature on next-generation supply chains and offers guidance to both researchers and practitioners aiming to design autonomous, data-driven supply networks. Full article
15 pages, 266 KB  
Article
AI-Supported Design of Teaching Units for English to Young Learners: A Case Study in Initial Teacher Education
by Cecilia Lazzeretti
Educ. Sci. 2026, 16(4), 614; https://doi.org/10.3390/educsci16040614 - 11 Apr 2026
Viewed by 408
Abstract
While generative artificial intelligence (GenAI) is increasingly used by university students for writing support, less is known about its role in discipline-specific professional tasks. This study examines how pre-service primary teachers integrate and conceptualise GenAI when designing Teaching Units for English for Young Learners (EYL), with a focus on whether AI is positioned as a substitute for pedagogical reasoning or as a support within teacher decision-making. The qualitative study involved 75 fifth-year pre-service teachers at the Free University of Bozen-Bolzano (Italy), working in 23 groups. Data included 23 Teaching Units and 10 AI Use Reports, analysed through document analysis and thematic coding. GenAI was used mainly for material production (visual and text generation, idea generation, and text revision) and resource adaptation, with limited evidence of use for macro- or micro-planning decisions (objectives, sequencing, assessment). Prompts were often underspecified, but reports described iterative refinement and critical adaptation to improve age appropriateness and reduce lexical overload. Overall, within a transparent course framework, pre-service teachers retained pedagogical ownership while using GenAI as a supplementary resource, underscoring the need to develop pedagogically grounded AI literacy (prompt design, evaluation, and disclosure). Full article
25 pages, 4742 KB  
Article
An Edge-Enabled Predictive Maintenance Approach Based on Anomaly-Driven Health Indicators for Industrial Production Systems
by Bouzidi Lamdjad and Adem Chaiter
Algorithms 2026, 19(4), 286; https://doi.org/10.3390/a19040286 - 8 Apr 2026
Viewed by 536
Abstract
This study develops a data-driven framework for predictive maintenance and prognostic health management in industrial systems using edge-enabled predictive algorithms. The objective is to support early identification of abnormal operating conditions and improve maintenance decision-making in real production environments. The proposed approach combines edge-level monitoring, anomaly detection, and predictive modeling to analyze operational signals and estimate system health conditions from high-frequency industrial data. Empirical validation was conducted using operational datasets collected from two industrial production facilities between 2024 and 2025. The model evaluates patterns associated with operational instability and degradation-related anomalies and translates them into interpretable health indicators that can support proactive intervention. The empirical results show strong predictive performance, with R² reaching 0.989, a mean absolute percentage error of 3.67%, and a root mean square error of 0.79. In addition, mitigating early anomaly signals was associated with an observed improvement of approximately 3.99% in system stability. Unlike many existing studies that treat anomaly detection, predictive modeling, and prognostic analysis as separate tasks, the proposed framework connects these stages within a unified analytical structure designed for deployment in industrial environments. The findings indicate that edge-generated anomaly signals can provide meaningful early information about potential system deterioration and can assist in planning timely maintenance actions even when explicit failure labels are limited. The study contributes to the development of scalable predictive maintenance solutions that integrate artificial intelligence with edge-based industrial monitoring systems. Full article
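The three reported metrics (R², MAPE, RMSE) are standard and can be computed as follows; the actual/predicted values below are illustrative placeholders, not the study's data:

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes no zero targets)."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Illustrative health-indicator values (made up for demonstration).
actual = [10.0, 12.0, 15.0, 14.0, 18.0]
predicted = [10.2, 11.8, 15.5, 13.6, 18.3]
```

An R² close to 1 with low MAPE and RMSE, as reported in the abstract, indicates that predictions track the health indicator closely in both relative and absolute terms.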
35 pages, 12654 KB  
Article
An Integrated BIM–NLP Framework for Design-Informed Automated Construction Schedule Generation
by Mahmoud Donia, Emad Elbeltagi, Ahmed Elhakeem and Hossam Wefki
Designs 2026, 10(2), 43; https://doi.org/10.3390/designs10020043 - 7 Apr 2026
Viewed by 1001
Abstract
Artificial intelligence has attracted increasing attention in the construction industry; however, automated time scheduling remains limited in practical applications. Schedule development is still manual, requiring planners to analyze project documents, define activities, estimate durations, and identify relationships based on logical sequence. This process depends primarily on individual experience and skill, making it both time-consuming and prone to human error. From an engineering design perspective, delayed or inconsistent schedule development weakens design-to-construction feedback, limiting the ability to evaluate the constructability and time implications of alternative design decisions during early-stage planning. This study proposes an integrated BIM–Natural Language Processing (NLP) framework to automate activity identification, duration estimation, and logical sequencing for construction scheduling. The framework extracts project data from Revit, organizes it into a bill-of-quantities format, and generates an activity list in which each activity carries a unique ID. Using Sentence-BERT (SBERT) embeddings, the framework estimates activity durations based on semantic similarity. The same semantic process is combined with rule-based reasoning to identify logical relationships and sequences, supported by an Excel-based reference dictionary covering logical relationships, productivity rates, and the ID structure. Finally, the framework incorporates a crashing module that proportionally adjusts the durations of activities on the longest path to meet the project's target completion time without violating relationships. The proposed framework was validated using real construction project data and produced reliable results.
By producing a tool-ready schedule directly from design-model information, the proposed workflow enables earlier schedule feedback loops and supports design-informed planning, allowing designers and planners to assess the time consequences of model-driven scope changes. The results demonstrate that integrating BIM and NLP can transform conventional scheduling into a faster, more consistent process, thereby supporting the construction industry. Full article
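The crashing step described in the abstract can be illustrated with a minimal sketch. The activity IDs, durations, and the fixed critical path below are hypothetical, and the sketch simplifies the paper's module by assuming the longest path does not change as durations shrink:

```python
def crash_schedule(durations, critical_path, target):
    """Proportionally shorten activities on the longest path so that the
    path meets the target duration; activities off the path are untouched.
    durations: {activity_id: duration in days}."""
    path_len = sum(durations[a] for a in critical_path)
    if path_len <= target:
        return dict(durations)  # already within target, nothing to crash
    factor = target / path_len  # uniform compression ratio
    crashed = dict(durations)
    for a in critical_path:
        crashed[a] = durations[a] * factor
    return crashed

# Hypothetical activities: A10 -> A20 -> A30 form the longest path (45 days);
# B10 runs in parallel. Crash the path to a 36-day target.
durations = {"A10": 10.0, "A20": 20.0, "A30": 15.0, "B10": 8.0}
crashed = crash_schedule(durations, ["A10", "A20", "A30"], target=36.0)
```

A production implementation would also re-check precedence relationships and recompute the critical path after each adjustment, as the abstract's "without violating relationships" constraint implies.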