Search Results (3,440)

Search Parameters:
Keywords = OpenLayers

27 pages, 3340 KB  
Article
Spatial Modelling of Urban Accessibility: Insights from Belgrade, Republic of Serbia
by Filip Arnaut, Sreten Jevremović, Aleksandra Kolarski, Zoran R. Mijić and Vladimir A. Srećković
Urban Sci. 2025, 9(10), 424; https://doi.org/10.3390/urbansci9100424 (registering DOI) - 13 Oct 2025
Abstract
This study presents the first comprehensive spatial accessibility assessment of essential urban services in Belgrade, Republic of Serbia, conducted entirely with open-source tools and data. The analysis focused on six facility categories: primary healthcare centers, public pharmacies, primary and secondary schools, libraries, and green markets. Spatial accessibility was modelled using OpenRouteService (ORS) isochrones for walking travel times of 5, 10, and 15 min, combined with population data from the Global Human Settlement Layer (GHSL). Results indicate that 79% of residents live within a 15-min walk of a healthcare facility, 74% of a pharmacy, 89% of an elementary school, 52% of a high school, 60% of a library, and 62% of a green market. Central administrative units such as Vračar, Zvezdara, and Stari Grad demonstrated nearly complete service coverage, while peripheral areas, including Resnik, Jajinci, and Višnjica, exhibited substantial accessibility deficits, often coinciding with lower-income zones. The developed workflow provides a transparent, replicable approach for identifying underserved neighborhoods and prioritizing investments in public infrastructure. This research emphasizes the role of spatial accessibility analysis in advancing the Sustainable Development Goals (SDGs), contributing to the creation of more inclusive, walkable, and sustainable urban environments, while also offering practical insights for improving urban equity, guiding policy formulation, and supporting planning decisions. Subsequent research will focus on alternative facilities, other cities such as Novi Sad and Niš, and the disparity between urban and rural populations. Full article
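
The isochrone computation at the heart of this workflow can be reproduced against the public OpenRouteService API. The sketch below is illustrative only: the coordinate is an arbitrary point in Belgrade, the API key is a placeholder, and the request follows the documented v2 isochrones endpoint for the foot-walking profile.

```python
import requests

ORS_URL = "https://api.openrouteservice.org/v2/isochrones/foot-walking"

# Arbitrary example location in Belgrade (lon, lat); not a facility from the study.
body = {
    "locations": [[20.4651, 44.8176]],
    "range": [300, 600, 900],   # 5-, 10-, and 15-minute walking times, in seconds
    "range_type": "time",
}

resp = requests.post(
    ORS_URL,
    json=body,
    headers={"Authorization": "YOUR_ORS_API_KEY"},  # placeholder key
    timeout=30,
)
resp.raise_for_status()

# The response is a GeoJSON FeatureCollection with one polygon per travel time;
# intersecting these polygons with a population raster such as GHSL then yields
# the share of residents inside each walking-time band.
for feature in resp.json()["features"]:
    print(feature["properties"]["value"], "s walking isochrone")
```
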
17 pages, 1887 KB  
Article
AlphaGlue: A Novel Conceptual Delivery Method for α Therapy
by Lujin Abu Sabah, Laura Ballisat, Chiara De Sio, Magdalena Dobrowolska, Adam Chambers, Jinyan Duan, Susanna Guatelli, Dousatsu Sakata, Yuyao Shi, Jaap Velthuis and Anatoly Rosenfeld
BioMedInformatics 2025, 5(4), 58; https://doi.org/10.3390/biomedinformatics5040058 (registering DOI) - 13 Oct 2025
Abstract
Extensive research is being carried out on the application of α particles for cancer treatment. A key challenge in α therapy is how to deliver the α emitters to the tumour. In AlphaGlue, a novel treatment delivery concept, the α emitters are suspended in a thin layer of glue that is put on top of the tumour. In principle, this should be an easy and safe way to apply α therapy. In this study, the effectiveness of AlphaGlue is evaluated using GEANT4 and GEANT4-DNA simulations to calculate the DNA damage as a function of depth. Two radionuclides are considered in this work, ²¹¹At and ²²⁴Ra. The results indicate that, as a concept, the method offers a promising hypothesis for treating superficial tumours, such as skin cancer, when ²²⁴Ra is applied directly on the tissue and stabilized with a glue layer. This results in 2×10⁻⁵ complex double strand breaks and 5×10⁻⁵ double strand breaks at 5 mm depth per applied ²²⁴Ra atom. When applying a ²²⁴Ra atom concentration of (4.35±0.2)×10¹¹/cm², corresponding to an activity of (21.8±1) μCi/cm², on the skin surface, the RBE-weighted dose exceeds 20 Gy at 5 mm depth. Hence, there is significant cell death at 5 mm into the tissue, a depth matching clinical requirements for skin cancer treatment. Given the rapidly falling weighted dose versus depth curve, the treatment depth can be tuned with good precision. The results of this study show that AlphaGlue is a promising treatment and open the pathway towards the next stage of the research, which includes in vitro studies. Full article
12 pages, 1654 KB  
Article
Research on Open Magnetic Shielding Packaging for STT and SOT-MRAM
by Haibo Ye, Xiaofei Zhang, Nannan Lu, Jiawei Li, Jun Jia, Guilin Zhao, Jiejie Sun, Lei Zhang and Chao Wang
Micromachines 2025, 16(10), 1157; https://doi.org/10.3390/mi16101157 - 13 Oct 2025
Abstract
As an emerging type of non-volatile memory, magneto-resistive random access memory (MRAM) stands out for its exceptional reliability and rapid read–write speeds, thereby garnering considerable attention within the industry. The memory cell architecture of MRAM is centered around the magnetic tunnel junction (MTJ), which, however, is prone to interference from external magnetic fields—a limitation that restricts its application in demanding environments. To address this challenge, we propose an innovative open magnetic shielding structure. This design demonstrates remarkable shielding efficacy against both in-plane and perpendicular magnetic fields, effectively catering to the magnetic shielding demands of both spin-transfer torque (STT) and spin–orbit torque (SOT) MRAM. Finite element magnetic simulations reveal that when subjected to an in-plane magnetic field of 40 mT, the magnetic field intensity at the chip level is reduced to nearly 1‰ of its original value. Similarly, under a perpendicular magnetic field of 40 mT, the magnetic field at the chip is reduced to 2‰ of its initial strength. Such reductions significantly enhance the anti-magnetic capabilities of MRAM. Moreover, the magnetic shielding performance remains unaffected by the height of the packaging structure, ensuring compatibility with various chip stack packaging requirements across different layers. The research presented in this paper holds immense significance for the realization of highly reliable magnetic shielding packaging solutions for MRAM. Full article
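
As a quick aside on the quoted attenuation figures: reductions expressed in per-mille convert directly to shielding effectiveness in decibels via SE = 20·log10(B_external/B_chip). A minimal check of the two cases above:

```python
import math

def shielding_effectiveness_db(b_ext: float, b_chip: float) -> float:
    """Shielding effectiveness in dB from external and residual field magnitudes."""
    return 20.0 * math.log10(b_ext / b_chip)

# In-plane: 40 mT reduced to ~1 permille of its value -> 0.04 mT at the chip.
print(shielding_effectiveness_db(40.0, 0.04))  # 60.0 dB
# Perpendicular: 40 mT reduced to ~2 permille -> 0.08 mT at the chip.
print(shielding_effectiveness_db(40.0, 0.08))  # ~54.0 dB
```
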
30 pages, 2870 KB  
Article
CourseEvalAI: Rubric-Guided Framework for Transparent and Consistent Evaluation of Large Language Models
by Catalin Anghel, Marian Viorel Craciun, Emilia Pecheanu, Adina Cocu, Andreea Alexandra Anghel, Paul Iacobescu, Calina Maier, Constantin Adrian Andrei, Cristian Scheau and Serban Dragosloveanu
Computers 2025, 14(10), 431; https://doi.org/10.3390/computers14100431 (registering DOI) - 11 Oct 2025
Abstract
Background and objectives: Large language models (LLMs) show promise in automating open-ended evaluation tasks, yet their reliability in rubric-based assessment remains uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about transparency and pedagogical validity in educational contexts. This study introduces CourseEvalAI, a framework designed to enhance consistency and fidelity in rubric-guided evaluation by fine-tuning a general-purpose LLM with authentic university-level instructional content. Methods: The framework employs supervised fine-tuning with Low-Rank Adaptation (LoRA) on rubric-annotated answers and explanations drawn from undergraduate computer science exams. Responses generated by both the base and fine-tuned models were independently evaluated by two human raters and two LLM judges, applying dual-layer rubrics for answers (technical or argumentative) and explanations. Inter-rater reliability was reported as the intraclass correlation coefficient (ICC(2,1)), Krippendorff’s α, and quadratic-weighted Cohen’s κ (QWK), and statistical analyses included Welch’s t-tests with Holm–Bonferroni correction, Hedges’ g with bootstrap confidence intervals, and Levene’s tests. All responses, scores, feedback, and metadata were stored in a Neo4j graph database for structured exploration. Results: The fine-tuned model consistently outperformed the base version across all rubric dimensions, achieving higher scores for both answers and explanations. After multiple-testing correction, only the Technical Answer contrast judged by the Generative Pre-trained Transformer (GPT-4) remains statistically significant; other contrasts show positive trends without passing the adjusted threshold, and no additional significance is claimed for explanation-level results. Variance in scoring decreased, inter-model agreement increased, and evaluator feedback for fine-tuned outputs contained fewer vague or critical remarks, indicating stronger rubric alignment and greater pedagogical coherence. Inter-rater reliability analyses indicated moderate human–human agreement and weaker alignment of LLM judges to the human mean. Originality: CourseEvalAI integrates rubric-guided fine-tuning, dual-layer evaluation, and graph-based storage into a unified framework. This combination provides a replicable and interpretable methodology that enhances the consistency, transparency, and pedagogical value of LLM-based evaluators in higher education and beyond. Full article
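
All of the agreement and significance statistics named here are available in standard Python tooling. A minimal sketch with made-up rater scores (not data from the study):

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Illustrative 0-5 rubric scores from two raters on the same eight answers.
rater_a = [4, 3, 5, 2, 4, 3, 5, 4]
rater_b = [4, 2, 5, 3, 4, 3, 4, 4]

# Quadratic-weighted Cohen's kappa penalizes large disagreements more heavily.
qwk = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

# Welch's t-test (unequal variances) comparing base vs. fine-tuned model scores.
base_scores = np.array([2.8, 3.1, 2.5, 3.0, 2.7])
tuned_scores = np.array([3.6, 3.9, 3.4, 4.1, 3.8])
t_stat, p_value = stats.ttest_ind(tuned_scores, base_scores, equal_var=False)

print(f"QWK = {qwk:.3f}, Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```

Holm–Bonferroni adjustment across several such contrasts is then available via statsmodels' multipletests(..., method="holm").
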
21 pages, 824 KB  
Article
Biases in AI-Supported Industry 4.0 Research: A Systematic Review, Taxonomy, and Mitigation Strategies
by Javier Arévalo-Royo, Francisco-Javier Flor-Montalvo, Juan-Ignacio Latorre-Biel, Emilio Jiménez-Macías, Eduardo Martínez-Cámara and Julio Blanco-Fernández
Appl. Sci. 2025, 15(20), 10913; https://doi.org/10.3390/app152010913 - 11 Oct 2025
Abstract
Industrial engineering research has been reshaped by the integration of artificial intelligence (AI) within the framework of Industry 4.0, characterized by the interplay between cyber-physical systems (CPS), advanced automation, and the Industrial Internet of Things (IIoT). While this integration opens new opportunities, it also introduces biases that undermine the reliability and robustness of scientific and industrial outcomes. This article presents a systematic literature review (SLR), supported by natural language processing techniques, aimed at identifying and classifying biases in AI-driven research within industrial contexts. Based on this meta-research approach, a taxonomy is proposed that maps biases across the stages of the scientific method as well as the operational layers of intelligent production systems. Statistical analysis confirms that biases are unevenly distributed, with a higher incidence in hypothesis formulation and results dissemination. The study also identifies emergent AI-related biases specific to industrial applications such as predictive maintenance, quality control, and digital twin management. Practical implications include stronger reliability in predictive analytics for manufacturers, improved accuracy in monitoring and rescue operations through transparent AI pipelines, and enhanced reproducibility for researchers across stages. Mitigation strategies are then discussed to safeguard research integrity and support trustworthy, bias-aware decision-making in Industry 4.0. Full article
24 pages, 1626 KB  
Article
Physical Layer Security Enhancement in IRS-Assisted Interweave CIoV Networks: A Heterogeneous Multi-Agent Mamba RainbowDQN Method
by Ruiquan Lin, Shengjie Xie, Wencheng Chen and Tao Xu
Sensors 2025, 25(20), 6287; https://doi.org/10.3390/s25206287 - 10 Oct 2025
Abstract
The Internet of Vehicles (IoV) relies on Vehicle-to-Everything (V2X) communications to enable cooperative perception among vehicles, infrastructures, and devices, where Vehicle-to-Infrastructure (V2I) links are crucial for reliable transmission. However, the openness of wireless channels exposes IoV to eavesdropping, threatening privacy and security. This paper investigates an Intelligent Reflecting Surface (IRS)-assisted interweave Cognitive IoV (CIoV) network to enhance physical layer security in V2I communications. A non-convex joint optimization problem involving spectrum allocation, transmit power for Vehicle Users (VUs), and IRS phase shifts is formulated. To address this challenge, a heterogeneous multi-agent (HMA) Mamba RainbowDQN algorithm is proposed, where homogeneous VUs and a heterogeneous secondary base station (SBS) act as distinct agents to simplify decision-making. Simulation results show that the proposed method significantly outperforms benchmark schemes, achieving a 13.29% improvement in secrecy rate and a 54.2% reduction in secrecy outage probability (SOP). These results confirm the effectiveness of integrating IRS and deep reinforcement learning (DRL) for secure and efficient V2I communications in CIoV networks. Full article
(This article belongs to the Section Sensor Networks)
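
For readers unfamiliar with the metrics: the secrecy rate is conventionally the positive part of the gap between the legitimate channel's capacity and the eavesdropper's, and the SOP is the probability that this rate falls below a target. A minimal sketch with illustrative SNR values, not the paper's channel model:

```python
import numpy as np

def secrecy_rate(snr_legit: float, snr_eve: float) -> float:
    """Secrecy rate in bit/s/Hz: [log2(1 + SNR_B) - log2(1 + SNR_E)]^+."""
    return max(0.0, np.log2(1.0 + snr_legit) - np.log2(1.0 + snr_eve))

def secrecy_outage_prob(snr_legit: np.ndarray, snr_eve: np.ndarray,
                        target_rate: float) -> float:
    """Fraction of channel realizations whose secrecy rate is below the target."""
    rates = np.maximum(0.0, np.log2(1 + snr_legit) - np.log2(1 + snr_eve))
    return float(np.mean(rates < target_rate))

rng = np.random.default_rng(0)
snr_b = rng.exponential(20.0, 100_000)  # Rayleigh fading -> exponential SNR
snr_e = rng.exponential(5.0, 100_000)

print(secrecy_rate(20.0, 5.0))                        # ~1.81 bit/s/Hz
print(secrecy_outage_prob(snr_b, snr_e, target_rate=1.0))
```
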
17 pages, 4221 KB  
Article
Fabrication and Oxidation Resistance of Metallic Ta-Reinforced High-Entropy (Ti,Zr,Hf,Nb,Ta)B₂ Ceramics
by Bowen Yuan, Qilong Guo, Hao Ying, Liang Hua, Ziqiu Shi, Shengcai Yang, Jing Wang and Xiufang Wang
Materials 2025, 18(19), 4642; https://doi.org/10.3390/ma18194642 - 9 Oct 2025
Abstract
High-entropy boride (HEB) ceramics combine ultra-high melting points, superior hardness, and compositional tunability, enabling service in extreme environments; however, difficult densification and limited fracture toughness still constrain their aerospace applications. In this study, metallic Ta was introduced into high-entropy (Ti₀.₂Zr₀.₂Hf₀.₂Nb₀.₂Ta₀.₂)B₂ as both a sintering aid and a toughening phase. Bulk HEB-Ta composites were fabricated by spark plasma sintering to investigate the effect of Ta content on densification behavior, microstructure, mechanical properties, and high-temperature oxidation resistance. The results show that an appropriate amount of Ta markedly promotes densification; at 10 vol% Ta, the open porosity reaches a minimum of 0.15%. Hardness and fracture toughness exhibit an increase-then-decrease trend with Ta content, attaining maxima at 15 vol% Ta (20.79 ± 0.17 GPa and 4.31 ± 0.12 MPa·m¹/², respectively). During oxidation at 800–1400 °C, the extent of oxidation increases with temperature, yet the composite with 10 vol% Ta shows the best oxidation resistance. This improvement arises from the formation of a viscous, protective Ta₂O₅-B₂O₃ glassy layer that effectively suppresses oxygen diffusion and enhances high-temperature stability. Overall, incorporating metallic Ta is an effective route to improve the manufacturability and service durability of HEB ceramics, providing a composition guideline and a mechanistic basis for simultaneously enhancing densification, toughness, and oxidation resistance. Full article
11 pages, 217 KB  
Article
Evaluation of Ganglion Cell–Inner Plexiform Layer Thickness in the Diagnosis of Preperimetric and Early Perimetric Glaucoma
by Ilona Anita Kaczmarek, Marek Edmund Prost and Radosław Różycki
J. Clin. Med. 2025, 14(19), 7117; https://doi.org/10.3390/jcm14197117 - 9 Oct 2025
Abstract
Background: Optical coherence tomography (OCT) is the main diagnostic technology used to detect damage to the retinal ganglion cells (RGCs) in glaucoma. However, it remains unclear which OCT parameter demonstrates the best diagnostic performance for eyes with early, especially preperimetric glaucoma (PPG). We determined the diagnostic performance of ganglion cell–inner plexiform layer (GCIPL) parameters using spectral-domain OCT (SD-OCT) in primary open-angle preperimetric and early perimetric glaucoma and compared them with optic nerve head (ONH) and peripapillary retinal nerve fiber layer (pRNFL) parameters. Methods: We analyzed 101 eyes: 36 normal eyes, 33 with PPG, and 32 with early perimetric glaucoma. All patients underwent Topcon SD–OCT imaging using the Optic Disc and Macular Vertical protocols. The diagnostic abilities of the GCIPL, rim area, vertical cup-to-disc ratio (CDR), and pRNFL were assessed using the area under the receiver operating characteristic curve (AUC). Results: For PPG, the AUCs ranged from 0.60 to 0.63 (GCIPL), 0.82 to 0.86 (ONH), and 0.49 to 0.75 (pRNFL). For early perimetric glaucoma, the AUCs for GCIPL and pRNFL ranged from 0.81 to 0.88 and 0.57 to 0.91, respectively, whereas both ONH parameters demonstrated an AUC of 0.89. The GCIPL parameters were significantly lower than both ONH parameters in detecting preperimetric glaucoma (p < 0.05). For early perimetric glaucoma, comparisons between the AUCs of the best-performing mGCIPL parameters and those of the best-performing pRNFL and ONH parameters revealed no significant differences in their diagnostic abilities (p > 0.05). Conclusions: GCIPL parameters exhibited a diagnostic performance comparable to that of ONH and pRNFL parameters for early perimetric glaucoma. However, their ability to detect preperimetric glaucoma was significantly lower than the ONH parameters. Full article
(This article belongs to the Section Ophthalmology)
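
The AUC comparisons reported here follow the standard receiver-operating-characteristic construction; a minimal sketch on invented thickness values (not study data), noting that thinner GCIPL indicates disease, so the score is negated:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = glaucoma, 0 = normal; illustrative GCIPL thicknesses in micrometres.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gcipl_um = np.array([82, 79, 72, 80, 68, 71, 65, 74])

# Lower thickness should score as "more diseased", hence the sign flip.
auc = roc_auc_score(labels, -gcipl_um)
print(f"GCIPL AUC = {auc:.2f}")  # 0.94 for these invented values
```
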
24 pages, 12411 KB  
Article
RANS-Based Aerothermal Database of LS89 Transonic Turbine Cascade Under Adiabatic and Cooled Wall Conditions
by Davide Fornasari, Stefano Regazzo, Ernesto Benini and Francesco De Vanna
Energies 2025, 18(19), 5321; https://doi.org/10.3390/en18195321 - 9 Oct 2025
Abstract
Modern gas turbines for aeroengines operate at ever-increasing inlet temperatures to maximize thermal efficiency, power output, and thrust, subjecting turbine blades to severe thermal and mechanical stresses. To ensure component durability, effective cooling strategies are indispensable, yet they strongly influence the underlying aerothermal behavior, particularly in transonic regimes where shock–boundary layer interactions are critical. In this work, a comprehensive Reynolds-Averaged Navier–Stokes (RANS) investigation is carried out on the LS89 transonic turbine cascade, considering both adiabatic and cooled wall conditions. Three operating cases, spanning progressively higher outlet Mach numbers (0.84, 0.875, and 1.020), are analyzed using multiple turbulence closures. To mitigate the well-known model dependence of RANS predictions, a model-averaging strategy is introduced, providing a more robust prediction framework and reducing the uncertainty associated with single-model results. A systematic mesh convergence study is also performed to ensure grid-independent solutions. The results show that while wall pressure and isentropic Mach number remain largely unaffected by wall cooling, viscous near-wall quantities and wake characteristics exhibit a pronounced sensitivity to the wall-to-recovery temperature ratio. To support further research and model benchmarking, the complete RANS database generated in this work is released as an open-source resource and made publicly available. Full article
(This article belongs to the Special Issue Advancements in Gas Turbine Aerothermodynamics)
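
The abstract does not detail the averaging scheme, so the sketch below shows one plausible minimal form: an unweighted mean over the closures' predictions, with the inter-model spread kept as a structural-uncertainty band. Station locations and values are invented.

```python
import numpy as np

# Illustrative wall heat-flux predictions (kW/m^2) from three turbulence
# closures, sampled at the same four chordwise stations.
q_sa   = np.array([41.0, 55.2, 63.8, 50.1])  # Spalart-Allmaras
q_sst  = np.array([39.5, 57.9, 60.2, 48.7])  # k-omega SST
q_keps = np.array([44.1, 52.3, 66.0, 52.9])  # k-epsilon

stack = np.vstack([q_sa, q_sst, q_keps])
q_avg = stack.mean(axis=0)            # model-averaged prediction
q_spread = stack.std(axis=0, ddof=1)  # inter-model spread as uncertainty

for i, (m, s) in enumerate(zip(q_avg, q_spread)):
    print(f"station {i}: q = {m:.1f} +/- {s:.1f} kW/m^2")
```
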
15 pages, 12388 KB  
Article
Evaluating a New Prototype of Plant Microbial Fuel Cell: Is the Electrical Performance Affected by Carbon Pellet Layering and Urea Treatment?
by Ilaria Brugellis, Marco Grassi, Piero Malcovati and Silvia Assini
Energies 2025, 18(19), 5320; https://doi.org/10.3390/en18195320 - 9 Oct 2025
Abstract
Plant Microbial Fuel Cells (PMFCs) represent a promising technology that uses electroactive bacteria to convert the chemical energy in organic matter into electrical energy. The addition of carbon pellets on the electrodes may increase the specific surface area available for bacterial colonization, while nutrients such as urea could enhance plant growth. Our study aims to address the following questions: (1) Does carbon pellet layering affect the electrical performance of PMFCs? (2) Does urea treatment of the plants used to feed the PMFCs affect the electrical performance? A new prototype of PMFC has been tested: the plant pot is on top, and drainage water percolates into the tub below, which contains the Microbial Fuel Cells (MFCs). To evaluate the best layering setup, two groups of MFCs were constructed: a “Double layer” group (with carbon pellets on both the cathode and the anode) and a “Single layer” group (with graphite only on the cathode). All MFCs were plant-fed with Spathiphyllum lanceifolium L. leachate. After one year, each of the two sets was divided into two subsets: one wetted with percolate from plants fertilized with urea, and the other with percolate from unfertilized plants. Open-circuit voltage (mV), short-circuit peak current, and short-circuit current after 5 s (mA) were measured weekly. The “Single layer” PMFCs performed better than the “Double layer” group most of the time, with higher and steadier values for voltage and calculated power. The undesirable results regarding the urea treatment suggest using a less concentrated urea solution: the treatment may provide consistency but appears to limit voltage and peak values, particularly in the “Double layer” configuration. Full article
(This article belongs to the Section D2: Electrochem: Batteries, Fuel Cells, Capacitors)
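
The abstract does not state how power was calculated from the weekly readings; one common first-order estimate treats the cell as a linear source between open-circuit voltage and short-circuit current, which puts the maximum power point at Voc·Isc/4. A sketch with invented readings:

```python
def pmfc_max_power_mw(v_oc_mv: float, i_sc_ma: float) -> float:
    """Max power (mW) of a linear V-I source: P = (Voc/2) * (Isc/2)."""
    return (v_oc_mv / 1000.0) * i_sc_ma / 4.0  # volts * milliamps = milliwatts

# Illustrative weekly reading, not a value from the study.
print(pmfc_max_power_mw(v_oc_mv=450.0, i_sc_ma=2.0))  # ~0.22 mW
```
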
26 pages, 1456 KB  
Article
Collaborative Design Through Authentic Design Challenges: Preservice Teachers’ Perceptions of Digital Competence Development and SQD-Aligned Supports
by Bram Cabbeke, Britt Adams, Tijs Rotsaert and Tammy Schellens
Educ. Sci. 2025, 15(10), 1331; https://doi.org/10.3390/educsci15101331 - 8 Oct 2025
Abstract
Collaborative design is recognized as a promising approach to strengthening preservice teachers’ digital competence, yet its potential when design tasks approximate the complexities of classroom practice remains underexplored. This mixed-methods study investigated a Synthesis of Qualitative Evidence (SQD)-aligned collaborative design course in which 23 final-year preservice secondary mathematics teachers, organized in six teams, spent ten weeks designing technology-enhanced lesson materials for authentic design challenges posed by in-service teachers. Using questionnaires and interviews, this study explored preservice teachers’ perceived digital competence development and their perceptions of the SQD-aligned course supports. Regarding competence development, participants indicated increases in self-assessed cognitive (technological knowledge, TPACK) and motivational (technology-integration self-efficacy, perceived ease of use) digital competence dimensions. Qualitative findings linked these perceptions to heightened technological awareness and confidence but noted limited tool mastery due to reliance on familiar technologies and efficiency-driven task division. Concerning course supports, authentic challenges enhanced motivation and context-sensitive reasoning, while layering scaffolds (guidelines, coaching, and microteaching feedback) supported navigating the open-endedness of the design task. Yet calls for earlier feedback and technology-related modeling underscore the need for further scaffolding to adequately support autonomy in technology selection and integration. Findings inform teacher education course design for fostering preservice teachers’ digital competencies. Full article
(This article belongs to the Special Issue Empowering Teacher Education with Digital Competences)
29 pages, 9465 KB  
Article
Modeling Seasonal Fire Probability in Thailand: A Machine Learning Approach Using Multiyear Remote Sensing Data
by Enikoe Bihari, Karen Dyson, Kayla Johnston, Daniel Marc G. dela Torre, Akkarapon Chaiyana, Karis Tenneson, Wasana Sittirin, Ate Poortinga, Veerachai Tanpipat, Kobsak Wanthongchai, Thannarot Kunlamai, Elijah Dalton, Chanarun Saisaward, Marina Tornorsam, David Ganz and David Saah
Remote Sens. 2025, 17(19), 3378; https://doi.org/10.3390/rs17193378 - 7 Oct 2025
Abstract
Seasonal fires in northern Thailand are a persistent environmental and public health concern, yet existing fire probability mapping approaches in Thailand rely heavily on subjective multi-criteria analysis (MCA) methods and temporally static data aggregation methods. To address these limitations, we present a flexible, replicable, and operationally viable seasonal fire probability mapping methodology using a Random Forest (RF) machine learning model in the Google Earth Engine (GEE) platform. We trained the model on historical fire occurrence and fire predictor layers from 2016–2023 and applied it to 2024 conditions to generate a probabilistic fire prediction. Our novel approach improves upon existing operational methods and scientific literature in several ways. It uses a more representative sample design which is agnostic to the burn history of fire presences and absences, pairs fire and fire predictor data from each year to account for interannual variation in conditions, empirically refines the most influential fire predictors from a comprehensive set of predictors, and provides a reproducible and accessible framework using GEE. Predictor variables include both socioeconomic and environmental drivers of fire, such as topography, fuels, potential fire behavior, forest type, vegetation characteristics, climate, water availability, crop type, recent burn history, and human influence and accessibility. The model achieves an Area Under the Curve (AUC) of 0.841 when applied to 2016–2023 data and 0.848 when applied to 2024 data, indicating strong discriminatory power despite the additional spatial and temporal variability introduced by our sample design. The highest fire probabilities emerge in forested and agricultural areas at mid elevations and near human settlements and roads, which aligns well with the known anthropogenic drivers of fire in Thailand. Distinct areas of model uncertainty are also apparent in cropland and forests which are only burned intermittently, highlighting the importance of accounting for localized burning cycles. Variable importance analysis using the Gini Impurity Index identifies both natural and anthropogenic predictors as key and nearly equally important predictors of fire, including certain forest and crop types, vegetation characteristics, topography, climate, human influence and accessibility, water availability, and recent burn history. Our findings demonstrate the heavy influence of data preprocessing and model design choices on model results. The model outputs are provided as interpretable probability maps and the methods can be adapted to future years or augmented with local datasets. Our methodology presents a scalable advancement in wildfire probability mapping with machine learning and open-source tools, particularly for data-constrained landscapes. It will support Thailand’s fire managers in proactive fire response and planning and also inform broader regional fire risk assessment efforts. Full article
(This article belongs to the Special Issue Remote Sensing in Hazards Monitoring and Risk Assessment)
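
The core of such a workflow is compact in the GEE Python API. In the sketch below, the asset IDs, band stack, and sample collection are hypothetical placeholders; ee.Classifier.smileRandomForest with PROBABILITY output mode is the documented route to a probabilistic Random Forest prediction.

```python
import ee

ee.Initialize()

# Hypothetical assets: a multi-band predictor stack for the training years and
# sample points carrying a 'fire' property (1 = burned, 0 = unburned).
predictors = ee.Image("users/example/fire_predictors_2016_2023")
samples = ee.FeatureCollection("users/example/fire_samples")

training = predictors.sampleRegions(
    collection=samples, properties=["fire"], scale=500
)

rf = (
    ee.Classifier.smileRandomForest(numberOfTrees=500)
    .setOutputMode("PROBABILITY")   # per-pixel fire probability, not a class
    .train(
        features=training,
        classProperty="fire",
        inputProperties=predictors.bandNames(),
    )
)

# Apply the trained model to the target year's predictor stack.
fire_prob_2024 = ee.Image("users/example/fire_predictors_2024").classify(rf)
```
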
23 pages, 7644 KB  
Article
Optimized Venturi-Ejector Adsorption Mechanism for Underwater Inspection Robots: Design, Simulation, and Field Testing
by Lei Zhang, Anxin Zhou, Yao Du, Kai Yang, Weidong Zhu and Sisi Zhu
J. Mar. Sci. Eng. 2025, 13(10), 1913; https://doi.org/10.3390/jmse13101913 - 5 Oct 2025
Abstract
Stable adhesion on non-magnetic, steep, and irregular underwater surfaces (e.g., concrete dams with cracks or biofilms) remains a challenge for inspection robots. This study develops a novel adsorption mechanism based on the synergistic operation of a Venturi-ejector and a composite suction cup. The mechanism utilizes the Venturi effect to generate stable negative pressure via hydrodynamic entrainment and innovatively adopts a composite suction cup—comprising a rigid base and a dual-layer EPDM sponge (closed-cell + open-cell)—to achieve adaptive sealing, thereby reliably applying the efficient negative-pressure generation capability to rough underwater surfaces. Theoretical modeling established the quantitative relationship between adsorption force (F) and key parameters (nozzle/throat diameters, suction cup radius). CFD simulations revealed optimal adsorption at a nozzle diameter of 4.4 mm and throat diameter of 5.8 mm, achieving a peak simulated F of 520 N. Experiments demonstrated a maximum F of 417.9 N at 88.9 W power. The composite seal significantly reduced leakage on high-roughness surfaces (Ra ≥ 6 mm) compared to single-layer designs. Integrated into an inspection robot, the system provided stable adhesion (>600 N per single adsorption device) on vertical walls and reliable operation under real-world conditions at Balnetan Dam, enabling mechanical-arm-assisted maintenance. Full article
(This article belongs to the Section Ocean Engineering)
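
At first order, the holding force in such a system is simply the negative-pressure differential acting over the sealed cup area, F = ΔP·πr²; the paper's model then ties ΔP to the nozzle and throat diameters through the Venturi relations. The numbers below are illustrative, not parameters from the study:

```python
import math

def adsorption_force_n(delta_p_kpa: float, cup_radius_m: float) -> float:
    """First-order holding force: pressure differential times sealed cup area."""
    return delta_p_kpa * 1e3 * math.pi * cup_radius_m ** 2

# e.g., a 30 kPa negative pressure acting over a 75 mm-radius cup:
print(adsorption_force_n(30.0, 0.075))  # ~530 N
```
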
19 pages, 5700 KB  
Article
Restoring Spectral Symmetry in Gradients: A Normalization Approach for Efficient Neural Network Training
by Zhigao Huang, Nana Gong, Quanfa Li, Tianying Wu, Shiyan Zheng and Miao Pan
Symmetry 2025, 17(10), 1648; https://doi.org/10.3390/sym17101648 - 4 Oct 2025
Abstract
Neural network training often suffers from spectral asymmetry, where gradient energy is disproportionately allocated to high-frequency components, leading to suboptimal convergence and reduced efficiency. This paper introduces Gradient Spectral Normalization (GSN), a novel optimization technique designed to restore spectral symmetry by dynamically reshaping gradient distributions in the frequency domain. GSN transforms gradients using FFT, applies layer-specific energy redistribution to enforce a symmetric balance between low- and high-frequency components, and reconstructs the gradients for parameter updates. By tailoring normalization schedules for attention and MLP layers, GSN enhances inference performance and improves model accuracy with minimal overhead. Our approach leverages the principle of symmetry to create more stable and efficient neural systems, offering a practical solution for resource-constrained environments. This frequency-domain paradigm, grounded in symmetry restoration, opens new directions for neural network optimization with broad implications for large-scale AI systems. Full article
(This article belongs to the Section Computer)
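
The pipeline described above (forward FFT, band-wise energy redistribution, inverse FFT) admits a compact sketch. The 70/30 energy split and the cutoff at a quarter of the spectrum are illustrative choices, not the paper's layer-specific schedules:

```python
import numpy as np

def gradient_spectral_normalize(grad, low_frac=0.7, cutoff=0.25):
    """Redistribute rfft-bin energy so a fraction `low_frac` sits below
    `cutoff` (fraction of the spectrum), then transform back."""
    spec = np.fft.rfft(grad)
    k = max(1, int(cutoff * spec.size))   # boundary bin between the two bands
    e_low = np.sum(np.abs(spec[:k]) ** 2)
    e_high = np.sum(np.abs(spec[k:]) ** 2)
    e_tot = e_low + e_high
    if e_low > 0.0 and e_high > 0.0:
        spec[:k] *= np.sqrt(low_frac * e_tot / e_low)            # rescale low band
        spec[k:] *= np.sqrt((1.0 - low_frac) * e_tot / e_high)   # rescale high band
    return np.fft.irfft(spec, n=grad.size)

rng = np.random.default_rng(0)
g = rng.standard_normal(1024)   # stand-in for a flattened layer gradient
g_gsn = gradient_spectral_normalize(g)
print(np.linalg.norm(g), np.linalg.norm(g_gsn))  # norms before/after redistribution
```
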
31 pages, 1209 KB  
Article
MiMapper: A Cloud-Based Multi-Hazard Mapping Tool for Nepal
by Catherine A. Price, Morgan Jones, Neil F. Glasser, John M. Reynolds and Rijan B. Kayastha
GeoHazards 2025, 6(4), 63; https://doi.org/10.3390/geohazards6040063 - 3 Oct 2025
Abstract
Nepal is highly susceptible to natural hazards, including earthquakes, flooding, and landslides, all of which may occur independently or in combination. Climate change is projected to increase the frequency and intensity of these natural hazards, posing growing risks to Nepal’s infrastructure and development. To the authors’ knowledge, the majority of existing geohazard research in Nepal is typically limited to single hazards or localised areas. To address this gap, MiMapper was developed as a cloud-based, open-access multi-hazard mapping tool covering the full national extent. Built on Google Earth Engine and using only open-source spatial datasets, MiMapper applies an Analytical Hierarchy Process (AHP) to generate hazard indices for earthquakes, floods, and landslides. These indices are combined into an aggregated hazard layer and presented in an interactive, user-friendly web map that requires no prior GIS expertise. MiMapper uses a standardised hazard categorisation system for all layers, providing pixel-based scores for each layer between 0 (Very Low) and 1 (Very High). The modal and mean hazard categories for aggregated hazard in Nepal were Low (47.66% of pixels) and Medium (45.61% of pixels), respectively, but there was high spatial variability in hazard categories depending on hazard type. The validation of MiMapper’s flooding and landslide layers showed an accuracy of 0.412 and 0.668, sensitivity of 0.637 and 0.898, and precision of 0.116 and 0.627, respectively. These validation results show strong overall performance for landslide prediction, whilst broad-scale exposure patterns are predicted for flooding but may lack the resolution or sensitivity to fully represent real-world flood events. Consequently, MiMapper is a useful tool to support initial hazard screening by professionals in urban planning, infrastructure development, disaster management, and research. It can contribute to a Level 1 Integrated Geohazard Assessment as part of the evaluation for improving the resilience of hydropower schemes to the impacts of climate change. MiMapper also offers potential as a teaching tool for exploring hazard processes in data-limited, high-relief environments such as Nepal. Full article
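
The AHP step derives layer weights from a pairwise-comparison matrix via its principal eigenvector, with a consistency ratio guarding against incoherent judgments. The 3×3 matrix below is illustrative, not MiMapper's actual expert judgments:

```python
import numpy as np

# Saaty-scale pairwise comparisons among earthquake, flood, landslide hazards:
# A[i, j] = relative importance of hazard i over hazard j (illustrative values).
A = np.array([
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, idx].real)
weights /= weights.sum()                  # AHP weights, summing to 1

# Consistency ratio CR = CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty);
# CR < 0.1 is the usual acceptability threshold.
ci = (eigvals.real[idx] - 3) / (3 - 1)
print("weights:", np.round(weights, 3), " CR:", round(ci / 0.58, 3))

# Aggregated hazard for one pixel = weighted sum of its 0-1 hazard indices.
pixel_scores = np.array([0.8, 0.3, 0.5])  # earthquake, flood, landslide
print("aggregated hazard:", float(weights @ pixel_scores))
```
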