Search Results (7,553)

Search Parameters:
Keywords = dataset variability

27 pages, 8900 KB  
Article
Pre-Dog-Leg: A Feature Optimization Method for Visual Inertial SLAM Based on Adaptive Preconditions
by Junyang Zhao, Shenhua Lv, Huixin Zhu, Yaru Li, Han Yu, Yutie Wang and Kefan Zhang
Sensors 2025, 25(19), 6161; https://doi.org/10.3390/s25196161 (registering DOI) - 4 Oct 2025
Abstract
To address the ill-posedness of the Hessian matrix in monocular visual-inertial SLAM (Simultaneous Localization and Mapping) caused by the unobservable depth of feature points, which leads to convergence difficulties and reduced robustness, this paper proposes a Pre-Dog-Leg feature optimization method based on an adaptive preconditioner. First, we propose a robust multi-candidate initialization method that effectively circumvents erroneous depth initialization by introducing multiple depth hypotheses and geometric consistency constraints. Second, we address the ill-conditioning of the Hessian matrix of the feature points by constructing a hybrid SPAI-Jacobi adaptive preconditioner, which detects ill-conditioning and dynamically enables preconditioning. Finally, we incorporate the hybrid adaptive preconditioner into the traditional Dog-Leg numerical optimization method. To counter the degraded convergence that arises when solving ill-conditioned problems, we map the optimization problem from the original parameter space to a well-conditioned preconditioned space; optimization equivalence is maintained by variable recovery. Experiments on the EuRoC dataset show that the method reduces the condition number of the Hessian matrix by a factor of 7.9, effectively suppresses outliers, and significantly shortens the overall convergence time. In terms of trajectory error, the absolute trajectory error is reduced by up to 16.48% relative to RVIO2 on the MH_01 sequence, by 20.83% relative to VINS-mono on the MH_02 sequence, and by up to 14.73% relative to VINS-mono and 34.0% relative to OpenVINS on the highly dynamic MH_05 sequence, indicating that the algorithm achieves higher localization accuracy and stronger system robustness.
(This article belongs to the Section Navigation and Positioning)
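
The paper's hybrid SPAI-Jacobi scheme is not reproduced here; the sketch below is a minimal illustration of the underlying idea, assuming a plain Jacobi (diagonal) preconditioner and a synthetic ill-conditioned Hessian:

```python
import numpy as np

# Build a deliberately ill-conditioned SPD "Hessian" (poorly observable
# feature depths produce wildly different curvature scales per column).
rng = np.random.default_rng(0)
J = rng.standard_normal((100, 6)) * np.array([1e3, 1e3, 1.0, 1.0, 1e-3, 1e-3])
H = J.T @ J + 1e-9 * np.eye(6)

# Jacobi preconditioner: M = diag(H)^(-1/2), applied symmetrically.
M = np.diag(1.0 / np.sqrt(np.diag(H)))
H_pre = M @ H @ M

print(f"cond(H)     = {np.linalg.cond(H):.3e}")
print(f"cond(M H M) = {np.linalg.cond(H_pre):.3e}")

# A dog-leg step would now be computed in the preconditioned space
# (solve (M H M) y = -M g) and mapped back via x = M y (variable recovery).
```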

25 pages, 6271 KB  
Article
Estimating Fractional Land Cover Using Sentinel-2 and Multi-Source Data with Traditional Machine Learning and Deep Learning Approaches
by Sergio Sierra, Rubén Ramo, Marc Padilla, Laura Quirós and Adolfo Cobo
Remote Sens. 2025, 17(19), 3364; https://doi.org/10.3390/rs17193364 (registering DOI) - 4 Oct 2025
Abstract
Land cover mapping is essential for territorial management due to its links with ecological, hydrological, climatic, and socioeconomic processes. Traditional methods use discrete classes per pixel, but this study proposes estimating cover fractions with Sentinel-2 imagery (20 m) and AI. We employed the French Land cover from Aerospace ImageRy (FLAIR) dataset (810 km² in France, 19 classes), with labels co-registered with Sentinel-2 to derive precise fractional proportions per pixel. From these references, we generated training sets combining spectral bands, derived indices, and auxiliary data (climatic and temporal variables). Various machine learning models—including XGBoost, three deep neural network (DNN) architectures with different depths, and convolutional neural networks (CNNs)—were trained and evaluated to identify the optimal configuration for fractional cover estimation. Model validation on the test set employed RMSE, MAE, and R² metrics at both the pixel level (20 m Sentinel-2) and the scene level (100 m FLAIR). The training set integrating spectral bands, vegetation indices, and auxiliary variables yielded the best MAE and RMSE results. Among all models, DNN2 achieved the highest performance, with a pixel-level RMSE of 13.83 and MAE of 5.42, and a scene-level RMSE of 4.94 and MAE of 2.36. This fractional approach paves the way for advanced remote sensing applications, including continuous cover-change monitoring, carbon footprint estimation, and sustainability-oriented territorial planning.
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)
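
A rough sketch of the fractional-cover regression setup, assuming synthetic stand-ins for the Sentinel-2 features and FLAIR-derived fractions ("DNN2" here is just a generic two-hidden-layer MLP, not the paper's architecture):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Synthetic stand-in: 20 predictors (bands, indices, auxiliary variables)
# and 19 per-pixel cover fractions that sum to 1.
rng = np.random.default_rng(42)
X = rng.random((5000, 20))
logits = X @ rng.standard_normal((20, 19))
y = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Multi-output MLP regressor predicting all class fractions at once.
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

pred = np.clip(model.predict(X_te), 0, 1)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE={rmse:.3f}  MAE={mean_absolute_error(y_te, pred):.3f}  "
      f"R2={r2_score(y_te, pred):.3f}")
```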

25 pages, 7875 KB  
Article
Intelligent Optimal Seismic Design of Buildings Based on the Inversion of Artificial Neural Networks
by Augusto Montisci, Francesca Pibi, Maria Cristina Porcu and Juan Carlos Vielma
Appl. Sci. 2025, 15(19), 10713; https://doi.org/10.3390/app151910713 (registering DOI) - 4 Oct 2025
Abstract
The growing need for safe, cheap and sustainable earthquake-resistant buildings means that efficient methods for optimal seismic design must be found. The complexity and nonlinearity of the problem can be addressed using advanced automated techniques. This paper presents an intelligent three-step procedure for optimally designing earthquake-resistant buildings based on the training (1st step) and successive inversion (2nd step) of Multi-Layer Perceptron Neural Networks. This involves solving the inverse problem of determining the optimal design parameters that meet pre-assigned, code-based performance targets, by means of a gradient-based optimization algorithm (3rd step). The effectiveness of the procedure was tested using an archetypal multistory, moment-resisting, concentrically braced steel frame with active tension diagonal bracing. The input dataset was obtained by varying four design parameters. The output dataset resulted from performance variables obtained through non-linear dynamic analyses carried out under three earthquakes consistent with the Chilean code spectrum, for all cases considered. Three spectrum-consistent records are sufficient for code-based seismic design, while each seismic excitation provides a wealth of information about the behavior of the structure, highlighting potential issues. For optimization purposes, only information relevant to critical sections was used as a performance indicator. Thus, the dataset for training consisted of pairs of design parameter sets and their corresponding performance indicator sets. A dedicated MLP was trained for each of the outputs over the entire dataset, which greatly reduced the total complexity of the problem without compromising the effectiveness of the solution. Due to the comparatively low number of cases considered, the leave-one-out method was adopted, which made the validation process more rigorous than usual since each case acted once as a validation set. The trained network was then inverted to find the input design search domain, where a cost-effective gradient-based algorithm determined the optimal design parameters. The feasibility of the solution was tested through numerical analyses, which proved the effectiveness of the proposed artificial intelligence-aided optimal seismic design procedure. Although the proposed methodology was tested on an archetypal building, the significance of the results highlights the effectiveness of the three-step procedure in solving complex optimization problems. This paves the way for its use in the design optimization of different kinds of earthquake-resistant buildings.
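
A minimal sketch of the train-then-invert idea, assuming a toy surrogate network and performance target rather than the paper's analysis-derived dataset: the MLP is first fit to map design parameters to a performance indicator, then frozen and "inverted" by gradient descent on its inputs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1st step: train a surrogate MLP mapping 4 design parameters to a
# performance indicator (synthetic data stands in for dynamic analyses).
X = torch.rand(500, 4)
y = (X ** 2).sum(dim=1, keepdim=True)          # toy performance measure
net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

# 2nd/3rd steps: freeze the network and invert it, i.e. gradient-descend
# on the *inputs* until the predicted performance hits the target.
target = torch.tensor([[1.0]])
x = torch.full((1, 4), 0.5, requires_grad=True)
opt_x = torch.optim.Adam([x], lr=1e-2)
for _ in range(1000):
    opt_x.zero_grad()
    loss = nn.functional.mse_loss(net(x), target)
    loss.backward()
    opt_x.step()

print("design parameters:", x.detach().numpy(), "-> predicted:", net(x).item())
```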

18 pages, 837 KB  
Article
Physics-Informed Feature Engineering and R²-Based Signal-to-Noise Ratio Feature Selection to Predict Concrete Shear Strength
by Trevor J. Bihl, William A. Young II and Adam Moyer
Mathematics 2025, 13(19), 3182; https://doi.org/10.3390/math13193182 (registering DOI) - 4 Oct 2025
Abstract
Accurate prediction of reinforced concrete shear strength is essential for structural safety, yet datasets often contain a mix of raw geometric and material properties alongside physics-informed engineered features, making optimal feature selection challenging. This study introduces a statistically principled framework that advances feature reduction for neural networks in three novel ways. First, it extends the artificial neural network-based signal-to-noise ratio (ANN-SNR) method, previously limited to classification, into regression tasks for the first time. Second, it couples ANN-SNR with a confidence-interval (CI)-based stopping rule, using the lower bound of the baseline ANN’s R² confidence interval as a rigorous statistical threshold for determining when feature elimination should cease. Third, it systematically evaluates both raw experimental variables and physics-informed engineered features, showing how their combination enhances both robustness and interpretability. Applied to experimental concrete shear strength data, the framework revealed that many low-SNR features in conventional formulations contribute little to predictive performance and can be safely removed. In contrast, hybrid models that combined key raw and engineered features consistently yielded the strongest performance. Overall, the proposed method reduced the input feature set by approximately 45% while maintaining results statistically indistinguishable from baseline and fully optimized models (R² ≈ 0.85). These findings demonstrate that ANN-SNR with CI-based stopping provides a defensible and interpretable pathway for reducing model complexity in reinforced concrete shear strength prediction, offering practical benefits for design efficiency without compromising reliability.
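
The exact ANN-SNR statistic is not given in the abstract; the sketch below assumes first-layer weight magnitude as a rough SNR proxy and shows the CI-based stopping rule on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=12, n_informative=5,
                       noise=10.0, random_state=0)
feats = list(range(X.shape[1]))

def r2_scores(cols):
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    return cross_val_score(m, X[:, cols], y, cv=5, scoring="r2")

# Baseline R2 and the lower bound of its ~95% confidence interval,
# used as the stopping threshold.
base = r2_scores(feats)
lower = base.mean() - 1.96 * base.std(ddof=1) / np.sqrt(len(base))

while len(feats) > 1:
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X[:, feats], y)
    snr = np.abs(m.coefs_[0]).sum(axis=1)      # per-feature SNR proxy
    trial = [f for i, f in enumerate(feats) if i != snr.argmin()]
    if r2_scores(trial).mean() < lower:        # CI-based stopping rule
        break
    feats = trial

print(f"kept {len(feats)}/{X.shape[1]} features; threshold R2 >= {lower:.3f}")
```
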
22 pages, 2624 KB  
Article
Seismic Damage Assessment of RC Structures After the 2015 Gorkha, Nepal, Earthquake Using Gradient Boosting Classifiers
by Murat Göçer, Hakan Erdoğan, Baki Öztürk and Safa Bozkurt Coşkun
Buildings 2025, 15(19), 3577; https://doi.org/10.3390/buildings15193577 (registering DOI) - 4 Oct 2025
Abstract
Accurate prediction of earthquake-induced building damage is essential for timely disaster response and effective risk mitigation. This study explores a machine learning (ML)-based classification approach using data from the 2015 Gorkha, Nepal earthquake, with a specific focus on reinforced concrete (RC) structures. The original dataset from the 2015 Nepal earthquake contained 762,094 building entries across 127 variables describing structural, functional, and contextual characteristics. Three ensemble ML models, Gradient Boosting Machine (GBM), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM), were trained and tested on both the full dataset and a filtered RC-only subset. Two target variables were considered: a three-class variable (damage_class) and the original five-level damage grade (damage_grade). To address class imbalance, oversampling and undersampling techniques were applied, and model performance was evaluated using accuracy and F1 scores. The results showed that LightGBM consistently outperformed the other models, especially when oversampling was applied. For the RC dataset, LightGBM achieved up to 98% accuracy for damage_class and 93% accuracy for damage_grade, along with high F1 scores ranging between 0.84 and 1.00 across all classes. Feature importance analysis revealed that structural characteristics such as building area, age, and height were the most influential predictors of damage. These findings highlight the value of building-type-specific modeling combined with class balancing techniques to improve the reliability and generalizability of ML-based earthquake damage prediction.
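
A minimal sketch of the oversample-then-boost pipeline, assuming synthetic stand-ins for the RC-only subset and its damage_class labels:

```python
import numpy as np
from lightgbm import LGBMClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Synthetic stand-in: structural predictors (e.g. area, age, height) and an
# imbalanced three-class damage_class target.
rng = np.random.default_rng(0)
X = rng.random((4000, 6))
y = rng.choice([0, 1, 2], size=4000, p=[0.7, 0.2, 0.1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample minority damage classes before fitting.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = LGBMClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  "
      f"macro-F1={f1_score(y_te, pred, average='macro'):.3f}")
```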

18 pages, 3251 KB  
Article
Classifying Advanced Driver Assistance System (ADAS) Activation from Multimodal Driving Data: A Real-World Study
by Gihun Lee, Kahyun Lee and Jong-Uk Hou
Sensors 2025, 25(19), 6139; https://doi.org/10.3390/s25196139 (registering DOI) - 4 Oct 2025
Abstract
Identifying the activation status of advanced driver assistance systems (ADAS) in real-world driving environments is crucial for safety, responsibility attribution, and accident forensics. Unlike prior studies that primarily rely on simulation-based settings or unsynchronized data, we collected a multimodal dataset comprising synchronized controller area network (CAN)-bus and smartphone-based inertial measurement unit (IMU) signals from drivers on consistent highway sections under both ADAS-enabled and manual modes. Using these data, we developed lightweight classification pipelines based on statistical and deep learning approaches to explore the feasibility of distinguishing ADAS operation. Our analyses revealed systematic behavioral differences between modes, particularly in speed regulation and steering stability, highlighting how ADAS reduces steering variability and stabilizes speed control. Although classification accuracy was moderate, this study provides one of the first data-driven demonstrations of ADAS status detection under naturalistic conditions. Beyond classification, the released dataset enables systematic behavioral analysis and offers a valuable resource for advancing research on driver monitoring, adaptive ADAS algorithms, and accident forensics.
(This article belongs to the Special Issue Applications of Machine Learning in Automotive Engineering)
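
A minimal sketch of a statistical classification pipeline of the kind described, assuming synthetic per-window features (speed and steering variability) rather than the released CAN/IMU dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for windowed CAN/IMU traces: ADAS-enabled windows
# show lower steering variability and steadier speed than manual ones.
rng = np.random.default_rng(1)

def window_features(adas: bool, n: int):
    speed_std = rng.normal(0.5 if adas else 1.5, 0.2, n).clip(0.05)
    steer_std = rng.normal(0.2 if adas else 0.8, 0.1, n).clip(0.01)
    return np.column_stack([speed_std, steer_std])

X = np.vstack([window_features(True, 500), window_features(False, 500)])
y = np.array([1] * 500 + [0] * 500)    # 1 = ADAS on, 0 = manual

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```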

21 pages, 406 KB  
Article
DRBoost: A Learning-Based Method for Steel Quality Prediction
by Yang Song, Shuaida He and Qiyu Wu
Symmetry 2025, 17(10), 1644; https://doi.org/10.3390/sym17101644 - 3 Oct 2025
Abstract
Steel products play an important role in daily production and life as a common production material. Currently, the quality of steel products is judged by manual experience. However, the varying inspection criteria employed by human operators and the complex factors and mechanisms in the steelmaking process may lead to inaccuracies. To address these issues, we propose a learning-based method for steel quality prediction, named DRBoost, which is based on multiple machine learning techniques, including Decision tree, Random forest, and the LSBoost algorithm. In our method, the decision tree clearly captures the nonlinear relationships between features and serves as a solid baseline for making preliminary predictions. Random forest enhances the model’s robustness and avoids overfitting by aggregating multiple decision trees. LSBoost uses gradient descent training to assign contribution coefficients to different kinds of raw materials to obtain more accurate predictions. Five key chemical elements, namely carbon, silicon, manganese, phosphorus, and sulfur, which significantly influence the major performance characteristics of steel products, are selected, and steel quality prediction is conducted by predicting the contents of these chemical elements. Multiple models are constructed to predict the contents of the five key chemical elements in steel products. These models are symmetrically complementary, meeting the requirements of different production scenarios and forming a more accurate and universal method for predicting steel product quality. In addition, the prediction method provides a symmetric quality control system for steel product production. Experimental evaluations are conducted on a dataset of 2012 samples from a steel plant in Liaoning Province, China. The input variables are various raw material usages, while the outputs are the contents of the five key chemical elements that influence the quality of steel products. The experimental results show that the models demonstrate their advantages on different performance metrics and are applicable to practical steelmaking scenarios.
(This article belongs to the Section Computer)
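
A minimal sketch of the three-model setup, assuming synthetic raw-material inputs and using scikit-learn's GradientBoostingRegressor (squared-error loss) as a stand-in for LSBoost:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: raw-material usages in, element contents out.
rng = np.random.default_rng(0)
X = rng.random((2012, 10))                       # raw material usages
elements = ["C", "Si", "Mn", "P", "S"]
Y = X @ rng.random((10, 5)) + rng.normal(0, 0.05, (2012, 5))

models = {
    "decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    # Gradient boosting with squared-error loss approximates LSBoost
    # (least-squares boosting).
    "LSBoost-like": GradientBoostingRegressor(random_state=0),
}

# One model per element content, as in the per-element prediction setup.
for j, elem in enumerate(elements):
    for name, model in models.items():
        r2 = cross_val_score(model, X, Y[:, j], cv=5, scoring="r2").mean()
        print(f"{elem}: {name:13s} R2={r2:.3f}")
```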

19 pages, 8892 KB  
Article
Territorial Context and Spatial Interactions: A Case Study on the Erasmus K1 Mobility Datasets
by Alexandru Rusu, Octavian Groza, Nicolae Popa and Anita Denisa Caizer
Geographies 2025, 5(4), 55; https://doi.org/10.3390/geographies5040055 - 3 Oct 2025
Abstract
This study evaluates the impact of different territorial contexts on academic mobility within the framework of the Erasmus Programme, using data on Key Action 1 exchanges between 2015 and 2023. Using official EU datasets and a gravity model framework, the research investigates how economic performance, geographical distance, EU membership, AUF (Agence Universitaire de la Francophonie) regional affiliation, and state contiguity shape international academic flows. The research developed two gravity models: one aimed to measure the potential barriers to academic flows through a residuals analysis, and the second integrated territorial delineations as predictors. In both models, the core of the explanatory variables is formed by indicators describing the economic performance of states and the distance between countries. When applied, the models converge in emphasizing that the inclusion of states in different territorial configurations has a strong effect on the structuring of academic flows. This suggests that the Erasmus Programme exhibits an overconcentration of flows in a limited number of countries, raising the question of a more polycentric strategy and a reshaping of the funding mechanisms. Even if the gravity models behave well given the limited number of predictors, further studies may need to incorporate qualitative indicators for a more comprehensive evaluation of the interactions.
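
A minimal gravity-model sketch on synthetic dyads (all variable names hypothetical), showing the log-linear specification with territorial dummies and the residual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for dyadic mobility flows between country pairs.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gdp_o": rng.lognormal(10, 1, n),    # origin-country GDP
    "gdp_d": rng.lognormal(10, 1, n),    # destination-country GDP
    "dist":  rng.lognormal(7, 0.5, n),   # inter-capital distance
    "eu_both": rng.integers(0, 2, n),    # both countries EU members
    "contig":  rng.integers(0, 2, n),    # shared border
})
df["flow"] = np.exp(0.8 * np.log(df.gdp_o) + 0.7 * np.log(df.gdp_d)
                    - 1.1 * np.log(df.dist) + 0.5 * df.eu_both
                    + rng.normal(0, 0.3, n))

# Log-linear gravity model; territorial delineations enter as dummies.
m = smf.ols("np.log(flow) ~ np.log(gdp_o) + np.log(gdp_d) + np.log(dist)"
            " + eu_both + contig", data=df).fit()
print(m.summary().tables[1])

# Residual analysis: large positive residuals flag dyads whose exchanges
# exceed what economy and distance alone would predict.
df["residual"] = m.resid
```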

32 pages, 2499 KB  
Article
MiMapper: A Cloud-Based Multi-Hazard Mapping Tool for Nepal
by Catherine A. Price, Morgan Jones, Neil F. Glasser, John M. Reynolds and Rijan B. Kayastha
GeoHazards 2025, 6(4), 63; https://doi.org/10.3390/geohazards6040063 - 3 Oct 2025
Abstract
Nepal is highly susceptible to natural hazards, including earthquakes, flooding, and landslides, all of which may occur independently or in combination. Climate change is projected to increase the frequency and intensity of these natural hazards, posing growing risks to Nepal’s infrastructure and development. To the authors’ knowledge, the majority of existing geohazard research in Nepal is typically limited to single hazards or localised areas. To address this gap, MiMapper was developed as a cloud-based, open-access multi-hazard mapping tool covering the full national extent. Built on Google Earth Engine and using only open-source spatial datasets, MiMapper applies an Analytical Hierarchy Process (AHP) to generate hazard indices for earthquakes, floods, and landslides. These indices are combined into an aggregated hazard layer and presented in an interactive, user-friendly web map that requires no prior GIS expertise. MiMapper uses a standardised hazard categorisation system for all layers, providing pixel-based scores for each layer between 0 (Very Low) and 1 (Very High). The modal and mean hazard categories for aggregated hazard in Nepal were Low (47.66% of pixels) and Medium (45.61% of pixels), respectively, but there was high spatial variability in hazard categories depending on hazard type. The validation of MiMapper’s flooding and landslide layers showed an accuracy of 0.412 and 0.668, sensitivity of 0.637 and 0.898, and precision of 0.116 and 0.627, respectively. These validation results show strong overall performance for landslide prediction, whilst broad-scale exposure patterns are predicted for flooding but may lack the resolution or sensitivity to fully represent real-world flood events. Consequently, MiMapper is a useful tool to support initial hazard screening by professionals in urban planning, infrastructure development, disaster management, and research. It can contribute to a Level 1 Integrated Geohazard Assessment as part of the evaluation for improving the resilience of hydropower schemes to the impacts of climate change. MiMapper also offers potential as a teaching tool for exploring hazard processes in data-limited, high-relief environments such as Nepal.
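
A minimal AHP sketch with illustrative pairwise judgments (not MiMapper's actual values), deriving weights from the principal eigenvector and aggregating normalized hazard layers:

```python
import numpy as np

# Pairwise comparison matrix for three hazards (illustrative judgments):
# earthquake vs. flood vs. landslide.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 0.5],
              [0.5, 2.0, 1.0]])

# AHP weights are the principal eigenvector, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
i = vals.real.argmax()
w = vecs[:, i].real
w = w / w.sum()

# Consistency ratio; RI = 0.58 is the random index for a 3x3 matrix.
cr = ((vals.real[i] - 3) / (3 - 1)) / 0.58
print("weights:", w.round(3), " CR:", round(cr, 3))

# Weighted overlay of 0-1 normalized hazard rasters -> aggregated hazard.
rng = np.random.default_rng(0)
quake, flood, slide = rng.random((3, 100, 100))
aggregated = w[0] * quake + w[1] * flood + w[2] * slide
print("mean aggregated hazard:", aggregated.mean().round(3))
```
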
38 pages, 5753 KB  
Article
EfficientNet-B3-Based Automated Deep Learning Framework for Multiclass Endoscopic Bladder Tissue Classification
by A. A. Abd El-Aziz, Mahmood A. Mahmood and Sameh Abd El-Ghany
Diagnostics 2025, 15(19), 2515; https://doi.org/10.3390/diagnostics15192515 - 3 Oct 2025
Abstract
Background: Bladder cancer (BLCA) is a malignant growth that originates from the urothelial lining of the urinary bladder. Diagnosing BLCA is complex due to the variety of tumor features and its heterogeneous nature, which leads to significant morbidity and mortality. Understanding tumor histopathology is crucial for developing tailored therapies and improving patient outcomes. Objectives: Early diagnosis and treatment are essential to lower the mortality rate associated with bladder cancer. Manual classification of muscular tissues by pathologists is labor-intensive and relies heavily on experience, which can result in interobserver variability due to the similarities in cancerous cell morphology. Traditional methods for analyzing endoscopic images are often time-consuming and resource-intensive, making it difficult to efficiently identify tissue types. Therefore, there is a strong demand for a fully automated and reliable system for classifying smooth muscle images. Methods: This paper proposes a deep learning (DL) technique utilizing the EfficientNet-B3 model and a five-fold cross-validation method to assist in the early detection of BLCA. This model enables timely intervention and improved patient outcomes while streamlining the diagnostic process, ultimately reducing both time and costs for patients. We conducted experiments using the Endoscopic Bladder Tissue Classification (EBTC) dataset for multiclass classification tasks. The dataset was preprocessed using resizing and normalization methods to ensure consistent input. In-depth experiments were carried out utilizing the EBTC dataset, along with ablation studies to evaluate the best hyperparameters. A thorough statistical analysis and comparisons with five leading DL models—ConvNeXtBase, DenseNet-169, MobileNet, ResNet-101, and VGG-16—showed that the proposed model outperformed the others. Conclusions: The EfficientNet-B3 model achieved impressive results: accuracy of 99.03%, specificity of 99.30%, precision of 97.95%, recall of 96.85%, and an F1-score of 97.36%. These findings indicate that the EfficientNet-B3 model demonstrates significant potential in accurately and efficiently diagnosing BLCA. Its high performance and ability to reduce diagnostic time and cost make it a valuable tool for clinicians in the field of oncology and urology.
(This article belongs to the Special Issue AI and Big Data in Medical Diagnostics)
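
A minimal fine-tuning sketch for an EfficientNet-B3 backbone; the EBTC class count, input size, and training loop details below are assumptions, not the paper's configuration:

```python
import tensorflow as tf

NUM_CLASSES = 4            # EBTC tissue class count (assumed)

# EfficientNet-B3 backbone pretrained on ImageNet, new classification head.
base = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", input_shape=(300, 300, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Images are resized to the backbone's expected input; in a five-fold
# cross-validation setup, this fit/evaluate pair runs once per fold:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```
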
24 pages, 8041 KB  
Article
Stable Water Isotopes and Machine Learning Approaches to Investigate Seawater Intrusion in the Magra River Estuary (Italy)
by Marco Sabattini, Francesco Ronchetti, Gianpiero Brozzo and Diego Arosio
Hydrology 2025, 12(10), 262; https://doi.org/10.3390/hydrology12100262 - 3 Oct 2025
Abstract
Seawater intrusion into coastal river systems poses increasing challenges for freshwater availability and estuarine ecosystem integrity, especially under evolving climatic and anthropogenic pressures. This study presents a multidisciplinary investigation of marine intrusion dynamics within the Magra River estuary (Northwest Italy), integrating field monitoring, isotopic tracing (δ¹⁸O, δD), and multivariate statistical modeling. Over an 18-month period, 11 fixed stations were monitored across six seasonal campaigns, yielding a comprehensive dataset of water electrical conductivity (EC) and stable isotope measurements spanning the fresh-to-saltwater gradient. EC and oxygen isotopic ratios displayed strong spatial and temporal coherence (R² = 0.99), confirming their combined effectiveness in identifying intrusion patterns. The mass-balance model based on δ¹⁸O revealed that marine water fractions exceeded 50% in the lower estuary for up to eight months annually, reaching as far as 8.5 km inland during dry periods. Complementary δD measurements provided additional insight into water origin and fractionation processes, revealing a slight excess relative to the local meteoric water line (LMWL), indicative of evaporative enrichment during anomalously warm periods. Multivariate regression models (PLS, Ridge, LASSO, and Elastic Net) identified river discharge as the primary limiting factor of intrusion, while wind intensity emerged as a key promoting variable, particularly when aligned with the valley axis. Tidal effects were marginal under standard conditions, except during anomalous events such as tidal surges. The results demonstrate that marine intrusion is governed by complex, interacting environmental drivers. Combined isotopic and machine learning approaches can offer high-resolution insights for environmental monitoring, early-warning systems, and adaptive resource management under climate-change scenarios.
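
A minimal two-endmember mass-balance sketch; the endmember δ¹⁸O values below are illustrative, not the measured Magra values:

```python
# Two-endmember mass balance on delta-18O: the marine fraction of a sample
# is its position between the freshwater and seawater endmembers.
D18O_FRESH = -7.5   # per mil, river endmember (illustrative)
D18O_SEA = 1.0      # per mil, marine endmember (illustrative)

def marine_fraction(d18o_sample: float) -> float:
    """Fraction of seawater in a mixed sample (clipped to [0, 1])."""
    f = (d18o_sample - D18O_FRESH) / (D18O_SEA - D18O_FRESH)
    return min(max(f, 0.0), 1.0)

# A sample at -3.0 per mil is a little over half marine water:
print(f"{marine_fraction(-3.0):.2f}")   # ~0.53
```
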
24 pages, 1454 KB  
Article
AI-Driven Monitoring for Fish Welfare in Aquaponics: A Predictive Approach
by Jorge Saúl Fandiño Pelayo, Luis Sebastián Mendoza Castellanos, Rocío Cazes Ortega and Luis G. Hernández-Rojas
Sensors 2025, 25(19), 6107; https://doi.org/10.3390/s25196107 - 3 Oct 2025
Abstract
This study addresses the growing need for intelligent monitoring in aquaponic systems by developing a predictive system based on artificial intelligence and environmental sensing. The goal is to improve fish welfare through the early detection of adverse water conditions. The system integrates low-cost digital sensors to continuously measure key physicochemical variables—pH, dissolved oxygen, and temperature—using these as inputs for real-time classification of fish health status. Four supervised machine learning models were evaluated: linear discriminant analysis (LDA), support vector machines (SVMs), neural networks (NNs), and random forest (RF). A dataset of 1823 instances was collected over eight months from a red tilapia aquaponic setup. The random forest model yielded the highest classification accuracy (99%), followed by NN (98%) and SVM (97%). LDA achieved 82% accuracy. Performance was validated using 5-fold cross-validation and label permutation tests to confirm model robustness. These results demonstrate that sensor-based predictive models can reliably detect early signs of fish stress or mortality, supporting the implementation of intelligent environmental monitoring and automation strategies in sustainable aquaponic production.
(This article belongs to the Section Environmental Sensing)
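
A minimal sketch of the validation scheme, random forest with 5-fold cross-validation plus a label-permutation test, on a synthetic stand-in for the sensor dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, permutation_test_score

# Synthetic stand-in for the 1823-instance dataset: pH, dissolved oxygen,
# and temperature -> binary fish health status (toy stress rule).
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(7.0, 0.5, 1823),    # pH
                     rng.normal(6.0, 1.0, 1823),    # DO (mg/L)
                     rng.normal(27.0, 2.0, 1823)])  # temperature (C)
y = ((X[:, 0] < 6.5) | (X[:, 1] < 5.0)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

# Label-permutation test: a p-value near 1/(n_permutations+1) indicates the
# model learned real structure rather than noise.
score, _, pvalue = permutation_test_score(clf, X, y, cv=5,
                                          n_permutations=30, random_state=0)
print(f"score={score:.3f}  p={pvalue:.3f}")
```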

42 pages, 17206 KB  
Article
Sedimentary Architecture Prediction Using Facies Interpretation and Forward Seismic Modeling: Application to a Mediterranean Land–Sea Pliocene Infill (Roussillon Basin, France)
by Teddy Widemann, Eric Lasseur, Johanna Lofi, Serge Berné, Carine Grélaud, Benoît Issautier, Philippe-A. Pezard and Yvan Caballero
Geosciences 2025, 15(10), 383; https://doi.org/10.3390/geosciences15100383 - 3 Oct 2025
Abstract
This study predicts sedimentary architectures and facies distribution within the Pliocene prograding prism of the Roussillon Basin (Gulf of Lion, France), developed along an onshore–offshore continuum. Boreholes and outcrops provide facies-scale observations onshore, while seismic data capture basin-scale structures offshore. Forward seismic modeling bridges spatial and scale gaps between these datasets, yielding characteristic synthetic seismic signatures for the sedimentary facies associations observed onshore, used as analogs for offshore deposits. These signatures are then identified in offshore seismic data, allowing seismic profiles to be populated with sedimentary facies without a well tie. Predicted offshore architectures are consistent with shoreline trajectories and facies successions observed onshore. The Roussillon prism records passive margin reconstruction in the Mediterranean Basin following the Messinian Salinity Crisis, through the following three successive depositional profiles marking the onset of infilling: (1) Gilbert deltas, (2) wave- and storm-reworked fan deltas, and (3) a wave-dominated delta. Offshore, transitions in clinoform type modify sedimentary architectures, influenced by inherited Messinian paleotopography. This autogenic control generates spatial variability in accommodation, driving changes in depositional style. Overall, this multi-scale and integrative approach provides a robust framework for predicting offshore sedimentary architectures and can be applied to other deltaic settings with limited land–sea data continuity.
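
A minimal 1D convolutional forward-modeling sketch (reflectivity from an illustrative impedance column, convolved with a Ricker wavelet); the values are placeholders, not calibrated to the Roussillon prism:

```python
import numpy as np

def ricker(f: float, dt: float, length: float = 0.128) -> np.ndarray:
    """Ricker (Mexican-hat) wavelet of peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Illustrative 1D facies column: acoustic impedance per 1 ms sample.
dt = 0.001
imp = np.concatenate([np.full(200, 4.0e6),   # delta-top sands
                      np.full(150, 5.5e6),   # prograding foresets
                      np.full(250, 7.0e6)])  # substratum

# Reflectivity series, then convolution with the wavelet = synthetic trace.
refl = np.diff(imp) / (imp[1:] + imp[:-1])
trace = np.convolve(refl, ricker(30.0, dt), mode="same")
print("peak reflection amplitude:", np.abs(trace).max().round(3))
```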

45 pages, 7902 KB  
Review
Artificial Intelligence-Guided Supervised Learning Models for Photocatalysis in Wastewater Treatment
by Asma Rehman, Muhammad Adnan Iqbal, Mohammad Tauseef Haider and Adnan Majeed
AI 2025, 6(10), 258; https://doi.org/10.3390/ai6100258 - 3 Oct 2025
Abstract
Artificial intelligence (AI), when integrated with photocatalysis, has demonstrated high predictive accuracy in optimizing photocatalytic processes for wastewater treatment using a variety of catalysts such as TiO₂, ZnO, CdS, Zr, WO₂, and CeO₂. The progress of research in this area is greatly enhanced by advancements in data science and AI, which enable rapid analysis of large datasets in materials chemistry. This article presents a comprehensive review and critical assessment of AI-based supervised learning models, including support vector machines (SVMs), artificial neural networks (ANNs), and tree-based algorithms. Their predictive capabilities have been evaluated using statistical metrics such as the coefficient of determination (R²), root mean square error (RMSE), and mean absolute error (MAE), with numerous investigations documenting R² values greater than 0.95 and RMSE values as low as 0.02 in forecasting pollutant degradation. To enhance model interpretability, Shapley Additive Explanations (SHAP) have been employed to prioritize the relative significance of input variables, illustrating, for example, that pH and light intensity frequently exert the most substantial influence on photocatalytic performance. These AI frameworks not only attain dependable predictions of degradation efficiency for dyes, pharmaceuticals, and heavy metals, but also contribute to economically viable optimization strategies and the identification of novel photocatalysts. Overall, this review provides evidence-based guidance for researchers and practitioners seeking to advance wastewater treatment technologies by integrating supervised machine learning with photocatalysis.
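
A minimal SHAP sketch of the kind of analysis reviewed, assuming a synthetic photocatalysis table and a random forest regressor (feature effects are invented):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: operating conditions in, degradation efficiency out.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(2, 12, n),     # pH
                     rng.uniform(10, 100, n),   # light intensity (mW/cm2)
                     rng.uniform(0.1, 2.0, n),  # catalyst dose (g/L)
                     rng.uniform(5, 50, n)])    # pollutant conc. (mg/L)
y = 40 + 3 * X[:, 1] ** 0.5 - 2 * np.abs(X[:, 0] - 7) + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input variables; the mean
# absolute SHAP value ranks their overall influence.
sv = shap.TreeExplainer(model).shap_values(X)
for name, imp in zip(["pH", "light", "dose", "conc"], np.abs(sv).mean(axis=0)):
    print(f"{name}: {imp:.2f}")
```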

28 pages, 3149 KB  
Article
Performance Comparison of Metaheuristic and Hybrid Algorithms Used for Energy Cost Minimization in a Solar–Wind–Battery Microgrid
by Seyfettin Vadi, Merve Bildirici and Orhan Kaplan
Sustainability 2025, 17(19), 8849; https://doi.org/10.3390/su17198849 - 2 Oct 2025
Abstract
The integration of renewable energy sources has become a strategic necessity for sustainable energy management and supply security. This study evaluates the performance of eight metaheuristic optimization algorithms in scheduling a renewable-based smart grid system that integrates solar, wind, and battery storage for a factory in İzmir, Türkiye. The algorithms considered include classical approaches—Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), Krill Herd Optimization (KOA), and the Ivy Algorithm (IVY)—alongside hybrid methods, namely KOA–WOA, WOA–PSO, and Gradient-Assisted PSO (GD-PSO). The optimization objectives were minimizing operational energy cost, maximizing renewable utilization, and reducing dependence on grid power, evaluated over a 7-day dataset in MATLAB. The results showed that hybrid algorithms, particularly GD-PSO and WOA–PSO, consistently achieved the lowest average costs with strong stability, while classical methods such as ACO and IVY exhibited higher costs and variability. Statistical analyses confirmed the robustness of these findings, highlighting the effectiveness of hybridization in improving smart grid energy optimization.
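
A minimal particle swarm sketch for energy-cost minimization; all load, generation, and price profiles below are invented, not the İzmir case-study data:

```python
import numpy as np

# Toy 24-hour dispatch problem: choose hourly battery discharge so that
# solar/wind plus battery cover the load at minimal grid-purchase cost.
rng = np.random.default_rng(0)
load = 80 + 30 * np.sin(np.linspace(0, 2 * np.pi, 24))          # kW
renew = np.clip(60 * np.sin(np.linspace(-1, 2 * np.pi, 24)), 0, None)
price = np.where(np.arange(24) < 17, 0.10, 0.18)                # $/kWh
CAP = 200.0                                                     # kWh battery

def cost(x):                          # x: hourly discharge (kW), 24-dim
    x = np.clip(x, 0, 50)
    if x.sum() > CAP:                 # energy budget as a soft penalty
        return 1e6 + x.sum()
    grid = np.clip(load - renew - x, 0, None)
    return float((grid * price).sum())

# Plain PSO: inertia + cognitive + social velocity update.
P = rng.uniform(0, 50, (30, 24))
V = np.zeros_like(P)
pbest = P.copy()
pcost = np.array([cost(p) for p in P])
gbest = pbest[pcost.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((2, 30, 24))
    V = 0.7 * V + 1.5 * r1 * (pbest - P) + 1.5 * r2 * (gbest - P)
    P = P + V
    c = np.array([cost(p) for p in P])
    better = c < pcost
    pbest[better], pcost[better] = P[better], c[better]
    gbest = pbest[pcost.argmin()].copy()
print(f"best daily cost: ${pcost.min():.2f}")
```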