Search Results (356)

Search Parameters:
Keywords = SGD

44 pages, 3453 KiB  
Article
Fractional Optimizers for LSTM Networks in Financial Time Series Forecasting
by Mustapha Ez-zaiym, Yassine Senhaji, Meriem Rachid, Karim El Moutaouakil and Vasile Palade
Mathematics 2025, 13(13), 2068; https://doi.org/10.3390/math13132068 - 22 Jun 2025
Abstract
This study investigates the theoretical foundations and practical advantages of fractional-order optimization in computational machine learning, with a particular focus on stock price forecasting using long short-term memory (LSTM) networks. We extend several widely used optimization algorithms—including Adam, RMSprop, SGD, Adadelta, FTRL, Adamax, and Adagrad—by incorporating fractional derivatives into their update rules. This novel approach leverages the memory-retentive properties of fractional calculus to improve convergence behavior and model efficiency. Our experimental analysis evaluates the performance of fractional-order optimizers on LSTM networks tasked with forecasting stock prices for major companies such as AAPL, MSFT, GOOGL, AMZN, META, NVDA, JPM, V, and UNH. Considering four metrics (Sharpe ratio, directional accuracy, cumulative return, and MSE), the results show that fractional orders can significantly enhance prediction accuracy for moderately volatile stocks, especially among lower-cap assets. However, for highly volatile stocks, performance tends to degrade with higher fractional orders, leading to erratic and inconsistent forecasts. In addition, fractional optimizers with short-memory truncation offer a favorable trade-off between computational efficiency and modeling accuracy in medium-frequency financial applications. Their enhanced capacity to capture long-range dependencies and robust performance in noisy environments further justify their adoption in such contexts. These results suggest that fractional-order optimization holds significant promise for improving financial forecasting models—provided that the fractional parameters are carefully tuned to balance memory effects with system stability.
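The abstract does not give the fractional update rule, so the sketch below uses one common construction: a Grünwald–Letnikov fractional difference of the recent gradient history with short-memory truncation. All names and hyperparameters (α = 0.9, a five-step memory, the toy quadratic) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gl_coeffs(alpha, memory):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed with
    the stable recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(memory)
    w[0] = 1.0
    for k in range(1, memory):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

class FractionalSGD:
    """SGD whose step uses a short-memory fractional difference of the
    recent gradient history instead of the raw gradient."""
    def __init__(self, lr=0.1, alpha=0.9, memory=5):
        self.lr = lr
        self.w = gl_coeffs(alpha, memory)
        self.history = []          # most recent gradient first

    def step(self, params, grad):
        self.history.insert(0, grad)
        del self.history[len(self.w):]      # short-memory truncation
        frac = sum(wk * g for wk, g in zip(self.w, self.history))
        return params - self.lr * frac

# toy problem: minimize f(x) = x^2, gradient 2x
opt = FractionalSGD(lr=0.1, alpha=0.9, memory=5)
x = np.array([5.0])
for _ in range(1500):
    x = opt.step(x, 2.0 * x)
```

Because the truncated coefficients nearly cancel (their infinite sum is zero for α > 0), the effective step is small and convergence is slower but smoother than plain SGD, which matches the memory/stability trade-off the abstract describes.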
28 pages, 3267 KiB  
Article
Alzheimer’s Disease Detection in Various Brain Anatomies Based on Optimized Vision Transformer
by Faisal Mehmood, Asif Mehmood and Taeg Keun Whangbo
Mathematics 2025, 13(12), 1927; https://doi.org/10.3390/math13121927 - 10 Jun 2025
Abstract
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder and a growing public health concern. Despite significant advances in deep learning for medical image analysis, early and accurate diagnosis of AD remains challenging. In this study, we focused on optimizing the training process of deep learning models by proposing an enhanced version of the Adam optimizer. The proposed optimizer introduces adaptive learning rate scaling, momentum correction, and decay modulation to improve convergence speed, training stability, and classification accuracy. We integrated the enhanced optimizer with Vision Transformer (ViT) and Convolutional Neural Network (CNN) architectures. The ViT-based model comprises a linear projection of image patches, positional encoding, a transformer encoder, and a Multi-Layer Perceptron (MLP) head with a Softmax classifier for multiclass AD classification. Experiments on publicly available Alzheimer’s disease datasets (ADNI-1 and ADNI-2) showed that the enhanced optimizer enabled the ViT model to achieve a 99.84% classification accuracy on Dataset-1 and 95.75% on Dataset-2, outperforming Adam, RMSProp, and SGD. Moreover, the optimizer reduced entropy loss and improved convergence stability by 0.8–2.1% across various architectures, including ResNet, RegNet, and MobileNet. This work contributes a robust optimizer-centric framework that enhances training efficiency and diagnostic accuracy for automated Alzheimer’s disease detection.
(This article belongs to the Special Issue The Application of Deep Neural Networks in Image Processing)
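The exact modifications to Adam are not specified in the abstract; for orientation, here is the standard Adam update the paper builds on, with a hypothetical decay-modulated learning rate (the `decay` factor is an illustrative stand-in, not the authors' rule):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              decay=0.0):
    """One step of the standard Adam update; `decay` is a hypothetical
    decay-modulation knob shrinking the learning rate over time."""
    lr_t = lr / (1.0 + decay * t)           # illustrative decay modulation
    m = b1 * m + (1 - b1) * grad            # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias corrections
    v_hat = v / (1 - b2 ** t)
    param = param - lr_t * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# toy problem: minimize f(x) = (x - 3)^2
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2.0 * (x - 3.0), m, v, t, lr=0.05, decay=1e-3)
```

The bias corrections keep early steps well scaled despite the zero-initialized moments; the decay term then damps the late-stage oscillation that plain Adam exhibits near an optimum.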

18 pages, 3611 KiB  
Article
Using Landsat 8/9 Thermal Bands to Detect Potential Submarine Groundwater Discharge (SGD) Sites in the Mediterranean in North West-Central Morocco
by Youssef Bernichi, Mina Amharref, Abdes-Samed Bernoussi and Pierre-Louis Frison
Hydrology 2025, 12(6), 144; https://doi.org/10.3390/hydrology12060144 - 10 Jun 2025
Abstract
The objective of this study is to detect the locations of submarine groundwater discharge (SGD) in the coastal area of the El Jebha region, located in northwestern Morocco. It is hypothesized that this zone is fed by one of the most rain-rich karstic aquifers in Morocco (the Dorsale Calcaire). The region’s geology is complex, characterized by multiple faults and fractures. Thermal remote sensing is used in this study to locate potential SGD zones, as groundwater emerging from karst systems is typically cooler than surrounding ocean water. Landsat satellite imagery was used to assess temperature variations and detect anomalies associated with the presence of freshwater in the marine environment. El Jebha’s geographical location, with a direct interface between limestone and sea, makes it an ideal site for the appearance of submarine groundwater discharges. This study constitutes the first use of Landsat-8/9 thermal-infrared imagery, processed with a multi-temporal fuzzy-overlay method, to detect SGD. Out of 107 Landsat scenes reviewed, 16 cloud-free images were selected. The workflow identified 18 persistent cold anomalies, of which three were classified as high-probability SGD zones based on recurrence and spatial consistency. The results highlight several potential SGD zones, confirming the cost-effectiveness of thermal remote sensing in mapping thermal anomalies and opening up new perspectives for the study of SGD in Morocco, where these phenomena remain rarely documented.
(This article belongs to the Topic Karst Environment and Global Change)
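The multi-temporal fuzzy-overlay idea can be sketched roughly as follows: each scene's sea-surface temperature is converted into a fuzzy "colder than the scene mean" membership, and the memberships are averaged across dates so that only recurrent cold anomalies survive. The logistic membership, the 0.8 persistence threshold, and the synthetic data are all assumptions for illustration, not the paper's calibrated workflow:

```python
import numpy as np

def cold_membership(sst, steepness=2.0):
    """Fuzzy membership for 'colder than the scene mean': a logistic ramp on
    the temperature anomaly in degrees C (approaches 1 when much colder)."""
    anomaly = sst - np.nanmean(sst)
    return 1.0 / (1.0 + np.exp(steepness * anomaly))

def overlay(scenes, threshold=0.8):
    """Average memberships over all dates; pixels above the threshold are
    persistent cold anomalies, i.e. candidate SGD sites."""
    persistence = np.mean([cold_membership(s) for s in scenes], axis=0)
    return persistence, persistence > threshold

# synthetic stack of 16 cloud-free scenes with a 2 degC cold spot at (5, 5)
rng = np.random.default_rng(0)
scenes = []
for _ in range(16):
    sst = 20.0 + 0.3 * rng.standard_normal((10, 10))
    sst[5, 5] -= 2.0
    scenes.append(sst)
persistence, candidates = overlay(scenes)
```

Averaging the fuzzy memberships rather than thresholding single scenes is what suppresses one-off cold features (cloud shadow, upwelling) in favor of recurrent ones.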

18 pages, 1055 KiB  
Article
Privacy-Preserving and Interpretable Grade Prediction: A Differential Privacy Integrated TabNet Framework
by Yuqi Zhao, Jinheng Wang, Xiaoqing Tan, Linyan Wen, Qingru Gao and Wenjing Wang
Electronics 2025, 14(12), 2328; https://doi.org/10.3390/electronics14122328 - 6 Jun 2025
Abstract
The increasing digitization of educational data poses critical challenges in balancing predictive accuracy with privacy protection for sensitive student information. This study introduces DP-TabNet, a pioneering framework that integrates the interpretable deep learning architecture of TabNet with differential privacy (DP) techniques to enable secure and effective student grade prediction. By incorporating the Laplace Mechanism with a carefully calibrated privacy budget (ϵ = 0.7) and sensitivity (Δf = 0.1), DP-TabNet ensures robust protection of individual data while maintaining analytical utility. Experimental results on real-world educational datasets demonstrate that DP-TabNet achieves an accuracy of 80%, only 4% lower than the non-private TabNet model (84%), and outperforms privacy-preserving baselines such as DP-Random Forest (78%), DP-XGBoost (78%), DP-MLP (69%), and DP-SGD (69%). Furthermore, its interpretable feature importance analysis highlights key predictors like resource engagement and attendance metrics, offering actionable insights for educators under strict privacy constraints. This work advances privacy-preserving educational technology by demonstrating that high predictive performance and strong privacy guarantees can coexist, providing a practical and responsible framework for educational data analytics.
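The quoted privacy parameters imply a concrete noise scale: the Laplace Mechanism adds Lap(Δf/ε) noise, so with Δf = 0.1 and ε = 0.7 the scale is 0.1/0.7 ≈ 0.143. A minimal sketch (how DP-TabNet wires this into training is not specified in the abstract):

```python
import numpy as np

def laplace_mechanism(value, sensitivity=0.1, epsilon=0.7, rng=None):
    """Release value + Lap(sensitivity / epsilon) noise; with the abstract's
    Delta f = 0.1 and epsilon = 0.7, the noise scale is ~0.143."""
    rng = rng if rng is not None else np.random.default_rng()
    return value + rng.laplace(0.0, sensitivity / epsilon)

# repeated releases of the same value: unbiased, with spread set by the scale
rng = np.random.default_rng(42)
noisy = np.array([laplace_mechanism(0.5, rng=rng) for _ in range(10000)])
```

The mechanism is unbiased, so aggregate utility survives while any single release is masked; a smaller ε (stronger privacy) widens the noise proportionally.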

16 pages, 1400 KiB  
Article
An RMSprop-Incorporated Latent Factorization of Tensor Model for Random Missing Data Imputation in Structural Health Monitoring
by Jingjing Yang
Algorithms 2025, 18(6), 351; https://doi.org/10.3390/a18060351 - 6 Jun 2025
Abstract
In structural health monitoring (SHM), ensuring data completeness is critical for enhancing the accuracy and reliability of structural condition assessments. SHM data are prone to random missing values due to signal interference or connectivity issues, making precise data imputation essential. A latent factorization of tensor (LFT)-based method has proven effective for such problems, with optimization typically achieved via stochastic gradient descent (SGD). However, SGD-based LFT models and other imputation methods exhibit significant sensitivity to learning rates and slow tail-end convergence. To address these limitations, this study proposes an RMSprop-incorporated latent factorization of tensor (RLFT) model, which integrates an adaptive learning rate mechanism to dynamically adjust step sizes based on gradient magnitudes. Experimental validation on a scaled bridge accelerometer dataset demonstrates that RLFT achieves faster convergence and higher imputation accuracy compared to state-of-the-art models including SGD-based LFT and the long short-term memory (LSTM) network, with improvements of at least 10% in both imputation accuracy and convergence rate, offering a more efficient and reliable solution for missing data handling in SHM.
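In outline, an RLFT-style model factorizes a third-order tensor over its observed entries only, and replaces SGD's fixed step with RMSprop's running mean of squared gradients. The sketch below is a generic reconstruction under that reading; the rank, learning rate, and synthetic data are illustrative, not the paper's configuration:

```python
import numpy as np

def rmsprop_lft(idx, vals, shape, rank=4, lr=0.02, beta=0.9, eps=1e-8,
                epochs=300, seed=0):
    """Fit y[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] on observed entries only;
    RMSprop's running squared-gradient average makes each step adaptive."""
    rng = np.random.default_rng(seed)
    A, B, C = (0.1 * rng.standard_normal((n, rank)) for n in shape)
    accs = [np.zeros((n, rank)) for n in shape]
    for _ in range(epochs):
        for (i, j, k), y in zip(idx, vals):
            e = np.sum(A[i] * B[j] * C[k]) - y            # prediction error
            grads = (e * B[j] * C[k], e * A[i] * C[k], e * A[i] * B[j])
            for F, acc, g, n in zip((A, B, C), accs, grads, (i, j, k)):
                acc[n] = beta * acc[n] + (1 - beta) * g * g
                F[n] -= lr * g / (np.sqrt(acc[n]) + eps)  # adaptive step
    return A, B, C

# synthetic rank-2 tensor with ~70% of entries observed (random missing data)
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((6, 2)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
idx = [(i, j, k) for i in range(6) for j in range(6) for k in range(6)
       if rng.random() < 0.7]
vals = [T[i, j, k] for i, j, k in idx]
A, B, C = rmsprop_lft(idx, vals, (6, 6, 6))
pred = np.einsum('ir,jr,kr->ijk', A, B, C)
rmse = np.sqrt(np.mean([(pred[i, j, k] - T[i, j, k]) ** 2 for i, j, k in idx]))
```

The per-element normalization is what removes the learning-rate sensitivity the abstract attributes to plain SGD: steps stay roughly `lr`-sized regardless of the raw gradient magnitude.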

28 pages, 1638 KiB  
Article
Sign-Entropy Regularization for Personalized Federated Learning
by Koffka Khan
Entropy 2025, 27(6), 601; https://doi.org/10.3390/e27060601 - 4 Jun 2025
Abstract
Personalized Federated Learning (PFL) seeks to train client-specific models across distributed data silos with heterogeneous distributions. We introduce Sign-Entropy Regularization (SER), a novel entropy-based regularization technique that penalizes excessive directional variability in client-local optimization. Motivated by Descartes’ Rule of Signs, we hypothesize that frequent sign changes in gradient trajectories reflect complexity in the local loss landscape. By minimizing the entropy of gradient sign patterns during local updates, SER encourages smoother optimization paths, improves convergence stability, and enhances personalization. We formally define a differentiable sign-entropy objective over the gradient sign distribution and integrate it into standard federated optimization frameworks, including FedAvg and FedProx. The regularizer is computed efficiently and applied post hoc per local round. Extensive experiments on three benchmark datasets (FEMNIST, Shakespeare, and CIFAR-10) show that SER improves both average and worst-case client accuracy, reduces variance across clients, accelerates convergence, and smooths the local loss surface as measured by Hessian trace and spectral norm. We also present a sensitivity analysis of the regularization strength ρ and discuss the potential for client-adaptive variants. Comparative evaluations against state-of-the-art methods (e.g., Ditto, pFedMe, momentum-based variants, Entropy-SGD) highlight that SER introduces an orthogonal and scalable mechanism for personalization. Theoretically, we frame SER as an information-theoretic and geometric regularizer that stabilizes learning dynamics without requiring dual-model structures or communication modifications. This work opens avenues for trajectory-based regularization and hybrid entropy-guided optimization in federated and resource-constrained learning settings.
(This article belongs to the Section Information Theory, Probability and Statistics)
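The abstract defines SER over the distribution of gradient signs. One plausible reading (an assumption, since the paper's exact objective is not reproduced here) is the mean per-coordinate binary entropy of sign(g) along the local trajectory, which is 0 for a steady descent direction and 1 bit for a coordinate that flips sign every step:

```python
import numpy as np

def sign_entropy(grads):
    """Mean per-coordinate binary entropy (bits) of gradient signs across a
    trajectory of flattened gradient vectors; frequent flips push the sign
    distribution toward 0.5/0.5 and the entropy toward 1 bit."""
    signs = np.sign(np.stack(grads))            # T x D matrix of -1/0/+1
    p = (signs > 0).mean(axis=0)                # per-coordinate P(sign > 0)
    p = np.clip(p, 1e-12, 1 - 1e-12)            # avoid log(0)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())

# a steady trajectory (no flips) vs. an oscillating one (a flip every step)
steady = [np.array([1.0, -2.0]) for _ in range(8)]
flippy = [((-1) ** t) * np.array([1.0, -2.0]) for t in range(8)]
```

Adding ρ times this quantity to the local loss would penalize oscillatory client updates, which is the mechanism the abstract describes; the paper's differentiable formulation presumably uses a smooth surrogate for the sign function.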

19 pages, 6127 KiB  
Review
Review of Research Progress on the Impact of Submarine Groundwater Discharge on Pockmark Formation and Evolution
by Zhengrong Zhang, Hongxian Shan, Xuezhi Feng, Zhentian Jia, Long Jiang, Siming Wang and Chaoqi Zhu
J. Mar. Sci. Eng. 2025, 13(6), 1070; https://doi.org/10.3390/jmse13061070 - 28 May 2025
Abstract
Pockmarks are globally distributed geomorphic features exhibiting diverse morphologies. Their geometric characteristics are commonly quantified by the radius-to-depth ratio. The evolutionary process of these features typically follows a cyclical pattern comprising initiation, expansion, stabilization, and decline. Submarine groundwater discharge (SGD), a seasonally modulated land–sea exchange process, exerts a significant influence on the formation and evolution of pockmarks. This influence is mediated through hydrodynamic forcing effects, sediment redistribution, and coupled chemical–biological interactions. This review systematically examines the formation mechanisms, evolutionary patterns, and primary controlling factors of pockmarks induced by SGD. It integrates recent research developments and global case studies to elucidate the dynamic interplay of multiple influencing factors. This study emphasizes the significance of interdisciplinary approaches in marine geological research and identifies key areas for future investigation. These insights aim to enhance risk assessment frameworks for marine hazards and inform marine spatial planning strategies.
(This article belongs to the Special Issue Marine Geohazards: Characterization to Prediction)

21 pages, 7991 KiB  
Article
Machine Learning–Based Calibration and Performance Evaluation of Low-Cost Internet of Things Air Quality Sensors
by Mehmet Taştan
Sensors 2025, 25(10), 3183; https://doi.org/10.3390/s25103183 - 19 May 2025
Abstract
Low-cost air quality sensors (LCSs) are increasingly being used in environmental monitoring due to their affordability and portability. However, their sensitivity to environmental factors can lead to measurement inaccuracies, necessitating effective calibration methods to enhance their reliability. In this study, an Internet of Things (IoT)-based air quality monitoring system was developed and tested using the most commonly preferred sensor types for air quality measurement: fine particulate matter (PM2.5), carbon dioxide (CO2), temperature, and humidity sensors. To improve sensor accuracy, eight different machine learning (ML) algorithms were applied: Decision Tree (DT), Linear Regression (LR), Random Forest (RF), k-Nearest Neighbors (kNN), AdaBoost (AB), Gradient Boosting (GB), Support Vector Machines (SVM), and Stochastic Gradient Descent (SGD). Sensor performance was evaluated by comparing measurements with a reference device, and the best-performing ML model was determined for each sensor. The results indicate that GB and kNN achieved the highest accuracy. For CO2 sensor calibration, GB achieved R2 = 0.970, RMSE = 0.442, and MAE = 0.282, providing the lowest error rates. For the PM2.5 sensor, kNN delivered the most successful results, with R2 = 0.970, RMSE = 2.123, and MAE = 0.842. Additionally, for temperature and humidity sensors, GB demonstrated the highest accuracy with the lowest error values (R2 = 0.976, RMSE = 2.284). These findings demonstrate that, by identifying suitable ML methods, ML-based calibration techniques can significantly enhance the accuracy of LCSs. Consequently, they offer a viable and cost-effective alternative to traditional high-cost air quality monitoring systems. Future studies should focus on long-term data collection, testing under diverse environmental conditions, and integrating additional sensor types to further advance this field.
(This article belongs to the Special Issue Intelligent Sensor Calibration: Techniques, Devices and Methodologies)
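The calibration setup reduces to supervised regression from raw sensor readings (plus temperature and humidity covariates) to a co-located reference instrument. A scikit-learn sketch with a synthetic drift model standing in for the real co-location data (the drift coefficients and all figures here are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# synthetic stand-in: a low-cost CO2 reading that drifts with temperature
# and humidity relative to the reference value
rng = np.random.default_rng(0)
n = 2000
temp = rng.uniform(10, 35, n)                    # degrees C
hum = rng.uniform(20, 90, n)                     # percent RH
true_co2 = rng.uniform(400, 1200, n)             # reference, ppm
raw = (true_co2 * (1 + 0.004 * (temp - 25))      # temperature-dependent gain
       + 0.5 * (hum - 50)                        # humidity offset
       + rng.normal(0, 15, n))                   # sensor noise

X = np.column_stack([raw, temp, hum])
Xtr, Xte, ytr, yte = train_test_split(X, true_co2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(Xtr, ytr)
r2 = r2_score(yte, model.predict(Xte))
```

Feeding temperature and humidity alongside the raw reading is what lets the tree ensemble learn the environmental correction; the same pattern applies to the other sensor/model pairs in the study.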

21 pages, 16775 KiB  
Article
Non-Iterative Phase-Only Hologram Generation via Stochastic Gradient Descent Optimization
by Alejandro Velez-Zea and John Fredy Barrera-Ramírez
Photonics 2025, 12(5), 500; https://doi.org/10.3390/photonics12050500 - 16 May 2025
Abstract
In this work, we explored, for the first time, to the best of our knowledge, the potential of stochastic gradient descent (SGD) to optimize random phase functions for application in non-iterative phase-only hologram generation. We defined and evaluated four loss functions based on common image quality metrics and compared the performance of SGD-optimized random phases with those generated using Gerchberg–Saxton (GS) optimization. The quality of the reconstructed holograms was assessed through numerical simulations, considering both accuracy and computational efficiency. Our results demonstrate that SGD-based optimization can produce higher-quality phase holograms for low-contrast target scenes and presents nearly identical performance to GS-optimized random phases for high-contrast targets. Experimental validation confirmed the practical feasibility of the proposed method and its potential as a flexible alternative to conventional GS-based optimization.
(This article belongs to the Special Issue Advances in Optical Imaging)
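Gradient-based optimization of a phase-only hologram can be sketched without a deep-learning framework: the far field is the unitary FFT of exp(iφ), the loss is the MSE between reconstructed and target intensity, and the gradient with respect to φ follows from Wirtinger calculus. The loss choice, learning rate, and target below are illustrative assumptions, not the four loss functions evaluated in the paper:

```python
import numpy as np

def optimize_phase(target, steps=200, lr=0.05, seed=0):
    """Gradient descent on a phase-only hologram: far field = unitary FFT of
    exp(i*phi); loss = MSE between far-field intensity and target."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, target.shape)   # random initial phase
    losses = []
    for _ in range(steps):
        u = np.exp(1j * phi)
        F = np.fft.fft2(u, norm='ortho')
        err = np.abs(F) ** 2 - target
        losses.append(float(np.mean(err ** 2)))
        g = np.fft.ifft2(err * F, norm='ortho')         # adjoint FFT pass
        phi -= lr * 2.0 * np.imag(np.conj(u) * g)       # dL/dphi up to a constant
    return phi, losses

# bright-square target, scaled so its energy matches the unit-amplitude field
target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0
target *= target.size / target.sum()
phi, losses = optimize_phase(target)
```

With `norm='ortho'` the FFT conserves energy, so scaling the target to the field's total energy keeps the intensity-matching problem feasible; frameworks like PyTorch compute the same gradient automatically.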

21 pages, 26641 KiB  
Article
A CNN-Based Method for Quantitative Assessment of Steel Microstructures in Welded Zones
by Cássio Danelon de Almeida, Thales Tozatto Filgueiras, Moisés Luiz Lagares, Bruno da Silva Macêdo, Camila Martins Saporetti, Matteo Bodini and Leonardo Goliatt
Fibers 2025, 13(5), 66; https://doi.org/10.3390/fib13050066 - 15 May 2025
Abstract
The mechanical performance of metallic components is intrinsically linked to their microstructural features. However, the manual quantification of microconstituents in metallographic images remains a time-consuming and subjective task, often requiring over 15 min per image by a trained expert. To address this limitation, this study proposes an automated approach for quantifying the microstructural constituents from low-carbon steel welded zone images using convolutional neural networks (CNNs). A dataset of 210 micrographs was expanded to 720 samples through data augmentation to improve model generalization. Two architectures (AlexNet and VGG16) were trained from scratch, while three pre-trained models (VGG19, InceptionV3, and Xception) were fine-tuned. Among these, VGG19 optimized with stochastic gradient descent (SGD) achieved the best predictive performance, with an R2 of 0.838, MAE of 5.01%, and RMSE of 6.88%. The results confirm the effectiveness of CNNs for reliable and efficient microstructure quantification, offering a significant contribution to computational metallography.
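The 210-to-720 augmentation recipe is not spelled out in the abstract. Simple label-preserving micrograph transforms such as flips and a 180° rotation (a hypothetical choice, and one that gives a 4x expansion rather than the paper's ratio, so their pipeline evidently used a different transform set) look like this:

```python
import numpy as np

def augment(images):
    """Each micrograph plus its horizontal flip, vertical flip, and
    180-degree rotation: four label-preserving views per sample."""
    out = []
    for img in images:
        out.extend([img, np.fliplr(img), np.flipud(img), np.rot90(img, 2)])
    return out

# 210 dummy 4x4 'micrographs' standing in for the real dataset
imgs = [np.arange(16.0).reshape(4, 4) + i for i in range(210)]
aug = augment(imgs)
```

For microstructure quantification these transforms are safe because the phase fractions being regressed are invariant under flips and rotations.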

10 pages, 322 KiB  
Proceeding Paper
Optimizing Brain Tumor Classification: Integrating Deep Learning and Machine Learning with Hyperparameter Tuning
by Vijaya Kumar Velpula, Kamireddy Rasool Reddy, K. Naga Prakash, K. Prasanthi Jasmine and Vadlamudi Jyothi Sri
Eng. Proc. 2025, 87(1), 64; https://doi.org/10.3390/engproc2025087064 - 12 May 2025
Abstract
Brain tumors significantly impact global health and pose serious challenges for accurate diagnosis due to their diverse nature and complex characteristics. Effective diagnosis and classification are essential for selecting the best treatment strategies and forecasting patient outcomes. Currently, histopathological examination of biopsy samples is the standard method for brain tumor identification and classification. However, this method is invasive, time-consuming, and prone to human error. To address these limitations, a fully automated approach is proposed for brain tumor classification. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have shown promise in improving the accuracy and efficiency of tumor detection from magnetic resonance imaging (MRI) scans. In response, a model was developed that integrates machine learning (ML) and deep learning (DL) techniques. The process began by splitting the data into training, testing, and validation sets. Images were then resized and cropped to enhance model quality and efficiency. Relevant texture features were extracted using a modified Visual Geometry Group (VGG) architecture. These features were fed into various supervised ML models, including support vector machine (SVM), k-nearest neighbors (KNN), logistic regression (LR), stochastic gradient descent (SGD), random forest (RF), and AdaBoost, with GridSearchCV used for hyperparameter tuning. The model’s performance was evaluated using key metrics such as accuracy, precision, recall, F1-score, and specificity. Experimental results demonstrate that the proposed approach offers a robust and automated solution for brain tumor classification, achieving the highest accuracy of 94.02% with VGG19 and 96.30% with VGG16. This model can significantly assist healthcare professionals in early tumor detection and in improving diagnostic accuracy.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)
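The classical half of the pipeline (CNN features fed into an ML classifier tuned with GridSearchCV) can be sketched with scikit-learn. Random features stand in for the VGG texture features, and the SVM grid is a guess at typical search ranges, not the paper's actual grid:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for VGG-extracted texture features and tumor labels
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 32))
y = (X[:, :4].sum(axis=1) > 0).astype(int)      # separable synthetic labels

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),     # scale, then classify
    param_grid={'svc__C': [0.1, 1, 10], 'svc__gamma': ['scale', 0.01]},
    cv=3,
)
grid.fit(Xtr, ytr)
acc = grid.score(Xte, yte)
```

Swapping `SVC()` for `KNeighborsClassifier`, `LogisticRegression`, `SGDClassifier`, `RandomForestClassifier`, or `AdaBoostClassifier` (with matching grids) reproduces the comparison structure described in the abstract.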

18 pages, 6257 KiB  
Article
Submarine Groundwater Discharge in the Nice Airport Landslide Area
by Christoph Witt and Achim Kopf
J. Mar. Sci. Eng. 2025, 13(5), 909; https://doi.org/10.3390/jmse13050909 - 3 May 2025
Abstract
Natural radioactivity was measured and analyzed at the Nice Slope for over a month using radon daughters in order to trace groundwater movement from a coastal aquifer to a nearshore continental shelf. Such groundwater movement may have resulted in submarine groundwater discharge (SGD) and potentially sediment weakening and slope failure. The relationship among major hydrological parameters (precipitation, Var discharge, groundwater level, salinity and water origin) in the area is demonstrated in this study. Time series analyses also helped to detect tidal fluctuations in freshwater input, highlighting the crucial role SGD plays in the slope stability of the still failure-prone Nice Slope, parts of which collapsed in a tsunamigenic submarine landslide in 1979. Earlier deployments of the underwater gamma-ray spectrometer KATERINA showed that SGD is limited to the region of the 1979 landslide scar, suggesting that the spatially heterogeneous lithologies do not support widespread groundwater charging. The calculated volumetric activities from groundwater tracing isotopes revealed peaks of up to ca. 150 counts of 214Bi, similar to those measured at other prominent SGD sites along the Mediterranean shoreline. This rare long-term radioisotope dataset is therefore a valuable contribution to the collaborative research at the Nice Slope; it also suggests that SGD may not remain restricted to the unconfined landslide scar but may charge permeable sub-bottom areas nearby, and hence must be taken into account in further slope stability studies.

37 pages, 1508 KiB  
Article
Investigating the Impact of Digitalization on Resource Use, Energy Use, and Waste Reduction Towards Sustainability: Considering Environmental Awareness as a Moderator
by Ghadeer Alsanie
Sustainability 2025, 17(9), 4073; https://doi.org/10.3390/su17094073 - 30 Apr 2025
Abstract
Today’s world is characterized by rapid technological advancement and escalating ecological concerns, including soil and water contamination, acidification, biodiversity loss, and excessive waste; the need to mitigate these issues and promote sustainability is continuously increasing. This study investigated the impact of digitalization on resource use, energy use, and waste reduction, and the moderating role of environmental awareness towards sustainability in the manufacturing sector of Pakistan. A quantitative approach was used, and primary data were collected from 650 managerial-level manufacturing-sector employees in Pakistan using a closed-ended questionnaire and purposive sampling. Structural equation modeling (SEM) using SmartPLS-4 was employed to analyze the data. Digitalization showed positive and significant effects on energy efficiency, waste reduction, and optimal resource utilization, all of which were significantly associated with sustainability. In addition, the results showed that environmental awareness functions as a moderator, influencing the impact of digital technologies on resource use, energy use, and waste reduction. The study highlights the potential of digitalization for promoting sustainability by enhancing resource consumption efficiency in the manufacturing industry. It also underscores the importance of environmental awareness in adopting digital transformation. The study has major implications for governments, engineers, policymakers, and researchers seeking to promote digital transformation in the manufacturing sector and support accomplishment of the Sustainable Development Goals, i.e., SDG-7, SDG-9, SDG-11, and SDG-12.
(This article belongs to the Section Resources and Sustainable Utilization)

19 pages, 2231 KiB  
Article
Predicting the S. cerevisiae Gene Expression Score by a Machine Learning Classifier
by Piotr H. Pawłowski and Piotr Zielenkiewicz
Life 2025, 15(5), 723; https://doi.org/10.3390/life15050723 - 29 Apr 2025
Abstract
The topic of this work is gene expression and its score according to various factors analyzed globally using machine learning techniques. The expression score (ES) of genes characterizes their activity and, thus, their importance for cellular processes. It may depend on many different factors (attributes). To find the most important ones, a machine learning classifier (random forest) was selected, trained, and optimized on the Waikato Environment for Knowledge Analysis (WEKA) platform, resulting in the most accurate attribute-dependent prediction of the ES of Saccharomyces cerevisiae genes. Data from the Saccharomyces Genome Database (SGD), presenting ES values corresponding to a wide spectrum of attributes, were used, revised, classified, and balanced, and the significance of the considered attributes was evaluated. The resulting random forest model indicates the most important attributes determining classes of low, moderate, and high ES, covering experimental conditions as well as genetic, physical, statistical, and logistic features. During validation, the model classified the instances of a previously unseen test set with a correctness of 84.1%.
(This article belongs to the Section Microbiology)
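The paper works in WEKA; an equivalent scikit-learn sketch shows the shape of the task: a random forest over a mixed attribute table, a three-class low/moderate/high ES label, and feature importances for ranking attributes. The data, thresholds, and feature count are synthetic stand-ins, not the SGD-derived table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# synthetic attribute table: feature 0 drives the score, feature 1 half as much
rng = np.random.default_rng(0)
X = rng.standard_normal((900, 10))
score = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(900)
y = np.digitize(score, [-0.5, 0.5])     # 0 = low, 1 = moderate, 2 = high ES

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
acc = rf.score(Xte, yte)                # hold-out correctness
importances = rf.feature_importances_   # attribute significance ranking
```

The importance vector is the analogue of the paper's attribute-significance evaluation: informative attributes dominate, while noise features receive near-zero weight.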

12 pages, 2000 KiB  
Article
Effects of Training of Pharmacists in Japan on Reasonable Accommodations for People with Intellectual Disabilities
by Masaki Shoji, Rintaro Imafuku, Mei Mizomoto and Mitsuko Onda
Disabilities 2025, 5(2), 43; https://doi.org/10.3390/disabilities5020043 - 25 Apr 2025
Abstract
With the enforcement of the Act for Eliminating Discrimination against Persons with Disabilities, the provision of reasonable accommodations in pharmacies has become mandatory in Japan. This study aimed to develop and validate the effectiveness of a training program to improve pharmacists’ ability to assist people with intellectual disabilities. The educational staff of one chain pharmacy company announced the program, and pharmacists at this company were invited to participate. A 90 min online training was conducted with 15 pharmacists. The session included a lecture on reasonable accommodations and small group discussions (SGD). Before and after the training, participants answered an online survey about 1. their basic attributes (number of years of experience and awareness of reasonable accommodations and constructive dialogue); 2. their confidence in providing medication guidance to people with intellectual disabilities (10-point scale); and 3. possible accommodations that could be provided by pharmacies. Training increased the average score for question 2 from 3.93 to 5.87. In addition, an increase was observed in the number of keywords in the free-text descriptions and in the number of co-occurring mentions of possible pharmacy accommodations in the responses. Despite its shortness, the training changed the participants’ awareness of accommodations for people with intellectual disabilities. Further study is needed to enhance the content and evaluate changes in practice.
