Search Results (235)

Search Parameters:
Keywords = news insight automation

30 pages, 958 KiB  
Review
Application of SLAM-Based Mobile Laser Scanning in Forest Inventory: Methods, Progress, Challenges, and Perspectives
by Yexu Wu, Shilei Zhong, Yuxin Ma, Yao Zhang and Meijie Liu
Forests 2025, 16(6), 920; https://doi.org/10.3390/f16060920 - 30 May 2025
Abstract
A thorough understanding of forest resources and development trends depends on quick and accurate forest inventories. Because of its flexibility and its independence from external positioning infrastructure, mobile laser scanning (MLS) based on simultaneous localization and mapping (SLAM) is the best option for forest inventories. This study fills the gap in the review literature in this field by offering the first comprehensive review of SLAM-based MLS in forest inventory, synthesizing methods, research progress, challenges, and future perspectives. The precision and efficiency of SLAM-based MLS in forest inventories have benefited from improvements in data collection techniques and the ongoing development of algorithms, especially the application of deep learning. By evaluating the research progress of SLAM-based MLS in forest inventory, this paper provides new insights into the development of automation in this field. The main challenges for current research are complex forest environments, localization bias, and the limitations of existing algorithms. To achieve accurate, real-time, and practical forest inventories, researchers should develop SLAM technology dedicated to forest environments, supporting path planning, localization, autonomous navigation, obstacle avoidance, and point cloud recognition. In addition, researchers should develop algorithms specialized for different forest environments and improve their information processing capability so as to generate forest maps from which tree attributes can be extracted automatically and in real time.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
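As a concrete illustration of the automated tree-attribute extraction the review calls for, here is a minimal Python sketch that estimates diameter at breast height (DBH) by fitting a circle to a breast-height slice of a single-stem point cloud. The Kasa least-squares fit and the synthetic stem are illustrative assumptions, not a method drawn from any reviewed paper.

```python
import numpy as np

def fit_circle(points: np.ndarray):
    """Least-squares (Kasa) circle fit to 2-D points; returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def estimate_dbh(cloud: np.ndarray, breast_height=1.3, slice_thickness=0.1):
    """Estimate DBH from a single-stem point cloud (x, y, z in metres)."""
    mask = np.abs(cloud[:, 2] - breast_height) < slice_thickness / 2
    if mask.sum() < 10:
        raise ValueError("too few points in the breast-height slice")
    _, _, r = fit_circle(cloud[mask, :2])
    return 2 * r  # diameter in metres

# Synthetic stem: noisy cylinder of radius 0.15 m, 3 m tall
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
stem = np.column_stack([
    0.15 * np.cos(theta) + rng.normal(0, 0.005, 2000),
    0.15 * np.sin(theta) + rng.normal(0, 0.005, 2000),
    rng.uniform(0, 3, 2000),
])
print(f"estimated DBH: {estimate_dbh(stem):.3f} m")  # expect roughly 0.30 m
```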

18 pages, 4529 KiB  
Article
KlyH: 1D Disk Model-Based Large-Signal Simulation Software for Klystrons
by Hezhang Zhao, Hu He, Shifeng Li, Hua Huang, Zhengbang Liu, Limin Sun, Ke He and Dongwenlong Wu
Electronics 2025, 14(11), 2223; https://doi.org/10.3390/electronics14112223 - 30 May 2025
Abstract
This paper presents KlyH, a new 1D (one-dimensional) large-signal simulation software package for klystrons, designed to deliver efficient and accurate simulation and optimization tools. KlyH integrates a Fortran-based dynamic link library (DLL) as its computational core, which employs high-performance numerical algorithms to rapidly compute critical parameters such as efficiency, gain, and bandwidth. Compared with traditional 1D simulation tools, which often lack open interfaces and extensibility, KlyH is built with a modular and open architecture that supports seamless integration with advanced optimization and intelligent design algorithms. KlyH incorporates multi-objective optimization frameworks, notably the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and Optimized Multi-Objective Particle Swarm Optimization (OMOPSO), enabling automated parameter tuning for efficiency maximization and interaction length optimization. Its klystron bandwidth-analysis module predicts gain and output power across operational bandwidths, with optimization algorithms further enhancing bandwidth performance. A Java-based graphical user interface (GUI) provides an intuitive workflow for parameter configuration and real-time visualization of simulation results. The open architecture also lays the foundation for future integration of artificial intelligence algorithms, promoting intelligent and automated klystron design workflows. The accuracy of KlyH and its potential for parameter optimization are confirmed by a case study on an X-band relativistic klystron amplifier. Discrepancies observed between 1D simulations and 3D PIC (three-dimensional particle-in-cell) simulation results are analyzed to identify model limitations, providing critical insights for advancing high-performance klystron designs.
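Both NSGA-II and OMOPSO are built around Pareto dominance. Below is a minimal Python sketch of the non-dominated sorting idea at their core, scoring hypothetical designs on (efficiency, interaction length); the numbers stand in for KlyH outputs and are not from the paper.

```python
import numpy as np

def dominates(a, b):
    """True if design a Pareto-dominates b: maximise efficiency, minimise length."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(objs):
    """Indices of the non-dominated set, the core ranking step in NSGA-II."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Hypothetical klystron designs scored as (efficiency %, interaction length cm)
designs = np.array([[55.0, 120.0], [58.5, 140.0], [52.0, 100.0],
                    [58.5, 135.0], [50.0, 150.0]])
print("Pareto-optimal designs:", pareto_front(designs))  # the trade-off set
```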

13 pages, 465 KiB  
Article
Democratizing Quantitative Data Analysis and Evaluation in Community-Based Research Through a New Automated Tool
by Jonathan Bennett, Mehdi Hajilo, Anna Paula Della Rosa, Rachel Arthur, Wesley James and Karen Matthews
Soc. Sci. 2025, 14(6), 346; https://doi.org/10.3390/socsci14060346 - 29 May 2025
Abstract
Data from community-based research offer crucial insights into community needs, challenges, and strengths, informing effective decision making for development strategies. To ensure efficient analysis, accessible and user-friendly tools are necessary for quick and accurate results. While successful tools and programming languages exist, many social science researchers struggle with complex analytical tools due to limited exposure during their education, as such tools are often not required. Developing an automated, user-friendly tool for community research can support students, researchers, and data centers by bridging gaps in analysis capabilities and enhancing the accessibility of valuable insights. We developed a new automated tool using the Shiny framework in R, designed primarily for analyzing data in community research, which often involves pre- and post-test comparisons. While the tool is specifically tailored for pre- and post-survey data, it can also be easily adapted to provide other statistical information. The findings presented in this paper highlight the efficiency of using this tool for community-based research and emphasize the need for further development to address its shortcomings. Furthermore, this paper lays the groundwork for developing more accessible, user-friendly, and free tools in the future, especially in an era of advanced and complex technologies.
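For context, the pre/post comparison such a tool automates reduces to a paired test. A minimal Python sketch using hypothetical survey scores (the authors' tool itself is built with R/Shiny):

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post survey scores for the same respondents (paired design)
pre = np.array([3.1, 2.8, 3.5, 2.9, 3.2, 2.7, 3.0, 3.4])
post = np.array([3.6, 3.0, 3.9, 3.1, 3.8, 2.9, 3.3, 3.7])

# Paired t-test: did scores shift within subjects after the intervention?
t, p = stats.ttest_rel(post, pre)
# Wilcoxon signed-rank test as a common non-parametric fallback
w, p_w = stats.wilcoxon(post, pre)
print(f"paired t = {t:.2f} (p = {p:.4f}); Wilcoxon p = {p_w:.4f}")
```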

14 pages, 2303 KiB  
Article
Brain White Matter Alterations in Young Adults with Childhood Emotional Neglect Experience
by Xiaokang Jin, Bin Xu, Hua Jin and Shizhen Yan
Behav. Sci. 2025, 15(6), 746; https://doi.org/10.3390/bs15060746 - 28 May 2025
Abstract
Childhood trauma encompasses various subtypes, and evidence suggests that neurodevelopmental damage differs across these subtypes. However, the specific impact of childhood emotional neglect (CEN), a distinct subtype of childhood trauma, on the microstructural integrity of brain white matter remains unclear. Therefore, the present study aims to investigate the effects of CEN on the microstructure of brain white matter in young adults using diffusion tensor imaging. After administering online questionnaires, conducting interviews, and obtaining diagnoses from specialized physicians, we recruited 20 young adults with a history of CEN and 20 young adults with no history of childhood trauma. Using automated fiber tract quantification (driven by a diffusion tensor model), we traced the 20 primary white matter fibers and divided each fiber into 100 nodes for analysis. Group differences in fractional anisotropy (FA) at each node of each fiber were then examined. The results revealed that the FA values at nodes 1–35 of the right thalamic radiation were consistently lower in the emotional neglect group than in the control group (after FWE correction, cluster threshold = 22, p-threshold = 0.005). These findings suggest an association between CEN and reduced FA values in the right thalamic radiation, indicating alterations in brain white matter. Overall, our results contribute to the theoretical understanding of how “experience shapes the brain,” providing new insights into the neurostructural consequences of childhood emotional neglect.
(This article belongs to the Section Social Psychology)
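A minimal sketch of the nodewise analysis described above: pointwise two-sample t-tests along a 100-node tract profile, keeping only runs of contiguous significant nodes at least as long as a cluster threshold. The synthetic FA profiles and the hard-coded threshold of 22 nodes are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Hypothetical FA profiles: (subjects x 100 nodes) per group along one tract
rng = np.random.default_rng(1)
fa_cen = rng.normal(0.45, 0.03, (20, 100))
fa_cen[:, :35] -= 0.06  # simulate lower FA at nodes 1-35
fa_ctl = rng.normal(0.45, 0.03, (20, 100))

# Pointwise two-sample t-test at each node
t, p = stats.ttest_ind(fa_cen, fa_ctl, axis=0)

# Cluster-based control: keep only runs of contiguous significant nodes
# at least 22 nodes long (the threshold reported in the abstract)
sig = p < 0.005
clusters, start = [], None
for i, s in enumerate(np.append(sig, False)):  # sentinel closes a trailing run
    if s and start is None:
        start = i
    elif not s and start is not None:
        if i - start >= 22:
            clusters.append((start, i - 1))
        start = None
print("significant clusters (node ranges):", clusters)
```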

22 pages, 1525 KiB  
Article
Are Nations Ready for Digital Transformation? A Macroeconomic Perspective Through the Lens of Education Quality
by Roman Chinoracky, Natalia Stalmasekova, Radovan Madlenak and Lucia Madlenakova
Economies 2025, 13(6), 152; https://doi.org/10.3390/economies13060152 - 28 May 2025
Abstract
The global shift toward digital transformation presents both opportunities and challenges for national economies, particularly in terms of workforce readiness. While many studies assess digital readiness via infrastructure or technological adoption, fewer investigate the preparedness of countries’ future labor forces. This article addresses this research gap by examining how the quality of education relates to job automation risk across OECD countries. The goal is to identify which nations are least prepared for digital disruption due to weak educational foundations and high automation exposure. Using data on education expenditure, PISA scores, and the Education Index, set against the percentage of jobs at high risk of automation, this study applies correlational analysis and a quadrant overview to assess national readiness. Findings show that countries such as Slovakia, Poland, and Greece are least prepared, combining low investment in education and high exposure to automation. Conversely, nations like Finland, Norway, Sweden, and New Zealand exhibit strong readiness, characterized by robust education systems and lower automation risks. This study contributes to the literature by integrating automation vulnerability into national readiness assessments and offers actionable insights for policymakers focused on education reform and workforce development.
(This article belongs to the Special Issue Economic Development in the Digital Economy Era)
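A minimal sketch of the correlational-plus-quadrant approach described above, using hypothetical indicator values rather than the study's data:

```python
import pandas as pd

# Hypothetical country indicators: education quality proxy vs automation exposure
df = pd.DataFrame({
    "country": ["Slovakia", "Poland", "Greece", "Finland", "Norway", "New Zealand"],
    "education_index": [0.83, 0.87, 0.86, 0.95, 0.94, 0.93],
    "jobs_at_high_risk_pct": [33, 31, 28, 7, 6, 9],
})

# Correlational view: stronger education tends to pair with lower automation risk
print("Pearson r:", round(df["education_index"].corr(df["jobs_at_high_risk_pct"]), 2))

# Quadrant overview: split each axis at its median to flag readiness groups
edu_hi = df["education_index"] >= df["education_index"].median()
risk_hi = df["jobs_at_high_risk_pct"] >= df["jobs_at_high_risk_pct"].median()
df["quadrant"] = ["ready" if e and not r else "least prepared" if r and not e else "mixed"
                  for e, r in zip(edu_hi, risk_hi)]
print(df[["country", "quadrant"]])
```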

15 pages, 2549 KiB  
Article
Automated Implementation of the Edinburgh Visual Gait Score (EVGS)
by Ishaasamyuktha Somasundaram, Albert Tu, Ramiro Olleac, Natalie Baddour and Edward D. Lemaire
Sensors 2025, 25(10), 3226; https://doi.org/10.3390/s25103226 - 21 May 2025
Abstract
The Edinburgh Visual Gait Score (EVGS) is a commonly used clinical scale for assessing gait abnormalities, providing insight into diagnosis and treatment planning. However, its manual implementation is resource-intensive and requires time, expertise, and a controlled environment for video recording and analysis. To address these issues, an automated approach for scoring the EVGS was developed. Unlike past methods dependent on controlled environments or simulated videos, the proposed approach integrates pose estimation with new algorithms to handle operational challenges present in the dataset, such as minor camera movement during sagittal recordings, slight zoom variations in coronal views, and partial visibility (e.g., missing head) in some videos. The system uses OpenPose for pose estimation and new algorithms for automatic gait event detection, stride segmentation, and computation of the 17 EVGS parameters across the sagittal and coronal planes. Evaluation of gait videos of patients with cerebral palsy showed high accuracy for parameters such as hip and knee flexion but a need for improvement in pelvic rotation and hindfoot alignment scoring. This automated EVGS approach can minimize the workload for clinicians through the introduction of automated, rapid gait analysis and enable mobile-based applications for clinical decision-making.
(This article belongs to the Section Biomedical Sensors)
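To illustrate pose-based gait event detection of the kind such a pipeline needs, here is a minimal Python sketch that approximates heel strikes as peaks in an ankle keypoint trajectory; both the heuristic and the synthetic trajectory are assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_heel_strikes(ankle_x: np.ndarray, fps: float = 30.0):
    """Approximate heel strikes as peaks of forward ankle displacement
    relative to the body mean (a common pose-based heuristic)."""
    signal = ankle_x - np.mean(ankle_x)
    peaks, _ = find_peaks(signal, distance=int(0.5 * fps))  # >= 0.5 s apart
    return peaks / fps  # event times in seconds

# Hypothetical sagittal ankle x-trajectory (pixels) from a pose estimator
t = np.arange(0, 5, 1 / 30)
ankle_x = 40 * np.sin(2 * np.pi * 1.0 * t) + np.random.default_rng(2).normal(0, 2, t.size)
print("heel-strike times (s):", np.round(detect_heel_strikes(ankle_x), 2))
```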

20 pages, 8859 KiB  
Article
Efficient Steel Surface Defect Detection via a Lightweight YOLO Framework with Task-Specific Knowledge-Guided Optimization
by He Xu, Zhibo Zhang, Hairong Ye, Jinyu Song and Yanbing Chen
Electronics 2025, 14(10), 2029; https://doi.org/10.3390/electronics14102029 - 16 May 2025
Abstract
Defect detection is a critical task in industrial manufacturing, playing a vital role in achieving automation, improving product quality, and ensuring operational safety. Traditional methods, however, face considerable limitations in terms of accuracy and efficiency. To address these challenges, we propose DCA-YOLO, a lightweight model for steel surface defect detection optimized based on task-specific knowledge. Specifically, our model incorporates a Dynamic Snake Convolution (DSConv) module to capture subtle linear features in challenging defect categories, a context-guided module to leverage contextual information for detecting clustered defects, and an Adaptive Spatial Feature Fusion (ASFF) mechanism to efficiently merge features across scales. The experimental results demonstrate that even with a nanoscale architecture (4.3 million parameters and 9.4 GFLOPs), the enhanced model exhibits marked improvements in detection accuracy and robustness, with mAP50 increasing by 4.6% and mAP50-95 by 7.7%. These findings not only offer a better solution for steel surface defect detection, but also provide new theoretical insights and practical experience for the advancement of industrial inspection technologies. In the future, DCA-YOLO is expected to be applied across a wider range of industrial detection scenarios, further driving progress in the field.
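A minimal PyTorch sketch of the adaptive spatial feature fusion idea: per-pixel softmax weights learned over pyramid levels resized to a common resolution. This simplified module illustrates the mechanism only and is not the DCA-YOLO implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFLite(nn.Module):
    """Simplified adaptive spatial feature fusion: learn per-pixel softmax
    weights over three feature levels resized to a shared resolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.weight = nn.Conv2d(3 * channels, 3, kernel_size=1)  # one logit per level

    def forward(self, feats):
        base = feats[0].shape[-2:]  # fuse at the finest level's resolution
        feats = [F.interpolate(f, size=base, mode="nearest") for f in feats]
        w = torch.softmax(self.weight(torch.cat(feats, dim=1)), dim=1)  # (B, 3, H, W)
        return sum(w[:, i:i + 1] * f for i, f in enumerate(feats))

# Three hypothetical pyramid levels with matching channel counts
p3, p4, p5 = (torch.randn(1, 64, s, s) for s in (80, 40, 20))
fused = ASFFLite(64)([p3, p4, p5])
print(fused.shape)  # torch.Size([1, 64, 80, 80])
```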

22 pages, 7092 KiB  
Article
A GPT-Based Approach for Cyber Threat Assessment
by Fahim Sufi
AI 2025, 6(5), 99; https://doi.org/10.3390/ai6050099 - 13 May 2025
Abstract
Background: The increasing prevalence of cyber threats in industrial cyber–physical systems (ICPSs) necessitates advanced solutions for threat detection and analysis. This research proposes a novel GPT-based framework for assessing cyber threats, leveraging artificial intelligence to process and analyze large-scale cyber event data. Methods: The framework integrates multiple components, including data ingestion, preprocessing, feature extraction, and analysis modules such as knowledge graph construction, clustering, and anomaly detection. It utilizes a hybrid methodology combining spectral residual transformation and Convolutional Neural Networks (CNNs) to identify anomalies in time-series cyber event data, alongside regression models for evaluating the significant factors associated with cyber events. Results: The system was evaluated using 9018 cyber-related events sourced from 44 global news portals. Performance metrics, including precision (0.999), recall (0.998), and F1-score (0.998), demonstrate the framework’s efficacy in accurately classifying and categorizing cyber events. Notably, anomaly detection identified six significant deviations during the monitored timeframe (25 September 2023 to 25 November 2024), with a sensitivity of 75%, revealing critical insights into unusual activity patterns. The fully deployed automated model also identified 11 correlated factors and five unique clusters associated with high-rated cyber incidents. Conclusions: This approach provides actionable intelligence for stakeholders by offering real-time monitoring, anomaly detection, and knowledge graph-based insights into cyber threats. The outcomes highlight the system’s potential to enhance ICPS security, supporting proactive threat management and resilience in increasingly complex industrial environments.
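The spectral residual transform used in the anomaly-detection stage is a compact FFT-based saliency trick. A minimal numpy sketch on hypothetical daily event counts (the CNN the framework layers on top is omitted):

```python
import numpy as np

def spectral_residual_saliency(x: np.ndarray, q: int = 3) -> np.ndarray:
    """Saliency map via the spectral residual transform: subtract the locally
    averaged log-amplitude spectrum, then invert with the original phase."""
    fft = np.fft.fft(x)
    log_amp = np.log(np.abs(fft) + 1e-8)
    avg_log_amp = np.convolve(log_amp, np.ones(q) / q, mode="same")
    residual = log_amp - avg_log_amp
    return np.abs(np.fft.ifft(np.exp(residual + 1j * np.angle(fft))))

# Hypothetical daily cyber-event counts with one injected burst
rng = np.random.default_rng(3)
counts = rng.poisson(20, 200).astype(float)
counts[120] += 60  # simulated anomaly
s = spectral_residual_saliency(counts)
print("most anomalous day:", int(np.argmax(s)))  # expect 120
```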

42 pages, 6895 KiB  
Article
IceBench: A Benchmark for Deep-Learning-Based Sea-Ice Type Classification
by Samira Alkaee Taleghan, Andrew P. Barrett, Walter N. Meier and Farnoush Banaei-Kashani
Remote Sens. 2025, 17(9), 1646; https://doi.org/10.3390/rs17091646 - 6 May 2025
Abstract
Sea ice plays a critical role in the global climate system and maritime operations, making timely and accurate classification essential. However, traditional manual methods are time-consuming, costly, and have inherent biases. Automating sea-ice type classification addresses these challenges by enabling faster, more consistent, and scalable analysis. While both traditional and deep-learning approaches have been explored, deep-learning models offer a promising direction for improving efficiency and consistency in sea-ice classification. However, the absence of a standardized benchmark and comparative study prevents a clear consensus on the best-performing models. To bridge this gap, we introduce IceBench, a comprehensive benchmarking framework for sea-ice type classification. Our key contributions are three-fold: First, we establish the IceBench benchmarking framework, which leverages the existing AI4Arctic Sea Ice Challenge Dataset as a standardized dataset, incorporates a comprehensive set of evaluation metrics, and includes representative models from the entire spectrum of sea-ice type-classification methods, categorized into two distinct groups: pixel-based classification methods and patch-based classification methods. IceBench is open-source and allows for convenient integration and evaluation of other sea-ice type-classification methods, hence facilitating comparative evaluation of new methods and improving reproducibility in the field. Second, we conduct an in-depth comparative study on representative models to assess their strengths and limitations, providing insights for both practitioners and researchers. Third, we leverage IceBench for systematic experiments addressing key research questions on model transferability across seasons (time) and locations (space), data downsampling, and preprocessing strategies. By identifying the best-performing models under different conditions, IceBench serves as a valuable reference for future research and a robust benchmarking framework for the field.
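To make the pixel-based versus patch-based distinction concrete, a minimal Python sketch that tiles a scene into fixed-size patches for patch-based classifiers; the scene shape and channels are hypothetical, not the AI4Arctic data layout:

```python
import numpy as np

def extract_patches(scene: np.ndarray, patch: int = 32, stride: int = 32):
    """Tile a (H, W, C) scene into patches for patch-based classification;
    pixel-based methods would instead predict a label at every (H, W) position."""
    H, W, _ = scene.shape
    out = [scene[i:i + patch, j:j + patch]
           for i in range(0, H - patch + 1, stride)
           for j in range(0, W - patch + 1, stride)]
    return np.stack(out)

# Hypothetical two-channel SAR scene (e.g., HH/HV backscatter)
scene = np.random.default_rng(4).normal(size=(256, 256, 2))
patches = extract_patches(scene)
print(patches.shape)  # (64, 32, 32, 2)
```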

54 pages, 14411 KiB  
Review
Exploring the Chemistry and Applications of Thio-, Seleno-, and Tellurosugars
by Roxana Martínez-Pascual, Mario Valera-Zaragoza, José G. Fernández-Bolaños and Óscar López
Molecules 2025, 30(9), 2053; https://doi.org/10.3390/molecules30092053 - 5 May 2025
Abstract
Given the crucial roles of carbohydrates in energy supply, biochemical processes, signaling events, and the pathogenesis of several diseases, the development of carbohydrate analogues, called glycomimetics, is a key research area in Glycobiology, Pharmacology, and Medicinal Chemistry. Among the many structural transformations explored, the replacement of endo- and exocyclic oxygen atoms by carbon (carbasugars) or heteroatoms, such as nitrogen (aza- and iminosugars), phosphorus (phosphasugars), sulfur (thiosugars), selenium (selenosugars), or tellurium (tellurosugars), has garnered significant attention. These isosteric substitutions can modulate carbohydrate bioavailability, stability, and bioactivity, while introducing new properties, such as redox activity, interactions with pathological lectins and enzymes, or cytotoxic effects. In this manuscript, we have focused on three major families of glycomimetics: thio-, seleno-, and tellurosugars. We provide a comprehensive review of the most relevant synthetic pathways leading to substitutions primarily at the endocyclic and glycosidic positions. The scope includes metal-catalyzed reactions, organocatalysis, electro- and photochemical transformations, free-radical processes, and automated syntheses. Additionally, mechanistic insights, stereoselectivity, and biological properties are also discussed. The structural diversity and promising bioactivities of these glycomimetics underscore their significance in this research area.
(This article belongs to the Special Issue Glycomimetics: Design, Synthesis and Bioorganic Applications)

18 pages, 2052 KiB  
Article
Research on the Automatic Multi-Label Classification of Flight Instructor Comments Based on Transformer and Graph Neural Networks
by Zejian Liang, Yunxiang Zhao, Mengyuan Wang, Hong Huang and Haiwen Xu
Aerospace 2025, 12(5), 407; https://doi.org/10.3390/aerospace12050407 - 4 May 2025
Abstract
With the rapid advancement of the civil aviation sector and the concurrent expansion of pilot training programs, a pressing need arises for more efficient assessment methodologies during the pilot training process. Traditional written evaluations conducted by flight instructors are often marred by subjectivity and inefficiency, rendering them inadequate to satisfy the stringent demands of Competency-Based Training and Assessment (CBTA) frameworks. To address this challenge, this study presents a novel multi-label classification model that seamlessly integrates RoBERTa, a robust language model, with Graph Convolutional Networks (GCNs). By simultaneously modeling text features and label interdependencies, this model enables the automated, multi-dimensional classification of instructor evaluations. It incorporates a dynamic weight fusion strategy, which intelligently adjusts the output weights of RoBERTa and GCNs based on label correlations. Additionally, it introduces a label co-occurrence graph convolution layer, designed to capture intricate higher-order dependencies among labels. This study is based on a real-world dataset comprising 1078 evaluations and 158 labels, covering six major dimensions, including operational capabilities and communication skills. To provide context for the improvement, the proposed RoBERTa + GCN model is compared with key baseline models, such as BERT and LSTM. The results show that the RoBERTa + GCN model achieves an F1 score of 0.9737, representing an average improvement of 4.73% over these traditional methods. This approach enhances the consistency and efficiency of flight training assessments and provides new insights into integrating natural language processing and graph neural networks, demonstrating broad application prospects.
(This article belongs to the Special Issue New Trends in Aviation Development 2024–2025)
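A minimal PyTorch sketch of a dynamic weight-fusion layer: a learned gate blends text-encoder logits with label-graph logits per label. The gating form is an assumption; the paper's exact fusion rule may differ:

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Blend text-encoder logits with label-graph logits via a learned,
    per-label gate, a simplified stand-in for RoBERTa + GCN weight fusion."""
    def __init__(self, n_labels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * n_labels, n_labels), nn.Sigmoid())

    def forward(self, text_logits, graph_logits):
        alpha = self.gate(torch.cat([text_logits, graph_logits], dim=-1))
        return alpha * text_logits + (1 - alpha) * graph_logits

n_labels = 158  # label set size reported in the abstract
fuse = DynamicFusion(n_labels)
text_logits = torch.randn(4, n_labels)   # stand-in for RoBERTa classifier output
graph_logits = torch.randn(4, n_labels)  # stand-in for GCN output over the label graph
probs = torch.sigmoid(fuse(text_logits, graph_logits))  # multi-label probabilities
print(probs.shape)  # torch.Size([4, 158])
```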

19 pages, 628 KiB  
Review
Reconceptualizing Gatekeeping in the Age of Artificial Intelligence: A Theoretical Exploration of Artificial Intelligence-Driven News Curation and Automated Journalism
by Dan Valeriu Voinea
Journal. Media 2025, 6(2), 68; https://doi.org/10.3390/journalmedia6020068 - 1 May 2025
Abstract
Artificial intelligence (AI) is transforming how news is produced, curated, and consumed, challenging traditional gatekeeping theories rooted in human editorial control. We develop a robust theoretical framework to reconceptualize gatekeeping in the AI era. We integrate classic media theories—gatekeeping, agenda-setting, and framing—with contemporary insights from algorithmic news recommender systems, large language model (LLM)–based news writing, and platform studies. Our review reveals that AI-driven content curation systems (e.g., social media feeds, news aggregators) increasingly mediate what news is visible, sometimes reinforcing mainstream agendas, as Nechushtai & Lewis observe, while at other times introducing new biases or echo chambers. Simultaneously, automated news generation via LLMs raises questions about how training data and optimization goals (engagement vs. diversity) act as new “gatekeepers” in story selection and framing. We also found pervasive support for Simon’s argument that reliance on third-party AI platforms transfers authority away from newsrooms, creating power dependencies that may undercut journalistic autonomy. Moreover, adaptive algorithms learn from user behavior, creating feedback loops that dynamically shape news diversity and bias over time. Drawing on communication studies, science and technology studies (STS), and AI ethics, we propose an updated theoretical framework of “algorithmic gatekeeping” that accounts for the hybrid human–AI processes governing news flow. We outline key research gaps—including opaque algorithmic decision-making and normative questions of accountability—and suggest directions for future theory-building to ensure journalism’s core values survive in the age of AI-driven news.

26 pages, 2812 KiB  
Article
Dynamic Modeling, Trajectory Optimization, and Linear Control of Cable-Driven Parallel Robots for Automated Panelized Building Retrofits
by Yifang Liu and Bryan P. Maldonado
Buildings 2025, 15(9), 1517; https://doi.org/10.3390/buildings15091517 - 1 May 2025
Abstract
The construction industry faces a growing need for automation to reduce costs, improve accuracy and productivity, and address labor shortages. One area that stands to benefit significantly from automation is panelized prefabricated building envelope retrofits, which can improve a building’s energy efficiency in heating and cooling interior spaces. In this paper, we propose using cable-driven parallel robots (CDPRs), which can effectively lift and handle large objects, to install these panels. However, implementing CDPRs presents significant challenges because of their nonlinear dynamics, complex trajectory planning, and precise control requirements. To tackle these challenges, this work focuses on a new application of established control and trajectory optimization theories in a CDPR simulation of a building envelope retrofit under real-world conditions. We first model the dynamics of CDPRs, highlighting the critical role of damping in system behavior. Building on this dynamic model, we formulate a trajectory optimization problem to generate feasible and efficient motion plans for the robot under operational and environmental constraints. Given the high precision required in the construction industry, accurately tracking the optimized trajectory is essential. However, challenges such as partial observability and external vibrations complicate this task. To address these issues, a Linear Quadratic Gaussian control framework is applied, enabling the robot to track the optimized trajectories with precision. Simulation results show that the proposed controller enables precise end effector positioning with errors under 4 mm, even in the presence of external wind disturbances. Through comprehensive simulations, our approach allows for an in-depth exploration of the system’s nonlinear dynamics, trajectory optimization, and control strategies under controlled yet highly realistic conditions. The results demonstrate the feasibility of CDPRs for automating panel installation and provide insights into their practical deployment.
(This article belongs to the Special Issue Robotics, Automation and Digitization in Construction)
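An LQG design pairs an LQR regulator with a Kalman estimator. A minimal Python sketch of the LQR half for a single damped payload axis, with hypothetical mass and damping values (not the paper's model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy single-axis payload model: position/velocity double integrator with
# viscous damping (the abstract stresses damping's role in CDPR dynamics)
m, c = 500.0, 50.0                      # hypothetical panel mass (kg), damping (N s/m)
A = np.array([[0.0, 1.0], [0.0, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# LQR half of the LQG design: penalize position error heavily (mm-level goal)
Q = np.diag([1e6, 1.0])
R = np.array([[1e-3]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)         # optimal state feedback u = -K x
print("LQR gain:", K.round(1))
# The Kalman-filter half (state estimation under partial observability and
# wind-induced disturbances) is the dual problem, solved the same way.
```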

38 pages, 2098 KiB  
Review
Rethinking Poultry Welfare—Integrating Behavioral Science and Digital Innovations for Enhanced Animal Well-Being
by Suresh Neethirajan
Poultry 2025, 4(2), 20; https://doi.org/10.3390/poultry4020020 - 29 Apr 2025
Abstract
The relentless drive to meet global demand for poultry products has pushed chicken farming toward rapid intensification, dramatically boosting efficiency and yield. Yet these gains have exposed a host of complex welfare challenges that have prompted scientific scrutiny and ethical reflection. In this review, I critically evaluate recent innovations aimed at mitigating such concerns by drawing on advances in behavioral science, digital monitoring, and insights into biological adaptations. Specifically, I focus on four interconnected themes: First, I spotlight the complexity of avian sensory perception—encompassing vision, auditory capabilities, olfaction, and tactile faculties—to underscore how lighting design, housing configurations, and enrichment strategies can better align with birds’ unique sensory worlds. Second, I explore novel tools for gauging emotional states and cognition, ranging from cognitive bias tests to developing protocols for identifying pain or distress based on facial cues. Third, I examine the transformative potential of computer vision, bioacoustics, and sensor-based technologies for the continuous, automated tracking of behavior and physiological indicators in commercial flocks. Fourth, I assess how data-driven management platforms, underpinned by precision livestock farming, can deploy real-time insights to optimize welfare on a broad scale. Recognizing that climate change and evolving production environments intensify these challenges, I also investigate how breeds resilient to extreme conditions might open new avenues for welfare-centered genetic and management approaches. While the adoption of cutting-edge techniques has shown promise, significant hurdles persist regarding validation, standardization, and commercial acceptance. I conclude that truly sustainable progress hinges on an interdisciplinary convergence of ethology, neuroscience, engineering, data analytics, and evolutionary biology—an integrative path that not only refines welfare assessment but also reimagines poultry production in ethically and scientifically robust ways.

13 pages, 510 KiB  
Article
A Comparative Analysis of Student Performance Prediction: Evaluating Optimized Deep Learning Ensembles Against Semi-Supervised Feature Selection-Based Models
by Jose Antonio Lagares Rodríguez, Norberto Díaz-Díaz and Carlos David Barranco González
Appl. Sci. 2025, 15(9), 4818; https://doi.org/10.3390/app15094818 - 26 Apr 2025
Abstract
Advancements in modern technology have significantly increased the availability of educational data, presenting researchers with new challenges in extracting meaningful insights. Educational Data Mining offers analytical methods to support the prediction of student outcomes, the development of intelligent tutoring systems, and curriculum optimization. Prior studies have highlighted the potential of semi-supervised approaches that incorporate feature selection to identify factors influencing academic success, particularly for improving model interpretability and predictive performance. Many feature selection methods tend to exclude variables that may not be individually powerful predictors but can collectively provide significant information, thereby constraining a model’s capabilities in learning environments. In contrast, Deep Learning (DL) models paired with Automated Machine Learning (AutoML) techniques can decrease the reliance on manual feature engineering, enabling automatic fine-tuning across numerous model configurations. In this study, we propose a reproducible methodology that integrates DL with AutoML to evaluate student performance. We compared the proposed DL methodology to a semi-supervised approach originally introduced by Yu et al. under the same evaluation criteria. Our results indicate that DL-based models can provide a flexible, data-driven approach for examining student outcomes, while preserving the importance of feature selection for interpretability. The proposed methodology is available for replication and further research.
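A minimal scikit-learn sketch of the DL-plus-AutoML pattern described above: a randomized search over MLP configurations replaces manual tuning. The dataset and search space are illustrative assumptions:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for a student-outcome dataset (pass/fail target)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# AutoML-style search: explore many network configurations automatically
# instead of hand-engineering a single model
search = RandomizedSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_distributions={
        "hidden_layer_sizes": [(32,), (64,), (64, 32), (128, 64)],
        "alpha": loguniform(1e-5, 1e-1),
        "learning_rate_init": loguniform(1e-4, 1e-1),
    },
    n_iter=20, cv=5, scoring="f1", random_state=0,
)
search.fit(X, y)
print("best config:", search.best_params_, "CV F1:", round(search.best_score_, 3))
```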
