Search Results (217)

Search Parameters:
Keywords = vision foundation model

16 pages, 7591 KB  
Article
High-Fidelity NIR-LED Direct-View Display System for Authentic Night Vision Goggle Simulation Training
by Yixiong Zeng, Bo Xu and Kun Qiu
Sensors 2025, 25(17), 5368; https://doi.org/10.3390/s25175368 - 30 Aug 2025
Abstract
Current simulation training for pilots wearing night vision goggles (NVGs) (e.g., night landings and tactical reconnaissance) faces fidelity limitations from conventional displays. This study proposed a novel dynamic NIR-LED direct-view display system for authentic nighttime scene simulation. Through comparative characterization of NVG response across LED wavelengths under ultra-low-current conditions, 940 nm was identified as the optimal wavelength. Quantification of inherent nonlinear response in NVG observation enabled derivation of a mathematical model that provides the foundation for inverse gamma correction compensation. A prototype NIR-LED display was engineered with 1.25 mm pixel pitch and 1280 × 1024 resolution at 60 Hz refresh rate, achieving >90% uniformity and >2000:1 contrast. Subjective evaluations confirmed exceptional simulation fidelity. This system enables high-contrast, low-power NVG simulation for both full-flight simulators and urban low-altitude reconnaissance training systems, providing the first quantified analysis of NVG-LED nonlinear interactions and establishing the technical foundation for next-generation LED-based all-weather visual displays. Full article
(This article belongs to the Section Sensing and Imaging)
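The abstract above mentions deriving a nonlinear NVG response model and applying inverse gamma correction to compensate for it. A minimal sketch of that compensation idea, assuming a simple power-law display response with a placeholder gamma (the paper's fitted model is not given here):

```python
import numpy as np

def inverse_gamma_correct(image, gamma=2.2):
    """Pre-distort normalized drive levels so that a power-law display
    response (out = in ** gamma) yields a linear end-to-end output.
    The gamma value is illustrative, not the paper's measured NVG curve."""
    image = np.clip(np.asarray(image, dtype=np.float64), 0.0, 1.0)
    return image ** (1.0 / gamma)

# Composing the assumed display response with its inverse recovers the input.
levels = np.linspace(0.0, 1.0, 5)
restored = inverse_gamma_correct(levels) ** 2.2
```

In practice the correction would be derived from the measured NVG-LED response rather than a fixed exponent.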

47 pages, 2691 KB  
Systematic Review
Buzzing with Intelligence: A Systematic Review of Smart Beehive Technologies
by Josip Šabić, Toni Perković, Petar Šolić and Ljiljana Šerić
Sensors 2025, 25(17), 5359; https://doi.org/10.3390/s25175359 - 29 Aug 2025
Abstract
Smart-beehive technologies represent a paradigm shift in beekeeping, transitioning from traditional, reactive methods toward proactive, data-driven management. This systematic literature review investigates the current landscape of intelligent systems applied to beehives, focusing on the integration of IoT-based monitoring, sensor modalities, machine learning techniques, and their applications in precision apiculture. The review adheres to PRISMA guidelines and analyzes 135 peer-reviewed publications identified through searches of Web of Science, IEEE Xplore, and Scopus between 1990 and 2025. It addresses key research questions related to the role of intelligent systems in early problem detection, hive condition monitoring, and predictive intervention. Common sensor types include environmental, acoustic, visual, and structural modalities, each supporting diverse functional goals such as health assessment, behavior analysis, and forecasting. A notable trend toward deep learning, computer vision, and multimodal sensor fusion is evident, particularly in applications involving disease detection and colony behavior modeling. Furthermore, the review highlights a growing corpus of publicly available datasets critical for the training and evaluation of machine learning models. Despite the promising developments, challenges remain in system integration, dataset standardization, and large-scale deployment. This review offers a comprehensive foundation for the advancement of smart apiculture technologies, aiming to improve colony health, productivity, and resilience in increasingly complex environmental conditions. Full article

22 pages, 2117 KB  
Article
Deep Learning-Powered Down Syndrome Detection Using Facial Images
by Mujeeb Ahmed Shaikh, Hazim Saleh Al-Rawashdeh and Abdul Rahaman Wahab Sait
Life 2025, 15(9), 1361; https://doi.org/10.3390/life15091361 - 27 Aug 2025
Abstract
Down syndrome (DS) is one of the prevalent chromosomal disorders, representing distinctive craniofacial features and a range of developmental and medical challenges. Due to the lack of clinical expertise and high infrastructure costs, access to genetic testing is restricted to resource-constrained clinical settings. There is a demand for developing a non-invasive and equitable DS screening tool, facilitating DS diagnosis for a wide range of populations. In this study, we develop and validate a robust, interpretable deep learning model for the early detection of DS using facial images of infants. A hybrid feature extraction architecture combining RegNet X–MobileNet V3 and vision transformer (ViT)-Linformer is developed for effective feature representation. We use an adaptive attention-based feature fusion to enhance the proposed model’s focus on diagnostically relevant facial regions. Bayesian optimization with hyperband (BOHB) fine-tuned extremely randomized trees (ExtraTrees) is employed to classify the features. To ensure the model’s generalizability, stratified five-fold cross-validation is performed. Compared to the recent DS classification approaches, the proposed model demonstrates outstanding performance, achieving an accuracy of 99.10%, precision of 98.80%, recall of 98.87%, F1-score of 98.83%, and specificity of 98.81%, on the unseen data. The findings underscore the strengths of the proposed model as a reliable screening tool to identify DS in the early stages using the facial images. This study paves the foundation to build equitable, scalable, and trustworthy digital solution for effective pediatric care across the globe. Full article
(This article belongs to the Section Medical Research)
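The validation scheme named in this abstract is stratified five-fold cross-validation. A from-scratch sketch of stratified fold assignment on hypothetical binary labels (real pipelines would typically use an off-the-shelf splitter):

```python
import numpy as np

def stratified_kfold_indices(labels, k=5, seed=0):
    """Split sample indices into k folds that preserve class proportions:
    each class's indices are shuffled and dealt round-robin across folds."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        for i, sample in enumerate(idx):
            folds[i % k].append(int(sample))
    return [np.sort(f).astype(int) for f in folds]

labels = [0] * 50 + [1] * 50   # hypothetical DS / non-DS labels
folds = stratified_kfold_indices(labels, k=5)
```

Each fold then serves once as the held-out set while the rest train the classifier.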

14 pages, 898 KB  
Article
Attention-Pool: 9-Ball Game Video Analytics with Object Attention and Temporal Context Gated Attention
by Anni Zheng and Wei Qi Yan
Computers 2025, 14(9), 352; https://doi.org/10.3390/computers14090352 - 27 Aug 2025
Abstract
The automated analysis of pool game videos presents significant challenges due to complex object interactions, precise rule requirements, and event-driven game dynamics that traditional computer vision approaches struggle to address effectively. This research introduces TCGA-Pool, a novel video analytics framework specifically designed for comprehensive 9-ball pool game understanding through advanced object attention mechanisms and temporal context modeling. Our approach addresses the critical gap in automated cue sports analysis by focusing on three essential classification tasks: Clear shot detection (successful ball potting without fouls), win condition identification (game-ending scenarios), and potted balls counting (accurate enumeration of successfully pocketed balls). The proposed framework leverages a Temporal Context Gated Attention (TCGA) mechanism that dynamically focuses on salient game elements while incorporating sequential dependencies inherent in pool game sequences. Through comprehensive evaluation on a dataset comprising 58,078 annotated video frames from diverse 9-ball pool scenarios, our TCGA-Pool framework demonstrates substantial improvements over existing video analysis methods, achieving accuracy gains of 4.7%, 3.2%, and 6.2% for clear shot detection, win condition identification, and potted ball counting tasks, respectively. The framework maintains computational efficiency with only 27.3 M parameters and 13.9 G FLOPs, making it suitable for real-time applications. Our contributions include the introduction of domain-specific object attention mechanisms, the development of adaptive temporal modeling strategies for cue sports, and the implementation of a practical real-time system for automated pool game monitoring. This work establishes a foundation for intelligent sports analytics in precision-based games and demonstrates the effectiveness of specialized deep learning approaches for complex temporal video understanding tasks. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
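The abstract describes a gated attention mechanism over frame sequences. A toy sketch of the generic gating idea only — per-frame scalar gates weighting a temporal pool — not the paper's actual TCGA module (the weight vector and feature sizes are invented for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_temporal_pool(features, w_gate):
    """features: (T, D) per-frame descriptors; w_gate: (D,) gate weights.
    Each frame gets a scalar gate in (0, 1); gated features are averaged
    over time with the gates as normalized weights."""
    gates = sigmoid(features @ w_gate)                 # (T,)
    return (gates[:, None] * features).sum(axis=0) / gates.sum()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))                        # 8 frames, 4-dim features
pooled = gated_temporal_pool(feats, np.zeros(4))       # zero weights -> plain mean
```

With all-zero gate weights every gate is 0.5, so the pooled vector reduces to the plain temporal mean; learned weights would instead emphasize salient frames.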

23 pages, 6098 KB  
Article
Smart Manufacturing Workflow for Fuse Box Assembly and Validation: A Combined IoT, CAD, and Machine Vision Approach
by Carmen-Cristiana Cazacu, Teodor Cristian Nasu, Mihail Hanga, Dragos-Alexandru Cazacu and Costel Emil Cotet
Appl. Sci. 2025, 15(17), 9375; https://doi.org/10.3390/app15179375 - 26 Aug 2025
Abstract
This paper presents an integrated workflow for smart manufacturing, combining CAD modeling, Digital Twin synchronization, and automated visual inspection to detect defective fuses in industrial electrical panels. The proposed system connects Onshape CAD models with a collaborative robot via the ThingWorx IoT platform and leverages computer vision with HSV color segmentation for real-time fuse validation. A custom ROI-based calibration method is implemented to address visual variation across fuse types, and a 5-s time-window validation improves detection robustness under fluctuating conditions. The system achieves a 95% accuracy rate across two fuse box types, with confidence intervals reported for statistical significance. Experimental findings indicate an approximate 85% decrease in manual intervention duration. Because of its adaptability and extensibility, the design can be implemented in a variety of assembly processes and provides a foundation for smart factory systems that are more scalable and independent. Full article
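The fuse-validation step above relies on HSV color segmentation within a region of interest. A plain-numpy sketch of the in-range masking idea (mirroring what OpenCV's `inRange` does on an HSV image); the fuse color bounds below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def hsv_in_range_mask(hsv, lower, upper):
    """Boolean mask of pixels whose (H, S, V) values all fall inside the
    closed [lower, upper] range, per channel."""
    hsv = np.asarray(hsv)
    return np.all((hsv >= np.asarray(lower)) & (hsv <= np.asarray(upper)),
                  axis=-1)

# Tiny 2x2 HSV "image": two pixels in a hypothetical amber-fuse range.
img = np.array([[[30, 200, 220], [30, 40, 220]],
                [[90, 200, 220], [31, 210, 200]]])
mask = hsv_in_range_mask(img, lower=(25, 150, 150), upper=(35, 255, 255))
```

A real pipeline would compute such a mask inside each calibrated fuse ROI and accept the fuse if the in-range pixel fraction stays high across the validation time window.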

16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 - 25 Aug 2025
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving a mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
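The detection metrics quoted above (precision, recall, F1-score) follow directly from confusion counts. A minimal sketch with invented counts, not the paper's evaluation data:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts chosen to land near the reported >95% range.
p, r, f1 = precision_recall_f1(tp=96, fp=4, fn=4)
```

mAP@0.5 additionally averages precision over recall levels and classes at an IoU threshold of 0.5, which is why it can sit below the per-class F1 figures.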

37 pages, 10467 KB  
Article
Cascaded Hierarchical Attention with Adaptive Fusion for Visual Grounding in Remote Sensing
by Huming Zhu, Tianqi Gao, Zhixian Li, Zhipeng Chen, Qiuming Li, Kongmiao Miao, Biao Hou and Licheng Jiao
Remote Sens. 2025, 17(17), 2930; https://doi.org/10.3390/rs17172930 - 23 Aug 2025
Abstract
Visual grounding for remote sensing (RSVG) is the task of localizing the referred object in remote sensing (RS) images by parsing free-form language descriptions. However, RSVG faces the challenge of low detection accuracy due to unbalanced multi-scale grounding capabilities, where large objects have more prominent grounding accuracy than small objects. Based on Faster R-CNN, we propose Faster R-CNN in Visual Grounding for Remote Sensing (FR-RSVG), a two-stage method for grounding RS objects. Building on this foundation, to enhance the ability to ground multi-scale objects, we propose Faster R-CNN with Adaptive Vision-Language Fusion (FR-AVLF), which introduces a layered Adaptive Vision-Language Fusion (AVLF) module. Specifically, this method can adaptively fuse deep or shallow visual features according to the input text (e.g., location-related or object characteristic descriptions), thereby optimizing semantic feature representation and improving grounding accuracy for objects of different scales. Given that RSVG is essentially an expanded form of RS object detection, and considering the knowledge the model acquired in prior RS object detection tasks, we propose Faster R-CNN with Adaptive Vision-Language Fusion Pretrained (FR-AVLFPRE). To further enhance model performance, we propose Faster R-CNN with Cascaded Hierarchical Attention Grounding and Multi-Level Adaptive Vision-Language Fusion Pretrained (FR-CHAGAVLFPRE), which introduces a cascaded hierarchical attention grounding mechanism, employs a more advanced language encoder, and improves upon AVLF by proposing Multi-Level AVLF, significantly improving localization accuracy in complex scenarios. Extensive experiments on the DIOR-RSVG dataset demonstrate that our model surpasses most existing advanced models. To validate the generalization capability of our model, we conducted zero-shot inference experiments on shared categories between DIOR-RSVG and both Complex Description DIOR-RSVG (DIOR-RSVG-C) and OPT-RSVG datasets, achieving performance superior to most existing models. Full article
(This article belongs to the Section AI Remote Sensing)

25 pages, 3532 KB  
Article
Sustainable Design and Lifecycle Prediction of Crusher Blades Through a Digital Replica-Based Predictive Prototyping Framework and Data-Efficient Machine Learning
by Hilmi Saygin Sucuoglu, Serra Aksoy, Pinar Demircioglu and Ismail Bogrekci
Sustainability 2025, 17(16), 7543; https://doi.org/10.3390/su17167543 - 21 Aug 2025
Abstract
Sustainable product development demands components that last longer, consume less energy, and can be refurbished within circular supply chains. This study introduces a digital replica-based predictive prototyping workflow for industrial crusher blades that meets these goals. Six commercially used blade geometries (A–F) were recreated as high-fidelity finite-element models and subjected to an identical 5 kN cutting load. Comparative simulations revealed that a triple-edged hooked profile (Blade A) reduced peak von Mises stress by 53% and total deformation by 71% compared with a conventional flat blade, indicating lower drive-motor power and slower wear. To enable fast virtual prototyping and condition-based maintenance, deformation was subsequently predicted using a data-efficient machine-learning model. Multi-view image augmentation enlarged the experimental dataset from 6 to 60 samples, and an XGBoost regressor, trained on computer-vision geometry features and engineering parameters, achieved R2 = 0.996 and MAE = 0.005 mm in five-fold cross-validation. Feature-importance analysis highlighted applied stress, safety factor, and edge design as the dominant predictors. The integrated method reduces development cycles, reduces material loss via iteration, extends the life of blades, and facilitates refurbishment decisions, providing a foundation for future integration into digital twin systems to support sustainable product development and predictive maintenance in heavy-duty manufacturing. Full article
(This article belongs to the Special Issue Achieving Sustainability in New Product Development and Supply Chain)
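The regression metrics reported for the deformation model above are R² and MAE. A minimal sketch of how both are computed, on toy deformation values rather than the paper's data:

```python
import numpy as np

def r2_and_mae(y_true, y_pred):
    """Coefficient of determination (R^2) and mean absolute error (MAE)
    for a regression model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
    return 1.0 - ss_res / ss_tot, np.abs(y_true - y_pred).mean()

# Hypothetical deformations in mm.
r2, mae = r2_and_mae([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.1, 3.9])
```

In the paper's setup these would be averaged across the five cross-validation folds.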

16 pages, 7955 KB  
Article
Development and Validation of a Computer Vision Dataset for Object Detection and Instance Segmentation in Earthwork Construction Sites
by JongHo Na, JaeKang Lee, HyuSoung Shin and IlDong Yun
Appl. Sci. 2025, 15(16), 9000; https://doi.org/10.3390/app15169000 - 14 Aug 2025
Abstract
Construction sites report the highest rate of industrial accidents, prompting the active development of smart safety management systems based on deep learning-based computer vision technology. To support the digital transformation of construction sites, securing site-specific datasets is essential. In this study, raw data were collected from an actual earthwork site. Key construction equipment and terrain objects primarily operated at the site were identified, and 89,766 images were processed to build a site-specific training dataset. This dataset includes annotated bounding boxes for object detection and polygon masks for instance segmentation. The performance of the dataset was validated using representative models—YOLO v7 for object detection and Mask R-CNN for instance segmentation. Quantitative metrics and visual assessments confirmed the validity and practical applicability of the dataset. The dataset used in this study has been made publicly available for use by researchers in related fields. This dataset is expected to serve as a foundational resource for advancing object detection applications in construction safety. Full article
(This article belongs to the Section Civil Engineering)

33 pages, 9679 KB  
Article
Intelligent Defect Detection of Ancient City Walls Based on Computer Vision
by Gengpei Zhang, Xiaohan Dou and Leqi Li
Sensors 2025, 25(16), 5042; https://doi.org/10.3390/s25165042 - 14 Aug 2025
Abstract
As an important tangible carrier of historical and cultural heritage, ancient city walls embody the historical memory of urban development and serve as evidence of engineering evolution. However, due to prolonged exposure to complex natural environments and human activities, they are highly susceptible to various types of defects, such as cracks, missing bricks, salt crystallization, and vegetation erosion. To enhance the capability of cultural heritage conservation, this paper focuses on the ancient city wall of Jingzhou and proposes a multi-stage defect-detection framework based on computer vision technology. The proposed system establishes a processing pipeline that includes image processing, 2D defect detection, depth estimation, and 3D reconstruction. On the processing end, the Restormer and SG-LLIE models are introduced for image deblurring and illumination enhancement, respectively, improving the quality of wall images. The system incorporates the LFS-GAN model to augment defect samples. On the detection end, YOLOv12 is used as the 2D recognition network to detect common defects based on the generated samples. A depth estimation module is employed to assist in the verification of ancient wall defects. Finally, a Gaussian Splatting point-cloud reconstruction method is used to achieve a 3D visual representation of the defects. Experimental results show that the proposed system effectively detects multiple types of defects in ancient city walls, providing both a theoretical foundation and technical support for the intelligent monitoring of cultural heritage. Full article
(This article belongs to the Section Sensing and Imaging)

29 pages, 6246 KB  
Article
DASeg: A Domain-Adaptive Segmentation Pipeline Using Vision Foundation Models—Earthquake Damage Detection Use Case
by Huili Huang, Andrew Zhang, Danrong Zhang, Max Mahdi Roozbahani and James David Frost
Remote Sens. 2025, 17(16), 2812; https://doi.org/10.3390/rs17162812 - 14 Aug 2025
Abstract
Limited labeled imagery and tight response windows hinder the accurate damage quantification for post-disaster assessment. The objective of this study is to develop and evaluate a deep learning-based Domain-Adaptive Segmentation (DASeg) workflow to detect post-disaster damage using limited information available shortly after an event. DASeg unifies three Vision Foundation Models in an automatic workflow: fine-tuned DINOv2 supplies attention-based point prompts, fine-tuned Grounding DINO yields open-set box prompts, and a frozen Segment Anything Model (SAM) generates the final masks. In the earthquake-focused case study DASeg-Quake, the pipeline boosts mean Intersection over Union (mIoU) by 9.52% over prior work and 2.10% over state-of-the-art supervised baselines. In a zero-shot setting scenario, DASeg-Quake achieves the mIoU of 75.03% for geo-damage analysis, closely matching expert-level annotations. These results show that DASeg achieves superior workflow enhancement in infrastructure damage segmentation without needing pixel-level annotation, providing a practical solution for early-stage disaster response. Full article
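The headline metric in this abstract is mean Intersection over Union (mIoU). A plain-numpy sketch of the metric on integer label maps (toy 2x2 maps, not DASeg-Quake data):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes for integer label maps.
    Classes absent from both prediction and target are skipped."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
miou = mean_iou(pred, target, num_classes=2)   # (1/2 + 2/3) / 2
```

For binary damage masks like SAM's output, the same computation runs per image over the foreground/background classes and is averaged over the test set.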

19 pages, 6692 KB  
Article
A Deep Learning-Based Machine Vision System for Online Monitoring and Quality Evaluation During Multi-Layer Multi-Pass Welding
by Van Doi Truong, Yunfeng Wang, Chanhee Won and Jonghun Yoon
Sensors 2025, 25(16), 4997; https://doi.org/10.3390/s25164997 - 12 Aug 2025
Abstract
Multi-layer multi-pass welding plays an important role in manufacturing industries such as nuclear power plants, pressure vessel manufacturing, and ship building. However, distortion or welding defects are still challenges; therefore, welding monitoring and quality control are essential tasks for the dynamic adjustment of execution during welding. The aim was to propose a machine vision system for monitoring and surface quality evaluation during multi-pass welding using a line scanner and infrared camera sensors. The cross-section modelling based on the line scanner data enabled the measurement of distortion and dynamic control of the welding plan. Lack of fusion, porosity, and burn-through defects were intentionally generated by controlling welding parameters to construct a defect inspection dataset. To reduce the influence of material surface colour, the proposed normal map approach combined with a deep learning approach was applied for inspecting the surface defects on each layer, achieving a mean average precision of 0.88. In addition to monitoring the temperature of the weld pool, a burn-through defect detection algorithm was introduced to track welding status. The whole system was integrated into a graphical user interface to visualize the welding progress. This work provides a solid foundation for monitoring and potential for the further development of the automatic adaptive welding system in multi-layer multi-pass welding. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

34 pages, 3764 KB  
Review
Research Progress and Applications of Artificial Intelligence in Agricultural Equipment
by Yong Zhu, Shida Zhang, Shengnan Tang and Qiang Gao
Agriculture 2025, 15(15), 1703; https://doi.org/10.3390/agriculture15151703 - 7 Aug 2025
Abstract
With the growth of the global population and the increasing scarcity of arable land, traditional agricultural production is confronted with multiple challenges, such as efficiency improvement, precision operation, and sustainable development. The progressive advancement of artificial intelligence (AI) technology has created a transformative opportunity for the intelligent upgrade of agricultural equipment. This article systematically presents recent progress in computer vision, machine learning (ML), and intelligent sensing. The key innovations are highlighted in areas such as object detection and recognition (e.g., a K-nearest neighbor (KNN) achieved 98% accuracy in distinguishing vibration signals across operation stages); autonomous navigation and path planning (e.g., a deep reinforcement learning (DRL)-optimized task planner for multi-arm harvesting robots reduced execution time by 10.7%); state perception (e.g., a multilayer perceptron (MLP) yielded 96.9% accuracy in plug seedling health classification); and precision control (e.g., an intelligent multi-module coordinated control system achieved a transplanting efficiency of 5000 plants/h). The findings reveal a deep integration of AI models with multimodal perception technologies, significantly improving the operational efficiency, resource utilization, and environmental adaptability of agricultural equipment. This integration is catalyzing the transition toward intelligent, automated, and sustainable agricultural systems. Nevertheless, intelligent agricultural equipment still faces technical challenges regarding data sample acquisition, adaptation to complex field environments, and the coordination between algorithms and hardware. Looking ahead, the convergence of digital twin (DT) technology, edge computing, and big data-driven collaborative optimization is expected to become the core of next-generation intelligent agricultural systems. These technologies have the potential to overcome current limitations in perception and decision-making, ultimately enabling intelligent management and autonomous decision-making across the entire agricultural production chain. This article aims to provide a comprehensive foundation for advancing agricultural modernization and supporting green, sustainable development. Full article
(This article belongs to the Section Agricultural Technology)

60 pages, 8707 KB  
Review
Automation in Construction (2000–2023): Science Mapping and Visualization of Journal Publications
by Mohamed Marzouk, Abdulrahman A. Bin Mahmoud, Khalid S. Al-Gahtani and Kareem Adel
Buildings 2025, 15(15), 2789; https://doi.org/10.3390/buildings15152789 - 7 Aug 2025
Abstract
This paper presents a scientometric review that provides a quantitative perspective on the evolution of Automation in Construction Journal (AICJ) research, emphasizing its developmental paths and emerging trends. The study aims to analyze the journal’s growth and citation impact over time. It also seeks to identify the most influential publications and the cooperation patterns among key contributors. Furthermore, the study explores the journal’s primary research themes and their evolution. Accordingly, 4084 articles were identified using the Web of Science (WoS) database and subjected to a multistep analysis using VOsviewer version 1.6.18 and Biblioshiny as software tools. First, the growth and citation of the publications over time are inspected and evaluated, in addition to ranking the most influential documents. Second, the co-authorship analysis method is applied to visualize the cooperation patterns between countries, organizations, and authors. Finally, the publications are analyzed using keyword co-occurrence and keyword thematic evolution analyses, revealing five major research clusters: (i) foundational optimization, (ii) deep learning and computer vision, (iii) building information modeling, (iv) 3D printing and robotics, and (v) machine learning. Additionally, the analysis reveals significant growth in publications (54.5%) and citations (78.0%) from 2018 to 2023, indicating the journal’s increasing global influence. This period also highlights the accelerated adoption of digitalization (e.g., BIM, computational design), increased integration of AI and machine learning for automation and predictive analytics, and rapid growth of robotics and 3D printing, driving sustainable and innovative construction practices. The paper’s findings can help readers and researchers gain a thorough understanding of the AICJ’s published work, aid research groups in planning and optimizing their research efforts, and inform editorial boards on the most promising areas in the existing body of knowledge for further investigation and development. Full article
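The keyword co-occurrence analysis mentioned above reduces, at its core, to counting how often keyword pairs appear together on the same paper. A stdlib sketch of that raw counting step (toy keyword lists, not the journal's corpus; tools like VOSviewer then cluster and visualize these counts):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    """Count unordered keyword pairs that co-occur within a paper's
    keyword list; the result is the raw co-occurrence 'matrix'."""
    counts = Counter()
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts

papers = [["BIM", "deep learning", "computer vision"],
          ["BIM", "robotics"],
          ["deep learning", "computer vision"]]
cooc = keyword_cooccurrence(papers)
```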

46 pages, 3093 KB  
Review
Security and Privacy in the Internet of Everything (IoE): A Review on Blockchain, Edge Computing, AI, and Quantum-Resilient Solutions
by Haluk Eren, Özgür Karaduman and Muharrem Tuncay Gençoğlu
Appl. Sci. 2025, 15(15), 8704; https://doi.org/10.3390/app15158704 - 6 Aug 2025
Abstract
The IoE forms the foundation of the modern digital ecosystem by enabling seamless connectivity and data exchange among smart devices, sensors, and systems. However, the inherent nature of this structure, characterized by high heterogeneity, distribution, and resource constraints, renders traditional security approaches insufficient in areas such as data privacy, authentication, access control, and scalable protection. Moreover, centralized security systems face increasing fragility due to single points of failure, various AI-based attacks, including adversarial learning, model poisoning, and deepfakes, and the rising threat of quantum computers to encryption protocols. This study systematically examines the individual and integrated solution potentials of technologies such as Blockchain, Edge Computing, Artificial Intelligence, and Quantum-Resilient Cryptography within the scope of IoE security. Comparative analyses are provided based on metrics such as energy consumption, latency, computational load, and security level, while centralized and decentralized models are evaluated through a multi-layered security lens. In addition to the proposed multi-layered architecture, the study also structures solution methods and technology integrations specific to IoE environments. Classifications, architectural proposals, and the balance between performance and security are addressed from both theoretical and practical perspectives. Furthermore, a future vision is presented regarding federated learning-based privacy-preserving AI solutions, post-quantum digital signatures, and lightweight consensus algorithms. In this context, the study reveals existing vulnerabilities through an interdisciplinary approach and proposes a holistic framework for sustainable, scalable, and quantum-compatible IoE security. Full article
