Search Results (2,005)

Search Parameters:
Keywords = digital image generation

28 pages, 3575 KB  
Article
Toward Automatic 3D Model Reconstruction of Building Curtain Walls from UAV Images Based on NeRF and Deep Learning
by Zeyu Li, Qian Wang, Hongzhe Yue and Xiang Nie
Remote Sens. 2025, 17(19), 3368; https://doi.org/10.3390/rs17193368 (registering DOI) - 5 Oct 2025
Abstract
The Automated Building Information Modeling (BIM) reconstruction of existing building curtain walls is crucial for promoting digital Operation and Maintenance (O&M). However, existing 3D reconstruction technologies are mainly designed for general architectural scenes, and there is currently a lack of research specifically focused on the BIM reconstruction of curtain walls. This study proposes a BIM reconstruction method from unmanned aerial vehicle (UAV) images based on neural radiance field (NeRF) and deep learning-based semantic segmentation. The proposed method compensates for the lack of semantic information in traditional NeRF methods and can fill the gap in the automatic reconstruction of semantic models for curtain walls. A comprehensive high-rise building is selected as a case study to validate the proposed method. The results show that the overall accuracy (OA) for semantic segmentation of curtain wall point clouds is 71.8%, and the overall dimensional error of the reconstructed BIM model is less than 0.1 m, indicating high modeling accuracy. Additionally, this study compares the proposed method with photogrammetry-based reconstruction and traditional semantic segmentation methods to further validate its effectiveness. Full article
(This article belongs to the Section AI Remote Sensing)
13 pages, 1461 KB  
Article
Reproducibility of AI in Cephalometric Landmark Detection: A Preliminary Study
by David Emilio Fracchia, Denis Bignotti, Stefano Lai, Stefano Cubeddu, Fabio Curreli, Massimiliano Lombardo, Alessio Verdecchia and Enrico Spinas
Diagnostics 2025, 15(19), 2521; https://doi.org/10.3390/diagnostics15192521 (registering DOI) - 5 Oct 2025
Abstract
Objectives: This study aimed to evaluate the reproducibility of artificial intelligence (AI) in identifying cephalometric landmarks, comparing its performance with manual tracing by an experienced orthodontist. Methods: A high-quality lateral cephalogram of a 26-year-old female patient, meeting strict inclusion criteria, was selected. Eighteen cephalometric landmarks were identified using the WebCeph software (version 1500) in three experimental settings: AI tracing without image modification (AInocut), AI tracing with image modification (AI-cut), and manual tracing by an orthodontic expert. Each evaluator repeated the procedure 10 times on the same image. X and Y coordinates were recorded, and reproducibility was assessed using the coefficient of variation (CV) and centroid distance analysis. Statistical comparisons were performed using one-way ANOVA and Bonferroni post hoc tests, with significance set at p < 0.05. Results: AInocut achieved the highest reproducibility, showing the lowest mean CV values. Both AI methods demonstrated greater consistency than manual tracing, particularly for landmarks such as Menton (Me) and Pogonion (Pog). Gonion (Go) showed the highest variability across all groups. Significant differences were found for the Posterior Nasal Spine (PNS) point (p = 0.001), where AI outperformed manual tracing. Variability was generally higher along the X-axis than the Y-axis. Conclusions: AI demonstrated superior reproducibility in cephalometric landmark identification compared to manual tracing by an experienced operator. While certain points showed high consistency, others—particularly PNS and Go—remained challenging. These findings support AI as a reliable adjunct in digital cephalometry, although the use of a single radiograph limits generalizability. Broader, multi-image studies are needed to confirm clinical applicability. Full article
18 pages, 6931 KB  
Article
Research on Multi-Sensor Data Fusion Based Real-Scene 3D Reconstruction and Digital Twin Visualization Methodology for Coal Mine Tunnels
by Hongda Zhu, Jingjing Jin and Sihai Zhao
Sensors 2025, 25(19), 6153; https://doi.org/10.3390/s25196153 (registering DOI) - 4 Oct 2025
Abstract
This paper proposes a multi-sensor data-fusion-based method for real-scene 3D reconstruction and digital twin visualization of coal mine tunnels, aiming to address issues such as low accuracy in non-photorealistic modeling and difficulties in feature object recognition during traditional coal mine digitization processes. The research employs cubemap-based mapping technology to project acquired real-time tunnel images onto six faces of a cube, combined with navigation information, pose data, and synchronously acquired point cloud data to achieve spatial alignment and data fusion. On this basis, inner/outer corner detection algorithms are utilized for precise image segmentation, and a point cloud region growing algorithm integrated with information entropy optimization is proposed to realize complete recognition and segmentation of tunnel planes (e.g., roof, floor, left/right sidewalls) and high-curvature feature objects (e.g., ventilation ducts). Furthermore, geometric dimensions extracted from segmentation results are used to construct 3D models, and real-scene images are mapped onto model surfaces via UV (U and V axes of texture coordinate) texture mapping technology, generating digital twin models with authentic texture details. Experimental validation demonstrates that the method performs excellently in both simulated and real coal mine environments, with models capable of faithfully reproducing tunnel spatial layouts and detailed features while supporting multi-view visualization (e.g., bottom view, left/right rotated views, front view). This approach provides efficient and precise technical support for digital twin construction, fine-grained structural modeling, and safety monitoring of coal mine tunnels, significantly enhancing the accuracy and practicality of photorealistic 3D modeling in intelligent mining applications. Full article
(This article belongs to the Section Sensing and Imaging)
21 pages, 4282 KB  
Article
PoseNeRF: In Situ 3D Reconstruction Method Based on Joint Optimization of Pose and Neural Radiation Field for Smooth and Weakly Textured Aeroengine Blade
by Yao Xiao, Xin Wu, Yizhen Yin, Yu Cai and Yuanhan Hou
Sensors 2025, 25(19), 6145; https://doi.org/10.3390/s25196145 (registering DOI) - 4 Oct 2025
Abstract
Digital twins are essential for the real-time health management and monitoring of aeroengines, and the in situ three-dimensional (3D) reconstruction technology of key components of aeroengines is an important support for the construction of a digital twin model. In this paper, an in situ high-fidelity 3D reconstruction method, named PoseNeRF, for aeroengine blades based on the joint optimization of pose and neural radiance field (NeRF), is proposed. An aeroengine blades background filtering network based on complex network theory (ComBFNet) is designed to filter out the useless background information contained in the two-dimensional (2D) images and improve the fidelity of the 3D reconstruction of blades, and the mean intersection over union (mIoU) of the network reaches 95.5%. The joint optimization loss function, including photometric loss, depth loss, and point cloud loss is proposed. The method solves the problems of excessive blurring and aliasing artifacts, caused by factors such as smooth blade surface and weak texture information in 3D reconstruction, as well as the cumulative error problem caused by camera pose pre-estimation. The PSNR, SSIM, and LPIPS of the 3D reconstruction model proposed in this paper reach 25.59, 0.719, and 0.239, respectively, which are superior to other general models. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
24 pages, 11415 KB  
Article
Multi-Scale Investigation on Bearing Capacity and Load-Transfer Mechanism of Screw Pile Group via Model Tests and DEM Simulation
by Fenghao Bai, Ye Lu and Jiaxiang Yang
Buildings 2025, 15(19), 3581; https://doi.org/10.3390/buildings15193581 (registering DOI) - 4 Oct 2025
Abstract
Screw piles are widely used in infrastructure such as railways, highways, and ports, owing to their large pile resistance compared to unthreaded piles. While most screw pile research focuses on single-pile behavior under rotational installation using torque–capacity correlations, limited studies investigate group effects under alternative installation methods. In this study, the load-transfer mechanism of screw piles and soil displacement under vertical installation was explored using laboratory model tests combined with digital image correlation techniques. In addition, numerical simulations using the discrete element method were performed. Based on both lab tests and numerical simulation results, it is discovered that the ultimate bearing capacity of a single screw pile was approximately 50% higher than that of a cylindrical pile with the same outer diameter and length. For pile groups, the group effect coefficient of a triple-pile group composed of screw piles was 0.64, while that of cylindrical piles was 0.55. This phenomenon was caused by the unique thread-soil interaction of screw piles. The threads generated greater side resistance and reduced stress concentration at the pile tip compared with cylindrical piles. Moreover, the effects of pile type, pile number, embedment length, pile spacing, and thread pitch on pile resistance and soil displacement were also investigated. The findings in this study revealed the micro–macro correspondence of screw pile performance and can serve as references for pile construction in practice. Full article
(This article belongs to the Special Issue Structural Engineering in Building)
19 pages, 6432 KB  
Article
Storage and Production Aspects of Reservoir Fluids in Sedimentary Core Rocks
by Jumana Sharanik, Ernestos Sarris and Constantinos Hadjistassou
Geosciences 2025, 15(10), 386; https://doi.org/10.3390/geosciences15100386 - 3 Oct 2025
Abstract
Understanding the fluid storage and production mechanisms in sedimentary rocks is vital for optimising natural gas extraction and subsurface resource management. This study applies high-resolution X-ray computed tomography (≈15 μm) to digitise rock samples from onshore Cyprus, producing digital rock models from DICOM images. The workflow, including digitisation, numerical simulation of natural gas flow, and experimental validation, demonstrates strong agreement between digital and laboratory-measured porosity, confirming the methods’ reliability. Synthetic sand packs generated via particle-based modelling provide further insight into the gas storage mechanisms. A linear porosity–permeability relationship was observed, with porosity increasing from 0 to 35% and permeability from 0 to 3.34 mD. Permeability proved critical for production, as a rise from 1.5 to 3 mD nearly doubled the gas flow rate (14 to 30 fm3/s). Grain morphology also influenced gas storage. Increasing roundness enhanced porosity from 0.30 to 0.41, boosting stored gas volume by 47.6% to 42 fm3. Although based on Cyprus retrieved samples, the methodology is applicable to sedimentary formations elsewhere. The findings have implications for enhanced oil recovery, CO2 sequestration, hydrogen storage, and groundwater extraction. This work highlights digital rock physics as a scalable technology for investigating transport behaviour in porous media and improving characterisation of complex sedimentary reservoirs. Full article
(This article belongs to the Special Issue Advancements in Geological Fluid Flow and Mechanical Properties)
31 pages, 9679 KB  
Article
Weather-Corrupted Image Enhancement with Removal-Raindrop Diffusion and Mutual Image Translation Modules
by Young-Ho Go and Sung-Hak Lee
Mathematics 2025, 13(19), 3176; https://doi.org/10.3390/math13193176 - 3 Oct 2025
Abstract
Artificial intelligence-based image processing is critical for sensor fusion and image transformation in mobility systems. Advanced driver assistance functions such as forward monitoring and digital side mirrors are essential for driving safety. Degradation due to raindrops, fog, and high-dynamic range (HDR) imbalance caused by lighting changes impairs visibility and reduces object recognition and distance estimation accuracy. This paper proposes a diffusion framework to enhance visibility under multi-degradation conditions. The denoising diffusion probabilistic model (DDPM) offers more stable training and high-resolution restoration than the generative adversarial networks. The DDPM relies on large-scale paired datasets, which are difficult to obtain in raindrop scenarios. This framework applies the Palette diffusion model, comprising data augmentation and raindrop-removal modules. The data augmentation module generates raindrop image masks and learns inpainting-based raindrop synthesis. Synthetic masks simulate raindrop patterns and HDR imbalance scenarios. The raindrop-removal module reconfigures the Palette architecture for image-to-image translation, incorporating the augmented synthetic dataset for raindrop removal learning. Loss functions and normalization strategies improve restoration stability and removal performance. During inference, the framework operates with a single conditional input, and an efficient sampling strategy is introduced to significantly accelerate the process. In post-processing, tone adjustment and chroma compensation enhance visual consistency. The proposed method preserves fine structural details and outperforms existing approaches in visual quality, improving the robustness of vision systems under adverse conditions. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Scientific Computing)
38 pages, 2485 KB  
Review
Research Progress of Deep Learning-Based Artificial Intelligence Technology in Pest and Disease Detection and Control
by Yu Wu, Li Chen, Ning Yang and Zongbao Sun
Agriculture 2025, 15(19), 2077; https://doi.org/10.3390/agriculture15192077 - 3 Oct 2025
Abstract
With the rapid advancement of artificial intelligence technology, the widespread application of deep learning in computer vision is driving the transformation of agricultural pest detection and control toward greater intelligence and precision. This paper systematically reviews the evolution of agricultural pest detection and control technologies, with a special focus on the effectiveness of deep-learning-based image recognition methods for pest identification, as well as their integrated applications in drone-based remote sensing, spectral imaging, and Internet of Things sensor systems. Through multimodal data fusion and dynamic prediction, artificial intelligence has significantly improved the response times and accuracy of pest monitoring. On the control side, the development of intelligent prediction and early-warning systems, precision pesticide-application technologies, and smart equipment has advanced the goals of eco-friendly pest management and ecological regulation. However, challenges such as high data-annotation costs, limited model generalization, and constrained computing power on edge devices remain. Moving forward, further exploration of cutting-edge approaches such as self-supervised learning, federated learning, and digital twins will be essential to build more efficient and reliable intelligent control systems, providing robust technical support for sustainable agricultural development. Full article
39 pages, 2624 KB  
Review
A Review of Neural Network-Based Image Noise Processing Methods
by Anton A. Volkov, Alexander V. Kozlov, Pavel A. Cheremkhin, Dmitry A. Rymov, Anna V. Shifrina, Rostislav S. Starikov, Vsevolod A. Nebavskiy, Elizaveta K. Petrova, Evgenii Yu. Zlokazov and Vladislav G. Rodin
Sensors 2025, 25(19), 6088; https://doi.org/10.3390/s25196088 - 2 Oct 2025
Abstract
This review explores the current landscape of neural network-based methods for digital image noise processing. Digital cameras have become ubiquitous in fields like forensics and medical diagnostics, and image noise remains a critical factor for ensuring image quality. Traditional noise suppression techniques are often limited by extensive parameter selection and inefficient handling of complex data. In contrast, neural networks, particularly convolutional neural networks, autoencoders, and generative adversarial networks, have shown significant promise for noise estimation, suppression, and analysis. These networks can handle complex noise patterns, leverage context-specific data, and adapt to evolving conditions with minimal manual intervention. This paper describes the basics of camera and image noise components and existing techniques for their evaluation. Main neural network-based methods for noise estimation are briefly presented. This paper discusses neural network application for noise suppression, classification, image source identification, and the extraction of unique camera fingerprints through photo response non-uniformity. Additionally, it highlights the challenges of generating reliable training datasets and separating image noise from photosensor noise, which remains a fundamental issue. Full article
(This article belongs to the Section Sensing and Imaging)
22 pages, 8922 KB  
Article
Stress Assessment of Abutment-Free and Three Implant–Abutment Connections Utilizing Various Abutment Materials: A 3D Finite Element Study of Static and Cyclic Static Loading Conditions
by Maryam H. Mugri, Nandalur Kulashekar Reddy, Mohammed E. Sayed, Khurshid Mattoo, Osama Mohammed Qomari, Mousa Mahmoud Alnaji, Waleed Abdu Mshari, Firas K. Alqarawi, Saad Saleh AlResayes and Raghdah M. Alshaibani
J. Funct. Biomater. 2025, 16(10), 372; https://doi.org/10.3390/jfb16100372 - 2 Oct 2025
Abstract
Background: The implant–abutment interface has been thoroughly examined due to its impact on the success of implant healing and longevity. Removing the abutment is advantageous, but it changes the biomechanics of the implant fixture and restoration. This in vitro three-dimensional finite element analytical (FEA) study aims to evaluate the distribution of von Mises stress (VMS) in abutment-free and three additional implant abutment connections composed of various titanium alloys. Materials and methods: A three-dimensional implant-supported single-crown prosthesis model was digitally generated on the mandibular section using a combination of microcomputed tomography imaging (microCT), a computer-assisted designing (CAD) program (SolidWorks), Analysis of Systems (ANSYS), and a 3D digital scan (Visual Computing Lab). Four digital models [A (BioHorizons), B (Straumann AG), C abutment-free (Matrix), and D (TRI)] representing three different functional biomaterials [wrought Ti-6Al-4Va ELI, Roxolid (85% Ti, 15% Zr), and Ti-6Al-4V ELI] were subjected to simulated static/cyclic static loading in axial/oblique directions after being restored with highly translucent monolithic zirconia restoration. The stresses generated on the implant fixture, abutment, crown, screw, cortical, and cancellous bones were measured. Results: The highest VMSs were generated by the abutment-free (Model C, Matrix) implant system on the implant fixture [static (32.36 MPa), cyclic static (83.34 MPa)], screw [static (16.85 MPa), cyclic static (30.33 MPa), oblique (57.46 MPa)], and cortical bone [static (26.55 MPa), cyclic static (108.99 MPa), oblique (47.8 MPa)]. The lowest VMSs in the implant fixture, abutment, screw, and crown were associated with the binary alloy Roxolid [83–87% Ti and 13–17% Zr].
Conclusions: Abutment-free implant systems generate twice the stress on cortical bone compared with other abutment-based implant systems while producing the highest stresses on the fixture and screw, therefore demanding further clinical investigation. Roxolid, a binary alloy of titanium and zirconium, showed the least overall stresses across the different loading conditions and directions. Full article
(This article belongs to the Special Issue Biomaterials and Biomechanics Modelling in Dental Implantology)
18 pages, 2980 KB  
Article
Deep Learning-Based Identification of Kazakhstan Apple Varieties Using Pre-Trained CNN Models
by Jakhfer Alikhanov, Tsvetelina Georgieva, Eleonora Nedelcheva, Aidar Moldazhanov, Akmaral Kulmakhambetova, Dmitriy Zinchenko, Alisher Nurtuleuov, Zhandos Shynybay and Plamen Daskalov
AgriEngineering 2025, 7(10), 331; https://doi.org/10.3390/agriengineering7100331 - 1 Oct 2025
Abstract
This paper presents a digital approach for the identification of apple varieties bred in Kazakhstan using deep learning methods and transfer learning. The main objective of this study is to develop and evaluate an algorithm for automatic varietal classification of apples based on color images obtained under controlled conditions. Five representative cultivars were selected as research objects: Aport Alexander, Ainur, Sinap Almaty, Nursat, and Kazakhskij Yubilejnyj. The fruit samples were collected in the pomological garden of the Kazakh Research Institute of Fruit and Vegetable Growing, ensuring representativeness and taking into account the natural variability of the cultivars. Two convolutional neural network (CNN) architectures—GoogLeNet and SqueezeNet—were fine-tuned using transfer learning with different optimization settings. The data processing pipeline included preprocessing, training and validation set formation, and augmentation techniques to improve model generalization. Network performance was assessed using standard evaluation metrics such as accuracy, precision, and recall, complemented by confusion matrix analysis to reveal potential misclassifications. The results demonstrated high recognition efficiency: the classification accuracy exceeded 95% for most cultivars, while the Ainur variety achieved 100% recognition when tested with GoogLeNet. Interestingly, the Nursat variety achieved the best results with SqueezeNet, which highlights the importance of model selection for specific apple types. These findings confirm the applicability of CNN-based deep learning for varietal recognition of Kazakhstan apple cultivars. The novelty of this study lies in applying neural network models to local Kazakhstan apple varieties for the first time, which is of both scientific and practical importance. 
The practical contribution of the research is the potential integration of the developed method into industrial fruit-sorting systems, thereby increasing productivity, objectivity, and precision in post-harvest processing. The main limitation of this study is the relatively small dataset and the use of controlled laboratory image acquisition conditions. Future research will focus on expanding the dataset, testing the models under real production environments, and exploring more advanced deep learning architectures to further improve recognition performance. Full article
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)
15 pages, 25292 KB  
Article
Reconstructing Ancient Iron-Smelting Furnaces of Guéra (Chad) Through 3D Modeling and AI-Assisted Video Generation
by Jean-Baptiste Barreau, Djimet Guemona and Caroline Robion-Brunner
Electronics 2025, 14(19), 3923; https://doi.org/10.3390/electronics14193923 - 1 Oct 2025
Abstract
This article presents an innovative methodological approach for the documentation and enhancement of ancient ironworking heritage in the Guéra region of Chad. By combining ethno-historical and archaeological surveys, 3D modeling with Blender, and the generation of images and video sequences through artificial intelligence (AI), we propose an integrated production pipeline enabling the faithful reconstruction of three types of metallurgical furnaces. Our method relies on rigorously collected field data to generate multiple and plausible representations from fragmentary information. A standardized evaluation grid makes it possible to assess the archaeological fidelity, cultural authenticity, and visual quality of the reconstructions, thereby limiting biases inherent to generative models. The results offer strong potential for integration into immersive environments, opening up perspectives in education, digital museology, and the virtual preservation of traditional ironworking knowledge. This work demonstrates the relevance of multimodal approaches in reconciling scientific rigor with engaging visual storytelling. Full article
(This article belongs to the Special Issue Augmented Reality, Virtual Reality, and 3D Reconstruction)
43 pages, 28786 KB  
Article
Secure and Efficient Data Encryption for Internet of Robotic Things via Chaos-Based Ascon
by Gülyeter Öztürk, Murat Erhan Çimen, Ünal Çavuşoğlu, Osman Eldoğan and Durmuş Karayel
Appl. Sci. 2025, 15(19), 10641; https://doi.org/10.3390/app151910641 - 1 Oct 2025
Abstract
The increasing adoption of digital technologies, robotic systems, and IoT applications in sectors such as medicine, agriculture, and industry drives a surge in data generation and necessitates secure and efficient encryption. For resource-constrained systems, lightweight yet robust cryptographic algorithms are critical. This study addresses the security demands of IoRT systems by proposing an enhanced chaos-based encryption method. The approach integrates the lightweight structure of NIST-standardized Ascon-AEAD128 with the randomness of the Zaslavsky map. Ascon-AEAD128 is widely used on many hardware platforms; therefore, it must robustly resist both passive and active attacks. To overcome these challenges and enhance Ascon's security, we integrate into Ascon the keys and nonces generated by the Zaslavsky chaotic map, which is deterministic, nonperiodic, and highly sensitive to initial conditions and parameter variations. This integration yields a chaos-based Ascon variant with higher encryption security relative to the standard Ascon. In addition, we introduce exploratory variants that inject non-repeating chaotic values into the initialization vectors (IVs), the round constants (RCs), and the linear diffusion constants (LCs), while preserving the core permutation. Real-time tests are conducted using Raspberry Pi 3B devices and ROS 2–based IoRT robots. The algorithm's performance is evaluated over 100 encryption runs on 12 grayscale/color images and variable-length text transmitted via MQTT. Statistical and differential analyses—including histogram, entropy, correlation, chi-square, NPCR, UACI, MSE, MAE, PSNR, and NIST SP 800-22 randomness tests—assess the encryption strength. The results indicate that the proposed method delivers consistent improvements in randomness and uniformity over standard Ascon-AEAD128, while remaining comparable to state-of-the-art chaotic encryption schemes across standard security metrics.
These findings suggest that the algorithm is a promising option for resource-constrained IoRT applications. Full article
(This article belongs to the Special Issue Recent Advances in Mechatronic and Robotic Systems)
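The abstract above describes deriving Ascon keys and nonces from the Zaslavsky chaotic map. The paper's own construction is not reproduced here; the following is a minimal, self-contained sketch of the general idea, using one common textbook formulation of the map. The function name, parameter values, and byte-extraction rule are illustrative assumptions, not the authors' implementation.

```python
import math

def zaslavsky_keystream(x0, y0, n_bytes, eps=0.3, nu=400.0 / 3.0, gamma=3.0, burn_in=100):
    """Generate n_bytes of key material by iterating the Zaslavsky map.

    One common formulation (parameters here are illustrative):
        x_{n+1} = (x_n + nu*(1 + mu*y_n) + eps*nu*mu*cos(2*pi*x_n)) mod 1
        y_{n+1} = exp(-gamma) * (y_n + eps*cos(2*pi*x_n))
    with mu = (1 - exp(-gamma)) / gamma.
    """
    mu = (1.0 - math.exp(-gamma)) / gamma
    x, y = x0, y0
    out = bytearray()
    for i in range(burn_in + n_bytes):
        x_next = (x + nu * (1.0 + mu * y) + eps * nu * mu * math.cos(2.0 * math.pi * x)) % 1.0
        y = math.exp(-gamma) * (y + eps * math.cos(2.0 * math.pi * x))
        x = x_next
        if i >= burn_in:  # discard transient iterations before emitting bytes
            out.append(int(x * 256) % 256)
    return bytes(out)

# 128-bit key and nonce material for an AEAD scheme such as Ascon-AEAD128;
# initial conditions (x0, y0) act as the secret seed.
key = zaslavsky_keystream(0.1, 0.1, 16)
nonce = zaslavsky_keystream(0.2, 0.3, 16)
```

Because the map is deterministic, both endpoints seeded with the same initial conditions regenerate identical key material, while a tiny perturbation of the seed produces an entirely different byte stream, which is the sensitivity property the abstract relies on.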
24 pages, 1034 KB  
Article
MMFD-Net: A Novel Network for Image Forgery Detection and Localization via Multi-Stream Edge Feature Learning and Multi-Dimensional Information Fusion
by Haichang Yin, KinTak U, Jing Wang and Zhuofan Gan
Mathematics 2025, 13(19), 3136; https://doi.org/10.3390/math13193136 - 1 Oct 2025
Abstract
With the rapid advancement of image processing techniques, digital image forgery detection has emerged as a critical research area in information forensics. This paper proposes a novel deep learning model based on Multi-view Multi-dimensional Forgery Detection Networks (MMFD-Net), designed to simultaneously determine whether an image has been tampered with and precisely localize the forged regions. By integrating a Multi-stream Edge Feature Learning module with a Multi-dimensional Information Fusion module, MMFD-Net employs joint supervised learning to extract semantics-agnostic forgery features, thereby enhancing both detection performance and model generalization. Extensive experiments demonstrate that MMFD-Net achieves state-of-the-art results on multiple public datasets, excelling in both pixel-level localization and image-level classification tasks, while maintaining robust performance in complex scenarios. Full article
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
12 pages, 15620 KB  
Protocol
A Simple Method for Imaging and Quantifying Respiratory Cilia Motility in Mouse Models
by Richard Francis
Methods Protoc. 2025, 8(5), 113; https://doi.org/10.3390/mps8050113 - 1 Oct 2025
Abstract
A straightforward ex vivo approach has been developed and refined to enable high-resolution imaging and quantitative assessment of motile cilia function in mouse airway epithelial tissue, allowing critical insights into cilia motility and cilia-generated flow using different mouse models or following different sample treatments. In this method, freshly excised mouse trachea is cut longitudinally through the trachealis muscle, which is then sandwiched between glass coverslips within a thin silicone gasket. By orienting the tissue along its longitudinal axis, the natural curling of the trachealis muscle helps maintain the sample in a configuration optimal for imaging along the full tracheal length. High-speed video microscopy, utilizing differential interference contrast (DIC) optics and a fast digital camera capturing at >200 frames per second, is then used to record ciliary motion. This enables detailed measurement of both cilia beat frequency (CBF) and waveform characteristics. The application of 1 µm microspheres to the bathing media during imaging allows for additional analysis of fluid flow generated by ciliary activity. The entire procedure typically takes around 40 min to complete per animal: ~30 min for tissue harvest and sample mounting, then ~10 min for imaging samples and acquiring data. Full article
(This article belongs to the Section Biomedical Sciences and Physiology)
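The abstract above describes extracting cilia beat frequency (CBF) from high-speed video recorded at >200 fps. CBF is typically recovered as the dominant frequency of a pixel-intensity time series over a beating region. The sketch below illustrates that general principle with a plain discrete Fourier transform; the function name and the synthetic 12 Hz trace are assumptions for illustration, not the protocol's actual analysis software.

```python
import math

def estimate_cbf(intensity, fs):
    """Estimate cilia beat frequency (Hz) from a pixel-intensity time series.

    Subtracts the mean, evaluates the DFT magnitude at each positive-frequency
    bin, and returns the frequency of the strongest bin.
    """
    n = len(intensity)
    mean = sum(intensity) / n
    detrended = [v - mean for v in intensity]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(detrended))
        im = sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(detrended))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n  # convert peak bin index to Hz

# Synthetic check: a 12 Hz oscillation sampled at 200 fps for 2 s
fs = 200.0
trace = [math.sin(2 * math.pi * 12.0 * t / fs) for t in range(400)]
cbf = estimate_cbf(trace, fs)
```

Sampling well above twice the expected CBF (mouse airway cilia typically beat in the tens of Hz) is why the protocol specifies >200 fps: it keeps the beat frequency far below the Nyquist limit and avoids aliasing.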
