Search Results (162)

Search Parameters:
Keywords = Blender

16 pages, 3704 KB  
Article
Optimization of Scene and Material Parameters for the Generation of Synthetic Training Datasets for Machine Learning-Based Object Segmentation
by Malte Nagel, Kolja Hedrich, Nils Melchert, Lennart Hinz and Eduard Reithmeier
Computers 2025, 14(8), 341; https://doi.org/10.3390/computers14080341 - 21 Aug 2025
Viewed by 284
Abstract
Synthetic training data is often essential for neural-network-based segmentation when real datasets are difficult or impossible to obtain. Conventional synthetic data generation relies on manually selecting scene and material parameters. This can lead to poor performance because the optimal parameters are often non-intuitive and depend heavily on the specific use case and on the objects to be segmented. This study proposes a novel, automated optimization pipeline to improve the quality of synthetic datasets for specific object segmentation tasks. Synthetic datasets are generated by varying material and scene parameters with the BlenderProc framework. These parameters are optimized with the Optuna framework to maximize the average precision achieved by models trained on this data and validated using a small real dataset. After initial single-parameter studies and subsequent multidimensional optimization, optimal scene and material parameters are identified for each object. The results demonstrate the potential of this optimization pipeline to produce synthetic training datasets that enhance neural network performance for specific segmentation tasks, offering insights into the critical role of scene design and material selection in synthetic data generation. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
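
As a rough illustration of the parameter search described in this abstract, the Python sketch below wires Optuna around two stand-in functions; render_dataset and train_and_validate are placeholders for the BlenderProc rendering and the segmentation training/validation, and the parameter names, ranges, and toy score are illustrative only, not the authors' code.

```python
import optuna

def render_dataset(params):
    # Stand-in for BlenderProc rendering: in the real pipeline this would
    # generate synthetic images and segmentation masks for the sampled
    # scene/material parameters.
    return params

def train_and_validate(dataset):
    # Stand-in for training a segmentation network on the synthetic data and
    # returning average precision on a small real validation set. A toy score
    # is returned here so the sketch runs end to end.
    return 1.0 - abs(dataset["roughness"] - 0.4) - 0.1 * abs(dataset["metallic"] - 0.2)

def objective(trial):
    params = {
        "light_energy": trial.suggest_float("light_energy", 10.0, 1000.0, log=True),
        "roughness": trial.suggest_float("roughness", 0.0, 1.0),
        "metallic": trial.suggest_float("metallic", 0.0, 1.0),
        "n_distractors": trial.suggest_int("n_distractors", 0, 20),
    }
    dataset = render_dataset(params)
    return train_and_validate(dataset)  # maximize average precision

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("best parameters:", study.best_params)
```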

36 pages, 13404 KB  
Article
A Multi-Task Deep Learning Framework for Road Quality Analysis with Scene Mapping via Sim-to-Real Adaptation
by Rahul Soans, Ryuichi Masuda and Yohei Fukumizu
Appl. Sci. 2025, 15(16), 8849; https://doi.org/10.3390/app15168849 - 11 Aug 2025
Viewed by 396
Abstract
Robust perception of road surface conditions is a critical challenge for the safe deployment of autonomous vehicles and the efficient management of transportation infrastructure. This paper introduces a synthetic data-driven deep learning framework designed to address this challenge. We present a large-scale, procedurally generated 3D synthetic dataset created in Blender, featuring a diverse range of road defects—including cracks, potholes, and puddles—alongside crucial road features like manhole covers and patches. Crucially, our dataset provides dense, pixel-perfect annotations for segmentation masks, depth maps, and camera parameters (intrinsic and extrinsic). Our proposed model leverages these rich annotations in a multi-task learning framework that jointly performs road defect segmentation and depth estimation, enabling a comprehensive geometric and semantic understanding of the road environment. A core contribution is a two-stage domain adaptation strategy to bridge the synthetic-to-real gap. First, we employ a modified CycleGAN with a segmentation-aware loss to translate synthetic images into a realistic domain while preserving defect fidelity. Second, during model training, we utilize a dual-discriminator adversarial approach, applying alignment at both the feature and output levels to minimize domain shift. Benchmarking experiments validate our approach, demonstrating high accuracy and computational efficiency. Our model excels in detecting subtle or occluded defects, attributed to an occlusion-aware loss formulation. The proposed system shows significant promise for real-time deployment in autonomous navigation, automated infrastructure assessment and Advanced Driver-Assistance Systems (ADAS). Full article
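
To make the multi-task and dual-discriminator idea concrete, the PyTorch sketch below combines a segmentation and a depth loss on synthetic data with feature- and output-level adversarial terms on unlabeled real data. The tiny networks, toy tensors, and loss weighting are placeholders for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    # Shared encoder with segmentation and depth heads (placeholder layers).
    def __init__(self, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(16, n_classes, 1)   # defect classes
        self.depth_head = nn.Conv2d(16, 1, 1)         # depth map

    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.seg_head(feat), self.depth_head(feat)

def discriminator(channels):
    # Small domain classifier used at the feature and output levels.
    return nn.Sequential(nn.Conv2d(channels, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 1))

net, d_feat, d_out = MultiTaskNet(), discriminator(16), discriminator(6)
bce = nn.BCEWithLogitsLoss()

syn = torch.randn(2, 3, 64, 64)              # labeled synthetic batch (toy)
seg_gt = torch.randint(0, 6, (2, 64, 64))
depth_gt = torch.rand(2, 1, 64, 64)
real = torch.randn(2, 3, 64, 64)             # unlabeled real batch (toy)

feat_s, seg_s, dep_s = net(syn)
feat_r, seg_r, _ = net(real)

# Supervised multi-task loss on synthetic data.
task_loss = F.cross_entropy(seg_s, seg_gt) + F.l1_loss(dep_s, depth_gt)

# Adversarial alignment: real-domain features and predictions should look
# "synthetic" to the two discriminators.
df, do = d_feat(feat_r), d_out(seg_r)
adv_loss = bce(df, torch.ones_like(df)) + bce(do, torch.ones_like(do))

total = task_loss + 0.01 * adv_loss          # weighting is illustrative
total.backward()
```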

22 pages, 63497 KB  
Article
From Earth to Interface: Towards a 3D Semantic Virtual Stratigraphy of the Funerary Ara of Ofilius Ianuarius from the Via Appia Antica 39 Burial Complex
by Matteo Lombardi and Rachele Dubbini
Heritage 2025, 8(8), 305; https://doi.org/10.3390/heritage8080305 - 30 Jul 2025
Viewed by 441
Abstract
This paper presents the integrated study of the funerary ara of Ofilius Ianuarius, discovered within the burial complex of Via Appia Antica 39, and explores its digital stratigraphic recontextualisation through two 3D semantic workflows. The research aims to evaluate the potential of stratigraphic 3D modelling as a tool for post-excavation analysis and transparent archaeological interpretation. Starting from a set of georeferenced photogrammetric models acquired between 2023 and 2025, the study tests two workflows: (1) an EMF-based approach using the Extended Matrix, Blender, and EMviq for stratigraphic relationship modelling and online visualisation; (2) a semantic integration method using the .gltf format and the CRMArcheo Annotation Tool developed in Blender, exported to the ATON platform. While both workflows enable accurate 3D documentation, they differ in their capacity for structured semantic enrichment and interoperability. The results highlight the value of combining reality-based models with semantically linked stratigraphic proxies and suggest future directions for linking archaeological datasets, ontologies, and interactive digital platforms. This work contributes to the ongoing effort to foster transparency, reproducibility, and accessibility in virtual archaeological reconstruction. Full article
(This article belongs to the Section Digital Heritage)
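
One way the semantic-annotation and export step could look in Blender's Python API is sketched below: stratigraphic metadata is attached as custom object properties and exported to .gltf with export_extras enabled so the annotations travel with the geometry. The property names and object naming convention are assumptions for illustration, not the authors' CRMArcheo schema.

```python
import bpy

# Attach semantic metadata to each stratigraphic-proxy object as custom
# properties; names and values below are illustrative only.
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH' and obj.name.startswith("US"):   # e.g. "US_102"
        obj["stratigraphic_unit"] = obj.name
        obj["chronology"] = "1st century CE"
        obj["crm_class"] = "Stratigraphic_Volume_Unit"

# Export to glTF; export_extras=True writes custom properties into the node
# "extras" field so they survive the trip to a web viewer such as ATON.
bpy.ops.export_scene.gltf(
    filepath="/tmp/ara_stratigraphy.gltf",
    export_format='GLTF_SEPARATE',
    export_extras=True,
)
```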

17 pages, 13125 KB  
Article
Evaluating the Accuracy and Repeatability of Mobile 3D Imaging Applications for Breast Phantom Reconstruction
by Elena Botti, Bart Jansen, Felipe Ballen-Moreno, Ayush Kapila and Redona Brahimetaj
Sensors 2025, 25(15), 4596; https://doi.org/10.3390/s25154596 - 24 Jul 2025
Viewed by 738
Abstract
Three-dimensional imaging technologies are increasingly used in breast reconstructive and plastic surgery due to their potential for efficient and accurate preoperative assessment and planning. This study systematically evaluates the accuracy and consistency of six commercially available 3D scanning applications (apps)—Structure Sensor, 3D Scanner App, Heges, Polycam, SureScan, and Kiri—in reconstructing the female torso. To avoid variability introduced by human subjects, a silicone breast mannequin model was scanned, with fiducial markers placed at known anatomical landmarks. Manual distance measurements were obtained using calipers by two independent evaluators and compared to digital measurements extracted from 3D reconstructions in Blender software. Each scan was repeated six times per application to ensure reliability. SureScan demonstrated the lowest mean error (2.9 mm), followed by Structure Sensor (3.0 mm), Heges (3.6 mm), 3D Scanner App (4.4 mm), Kiri (5.0 mm), and Polycam (21.4 mm), which showed the highest error and variability. Even the app using an external depth sensor (Structure Sensor) showed no statistically significant accuracy advantage over those using only the iPad’s built-in camera (except for Polycam), underscoring that software is the primary driver of performance, not hardware (alone). This work provides practical insights for selecting mobile 3D scanning tools in clinical workflows and highlights key limitations, such as scaling errors and alignment artifacts. Future work should include patient-based validation and explore deep learning to enhance reconstruction quality. Ultimately, this study lays the foundation for more accessible and cost-effective 3D imaging in surgical practice, showing that smartphone-based tools can produce clinically useful scans. Full article
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
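
A small NumPy sketch of the accuracy comparison follows: per-application error is the absolute difference between caliper-measured inter-landmark distances and the corresponding distances taken on the reconstructed mesh. The numbers are invented for illustration and do not reproduce the study's measurements.

```python
import numpy as np

# Toy inter-landmark distances (mm): caliper reference (mean of two raters)
# vs. digital measurements from two hypothetical scanning apps.
caliper = np.array([152.0, 98.5, 210.3, 75.2, 120.8])
digital = {
    "AppA": np.array([149.5, 101.0, 207.9, 77.8, 118.6]),
    "AppB": np.array([175.0, 80.1, 231.8, 60.3, 140.2]),
}

for app, measured in digital.items():
    err = np.abs(measured - caliper)
    print(f"{app}: mean error = {err.mean():.1f} mm, SD = {err.std(ddof=1):.1f} mm")
```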

27 pages, 14879 KB  
Article
Research on AI-Driven Classification Possibilities of Ball-Burnished Regular Relief Patterns Using Mixed Symmetrical 2D Image Datasets Derived from 3D-Scanned Topography and Photo Camera
by Stoyan Dimitrov Slavov, Lyubomir Si Bao Van, Marek Vozár, Peter Gogola and Diyan Minkov Dimitrov
Symmetry 2025, 17(7), 1131; https://doi.org/10.3390/sym17071131 - 15 Jul 2025
Viewed by 431
Abstract
The present research addresses the application of artificial intelligence (AI) approaches to classifying surface textures, specifically regular relief patterns formed by ball burnishing operations. A two-stage methodology is employed, starting with the creation of regular reliefs (RRs) on test parts by ball burnishing, followed by 3D topography scanning with an Alicona device and data preprocessing with Gwyddion and Blender, where the acquired 3D topographies are converted into sets of 2D images using various virtual camera movements and lighting setups to simulate the symmetrical fluctuations of the real camera around the tool path. Four pre-trained convolutional neural networks (DenseNet121, EfficientNetB0, MobileNetV2, and VGG16) are used as bases for transfer learning and tested for their generalization performance on different combinations of synthetic and real image datasets. The models were evaluated using confusion matrices and four additional metrics. The results show that the pretrained VGG16 model generalizes best to regular relief textures (96%) compared with the other models when subjected to transfer learning via feature extraction on a mixed dataset of 34,037 images in the following proportions: non-textured synthetic (87%), textured synthetic (8%), and real captured (5%) images of such reliefs. Full article
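
A minimal Keras sketch of transfer learning via feature extraction with a frozen VGG16 base is shown below; the class count, image size, and classification head are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4          # assumed number of regular-relief pattern classes
IMG_SIZE = (224, 224)

# Feature extraction: the convolutional base is frozen and only a new
# classification head is trained on the mixed synthetic/real image dataset.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then point at the mixed image folders, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("mixed_dataset/train",
#                                                        image_size=IMG_SIZE)
# model.fit(train_ds, epochs=10)
```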

9 pages, 1819 KB  
Proceeding Paper
Magic of Water: Exploration of Production Process with Fluid Effects in Film and Advertisement in Computer-Aided Design
by Nan-Hu Lu
Eng. Proc. 2025, 98(1), 20; https://doi.org/10.3390/engproc2025098020 - 27 Jun 2025
Viewed by 358
Abstract
Fluid effects are important in films and advertisements, where their realism and aesthetic quality directly impact the visual experience. With the rapid advancement of digital technology and computer-aided design (CAD), modern visual effects are used to simulate various water-related phenomena, such as flowing water, ocean waves, and raindrops. However, creating these realistic effects is not solely dependent on advanced software and hardware; it also depends on the technical and artistic expertise of visual effects artists. In the creation process, the artist must possess a keen aesthetic sense and innovative thinking to craft stunning visual effects while overcoming technological constraints. Whether depicting the grandeur of turbulent ocean scenes or the romance of gentle rain, the artist needs to transform fluid effects into expressive visual language to enhance emotional impact, aligning with the storyline and the director’s vision. The production process of fluid effects typically involves the following critical steps. First, the visual effects artist utilizes CAD-based tools, particle systems, or fluid simulation software to model the dynamic behavior of water. This process demands a solid foundation in physics and the ability to adjust parameters flexibly according to the specific needs of the scene, ensuring that the fluid motion appears natural and smooth. Next, in the rendering stage, the simulated fluid is transformed into realistic imagery, requiring significant computational power and precise handling of lighting effects. Finally, in the compositing stage, the fluid effects are seamlessly integrated with live-action footage, making the visual effects appear as though they are parts of the actual scene. In this study, the technical details of creating fluid effects using free software such as Blender were explored. The study also elucidated how advanced CAD tools are utilized to achieve complex water effects. Additionally, case studies were conducted to illustrate the creative processes involved in visual effects production and to show how technology and artistry can be blended into unforgettable visual spectacles. Full article
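
For readers unfamiliar with Blender's fluid tools, the bpy sketch below sets up a basic Mantaflow liquid domain and inflow; the object sizes, resolution, and settings are illustrative only, and baking, materials, lighting, and compositing would still follow as described above.

```python
import bpy

# Minimal liquid setup: a cube as the simulation domain and a small sphere as
# a liquid inflow. Resolution and scale are illustrative, not production values.
bpy.ops.mesh.primitive_cube_add(size=4.0, location=(0, 0, 2))
domain = bpy.context.active_object
dom_mod = domain.modifiers.new(name="Fluid", type='FLUID')
dom_mod.fluid_type = 'DOMAIN'
dom_mod.domain_settings.domain_type = 'LIQUID'
dom_mod.domain_settings.resolution_max = 128   # higher = finer detail, slower bake

bpy.ops.mesh.primitive_uv_sphere_add(radius=0.3, location=(0, 0, 3.5))
inflow = bpy.context.active_object
flow_mod = inflow.modifiers.new(name="Fluid", type='FLUID')
flow_mod.fluid_type = 'FLOW'
flow_mod.flow_settings.flow_type = 'LIQUID'
flow_mod.flow_settings.flow_behavior = 'INFLOW'

# Baking and rendering would follow, either in the UI or via the bake operators.
```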

17 pages, 2694 KB  
Article
Evaluation of Vibratory Ball Mill Mixing as an Alternative to Wet Granulation in the Manufacturing of Sodium Naproxen Tablets with Dolomite-Based Formulations
by Mateusz Przywara, Klaudia Jękot and Wiktoria Jednacz
Appl. Sci. 2025, 15(13), 6966; https://doi.org/10.3390/app15136966 - 20 Jun 2025
Cited by 1 | Viewed by 300
Abstract
The development of robust and scalable tablet manufacturing methods remains a key objective in pharmaceutical technology, especially when dealing with active pharmaceutical ingredients (APIs) and excipients that exhibit suboptimal processing properties. This study evaluated two alternative manufacturing strategies for tablets containing sodium naproxen (20%, API), dolomite (65%, sustainable mineral filler), cellulose (7%), polyvinylpyrrolidone (5%, binder), and magnesium stearate (3%, lubricant). The direct compression method used a vibrating ball mill (SPEX SamplePrep 8000M), while the indirect method employed wet granulation using a pan granulator at different inclination angles. Physical properties of raw materials and granules were assessed, and final tablets were evaluated for mass, thickness, mechanical resistance, abrasiveness, and API content uniformity. Direct compression using vibratory mixing for 5–10 min (DT2, DT3) resulted in average tablet masses close to the target (0.260 g) and improved reproducibility compared to a reference V-type blender. Wet granulation produced tablets with the lowest abrasiveness (<1.0%) and minimal variability in dimensions and API content. The best uniformity (SD < 0.5%) was observed in batch IT2. Overall, vibratory mixing proved capable of achieving tablet quality comparable to that of wet granulation, while requiring fewer processing steps. This highlights its potential as an efficient and scalable alternative in solid dosage manufacturing. Full article

12 pages, 772 KB  
Article
Clinical and Gut Microbiome Characteristics of Medically Complex Patients Receiving Blenderized Tube Feeds vs. Standard Enteral Feeds
by Marianelly Fernandez Ferrer, Mauricio Retuerto, Aravind Thavamani, Erin Marie San Valentin, Thomas J. Sferra, Mahmoud Ghannoum and Senthilkumar Sankararaman
Nutrients 2025, 17(12), 2018; https://doi.org/10.3390/nu17122018 - 17 Jun 2025
Viewed by 498
Abstract
Background: Diet is known to influence the composition of the gut microbiome. For patients who require enteral feeding, blenderized tube feeds (BTFs) have grown in popularity as an alternative to standard enteral formula (SEF). There is limited literature exploring the impact of BTFs on the gut microbiome. Methods: Twenty-eight patients aged 1 to 22 years who received their nutrition via gastrostomy tube for over 4 weeks were included, and participants were divided into BTF and SEF groups. Demographics and clinical information were collected from the medical records, and all legal guardians completed a semi-structured interview using a questionnaire. 16S rRNA sequencing was used for bacteriome analysis. Results: Eleven patients in the BTF group and seventeen in the SEF group were included. No significant differences in the demographics were noted. Patients on BTFs had no emesis compared to seven (41%) in the SEF group, p = 0.02. There were no significant differences in other clinical characteristics and comorbidities. No significant differences in the gut microbiome between the groups were noted for alpha and beta diversities, richness, and evenness (at both genus and species levels). Differential abundance analysis showed only a few significant differences between the groups at all reported taxonomic levels. Conclusions: Patients on BTFs had a significantly decreased prevalence of emesis compared to the SEF group. No significant differences in the microbiome between the groups were noted for alpha and beta diversities, richness, and evenness. Prospective studies are recommended to verify our preliminary data and further evaluate the implications of our study results. Full article
(This article belongs to the Section Clinical Nutrition)
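
As a reminder of what the alpha-diversity comparison measures, here is a small NumPy sketch computing Shannon and Simpson indices from genus-level read counts; the counts are invented for illustration and are not drawn from the study.

```python
import numpy as np

def shannon(counts):
    # Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa proportions.
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    # Simpson diversity 1 - sum(p_i^2); higher = more even community.
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

# Toy genus-level read counts for one BTF and one SEF sample (illustrative).
btf_sample = np.array([1200, 800, 430, 60, 15, 5], dtype=float)
sef_sample = np.array([2500, 300, 90, 20, 5, 0], dtype=float)

for name, counts in [("BTF", btf_sample), ("SEF", sef_sample)]:
    print(f"{name}: Shannon = {shannon(counts):.2f}, Simpson = {simpson(counts):.2f}")
```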

16 pages, 1913 KB  
Article
Evaluation of Ultra-Low-Dose CBCT Protocols to Investigate Vestibular Bone Defects in the Context of Immediate Implant Planning: An Ex Vivo Study on Cadaver Skulls
by Mats Wernfried Heinrich Böse, Jonas Buchholz, Florian Beuer, Stefano Pieralli and Axel Bumann
J. Clin. Med. 2025, 14(12), 4196; https://doi.org/10.3390/jcm14124196 - 12 Jun 2025
Viewed by 605
Abstract
Background/Objectives: This ex vivo study aimed to evaluate the diagnostic performance of ultra-low-dose (ULD) cone-beam computed tomography (CBCT) protocols in detecting vestibular bone defects for immediate implant planning, using intraoral scan (IOS) data as a reference. Methods: Four CBCT protocols (ENDO, A, B, C) were applied to four dried human skulls using a standardized setup and a single CBCT unit (Planmeca ProMax® 3D Mid, Planmeca Oy, Helsinki, Finland). All scans were taken at 90 kV, with varying parameters: (1) ENDO (40 × 50 mm, 75 µm, 12 mA, 80–120 µSv, 15 s), (2) A (50 × 50 mm, 75 µm, 9 mA, 20–40 µSv, 5 s), (3) B (100 × 60 mm, 150 µm, 7.1 mA, 22–32 µSv, 5 s), and (4) C (100 × 100 mm, 200 µm, 7.1 mA, 44 µSv, 4 s). Vestibular root surfaces of single-rooted teeth (FDI regions 15–25 and 35–45) were digitized via IOS and exported as STL files. CBCT datasets were superimposed using 3D software (Blender 2.79), and surface defects were measured and compared using one-sample t-tests and Bland–Altman analysis. The level of significance was set at p < 0.05. Results: A total of 330 vestibular surfaces from 66 teeth were analyzed. Compared to the IOS reference, protocols ENDO and A showed minimal differences (p > 0.05). In contrast, protocols B and C exhibited statistically significant deviations (p < 0.05). Protocol B demonstrated a mean difference of −0.477 mm2 with limits of agreement (LoA) from −2.04 to 1.09 mm2 and significant intra-rater variability (p < 0.05). Protocol C revealed a similar mean deviation (−0.455 mm2) but a wider LoA (−2.72 to 1.81 mm2), indicating greater measurement variability. Overall, larger voxel sizes were associated with increased random error, although deviations remained within clinically acceptable limits. Conclusions: Despite statistical significance, deviations for protocols B and C remained within clinically acceptable limits. ULD CBCT protocols are, thus, suitable for evaluating vestibular bone defects with reduced radiation exposure. Full article
(This article belongs to the Special Issue Emerging Technologies for Dental Imaging)
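
The Bland–Altman comparison against the IOS reference can be sketched in a few lines of NumPy/SciPy, as below; the simulated per-surface differences are illustrative only and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Toy per-surface defect areas (mm^2): an IOS reference and a simulated CBCT
# protocol with a small negative bias, standing in for the real data.
rng = np.random.default_rng(0)
ios = rng.uniform(2.0, 12.0, size=60)
cbct = ios + rng.normal(-0.45, 0.8, size=60)

diff = cbct - ios
mean_pair = (cbct + ios) / 2.0          # x-axis of a Bland-Altman plot

bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)           # limits of agreement half-width
t, p = stats.ttest_1samp(diff, popmean=0.0)   # one-sample t-test vs. no bias

print(f"bias = {bias:.3f} mm^2, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] mm^2")
print(f"one-sample t-test: t = {t:.2f}, p = {p:.4f}")
# A Bland-Altman plot would scatter mean_pair against diff with horizontal
# lines at the bias and the two limits of agreement.
```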

22 pages, 1970 KB  
Article
Bridging Information from Manufacturing to the AEC Domain: The Development of a Conversion Framework from STEP to IFC
by Davide Avogaro and Carlo Zanchetta
Systems 2025, 13(6), 421; https://doi.org/10.3390/systems13060421 - 31 May 2025
Viewed by 507
Abstract
Interoperability between digital models in the manufacturing and AEC domains is a critical issue in the building design of complex systems. Despite the adoption of well-established standards such as STEP (STandard for the Exchange of Product data, ISO 10303-21) for the industrial domain and IFC (Industry Foundation Classes, ISO 16739-1) for the construction domain, communication between these domains is still limited due to differences in conceptual models, levels of detail, and application purposes. Existing solutions for conversion between these formats are few, often proprietary, and not always suitable to ensure full semantic integration in BIM (Building Information Modeling) flows. This study proposes a methodological framework for structured conversion from STEP to IFC-SPF (STEP Physical File), based on information and geometric simplification and data enrichment. The process includes the elimination of irrelevant components, simplification of geometries, merging assemblies, and integration of data useful to the building context. The experimental implementation, carried out using the Bonsai extension for Blender, demonstrates a substantial reduction in geometric complexity and computational load, while maintaining data consistency required for integration into BIM processes. This approach emerges as a scalable, affordable, and sustainable solution for interoperability between industrial and civil models, even in professional environments lacking advanced software development skills. Full article
(This article belongs to the Special Issue Complex Construction Project Management with Systems Thinking)
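
A possible shape for the data-enrichment end of such a conversion, using ifcopenshell (the library underlying Bonsai) and assuming its api.run interface, is sketched below; the entity names and property values are placeholders, and the geometric simplification itself is assumed to happen upstream.

```python
import ifcopenshell
import ifcopenshell.api

# After the STEP geometry has been simplified and merged elsewhere, each
# assembly becomes a proxy element enriched with building-context data.
model = ifcopenshell.api.run("project.create_file", version="IFC4")
project = ifcopenshell.api.run("root.create_entity", model,
                               ifc_class="IfcProject", name="ConvertedPlant")
ifcopenshell.api.run("unit.assign_unit", model)

element = ifcopenshell.api.run("root.create_entity", model,
                               ifc_class="IfcBuildingElementProxy",
                               name="AirHandlingUnit_01")

pset = ifcopenshell.api.run("pset.add_pset", model, product=element,
                            name="Pset_ManufacturerTypeInformation")
ifcopenshell.api.run("pset.edit_pset", model, pset=pset,
                     properties={"Manufacturer": "ACME", "ModelLabel": "AHU-01"})

model.write("converted_model.ifc")   # IFC-SPF output
```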

30 pages, 63763 KB  
Article
Computer-Aided Facial Soft Tissue Reconstruction with Computer Vision: A Modern Approach to Identifying Unknown Individuals
by Svenja Preuß, Sven Becker, Jasmin Rosenfelder and Dirk Labudde
Appl. Sci. 2025, 15(11), 6086; https://doi.org/10.3390/app15116086 - 28 May 2025
Viewed by 1079
Abstract
Facial soft tissue reconstruction is an important tool in forensic investigations, especially when conventional identification methods are unsuccessful. This paper presents a digital workflow for facial reconstruction and identity verification using computer vision techniques applied to two forensic cases. The first case involves a cold case from 1993, in which a manual reconstruction by Prof. Helmer was conducted in 1994. We digitally reconstructed the same individual using CAD software (Blender), enabling a direct comparison between manual and digital techniques. To date, the deceased remains unidentified. The second case, from 2021, involved a digitally reconstructed face that was later matched to a missing person through DNA analysis. Here, comparison material was available, including an official photograph. A police officer involved in the case noted a “striking resemblance” between the reconstruction and the photograph. To evaluate this subjective impression, we performed quantitative analyses using three face recognition models (Dlib-based method, VGG-Face, and GhostFaceNet). The models did not indicate significant similarity, highlighting a gap between human perception and algorithmic assessment. These findings suggest that current face recognition algorithms may not yet be fully suited to evaluating reconstructions, which tend to deviate in subtle but critical facial features. To achieve better facial recognition results, further research is required to generate more anatomically accurate and detailed reconstructions that align more closely with the sensitivity of AI-based identification systems. Full article
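
The kind of algorithmic comparison reported above can be reproduced at a small scale with the deepface library, one implementation of the VGG-Face backend named in the abstract; the sketch below assumes two hypothetical image files and is not the authors' evaluation code.

```python
from deepface import DeepFace

# Hypothetical file names: a render of the digital reconstruction and the
# official reference photograph used for comparison.
result = DeepFace.verify(
    img1_path="reconstruction_render.jpg",
    img2_path="reference_photo.jpg",
    model_name="VGG-Face",        # one of the models named in the abstract
    distance_metric="cosine",
)

print("verified:", result["verified"])
print("distance:", round(result["distance"], 4),
      "threshold:", result["threshold"])
```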

10 pages, 5714 KB  
Review
Clinical Consequences of Ankyloglossia from Childhood to Adulthood: Support for and Development of a Three-Dimensional Animated Video
by Carlos O’Connor-Reina, Laura Rodriguez Alcala, Gabriela Bosco, Paula Martínez-Ruiz de Apodaca, Paula Mackers, Maria Teresa Garcia-Iriarte, Peter Baptista and Guillermo Plaza
Int. J. Orofac. Myol. Myofunct. Ther. 2025, 51(1), 5; https://doi.org/10.3390/ijom51010005 - 23 May 2025
Cited by 1 | Viewed by 8871
Abstract
Ankyloglossia causes impairment of normal tongue motility and disrupts the average balance of the muscle forces that form the orofacial complex. Inadequate swallowing from birth can cause long-term anatomical and functional consequences in adult life. Using the video presented herein, we describe the current knowledge about the long-term implications of ankyloglossia. After a literature review of the Medline, Google Scholar, and Embase databases on the relations between ankyloglossia and sleep-disordered breathing, we designed and created a three-dimensional (3D) video using Adobe After Effects based on the anatomical and functional changes produced by repeated deglutition, with and without ankyloglossia, from childhood to adulthood. The animated video (Blender 3D, Amsterdam, The Netherlands, 2024) presented herein was based on the most recent literature review of dentition, breathing, posture, and abnormal swallowing, emphasizing the importance of the potential consequences of sleep-disordered breathing. The resulting animated 3D video includes dynamic sequences of a growing child, demonstrating the anatomy and physiology of deglutition with and without ankyloglossia, and its potential consequences for the surrounding structures during growth due to untreated ankyloglossia. This visual instructional video regarding the impacts of ankyloglossia on deglutition/swallowing may help motivate early childhood diagnosis and treatment of ankyloglossia. This instrument addresses the main myofunctional aspects of normal deglutition based on the importance of free tongue motion and can be used by students or professionals training in myofunctional disorders. Full article

19 pages, 3940 KB  
Article
Effect of Workwear Fit on Thermal Insulation: Assessment Using 3D Scanning Technology
by Magdalena Młynarczyk, Joanna Orysiak and Jarosław Jankowski
Materials 2025, 18(9), 2098; https://doi.org/10.3390/ma18092098 - 3 May 2025
Viewed by 439
Abstract
Thermal insulation is a basic property for describing a set of clothing and consists of the thermal resistance of the individual layers of clothing (which depends on the material used and its structure) and also takes into account the air gaps between the layers. Here, the total thermal insulation was measured in a climatic chamber with a thermal manikin. The air gaps were measured using a 3D scanning technique and calculated using the Blender 3D graphics program. Our study shows the effect of garment size (fit) on the size of the air gaps, as well as the influence of the air gap size on the thermal insulation value (both for static and dynamic conditions with 45 double steps and 45 double arm movements per minute) for workwear. The dependence of the total thermal insulation value on the volume and size of the air gap was described by a second-order polynomial (R² > 0.8). It was observed that for workwear, thermal insulation did not increase when the air gaps exceeded approximately 30 mm or when the air gap volume reached 50–55 dm³. The highest total thermal insulation (~0.23 m²·°C/W) was achieved when the garment closely fitted the wearer’s body (or in this case, the thermal manikin) without excessive tightness. Full article
(This article belongs to the Special Issue Advanced Textile Materials: Design, Properties and Applications)
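
Fitting the reported second-order polynomial and its R² can be sketched with NumPy as below; the data points are invented to mimic the plateau described above and are not the study's measurements.

```python
import numpy as np

# Toy data: air-gap volume (dm^3) vs. total thermal insulation (m^2·°C/W).
volume = np.array([20, 30, 40, 50, 55, 60, 70], dtype=float)
insulation = np.array([0.185, 0.205, 0.220, 0.230, 0.231, 0.230, 0.228])

coeffs = np.polyfit(volume, insulation, deg=2)   # second-order polynomial fit
fit = np.polyval(coeffs, volume)

ss_res = np.sum((insulation - fit) ** 2)
ss_tot = np.sum((insulation - insulation.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print("coefficients a, b, c =", np.round(coeffs, 6))
print(f"R^2 = {r2:.3f}")
```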

22 pages, 9648 KB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 802
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning. Full article
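
The crack-parameter extraction via edge detection mentioned above could, in its simplest form, look like the OpenCV sketch below; the frame is synthesized so the example runs standalone, and the Canny thresholds and contour-based measurements are illustrative assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-in for an inspection-vehicle frame: a grey road patch with a
# dark jagged polyline acting as a "crack" (real frames would be loaded instead).
img = np.full((240, 320), 160, dtype=np.uint8)
pts = np.array([[30, 200], [90, 150], [150, 160], [230, 80], [300, 40]], np.int32)
cv2.polylines(img, [pts.reshape(-1, 1, 2)], isClosed=False, color=40, thickness=2)

blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)              # thresholds are illustrative

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    crack = max(contours, key=lambda c: cv2.arcLength(c, False))
    length_px = cv2.arcLength(crack, closed=False)
    x, y, w, h = cv2.boundingRect(crack)
    print(f"estimated crack length ~ {length_px:.0f} px, bounding box {w}x{h} px")
```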

26 pages, 10897 KB  
Article
LiDAR-Based Road Cracking Detection: Machine Learning Comparison, Intensity Normalization, and Open-Source WebGIS for Infrastructure Maintenance
by Nicole Pascucci, Donatella Dominici and Ayman Habib
Remote Sens. 2025, 17(9), 1543; https://doi.org/10.3390/rs17091543 - 26 Apr 2025
Viewed by 1566
Abstract
This study introduces an innovative and scalable approach for automated road surface assessment by integrating Mobile Mapping System (MMS)-based LiDAR data analysis with an open-source WebGIS platform. In a U.S.-based case study, over 20 datasets were collected along Interstate I-65 in West Lafayette, Indiana, using the Purdue Wheel-based Mobile Mapping System—Ultra High Accuracy (PWMMS-UHA), following Indiana Department of Transportation (INDOT) guidelines. Preprocessing included noise removal, resolution reduction to 2 cm, and ground/non-ground separation using the Cloth Simulation Filter (CSF), resulting in Bare Earth (BE), Digital Terrain Model (DTM), and Above Ground (AG) point clouds. The optimized BE layer, enriched with intensity and color information, enabled crack detection through Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Forest (RF) classification, with and without intensity normalization. DBSCAN parameter tuning was guided by silhouette scores, while model performance was evaluated using precision, recall, F1-score, and the Jaccard Index, benchmarked against reference data. Results demonstrate that RF consistently outperformed DBSCAN, particularly under intensity normalization, achieving Jaccard Index values of 94% for longitudinal and 88% for transverse cracks. A key contribution of this work is the integration of geospatial analytics into an interactive, open-source WebGIS environment—developed using Blender, QGIS, and Lizmap—to support predictive maintenance planning. Moreover, intervention thresholds were defined based on crack surface area, aligned with the Pavement Condition Index (PCI) and FHWA standards, offering a data-driven framework for infrastructure monitoring. This study emphasizes the practical advantages of comparing clustering and machine learning techniques on 3D LiDAR point clouds, both with and without intensity normalization, and proposes a replicable, computationally efficient alternative to deep learning methods, which often require extensive training datasets and high computational resources. Full article
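
A compact scikit-learn sketch of the silhouette-guided DBSCAN tuning and a Random Forest baseline evaluated with the Jaccard index is given below; the synthetic point features and labels are stand-ins for the Bare Earth point cloud, not the study's data.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import silhouette_score, jaccard_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Toy stand-in for Bare Earth points: x, y, z, normalized intensity, plus a
# binary "crack" label for the supervised baseline (low intensity = crack).
X = rng.normal(size=(2000, 4))
y = (X[:, 3] < -0.5).astype(int)

# DBSCAN: pick eps by silhouette score over a small grid.
best_eps, best_sil = None, -1.0
for eps in (0.3, 0.5, 0.8, 1.2):
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(X)
    if len(set(labels)) > 1:
        sil = silhouette_score(X, labels)
        if sil > best_sil:
            best_eps, best_sil = eps, sil
print(f"DBSCAN: best eps = {best_eps}, silhouette = {best_sil:.2f}")

# Random Forest benchmark, scored with the Jaccard index as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"RF Jaccard index = {jaccard_score(y_te, rf.predict(X_te)):.2f}")
```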