Search Results (10)

Search Parameters:
Keywords = 3D image file creation

22 pages, 3810 KB  
Article
From Digital Design to Edible Art: The Role of Additive Manufacturing in Shaping the Future of Food
by János Simon and László Gogolák
J. Manuf. Mater. Process. 2025, 9(7), 217; https://doi.org/10.3390/jmmp9070217 - 27 Jun 2025
Viewed by 1138
Abstract
Three-dimensional food printing (3DFP), a specialized application of additive manufacturing (AM), employs a layer-by-layer deposition process guided by digital image files to fabricate edible structures. Utilizing heavily modified 3D printers and Computer-Aided Design (CAD) software allows for the precise creation of customized food items tailored to individual aesthetic preferences and nutritional requirements. Three-dimensional food printing holds significant potential to revolutionize the food industry by enabling the production of personalized meals, enhancing the sensory dining experience, and addressing specific dietary constraints. Despite these promising applications, 3DFP remains one of the most intricate and technically demanding areas within AM, particularly in the context of modern gastronomy. Challenges such as the rheological behaviour of food materials, print stability, and the integration of cooking functions must be addressed to fully realize its capabilities. This article explores the possibilities of applying modified conventional 3D printers in the food industry. The behaviour of certain recipes is also tested. Two test case scenarios are covered. The first scenario examines the processing and forming of a homogenized meat mass. The second involves finding a chocolate recipe suitable for printing relatively detailed decorative chocolate elements. The current advancements, technical challenges, and future opportunities of 3DFP in the fields of engineering, culinary innovation, and nutritional science are also explored.
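Layer-by-layer deposition of the kind described above is ultimately driven by G-code toolpaths. As a rough illustration only (not the authors' actual pipeline), the following Python sketch emits extrusion moves for a square perimeter repeated over several layers; the feed rate, extrusion multiplier, and layer height are arbitrary placeholder values, not parameters from the article:

```python
def square_layers_gcode(side_mm=20.0, layer_height=1.2, n_layers=3,
                        feed=600, extrude_per_mm=0.05):
    """Emit naive G-code: one square perimeter per layer.

    All numeric parameters are illustrative placeholders, not values
    from the article. Extrusion (E) grows by a fixed amount per mm
    of travel, a simplification of real paste extrusion.
    """
    lines = ["G21 ; units: mm", "G90 ; absolute positioning"]
    e = 0.0
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    for layer in range(n_layers):
        z = layer_height * (layer + 1)
        lines.append(f"G1 Z{z:.2f} F{feed}")  # lift to the next layer
        for i in range(1, len(corners)):
            (x0, y0), (x1, y1) = corners[i - 1], corners[i]
            dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            e += dist * extrude_per_mm  # accumulate extruded material
            lines.append(f"G1 X{x1:.2f} Y{y1:.2f} E{e:.3f} F{feed}")
    return "\n".join(lines)
```

Real slicers add priming, retraction, and temperature control on top of moves like these.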

37 pages, 12112 KB  
Article
Protocol for Converting DICOM Files to STL Models Using 3D Slicer and Ultimaker Cura
by Malena Pérez-Sevilla, Fernando Rivas-Navazo, Pedro Latorre-Carmona and Darío Fernández-Zoppino
J. Pers. Med. 2025, 15(3), 118; https://doi.org/10.3390/jpm15030118 - 19 Mar 2025
Viewed by 2676
Abstract
Background/Objectives: 3D printing has become an invaluable tool in medicine, enabling the creation of precise anatomical models for surgical planning and medical education. This study presents a comprehensive protocol for converting DICOM files into three-dimensional models and their subsequent transformation into GCODE files ready for 3D printing. Methods: We employed the open-source software “3D Slicer” for the initial conversion of the DICOM files, capitalising on its robust capabilities in segmentation and medical image processing. An optimised workflow was developed for the precise and efficient conversion of medical images into STL models, ensuring high fidelity in anatomical structures. The protocol was validated through three case studies, achieving elevated structural fidelity based on deviation analysis between the STL models and the original DICOM data. Furthermore, the segmentation process preserved morphological accuracy within a narrow deviation range, ensuring the reliable replication of anatomical features for medical applications. Our protocol provides an effective and accessible approach to generating 3D anatomical models with enhanced accuracy and reproducibility. In later stages, we utilised the “Ultimaker Cura” software to generate customised GCODE files tailored to the specifications of the 3D printer. Results: Our protocol offers an effective, accessible, and more accurate solution for creating 3D anatomical models from DICOM images. Furthermore, the versatility of this approach allows for its adaptation to various 3D printers and materials, expanding its utility in the medical and scientific community. Conclusions: This study presents a robust and reproducible approach for converting medical data into physical three-dimensional objects, paving the way for a wide range of applications in personalised medicine and advanced clinical practice. 
The selection of sample datasets from the 3D Slicer repository ensures standardisation and reproducibility, allowing for independent validation of the proposed workflow without ethical or logistical constraints related to patient data access. However, we acknowledge that future work could expand upon this by incorporating real patient datasets and benchmarking the protocol against alternative segmentation methods and software packages to further assess performance across different clinical scenarios. This protocol is particularly characterised by its commitment to open-source software and low-cost solutions, making advanced 3D modelling accessible to a wider audience. By leveraging open-access tools such as “3D Slicer” and “Ultimaker Cura”, we democratise the creation of anatomical models, ensuring that institutions with limited resources can also benefit from this technology, promoting innovation and inclusivity in medical sciences and education.
(This article belongs to the Section Methodology, Drug and Device Discovery)
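The target of the conversion described above, the STL format, is simple enough to illustrate directly. The following stdlib-only Python sketch writes an ASCII STL file; it shows the file structure the protocol produces, and is not a reproduction of 3D Slicer's or Ultimaker Cura's own code:

```python
def write_ascii_stl(path, triangles, name="model"):
    """Write triangles as an ASCII STL file.

    `triangles` is a list of (v1, v2, v3) vertex triples; each vertex
    is an (x, y, z) tuple. Facet normals are computed from the vertex
    winding (right-hand rule); the STL format itself implies no units.
    """
    def normal(a, b, c):
        ux, uy, uz = (b[i] - a[i] for i in range(3))
        vx, vy, vz = (c[i] - a[i] for i in range(3))
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        mag = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
        return nx / mag, ny / mag, nz / mag

    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            n = normal(a, b, c)
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```

Segmentation tools emit thousands of such facets per anatomical surface; slicers like Ultimaker Cura then convert the mesh to G-code.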

19 pages, 3744 KB  
Article
In-House Fabrication and Validation of 3D-Printed Custom-Made Medical Devices for Planning and Simulation of Peripheral Endovascular Therapies
by Arianna Mersanne, Ruben Foresti, Chiara Martini, Cristina Caffarra Malvezzi, Giulia Rossi, Anna Fornasari, Massimo De Filippo, Antonio Freyrie and Paolo Perini
Diagnostics 2025, 15(1), 8; https://doi.org/10.3390/diagnostics15010008 - 25 Dec 2024
Cited by 2 | Viewed by 1331
Abstract
Objectives: This study aims to develop and validate a standardized methodology for creating high-fidelity, custom-made, patient-specific 3D-printed vascular models that serve as tools for preoperative planning and training in the endovascular treatment of peripheral artery disease (PAD). Methods: Ten custom-made 3D-printed vascular models were produced using computed tomography angiography (CTA) scans of ten patients diagnosed with PAD. CTA images were analyzed using Syngo.via by a specialist to formulate a medical prescription that guided the model’s creation. The CTA data were then processed in OsiriX MD to generate the .STL file, which was further refined in Meshmixer. Stereolithography (SLA) 3D printing technology was employed, utilizing either flexible or rigid materials. The dimensional accuracy of the models was evaluated by comparing their CT scan images with the corresponding patient data, using OsiriX MD. Additionally, both flexible and rigid models were evaluated by eight vascular surgeons during simulations in an in-house-designed setup, assessing both the technical aspects and operator perceptions of the simulation. Results: Each model took approximately 21.5 h to fabricate, costing €140 for flexible and €165 for rigid materials. Bland–Altman plots revealed a strong agreement between the 3D models and patient anatomy, with outliers ranging from 4.3% to 6.9%. Simulations showed that rigid models performed better in guidewire navigation and catheter stability, while flexible models offered improved transparency and lesion treatment. Surgeons confirmed the models’ realism and utility. Conclusions: The study highlights the cost-efficient, high-fidelity production of 3D-printed vascular models, emphasizing their potential to enhance training and planning in endovascular surgery.
(This article belongs to the Section Medical Imaging and Theranostics)
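The Bland–Altman agreement analysis reported in the entry above is straightforward to compute: for paired measurements it examines the per-pair differences, with limits of agreement at the mean difference ± 1.96 SD. A minimal stdlib-only sketch (illustrative only, not the authors' OsiriX-based analysis):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (bias, lower_limit, upper_limit) for paired measurements.

    Limits of agreement are bias +/- 1.96 * SD of the differences,
    the conventional 95% interval for a new paired difference.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A pair whose difference falls outside the limits counts as an outlier, which is how figures like the 4.3–6.9% above are typically obtained.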

18 pages, 11545 KB  
Article
Synthetic Training Data in AI-Driven Quality Inspection: The Significance of Camera, Lighting, and Noise Parameters
by Dominik Schraml and Gunther Notni
Sensors 2024, 24(2), 649; https://doi.org/10.3390/s24020649 - 19 Jan 2024
Cited by 3 | Viewed by 2383
Abstract
Industrial quality inspections, particularly those leveraging AI, require significant amounts of training data. In fields like injection molding, producing a multitude of defective parts for such data poses environmental and financial challenges. Synthetic training data emerge as a potential solution to address these concerns. Although the creation of realistic synthetic 2D images from 3D models of injection-molded parts involves numerous rendering parameters, the current literature on the generation and application of synthetic data in industrial quality inspection scarcely addresses the impact of these parameters on AI efficacy. In this study, we delve into some of these key parameters, such as camera position, lighting, and computational noise, to gauge their effect on AI performance. By utilizing Blender software, we procedurally introduced the “flash” defect on a 3D model sourced from a CAD file of an injection-molded part. Subsequently, with Blender’s Cycles rendering engine, we produced datasets for each parameter variation. These datasets were then used to train a pre-trained EfficientNet-V2 for the binary classification of the “flash” defect. Our results indicate that while noise is less critical, using a range of noise levels in training can benefit model adaptability and efficiency. Variability in camera positioning and lighting conditions was found to be more significant, enhancing model performance even when real-world conditions mirror the controlled synthetic environment. These findings suggest that incorporating diverse lighting and camera dynamics is beneficial for AI applications, regardless of the consistency in real-world operational settings.
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
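Of the parameters studied above, noise is the easiest to mimic outside a renderer. As a hedged stand-in for Cycles' computational noise (the default sigma and the nested-list image representation are placeholders of my choosing, not the article's setup), zero-mean Gaussian noise can be added to a grayscale image like so:

```python
import random

def add_gaussian_noise(image, sigma=5.0, seed=None):
    """Add zero-mean Gaussian noise to a grayscale image (nested lists).

    Sigma is an illustrative default; the article varies noise as one
    of several rendering parameters. Output values are clipped to the
    valid 0-255 range for 8-bit images.
    """
    rng = random.Random(seed)  # seedable for reproducible datasets
    return [[min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in row]
            for row in image]
```

Training on several sigma values is one way to realize the "range of noise levels" the authors found beneficial.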

13 pages, 6527 KB  
Article
Cohesive Zone Modeling of Pull-Out Test for Dental Fiber–Silicone Polymer
by Ayman M. Maqableh and Muhanad M. Hatamleh
Polymers 2023, 15(18), 3668; https://doi.org/10.3390/polym15183668 - 6 Sep 2023
Cited by 2 | Viewed by 2093
Abstract
Background: Several analytical methods for the fiber pull-out test have been developed to evaluate the bond strength of fiber–matrix systems. We aimed to investigate the debonding mechanism of a fiber–silicone pull-out specimen and validate the experimental data using 3D-FEM and a cohesive element approach. Methods: A 3D model of a fiber–silicone pull-out testing specimen was established by pre-processing CT images of the typical specimen. The materials on the scans were posted in three different cross-sectional views using ScanIP and imported to ScanFE in which 3D generation was implemented for all of the image slices. This file was exported in FEA format and was imported in the FEA software (PATRAN/ABAQUS, version r2) for generating solid mesh, boundary conditions, and material properties attribution, as well as load case creation and data processing. Results: The FEM cohesive zone pull-out force versus displacement curve showed an initial linear response. The Von Mises stress concentration was distributed along the fiber–silicone interface. The damage in the principal stresses’ directions S11, S22, and S33, which represented the maximum possible magnitude of tensile and compressive stress at the fiber–silicone interface, showed that the stress is higher in the direction S33 (stress acting in the Z-direction) in which the lower damage criterion was higher as well when compared to S11 (stress acting in the XY plane) and S23 (stress acting in the YZ plane). Conclusions: The comparison between the experimental values and the results from the finite element simulations show that the proposed cohesive zone model accurately reproduces the experimental results. These results are considered almost identical to the experimental observations about the interface. 
The cohesive element approach is based on a potential function that takes shear effects into account, and it offers many advantages related to its ability to predict the initiation and progress of fiber–silicone debonding during pull-out tests. A disadvantage of this approach is the computational effort required for the simulation and analysis process. A good understanding of the parameters governing the cohesive laws is essential for a successful simulation.
(This article belongs to the Special Issue Organic-Inorganic Hybrid Materials III)

22 pages, 9681 KB  
Article
A Specialized Database for Autonomous Vehicles Based on the KITTI Vision Benchmark
by Juan I. Ortega-Gomez, Luis A. Morales-Hernandez and Irving A. Cruz-Albarran
Electronics 2023, 12(14), 3165; https://doi.org/10.3390/electronics12143165 - 21 Jul 2023
Cited by 7 | Viewed by 2817
Abstract
Autonomous driving systems have emerged with the promise of preventing accidents. The first critical aspect of these systems is perception, where the regular practice is to use top-view point clouds as the input; however, the existing databases in this area only present scenes with 3D point clouds and their respective labels. This leaves a gap, and the objective of this work is to present a database with scenes directly in the top view and their labels in the respective plane, as well as a segmentation map for each scene as a label for segmentation work. The method used to create the proposed database is presented; this covers how to transform 3D point clouds into 2D top-view images, how the detection labels in the plane are generated, and how to implement a neural network for the generated segmentation maps of each scene. Using this method, a database was developed with 7481 scenes, each with its corresponding top-view image, label file, and segmentation map, where the road segmentation metrics are as follows: F1, 95.77; AP, 92.54; ACC, 97.53; PRE, 94.34; and REC, 97.25. This article presents the development of a database for segmentation and detection assignments, highlighting its particular use for environmental perception work.
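The 3D-to-top-view transformation described above amounts to an orthographic projection of each point onto a discretized ground-plane grid. A hedged sketch follows; the grid extents and cell size are invented illustrative values, not the KITTI-derived parameters the authors use:

```python
def points_to_topview(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                      cell=0.1):
    """Project 3D points (x forward, y left, z up) onto a 2D grid.

    Ranges and cell size are illustrative placeholders. Returns a dict
    mapping (row, col) -> maximum height in that cell, a sparse
    stand-in for a height-encoded top-view image.
    """
    grid = {}
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue  # discard points outside the mapped area
        row = int((x - x_range[0]) / cell)
        col = int((y - y_range[0]) / cell)
        grid[(row, col)] = max(grid.get((row, col), float("-inf")), z)
    return grid
```

Encoding the per-cell maximum height (and often density and intensity as extra channels) is a common way to rasterize LiDAR for 2D networks.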

24 pages, 10604 KB  
Article
Point-Cloud Segmentation for 3D Edge Detection and Vectorization
by Thodoris Betsas and Andreas Georgopoulos
Heritage 2022, 5(4), 4037-4060; https://doi.org/10.3390/heritage5040208 - 9 Dec 2022
Cited by 4 | Viewed by 6725
Abstract
The creation of 2D–3D architectural vector drawings constitutes a manual, labor-intensive process. The scientific community has not yet provided an automated approach for the production of 2D–3D architectural drawings of cultural-heritage objects, despite the undoubted need of many scientific fields. This paper presents an automated method which addresses the problem of detecting 3D edges in point clouds by leveraging a set of RGB images and their 2D edge maps. More concretely, once the 2D edge maps have been produced using manual, semi-automated or automated methods, the RGB images are enriched with an extra channel containing the edge semantic information corresponding to each RGB image. The four-channel images are fed into Structure from Motion–Multi View Stereo (SfM-MVS) software and a semantically enriched dense point cloud is produced. Then, using the semantically enriched dense point cloud, the points belonging to a 3D edge are isolated from all the others based on their label value. The detected 3D edge points are decomposed into sets of points belonging to each edge and fed into the 3D vectorization procedure. Finally, the 3D vectors are saved into a “.dxf” file. The previously described steps constitute the 3DPlan software, which is available on GitHub. The efficiency of the proposed software was evaluated on real-world data of cultural-heritage assets.
(This article belongs to the Special Issue 3D Virtual Reconstruction and Visualization of Complex Architectures)
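The channel-enrichment step in the entry above is conceptually simple: append the edge map as a fourth band alongside R, G, B. A hedged, list-based Python sketch (a real pipeline would use NumPy or an imaging library; the 0/255 label convention is an assumption, not taken from the paper):

```python
def stack_edge_channel(rgb, edges):
    """Combine an H x W x 3 image with an H x W edge map into H x W x 4.

    `rgb` is a nested list of [r, g, b] pixels; `edges` holds one edge
    label per pixel (e.g. 0 = background, 255 = edge). Dimensions of
    the two inputs must match exactly.
    """
    if len(rgb) != len(edges) or any(len(r) != len(e) for r, e in zip(rgb, edges)):
        raise ValueError("image and edge map dimensions differ")
    return [[pixel + [label] for pixel, label in zip(row, erow)]
            for row, erow in zip(rgb, edges)]
```

Because SfM-MVS carries per-pixel values through to the dense cloud, each reconstructed point then inherits its edge label from this fourth channel.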

20 pages, 8133 KB  
Article
Geological-Geomorphological and Paleontological Heritage in the Algarve (Portugal) Applied to Geotourism and Geoeducation
by Antonio Martínez-Graña, Paulo Legoinha, José Luis Goy, José Angel González-Delgado, Ildefonso Armenteros, Cristino Dabrio and Caridad Zazo
Land 2021, 10(9), 918; https://doi.org/10.3390/land10090918 - 31 Aug 2021
Cited by 6 | Viewed by 7080
Abstract
A 3D virtual geological route on Digital Earth of the geological-geomorphological and paleontological heritage in the Algarve (Portugal) is presented, assessing the geological heritage of nine representative geosites. Eighteen quantitative parameters are used, weighing the scientific, didactic and cultural-tourist interest of each site. A virtual route has been created in Google Earth, with overlaid georeferenced cartographies, as a field guide for students to participate and improve their learning. This free application allows loading thematic georeferenced information that has previously been evaluated by means of a series of parameters identifying the importance and interest of a geosite (scientific, educational and/or tourist). The virtual route allows travelling from one geosite to another, interacting in real time from portable devices (e.g., smartphones and tablets), thus making it possible to observe the relief and the spatial geological distribution with representative images, as well as to access files with the description and analysis of each geosite. By using a field guide, each geosite is complemented with activities for carrying out and evaluating what has been learned; these resources allow a teaching–learning process in which the student is an active part of the development and creation of content, using new technologies that provide more entertaining and educational learning, teamwork and interaction with social networks. This itinerary fosters attitudes and skills that embrace geoconservation as an element of sustainable development.
(This article belongs to the Section Landscape Archaeology)
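A Google Earth virtual route like the one above ultimately reduces to a KML file of placemarks. A minimal stdlib sketch that writes one placemark per geosite; the example name and coordinates are invented for illustration and are not the article's geosites:

```python
import xml.etree.ElementTree as ET

def geosites_to_kml(geosites):
    """Build a KML document with one Placemark per (name, lon, lat) tuple.

    KML expects coordinates in 'lon,lat[,alt]' order. Returns the
    serialized XML as a string; saved with a .kml extension, it can be
    opened directly in Google Earth.
    """
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat in geosites:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")
```

Descriptions, images, and overlaid cartography are added the same way, via `description` elements and `GroundOverlay` features.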

14 pages, 10078 KB  
Article
Multi-View 3D Integral Imaging Systems Using Projectors and Mobile Devices
by Nikolai Petrov, Maksim Khromov and Yuri Sokolov
Photonics 2021, 8(8), 331; https://doi.org/10.3390/photonics8080331 - 13 Aug 2021
Cited by 3 | Viewed by 4303
Abstract
Glassless 3D displays using projectors and mobile phones, based on integral imaging technology, have been developed. Three-dimensional image files are created from 2D images captured by a conventional camera. Large 3D images with a viewing angle of 35 degrees and a large depth are created using four HD and Ultra HD 4K projectors. Three-dimensional images are demonstrated using optimized lenticular lenses and mobile smartphones, such as LG and Samsung models with a resolution of 2560 × 1440, and a 4K Sony with a resolution of 3840 × 2160.
(This article belongs to the Special Issue Holography)

27 pages, 2203 KB  
Article
Unmanned Aerial Vehicle (UAV) for Monitoring Soil Erosion in Morocco
by Sebastian D'Oleire-Oltmanns, Irene Marzolff, Klaus Daniel Peter and Johannes B. Ries
Remote Sens. 2012, 4(11), 3390-3416; https://doi.org/10.3390/rs4113390 - 7 Nov 2012
Cited by 458 | Viewed by 35543
Abstract
This article presents an environmental remote sensing application using a UAV that is specifically aimed at reducing the data gap between field scale and satellite scale in soil erosion monitoring in Morocco. A fixed-wing aircraft of type Sirius I (MAVinci, Germany), equipped with a digital system camera (Panasonic), is employed. UAV surveys are conducted over different study sites with varying extents and flying heights in order to provide both very high resolution site-specific data and lower-resolution overviews, thus fully exploiting the large potential of the chosen UAV for multi-scale mapping purposes. Depending on the scale and area coverage, two different approaches for georeferencing are used, based on high-precision GCPs or on the UAV’s log file with exterior orientation values, respectively. The photogrammetric image processing enables the creation of Digital Terrain Models (DTMs) and ortho-image mosaics with very high resolution at a sub-decimetre level. The created data products were used for quantifying gully and badland erosion in 2D and 3D, as well as for the analysis of the surrounding areas and landscape development over larger extents.
(This article belongs to the Special Issue Unmanned Aerial Vehicles (UAVs) based Remote Sensing)
