Search Results (62)

Search Parameters:
Keywords = 3D human shape reconstruction

20 pages, 6720 KB  
Article
UBSP-Net: Underclothing Body Shape Perception Network for Parametric 3D Human Reconstruction
by Xihang Li, Xianguo Cheng, Fang Chen, Furui Shi and Ming Li
Electronics 2025, 14(17), 3522; https://doi.org/10.3390/electronics14173522 - 3 Sep 2025
Viewed by 399
Abstract
This paper introduces a novel Underclothing Body Shape Perception Network (UBSP-Net) for reconstructing parametric 3D human models from clothed full-body 3D scans, addressing the challenge of estimating body shape and pose beneath clothing. Our approach simultaneously predicts both the internal body point cloud and a reference point cloud for the Skinned Multi-Person Linear (SMPL) model, with point-to-point correspondence, leveraging the external scan as an initial approximation to enhance the model’s stability and computational efficiency. By learning point offsets and incorporating body part label probabilities, the network achieves accurate internal body shape inference, enabling reliable SMPL human body model registration. Furthermore, we optimize the SMPL+D human model parameters to reconstruct the clothed human model, accommodating common clothing types such as T-shirts, shirts, and pants. Evaluated on the CAPE dataset, our method outperforms mainstream approaches, achieving significantly lower Chamfer distance errors and faster inference times. The proposed automated pipeline ensures accurate and efficient reconstruction, even with sparse or incomplete scans, and demonstrates robustness on real-world Thuman2.0 dataset scans. This work advances parametric human modeling by providing a scalable and privacy-preserving solution for applications in 3D shape analysis, virtual try-on, and animation.
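The CAPE evaluation above is reported in terms of Chamfer distance. As an illustrative sketch (not the authors' code), a symmetric Chamfer distance between two (N, 3) point clouds can be computed as:

```python
# Illustrative sketch, not the paper's implementation: symmetric Chamfer
# distance between two point clouds, the metric UBSP-Net is evaluated
# with on the CAPE dataset. Inputs are (N, 3) NumPy arrays.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of mean nearest-neighbour distances in both directions."""
    # Pairwise squared distances, shape (len(a), len(b)).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(1)).mean() + np.sqrt(d2.min(0)).mean())

# Identical clouds have zero Chamfer distance.
pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # 0.0
```

This brute-force form is quadratic in the number of points; practical evaluation code typically uses a KD-tree or GPU nearest-neighbour search instead.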

19 pages, 3207 KB  
Article
Pose-Driven Body Shape Prediction Algorithm Based on the Conditional GAN
by Jiwon Jang, Jiseong Byeon, Daewon Jung, Jihun Chang and Sekyoung Youm
Appl. Sci. 2025, 15(14), 7643; https://doi.org/10.3390/app15147643 - 8 Jul 2025
Viewed by 577
Abstract
Reconstructing accurate human body shapes from clothed images remains a challenge due to occlusion by garments and limitations of the existing methods. Traditional parametric models often require minimal clothing and involve high computational costs. To address these issues, we propose a lightweight algorithm that predicts body shape from clothed RGB images by leveraging pose estimation. Our method simultaneously extracts major joint positions and body features to reconstruct complete 3D body shapes, even in regions hidden by clothing or obscured from view. This approach enables real-time, non-invasive body modeling suitable for practical applications.
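For orientation, the conditional GAN at the core of such pose-conditioned generators optimizes the standard two-player minimax objective (textbook form; the paper's exact losses may differ):

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x \mid y)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z \mid y) \mid y)\big)\big]
```

where $y$ is the conditioning signal (here, the estimated pose), $G$ maps noise $z$ and $y$ to a body-shape prediction, and $D$ scores real versus generated samples given $y$.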

24 pages, 2032 KB  
Article
ViT-Based Classification and Self-Supervised 3D Human Mesh Generation from NIR Single-Pixel Imaging
by Carlos Osorio Quero, Daniel Durini and Jose Martinez-Carranza
Appl. Sci. 2025, 15(11), 6138; https://doi.org/10.3390/app15116138 - 29 May 2025
Viewed by 779
Abstract
Accurately estimating 3D human pose and body shape from a single monocular image remains challenging, especially under poor lighting or occlusions. Traditional RGB-based methods struggle in such conditions, whereas single-pixel imaging (SPI) in the Near-Infrared (NIR) spectrum offers a robust alternative. NIR penetrates clothing and adapts to illumination changes, enhancing body shape and pose estimation. This work explores an SPI camera (850–1550 nm) with Time-of-Flight (TOF) technology for human detection in low-light conditions. SPI-derived point clouds are processed using a Vision Transformer (ViT) to align poses with a predefined SMPL-X model. A self-supervised PointNet++ network estimates global rotation, translation, body shape, and pose, enabling precise 3D human mesh reconstruction. Laboratory experiments simulating night-time conditions validate NIR-SPI’s potential for real-world applications, including human detection in rescue missions.
(This article belongs to the Special Issue Single-Pixel Intelligent Imaging and Recognition)

21 pages, 10971 KB  
Article
A Deep Learning Approach to Assist in Pottery Reconstruction from Its Sherds
by Matheus Ferreira Coelho Pinho, Guilherme Lucio Abelha Mota and Gilson Alexandre Ostwald Pedro da Costa
Heritage 2025, 8(5), 167; https://doi.org/10.3390/heritage8050167 - 8 May 2025
Viewed by 936
Abstract
Pottery is one of the most common and abundant classes of material remains found in archaeological contexts. The analysis of archaeological pottery involves the reconstruction of pottery vessels from their sherds, which is a laborious and repetitive task. In this work, we investigate a deep learning-based approach to make that process more efficient and accurate. Given a sherd’s digital point cloud in a standard, so-called canonical position, the proposed method predicts the geometric transformation that moves the sherd to its expected normalized position relative to the vessel’s coordinate system. Among the main components of the proposed method, a pair of deep 1D convolutional neural networks trained to predict the 3D Euclidean transformation parameters stands out. Rotation and translation are treated as independent problems: one network predicts the translation parameters, while the other infers the rotation parameters. In practical applications, once a vessel’s shape is identified, the networks can be trained to predict the target transformation parameter values. Thus, given a 3D model of a complete vessel, it may be virtually broken down countless times to produce sufficient data for deep neural network training. In addition to overcoming the scarcity of real sherd data, this procedure provides, for each virtual sherd in its original position, paired canonical and normalized point clouds as well as the target Euclidean transformation. The proposed 1D convolutional neural network architecture, PotNet, was inspired by PointNet; while PointNet was designed for 3D point cloud classification and segmentation, PotNet performs non-linear regression. The method provides an initial estimate of the correct position of a sherd, reducing the complexity of fitting candidate pairs of sherds, which can then be refined by a classical adjustment method such as ICP. Experiments using three distinct real vessels were carried out, and the reported results suggest that the proposed method can successfully aid pottery reconstruction.
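The networks' outputs feed a rigid (Euclidean) transform that moves a sherd from its canonical to its normalized position. A hedged sketch of that final placement step follows; the function names and the XYZ Euler-angle convention are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of applying a predicted rotation (Euler angles,
# radians) and translation to a sherd point cloud. Convention assumed
# here: R = Rz @ Ry @ Rx, row-vector points of shape (N, 3).
import numpy as np

def euler_to_matrix(rx: float, ry: float, rz: float) -> np.ndarray:
    """Rotation matrix from XYZ Euler angles."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def place_sherd(points: np.ndarray, rotation, translation) -> np.ndarray:
    """Apply the predicted Euclidean transform to an (N, 3) point cloud."""
    R = euler_to_matrix(*rotation)
    return points @ R.T + translation
```

The placed cloud would then serve as the initial guess for pairwise ICP refinement.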

15 pages, 686 KB  
Article
IDNet: A Diffusion Model-Enhanced Framework for Accurate Cranio-Maxillofacial Bone Defect Repair
by Xueqin Ji, Wensheng Wang, Xiaobiao Zhang and Xinrong Chen
Bioengineering 2025, 12(4), 407; https://doi.org/10.3390/bioengineering12040407 - 11 Apr 2025
Viewed by 779
Abstract
Cranio-maxillofacial bone defect repair poses significant challenges in oral and maxillofacial surgery due to the complex anatomy of the region and its substantial impact on patients’ physiological function, aesthetic appearance, and quality of life. Inaccurate reconstruction can result in serious complications, including functional impairment and psychological trauma. Traditional methods have notable limitations for complex defects, underscoring the need for advanced computational approaches to achieve high-precision personalized reconstruction. This study presents the Internal Diffusion Network (IDNet), a novel framework that integrates a diffusion model into a standard U-shaped network to extract valuable information from input data and produce high-resolution representations for 3D medical segmentation. A Step-Uncertainty Fusion module was designed to enhance prediction robustness by combining diffusion model outputs at each inference step. The model was evaluated on a dataset consisting of 125 normal human skull 3D reconstructions and 2625 simulated cranio-maxillofacial bone defects. Quantitative evaluation revealed that IDNet outperformed mainstream methods, including UNETR and 3D U-Net, across key metrics: Dice Similarity Coefficient (DSC), True Positive Rate (RECALL), and 95th percentile Hausdorff Distance (HD95). The approach achieved an average DSC of 0.8140, RECALL of 0.8554, and HD95 of 4.35 mm across seven defect types, substantially surpassing comparison methods. This study demonstrates the significant performance advantages of diffusion model-based approaches in cranio-maxillofacial bone defect repair, with potential implications for increasing repair surgery success rates and patient satisfaction in clinical applications.
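Of the reported metrics, the Dice Similarity Coefficient measures voxel overlap between predicted and ground-truth masks. A minimal sketch for binary 3D volumes (illustrative only, not the authors' evaluation code):

```python
# Illustrative Dice Similarity Coefficient (DSC) for binary 3D
# segmentation volumes: DSC = 2|P ∩ T| / (|P| + |T|).
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """DSC over boolean voxel grids; eps guards the empty-mask case."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))
```

A DSC of 1.0 means perfect overlap; the reported average of 0.8140 would indicate strong but not exact agreement with the ground-truth defect repair.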

37 pages, 8385 KB  
Article
Reconstruction of Effective Cross-Sections from DEMs and Water Surface Elevation
by Isadora Rezende, Christophe Fatras, Hind Oubanas, Igor Gejadze, Pierre-Olivier Malaterre, Santiago Peña-Luque and Alessio Domeneghetti
Remote Sens. 2025, 17(6), 1020; https://doi.org/10.3390/rs17061020 - 14 Mar 2025
Cited by 2 | Viewed by 1064
Abstract
Knowledge of river bathymetry is crucial for accurately simulating river flows and floodplain inundation. However, field data are scarce, and the depth and shape of the river channels cannot be systematically observed via remote sensing. Therefore, an efficient methodology is necessary to define effective river bathymetry. This research reconstructs the bathymetry from existing global digital elevation models (DEMs) and water surface elevation observations with minimum human intervention. The methodology can be considered a 1D geometric inverse problem, and it can potentially be used in gauged or ungauged basins worldwide. Nine global DEMs and two sources of water surface elevation (in situ and remotely sensed) were analyzed across two study areas. Results highlighted the importance of preprocessing cross-sections to align with water surface elevations, significantly improving discharge estimates. Among the techniques tested, one that combines the slope-break concept with the principles of mass conservation consistently provided robust discharge estimates for the different DEMs, achieving good performance in both study areas. Copernicus and FABDEM emerged as the most reliable DEMs for accurately representing river geometry. Overall, the proposed methodology offers a scalable and efficient solution for cross-section reconstruction, supporting global hydraulic modeling in data-scarce regions.
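Linking a reconstructed cross-section to discharge typically goes through a uniform-flow law; a standard choice (given here as background, not necessarily the authors' hydraulic model) is Manning's equation:

```latex
Q = \frac{1}{n}\, A\, R^{2/3}\, S^{1/2}, \qquad R = \frac{A}{P}
```

where $Q$ is discharge, $A$ the cross-sectional flow area bounded above by the observed water surface elevation, $P$ the wetted perimeter, $R$ the hydraulic radius, $S$ the friction slope, and $n$ Manning's roughness coefficient. Errors in the reconstructed geometry propagate to $Q$ mainly through $A$ and $R$.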

17 pages, 3417 KB  
Article
TransSMPL: Efficient Human Pose Estimation with Pruned and Quantized Transformer Networks
by Yeonggwang Kim, Hyeongjun Yoo, Je-Ho Ryu, Seungjoo Lee, Jong Hun Lee and Jinsul Kim
Electronics 2024, 13(24), 4980; https://doi.org/10.3390/electronics13244980 - 18 Dec 2024
Cited by 2 | Viewed by 1749
Abstract
Existing Transformer-based models for 3D human pose and shape estimation often struggle with computational complexity, particularly when handling high-resolution feature maps. These challenges limit their ability to efficiently utilize fine-grained features, leading to suboptimal performance in accurate body reconstruction. In this work, we propose TransSMPL, a novel Transformer framework built upon the SMPL model, specifically designed to address the challenges of computational complexity and inefficient utilization of high-resolution feature maps in 3D human pose and shape estimation. By replacing HRNet with MobileNetV3 for lightweight feature extraction, applying pruning and quantization techniques, and incorporating an early exit mechanism, TransSMPL significantly reduces both computational cost and memory usage. TransSMPL introduces two key innovations: (1) a multi-scale attention mechanism, reduced from four scales to two, allowing for more efficient global and local feature integration, and (2) a confidence-based early exit strategy, which enables the model to halt further computations when high-confidence predictions are achieved, further enhancing efficiency. Extensive pruning and dynamic quantization are also applied to reduce the model size while maintaining competitive performance. Quantitative and qualitative experiments on the Human3.6M dataset demonstrate the efficacy of TransSMPL. Our model achieves a Mean Per Joint Position Error (MPJPE) of 48.5 mm, reducing the model size by over 16% compared to existing methods while maintaining a similar level of accuracy.
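MPJPE, the metric reported above, is simply the mean Euclidean distance between predicted and ground-truth joint positions in a common frame. A minimal sketch (illustrative, not the authors' evaluation code):

```python
# Illustrative MPJPE (Mean Per Joint Position Error): mean Euclidean
# distance between predicted and ground-truth joints, in the input
# units (Human3.6M results are conventionally reported in mm).
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: (num_joints, 3) arrays of 3D joint positions."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

Published protocols often first align the root joint (or apply Procrustes alignment, "PA-MPJPE") before computing this distance.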
(This article belongs to the Special Issue Trustworthy Artificial Intelligence in Cyber-Physical Systems)

14 pages, 4843 KB  
Article
Enhanced Multi-Scale Attention-Driven 3D Human Reconstruction from Single Image
by Yong Ren, Mingquan Zhou, Pengbo Zhou, Shibo Wang, Yangyang Liu, Guohua Geng, Kang Li and Xin Cao
Electronics 2024, 13(21), 4264; https://doi.org/10.3390/electronics13214264 - 30 Oct 2024
Cited by 1 | Viewed by 2040
Abstract
Due to the inherent limitations of a single viewpoint, reconstructing 3D human meshes from a single image has long been a challenging task. While deep learning networks enable us to approximate the shape of unseen sides, capturing the texture details of the non-visible side remains difficult with just one image. Traditional methods utilize Generative Adversarial Networks (GANs) to predict the normal maps of the non-visible side, thereby inferring detailed textures and wrinkles on the model’s surface. However, we have identified challenges with existing normal prediction networks when dealing with complex scenes, such as a lack of focus on local features and insufficient modeling of spatial relationships. To address these challenges, we introduce EMAR (Enhanced Multi-Scale Attention-Driven Single-Image 3D Human Reconstruction). This approach incorporates a novel Enhanced Multi-Scale Attention (EMSA) mechanism, which excels at capturing intricate features and global relationships in complex scenes. EMSA surpasses traditional single-scale attention mechanisms by adaptively adjusting the weights between features, enabling the network to more effectively leverage information across various scales. Furthermore, we have improved the feature fusion method to better integrate representations from different scales. This enhanced feature fusion allows the network to more comprehensively understand both fine details and global structures within the image. Finally, we have designed a hybrid loss function tailored to the introduced attention mechanism and feature fusion method, optimizing the network’s training process and enhancing the quality of reconstruction results. Our network demonstrates significant improvements in performance for 3D human model reconstruction. Experimental results show that our method exhibits greater robustness to challenging poses compared to traditional single-scale approaches.

13 pages, 698 KB  
Systematic Review
Three-Dimensional Scaffolds Designed and Printed Using CAD/CAM Technology: A Systematic Review
by Beatriz Pardal-Peláez, Cristina Gómez-Polo, Javier Flores-Fraile, Norberto Quispe-López, Ildefonso Serrano-Belmonte and Javier Montero
Appl. Sci. 2024, 14(21), 9877; https://doi.org/10.3390/app14219877 - 29 Oct 2024
Cited by 3 | Viewed by 1561
Abstract
The objective of this work is to review the literature on the use of three-dimensional scaffolds obtained by printing for the regeneration of bone defects in the maxillofacial area. The research question asked was: what clinical experiences exist on the use of bone biomaterials manufactured by CAD/CAM in the maxillofacial area? Prospective and retrospective studies and randomized clinical trials in humans with a reconstruction area in the maxillofacial and intraoral region were included. The articles had to obtain scaffolds for bone reconstruction that were designed by computer processing and printed in different materials. Clinical cases, case series, in vitro studies and those not performed in humans were excluded. Six clinical studies met the established inclusion criteria. The selected studies showed heterogeneity in their objectives, materials used and types of regenerated bone defects. A high survival rate was found for dental implants placed on 3D-printed scaffolds, with rates ranging from 94.3% to 98%. The materials used included polycaprolactone, coral-derived hydroxyapatite, biphasic calcium phosphate (BCP) and bioceramics. CAD/CAM technology is seen as key to accommodating variations in the shapes and requirements of different tissues and in size between individuals. Furthermore, the possibility of using the patient’s own stem cells could revolutionize how bone defects are currently treated in oral surgery. The results indicate a high survival rate of dental implants placed on 3D-printed scaffolds, suggesting the potential of this technology for bone regeneration in the maxillofacial area.
(This article belongs to the Special Issue Recent Advances in 3D Printing and Additive Manufacturing Technology)

22 pages, 7012 KB  
Article
A Multi-View Real-Time Approach for Rapid Point Cloud Acquisition and Reconstruction in Goats
by Yi Sun, Qifeng Li, Weihong Ma, Mingyu Li, Anne De La Torre, Simon X. Yang and Chunjiang Zhao
Agriculture 2024, 14(10), 1785; https://doi.org/10.3390/agriculture14101785 - 11 Oct 2024
Cited by 1 | Viewed by 1285
Abstract
The body size, shape, weight, and scoring of goats are crucial indicators for assessing their growth, health, and meat production. The application of computer vision technology to measure these parameters is becoming increasingly prevalent. However, in real farm environments, obstacles such as fences, ground conditions, and dust pose significant challenges for obtaining accurate goat point cloud data. These obstacles lead to difficulties in rapid data extraction and result in incomplete reconstructions, causing substantial measurement errors. To address these challenges, we developed a system for real-time, non-contact acquisition, extraction, and reconstruction of goat point clouds using three depth cameras. The system operates in a scenario where goats walk naturally through a designated channel, and bidirectional distributed triggering logic is employed to ensure real-time acquisition of the point cloud. We also designed a noise recognition and filtering method tailored to handle complex environmental interference found on farms, enabling automatic extraction of the goat point cloud. Furthermore, a distributed point cloud completion algorithm was developed to reconstruct missing sections of the goat point cloud caused by unavoidable factors such as railings and dust. Measurements of body height, body slant length, and chest circumference were calculated separately, with deviations of no more than 25 mm and an average error of 3.1%. The system processes each goat in an average time of 3–5 s. This method provides rapid and accurate extraction and complementary reconstruction of 3D point clouds of goats in motion on real farms, without human intervention. It offers a valuable technological solution for non-contact monitoring and evaluation of goat body size, weight, shape, and appearance.
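The paper's noise-recognition step is tailored to farm interference such as dust and railings; a generic statistical outlier filter of the same family (an assumption-laden sketch, not the authors' method) keeps a point when its mean distance to its k nearest neighbours stays within a few standard deviations of the population average:

```python
# Illustrative statistical outlier removal for a small (N, 3) point
# cloud: brute-force k-nearest-neighbour distances, then a mean +
# n_std * std threshold. Parameters k and n_std are assumptions.
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, n_std: float = 2.0) -> np.ndarray:
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Mean distance to the k nearest neighbours (index 0 is the point itself).
    knn = np.sqrt(np.sort(d2, axis=1)[:, 1:k + 1]).mean(axis=1)
    keep = knn <= knn.mean() + n_std * knn.std()
    return points[keep]
```

Production pipelines would use a KD-tree (e.g. the equivalent filter in Open3D) rather than the quadratic distance matrix shown here.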

23 pages, 76553 KB  
Article
3DRecNet: A 3D Reconstruction Network with Dual Attention and Human-Inspired Memory
by Muhammad Awais Shoukat, Allah Bux Sargano, Lihua You and Zulfiqar Habib
Electronics 2024, 13(17), 3391; https://doi.org/10.3390/electronics13173391 - 26 Aug 2024
Viewed by 1450
Abstract
Humans inherently perceive 3D scenes using prior knowledge and visual perception, but 3D reconstruction in computer graphics is challenging due to complex object geometries, noisy backgrounds, and occlusions, leading to high time and space complexity. To address these challenges, this study introduces 3DRecNet, a compact 3D reconstruction architecture optimized for both efficiency and accuracy through five key modules. The first module, the Human-Inspired Memory Network (HIMNet), is designed for initial point cloud estimation, assisting in identifying and localizing objects in occluded and complex regions while preserving critical spatial information. Next, separate image and 3D encoders perform feature extraction from input images and initial point clouds. These features are combined using a dual attention-based feature fusion module, which emphasizes features from the image branch over those from the 3D encoding branch. This approach ensures independence from proposals at inference time and filters out irrelevant information, leading to more accurate and detailed reconstructions. Finally, a decoder branch transforms the fused features into a 3D representation. The integration of attention-based fusion with the memory network in 3DRecNet significantly enhances the overall reconstruction process. Experimental results on benchmark datasets such as ShapeNet, ObjectNet3D, and Pix3D demonstrate that 3DRecNet outperforms existing methods.
(This article belongs to the Special Issue New Trends in Computer Vision and Image Processing)

13 pages, 2219 KB  
Article
Utilizing Artificial Neural Networks for Geometric Bone Model Reconstruction in Mandibular Prognathism Patients
by Jelena Mitić, Nikola Vitković, Miroslav Trajanović, Filip Górski, Ancuţa Păcurar, Cristina Borzan, Emilia Sabău and Răzvan Păcurar
Mathematics 2024, 12(10), 1577; https://doi.org/10.3390/math12101577 - 18 May 2024
Cited by 4 | Viewed by 1379
Abstract
Patient-specific 3D models of the human mandible are finding increasing utility in medical fields such as oral and maxillofacial surgery, orthodontics, dentistry, and forensic sciences. The efficient creation of personalized 3D bone models poses a key challenge in these applications. Existing solutions often rely on 3D statistical models of human bone, which offer advantages in rapid bone geometry adaptation and flexibility by capturing a range of anatomical variations, but at the cost of reduced precision in representing specific shapes. Considering this, the proposed parametric model allows for precise manipulation using morphometric parameters acquired from medical images. This paper highlights the significance of employing the parametric model in the creation of a personalized bone model, exemplified through a case study targeting mandibular prognathism reconstruction. The personalized model is described as a 3D point cloud generated by a series of parametric functions, which are determined by applying geometric morphometrics, morphology properties, and artificial neural networks to an input dataset of human mandible samples. With 95.05% of the personalized model’s surface area showing deviations within −1.00 to 1.00 mm relative to the input polygonal model, and a maximum deviation of 2.52 mm, this research accentuates the benefits of the parametric approach, particularly in the preoperative planning of mandibular deformity surgeries.

18 pages, 10168 KB  
Article
Single-Image-Based 3D Reconstruction of Endoscopic Images
by Bilal Ahmad, Pål Anders Floor, Ivar Farup and Casper Find Andersen
J. Imaging 2024, 10(4), 82; https://doi.org/10.3390/jimaging10040082 - 28 Mar 2024
Cited by 6 | Viewed by 7901
Abstract
A wireless capsule endoscope (WCE) is a medical device designed for the examination of the human gastrointestinal (GI) tract. Three-dimensional models based on WCE images can assist in diagnostics by effectively detecting pathology. These 3D models provide gastroenterologists with improved visualization, particularly in areas of specific interest. However, the constraints of WCE, such as a lack of controllability and the need for expensive, often unavailable operating equipment, pose significant challenges for comprehensive experiments aimed at evaluating the quality of 3D reconstruction from WCE images. In this paper, we employ a single-image-based 3D reconstruction method on an artificial colon captured with an endoscope that behaves like a WCE. The shape from shading (SFS) algorithm can reconstruct a 3D shape from a single image, and it is therefore employed to reconstruct the 3D shapes of the colon images. The camera of the endoscope has also been subjected to comprehensive geometric and radiometric calibration. Experiments are conducted on well-defined primitive objects to assess the method’s robustness and accuracy; this evaluation compares the reconstructed 3D shapes of the primitives with ground-truth data, quantified through root-mean-square error and maximum error. Afterward, the same methodology is applied to recover the geometry of the colon. The results demonstrate that our approach can reconstruct the geometry of a colon captured with a camera with an unknown imaging pipeline and significant image noise. Finally, the same procedure is applied to WCE images, and preliminary results illustrate the applicability of our method for reconstructing 3D models from WCE images.
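Shape from shading recovers geometry from a single image via an image irradiance equation; under the common Lambertian assumption (textbook form; the paper's radiometrically calibrated model may differ) it reads:

```latex
I(x, y) = \rho \,\mathbf{n}(x, y) \cdot \mathbf{s}
```

where $I$ is the measured intensity, $\rho$ the surface albedo, $\mathbf{n}$ the unit surface normal, and $\mathbf{s}$ the unit illumination direction. The algorithm inverts this relation to estimate normals, which are then integrated into a depth map.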
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))

16 pages, 3876 KB  
Article
Three-Dimensional Reconstruction Pre-Training as a Prior to Improve Robustness to Adversarial Attacks and Spurious Correlation
by Yutaro Yamada, Fred Weiying Zhang, Yuval Kluger and Ilker Yildirim
Entropy 2024, 26(3), 258; https://doi.org/10.3390/e26030258 - 14 Mar 2024
Viewed by 2064
Abstract
Ensuring robustness of image classifiers against adversarial attacks and spurious correlation has been challenging. One of the most effective methods for adversarial robustness is a type of data augmentation that uses adversarial examples during training. Here, inspired by computational models of human vision, we explore a synthesis of this approach by leveraging a structured prior over image formation: the 3D geometry of objects and how it projects to images. We combine adversarial training with a weight initialization that implicitly encodes such a prior about 3D objects via 3D reconstruction pre-training. We evaluate our approach using two different datasets and compare it to alternative pre-training protocols that do not encode a prior about 3D shape. To systematically explore the effect of 3D pre-training, we introduce a novel dataset called Geon3D, which consists of simple shapes that nevertheless capture variation in multiple distinct dimensions of geometry. We find that while 3D reconstruction pre-training does not improve robustness for the simplest dataset setting we consider (Geon3D on a clean background), it improves upon adversarial training in more realistic conditions (Geon3D with textured backgrounds and ShapeNet). We also find that 3D pre-training coupled with adversarial training improves robustness to spurious correlations between shape and background textures. Furthermore, 3D-based pre-training outperforms 2D-based pre-training on ShapeNet. We hope that these results encourage further investigation of the benefits of structured, 3D-based models of vision for adversarial robustness.
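Adversarial training augments each batch with perturbed inputs. As a minimal, hypothetical sketch (a NumPy logistic-regression stand-in, not the paper's classifiers), the fast gradient sign method (FGSM) generates such examples by stepping along the sign of the input gradient of the loss:

```python
# Illustrative FGSM step against a binary logistic model with weights w
# and bias b, label y in {-1, +1}, loss L = log(1 + exp(-y * (w.x + b))).
import numpy as np

def fgsm(x: np.ndarray, y: float, w: np.ndarray, b: float, eps: float) -> np.ndarray:
    """Return x perturbed by eps along the sign of dL/dx."""
    margin = y * (x @ w + b)
    # dL/dx = -y * sigmoid(-margin) * w; FGSM only needs its sign.
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad)
```

During adversarial training, the model is updated on these perturbed inputs with their original labels; the study above asks whether 3D-reconstruction pre-training makes that procedure more effective.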
(This article belongs to the Special Issue Probabilistic Models in Machine and Human Learning)

14 pages, 8014 KB  
Article
Three-Dimensional Bioprinting of Strontium-Modified Controlled Assembly of Collagen Polylactic Acid Composite Scaffold for Bone Repair
by Weiwei Sun, Wenyu Xie, Kun Hu, Zongwen Yang, Lu Han, Luhai Li, Yuansheng Qi and Yen Wei
Polymers 2024, 16(4), 498; https://doi.org/10.3390/polym16040498 - 11 Feb 2024
Cited by 3 | Viewed by 2094
Abstract
In recent years, the incidence of bone defects has been increasing year by year. Bone transplantation has become the second most frequently required transplant procedure after blood transfusion, and demand continues to rise. Three-dimensional-printed implants can be arbitrarily shaped according to the defects of tissues and organs to achieve precise morphological repair, opening a new way for non-traumatic repair and functional reconstruction. In this paper, strontium-doped mineralized collagen was first prepared by an in vitro biomimetic mineralization method, and then polylactic acid was homogeneously blended with the mineralized collagen to produce a comprehensive bone repair scaffold by a gas extrusion 3D printing method. Characterization through scanning electron microscopy, X-ray diffraction, and mechanical testing revealed that the strontium-functionalized composite scaffold exhibits an inorganic composition and nanostructure akin to those of human bone tissue. The scaffold possesses uniformly distributed and interconnected pores, with a compressive strength reaching 21.04 MPa. The strontium doping in the mineralized collagen improved the biocompatibility of the scaffold and inhibited the differentiation of osteoclasts to promote bone regeneration. This composite scaffold holds significant promise in the field of bone tissue engineering, providing a forward-looking solution for prospective bone injury repair.
