Geometry Reconstruction from Images

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Visualization and Computer Graphics".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 26587

Special Issue Editors


Dr. Daniel Meneveaux
Guest Editor
XLIM Institute, UMR CNRS 7252, University of Poitiers, 86073 Poitiers, France
Interests: computer graphics; lighting simulation; reflectance models; image-based rendering

Dr. Gianmarco Cherchi
Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: computer graphics; geometry processing; surface modeling; volumetric modeling; polycubes

Special Issue Information

Dear Colleagues,

Understanding image content has been a tremendous source of research advances, with many different focuses, target applications, needs, and scientific starting points. A wide range of approaches are now deployed in industry, for instance for quality control in engineering production, video-based security, or 3D modeling for games and films. Yet reconstructing a representation of a scene observed through a camera remains challenging in general, and the specific question of producing a (static or dynamic) geometric model has driven decades of research and is still a very active scientific domain. Sensors are continuously evolving, bringing ever more accuracy and resolution, and new opportunities for reconstructing objects' shapes and detailed geometric variations.

This Special Issue is dedicated to the reconstruction of geometry from images, including, but not limited to, videos, multispectral images, and time-of-flight data. We wish to encourage original contributions that focus on the power of imaging methods for recovering geometric representations of objects or parts of objects. Contributions may follow various approaches, including shape-from-X methods, deep learning, photometric stereo, etc.

Dr. Daniel Meneveaux
Dr. Gianmarco Cherchi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Geometry from images and videos
  • Reconstruction from 3D images
  • Reconstruction from time-of-flight sensors
  • Reconstruction from multispectral images
  • Reconstruction from light-field cameras
  • Shape from X (shading, silhouette, etc.)
  • Multiview reconstruction
  • Photometric stereo
  • Epipolar geometry
  • Structure from motion
  • Space carving and coloring
  • Differential geometry
  • Acquisition systems and calibration
  • Shape from template
  • Deep-learning-based reconstruction
  • Reconstruction of non-rigid geometry
  • Medical imaging
  • Radar, satellites
  • Urban environments
  • Cultural heritage
  • Virtual and augmented reality
  • Gaming

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Editorial


2 pages, 137 KiB  
Editorial
Editorial for the Special Issue on “Geometry Reconstruction from Images”
by Daniel Meneveaux and Gianmarco Cherchi
J. Imaging 2024, 10(2), 29; https://doi.org/10.3390/jimaging10020029 - 23 Jan 2024
Viewed by 1280
Abstract
This special issue on geometry reconstruction from images has received much attention from the community, with 10 published papers [...] Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images)

Research


21 pages, 9974 KiB  
Article
3D Reconstruction of Fishes Using Coded Structured Light
by Christos Veinidis, Fotis Arnaoutoglou and Dimitrios Syvridis
J. Imaging 2023, 9(9), 189; https://doi.org/10.3390/jimaging9090189 - 18 Sep 2023
Cited by 2 | Viewed by 1238
Abstract
3D reconstruction of fishes provides the capability of extracting geometric measurements, which are valuable in the field of Aquaculture. In this paper, a novel method for 3D reconstruction of fishes using the Coded Structured Light technique is presented. In this framework, a binary image, called pattern, consisting of white geometric shapes, namely symbols, on a black background is projected onto the surface of a number of fishes, which belong to different species. A camera captures the resulting images, and the various symbols in these images are decoded to uniquely identify them on the pattern. For this purpose, a number of steps, such as the binarization of the images captured by the camera, symbol classification, and the correction of misclassifications, are realized. The proposed methodology for 3D reconstructions is adapted to the specific geometric and morphological characteristics of the considered fishes with fusiform body shape, something which is implemented for the first time. Using the centroids of the symbols as feature points, the symbol correspondences immediately result in point correspondences between the pattern and the images captured by the camera. These pairs of corresponding points are exploited for the final 3D reconstructions of the fishes. The extracted 3D reconstructions provide all the geometric information which is related to the real fishes. The experimentation demonstrates the high efficiency of the techniques adopted in each step of the proposed methodology. As a result, the final 3D reconstructions provide sufficiently accurate approximations of the real fishes. Full article
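The symbol correspondences described above ultimately feed a standard projector–camera triangulation step. As a minimal illustration, the midpoint method finds the 3D point closest to two matched rays; this is a generic sketch, not the authors' implementation, and the ray origins and directions below are hypothetical.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: the 3D point halfway between the closest
    points of two (generally skew) rays, e.g. a camera ray through a
    detected symbol centroid and the matching projector ray through the
    pattern symbol. o1, o2: ray origins; d1, d2: unit direction vectors."""
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    w0 = o1 - o2
    b = d1 @ d2                      # cosine of the angle between the rays
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b              # assumes the rays are not parallel
    s = (b * e - d) / denom          # parameter along ray 1
    t = (e - b * d) / denom          # parameter along ray 2
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

With noisy detections the two rays no longer intersect, which is why the midpoint (rather than an exact intersection) is used.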

14 pages, 3864 KiB  
Article
Reconstructing Floorplans from Point Clouds Using GAN
by Tianxing Jin, Jiayan Zhuang, Jiangjian Xiao, Ningyuan Xu and Shihao Qin
J. Imaging 2023, 9(2), 39; https://doi.org/10.3390/jimaging9020039 - 8 Feb 2023
Cited by 5 | Viewed by 2120
Abstract
This paper proposed a method for reconstructing floorplans from indoor point clouds. Unlike existing corner and line primitive detection algorithms, this method uses a generative adversarial network to learn the complex distribution of indoor layout graphics, and repairs incomplete room masks into more regular segmentation areas. Automatic learning of the structure information of layout graphics can reduce the dependence on geometric priors, and replacing complex optimization algorithms with Deep Neural Networks (DNN) can improve the efficiency of data processing. The proposed method can retain more shape information from the original data and improve the accuracy of the overall structure details. On this basis, the method further used an edge optimization algorithm to eliminate pixel-level edge artifacts that neural networks cannot perceive. Finally, combined with the constraint information of the overall layout, the method can generate compact floorplans with rich semantic information. Experimental results indicated that the algorithm has robustness and accuracy in complex 3D indoor datasets; its performance is competitive with those of existing methods. Full article

15 pages, 1271 KiB  
Article
Perception and Quantization Model for Periodic Contour Modifications
by Dmitri Presnov and Andreas Kolb
J. Imaging 2022, 8(11), 311; https://doi.org/10.3390/jimaging8110311 - 21 Nov 2022
Cited by 2 | Viewed by 1284
Abstract
Periodic, wave-like modifications of 2D shape contours are often applied to convey quantitative data via images. However, to the best of our knowledge, there has been no in-depth investigation of the perceptual uniformity and legibility of these kind of approaches. In this paper, we design and perform a user study to evaluate the perception of periodic contour modifications related to their geometry and colour. Based on the study results, we statistically derive a perceptual model, which demonstrates a mainly linear stimulus-to-perception relationship for geometric and colour amplitude and a close-to-quadratic relationship for the respective frequencies, with a rather negligible dependency on the waveform. Furthermore, analyzing the distribution of perceived magnitudes and the overlapping of the respective 50% confidence intervals, we extract distinguishable, visually equidistant quantization levels for each contour-related visual variable. Moreover, we give first insights into the perceptual dependency between amplitude and frequency, and propose a scheme for transferring our model to glyphs with different size, which preserves the distinguishability and the visual equidistance. This work is seen as a first step towards a comprehensive understanding of the perception of periodic contour modifications in image-based visualizations. Full article
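The idea of visually equidistant quantization levels can be illustrated by inverting a stimulus-to-perception model: equally spaced *perceived* magnitudes map back to non-uniformly spaced stimulus values. The quadratic mapping below is an illustrative assumption, not the paper's fitted model.

```python
import numpy as np

def equidistant_levels(f_inv, p_min, p_max, n):
    """Given the inverse f_inv of a stimulus-to-perception mapping, return n
    stimulus values whose perceived magnitudes are equally spaced -- the idea
    behind visually equidistant quantization levels."""
    return [f_inv(p) for p in np.linspace(p_min, p_max, n)]

# If perceived magnitude grew quadratically with frequency, p = s**2, then
# s = sqrt(p) and the levels bunch toward higher frequencies:
levels = equidistant_levels(np.sqrt, 1.0, 9.0, 3)   # ~ [1.0, 2.24, 3.0]
```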

11 pages, 1890 KiB  
Article
Experimental Study of Radial Distortion Compensation for Camera Submerged Underwater Using Open SaltWaterDistortion Data Set
by Daria Senshina, Dmitry Polevoy, Egor Ershov and Irina Kunina
J. Imaging 2022, 8(10), 289; https://doi.org/10.3390/jimaging8100289 - 19 Oct 2022
Cited by 3 | Viewed by 1735
Abstract
This paper describes a new open data set, consisting of images of a chessboard collected underwater with different refractive indices, which allows for investigation of the quality of different radial distortion correction methods. The refractive index is regulated by the degree of salinity of the water. The collected data set consists of 662 images, and the chessboard cell corners are manually marked for each image (for a total of 35,748 nodes). Two different mobile phone cameras were used for the shooting: telephoto and wide-angle. With the help of the collected data set, the practical applicability of the formula for correction of the radial distortion that occurs when the camera is submerged underwater was investigated. Our experiments show that the radial distortion correction formula makes it possible to correct images with high precision, comparable to the precision of classical calibration algorithms. We also show that this correction method is resistant to small inaccuracies in the indication of the refractive index of water. The data set, as well as the accompanying code, are publicly available. Full article
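As a minimal illustration of the polynomial radial distortion correction this kind of study investigates: pixels are rescaled around the principal point by a polynomial in the squared radius. The coefficients and principal point below are hypothetical values, not ones from the paper.

```python
import numpy as np

def correct_radial_distortion(pts, center, k1, k2=0.0):
    """Map distorted pixel coordinates to corrected ones with a polynomial
    radial model x_u = c + (x_d - c) * (1 + k1*r^2 + k2*r^4).
    pts: (N, 2) pixel coordinates; center: principal point (cx, cy).
    k1, k2 are illustrative coefficients, not calibrated values."""
    pts = np.asarray(pts, dtype=float)
    d = pts - np.asarray(center, dtype=float)   # offsets from the principal point
    r2 = np.sum(d * d, axis=1, keepdims=True)   # squared radius per point
    scale = 1.0 + k1 * r2 + k2 * r2 * r2        # radial scaling factor
    return center + d * scale
```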

32 pages, 1807 KiB  
Article
Line Clipping in 2D: Overview, Techniques and Algorithms
by Dimitrios Matthes and Vasileios Drakopoulos
J. Imaging 2022, 8(10), 286; https://doi.org/10.3390/jimaging8100286 - 17 Oct 2022
Cited by 5 | Viewed by 4024
Abstract
Clipping, as a fundamental process in computer graphics, displays only the part of a scene which is needed to be displayed and rejects all others. In two dimensions, the clipping process can be applied to a variety of geometric primitives such as points, lines, polygons or curves. A line-clipping algorithm processes each line in a scene through a series of tests and intersection calculations to determine whether the entire line or any part of it is to be saved. It also calculates the intersection position of a line with the window edges so its major goal is to minimize these calculations. This article surveys important techniques and algorithms for line-clipping in 2D but it also includes some of the latest research made by the authors. The survey criteria include evaluation of all line-clipping algorithms against a rectangular window, line clipping versus polygon clipping, and our line clipping against a convex polygon, as well as all line-clipping algorithms against a convex polygon algorithm. Full article
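As a concrete example of the classical line-clipping algorithms such a survey covers, here is a compact Cohen–Sutherland implementation against a rectangular window: endpoints get 4-bit region outcodes, segments are trivially accepted or rejected from the codes, and otherwise an outside endpoint is moved to a window edge.

```python
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def _outcode(x, y, xmin, ymin, xmax, ymax):
    """4-bit region code of a point relative to the clipping window."""
    code = INSIDE
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip the segment (x0,y0)-(x1,y1) against an axis-aligned window.
    Returns the clipped segment, or None if it lies fully outside."""
    c0 = _outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = _outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):            # both endpoints inside: trivial accept
            return x0, y0, x1, y1
        if c0 & c1:                  # both outside on the same side: reject
            return None
        c = c0 or c1                 # pick an endpoint that is outside
        if c & TOP:
            x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
        elif c & BOTTOM:
            x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
        elif c & RIGHT:
            y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
        else:  # LEFT
            y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
        if c == c0:
            x0, y0 = x, y
            c0 = _outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            c1 = _outcode(x1, y1, xmin, ymin, xmax, ymax)
```

The intersection calculations run only for segments that survive both trivial tests, which is exactly the calculation-minimization goal the survey discusses.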

18 pages, 5214 KiB  
Article
A Very Fast Image Stitching Algorithm for PET Bottle Caps
by Xiao Zhu, Zixiao Liu, Xin Zhang, Tingting Sui and Ming Li
J. Imaging 2022, 8(10), 275; https://doi.org/10.3390/jimaging8100275 - 7 Oct 2022
Cited by 4 | Viewed by 1864
Abstract
In the beverage, food and drug industry, more and more machine vision systems are being used for the defect detection of Polyethylene Terephthalate (PET) bottle caps. In this paper, in order to address the result of cylindrical distortions that influence the subsequent defect detection in the imaging process, a very fast image stitching algorithm is proposed to generate a panorama planar image of the surface of PET bottle caps. Firstly, the three-dimensional model of the bottle cap is established. Secondly, the relative poses among the four cameras and the bottle cap in the three-dimensional space are calculated to obtain the mapping relationship between three-dimensional points on the side surface of the bottle cap and image pixels taken by the camera. Finally, the side images of the bottle cap are unfolded and stitched to generate a planar image. The experimental results demonstrate that the proposed algorithm unfolds the side images of the bottle cap correctly and very fast. The average unfolding and stitching time for 1.6-megapixel color caps image can reach almost 123.6 ms. Full article
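The unfolding step described above amounts to sampling the cylinder side on a regular (angle, height) grid and projecting each 3D sample through a calibrated camera to find the source pixel to copy into the panorama. A geometric sketch of that mapping follows; the dimensions, intrinsics, and poses here are placeholders, not the paper's setup.

```python
import numpy as np

def unfold_cap_side(radius, height, width_px, height_px):
    """Sample the side surface of a cylinder (the cap) on a regular
    (angle, height) grid: output pixel (u, v) corresponds to the 3D point
    at angle theta = 2*pi*u/width_px and height z = height*v/height_px.
    Returns an (H, W, 3) array of 3D sample points."""
    u = np.arange(width_px)
    v = np.arange(height_px)
    theta = 2.0 * np.pi * u / width_px
    z = height * v / height_px
    tt, zz = np.meshgrid(theta, z)               # (H, W) grids
    return np.stack([radius * np.cos(tt),
                     radius * np.sin(tt),
                     zz], axis=-1)               # 3D points on the side surface

def project(pts, K, R, t):
    """Pinhole projection of (..., 3) points: x = K(Rp + t), then divide
    by depth. Projecting each cylinder sample through each side camera
    gives the pixel to copy into the unfolded panorama."""
    cam = pts @ R.T + t
    img = cam @ K.T
    return img[..., :2] / img[..., 2:3]
```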

31 pages, 6546 KiB  
Article
Detecting Cocircular Subsets of a Spherical Set of Points
by Basel Ibrahim and Nahum Kiryati
J. Imaging 2022, 8(7), 184; https://doi.org/10.3390/jimaging8070184 - 5 Jul 2022
Cited by 2 | Viewed by 1800
Abstract
Given a spherical set of points, we consider the detection of cocircular subsets of the data. We distinguish great circles from small circles, and develop algorithms for detecting cocircularities of both types. The suggested approach is an extension of the Hough transform. We address the unique parameter-space quantization issues arising due to the spherical geometry, present quantization schemes, and evaluate the quantization-induced errors. We demonstrate the proposed algorithms by detecting cocircular cities and airports on Earth’s spherical surface. These results facilitate the detection of great and small circles in spherical images. Full article
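The great-circle case can be sketched as a Hough-style vote: a unit vector p lies on the great circle with pole n exactly when p · n = 0, so each data point votes for every candidate pole (nearly) orthogonal to it. Below is a minimal accumulator under an illustrative grid quantization, not the paper's quantization scheme.

```python
import numpy as np

def great_circle_hough(points, n_theta=90, n_phi=180, tol=0.02):
    """Hough-style accumulator for great circles through unit vectors
    `points`. Candidate poles are sampled on an (inclination, azimuth)
    grid; a point votes for a pole n when |p . n| < tol, i.e. p lies
    (nearly) on the great circle whose pole is n. Grid resolution and
    tolerance are illustrative choices."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta       # inclination
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi       # azimuth
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    poles = np.stack([np.sin(tt) * np.cos(pp),
                      np.sin(tt) * np.sin(pp),
                      np.cos(tt)], axis=-1)                    # (T, P, 3) poles
    pts = np.asarray(points, dtype=float)
    dots = np.abs(np.einsum("tpk,nk->tpn", poles, pts))        # |p . n| per pair
    acc = np.sum(dots < tol, axis=-1)                          # votes per cell
    return acc, poles
```

The best-voted cell approximates the pole of the dominant great circle; small circles would need one extra parameter (the angular radius about the pole).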

17 pages, 13304 KiB  
Article
A Hybrid Method for 3D Reconstruction of MR Images
by Loubna Lechelek, Sebastien Horna, Rita Zrour, Mathieu Naudin and Carole Guillevin
J. Imaging 2022, 8(4), 103; https://doi.org/10.3390/jimaging8040103 - 7 Apr 2022
Cited by 5 | Viewed by 3195
Abstract
Three-dimensional surface reconstruction is a well-known task in medical imaging. In procedures for intervention or radiation treatment planning, the generated models should be accurate and reflect the natural appearance. Traditional methods for this task, such as Marching Cubes, use smoothing post processing to reduce staircase artifacts from mesh generation and exhibit the natural look. However, smoothing algorithms often reduce the quality and degrade the accuracy. Other methods, such as MPU implicits, based on adaptive implicit functions, inherently produce smooth 3D models. However, the integration in the implicit functions of both smoothness and accuracy of the shape approximation may impact the precision of the reconstruction. Having these limitations in mind, we propose a hybrid method for 3D reconstruction of MR images. This method is based on a parallel Marching Cubes algorithm called Flying Edges (FE) and Multi-level Partition of Unity (MPU) implicits. We aim to combine the robustness of the Marching Cubes algorithm with the smooth implicit curve tracking enabled by the use of implicit models in order to provide higher geometry precision. Towards this end, the regions that closely fit to the segmentation data, and thus regions that are not impacted by reconstruction issues, are first extracted from both methods. These regions are then merged and used to reconstruct the final model. Experimental studies were performed on a number of MRI datasets, providing images and error statistics generated from our results. The results obtained show that our method reduces the geometric errors of the reconstructed surfaces when compared to the MPU and FE approaches, producing a more accurate 3D reconstruction. Full article
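Marching Cubes and its parallel variant Flying Edges share the same core interpolation step: the isosurface crossing on a voxel edge is located by linear interpolation of the scalar values at the edge's endpoints. A one-function sketch of that step (not the paper's hybrid merging logic):

```python
def iso_crossing(p0, p1, v0, v1, iso):
    """Linear interpolation of the isosurface crossing on a grid edge:
    the point between voxel corners p0 and p1 (with scalar values v0, v1,
    one below and one above `iso`) where the field equals `iso`. Triangles
    built from these crossings are what smoothing post-processing or
    implicit models such as MPU then refine."""
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```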

Review


23 pages, 3511 KiB  
Review
Three-Dimensional Reconstruction from a Single RGB Image Using Deep Learning: A Review
by Muhammad Saif Ullah Khan, Alain Pagani, Marcus Liwicki, Didier Stricker and Muhammad Zeshan Afzal
J. Imaging 2022, 8(9), 225; https://doi.org/10.3390/jimaging8090225 - 23 Aug 2022
Cited by 4 | Viewed by 4875
Abstract
Performing 3D reconstruction from a single 2D input is a challenging problem that is trending in literature. Until recently, it was an ill-posed optimization problem, but with the advent of learning-based methods, the performance of 3D reconstruction has also significantly improved. Infinitely many different 3D objects can be projected onto the same 2D plane, which makes the reconstruction task very difficult. It is even more difficult for objects with complex deformations or no textures. This paper serves as a review of recent literature on 3D reconstruction from a single view, with a focus on deep learning methods from 2018 to 2021. Due to the lack of standard datasets or 3D shape representation methods, it is hard to compare all reviewed methods directly. However, this paper reviews different approaches for reconstructing 3D shapes as depth maps, surface normals, point clouds, and meshes; along with various loss functions and metrics used to train and evaluate these methods. Full article
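Among the point-cloud losses and metrics such reviews cover, the Chamfer distance is one of the most common. A brute-force NumPy sketch, fine for small clouds (real pipelines use accelerated nearest-neighbour search):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and
    b (M, 3): the mean nearest-neighbour squared distance in both
    directions. Widely used to train and evaluate point-cloud
    reconstruction networks."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```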

Other

11 pages, 2086 KiB  
Technical Note
Guidelines for Accurate Multi-Temporal Model Registration of 3D Scanned Objects
by Kate J. Benfield, Dylan E. Burruel and Trevor J. Lujan
J. Imaging 2023, 9(2), 43; https://doi.org/10.3390/jimaging9020043 - 14 Feb 2023
Cited by 1 | Viewed by 1992
Abstract
Changes in object morphology can be quantified using 3D optical scanning to generate 3D models of an object at different time points. This process requires registration techniques that align target and reference 3D models using mapping functions based on common object features that are unaltered over time. The goal of this study was to determine guidelines when selecting these localized features to ensure robust and accurate 3D model registration. For this study, an object of interest (tibia bone replica) was 3D scanned at multiple time points, and the acquired 3D models were aligned using a simple cubic registration block attached to the object. The size of the registration block and the number of planar block surfaces selected to calculate the mapping functions used for 3D model registration were varied. Registration error was then calculated as the average linear surface variation between the target and reference tibial plateau surfaces. We obtained very low target registration errors when selecting block features with an area equivalent to at least 4% of the scanning field of view. Additionally, we found that at least two orthogonal surfaces should be selected to minimize registration error. Therefore, when registering 3D models to measure multi-temporal morphological change (e.g., mechanical wear), we recommend selecting multiplanar features that account for at least 4% of the scanning field of view. For the first time, this study has provided guidelines for selecting localized object features that can provide accurate 3D model registration for 3D scanned objects. Full article
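The mapping functions used for this kind of registration are typically least-squares rigid transforms estimated from corresponding feature points, e.g. via the Kabsch/Procrustes method. A sketch follows; the correspondences in the usage test are synthetic, not the study's scan data.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch/Procrustes): the rotation R
    and translation t minimizing ||R @ src_i + t - dst_i||^2 over
    corresponding points, as used to map a target 3D model onto the
    reference via shared registration-block features."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection):
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

After alignment, the residual surface-to-surface distance over the region of interest gives the registration error the study quantifies.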
