Article

A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry

by Aleksandra Jasińska 1, Krystian Pyka 1,*, Elżbieta Pastucha 2 and Henrik Skov Midtiby 2

1 Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
2 UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
* Author to whom correspondence should be addressed.
Sensors 2023, 23(2), 728; https://doi.org/10.3390/s23020728
Submission received: 14 December 2022 / Revised: 3 January 2023 / Accepted: 4 January 2023 / Published: 9 January 2023
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)

Abstract: Recently, the term smartphone photogrammetry has gained popularity, suggesting that photogrammetry may become a simple measurement tool for virtually every smartphone user. This research was undertaken to clarify whether it is appropriate to use the Structure from Motion—Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in Uncrewed Aerial Vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested. Fourteen smartphones were calibrated on a checkerboard test field, and the process was repeated multiple times. Two observations were made: (1) most smartphone cameras have lower stability of the internal orientation parameters than a Digital Single-Lens Reflex (DSLR) camera, and (2) the principal distance and the position of the principal point change constantly. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed using the SfM-MVS method in self-calibration and pre-calibration variants. A comparison of the resulting models with a reference DSLR-created model showed that introducing a calibration obtained on the test field instead of self-calibration improves the geometry of the 3D models; in particular, deformations of local concavities and convexities decreased. In conclusion, there is real potential in smartphone photogrammetry, but it also has its limits.

1. Introduction

1.1. Motivation

The Structure from Motion (SfM) method paved the way for the use of consumer-grade cameras in photogrammetry. Drone 3D mapping, in which light, small, customized cameras are used, has become popular. Consumer-grade cameras, such as Digital Single-Lens Reflex (DSLR) or mirrorless cameras, are successfully used in documenting monuments, museum artifacts, and geological objects, as well as in industrial photogrammetry. In recent years, smartphone cameras (SPCs) have been introduced in similar and often broader fields of application. Consequently, a new term, smartphone photogrammetry (SPP), has been coined. It is thus worth asking whether the rules governing the accuracy of photogrammetric measurements can be transferred to SPP. The question is particularly important because SPP development lacks in-depth research on the accuracy of the resulting 3D models.

1.2. Camera Calibration and on-the-Job Calibration in Photogrammetry

Pinhole camera calibration is a process designed to estimate interior orientation elements (IO) including principal distance (f), principal point offset (x0, y0) as well as distortion parameters [1]. The most prevalent distortion model is the Brown model [2], which includes radial (k parameters) and tangential (p parameters) distortion.
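As a concrete illustration, the Brown model maps undistorted normalized image coordinates to their distorted position. The following is a minimal sketch (the coefficient values used in the test are arbitrary, chosen only for demonstration):

```python
def brown_distortion(x, y, k, p):
    """Brown distortion model: map undistorted normalized image
    coordinates (x, y) to their distorted position.
    k = (k1, k2, k3): radial coefficients; p = (p1, p2): tangential."""
    k1, k2, k3 = k
    p1, p2 = p
    r2 = x * x + y * y  # squared radial distance from the principal point
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all coefficients zero the mapping is the identity; a nonzero k1 alone produces the familiar barrel or pincushion effect depending on its sign.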
In photogrammetric surveys utilizing metric cameras, such as DMC or Vexcel cameras, the calibration is determined in a separate process. The estimated parameters are then included as fixed values in the bundle adjustment (BA) of the image block. This strategy relies on the high stability of these cameras, which allows the BA unknowns to be reduced to the elements of the images’ exterior orientation (EO) [3]. Semi-metric cameras (e.g., PhaseOne cameras) are also characterized by high IO stability, although periodic calibration is recommended [4]. The IO stability of custom UAV (Uncrewed or Unmanned Aerial Vehicle) cameras is also improving, but a detailed assessment of this requires research over a longer period of time. Therefore, UAV cameras used in photogrammetric surveys are treated as non-metric, as are all cameras produced for general photography rather than for measurement purposes.
Zhang proposed one of the most prevalent calibration methods [5]. The method uses a flat calibration field in the form of a checkerboard. It has been implemented in OpenCV and has become a de facto standard in computer vision (CV) [6,7]. The two-step procedure, where calibration is performed first and then the IO are fixed in CV operational tasks, works better the more stable the IO parameters are. A one-step method has been researched for many years, both in photogrammetry and computer vision. In this method, calibration runs simultaneously with the process of determining the EO. SfM is representative of a one-step methodology. It was created using multiple research contributions [8,9,10].
Within SfM, IO parameters are estimated as additional BA unknowns or in a separate process iteratively interlaced with BA. SfM self-calibration provides correct results provided a few conditions are met: the tie points should be evenly distributed in the photos, and the ratio between the object’s depth range and the image acquisition distance should be large enough. The latter requirement can be mitigated by using significant variability in the images’ orientation angles (roll, pitch). The IO accuracy is also unfavorable when the network of images is built with only a few strips, which is typical for linear objects [11]. In those cases, the IO errors distort the EO, which leads to the so-called bowl effect [12].
The estimation of IO and EO in SfM paves the way for the generation of a dense point cloud. The process is called multi-view stereo (MVS). Two MVS solutions are used, the first is based on a sequence of stereo pairs [13] and the second on the simultaneous matching of multiple images [14]. The SfM-MVS method has been implemented in many tools, both open source (e.g., OpenCV [15], OpenDroneMap [16], Meshroom [17], MicMac [18]) and commercial (e.g., Pix4D [19], Metashape [20]).

1.3. Smartphone Cameras vs. DSLR

The built-in camera is a basic component of every smartphone. Increasingly often, one can find devices equipped with a set of lenses with different characteristics, e.g., a wide-angle, an ultra-wide-angle, and a telephoto lens. The size of smartphones necessitates the miniaturization of their cameras: the smartphone casing, often less than 10 mm thick, leaves little space for the lens and the sensor matrix. In smartphones, matrices with a resolution of 12 Mpix and a 4:3 frame aspect ratio prevail [21]. In DSLRs, as well as in their mirrorless counterparts, the resolution of the matrices is often 20 Mpix or slightly more. The physical sizes of the matrices differ significantly: the diagonals of smartphone matrices reach 0.5″, while in DSLRs they are longer than 1″. The relatively high Mpix resolution of smartphone cameras is achieved thanks to very small matrix pixels, usually in the range of 1 to 2 μm; in DSLRs, the matrix pixels are several times larger. A smaller pixel size adversely affects the amount of image noise. Therefore, high-resolution SPC images, reaching 100 Mpix, are usually heavily noisy [22]. The problem is reduced by replacing the classic Bayer filter with so-called “multicell sensors” with pixel clusters of 4 to 9 pixels. Pixel clusters have increased sensitivity to light, yielding either a very-high-resolution effect or noise reduction by pixel binning [21]. Both SPC and DSLR lenses consist of multiple elements. Due to the very short distance between the lens and the matrix (a few millimeters), smartphones use plastic lenses instead of the glass lenses found in DSLRs. Plastic lenses have high thermal sensitivity, which causes changes in the refractive index, sharpness, and field curvature of the lens during use [21].

1.4. Overview of Smartphone Photogrammetry Applications

The use of smartphones in photogrammetry has been gaining popularity for some time. Thanks to the widespread use of smartphones, their computing power, and widely available easy-to-use software, more and more people decide to use smartphones for 3D modeling as an alternative to expensive, specialized equipment (DSLR cameras, laser scanners, etc.). Smartphones are mostly utilized to survey and model small objects. A literature study on smartphone photogrammetry shows most usage in the medical field, the inventory of broadly understood cultural heritage, issues related to geomorphology, geotechnics, and geology, as well as strictly industrial applications (e.g., displacement measurement or volume measurement of earth masses). Table 1 lists smartphone photogrammetry publications according to the research area. The search was conducted using the Scopus database, looking through research paper titles, keywords, and abstracts for the keywords ‘photogrammetry’, ‘smartphone’, ‘3D’, and ‘model’. The list was then manually limited to include only relevant publications.
By far the most common area where smartphones are used is the documentation of cultural heritage objects. This applies to museum artifacts [25,26], archaeological objects [23,24], and monuments (understood as single objects or complexes of objects) [27,28,29,30]. The second most popular field is medical applications. Solutions that allow recreating the shape and size of individual parts of the body [45,46,47,57,58] and models for the needs of, e.g., prostheses or plastic surgeries [48,49,51,63] are predominant here. In the case of geological issues, the analysis of rock porosity and roughness is a common topic [65,66], while in geomorphology it is popular to create digital models of various types of geomorphological forms (e.g., cliffs, caves) [70,71]. In terms of industrial applications, it is worth noting displacement determination [78,79] and utility networks modeling [75,76].
In addition to the large thematic groups described above, one can find a number of publications in which 3D models of various objects were made using images taken from a phone [80,81,82,83,84,85,86,87,88,89,90,91,92,93]. Smartphones were also used in radioactivity detection [94] or in forensic science [95,96]. It is worth mentioning that despite the increasingly frequent use of smartphone cameras for 3D modeling, it is very rare for the process to include camera calibration. The most common solution is on-the-job calibration. Among the many smartphone photogrammetry publications, only a small part of them contain information on calibration [30,40,41,59,64,67,72,74,76,78,90,93,97,98,99,100,101,102,103].
Figure 1 shows how many cited publications have been published each year. It can be seen that starting in 2016, the number of publications using smartphone photogrammetry is constantly growing and this trend will certainly continue in the future.

2. Materials and Methods

2.1. Research Aim

The main research objective was to evaluate smartphone cameras from the point of view of their suitability for photogrammetric measurements. It was assumed that the smartphone camera corresponds to the pinhole camera model. This made it possible to use analytical photogrammetry based on the collinearity equations [1,3]. Before the start of this research, experience had been gained in utilizing smartphone imagery for 3D modeling with the SfM-MVS method. In those projects, the source of the IO was self-calibration, which is a common practice. However, it was noted that for the same camera, the calibration parameters, in particular the principal distance and the position of the principal point, would vary greatly between projects. Therefore, it was decided to investigate whether this was the result of camera IO instability, or whether the problem lies in the use of self-calibration in cases where the alignment of the bundle of images is poorly conditioned. A goal was set to develop a method minimizing the effect of the IO instability of cameras on the results of a photogrammetric survey. The existing processing pipeline should be changed as little as possible, to retain usability.
The research involved two main stages. In the first one, calibration was performed for multiple, different smartphones at specific time intervals. A checkerboard calibration method was selected. The analysis of the calibration results made it possible to separate smartphones with high and low IO stability. In the second stage, photos of a small artifact were taken with selected smartphones. Using the SfM-MVS method, 3D models in the form of a dense point cloud were developed. Two 3D models were developed for each SPC, one with self-calibration and the other with pre-calibration fixed in SfM processing. Then, shape deformations of the obtained models were analyzed based on the distance to the reference model developed with the DSLR.

2.2. IO Stability of Smartphone Cameras

IO stability was tested for 14 smartphone cameras from Xiaomi, Samsung, Motorola, Lenovo, Oppo, Realme, and Huawei. Only those SPCs that have the option of manual focus were selected (iPhone cameras do not have this option and were therefore omitted from this experiment).
The calibration was based on the widely used Zhang method [5]. A chessboard displayed on a computer screen was photographed. For each calibration, at least 9 photos were taken with a wide variety of roll and pitch angles while striving for the checkerboard to cover 90 to 100% of the photo frame, which also forced changes in the camera–screen distance. This chessboard image acquisition strategy reduces the correlations of the estimated unknowns. The camera-calibration-with-large-chessboards program was used to calculate the calibration parameters. The internal chessboard corners were located by convolving a grayscale version of the input image with a kernel consisting of complex numbers; the structure of the kernel was chosen such that chessboard corners generate a strong response. After detection, the corner locations were enumerated, and the OpenCV [15] camera calibration function was used to determine the camera parameters. The application is available for download under an open-source MIT license [104].
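The corner detection idea can be sketched as follows. This is a simplified illustration of a complex-valued kernel that responds strongly to chessboard (saddle-point) corners and vanishes over uniform regions; it is not the authors' exact implementation, and the kernel shape and parameters are assumptions:

```python
import numpy as np

def corner_response(img, radius=4):
    """Magnitude of the convolution of a grayscale image with a
    complex kernel (second angular harmonic, Gaussian radial weight).
    Chessboard corners produce a strong response; constant regions
    cancel exactly by the kernel's rotational antisymmetry."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.hypot(xs, ys)
    theta = np.arctan2(ys, xs)
    kernel = np.exp(2j * theta) * np.exp(-(r / radius) ** 2)
    kernel[radius, radius] = 0  # angle undefined at the center
    kh, kw = kernel.shape
    h, w = img.shape
    resp = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(resp.shape[0]):          # naive sliding window;
        for j in range(resp.shape[1]):      # fine for small images
            resp[i, j] = abs(np.sum(img[i:i + kh, j:j + kw] * kernel))
    return resp
```

Local maxima of the response map would then be enumerated as corner candidates before passing them to the OpenCV calibration routine.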
The calibration was repeated at least four times over a two-month period. The focus was always set to infinity so that for each calibration the principal distance would be identical (equal to the focal length in this case). The following were recorded from each calibration: principal distance, principal point location, radial (k1, k2, k3), and tangential (p1, p2) distortion coefficients.

2.3. 3D Model Deformation

The subject of modeling was a small sculpture (circa 30 × 30 × 30 cm) shown in Figure 2. Two smartphones were selected, representing different levels of IO stability. The photos were taken omnidirectionally, with the cameras stationary and the object rotated between the photos at an interval of 10°. This is a popular method for modeling small artifacts [25,26,66,69,89]. There were textured walls in the background of the sculpture, so it was necessary to remove the so-called stationary tie points during processing. Many researchers take photos so that the background is untextured [25,26,69]. Using the SfM-MVS method, two models in the form of a dense point cloud were developed for each SPC, one with self-calibration and the other with pre-calibration. The SfM-MVS process was carried out using the Metashape software, which can automatically exclude stationary tie points from the SfM process [20].
Additionally, pictures of the sculpture were taken with a Nikon 5200 DSLR camera (manufactured by Nikon in Japan) in the same way as described above. The model was then developed using the SfM-MVS method with fixed pre-calibration (IO parameters were determined from the chessboard test and then considered constant). After confirming the IO stability of the Nikon D5200 camera and considering the low noise of Nikon cameras [74], the 3D model was treated as a reference. The models were compared in CloudCompare [105] using the cloud-to-cloud distance function.
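The cloud-to-cloud comparison can be sketched in a few lines. The following is a brute-force stand-in for CloudCompare's octree-accelerated C2C distance, assuming the clouds are given as N×3 NumPy arrays:

```python
import numpy as np

def cloud_to_cloud_stats(cloud, reference):
    """For each point of `cloud`, find the distance to the nearest
    point of `reference` (the C2C measure used in CloudCompare),
    then summarize with the statistics reported in Table 4.
    Brute force, O(N*M); fine for small clouds only."""
    cloud = np.asarray(cloud, float)
    reference = np.asarray(reference, float)
    diff = cloud[:, None, :] - reference[None, :, :]
    d = np.min(np.linalg.norm(diff, axis=2), axis=1)
    return {"mean": d.mean(), "std": d.std(),
            "median": np.median(d), "max": d.max()}
```

For the dense clouds produced here, a spatial index (octree or k-d tree) would replace the all-pairs distance matrix, but the statistics are the same.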
The basic characteristics of the cameras used in the research are given in Table 2. The position of each camera was selected so that the object occupied a similar part of the photo. Due to the similar horizontal angle of view of the cameras, the positions were very close to each other. The focus was set manually to a selected place on the sculpture. For each camera, a checkerboard calibration was performed with the same focus setting as for the sculpture registration (about 1 m). The calibration was done immediately after taking the photos; for the Samsung Galaxy S10 (manufactured by Samsung in Vietnam), due to its low IO stability, the calibration was performed twice, both before and after the sculpture registration. For this SPC, the final IO parameters were calculated in one process from all photos (4 × 9).
In each SfM-MVS process, only 3 markers with known XYZ coordinates in a local object coordinate system were used. The markers were used in a rigid transformation of the model to the local coordinate system, which facilitated the comparison of the 3D models. Since all ground control points take part in the adjustment in the BA procedure, the remaining markers (visible in Figure 2) were not used, to avoid their impact on local changes in the shape of the 3D models.
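A rigid transformation (rotation plus translation, no scale) from marker correspondences can be estimated with the Kabsch algorithm. The paper does not specify the exact solver used, so the following is a sketch under the assumption that at least three non-collinear markers are available:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transformation (R, t) mapping marker
    coordinates `src` onto their known local coordinates `dst`,
    via the Kabsch algorithm: dst_i ~ R @ src_i + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct a possible reflection so det(R) = +1
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Because no scale is estimated, the transformation cannot hide a globally shrunken or inflated model, which is exactly why it is suitable for comparing model deformations.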

3. Results

3.1. IO Stability of Smartphone Cameras

The analysis of the SPCs’ calibration results showed significant differences between the individual camera models in terms of IO stability. The greatest variation concerns the f, x0, and y0 parameters. For some cameras, the variability of these parameters was small, at the level of a few pixels, but for most of them, the variability was higher (see Figure 3). Distortion changes between calibrations of the same camera were very small. It is true that the position of the principal point affects the distortion estimation, but with high radial distortion, reaching 50 pixels at the edges of the image, this influence is small. The tangential distortion was very small for all the tested cameras; it did not exceed 2 pixels at the edges of the image.
For the tested SPCs, a stability ranking was developed based on the variability of f, x0, and y0. The mean absolute deviation (MAD) was calculated for each of these parameters. Then, each camera was assigned three numbers from 1 to 14, resulting from separately sorting the three MAD statistics (from min to max). The sum of these numbers determined the position in the ranking shown in Table 3.
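The MAD-based ranking described above can be sketched as follows; the camera names and values in the test are hypothetical:

```python
import numpy as np

def stability_ranking(calibrations):
    """Rank cameras by IO stability as in Table 3. `calibrations`
    maps camera name -> sequence of (f, x0, y0) results in pixels,
    one triple per repeated calibration. For each camera the mean
    absolute deviation (MAD) of each parameter is computed, each
    parameter is ranked separately (1 = most stable), and the three
    ranks are summed; a lower total means higher stability."""
    names = list(calibrations)
    mads = []
    for n in names:
        runs = np.asarray(calibrations[n], dtype=float)  # (n_runs, 3)
        mads.append(np.mean(np.abs(runs - runs.mean(axis=0)), axis=0))
    mads = np.array(mads)                                # (n_cameras, 3)
    ranks = mads.argsort(axis=0).argsort(axis=0) + 1     # per-column ranks
    totals = ranks.sum(axis=1)
    return sorted(zip(names, totals.tolist()), key=lambda t: t[1])
```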
For comparison, the Nikon DSLR camera obtained the following MAD values of f, x0, y0: 0.31, 0.96, 0.41 pixels.
The following were selected for the next stage of the research: the Xiaomi Redmi Note 11S (manufactured by Xiaomi in China) and the Samsung Galaxy S10, i.e., the SPCs ranked first and last.

3.2. 3D Model Deformations

Deformations of the individual 3D models obtained from the two SPCs were assessed by the average value of the distance (d) to the reference model and its standard deviation, the median, and the maximum distance. Table 4 summarizes these measures; the values for the variants with self-calibration and with pre-calibration are given. Pre-calibration reduced the average distance compared to self-calibration by 30% and 34% for the Xiaomi Redmi Note 11S and the Samsung Galaxy S10, respectively. The standard deviation and median also decreased. The improvement in the standard deviation is relatively small because of a small number of noise points for which the cloud-to-reference distance reaches approximately a dozen mm (the Xiaomi Redmi Note 11S data is noisier).
Figure 4 shows the density plots of the distances between the SPC-derived 3D models and the reference model. The shape of the density plots shows that the models obtained with pre-calibration have a much higher concentration of small errors and contain far fewer large errors. For the Samsung Galaxy S10, pre-calibration reduces the share of errors in the range of 0.9–2.8 mm from 49% to 20%. For the Xiaomi Redmi Note 11S, errors in the range of 0.7–2.4 mm are reduced from 39% to 17%.
The data in Table 4, combined with the conclusions drawn from Figure 4, clearly indicate that the models with pre-calibration are closer to the reference model than the models with self-calibration.
In order to assess the spatial distribution of errors in the 3D models, visualizations showing deviations from the reference model were developed (Figure 5 and Figure 6). Views showing the sculpture from the front were selected for the presentation, as the nature of the deformation is better visible from this perspective. Figure 5a and Figure 6a show that for the variants with self-calibration, the errors are arranged in clusters around local deformations. On the other hand, the deformations of the models created with the use of pre-calibration are smaller in value and more spatially dispersed (Figure 5b and Figure 6b). This phenomenon is more clearly visible in the Samsung Galaxy S10 model (Figure 5a), where the largest deformations in terms of value occur on the upper part of the sculpture (hair), while the left part of the sculpture and a fragment of the right part above the eyes show greater deformations than the rest of the right part. For the Xiaomi Redmi Note, the self-calibration model has smaller deformations (Figure 6a), but the general tendency of their spatial arrangement is similar to that of the Samsung Galaxy S10. In the models with pre-calibration, however, a reduction of clusters with a similar distance error is visible, stronger for the Xiaomi Redmi Note (Figure 6b) than for the Samsung Galaxy S10 (Figure 5b).

4. Discussion

Calibration of the various smartphone cameras showed that there are large differences in terms of IO stability. Among the examined cameras, there were those whose key IO values changed only at the level of a few pixels (stable). For most, the changes were larger, and for some they reached several dozen pixels (mainly for the principal distance). The research shows that stability is independent of the manufacturer (e.g., Xiaomi). There is also no correlation between the market price of a smartphone and its IO stability. The instability is probably due to the interference of the camera software in the final image. It is magnified by the goal of that processing, which is always the visual quality of the image, not conformity with the pinhole camera model. It cannot be ruled out that the instability is also influenced by lens deformations caused by temperature changes.
Regardless of the differences between smartphone models, the following observations were made: (1) in smartphones, the principal distance f and the position of the principal point x0, y0 change continuously; (2) the distortion is highly stable, and the tangential distortion is so small that it can be neglected; (3) the use of pre-calibration in the SfM-MVS method increases the accuracy of 3D modeling.
The last of the above conclusions is of great practical importance, as self-calibration, which is an optional element of the SfM-MVS method, is commonly used in SPP. This is due to the adoption of the rules used in UAV photogrammetry. However, the cameras used in UAVs are in the vast majority much more stable than SPCs, as their production is dedicated to metric applications. In addition, a UAV captures imagery of terrain, which is beneficial for the detection of tie points throughout the entirety of the image. When taking pictures of small objects with SPCs, such convenient situations occur less often.
Estimation of the IO and EO parameters during SfM-MVS is popular, as it simplifies the process of photogrammetric 3D modeling. From a theoretical point of view, it is preferable to estimate IO and EO in separate processes. This is due to the correlation of the IO and EO parameters, which increases the uncertainty of the estimation. The highest correlations occur between the principal distance (f) and the camera height (Z0), as well as between the location of the principal point (x0, y0) and the pitch and roll camera orientation angles. The critical correlation coefficients can vary from a few percent to 80% [7]. The highest correlation values are obtained when the object is flat and the photos are only slightly rotated relative to each other and taken from a similar distance (height). Such conditions did not occur during the modeling of the sculpture, but another problem arose: the sculpture occupied only about 15% of the surface of the photos, always as the central fragment. This causes very poor conditioning of the computational process in which EO and IO are determined. The IO–EO correlation allows the photos to tie into a consistent network, but both the IO and EO parameters are far from their true values. Continuing the MVS process with mutually consistent but actually incorrect EO and IO values yields a deformed point cloud. Unfortunately, the SfM-MVS process does not clearly signal poorly conditioned computations.
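The correlations discussed here are usually read off the covariance matrix of the BA unknowns. The following sketch assumes an illustrative covariance matrix and parameter names, not the output of any particular software:

```python
import numpy as np

def bundle_correlations(cov, names):
    """Correlation coefficients between adjustment unknowns, derived
    from their covariance matrix: r_ij = cov_ij / (sigma_i * sigma_j).
    High |r| between f and Z0, or between (x0, y0) and pitch/roll,
    signals the ill-conditioned geometry discussed above."""
    cov = np.asarray(cov, float)
    sigma = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sigma, sigma)
    return {(names[i], names[j]): corr[i, j]
            for i in range(len(names))
            for j in range(i + 1, len(names))}
```

Inspecting these coefficients after the adjustment is one simple way to detect the poorly conditioned cases that, as noted above, SfM-MVS itself does not signal.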
The introduction of a separate calibration into SfM-MVS brought positive results: the 3D models for both SPCs gained in accuracy. Global statistical measures indicate an improvement of about 30%, which, however, does not fully reflect the improvement in the geometric quality of the models. A significant benefit of the introduced strategy is the reduction of local deformations of concavities and convexities.
The improvement in model accuracy as a result of using pre-calibration was similar for both tested SPCs, with a slight advantage of the Xiaomi Redmi Note 11S over the Samsung Galaxy S10. This raises the question of why the camera with stable IO gave only slightly better accuracy than the unstable one. To explain this effect, three things must be considered: the ground sampling distance (GSD), the radiometric quality of the images, and the pre-calibration method. In the experiment, the GSD of the Xiaomi Redmi Note 11S was 17% larger than that of the Samsung Galaxy S10. This caused more noise in the photos on the outline of the sculpture, where the brightness of a pixel is shaped by rays reflected both from the sculpture and from the background. A visual comparison of the images shows that the camera that is more stable in terms of IO produces more radiometric interference (though visual assessment is subjective). This is also evidenced by the greater number of outliers in the computed distances (relative to the average values). Probably better color demosaicing is used in the Samsung Galaxy S10 camera. On the other hand, the low IO stability of the Samsung Galaxy S10 camera was suppressed by performing the calibration from four series of photos taken before and after the registration of the sculpture (only one series of photos was taken for the Xiaomi Redmi Note 11S).

5. Conclusions

Smartphone photogrammetry is a term announced prematurely, because too few studies have been conducted to objectively verify the obtainable accuracy. The research has shown that IO stability varies between different SPC models. Therefore, in order to use a smartphone for metric measurements with a certain accuracy, it must be subjected to a stability test by means of independent calibrations.
Drawing conclusions from the statistics of the SfM-MVS process can be misleading and can hide the deformation of the developed 3D models. We have experimentally proven that in SPP it is beneficial to perform calibration outside of SfM-MVS, in a separate process. Our solution does not restrict access to SPP, because calibration only requires taking pictures of a chessboard displayed on a computer monitor, and both open-source and commercial software can be used for the calculations. In addition, we provide a program that, compared to others, detects chessboard corners more reliably.
For some applications, such as modeling of geological structures, the use of smartphone photogrammetry can be recommended without additional conditions. However, where the metric accuracy is important due to the purpose of the measurement, measures to improve the accuracy described in the article should be introduced.
Our study did not include the influence of radiometric quality on SfM-MVS results using SPCs. We see the need for meticulous research on this aspect in order to comprehensively determine the metric potential of SPP. We believe that, in the face of the many unexplained issues affecting the quality of 3D models from smartphones, one should always consider whether the modeling photos should instead be taken with a DSLR camera.

Author Contributions

Conceptualization, K.P. and A.J.; methodology, K.P. and E.P.; software, H.S.M. and E.P.; validation, K.P., E.P. and A.J.; formal analysis, A.J.; investigation, A.J.; resources, A.J.; data curation, A.J.; writing—original draft preparation, K.P. and E.P.; writing—review and editing, E.P., H.S.M. and A.J.; visualization, E.P. and A.J.; supervision, K.P.; project administration, A.J.; funding acquisition, A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by the program “Excellence Initiative—Research University” for the AGH University of Science and Technology.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging, 3rd ed.; Walter de Gruyter GmbH: Berlin, Germany, 2020.
2. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866.
3. Kraus, K. Photogrammetry. 2: Advanced Methods and Applications, 4th ed.; Dümmler: Bonn, Germany, 1997.
4. Kolecki, J.; Kuras, P.; Pastucha, E.; Pyka, K.; Sierka, M. Calibration of Industrial Cameras for Aerial Photogrammetric Mapping. Remote Sens. 2020, 12, 3130.
5. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
6. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. In Proceedings of the ISPRS Commission V Symposium ‘Image Engineering and Vision Metrology’; Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272.
  7. Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46. [Google Scholar] [CrossRef]
  8. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  9. Pollefeys, M.; Van Gool, L.; Vergauwen, M.; Verbiest, F.; Cornelis, K.; Tops, J.; Koch, R. Visual Modeling with a Hand-Held Camera. Int. J. Comput. Vis. 2004, 59, 207–232. [Google Scholar] [CrossRef]
  10. Mohr, R.; Quan, L.; Veillon, F. Relative 3D Reconstruction Using Multiple Uncalibrated Images. Int. J. Robot. Res. 1995, 14, 619–632. [Google Scholar] [CrossRef] [Green Version]
  11. Zhou, Y.; Rupnik, E.; Meynard, C.; Thom, C.; Pierrot-Deseilligny, M. Simulation and Analysis of Photogrammetric UAV Image Blocks—Influence of Camera Calibration Error. Remote Sens. 2019, 12, 22. [Google Scholar] [CrossRef] [Green Version]
  12. Huang, W.; Jiang, S.; Jiang, W. Camera Self-Calibration with GNSS Constrained Bundle Adjustment for Weakly Structured Long Corridor UAV Images. Remote Sens. 2021, 13, 4222. [Google Scholar] [CrossRef]
  13. Hirschmüller, H. Accurate and Efficient Stereo Processing by Semi-Global Matching and Mutual Information. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 807–814. [Google Scholar] [CrossRef]
  14. Furukawa, Y.; Ponce, J. Accurate, Dense, and Robust Multiview Stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376. [Google Scholar] [CrossRef]
  15. OpenCV. Open Source Computer Vision Library. 2015. Available online: https://opencv.org (accessed on 14 December 2022).
  16. Open Drone Map [Computer Software]. 2017. Available online: https://opendronemap.org (accessed on 14 December 2022).
  17. AliceVision. Meshroom: A 3D Reconstruction Software. 2018. Available online: https://github.com/alicevision/Meshroom (accessed on 14 December 2022).
  18. Rupnik, E.; Daakir, M.; Deseilligny, M.P. MicMac—A free, open-source solution for photogrammetry. Open Geospat. Data Softw. Stand. 2017, 2, 14. [Google Scholar] [CrossRef] [Green Version]
  19. Pix4D SA. Available online: https://www.pix4d.com (accessed on 14 December 2022).
  20. Agisoft Metashape Professional (Version 1.6.3) (Software). Available online: https://www.agisoft.com (accessed on 14 December 2022).
  21. Blahnik, V.; Schindelbeck, O. Smartphone imaging technology and its applications. Adv. Opt. Technol. 2021, 10, 145–232. [Google Scholar] [CrossRef]
  22. Kawahito, S.; Seo, M.-W. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors. Sensors 2016, 16, 1867. [Google Scholar] [CrossRef] [Green Version]
  23. Brandolini, F.; Cremaschi, M.; Zerboni, A.; Degli Esposti, M.; Mariani, G.S.; Lischi, S. SfM-photogrammetry for fast recording of archaeological features in remote areas. AeC 2020, 31, 33–45. [Google Scholar] [CrossRef]
  24. Liba, N. Making 3D Models Using Close-Range Photogrammetry: Comparison of Cameras and Software. In Proceedings of the 19th International Multidisciplinary Scientific GeoConference SGEM 2019, Sofia, Bulgaria, 30 June–6 July 2019; pp. 561–568. [Google Scholar] [CrossRef]
  25. Apollonio, F.; Fantini, F.; Garagnani, S.; Gaiani, M. A Photogrammetry-Based Workflow for the Accurate 3D Construction and Visualization of Museums Assets. Remote Sens. 2021, 13, 486. [Google Scholar] [CrossRef]
  26. Gaiani, M.; Apollonio, F.I.; Fantini, F. Evaluating Smartphones Color Fidelity and Metric Accuracy for the 3d Documentation of Small Artifacts. ISPRS—Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, XLII-2/W11, 539–547. [Google Scholar] [CrossRef] [Green Version]
  27. Eker, R.; Elvanoglu, N.; Ucar, Z.; Bilici, E.; Aydın, A. 3D modelling of a historic windmill: PPK-aided terrestrial photogrammetry vs smartphone app. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2022, XLIII-B2-2022, 787–792. [Google Scholar] [CrossRef]
  28. da Purificação, N.R.S.; Henrique, V.B.; Amorim, A.; Carneiro, A.; de Souza, G.H.B. Reconstruction and storage of a low-cost three-dimensional model for a cadastre of historical and artistic heritage. Int. J. Build. Pathol. Adapt. 2022; ahead-of-print. [Google Scholar] [CrossRef]
  29. Khalloufi, H.; Azough, A.; Ennahnahi, N.; Kaghat, F.Z. Low-cost terrestrial photogrammetry for 3d modeling of historic sites: A case study of the marinids’ royal necropolis city of Fez, Morocco. Mediterr. Archaeol. Archaeom. 2020, 20, 257–272. [Google Scholar] [CrossRef]
  30. Yilmazturk, F.; Gurbak, A.E. Geometric Evaluation of Mobile-Phone Camera Images for 3D Information. Int. J. Opt. 2019, 2019, 8561380. [Google Scholar] [CrossRef]
  31. Inzerillo, L. Super-Resolution Images on Mobile Smartphone Aimed at 3D Modeling. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLVI-2/W1-2022, 259–266. [Google Scholar] [CrossRef]
  32. Shih, N.-J.; Wu, Y.-C. AR-Based 3D Virtual Reconstruction of Brick Details. Remote Sens. 2022, 14, 748. [Google Scholar] [CrossRef]
  33. Shih, N.-J.; Wu, Y.-C. An AR-assisted Comparison for the Case Study of the Reconstructed Components in two Old Brick Warehouses. In Proceedings of the ISCA 34th International Conference on Computer Applications in Industry and Engineering, Online, 11–13 October 2021; Volume 79, pp. 150–158. [Google Scholar] [CrossRef]
  34. Pepe, M.; Costantino, D. Techniques, Tools, Platforms and Algorithms in Close Range Photogrammetry in Building 3D Model and 2D Representation of Objects and Complex Architectures. Comput. Aided. Des. Appl. 2020, 18, 42–65. [Google Scholar] [CrossRef]
  35. Pan, X.; Hu, Y.G.; Hou, M.L.; Zheng, X. Research on Information Acquisition and Accuracy Analysis of Ancient Architecture Plaque with Common Smart Phone. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-4/W20, 65–70. [Google Scholar] [CrossRef] [Green Version]
  36. Lewis, M.; Oswald, C. Can an Inexpensive Phone App Compare to Other Methods When It Comes to 3d Digitization of Ship Models. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W10, 107–111. [Google Scholar] [CrossRef] [Green Version]
  37. Cardaci, A.; Versaci, A.; Azzola, P. 3d Low-Cost Acquisition for the Knowledge of Cultural Heritage: The Case Study of the Bust of San Nicola Da Tolentino. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W17, 93–100. [Google Scholar] [CrossRef] [Green Version]
  38. Boboc, R.G.; Gîrbacia, F.; Postelnicu, C.C.; Gîrbacia, T. Evaluation of Using Mobile Devices for 3D Reconstruction of Cultural Heritage Artifacts. In VR Technologies in Cultural Heritage; Communications in Computer and Information Science; Duguleană, M., Carrozzino, M., Gams, M., Tanea, I., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 904, pp. 46–59. [Google Scholar] [CrossRef]
  39. Scianna, A.; La Guardia, M. 3d Virtual Ch Interactive Information Systems for a Smart Web Browsing Experience for Desktop Pcs and Mobile Devices. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-2, 1053–1059. [Google Scholar] [CrossRef] [Green Version]
  40. Shults, R. New Opportunities of Low-Cost Photogrammetry for Culture Heritage Preservation. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-5/W1, 481–486. [Google Scholar] [CrossRef] [Green Version]
  41. Shults, R.; Krelshtein, P.; Kravchenko, I.; Rogoza, O.; Kyselov, O. Low-cost Photogrammetry for Culture Heritage. In Proceedings of the 10th International Conference “Environmental Engineering”, Vilnius, Lithuania, 27–28 April 2017; VGTU Technika, Vilnius Gediminas Technical University: Vilnius, Lithuania, 2017. [Google Scholar] [CrossRef]
  42. Sirmacek, B.; Lindenbergh, R.; Wang, J. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 581–586. [Google Scholar] [CrossRef] [Green Version]
  43. Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T. Crowdsourcing Based 3D Modeling. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 587–590. [Google Scholar] [CrossRef] [Green Version]
  44. Sirmacek, B.; Lindenbergh, R. Accuracy assessment of building point clouds automatically generated from iphone images. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 547–552. [Google Scholar] [CrossRef] [Green Version]
  45. Dussel, N.; Fuchs, R.; Reske, A.W.; Neumuth, T. Automated 3D thorax model generation using handheld video-footage. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1707–1716. [Google Scholar] [CrossRef]
  46. Stark, E.; Haffner, O.; Kučera, E. Low-Cost Method for 3D Body Measurement Based on Photogrammetry Using Smartphone. Electronics 2022, 11, 1048. [Google Scholar] [CrossRef]
  47. Matuzevičius, D.; Serackis, A. Three-Dimensional Human Head Reconstruction Using Smartphone-Based Close-Range Video Photogrammetry. Appl. Sci. 2021, 12, 229. [Google Scholar] [CrossRef]
  48. Shilov, L.; Shanshin, S.; Romanov, A.; Fedotova, A.; Kurtukova, A.; Kostyuchenko, E.; Sidorov, I. Reconstruction of a 3D Human Foot Shape Model Based on a Video Stream Using Photogrammetry and Deep Neural Networks. Future Internet 2021, 13, 315. [Google Scholar] [CrossRef]
  49. Cullen, S.; Mackay, R.; Mohagheghi, A.; Du, X. The Use of Smartphone Photogrammetry to Digitise Transtibial Sockets: Optimisation of Method and Quantitative Evaluation of Suitability. Sensors 2021, 21, 8405. [Google Scholar] [CrossRef]
  50. Gurses, M.E.; Gungor, A.; Hanalioglu, S.; Yaltirik, C.K.; Postuk, H.C.; Berker, M.; Türe, U. Qlone®: A Simple Method to Create 360-Degree Photogrammetry-Based 3-Dimensional Model of Cadaveric Specimens. Oper. Neurosurg. 2021, 21, E488–E493. [Google Scholar] [CrossRef]
  51. Farook, T.H.; Bin Jamayet, N.; Asif, J.A.; Din, A.S.; Mahyuddin, M.N.; Alam, M.K. Development and virtual validation of a novel digital workflow to rehabilitate palatal defects by using smartphone-integrated stereophotogrammetry (SPINS). Sci. Rep. 2021, 11, 8469. [Google Scholar] [CrossRef]
  52. Foltynski, P.; Ciechanowska, A.; Ladyzynski, P. Wound surface area measurement methods. Biocybern. Biomed. Eng. 2021, 41, 1454–1465. [Google Scholar] [CrossRef]
  53. Bridger, C.A.; Douglass, M.J.J.; Reich, P.D.; Santos, A.M.C. Evaluation of camera settings for photogrammetric reconstruction of humanoid phantoms for EBRT bolus and HDR surface brachytherapy applications. Phys. Eng. Sci. Med. 2021, 44, 457–471. [Google Scholar] [CrossRef]
  54. Gallardo, Y.N.; Salazar-Gamarra, R.; Bohner, L.; De Oliveira, J.I.; Dib, L.L.; Sesma, N. Evaluation of the 3D error of 2 face-scanning systems: An in vitro analysis. J. Prosthet. Dent. 2021, S0022391321003681. [Google Scholar] [CrossRef] [PubMed]
  55. Pavone, C.; Abrate, A.; Altomare, S.; Vella, M.; Serretta, V.; Simonato, A.; Callieri, M. Is Kelami’s Method Still Useful in the Smartphone Era? The Virtual 3-Dimensional Reconstruction of Penile Curvature in Patients with Peyronie’s Disease: A Pilot Study. J. Sex. Med. 2021, 18, 209–214. [Google Scholar] [CrossRef] [PubMed]
  56. Matsuo, M.; Mine, Y.; Kawahara, K.; Murayama, T. Accuracy Evaluation of a Three-Dimensional Model Generated from Patient-Specific Monocular Video Data for Maxillofacial Prosthetic Rehabilitation: A Pilot Study. J. Prosthodont. 2020, 29, 712–717. [Google Scholar] [CrossRef]
  57. Trujillo-Jiménez, M.A.; Navarro, P.; Pazos, B.; Morales, L.; Ramallo, V.; Paschetta, C.; De Azevedo, S.; Ruderman, A.; Pérez, O.; Delrieux, C.; et al. body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices. J. Imaging 2020, 6, 94. [Google Scholar] [CrossRef] [PubMed]
  58. Barbero-García, I.; Lerma, J.L.; Mora-Navarro, G. Fully automatic smartphone-based photogrammetric 3D modelling of infant’s heads for cranial deformation analysis. ISPRS J. Photogramm. Remote Sens. 2020, 166, 268–277. [Google Scholar] [CrossRef]
  59. Barbero-García, I.; Lerma, J.L.; Miranda, P.; Marqués-Mateu, Á. Smartphone-based photogrammetric 3D modelling assessment by comparison with radiological medical imaging for cranial deformation analysis. Measurement 2019, 131, 372–379. [Google Scholar] [CrossRef]
  60. Barbero-García, I.; Cabrelles, M.; Lerma, J.L.; Marqués-Mateu, Á. Smartphone-based close-range photogrammetric assessment of spherical objects. Photogramm. Rec. 2018, 33, 283–299. [Google Scholar] [CrossRef] [Green Version]
  61. Hernandez, A.; Lemaire, E. A smartphone photogrammetry method for digitizing prosthetic socket interiors. Prosthetics Orthot. Int. 2017, 41, 210–214. [Google Scholar] [CrossRef]
  62. Barbero-García, I.; Lerma, J.L.; Marqués-Mateu, Á.; Miranda, P. Low-Cost Smartphone-Based Photogrammetry for the Analysis of Cranial Deformation in Infants. World Neurosurg. 2017, 102, 545–554. [Google Scholar] [CrossRef]
  63. Koban, K.C.; Leitsch, S.; Holzbach, T.; Volkmer, E.; Metz, P.M.; Giunta, R.E. 3D Bilderfassung und Analyse in der Plastischen Chirurgie mit Smartphone und Tablet: Eine Alternative zu professionellen Systemen? Handchir. Mikrochir. Plast. Chir. 2014, 46, 97–104. [Google Scholar] [CrossRef] [Green Version]
  64. Lerma, J.L.; Barbero-García, I.; Marqués-Mateu, Á.; Miranda, P. Smartphone-based video for 3D modelling: Application to infant’s cranial deformation analysis. Measurement 2018, 116, 299–306. [Google Scholar] [CrossRef] [Green Version]
  65. Ge, Y.; Chen, K.; Liu, G.; Zhang, Y.; Tang, H. A low-cost approach for the estimation of rock joint roughness using photogrammetry. Eng. Geol. 2022, 305, 106726. [Google Scholar] [CrossRef]
  66. Torkan, M.; Janiszewski, M.; Uotinen, L.; Baghbanan, A.; Rinne, M. Photogrammetric Method to Determine Physical Aperture and Roughness of a Rock Fracture. Sensors 2022, 22, 4165. [Google Scholar] [CrossRef]
  67. An, P.; Tang, H.; Li, C.; Fang, K.; Lu, S.; Zhang, J. A fast and practical method for determining particle size and shape by using smartphone photogrammetry. Measurement 2022, 193, 110943. [Google Scholar] [CrossRef]
  68. Fang, K.; An, P.; Tang, H.; Tu, J.; Jia, S.; Miao, M.; Dong, A. Application of a multi-smartphone measurement system in slope model tests. Eng. Geol. 2021, 295, 106424. [Google Scholar] [CrossRef]
  69. An, P.; Fang, K.; Jiang, Q.; Zhang, H.; Zhang, Y. Measurement of Rock Joint Surfaces by Using Smartphone Structure from Motion (SfM) Photogrammetry. Sensors 2021, 21, 922. [Google Scholar] [CrossRef]
  70. Tavani, S.; Granado, P.; Riccardi, U.; Seers, T.; Corradetti, A. Terrestrial SfM-MVS photogrammetry from smartphone sensors. Geomorphology 2020, 367, 107318. [Google Scholar] [CrossRef]
  71. Alessandri, L.; Baiocchi, V.; Del Pizzo, S.; Di Ciaccio, F.; Onori, M.; Rolfo, M.F.; Troisi, S. The Fusion of External and Internal 3d Photogrammetric Models as a Tool to Investigate the Ancient Human/Cave Interaction: The La Sassa Case Study. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2-2020, 1443–1450. [Google Scholar] [CrossRef]
  72. Dabove, P.; Grasso, N.; Piras, M. Smartphone-Based Photogrammetry for the 3D Modeling of a Geomorphological Structure. Appl. Sci. 2019, 9, 3884. [Google Scholar] [CrossRef] [Green Version]
  73. Francioni, M.; Simone, M.; Stead, D.; Sciarra, N.; Mataloni, G.; Calamita, F. A New Fast and Low-Cost Photogrammetry Method for the Engineering Characterization of Rock Slopes. Remote Sens. 2019, 11, 1267. [Google Scholar] [CrossRef] [Green Version]
  74. Saif, W.; Alshibani, A. Smartphone-Based Photogrammetry Assessment in Comparison with a Compact Camera for Construction Management Applications. Appl. Sci. 2022, 12, 1053. [Google Scholar] [CrossRef]
  75. Hansen, L.H.; Pedersen, T.M.; Kjems, E.; Wyke, S. Smartphone-Based Reality Capture for Subsurface Utilities: Experiences from Water Utility Companies in Denmark. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLVI-4/W4-2021, 25–31. [Google Scholar] [CrossRef]
  76. Fauzan, K.N.; Suwardhi, D.; Murtiyoso, A.; Gumilar, I.; Sidiq, T.P. Close-Range Photogrammetry Method for Sf6 Gas Insulated Line (Gil) Deformation Monitoring. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B2-2021, 503–510. [Google Scholar] [CrossRef]
  77. Moritani, R.; Kanai, S.; Akutsu, K.; Suda, K.; Elshafey, A.; Urushidate, N.; Nishikawa, M. Streamlining Photogrammetry-based 3D Modeling of Construction Sites using a Smartphone, Cloud Service and Best-view Guidance. In Proceedings of the International Symposium on Automation and Robotics in Construction, Kitakyushu, Japan, 26–30 October 2020; pp. 1037–1044. [Google Scholar] [CrossRef]
  78. Yu, L.; Lubineau, G. A smartphone camera and built-in gyroscope based application for non-contact yet accurate off-axis structural displacement measurements. Measurement 2021, 167, 108449. [Google Scholar] [CrossRef]
  79. Najathulla, B.C.; Deshpande, A.S.; Khandelwal, M. Smartphone camera-based micron-scale displacement measurement: Development and application in soft actuators. Instrum. Sci. Technol. 2022, 50, 616–625. [Google Scholar] [CrossRef]
  80. Tungol, Z.P.L.; Toriya, H.; Owada, N.; Kitahara, I.; Inagaki, F.; Saadat, M.; Jang, H.D.; Kawamura, Y. Model Scaling in Smartphone GNSS-Aided Photogrammetry for Fragmentation Size Distribution Estimation. Minerals 2021, 11, 1301. [Google Scholar] [CrossRef]
  81. Zhu, R.; Guo, Z.; Zhang, X. Forest 3D Reconstruction and Individual Tree Parameter Extraction Combining Close-Range Photo Enhancement and Feature Matching. Remote Sens. 2021, 13, 1633. [Google Scholar] [CrossRef]
  82. Kujawa, P. Comparison of 3D Models of an Object Placed in Two Different Media (Air and Water) Created on the Basis of Photos Obtained with a Mobile Phone Camera. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2021; Volume 684, p. 012032. [Google Scholar] [CrossRef]
  83. Van, T.N.; Le Thanh, T.; Natasa, N. Measuring propeller pitch based on photogrammetry and CAD. Manuf. Technol. 2021, 21, 706–713. [Google Scholar] [CrossRef]
  84. Zhou, K.C.; Cooke, C.; Park, J.; Qian, R.; Horstmeyer, R.; Izatt, J.A.; Farsiu, S. Mesoscopic Photogrammetry with an Unstabilized Phone Camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7535–7545. [Google Scholar]
  85. Wolf, Á.; Troll, P.; Romeder-Finger, S.; Archenti, A.; Széll, K.; Galambos, P. A Benchmark of Popular Indoor 3D Reconstruction Technologies: Comparison of ARCore and RTAB-Map. Electronics 2020, 9, 2091. [Google Scholar] [CrossRef]
  86. Hellmuth, R.; Wehner, F.; Giannakidis, A. Datasets of captured images of three different devices for photogrammetry calculation comparison and integration into a laserscan point cloud of a built environment. Data Brief 2020, 33, 106321. [Google Scholar] [CrossRef]
  87. Yang, Z.; Han, Y. A Low-Cost 3D Phenotype Measurement Method of Leafy Vegetables Using Video Recordings from Smartphones. Sensors 2020, 20, 6068. [Google Scholar] [CrossRef]
  88. Marzulli, M.I.; Raumonen, P.; Greco, R.; Persia, M.; Tartarino, P. Estimating tree stem diameters and volume from smartphone photogrammetric point clouds. For. Int. J. For. Res. 2019, 93, 411–429. [Google Scholar] [CrossRef] [Green Version]
  89. Collins, T.; Woolley, S.I.; Gehlken, E.; Ch’Ng, E. Automated Low-Cost Photogrammetric Acquisition of 3D Models from Small Form-Factor Artefacts. Electronics 2019, 8, 1441. [Google Scholar] [CrossRef]
  90. Ahmad, N.; Azri, S.; Ujang, U.; Cuétara, M.G.; Retortillo, G.M.; Salleh, S.M. Comparative Analysis of Various Camera Input for Videogrammetry. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-4/W16, 63–70. [Google Scholar] [CrossRef] [Green Version]
  91. Chaves, A.G.S.; Sanquetta, C.R.; Arce, J.E.; Dos Santos, L.O.; Moreira, I.M.; Franco, F.M. Tridimensional (3d) Modeling of Trunks and Commercial Logs of Tectona grandis L.f. Floresta 2018, 48, 225–234. [Google Scholar] [CrossRef] [Green Version]
  92. Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A.H.; Varshosaz, M. The Performance Evaluation of Multi-Image 3D Reconstruction Software with Different Sensors. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-1/W5, 515–519. [Google Scholar] [CrossRef] [Green Version]
  93. Yun, M.; Yeu, Y.; Choi, C.; Park, J. Application of Smartphone Camera Calibration for Close-Range Digital Photogrammetry. Korean J. Remote Sens. 2014, 30, 149–160. [Google Scholar] [CrossRef] [Green Version]
  94. Johary, Y.H.; Trapp, J.; Aamry, A.; Aamri, H.; Tamam, N.; Sulieman, A. The suitability of smartphone camera sensors for detecting radiation. Sci. Rep. 2021, 11, 12653. [Google Scholar] [CrossRef]
  95. Haertel, M.E.M.; Linhares, E.J.; de Melo, A.L. Smartphones for latent fingerprint processing and photography: A revolution in forensic science. WIREs Forensic Sci. 2021, 3, e1410. [Google Scholar] [CrossRef]
  96. Zancajo-Blázquez, S.; González-Aguilera, D.; Gonzalez-Jorge, H.; Hernandez-Lopez, D. An Automatic Image-Based Modelling Method Applied to Forensic Infography. PLoS ONE 2015, 10, e0118719. [Google Scholar] [CrossRef]
  97. Aldelgawy, M.; Abu-Qasmieh, I. Calibration of Smartphone’s Rear Dual Camera System. Geodesy Cartogr. 2021, 47, 162–169. [Google Scholar] [CrossRef]
  98. Ataiwe, T.N.; Hatem, I.; Al Sharaa, H.M.J. Digital Model in Close-Range Photogrammetry Using a Smartphone Camera. E3S Web Conf. 2021, 318, 04005. [Google Scholar] [CrossRef]
  99. Maalek, R.; Lichti, D.D. Automated calibration of smartphone cameras for 3D reconstruction of mechanical pipes. Photogramm. Rec. 2021, 36, 124–146. [Google Scholar] [CrossRef]
  100. Wu, D.; Chen, R.; Chen, L. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras. Sensors 2017, 17, 2645. [Google Scholar] [CrossRef] [Green Version]
  101. Akca, D.; Gruen, A. Comparative geometric and radiometric evaluation of mobile phone and still video cameras. Photogramm. Rec. 2009, 24, 217–245. [Google Scholar] [CrossRef]
  102. Massimiliano, P. Image-based methods for metric surveys of buildings using modern optical sensors and tools: From 2d approach to 3d and vice versa. Int. J. Civ. Eng. Technol. 2018, 9, 729–745. [Google Scholar]
  103. Smith, M.J.; Kokkas, N. Assessing the Photogrammetric Potential of Cameras in Portable Devices. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B5, 381–386. [Google Scholar] [CrossRef]
  104. Camera-Calibration-with-Large-Chessboards [an Open-Source Software with MIT License]. Available online: https://github.com/henrikmidtiby/camera-calibration-with-large-chessboards (accessed on 14 December 2022).
  105. Cloud Compare (Version 2.13.Alpha) GPL Software. 2022. Available online: https://www.cloudcompare.org/main.html (accessed on 14 December 2022).
Figure 1. Number of publications involving smartphone photogrammetry over the years. Only cited publications were used for this graph.
Figure 2. Sculpture used in the research. Ground control points are visible on the table underneath.
Figure 3. Stability of the IO parameters of all tested models. The deviation was calculated as the distance from the mean value.
Figure 4. Density plot of the distances between the 3D models from SPCs and the reference model.
Figure 5. Deviations between the reference model and the model from Samsung Galaxy S10 images: (a) with self-calibration, (b) with pre-calibration.
Figure 6. Deviations between the reference model and the model from Xiaomi Redmi Note 11S images: (a) with self-calibration, (b) with pre-calibration.
Table 1. Most popular research areas utilizing smartphone photogrammetry within the cited papers.

| Research Area | Research Papers | Number |
|---|---|---|
| cultural heritage | [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44] | 22 |
| medical | [45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64] | 20 |
| geomorphology, geotechnology and geology | [65,66,67,68,69,70,71,72,73] | 9 |
| industrial application | [74,75,76,77,78,79] | 6 |
Table 2. Parameters of the cameras used in the 3D model deformation studies.

| Camera | IO Stability | Pixel Resolution H × W | Pixel Size [μm] | f [mm] | Mean GSD * [mm] |
|---|---|---|---|---|---|
| Samsung Galaxy S10 | low | 2268 × 4032 | 1.5 | 4.9 | 0.3 |
| Xiaomi Redmi Note 11S | high | 3000 × 4000 | 2.1 | 6.1 | 0.35 |
| Nikon D5200 | very high | 4000 × 6000 | 4.0 | 21.1 | 0.2 |

* Ground sampling distance (GSD).
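The mean GSD values in Table 2 follow from the usual pinhole relation, GSD = pixel size × object distance / principal distance. A minimal sketch of this relation is below; the camera-to-object distance is not stated in the table and is back-computed here as an illustration, so treat it as an assumption:

```python
def gsd_mm(pixel_size_um, focal_mm, distance_mm):
    # Ground sampling distance of a pinhole camera: the object-space
    # footprint of one pixel at the given camera-to-object distance.
    return (pixel_size_um / 1000.0) * distance_mm / focal_mm

# Back-computed object distance for the Samsung Galaxy S10 row of Table 2:
# a 0.3 mm GSD with a 1.5 um pixel and f = 4.9 mm implies a range of
# roughly 0.98 m (hypothetical; the actual survey distance is not given).
distance_mm = 0.3 * 4.9 / (1.5 / 1000.0)
```

For the Nikon D5200 row the same relation gives a comparable range, consistent with all cameras imaging the same small sculpture.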
Table 3. Stability ranking for the tested SPCs.

| Model | Production Year | MAD f [pix] | MAD x0 [pix] | MAD y0 [pix] | Points (f/x0/y0) | Ranking |
|---|---|---|---|---|---|---|
| Xiaomi Redmi Note 11S | 2022 | 1.94 | 0.77 | 0.96 | 2/1/1 | 1 |
| Motorola Moto G31 | 2021 | 1.91 | 3.11 | 1.33 | 1/10/3 | 2 |
| Xiaomi M2003J15SG | 2020 | 4.20 | 1.66 | 3.00 | 4/2/11 | 3 |
| Huawei P30 Lite | 2019 | 5.03 | 2.23 | 1.41 | 6/7/4 | 4 |
| Xiaomi Redmi Note 8 Pro | 2019 | 4.82 | 1.83 | 2.05 | 5/5/8 | 5 |
| OPPO A72 | 2020 | 2.40 | 2.31 | 2.08 | 3/8/9 | 6 |
| Realme 7i RMX2193 | 2020 | 41.33 | 1.83 | 1.28 | 14/4/2 | 7 |
| Samsung Galaxy M51 | 2021 | 39.49 | 1.87 | 1.88 | 13/6/5 | 8 |
| Xiaomi Redmi Note 7 | 2019 | 7.80 | 3.61 | 1.90 | 8/11/6 | 9 |
| Motorola One Action | 2019 | 5.86 | 3.91 | 1.94 | 7/12/7 | 10 |
| Samsung Galaxy S9 | 2018 | 33.76 | 1.73 | 3.16 | 12/3/12 | 11 |
| Xiaomi Redmi Note 10s | 2021 | 10.69 | 2.70 | 2.37 | 10/9/10 | 12 |
| Lenovo K53a48 | 2017 | 9.91 | 4.18 | 3.45 | 9/13/13 | 13 |
| Samsung Galaxy S10 | 2019 | 12.51 | 17.15 | 7.26 | 11/14/14 | 14 |
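The per-parameter MAD values and the point-based ranking of Table 3 can be sketched in a few lines. This is an illustrative reconstruction only: the calibration values below are hypothetical, the MAD definition (median absolute deviation from the median) and the tie-handling are assumptions, and Figure 3 suggests the authors may have measured deviations from the mean instead:

```python
import numpy as np

def mad(values):
    # Median absolute deviation of repeated estimates of one IO
    # parameter, in pixels (assumed definition).
    values = np.asarray(values, dtype=float)
    return float(np.median(np.abs(values - np.median(values))))

def points(mads_per_camera):
    # Rank each parameter column separately: the camera with the lowest
    # MAD gets 1 point, the next gets 2, and so on.
    a = np.asarray(mads_per_camera, dtype=float)
    return np.argsort(np.argsort(a, axis=0), axis=0) + 1

# Hypothetical repeated calibrations of one camera: columns are the
# principal distance f and the principal point x0, y0, all in pixels.
calibrations = np.array([
    [3050.1, 2015.3, 1498.7],
    [3052.4, 2014.9, 1500.2],
    [3049.8, 2016.0, 1499.5],
    [3051.0, 2015.1, 1499.0],
])
per_param_mad = [mad(calibrations[:, i]) for i in range(3)]

# Per-parameter points for two cameras, using the MAD values reported in
# Table 3 for the Xiaomi Redmi Note 11S and the Realme 7i RMX2193.
pts = points([[1.94, 0.77, 0.96], [41.33, 1.83, 1.28]])
```

The final ranking then orders the cameras by the sum of their per-parameter points, as in the last column of Table 3.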
Table 4. Statistical measures characterizing the errors of 3D models relative to the reference model.

| SPC | Method | Mean d [mm] | Std d [mm] | Median d [mm] |
|---|---|---|---|---|
| Samsung Galaxy S10 | Pre-calibration | 0.65 | 0.55 | 0.54 |
| Samsung Galaxy S10 | Self-calibration | 0.99 | 0.65 | 0.90 |
| Xiaomi Redmi Note 11S | Pre-calibration | 0.48 | 0.41 | 0.38 |
| Xiaomi Redmi Note 11S | Self-calibration | 0.70 | 0.54 | 0.54 |
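The statistics in Table 4 summarize point-to-reference distances such as those produced by CloudCompare [105]. A minimal sketch of how such statistics can be computed is below, using a brute-force nearest-neighbour search on hypothetical toy clouds; CloudCompare's actual octree-accelerated, mesh-aware distance computation differs:

```python
import numpy as np

def cloud_to_reference_distances(cloud, reference):
    # For every point of `cloud`, the Euclidean distance to its nearest
    # neighbour in `reference`. Brute force for clarity only; real
    # models contain millions of points and need a spatial index.
    diffs = cloud[:, None, :] - reference[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# Hypothetical toy clouds in millimetres.
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
model = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0]])

d = cloud_to_reference_distances(model, reference)
stats = {"mean": d.mean(), "std": d.std(), "median": float(np.median(d))}
```

Comparing such mean/std/median triples across the pre-calibration and self-calibration variants is exactly what Table 4 reports.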
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Jasińska, A.; Pyka, K.; Pastucha, E.; Midtiby, H.S. A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry. Sensors 2023, 23, 728. https://doi.org/10.3390/s23020728

