Article

Comparative Evaluation of the Performance of a Mobile Device Camera and a Full-Frame Mirrorless Camera in Close-Range Photogrammetry Applications

School of Rural & Surveying Engineering, Aristotle University of Thessaloniki, University Box 439, GR-54 124 Thessaloniki, Greece
Sensors 2024, 24(15), 4925; https://doi.org/10.3390/s24154925
Submission received: 29 June 2024 / Revised: 23 July 2024 / Accepted: 27 July 2024 / Published: 30 July 2024
(This article belongs to the Special Issue Optical Instruments and Sensors and Their Applications)

Abstract

The comparative evaluation of the performance of a mobile device camera and an affordable full-frame mirrorless camera in close-range photogrammetry applications involves assessing the capabilities of these two types of cameras in capturing images for 3D measurement purposes. In this study, experiments are conducted to compare the distortion levels, the accuracy performance, and the image quality of a mobile device camera against a full-frame mirrorless camera when used in close-range photogrammetry applications in various settings. Analytical methodologies and specialized digital tools are used to evaluate the results. In the end, generalized conclusions are drawn for using each technology in close-range photogrammetry applications.

1. Introduction

Photogrammetry primarily concerns precise three-dimensional measurements of objects and terrain features from photographs. It can be used in fields such as topographic mapping [1], architecture [2], archaeology [3], filmmaking [4], engineering [5], manufacturing [6], quality control [7], police investigation [8], cultural heritage [9], underwater site mapping [10], and geology [11]. Photogrammetry has numerous advantages over other surveying methods, such as accuracy, low cost, and speed. In photogrammetry, the selection of cameras depends on project requirements and budget. The most crucial requirement is the accuracy of the results, which depends on the scale of the final mapping product.
Full-frame mirrorless cameras are among the most advanced still cameras available on the market today. Full-frame cameras are equipped with an image sensor that is the size of a classic 35 mm analog film frame. The primary difference between a full-frame and a crop sensor is the physical size of the sensor; crop sensors are smaller and vary in dimensions. Mirrorless cameras offer several advantages over Digital Single-Lens Reflex (DSLR) cameras, including a more compact size and faster shooting speeds. Some of the most sophisticated still cameras have a 5-axis, sensor-based image stabilization system that can move or rotate the image sensor along five different axes to counteract camera shake (see Figure 1).
Smartphones are designed to be convenient, easy to use, time-saving, and mostly, if not completely, automated. In addition, they are equipped with cameras that capture high-resolution photos of acceptable quality. The main difference between phone cameras and still cameras is the physical size of the image sensor. The number of megapixels on a phone camera may be higher than that of a still camera; however, the sensor’s physical dimensions are smaller. This means that each pixel on a full-frame sensor is bigger and can capture more light, resulting in higher-quality images with less noise at higher ISO values. A mobile phone lens is typically fixed and wide-angle, with no optical zoom. Additionally, the lenses on smartphones are tiny. Therefore, the quality of the photos does not match that of photos taken with the high-quality glass lens of a good still camera.
Digital camera technology has reached the 61.0 MP level [13], with a purchase price of about 3.7 K. It could be assumed that this would be the ideal equipment for a photogrammetry application. However, cost is a significant factor, motivating the search for camera alternatives, possibly with lower capabilities but adequate performance. On the other hand, given the increased capabilities of smartphones and their additional features (GNSS, inertial sensors, magnetometers), which enable their use in more advanced and complex geomatics applications [14,15,16], the question arises whether they can be used in photogrammetry applications.
Different approaches are available in the literature regarding the evaluation and suitability of mirrorless cameras and smartphone technology for photogrammetry applications. In [17], two still cameras, a mirrorless and a DSLR, are compared to investigate the usability of mirrorless cameras for terrestrial photogrammetry applications. The accuracies of the 3D models and the cross sections created are compared, and it is concluded that mirrorless cameras, and point clouds produced from their photographs, can be used for terrestrial photogrammetry projects. In [18], the experimental results show that the geometric and texture data quality of smartphone-based photogrammetry equals or even exceeds that of DSLR-based photogrammetry. The study in [19] indicates that smartphones can be utilized directly to acquire on-site photogrammetric data for 3D modeling and measurement extraction in construction management applications. All the above studies note that further research is needed to strengthen their conclusions.
The scope of this paper is the comparative evaluation of the performance of a mobile device camera and an affordable full-frame mirrorless camera with respect to the distortion levels, accuracy, and image quality that can be achieved in close-range photogrammetry applications. Ultimately, the aim is to conclude whether these technologies can be used in photogrammetry and to determine which type of camera is better suited to specific tasks. The experimental results and conclusions of this study will contribute to research on the potential of these technologies.
Regarding the structure of the work, after this introductory section, the methods used to evaluate and compare the cameras are discussed. Next, the experiment results for the cameras’ evaluation and comparison concerning the distortion levels, accuracy performance, and image quality are presented. This is followed by a discussion concerning the analysis and interpretation of the results, as well as conclusions drawn from the present study.

2. Methods

To evaluate and compare the two cameras and determine whether they are suitable for photogrammetry applications, the lens-induced image distortions must be studied, along with the accuracy with which point coordinates can be calculated photogrammetrically and the quality of the photos from both cameras.
Methodologies and digital tools for camera calibration and for the evaluation of the results have been developed and implemented in the Surveyor-Photogrammetry software version 6.0 [20]. In this tool collection, two methods are available, an OpenCV function [21] and the photogrammetric bundle adjustment with additional parameters [22], which use the same camera model [23]. A method that utilizes single-image rectification [24] is employed to evaluate the calibration parameters. The camera calibration results are evaluated by an automated process that includes quality checks at various levels of detail. The evaluation includes the “Rect” indicator describing the overall quality, followed by charts and digital images showing the effect of the calibration results on an evaluation image at the control point level. Regarding the number and the shooting angles of the photos used in the calibration, the standard proposed in that work [20] can be followed.
To directly compare the distortions of the two cameras, a technique must be found that accounts for the different dimensions of their imaging sensors. To this end, the concept of normalized pixels is used in this work: each value in pixels is divided by the maximum number of pixels of a reference length, per sensor, so that this reference length takes the value 1 in both cases. In this way, the distortions can be plotted in common diagrams, and comparable results can be obtained.
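The normalization itself is a one-line operation; the following minimal Python sketch illustrates it, assuming the reference length per sensor is the longest evaluation segment expressed in pixels (the numbers in the example call are illustrative only and are not taken from the tables below).

```python
def normalize(values_px, reference_px):
    """Scale pixel values/distances so the reference length maps to 1.0."""
    return [v / reference_px for v in values_px]

# Illustrative: distortions of 5, 17 and 21 px on a sensor whose longest
# evaluation segment is assumed to be about 3606 px (half-diagonal of 6000 x 4000).
print(normalize([5.0, 17.0, 21.0], 3606.0))
```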
To estimate the accuracy of the photogrammetric calculation of coordinates with the cameras under consideration, the photogrammetric bundle adjustment method can be used. The method can treat the coordinates of the checkpoints, the internal orientation, and the distortion coefficients as unknowns, thus helping establish whether these parameters affect the coordinate results in a real-world application.
For the quality check and comparison of photos acquired from the two cameras, a series of images must be taken in which printed characters and symbols are depicted in varied sizes. Whether the characters and symbols appear recognizable, sharp, or distorted in the images reveals the difference in quality between the two cameras.
Low cost was a criterion in selecting the equipment to be compared; in both cases the purchase price was chosen to be under 1 K EUR. The smartphone camera was selected to have a higher image sensor resolution than the still camera. Moreover, the still camera should be full-frame, mirrorless, and have a lens that can be adjusted to multiple focal lengths. The two devices selected are typical of their categories. The smartphone chosen was the Samsung Galaxy A52s 5G [25], and the still camera, the Sony α7 II [12]. The technical specifications of both cameras are presented in Table 1.
For the smartphone, the highest resolution of the image sensor (64 MP) is used, while the focal length is fixed at 5 mm. The Sony α7 II is an image-stabilized, full-frame, mirrorless camera. Likewise, the highest available resolution (24.3 MP) is used here too, together with four distinct nominal focal lengths of the zoom lens, i.e., 28, 34, 50, and 70 mm. This defines several cases that need to be considered. These combinations and their output field of view along the sensor’s horizontal and vertical axes are given in Table 2.
It is evident from Table 2 that the Sony-28 mm case is closest to the smartphone’s characteristics. In the other cases, for the results to be comparable, the shooting distance must be varied to cover the same surface area on the object.

3. Camera Evaluation and Comparison

3.1. Image Distortion Caused by Camera Lens

Image distortion is a common issue in photography, caused by the camera lens producing curved lines where straight lines should be [26]. The two most common types of lens distortion are radial and tangential distortion. Distortion is the result of the lens’s geometry and can significantly degrade the image’s quality. The better the quality of a camera’s lens, the smaller the distortions introduced into the image. Especially in photogrammetry, limited distortion is an important factor in obtaining reliable measurement information from images. Alternatively, it is possible to correct the distortions using mathematical models. In any case, the absence of distortion is preferable to relying on these mathematical models, which often describe reality only approximately and yield less precise results. In photogrammetry, lens distortions are calculated through camera calibration, which involves photographing a control field. In this case, a checkerboard with 1813 (49 × 37) control points was utilized, as shown in Figure 2.
The calibration was performed using the Surveyor-Photogrammetry software version 6.0 [20], which, in addition to estimating the calibration parameters, also evaluates the results. Two methods were used to determine the magnitude and variation of the distortions in the cameras under review: an OpenCV function and the bundle adjustment method with additional parameters. For the smartphone, five photos were used for the calibration, while for the still camera, seven. This choice was made because the still camera has a narrower field of view than the smartphone camera and requires more photos to cover the entire image sensor centrally. This way, the same checkerboard was used to calibrate both cameras.
An OpenCV function [27] was used to perform the camera calibration, which returns the intrinsic matrix (fx, fy, cx, cy) and the distortion coefficient matrix (k1, k2, k3, p1, p2). The accuracy of the process is described by the total re-projection error, i.e., the Euclidean distance between the points re-projected using the estimated intrinsic and extrinsic matrices and the measured image coordinates of the checkerboard corners. The smaller this error, the better the accuracy of the calculated parameters. The re-projection errors for all experiments are given in Table 3.
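For reference, a minimal Python sketch of such an OpenCV-based calibration is given below. The 49 × 37 inner-corner grid matches the checkerboard of Figure 2, while the file pattern, the unit square size, and the corner-refinement settings are assumptions of this sketch rather than details of the study.

```python
import glob
import cv2
import numpy as np

pattern = (49, 37)  # inner corners of the checkerboard (1813 points, Figure 2)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # unit squares assumed

obj_points, img_points = [], []
for fname in glob.glob("calib_*.jpg"):  # hypothetical calibration image set
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# rms is the overall re-projection error of the kind reported in Table 3;
# mtx holds fx, fy, cx, cy and dist holds the distortion coefficients in
# OpenCV's order (k1, k2, p1, p2, k3).
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(rms, mtx, dist)
```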
The smartphone has a slightly larger re-projection error, which indicates a poorer fit of the camera calibration parameters. For the still camera, the fit generally worsens as the focal length increases.
The intrinsic matrix and the distortion coefficient matrix, as they result from the calibration, are presented in Table 4 and Table 5, respectively. In the intrinsic matrix, the focal length (fx, fy), the coordinates of the primary point with respect to the upper left corner of the image (cx, cy), the estimated aspect ratio AspectRatio (fy/fx), the nominal focal length f, the calibrated focal length c in pixels and mm, and the coordinates of the primary point (xo, yo) with respect to the center of the image in pixels and mm are tabulated.
In Table 4, the AspectRatio is approximately 1.000 in all cases, and the calibrated focal length c deviates from the nominal focal length f by 1.085 mm to 5.341 mm for the still camera and by 0.475 mm for the smartphone. In percentage terms, the still camera’s focal length differs from the nominal value by 3.9% to 8.2%, while the smartphone’s differs by 9.5%. In any case, the results are realistic.
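As a cross-check of these values, the calibrated focal length in millimetres follows directly from the focal length in pixels (Table 4) and the pixel size (Table 1); for the smartphone, for example:

```latex
c = f_x \cdot s_{\mathrm{pix}} = 6844.1\ \mathrm{px} \times 0.000800\ \mathrm{mm/px} \approx 5.475\ \mathrm{mm},
\qquad
\Delta c = 5.475 - 5 = 0.475\ \mathrm{mm} \approx 9.5\%\ \text{of the nominal 5 mm}.
```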
The coefficients k1, k2, k3, p1, and p2 (see Table 5) obtained from the calibration process can be applied to the distortion model equations to generate distortion diagrams. This information concerns the total distortion along pre-defined evaluation segments starting from the primary point and ending at the image’s corners and mid-sides, according to the guide in Figure 3.
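A hedged Python sketch of this evaluation is given below; it applies the Brown/OpenCV distortion model to normalized image coordinates and returns the displacement in pixels. The sampling of the diagonal and the variable names are assumptions of this sketch and do not reproduce the routine of the software in [20].

```python
import numpy as np

def total_distortion(x, y, fx, fy, k1, k2, k3, p1, p2):
    """Distortion magnitude (pixels) at normalized coordinates (x, y),
    measured relative to the primary point."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.hypot((x_d - x) * fx, (y_d - y) * fy)

# Example: a profile along the 7-0-3 diagonal using the Sony-28 mm values of
# Tables 4 and 5 (the diagonal direction chosen here is illustrative).
t = np.linspace(-1.0, 1.0, 41)
profile = total_distortion(t * 0.6, t * 0.4, 4874.5, 4912.4,
                           -0.02604127, 0.09946586, -0.10377144,
                           -0.00171816, -0.00335183)
```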
The visualization of the total distortion along the evaluation segments 1-0-5, 8-0-4, 7-0-3, and 6-0-2, as a function of the distance from the primary point, is shown in Figure 4 and Figure 5 for the Sony camera and the Samsung smartphone, respectively.
In the Sony camera’s case, Figure 4 shows a smooth variation in distortion relative to the primary point, at least in the central part of the graphs. No systematic symmetry can be discerned except for the 34 mm focal length, where exceptionally the distortion results show a form of symmetry.
The total distortion in the Samsung smartphone camera (see Figure 5) shows sharp changes, with the most significant deviations occurring at the ends of the diagonal evaluation segments 7-0-3 and 1-0-5. The form of the chart is symmetric with respect to the primary point.
The surface of the sensor was divided into three sectors to generalize the conclusions on the distortion present in different sensor areas (see Figure 6). Sector 1 is the circular sector centered at the primary point with a radius equal to the sensor’s height. Sector 2 is defined by the intersection of sector 1 and another circular sector with a radius equal to the sensor’s width. Finally, sector 3 is defined as the intersection of the circular sector with a radius equal to the sensor’s diagonal and that with a radius equal to its width.
For these sectors, and by examining Figure 4 and Figure 5, the maximum distortions that occurred for the Sony camera and the Samsung smartphone in each sector are displayed in Figure 7.
Figure 7 shows that, for the Sony camera, sector 1 has the lowest distortions, between 5 and 11 pixels; in sector 2, the distortion ranges between 17 and 28 pixels, and in sector 3, between 21 and 40 pixels. The best case is the 34 mm lens, where the distortions are limited to 5, 17, and 21 pixels for sectors 1, 2, and 3, respectively. For the smartphone’s camera, the maximum distortion observed is 41 pixels in sector 1, 43 in sector 2, and 86 in sector 3.
To enable a direct comparison of the distortions of the two cameras and their lens combinations, all distortion measurements were normalized to the sensor size. All distortion values and distances from the primary point were divided by the number of pixels along the longest evaluation segment, per camera. The resulting values are shown in Figure 8 for all cameras along the 7-0-3 diagonal evaluation segment, and at the sector level in Figure 9.
Figure 8 shows that the smartphone camera exhibits a more complex form of distortion that increases more steeply than that of the Sony camera, whereas the Sony camera presents smoother gradients with a characteristic asymmetry about the primary point. In terms of the normalized total distortion due to the lenses, the smartphone camera displays larger values (see Figure 9), while in the Sony camera the total distortion is smaller except at the edges of the images, where noticeable distortion also appears.
The alternative for camera calibration is the bundle adjustment method with additional parameters. The method provides more statistics for evaluating the outcomes than the OpenCV function. The general elements of the bundle adjustment solution include the number of photos, control points used, observations, additional parameters, and degrees of freedom (see Table 6).
In Table 6, the eight additional parameters are the variables c, xo, yo, k1, k2, k3, p1, and p2. The degrees of freedom are large in both camera calibrations and differ from each other due to the different number of photos used in each case.
In the analytical solution of the bundle adjustment with additional parameters, the standard deviations of the unknown parameters indicate the accuracy achieved in the results. The sigma, the estimates of the main elements of the internal orientation, and their standard deviations are displayed in Table 7.
In Table 7, the sigma is at approximately the same level for all cameras, and the standard deviations of the estimates of the focal length and the coordinates of the primary point are low (1.59–5.82 pixels), except in the case of the Sony with the 70 mm focal length, where the standard deviations are relatively larger (7.82–11.02 pixels). This order of magnitude of the standard deviations indicates that the camera calibration results are reliable.
The results of the single-image photogrammetric rectification of a photo of the checkerboard are used to evaluate the camera calibration parameters [20]. The process is performed twice, using the distorted and the undistorted image. The analytical differences between the accuracies in the two cases show the performance of the calibration at the control point level. A positive value means that the calibration parameters have a positive effect, and a negative value means that the calibration parameters do not work effectively. For the visual evaluation of the results, the improvement or worsening is illustrated by a color gradation (see Table 8). Table 9 presents the evaluation images from the application of the internal orientation elements, as derived separately from the OpenCV function and the bundle adjustment method.
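The following Python sketch conveys the idea of this evaluation under simplifying assumptions (a planar control field and a plane-to-plane rectification via a homography); it is an illustration, not the actual routine of the Surveyor-Photogrammetry software.

```python
import cv2
import numpy as np

def rect_residuals(img_pts, ground_pts):
    """Per-point residuals (ground units) of a plane-to-plane rectification."""
    H, _ = cv2.findHomography(img_pts, ground_pts)
    proj = cv2.perspectiveTransform(img_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    return np.linalg.norm(proj - ground_pts, axis=1)

def calibration_effect(img_pts, ground_pts, mtx, dist):
    """Positive entries mean the calibration improves that control point."""
    img_pts = np.asarray(img_pts, dtype=np.float32)
    ground_pts = np.asarray(ground_pts, dtype=np.float32)
    undist = cv2.undistortPoints(img_pts.reshape(-1, 1, 2), mtx, dist, P=mtx)
    res_raw = rect_residuals(img_pts, ground_pts)                 # distorted measurements
    res_cor = rect_residuals(undist.reshape(-1, 2), ground_pts)   # undistorted measurements
    return res_raw - res_cor
```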
The evaluation images in Table 9 show that, for the Sony camera, the result differs with the focal length: the 28, 50, and 70 mm cases follow a similar pattern, whereas the 34 mm case takes a different form. For the Samsung smartphone, there is an evident improvement over most of the image surface, forming a concentric, symmetrical shape. Applying the distortion coefficients improves the results overall, although certain cores remain where the results worsen, as in Sony’s 28, 34, and 70 mm cases.
The overall percentage difference between the distorted and undistorted rectifications, called the “Rect” indicator [20], indicates the overall performance of the calibration parameters. The results for each camera are detailed in Table 10.
Using the distortion coefficients to correct the images results in a significant improvement in all cases, see Table 10, with the smartphone camera showing the highest performance rate. The OpenCV method is more effective than the bundle adjustment method in every case, verifying and extending the findings of the study [20]. The results of the photogrammetric bundle adjustment with additional parameters have the same trends but smaller percentages. The same conclusions emerge, in greater detail, from the evaluation images in Table 9 by studying the spatial distribution of the differences.

3.2. Accuracy of Geometrical Measurements Extraction

A series of images were taken from each camera to determine the accuracy of extracting geometric measurements, which can be achieved through photogrammetry using the cameras in question. Three converged photographic shots were taken from the smartphone and the still camera for the nominal focal lengths of 28, 34, 50, and 70 mm. Figure 10 shows an example of three photographs depicting the test field taken by the Sony-34 mm camera.
For photogrammetric processing to be carried out, points with known ground coordinates must be depicted in the photographs. These points are used as control points or checkpoints to determine the accuracy achieved in actual three-dimensional measurements. For this purpose, 21 points with known coordinates were used, of which 5 served as checkpoints and the rest as control points. These points are characteristic details on the face of the building (see Figure 11 for an example). To document the control points in the field survey, each point code was noted on printed close-up photographs of the object. This allows the points to be identified in the photographs during post-processing and coordinates to be assigned to them. The coordinates of the points were measured and calculated by topographic methods [28], which ensure accuracy and reliability [29]. The points were measured with a total station, in reflectorless mode, from the same location. With this technique, and given the short measurement distances, high precision in the calculated coordinates is ensured, estimated at 2 mm.
It was considered appropriate to transform the coordinates of the control points and checkpoints into a coordinate system aligned with the building’s facade. The X-axis is parallel to the building facade, the Y-axis is vertical, and the Z-axis is perpendicular to the XY plane (see Figure 12).
The coordinate transformation was performed in the Surveyor-Photogrammetry software version 6.0 (see Figure 13). For the transformation, it is sufficient to define the direction of the building face with at least two points, one for the origin (Origin and Start) and one for the definition of the direction (Alignment).
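A minimal Python sketch of the underlying transformation is shown below, assuming the survey system’s height axis is already vertical and that only a planimetric rotation about the Origin point is needed; it illustrates the idea rather than the software’s own routine.

```python
import numpy as np

def to_facade_system(points_enh, origin_enh, alignment_enh):
    """Transform E, N, H survey coordinates to a facade system where
    X runs along the facade, Y is the vertical (height) axis and
    Z is the depth perpendicular to the facade plane."""
    pts = np.asarray(points_enh, dtype=float)
    origin = np.asarray(origin_enh, dtype=float)
    d = np.asarray(alignment_enh, dtype=float)[:2] - origin[:2]
    theta = np.arctan2(d[1], d[0])               # direction of the facade in the E-N plane
    c, s = np.cos(theta), np.sin(theta)
    shifted = pts - origin
    x = c * shifted[:, 0] + s * shifted[:, 1]    # along the facade
    z = -s * shifted[:, 0] + c * shifted[:, 1]   # depth, perpendicular to the facade
    y = shifted[:, 2]                            # height remains vertical
    return np.column_stack([x, y, z])
```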
The ground coordinates of the control points and checkpoints, with respect to the new coordinate system, are listed in Table 11. The Z coordinates range from −0.015 m to 3.039 m, ensuring a satisfactory depth range, which theoretically ensures reliable values for the external orientation of the photographs and, in turn, reliable photogrammetric intersections.
Using this custom coordinate system, the accuracy of the checkpoint calculation in actual 3D measurements can be estimated more meaningfully, especially since the Z-axis is approximately aligned with the cameras’ optical axes. The accuracy along the Z-axis is critical in photogrammetry, as measurements are most sensitive along this direction.
To test the accuracy that can be achieved in the calculation of coordinates on the object, a photogrammetric survey of a building face was conducted. Specifically, the photogrammetric bundle adjustment method with additional parameters was used in the Surveyor-Photogrammetry environment (see an instance in Figure 14). The method accepts as input the image and ground coordinates of the control points and estimates the coordinates of the checkpoints. Optionally, the internal orientation elements, i.e., the focal length, the primary point coordinates, and the distortion coefficients, can also be estimated.
The bundle adjustment method was performed for the still camera and the smartphone. Two solutions were extracted: in the first, the focal length and the coordinates of the primary point were included as unknown parameters, while in the second, the distortion parameters k1, k2, k3, p1, and p2 were added as unknowns to the first case. The results of the two cases are shown in Table 12 and Table 13, respectively.
The differences in the checkpoint coordinates calculated using the two mapping methods, i.e., photogrammetry and topographic surveying, provide an estimate of the accuracy achieved. It is evident from Table 12 and Table 13 that the still camera outperforms the smartphone camera at all focal lengths. The accuracy differences between the two cameras are 10–17 mm. These accuracies determine the scales of the mapping products that can be produced photogrammetrically with this equipment. Furthermore, accuracy improves significantly when the distortion coefficients are considered.
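A short Python sketch of how such checkpoint statistics can be summarized is given below, assuming two arrays of checkpoint coordinates in metres, one estimated photogrammetrically and one measured with the total station; the function name is illustrative.

```python
import numpy as np

def checkpoint_errors(photo_xyz, survey_xyz):
    """Average Z, 2D (XY) and 3D checkpoint errors with standard deviations, in mm."""
    d = np.asarray(photo_xyz, dtype=float) - np.asarray(survey_xyz, dtype=float)
    z_err = np.abs(d[:, 2])
    err_2d = np.hypot(d[:, 0], d[:, 1])
    err_3d = np.linalg.norm(d, axis=1)
    to_mm = lambda e: (1000.0 * e.mean(), 1000.0 * e.std())  # (average, st. dev.)
    return {"Z": to_mm(z_err), "2D": to_mm(err_2d), "3D": to_mm(err_3d)}
```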

3.3. Image Quality Test between the Smartphone and the Still Camera

To compare the quality of the images from the smartphone camera and the still camera, a series of photographs was taken, where a board with printed alphanumeric characters and special symbols, in varying sizes, was photographed, see Figure 15.
The experiment was divided into three sections, in each of which a specific horizontal width on the surface of a building is covered photographically, regardless of the shooting distance. These three sections were named frame 1, frame 2, and frame 3. For the still camera, the nominal focal lengths of 28, 34, 50, and 70 mm were used, while for the smartphone camera, the only available focal length of 5 mm was used. Therefore, each frame comprises five photos, one from the smartphone camera and four from the still camera. As an example, the three photo frames taken by the Samsung smartphone are shown in Figure 16.
During the experiment, all the photographs were taken within a brief period. In this way, the sun’s position, which affects the lighting, did not play a significant role in the quality of the photos. For the same reason, the humidity, pressure, and atmospheric composition remained the same during the photography and did not affect the quality of the photos. The same horizontal coverage of the object in the frame photos was achieved by using as a guide the distinctive vertical lines on the facade of the building where the quality board was placed (see Figure 16). At the same time, the camera was placed, each time, along a line perpendicular to the facade of the building and passing through the quality board. This way, the same horizontal coverage of the object and the same shooting angles were ensured.
Image quality is a subjective concept that depends on the purpose for which the image will be utilized and the viewer’s preferences. However, some quantitative metrics [30] can be used to measure and compare image quality, such as sharpness, contrast, noise, color accuracy, and dynamic range. Photogrammetry is mainly interested in identifying the edges of shapes in images. In this case, it was chosen to photograph alphanumeric characters and symbols in each frame. Whether these appear recognizable, distorted, or sharp provides a clear picture of each camera’s image quality. In this way, the performance of the camera lens at different nominal focal lengths is also studied.
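The comparison in this study is visual; as a complementary, hedged sketch of the quantitative metrics mentioned above, a simple sharpness proxy (variance of the Laplacian) and an RMS contrast value could be computed on the same crops. This is not the evaluation procedure of the paper, only an illustration, and the file names are hypothetical.

```python
import cv2
import numpy as np

def sharpness_and_contrast(path):
    """Return (variance of Laplacian, RMS contrast) for a grayscale crop."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher = crisper edges
    contrast = gray.std()                              # RMS contrast
    return sharpness, contrast

# e.g. compare sharpness_and_contrast("sony_frame3_crop.png") with the
# corresponding smartphone crop.
```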
Selected parts of the photos where the board is depicted have been cropped from the frames and are presented in Table 14, Table 15 and Table 16.
The photos of the alphanumeric characters and symbols show that between the different nominal focal lengths of 28, 34, 50, and 70 mm, there are no significant differences in the quality of the images. This means that the performance of the lens used by the Sony still camera is satisfactory.
Comparing the photos from the Sony camera with those from the Samsung smartphone in Table 14, Table 15 and Table 16, there is a qualitative difference in the recognition and sharpness of the depicted characters and symbols, especially as the shooting distance increases. The results for the Sony are slightly better, but in no case are the differences large. A slightly larger difference in image quality is observed in frame 3, which corresponds to the longest shooting distance of the test.
At longer shooting distances, no comparison can be made, as the smartphone cannot use a longer focal length lens. Its digital zoom capability, on the one hand, cannot be used in photogrammetry, as it is not a central projection, and on the other hand, offers no practical benefit, as it simply magnifies the original information from the imaging sensor without adding new information. The width × height resolution of the still camera sensor is 6000 × 4000 pixels, i.e., an aspect ratio of 1.5:1, while that of the smartphone camera is 9248 × 6936 pixels, i.e., 1.33:1. Since the same horizontal area is photographed, the ratio of horizontal pixels between the still camera and the smartphone camera is 6000:9248, i.e., about 0.65, in favor of the smartphone sensor.
Figure 17 is an additional sample related to the comparison of the two cameras’ image quality, depicting a construction detail on the face of a building.
Figure 17 shows differences between the two photographs, yet the outline and boundaries of the construction detail are evident in both cases.
Some practical issues encountered and dealt with during photography, which may affect the quality of the images, are worth mentioning.
The relative movement of the imaging sensor with respect to the camera body creates issues in photogrammetric procedures, where the internal orientation geometry is assumed to remain constant in all shots. For this reason, the automatic image stabilization function of the Sony camera was disabled. To determine the effects of this setting, test shots were taken without and with a photo tripod, and the results are shown in Figure 18 and Figure 19, respectively.
With the automatic image stabilization function of the camera disabled, especially in cases of poor lighting where the shutter speed is slow, the photographs come out “shaken” (see Figure 18). When a photo tripod is used and the shot is also triggered with the time-delay (self-timer) function, the results are very satisfactory (see Figure 19). In the case of the smartphone, the problem of “shaken” photos did not occur.
Another issue noticed is that, while both cameras’ shooting parameters were set to AUTO, the mobile phone camera produced brighter photos than the professional still camera. This would have been different if the capture parameters had been set manually.

4. Discussion and Conclusions

The full-frame mirrorless camera has smoother distortion than the smartphone camera. Specifically, in a circular sector centered at the primary point, with a radius equal to the sensor’s height, the distortion was low; with the 34 mm lens it was limited to 5 pixels, and in this sector it did not exceed 11 pixels regardless of the focal length. Outside this sector, significant distortions, of the order of 40 pixels, were observed, particularly at the edges of the image. If the project requires high precision, the measurements should be limited to this area and the edges of the image should be avoided. Also, in the general case, there was no systematic symmetry in the distortions around the primary point, indicating significant tangential distortions. In the smartphone, the distortions are more significant and change sharply. The correction of distortions brings a 41.9% improvement in the image, compared to 15.9–39.1% for the still camera. In addition, the distortion correction performance over the image surface appears smoother and more symmetrical compared to the still camera.
In terms of the accuracy of coordinate calculation in actual 3D measurements, it was found that the full-frame mirrorless camera achieves about four times more accurate results than the smartphone. Specifically, 6 mm against 23 mm for the smartphone’s camera when the distortions were not considered, corresponding to cartographic products at scales of 1:30 and 1:115. When distortions are considered, the accuracy is 4 mm for the full-frame mirrorless camera against 14 mm for the smartphone camera, corresponding to cartographic products at scales of 1:20 and 1:70, respectively. The distortion corrections led to an improvement in coordinate calculation accuracy of up to 39%.
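The quoted scale denominators are consistent with a graphical (plotting) accuracy of about 0.2 mm at map scale; this 0.2 mm figure is an assumption of this note, not a value stated in the study:

```latex
m \approx \frac{\sigma_{\text{ground}}}{0.2\ \mathrm{mm}}:\qquad
\frac{6\ \mathrm{mm}}{0.2\ \mathrm{mm}} = 30\ (1{:}30),\quad
\frac{23\ \mathrm{mm}}{0.2\ \mathrm{mm}} = 115\ (1{:}115),\quad
\frac{4\ \mathrm{mm}}{0.2\ \mathrm{mm}} = 20\ (1{:}20),\quad
\frac{14\ \mathrm{mm}}{0.2\ \mathrm{mm}} = 70\ (1{:}70).
```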
Regarding image quality, it was found that, in conditions of sufficient lighting and at shooting distances of 3–15 m, which correspond to realistic distances for terrestrial photogrammetry applications, the full-frame mirrorless camera has a slight lead over the smartphone, especially at the longest shooting distance of the test. No comparison can be made at longer distances, due to the fixed lens of the smartphone’s camera.
The study answers the question of how comparable photo quality can be achieved between a professional camera and a smartphone. The answer lies in the balance of performance: more pixels on the one hand, and fewer but physically larger pixels combined with better lens quality on the other. In this case, the mobile phone has a 64 MP imaging sensor, while the still camera has 24.3 MP. In terms of lenses, the mobile phone has a fixed 5 mm wide-angle lens, while the camera has a zoom lens covering focal lengths from 28 mm to 70 mm. The significant difference between the two cameras, when used for actual 3D measurements, is due to the quality of the lenses and the smaller distortions this brings. A small contribution to accuracy is also made by the slightly better-quality images, which allow a more accurate selection of image coordinates.
For the full-frame mirrorless camera, the automatic image stabilization function was disabled because, in photogrammetry, the internal geometry of the camera must remain constant during all shots. Consequently, a photo tripod should be used, and the shots should preferably be taken with a few seconds’ delay.
In general, regarding the use of the two cameras in photogrammetry applications, the full-frame mirrorless camera excels in accuracy, while the smartphone camera is a particularly good choice for situations where a quick solution is required, for close shooting distances, and with low accuracy requirements.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Lane, S.N.; James, T.D.; Crowell, M.D. Application of Digital Photogrammetry to Complex Topography for Geomorphological Research. Photogramm. Rec. 2000, 16, 793–821. [Google Scholar] [CrossRef]
  2. Pepe, M.; Domenica, C. Techniques, tools, platforms and algorithms in close range photogrammetry in building 3D model and 2D representation of objects and complex architectures. Comput. Aided Des. Appl. 2020, 18, 42–65. [Google Scholar] [CrossRef]
  3. Paukkonen, N. Ten Years of Photogrammetry and LiDAR: Digital 3D Documentation in Finnish Archaeology between 2013–2022. Fennosc. Archaeol. 2024, 41. [Google Scholar] [CrossRef]
  4. Silva-Bolíva, J.; Silva-Bolívar, G.J.; Cataño-Ospina, A.M.; Arenas-Becerra, L.Y. Photogrammetry for the reconstruction of realistic visual landscapes that serve for the creation of scenographies in audiovisual and multimedia products. J. Phys. Conf. Ser. 2019, 1418, 012006. [Google Scholar] [CrossRef]
  5. Maas, H.G.; Hampel, U. Photogrammetric techniques in civil engineering material testing and structure monitoring. Photogramm. Eng. Remote Sens. 2006, 72, 39–45. [Google Scholar] [CrossRef]
  6. Bacciaglia, A.; Ceruti, A.; Liverani, A. Photogrammetry and additive manufacturing based methodology for decentralized spare part production in automotive industry. In Intelligent Human Systems Integration 2020: Proceedings of the 3rd International Conference on Intelligent Human Systems Integration (IHSI 2020): Integrating People and Intelligent Systems, Modena, Italy, 19–21 February 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 796–802. [Google Scholar]
  7. Boesemann, W.; Godding, R.; Huette, H. Photogrammetric measurement techniques for quality control in sheet metal forming. Int. Arch. Photogramm. Remote Sens. 2000, 33, 291–298. [Google Scholar]
  8. Berezowski, V.; Mallett, X.; Moffat, I. Geomatic techniques in forensic science: A review. Sci. Justice 2020, 60, 99–107. [Google Scholar] [CrossRef] [PubMed]
  9. Stylianidis, E.; Evangelidis, K.; Vital, R.; Dafiotis, P.; Sylaiou, S. 3D Documentation and Visualization of Cultural Heritage Buildings through the Application of Geospatial Technologies. Heritage 2022, 5, 2818–2832. [Google Scholar] [CrossRef]
  10. Ktistis, A.; Tokmakidis, P.; Papadimitriou, K. Surveying, Modeling and 3D Representation of a Wreck for Diving Purposes: Cargo ship “Vera”. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 399–403. [Google Scholar] [CrossRef]
  11. Hao, J.; Zhang, X.; Wang, C.; Wang, H.; Wang, H. Application of UAV Digital Photogrammetry in Geological Investigation and Stability Evaluation of High-Steep Mine Rock Slope. Drones 2023, 7, 198. [Google Scholar] [CrossRef]
  12. Sony Website. Available online: https://electronics.sony.com/imaging/interchangeable-lens-cameras/full-frame/p/ilce7m2-b (accessed on 26 June 2024).
  13. Sony Website. Available online: https://www.sony.com/en-cd/electronics/interchangeable-lens-cameras/ilce-7cr (accessed on 26 June 2024).
  14. Patonis, P. Combined Technologies of Low-Cost Inertial Measurement Units and the Global Navigation Satellite System for Photogrammetry Applications. Ph.D. Thesis, Aristotle University of Thessaloniki, Thessaloniki, Greece, 2012. [Google Scholar]
  15. Patonis, P.; Patias, P.; Tziavos, I.N.; Rossikopoulos, D. A methodology for the performance evaluation of low-cost accelerometer and magnetometer sensors in geomatics applications. Geo-Spat. Inf. Sci. 2017, 21, 139–148. [Google Scholar] [CrossRef]
  16. Patonis, P.; Patias, P.; Tziavos, I.N.; Rossikopoulos, D.; Margaritis, K.G. A Fusion Method for Combining Low-Cost IMU/Magnetometer Outputs for Use in Applications on Mobile Devices. Sensors 2018, 18, 2616. [Google Scholar] [CrossRef] [PubMed]
  17. Incekara, A.H.; Seker, D.Z.; Delen, A.; Acar, A. Investigating the suitability of mirrorless cameras in terrestrial photogrammetric applications. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 259–262. [Google Scholar] [CrossRef]
  18. Samosir, F.S.; Riyadi, S. Comparison of smartphone and DSLR use in photogrammetry. In International Conference on Aesthetics and the Sciences of Art 28 September 2020; Bandung Institute of Technology: Bandung, Indonesia, 2020. [Google Scholar]
  19. Saif, W.; Alshibani, A. Smartphone-Based Photogrammetry Assessment in Comparison with a Compact Camera for Construction Management Applications. Appl. Sci. 2022, 12, 1053. [Google Scholar] [CrossRef]
  20. Patonis, P. Methodology and Tool Development for Mobile Device Cameras Calibration and Evaluation of the Results. Sensors 2023, 23, 1538. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  22. Dermanis, A. Analytical Photogrammetry; Ziti: Thessaloniki, Greece, 1991. [Google Scholar]
  23. OpenCV Toolbox. Available online: https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html (accessed on 26 June 2024).
  24. Novak, K. Rectification of Digital Imagery. Am. Soc. Photogramm. Remote Sens. 1992, 58, 339. [Google Scholar]
  25. Samsung Website. Available online: https://www.samsung.com/gr/business/smartphones/galaxy-a/galaxy-a52s-5g-a528-sm-a528bzwheue/ (accessed on 26 June 2024).
  26. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  27. OpenCV. Toolbox. Available online: https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d (accessed on 28 June 2024).
  28. Anderson, J.M.; Mikhail, E.M. Introduction to Surveying; McGraw-Hill: New York, NY, USA, 1985. [Google Scholar]
  29. Dermanis, A. Adjustment of Observations and Estimation Theory; Ziti: Thessaloniki, Greece, 1986; Volume 1. [Google Scholar]
  30. Wang, Z.; Bovik, A.C. Modern Image Quality Assessment; Morgan & Claypool Publishers: San Rafael, CA, USA, 2006. [Google Scholar]
Figure 1. Sensor-based image stabilization system [12].
Figure 2. The calibration checkerboard with the Sony α7 II camera.
Figure 3. Guide for the evaluation segments.
Figure 4. Visualization of the distortion along the evaluation segments for the Sony camera for each focal length.
Figure 5. Visualization of the distortion along the evaluation segments for the Samsung smartphone.
Figure 6. The sensor’s sectors, radially defined from the primary point.
Figure 7. Distortions for all cameras and sensor sectors, in pixels.
Figure 8. Visualization of the normalized distortions with respect to the normalized distance from the primary point for all cameras on the 7-0-3 evaluation segment.
Figure 9. Camera distortions in normalized pixels in each sector.
Figure 10. The photos of the test field taken by the Sony-34 mm.
Figure 11. A control point through the telescope of the total station.
Figure 12. The user coordinate system adjusted to the facade of the building.
Figure 13. The coordinate transformation to a user coordinate system in the Surveyor-Photogrammetry software version 6.0.
Figure 14. The bundle adjustment solution in the Surveyor-Photogrammetry software version 6.0.
Figure 15. The board for the image quality test.
Figure 16. The 3 frames used for the image quality test—photos by the Samsung smartphone.
Figure 17. Construction detail. (Left): Samsung smartphone (zoom: 1310%); (Right): Sony camera (zoom: 1919%).
Figure 18. Image quality test for the Sony-28 mm, no photo tripod used.
Figure 19. Image quality test for the Sony-28 mm, photo tripod used.
Table 1. Cameras’ technical specifications.
| | Sony Camera | Samsung Smartphone |
| Lens focal length (mm) | 28–70 | 5 |
| Sensor resolution (pixels) | 6000 × 4000 (24.3 MP) | 9248 × 6936 (64 MP) |
| Pixel size (mm) | 0.005967 | 0.000800 |
| Physical dimensions of the sensor (mm) | 35.800 × 23.867 | 7.398 × 5.549 |
Table 2. The cameras and their output field of view.
| | Sony-28 mm | Sony-34 mm | Sony-50 mm | Sony-70 mm | Samsung Smartphone |
| Output field of view along the horizontal sensor axis (deg) | 63.219 | 52.391 | 36.611 | 26.729 | 68.086 |
| Output field of view along the vertical sensor axis (deg) | 44.305 | 36.043 | 24.691 | 17.847 | 53.835 |
Table 3. The re-projection error for calibrating cameras using OpenCV calibration.
| Camera | Re-Projection Error |
| Sony-28 mm | 3.205 |
| Sony-34 mm | 3.039 |
| Sony-50 mm | 3.542 |
| Sony-70 mm | 4.031 |
| Samsung smartphone | 5.206 |
Table 4. The intrinsic matrix for all cameras.
| | Sony-28 mm | Sony-34 mm | Sony-50 mm | Sony-70 mm | Samsung Smartphone |
| fx (pixels) | 4874.5 | 6097.9 | 9067.5 | 12,627.0 | 6844.1 |
| fy (pixels) | 4912.4 | 6147.5 | 9137.2 | 12,737.2 | 6830.6 |
| cx (pixels) | 2974.6 | 2981.6 | 2912.2 | 2916.4 | 4659.0 |
| cy (pixels) | 1975.8 | 1968.3 | 1916.2 | 2030.4 | 3485.1 |
| AspectRatio (fy/fx) | 1.008 | 1.008 | 1.008 | 1.009 | 0.998 |
| Nominal f (mm) | 28 | 34 | 50 | 70 | 5 |
| Estimated c (mm) | 29.085 | 36.384 | 54.103 | 75.341 | 5.475 |
| xo (pixels) | −25.4 | −18.4 | −87.8 | −83.6 | 35.0 |
| yo (pixels) | 24.2 | 31.6 | 83.7 | −30.4 | −17.0 |
| xo (mm) | 0.151 | −0.110 | −0.524 | −0.499 | 0.028 |
| yo (mm) | 0.144 | 0.189 | 0.500 | −0.181 | −0.014 |
Table 5. Distortion coefficient parameters for all cameras.
| | Sony-28 mm | Sony-34 mm | Sony-50 mm | Sony-70 mm | Samsung Smartphone |
| k1 | −0.02604127 | 0.00637290 | −0.02729633 | −0.05253937 | 0.11334916 |
| k2 | 0.09946586 | 0.13491890 | −0.06231847 | −0.88042237 | −0.35944055 |
| k3 | −0.10377144 | −0.32155154 | 0.89514366 | 11.18174144 | 0.32451562 |
| p1 | −0.00171816 | −0.00077366 | −0.00262152 | 0.00419812 | 0.00071273 |
| p2 | −0.00335183 | −0.00036767 | −0.00642417 | −0.00171302 | −0.00043841 |
Table 6. General elements of the bundle adjustment solution with additional parameters.
| | Sony Camera | Samsung Smartphone |
| Photos | 7 | 5 |
| Control points | 1813 | 1813 |
| Observations | 12,691 | 9065 |
| Additional parameters | 8 | 8 |
| Degrees of freedom | 25,332 | 18,092 |
Table 7. Camera calibration using bundle adjustment with additional parameters.
| | Sony-28 mm | Sony-34 mm | Sony-50 mm | Sony-70 mm | Samsung Smartphone |
| sigma | 4.247 | 4.402 | 3.998 | 4.067 | 3.883 |
| c (StDev) (pixels) | 4873.2 (2.0) | 6128.5 (3.7) | 9061.5 (4.4) | 12,587.4 (7.8) | 6837.8 (1.6) |
| xo (StDev) (pixels) | −14.2 (2.0) | −14.5 (3.0) | −133.4 (5.8) | −149.0 (11.0) | −46.6 (1.7) |
| yo (StDev) (pixels) | 28.31 (1.6) | 14.61 (2.4) | 117.38 (4.5) | −194.9 (8.3) | −10.0 (1.4) |
Table 8. Color gradients for the error differences. (The original table assigns a color swatch to each of the following error-difference intervals; the swatches are not reproduced here.)
| Error Differences |
| error > 0.3 mm |
| 0.2 < error < 0.3 |
| 0.1 < error < 0.2 |
| 0 < error < 0.1 |
| −0.1 < error < 0 |
| −0.2 < error < −0.1 |
| error < −0.2 |
Table 9. Comparison of evaluation images when using OpenCV and bundle adjustment with additional parameters. (The table contains one evaluation image per method for each case: Sony-28 mm, Sony-34 mm, Sony-50 mm, Sony-70 mm, and the Samsung smartphone; the images are not reproduced here.)
Table 10. Evaluation results for the cameras using the “Rect” indicator.
| Camera | OpenCV Function | Bundle Adjustment |
| Sony-28 mm | +38.7% | +24.9% |
| Sony-34 mm | +34.7% | +27.2% |
| Sony-50 mm | +39.1% | +20.5% |
| Sony-70 mm | +15.9% | +4.3% |
| Samsung smartphone | +41.9% | +40.3% |
Table 11. The ground coordinates of the control points and checkpoints (m).
| Code | X | Y | Z |
| 1 | 0.000 | 0.000 | 0.000 |
| 2 | −0.808 | 4.477 | 0.297 |
| 3 | 0.449 | 7.112 | 0.314 |
| 4 | 4.387 | 7.124 | 0.324 |
| 5 | 6.640 | 5.059 | 0.316 |
| 6 | 4.097 | 1.832 | −0.015 |
| 7 | 5.895 | 0.932 | 0.049 |
| 8 | 9.973 | 4.445 | 0.335 |
| 9 | 9.337 | 7.115 | 0.335 |
| 10 | 10.714 | −0.504 | 0.000 |
| 11 | 3.966 | −0.560 | 3.039 |
| 12 | 5.891 | −0.545 | 2.463 |
| 13 | −0.928 | −0.478 | 0.031 |
| 14 | −0.503 | 8.073 | 0.322 |
| 15 | 8.093 | 8.505 | 0.337 |
| 16 | 11.483 | 7.180 | 0.351 |
| 17 | 4.605 | 4.624 | 0.310 |
| 18 | 12.152 | 0.159 | 0.019 |
| 19 | 2.335 | 3.996 | 0.303 |
| 20 | 2.303 | 8.499 | 0.330 |
| 21 | 5.007 | 0.311 | 0.972 |
Table 12. Bundle adjustment solution with auto-calibration for the focal length and the coordinates of the primary point.
| | Sony-28 mm | Sony-34 mm | Sony-50 mm | Sony-70 mm | Samsung Smartphone |
| sigma | 1.664 | 1.017 | 1.626 | 1.123 | 8.399 |
| Observations | 59 | 60 | 40 | 38 | 62 |
| Control points | 16 | 16 | 14 | 11 | 16 |
| Checkpoints | 5 | 5 | 4 | 3 | 5 |
| Degrees of freedom | 82 | 84 | 47 | 46 | 88 |
| Average Z error (St Dev) (mm) | 2 (2) | 3 (2) | 2 (2) | 2 (2) | 8 (9) |
| Average 2D error (St Dev) (mm) | 5 (2) | 2 (1) | 2 (1) | 5 (0) | 22 (10) |
| Average 3D error (St Dev) (mm) | 6 (2) | 3 (2) | 1 (3) | 6 (1) | 23 (10) |
Table 13. Bundle adjustment solution with auto-calibration for the focal length, the coordinates of the primary point, and the distortion coefficients.
| | Sony-28 mm | Sony-34 mm | Sony-50 mm | Sony-70 mm | Samsung Smartphone |
| sigma | 0.874 | 0.997 | 1.463 | 0.503 | 4.251 |
| Observations | 59 | 60 | 40 | 38 | 62 |
| Control points | 16 | 16 | 14 | 11 | 16 |
| Checkpoints | 5 | 5 | 4 | 3 | 5 |
| Degrees of freedom | 77 | 79 | 42 | 41 | 83 |
| Average Z error (St Dev) (mm) | 2 (3) | 3 (2) | 2 (2) | 1 (1) | 10 (12) |
| Average 2D error (St Dev) (mm) | 2 (1) | 2 (1) | 3 (1) | 2 (1) | 9 (4) |
| Average 3D error (St Dev) (mm) | 4 (1) | 3 (2) | 4 (1) | 3 (1) | 14 (5) |
Table 14. Comparative image quality samples—frame 1. (Image crops for the Sony-28 mm, Sony-34 mm, Sony-50 mm, Sony-70 mm, and Samsung smartphone cases; not reproduced here.)
Table 15. Comparative image quality samples—frame 2. (Image crops for the same five cases; not reproduced here.)
Table 16. Comparative image quality samples—frame 3. (Image crops for the same five cases; not reproduced here.)