Article

Analysis of the Photogrammetric Use of 360-Degree Cameras in Complex Heritage-Related Scenes: Case of the Necropolis of Qubbet el-Hawa (Aswan, Egypt)

by José Luis Pérez-García, José Miguel Gómez-López, Antonio Tomás Mozas-Calvache * and Jorge Delgado-García
Departamento de Ingeniería Cartográfica, Geodésica y Fotogrametría, Universidad de Jaén, 23071 Jaén, Spain
* Author to whom correspondence should be addressed.
Sensors 2024, 24(7), 2268; https://doi.org/10.3390/s24072268
Submission received: 27 February 2024 / Revised: 25 March 2024 / Accepted: 26 March 2024 / Published: 2 April 2024

Abstract

This study shows the results of the analysis of the photogrammetric use of 360-degree cameras in complex heritage-related scenes. The goal is to take advantage of the large field of view provided by these sensors and reduce the number of images used to cover the entire scene compared to those needed using conventional cameras. We also try to minimize problems derived from camera geometry and lens characteristics. In this regard, we used a multi-sensor camera composed of six fisheye lenses, applying photogrammetric procedures to several funerary structures. The methodology includes the analysis of several types of spherical images obtained using different stitching techniques and the comparison of the results of image orientation processes considering these images and the original fisheye images. Subsequently, we analyze the possible use of the fisheye images to model complex scenes by reducing the use of ground control points, thus minimizing the need to apply surveying techniques to determine their coordinates. In this regard, we applied distance constraints based on a previous extrinsic calibration of the camera, obtaining results similar to those obtained using a traditional schema based on points. The results have allowed us to determine the advantages and disadvantages of each type of image and configuration, providing several recommendations regarding their use in complex scenes.

1. Introduction

The methods, techniques and sensors used to graphically document heritage have developed considerably over recent decades, allowing reliable models of reality to be obtained in order to represent sites and artifacts. This documentation is mainly focused on conservation tasks, but it also serves other purposes such as virtual museums and dissemination. Several factors have contributed to this evolution: the development of new geomatic techniques, sensors and data acquisition methodologies; the application of new algorithms, such as Structure from Motion (SfM) [1,2,3] and dense Multi-View Stereo 3D reconstruction (MVS) [4,5,6,7]; their implementation in several commercial applications, such as Agisoft Metashape and Pix4DMapper [8,9]; and the increase in computing capabilities. Regarding current geomatic techniques, we can highlight the widespread use of Light Detection and Ranging (LiDAR), particularly Terrestrial Laser Scanning (TLS) in the case of indoor scenes, and non-metric conventional cameras (pinhole or perspective cameras) used to develop photogrammetric surveys based on close-range photogrammetry (CRP). The latter has allowed a significant cost reduction compared to the use of metric cameras. In addition, the development of new platforms such as Unmanned Aerial Vehicles (UAVs) [10,11,12] has made it possible to raise these sensors to higher points of view, facilitating the capture of the scene. Some authors, such as Remondino [13] and Hassani et al. [14], have analyzed these techniques extensively, showing their advantages and disadvantages, and many applications propose their combined use to take advantage of their integration [15,16,17,18,19,20,21,22].
Despite the maturity of these techniques, which are widely applied in several disciplines, recent years have seen innovations focused on improving capture efficiency. For example, these new systems include mobile mapping systems (MMS) based on simultaneous localization and mapping (SLAM), which allow real-time capture, supporting data acquisition with inertial sensors and/or GNSS (for outdoor applications) and providing the trajectory and orientation of the system at any time. However, in some situations, the use of these new techniques is not possible due to their currently higher cost and other obstacles.
In heritage, complex scenes are common, especially in interior spaces. These scenes can be defined as those whose geometrical characteristics, location and accessibility make simple data acquisition difficult or impossible (e.g., narrow spaces with little distance between sensor and object). In these cases, the efficiency of data acquisition is one of the main aspects to be considered because obtaining complete coverage of the object requires a large number of photographs (in the case of pinhole cameras) or scanning stations (in the case of TLS). More specifically, in the case of photogrammetry, the number of photographs required to cover a complex scene can be reduced by using lenses that provide a larger Field of View (FoV), such as wide-angle lenses [23,24,25,26,27], fisheye lenses [28,29,30,31,32,33,34,35,36,37,38] and 360-degree cameras [34,39,40,41,42,43,44,45,46]. In this context, spherical photogrammetry [47] has recently undergone great development using both fisheye images (FEI) and spherical images (SI), the latter also known as panoramic images [47,48,49,50,51,52].
Spherical images are obtained by superimposing images captured from a single point of view in several directions until a larger FoV is covered (some authors suggest that a horizontal FoV greater than 160 degrees is needed for an image to be considered panoramic). They can be produced with a conventional camera rotated approximately about its perspective center, capturing several images that are merged and projected onto a sphere, in most cases using the equirectangular projection, by means of geometrical and stitching procedures. The alternative and most efficient method is to use a 360-degree camera, which can project the image obtained with a single fisheye lens or fuse and project several images from several fisheye lenses. Considering the latter case, several authors classify these cameras into three types: dioptric, catadioptric and polydioptric cameras [52,53,54]. The most commonly used in heritage applications are polydioptric cameras, which contain several fisheye lenses to capture scenes with a 360-degree horizontal FoV. Depending on the number of lenses (minimum of two), the system can achieve a greater overlap between images, facilitating and improving the stitching procedure by using central zones where lens distortion is usually lower. In addition, the overlap allows other relevant information, such as depth maps, to be obtained because each point of the scene is captured from several points of view. To date, spherical photogrammetry has been applied in several studies, achieving accuracies of a few centimeters [44,46,54,55,56,57,58]. Recently, the use of these sensors has been increasing due to their inclusion in MMS based on visual SLAM [59,60,61].
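As a point of reference for the equirectangular projection mentioned above, a commonly used formulation (not specific to any particular software) maps a viewing direction given by longitude λ and latitude φ to pixel coordinates (u, v) of a W × H spherical image:

```latex
u = \frac{W}{2\pi}\left(\lambda + \pi\right), \qquad
v = \frac{H}{\pi}\left(\frac{\pi}{2} - \varphi\right), \qquad
\lambda \in [-\pi, \pi], \;\; \varphi \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right]
```

Since the full sphere spans 2π in longitude and π in latitude, the resulting image has a 2:1 aspect ratio, which is why spherical images are typically delivered at resolutions such as 7680 × 3840 pixels.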
Regarding the stitching techniques used to obtain spherical images from a 360-degree camera, several classifications exist [62,63,64,65] based on aspects related to the fusion and projection of the images. In general, there are direct pixel-based methods and, more commonly, feature-detection methods, which rely on algorithms such as the Scale-Invariant Feature Transform (SIFT) [3] and Speeded Up Robust Features (SURF) [66], among others. In the case of complex scenes, several issues must be considered. Differences in the depth of the objects with respect to the camera may cause undesirable effects (blurring, ghosting, etc.) in the stitching results due to parallax, which is greater at shorter distances. To address this, some authors [67,68] suggested the use of depth maps to improve the stitching results; these maps can be obtained thanks to the overlap provided by polydioptric cameras.
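As an illustration of the feature-detection approach, the following minimal sketch matches keypoints between two overlapping images using OpenCV's SIFT implementation and Lowe's ratio test; this is a generic example, not the (undisclosed) algorithm implemented in the stitching software used later in this study.

```python
import cv2
import numpy as np

def match_features(img_a, img_b, ratio=0.75):
    """Return corresponding pixel coordinates between two overlapping
    grayscale images using SIFT keypoints and Lowe's ratio test; such
    correspondences drive the alignment step of a feature-based stitcher."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b

# Hypothetical usage with two adjacent frames read as grayscale images:
# pts_a, pts_b = match_features(cv2.imread("lens_1.jpg", 0), cv2.imread("lens_2.jpg", 0))
```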
A fundamental aspect in photogrammetry is the geometric calibration of the sensor, in this case specifically related to fisheye sensors [69]. This calibration is essential to improving the geometric quality of photogrammetric products through the determination of the intrinsic or interior orientation parameters (IOPs) of each sensor (focal length, principal point and distortion parameters) and the extrinsic or relative orientation parameters (ROPs) (rotation matrices and translation vectors between sensors). The determination of the extrinsic parameters simplifies the final image orientation process, provided that the mounting of the different sensors is sufficiently stable and rigid, as in the case of 360-degree multi-cameras. Some studies [70,71,72] used these parameters to define constraints between sensors, improving the geometric results.
In general, calibration methods are based on points and lines, patterns or self-calibration procedures. Several studies have addressed the calibration of fisheye lenses [73,74,75,76,77,78,79,80] and 360-degree cameras [51,69,70,81,82,83], improving geometric results. Other studies have compared several types of cameras (perspective, fisheye, etc.) [34,84]. In any case, intrinsic calibration improves accuracy. Extrinsic calibration, in turn, provides the relationships between the different elements that make up the sensor, simplifying the final orientation of the images as a whole while allowing more robust orientations to be obtained [69,81,83].
The current development of 360-degree cameras (with improved sensor spatial resolution) allows their use for heritage documentation, facilitating data acquisition thanks to the increased coverage (FoV) of the sensors. They can be used in complex scenes, minimizing the number of images needed to cover the entire scene. However, some studies have shown certain limitations in geometric accuracy when compared to other techniques based on TLS or conventional photogrammetry. In indoor applications, there are additional difficulties related to illumination conditions, mainly when the radiometric aspect is important, as is common in heritage studies. In these areas, defining a coordinate reference system (CRS) based on surveying techniques to georeference the data is also difficult, and reducing these tasks is therefore an important challenge. Another important aspect is the selection of the type of image (fisheye or spherical) and, for spherical images, the stitching technique to be used. Although some previous studies have analyzed some of these aspects, they have addressed them only partially (e.g., analyzing the stitching techniques) and not always in the context of complex heritage-related scenes, where the complications described above are greatest. In this study, we analyze the use of 360-degree cameras and spherical photogrammetry [47] in complex scenes from multiple points of view: type of image, geometric quality, minimization of GCPs, etc. The analysis includes the direct use of fisheye images as well as spherical images generated from them. The goal is to determine the aspects to be considered in order to improve the results, mainly in the photogrammetric orientation processes, while improving the efficiency of data capture and minimizing auxiliary work.
The objectives of this study concern the application of 360-degree cameras and spherical photogrammetry to heritage sites in order to improve the quality of the products and the efficiency of data acquisition, reducing the number of GCPs needed for orientation processes. The analysis focuses mainly on complex scenes that are poorly illuminated and have short and variable distances between the sensor and the object. In this sense, the methodology employed should establish which type of image (fisheye or spherical) is more suitable in these cases to improve geometric results and reduce surveying work.
The remainder of the manuscript is divided into three sections. The first presents the method developed to analyze these images and the materials used to apply it to specific complex scenes of funerary structures in Egypt. The second describes the main results obtained after the orientation of the spherical images, considering several stitching techniques, and of the fisheye images. This section also includes the results obtained using fisheye images with constraints determined between all sensors thanks to the extrinsic calibration carried out. The final section presents the main conclusions and proposals for future work.

2. Materials and Methods

The methodology proposed in this study (Figure 1) analyzes the accuracy achieved after image orientation processing, considering several cases related to the use of fisheye images (FEI) and spherical images (SI) obtained from the same 360-degree multi-camera. The comparison is made using images obtained from the same acquisition in several complex heritage-related scenes.
Prior to the development of the photogrammetric process using fisheye and spherical images, we must consider an important characteristic of these images, which constitutes an initial hypothesis of this study. The fisheye images are obtained directly from the sensors, while the spherical images are synthetic, obtained by transformation (re-projection) and fusion. This implies that the spherical images are not associated with any sensor with a defined internal geometry. In the methodology proposed in this study, we use three stitching techniques to obtain these images: two based only on feature-based techniques (SI-FS and SI-HQS in Figure 1) and one that additionally considers depth maps (SI-DMS in Figure 1). As discussed in the previous section, feature-based techniques can show problems due to parallax, different distances between the objects and the sensor, etc., which cause geometric errors in the final stitched image [67]. In contrast, spherical images based on depth maps provide a more realistic representation because the true depth of the object with respect to the sensor is known and considered at all projected points [67,68], whereas the other techniques implicitly use a single distance between the sensor and the object, incorporating this simplification into the image processing. These assumptions are considered in the analysis proposed in this study.
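To make the parallax argument concrete, the sketch below (a simplified geometry assumed by us, not taken from the stitching software) computes the angular shift on the output sphere when a point observed by a lens offset from the rig centre is re-projected with an assumed constant depth instead of its true depth:

```python
import numpy as np

def reprojection_error_deg(lens_offset, ray_dir, true_depth, assumed_depth):
    """Angular error (degrees) on a sphere centred at the rig origin when a
    point seen by an offset fisheye lens is re-projected at a wrong depth.
    lens_offset: 3-vector from rig centre to lens centre (metres).
    ray_dir:     viewing direction from the lens towards the point.
    Simplifications: single lens, no lens distortion, unit output sphere."""
    ray_dir = np.asarray(ray_dir, float) / np.linalg.norm(ray_dir)
    offset = np.asarray(lens_offset, float)
    p_true = offset + true_depth * ray_dir
    p_assumed = offset + assumed_depth * ray_dir
    u = p_true / np.linalg.norm(p_true)
    v = p_assumed / np.linalg.norm(p_assumed)
    return float(np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))))

# Example: lens 4 cm from the rig centre, object at 1 m but assumed at 3 m
# gives an error of roughly 1.5 degrees, i.e. tens of pixels in a 7680-pixel-wide image.
print(reprojection_error_deg([0.04, 0.0, 0.0], [0.0, 1.0, 0.0], 1.0, 3.0))
```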
The procedure (Figure 1) starts with the acquisition of fisheye images using a 360-degree multi-camera. The device must be placed at different locations to guarantee full coverage of the scene. At each position, we obtain a set of fisheye images equal in number to the sensors contained in the multi-camera. These images have full overlap, which facilitates the stitching procedures. After that, three spherical images are obtained from each set of fisheye images using the three stitching procedures (SI-FS, SI-HQS and SI-DMS). As an example, a comparison of tie points determined on two of the stitched images (SI-HQS and SI-DMS) is performed using the distances between them in order to show their geometric discrepancies. Subsequently, the photogrammetric orientation of these blocks of images and of the block that contains the fisheye images (FEI) is carried out by means of several well-distributed ground control points (GCPs), materialized on the scene by targets, whose coordinates were obtained from a surveying network using a total station. The results of these four block orientations are compared using the Root Mean Squared Error (RMSE) calculated on a set of check points (CPs).
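For clarity on the comparison metric, a minimal sketch of the RMSE over a set of check points is shown below; the photogrammetric software reports this value directly, so the function is purely illustrative:

```python
import numpy as np

def rmse(estimated, reference):
    """Root Mean Squared Error between estimated and reference CP
    coordinates (n x 3 arrays in object space, or n x 2 in image space)."""
    diff = np.asarray(estimated, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```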
Next, we compare the results of fisheye image orientation both using and not using GCPs. When GCPs are not used, we introduce sensor extrinsic calibration parameters (relative orientation) to provide several constraints to the system. Specifically, in this case, we set the block scale using scale bars (SBs). The accuracy of the orientation (FEI-SBs) is calculated by analyzing the residuals obtained in a 3D rigid transformation. This transformation is based on several GCPs defined in the images (targets). In addition, the two 3D meshes of each scene obtained from the point clouds determined from both photogrammetric blocks (FEI-GCPs and FEI-SBs) are aligned in the same CRS and compared, showing the discrepancies (distances) from mesh to mesh. The methodology proposed in this study is applied using a specific 360-degree multi-camera in a specific heritage site that is described below.

2.1. Materials: 360-Degree Camera

In this study, we used a Kandao Obsidian Go 360-degree camera (Figure 2a) because it allows the use of depth maps in the stitching procedure; in fact, it is one of the few 360-degree cameras that can generate depth maps [85]. The camera is composed of six fisheye lenses, each with a focal length of 6.8 mm and a horizontal coverage of about 220 degrees (Figure 2b), producing fisheye images with a resolution of 4608 × 3456 pixels. Their optical axes are spaced 60 degrees apart. This horizontal FoV allows full coverage (360 degrees) to be obtained using all sensors with a large overlap between them; two adjacent images overlap by approximately 160 degrees. Thus, about 70% of the scene is covered by four sensors, and the remaining 30% is covered by three lenses. Consequently, the use of the six images makes it possible to obtain a depth map of the scene thanks to these large overlaps. The software used to obtain the spherical images from the six fisheye images captured in each acquisition is Kandao Studio v2.7 (Figure 2c). This application provides spherical images (SI-FS, SI-HQS and SI-DMS, with a resolution of 7680 × 3840 pixels) using three stitching methods (Fast Mode, High Quality Mode and Depth Mode) and performs an internal pre-calibration process.
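These coverage figures are consistent with the stated lens geometry. Assuming idealized 220-degree horizontal FoVs and axes spaced exactly 60 degrees apart (a simplification of the real optics):

```latex
\text{adjacent overlap} \approx 220^{\circ} - 60^{\circ} = 160^{\circ}, \qquad
n(\theta) = \#\{\, i : |\theta - \varphi_i| \le 110^{\circ} \,\} \in \{3, 4\}
```

where φ_i are the six axis directions and n(θ) is the number of lenses seeing the horizontal direction θ; under this simplification, roughly two thirds of the horizontal directions are seen by four lenses and one third by three, close to the approximate 70/30 split stated above.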

2.2. Materials: Images

As discussed in the previous sections, we used two types of images: fisheye images and spherical images, the latter obtained by three different methods. Considering a camera capture at a specific location, Figure 3 shows examples of the six fisheye images acquired and of the three spherical images produced using the three stitching options available in Kandao Studio v2.7 (Fast Stitching, SI-FS; High Quality Stitching, SI-HQS; and Depth Map Stitching, SI-DMS). In the case of the fisheye images, the intrinsic parameters (focal length, principal point and distortion) and extrinsic parameters (rotation matrices and translations between sensors) are calculated through calibration techniques using GCPs.

2.3. Materials: Scenes

The proposed methodology has been applied to a real case study composed of several complex scenes in four rock-cut funerary structures at the Necropolis of Qubbet el-Hawa (Aswan, Egypt) (Figure 4a). This site is located on a hill on the west bank of the Nile River (Figure 4b) and comprises more than one hundred tombs of different dimensions (Figure 4c). The larger ones are distributed among several rooms, such as halls of pillars, corridors, vertical shafts and burial chambers, while the smaller ones consist of a single burial chamber (Figure 4d). Both cases have characteristics that complicate the application of photogrammetric techniques: narrow, confined spaces for capturing images and a complex geometry with pillars, niches, etc. (Figure 4d) that hinder the use of conventional photogrammetry. In this context, spherical photogrammetry presents several advantages, such as reducing the number of images needed and additional work such as GCP measurements. One of the objectives of this research is to analyze which of the image types considered (FEI or SI) is most suitable for this kind of archaeological documentation work in complex areas.
In this study, we selected four scenes to obtain more conclusive results, considering geometrical aspects and trying to include specific structures such as vertical shafts, corridors and different chambers separated by narrow accesses. Table 1 shows the main characteristics of the scenes used in this study, including the total number of captured images, the average distance between the 360-degree camera and the object, and the total number of GCPs and CPs used.
Figure 5 shows the geometry of these scenes and the locations (top view) of the camera, GCPs and CPs. These points were materialized using targets whose coordinates were calculated from a surveying network using a total station.
Considering that these burial structures extend into the interior of the hill, another important aspect is the illumination of the scenes, needed to obtain images with a similar radiometric response. To address this issue, we mounted an LED lamp connected to an external battery on the upper part of the camera (Figure 6), aiming to minimize both the zones where the illumination system appears in the images and the areas overexposed due to the proximity between the camera and the object.

3. Results and Discussion

3.1. Fisheye Images vs. Spherical Images

3.1.1. Fisheye Sensor Intrinsic Calibration

The use of FEI requires knowledge of the internal geometry of the sensors and, consequently, an intrinsic sensor calibration. In this study, we carried out the intrinsic calibration of the selected 360-degree camera using a 3D pattern composed of a set of GCPs materialized by targets. The coordinates of these points were previously determined using TLS, specifically a Faro Focus X130 scanner. This pattern was captured with the 360-degree camera from five different positions (Figure 7).
The images captured with the 360-degree camera were oriented using the GCP set in Agisoft Metashape v2, obtaining the intrinsic parameters (focal length [f], principal point position [cx and cy] and radial distortion parameters [K1 to K4]) shown in Table 2. These parameters correspond to the Brown distortion model.
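For reference, the radial term of the Brown model with four coefficients is commonly written as below, with r the radial distance from the principal point in normalized image coordinates; the exact formulation implemented by the software may differ in details (e.g., additional tangential terms):

```latex
x' = x\left(1 + K_1 r^2 + K_2 r^4 + K_3 r^6 + K_4 r^8\right), \qquad
y' = y\left(1 + K_1 r^2 + K_2 r^4 + K_3 r^6 + K_4 r^8\right), \qquad
r^2 = x^2 + y^2
```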

3.1.2. Spherical Images

The comparison between the three stitching techniques is carried out using a set of keypoints obtained after applying feature point matching to the three blocks of images. The keypoints are compared by selecting those that are homologous between two images (tie points) obtained with different stitching procedures; keypoints with no correspondence in the pair of images to be compared are discarded. A displacement vector is then obtained for each point, quantifying the geometric discrepancies between images of the same scene depending on the stitching technique used. Figure 8 shows an example of this comparison between spherical images obtained with and without depth maps (Figure 8a,c). Figure 8b,d show detailed views of some displacement vectors calculated between homologous points located in the SI-HQS and SI-DMS images, respectively.
Figure 9 shows a frequency line chart of the distances obtained from more than 1200 tie points measured on both images (SI-HQS and SI-DMS). The analysis yields an average distance of about 34.5 pixels and minimum and maximum values of 1.4 and 168.9 pixels, respectively.
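A minimal sketch of this displacement analysis, assuming the homologous tie-point coordinates on the two stitched images are already available as two equally ordered arrays (variable names are illustrative):

```python
import numpy as np

def displacement_stats(pts_hqs, pts_dms):
    """Euclidean displacement (pixels) between homologous tie points measured
    on two stitched spherical images of the same capture. Note: the 360-degree
    wrap at the left/right image border is ignored in this sketch."""
    d = np.linalg.norm(np.asarray(pts_hqs, float) - np.asarray(pts_dms, float), axis=1)
    return {"mean": float(d.mean()), "min": float(d.min()),
            "max": float(d.max()), "std": float(d.std())}
```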

3.1.3. Orientation Using GCPs

The previous analyses show large geometric differences between spherical images obtained with different stitching techniques. To analyze their behavior with respect to the original fisheye images, we carried out an experiment studying the results of the photogrammetric block orientation procedure for the four types of images previously described (SI-FS, SI-HQS, SI-DMS and FEI) (Figure 3) in all scenes (Figure 5). GCPs were used to orient the image blocks, and CPs to analyze the results; both sets of points are well distributed throughout the scenes (Figure 5). In the FEI case, the intrinsic parameters (Table 2) are readjusted during the orientation procedure. The analysis was performed using Agisoft Metashape v2. Figure 10 shows the RMSE values for the four cases and the average values over all scenes. The graph shows a clear reduction in RMSE from the spherical images that do not consider depth maps to SI-DMS and FEI in all scenes (Figure 10a). More specifically, the average RMSE is reduced when using SI-HQS instead of SI-FS (from 11.3 pixels to 7.2 pixels, about 36%) and further when using SI-DMS (from 7.2 pixels to 1.3 pixels, about 81%). In addition, Figure 10b shows a detailed view with a reduction in RMSE from SI-DMS to FEI (1.3 to 0.6 pixels, about 54%).
The results show a large difference in RMSE values depending on the image type. There is a great improvement in the residuals when using SI-DMS and FEI, so the first two types of stitching (SI-FS and SI-HQS) are not recommended when depth-map stitching (SI-DMS) is available; otherwise, SI-HQS should be selected when spherical images are to be used. Comparing SI-DMS and FEI, the reduction in average RMSE is less than one pixel. Nevertheless, the use of FEI involves processing blocks made up of a greater number of images than SI-DMS, which implies a longer processing time in all photogrammetric processes. Therefore, both cases show advantages and disadvantages, which are discussed in the next section.

3.1.4. Advantages and Disadvantages of Using Spherical and Fisheye Images

Currently, only a few 360-degree cameras allow spherical images to be obtained using depth maps, because full overlap between fisheye images, and consequently multiple sensors (more than two fisheye lenses), is required. In fact, most cameras marketed so far have only two fisheye lenses positioned opposite each other to cover 360 degrees but without full overlap. This configuration makes it impossible to obtain complete depth maps. We therefore consider the use of SI-DMS limited to 360-degree multi-cameras with a larger number of fisheye sensors, which provide extensive overlap between images; other 360-degree cameras must use stitching techniques that do not consider depth maps.
The use of spherical images instead of the original fisheye images implies a large reduction in the number of images, although the redundancy of information provided by the fisheye images in each capture is lost. The processing time is correspondingly reduced with respect to fisheye images, where the number of images in the block is multiplied by the number of sensors. In addition, the use of spherical images does not require knowledge of the internal geometry of the sensor (intrinsic calibration) because they are synthetic images obtained by projection onto a sphere.
The use of FEI requires knowledge of the intrinsic parameters of the sensors. The geometry of these cameras provides a certain redundancy of information in the overlapping areas, improving the geometric results. Despite this advantage, the use of FEI implies an increase in processing time. We have also detected greater difficulties in the orientation processes, which were resolved by including manual tie points, further increasing the time dedicated to this process.
Therefore, the selection of the type of image to use will consider the requirements of each project, taking into account accuracy and processing time, because the use of FEI allows obtaining more accurate results but involves more processing time.
Furthermore, beyond the improved accuracy in the orientation processes, the use of FEI offers another advantage over spherical images: the possibility of eliminating the use of GCPs or other external systems to scale the photogrammetric block. This is possible with FEI through the intrinsic and extrinsic calibration of the camera, by which the internal geometry and the relative position of all sensors are known. The distances between the sensors can therefore be calculated and used to scale the scene without external information. In this context, minimizing the application of surveying techniques to obtain GCP coordinates is an important advantage, which is analyzed in the next section.

3.2. Using FEI without GCPs

3.2.1. Extrinsic Calibration

The 360-degree camera contains several fisheye sensors mounted on a stable platform. The relative orientation between these sensors can be obtained by extrinsic calibration of the camera. Knowledge of the extrinsic parameters can facilitate the orientation processing because the distances between sensors can be used as constraints on the system, for example, by including them as scale bars in Agisoft Metashape v2 (FEI-SBs). For the 360-degree camera used in this study, we used the same 3D pattern developed to calculate the intrinsic parameters (Figure 7). As a result, Table 3 shows the average values of the calculated distances between all sensors and the standard deviation (STD) derived from this calculation.
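A minimal sketch of how such distances can be derived from the extrinsic parameters, assuming the translation vector of each sensor is expressed in a common rig frame (the function and variable names are ours, not part of any software API):

```python
import itertools
import numpy as np

def sensor_distances(translations):
    """Pairwise Euclidean distances between sensor projection centres.
    translations: dict mapping sensor id -> 3-vector in the rig frame (metres).
    The resulting distances can then be entered as fixed-length scale bars
    in the photogrammetric adjustment."""
    return {
        (a, b): float(np.linalg.norm(np.asarray(translations[a], float)
                                     - np.asarray(translations[b], float)))
        for a, b in itertools.combinations(sorted(translations), 2)
    }
```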

3.2.2. Orientation Using FEI without Using GCPs

The purpose of this analysis is to test the orientation of the 360-degree camera without using GCPs. To verify the results, the blocks were oriented using two different schemes. Firstly, the orientation was performed without GCPs but incorporating the distances previously calculated in the extrinsic calibration process, using the software option to include scale bars (FEI-SBs). In this case, the intrinsic parameters (Table 2) are readjusted during the procedure, while the extrinsic parameters (Table 3) are used as constraints and are not recalculated, although their quality (given by the STD in Table 3) is taken into account. Secondly, the orientations were performed using a set of control points well distributed throughout the scene (see Section 3.1.3). Figure 11 shows an example of a block related to zone QH33SB (Figure 11a) and the scale bars included in two camera acquisitions (Figure 11b).

3.2.3. Three-Dimensional Rigid Transformation

In order to verify the results of using scale bars as constraints for the orientation processing, we used the targets as CPs. For this purpose, the CPs were measured on the images (FEI-SBs) after orientation, and their coordinates were obtained in a local CRS. To make the comparison, both systems (global coordinates from the GCP-based orientation and local coordinates from the orientation without GCPs) were adjusted using a three-dimensional rigid transformation (six parameters: three translations and three rotations, omega, phi and kappa). The adjustment parameters obtained and, in particular, the distances calculated at each point provide valuable information for assessing the quality of the orientation. Figure 12 shows the average, standard deviation, minimum and maximum values of the 3D distances for all scenes. In all cases, the average value is less than 0.02 m. The maximum values of QH33SP and QH35P are approximately 0.03 m, while that of QH23 is between 0.015 and 0.02 m and that of QH33SB is about 0.01 m. These results indicate a certain geometric similarity between these points in both CRSs and, consequently, in the accuracy achieved with the two orientation techniques (GCPs or scale bars).
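A minimal sketch of the six-parameter rigid alignment and the per-point residual distances, using the closed-form SVD (Kabsch) solution; this is our assumption about a suitable solver, since the study does not state which implementation was used:

```python
import numpy as np

def rigid_transform_residuals(local_pts, global_pts):
    """Estimate R, t such that R @ local + t ~= global (three rotations and
    three translations, no scale) and return the per-point 3D residuals."""
    A = np.asarray(local_pts, float)
    B = np.asarray(global_pts, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    residuals = np.linalg.norm((A @ R.T + t) - B, axis=1)
    return R, t, residuals
```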

3.2.4. Modeling and Comparison of 3D Meshes

The previous analysis is based on specific points (targets), which are affected by the errors of the surveying technique employed. In order to analyze a much larger number of points and minimize these errors, we compared the 3D meshes obtained using FEI-GCPs and FEI-SBs in all scenes. The 3D mesh obtained using scale bars was referenced to the CRS of the project through a registration divided into several stages and performed using Maptek Point Studio v2022 software. Firstly, we applied a translation and three rotations (Figure 13a,b) to place the FEI-SBs mesh approximately aligned with the FEI-GCPs mesh. Subsequently, we performed an automatic adjustment of the FEI-SBs mesh to the other so that both meshes shared the same CRS (Figure 13c). Finally, the distances between the two meshes were calculated (Figure 13d).
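A minimal sketch of a mesh-to-mesh comparison via nearest-neighbour distances between vertex samples; this is a coarse stand-in for the point-to-surface distances computed by the software used in this study:

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_distance_stats(verts_ref, verts_test):
    """Unsigned nearest-neighbour distance from each test vertex to the
    reference mesh vertices (n x 3 arrays); dense sampling of both meshes
    is assumed for this approximation to be meaningful."""
    tree = cKDTree(np.asarray(verts_ref, float))
    d, _ = tree.query(np.asarray(verts_test, float))
    return {"mean": float(d.mean()), "std": float(d.std()), "max": float(d.max())}
```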
Figure 14 shows the frequency line charts of the distances between both meshes, together with the average distances and standard deviations obtained in all scenes. The differences are lower than 1 cm in most cases, with QH35P showing the largest discrepancies. Therefore, the results demonstrate that FEI-SBs can be used while avoiding or minimizing the use of GCPs, which is of great interest for improving the efficiency of fieldwork.

3.3. Processing Times

This section presents an example of the processing times for each procedure, taking into account all scenes used in this study. Three procedures were considered: stitching, relative orientation and marker projections (measurement of GCPs in all images). Table 4 shows the mean values for the four scenes, including the average number of images, the average number of GCPs and the average times (in seconds) spent on these procedures. The values were calculated using a computer with an i7-13700H CPU at 2.4 GHz, a GeForce RTX 4070 GPU and 32 GB of RAM. The total time column shows lower values for the SI cases than for the FEI cases. The time differences between the SI cases are not very significant, and even less so between SI-HQS and SI-DMS. Considering the FEI cases, the use of GCPs involves a large increase in time with respect to not using them (almost three times longer). The latter case (FEI-SBs) required about 25% more time than the SI cases; however, this increase is largely compensated by the reduction in surveying fieldwork.

4. Conclusions

In this study, we have analyzed the use of 360-degree cameras, considering several cases and aspects related to spherical photogrammetry in complex scenes, more specifically in funerary structures characterized by reduced dimensions and poor illumination conditions. To carry out this study, we focused on a specific polydioptric camera (Kandao Obsidian Go) that allowed us to obtain spherical images using depth maps. The analysis included all types of images related to this camera: the original fisheye images and three types of spherical images obtained using three different stitching techniques. The geometric errors obtained in the analyses show that these types of images are suitable for most heritage documentation studies. Their use represents a great advantage with respect to conventional photogrammetric techniques based on pinhole or perspective cameras, improving the capture tasks by reducing the number of images needed to cover the scene completely. In our opinion, spherical photogrammetry is more suitable in complex spaces due to the reduction in acquisition and processing time and consequently in costs (Table 4). The selection of fisheye or spherical images will depend on the requirements of each project, taking into account the advantages and disadvantages evidenced in this study, which are summarized in Table 5 and described below:
  • Spherical images: The stitching technique selected will largely condition the geometric quality of these images and consequently the results. We suggest the use of stitching techniques based on depth maps because this study has demonstrated a clear improvement in the results with respect to the others. This option is limited to sets of fisheye images with full overlap, which are obtained with a larger number of sensors, such as the camera selected in this study. In this regard, we recommend the use of 360-degree multi-cameras composed of more than two fisheye lenses in order to obtain full overlap. Although the original fisheye images have shown better results, spherical images based on depth maps have shown sufficient accuracy for most heritage documentation studies, with the advantage over fisheye images of a significant reduction in the number of images and consequently in orientation processing time (Table 4). However, these images require additional stitching time, and their orientation requires GCPs, which is a problem in complex scenes. In our opinion, spherical images can be used in blocks composed of a large number of images, where the distribution and measurement of GCPs is not a significant problem. In any case, in this context, the use of spherical images should be subordinated to the application of stitching techniques based on depth maps.
  • Fisheye images: The results have shown a higher geometric quality in the orientation processes, although the larger number of images implies a longer processing time (Table 4). We have also detected greater difficulties in the orientation processes when constraints between sensors are not used; in some cases, tie points had to be included manually to complete the relative orientation. On the other hand, the results in the scenes studied have shown that using the extrinsic parameters to define constraints based on the distances between sensors (scale bars) facilitates the relative orientation and reduces the use of GCPs to those necessary for georeferencing the block with a 3D rigid transformation (a minimum of three points). This has been confirmed by the transformation residuals and by the comparison of the 3D meshes obtained in all scenes. In summary, the reduction in surveying fieldwork, and consequently in costs, is evident. In our opinion, the use of fisheye images is recommended in complex scenes similar to those studied here, using a previous calibration that allows distance constraints to be defined to facilitate the orientation and scaling processes.
Future work will focus on adding new complex indoor and outdoor scenes of larger dimensions in order to analyze orientation problems caused by drift errors when using FEI-SBs. We will also analyze radiometric aspects when using this type of image in complex indoor scenes, which is very important when these images are used to obtain realistic textures for modeling. We also suggest analyzing images extracted from video to improve capture efficiency and, more specifically, considering these acquisition techniques in complex indoor trajectories.

Author Contributions

Conceptualization, All authors; methodology, All authors; software, All authors; validation, All authors; formal analysis, All authors; investigation, All authors; resources, All authors; data curation, All authors; writing—original draft preparation, A.T.M.-C.; writing—review and editing, All authors; visualization, All authors; supervision, All authors; project administration, J.L.P.-G. and J.D.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to express their gratitude for the support of the Qubbet el-Hawa Research Project, developed during the last 15 years by the University of Jaén (Spain).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ullman, S. The interpretation of structure from motion. Proc. R. Soc. B 1979, 203, 405–426. [Google Scholar]
  2. Koenderink, J.J.; Van Doorn, A.J. Affine structure from motion. J. Opt. Soc. Am. A 1991, 8, 377–385. [Google Scholar] [CrossRef]
  3. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  4. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: London, UK, 2011. [Google Scholar]
  5. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
  6. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17 June 2006. [Google Scholar]
  7. Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends Comput. Graph. Vis. 2015, 9, 1–148. [Google Scholar]
  8. Brutto, M.L.; Meli, P. Computer vision tools for 3D modelling in archaeology. Int. J. Herit. Digit. Era 2012, 1, 1–6. [Google Scholar] [CrossRef]
  9. Green, S.; Bevan, A.; Shapland, M. A comparative assessment of structure from motion methods for archaeological research. J. Archaeol. Sci. 2014, 46, 173–181. [Google Scholar] [CrossRef]
  10. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar]
  11. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar]
  12. Campana, S. Drones in Archaeology. State-of-the-art and Future Perspectives. Archaeol. Prospect. 2017, 24, 275–296. [Google Scholar]
  13. Remondino, F. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef]
  14. Hassani, F.; Moser, M.; Rampold, R.; Wu, C. Documentation of cultural heritage; techniques, potentials, and constraints. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2015, XL-5/W7, 207–214. [Google Scholar]
  15. Kadobayashi, R.; Kochi, N.; Otani, H.; Furukawa, R. Comparison and evaluation of laser scanning and photogrammetry and their combined use for digital recording of cultural heritage. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2004, 35, 401–406. [Google Scholar]
  16. Ahmon, J. The application of short-range 3D laser scanning for archaeological replica production: The Egyptian tomb of Seti I. Photogramm. Rec. 2004, 19, 111–127. [Google Scholar]
  17. Alshawabkeh, Y.; Haala, N. Integration of digital photogrammetry and laser scanning for heritage documentation. Int. Arch. Photogramm. Remote Sens. 2004, 35, 1–6. [Google Scholar]
  18. Guarnieri, A.; Remondino, F.; Vettore, A. Digital photogrammetry and TLS data fusion applied to Cultural Heritage 3D modeling. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2006, 36 Pt 5, 1–6. [Google Scholar]
  19. Grussenmeyer, P.; Landes, T.; Voegtle, T.; Ringle, K. Comparison methods of terrestrial laser scanning, photogrammetry and tacheometry data for recording of cultural heritage buildings. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2008, XXXVII/B5, 213–218. [Google Scholar]
  20. Fernández-Palacios, B.J.; Rizzi, A.; Remondino, F. Etruscans in 3D-Surveying and 3D modeling for a better access and understanding of heritage. Virtual Archaeol. Rev. 2013, 4, 85–89. [Google Scholar] [CrossRef]
  21. Nabil, M.; Betrò, M.; Metwallya, M.N. 3D reconstruction of ancient Egyptian rockcut tombs: The case of Midan 05. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2013, XL-5/W2, 443–447. [Google Scholar]
  22. De Lima, R.; Vergauwen, M. From TLS recoding to VR environment for documentation of the Governor’s Tombs in Dayr al-Barsha, Egypt. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16 October 2018. [Google Scholar]
  23. Gómez-Lahoz, J.G.; González-Aguilera, D. Recovering traditions in the digital era: The use of blimps for modelling the archaeological cultural heritage. J. Archaeol. Sci. 2009, 36, 100–109. [Google Scholar] [CrossRef]
  24. Mozas-Calvache, A.T.; Pérez-García, J.L.; Cardernal-Escarcena, F.J.; Delgado, J.; Mata de Castro, E. Comparison of Low Altitude Photogrammetric Methods for Obtaining Dems and Orthoimages of Archaeological Sites. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2012, XXXIX-B5, 577–581. [Google Scholar]
  25. Martínez, S.; Ortiz, J.; Gil, M.L.; Rego, M.T. Recording complex structures using close range photogrammetry: The cathedral of Santiago de Compostela. Photogramm. Rec. 2013, 28, 375–395. [Google Scholar] [CrossRef]
  26. Fiorillo, F.; Limongiello, M.; Fernández-Palacios, B.J. Testing GoPro for 3D model reconstruction in narrow spaces. Acta IMEKO 2016, 5, 64–70. [Google Scholar] [CrossRef]
  27. Pérez-García, J.L.; Mozas-Calvache, A.T.; Barba-Colmenero, V.; Jiménez-Serrano, A. Photogrammetric studies of inaccessible sites in archaeology: Case study of burial chambers in Qubbet el-Hawa (Aswan, Egypt). J. Archaeol. Sci. 2019, 102, 1–10. [Google Scholar]
  28. Boulianne, M.; Nolette, C.; Agnard, J.P.; Brindamour, M. Hemispherical photographs used for mapping confined spaces. Photogramm. Eng. Remote Sens. 1997, 63, 1103–1108. [Google Scholar]
  29. Kedzierski, M.; Waczykowski, P. Fisheye lens camera system application to cultural heritage data acquisition. In Proceedings of the XXI International Cipa Symposium, Athens, Greece, 1–6 October 2007. [Google Scholar]
  30. Kedzierski, M.; Fryskowska, A. Application of digital camera with fisheye lens in close range photogrammetry. In Proceedings of the ASPRS 2009 Annual Conference, Baltimore, MD, USA, 9–13 March 2009. [Google Scholar]
  31. Georgantas, A.; Brédif, M.; Pierrot-Desseilligny, M. An accuracy assessment of automated photogrammetric techniques for 3D modelling of complex interiors. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2012, 39, 23–28. [Google Scholar] [CrossRef]
  32. Covas, J.; Ferreira, V.; Mateus, L. 3D reconstruction with fisheye images strategies to survey complex heritage buildings. In Proceedings of the Digital Heritage 2015, Granada, Spain, 28 September–2 October 2015. [Google Scholar]
  33. Perfetti, L.; Polari, C.; Fassi, F. Fisheye Photogrammetry: Tests and Methodologies for the Survey of Narrow Spaces. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W3, 573–580. [Google Scholar]
  34. Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C. Testing different survey techniques to model architectonic narrow spaces. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W5, 505–511. [Google Scholar]
  35. Perfetti, L.; Polari, C.; Fassi, F.; Troisi, S.; Baiocchi, V.; Del Pizzo, S.; Giannone, F.; Barazzetti, L.; Previtali, M.; Roncoroni, F. Fisheye Photogrammetry to Survey Narrow Spaces in Architecture and a Hypogea Environment. In Latest Developments in Reality-Based 3D Surveying and Modelling; MDPI: Basel, Switzerland, 2018; pp. 3–28. [Google Scholar]
  36. Alessandri, L.; Baiocchi, V.; Del Pizzo, S.; Rolfo, M.F.; Troisi, S. Photogrammetric survey with fisheye lens for the characterization of the la Sassa cave. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-2/W9, 25–32. [Google Scholar]
  37. León-Vega, H.A.; Rodríguez-Laitón, M.I. Fisheye Lens Image Capture Analysis for Indoor 3d Reconstruction and Evaluation. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, 42, 179–186. [Google Scholar] [CrossRef]
  38. Perfetti, L.; Fassi, F.; Rossi, C. Fisheye Photogrammetry to Generate Low–Cost DTMs. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-2/W17, 257–263. [Google Scholar]
  39. Gómez-López, J.M.; Pérez-García, J.L.; Mozas-Calvache, A.T.; Vico-García, D. Documentation of cultural heritage through the fusion of geomatic techniques. Case study of the cloister of “Santo Domingo” (Jaén, Spain). Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2023, XLVIII-M-2-2023, 677–683. [Google Scholar]
  40. Kossieris, S.; Kourounioti, O.; Agrafiotis, P.; Georgopoulos, A. Developing a low-cost system for 3D data acquisition. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W8, 119–126. [Google Scholar]
  41. Barazzetti, L.; Previtali, M.; Roncoroni, F. 3D Modelling with the Samsung Gear 360. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2-W3, 85–90. [Google Scholar]
  42. Barazzetti, L.; Previtali, M.; Roncoroni, F. Can we use low-cost 360 degree cameras to create accurate 3D models? Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2018, XLII-2, 69–75. [Google Scholar]
  43. Fangi, G.; Pierdicca, R.; Sturari, M.; Malinverni, E.S. Improving spherical photogrammetry using 360° omni-cameras: Use cases and new applications. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2018, XLII-2, 331–337. [Google Scholar]
  44. Barazzetti, L.; Previtali, M.; Roncoroni, F.; Valente, R. Connecting inside and outside through 360° imagery for close-range photogrammetry. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-2/W9, 87–92. [Google Scholar]
  45. Cantatore, E.; Lasorella, M.; Fatiguso, F. Virtual reality to support technical knowledge in cultural heritage. The case study of cryptoporticus in the archaeological site of Egnatia (Italy). Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2020, XLIV-M-1-2020, 465–472. [Google Scholar]
  46. Bertellini, B.; Gottardi, C.; Vernier, P. 3D survey techniques for the conservation and the enhancement of a Venetian historical architecture. Appl. Geomat. 2019, 12, 53–68. [Google Scholar] [CrossRef]
  47. Fangi, G. The multi-image spherical panoramas as a tool for architectural survey. In Proceedings of the 21st CIPA Symposium, Athens, Greece, 1–6 October 2007. [Google Scholar]
  48. D’Annibale, E.; Fangi, G. Interactive modelling by projection of oriented spherical panorama. In Proceedings of the ISPRS International Workshop on 3D Virtual Reconstruction and Visualization of Complex Architectures (3D-Arch’2009), Trento, Italy, 25–29 February 2009. [Google Scholar]
  49. Fangi, G. Further Developments of the Spherical Photogrammetry for Cultural Heritage. In Proceedings of the XXII CIPA Symposium, Kyoto, Japan, 11–15 October 2009; pp. 11–15. [Google Scholar]
  50. Barazzetti, L.; Fangi, G.; Remondino, F.; Scaioni, M. Automation in multi-image spherical photogrammetry for 3D architectural reconstructions. In Proceedings of the 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST), Paris, France, 21–24 September 2010. [Google Scholar]
  51. Fangi, G.; Nardinocchi, C. Photogrammetric Processing of Spherical Panoramas. Photogramm. Rec. 2013, 28, 293–311. [Google Scholar] [CrossRef]
  52. Jiang, S.; Li, Y.; Weng, D.; You, K.; Chen, W. 3D reconstruction of spherical images: A review of techniques, applications, and prospects. arXiv 2023, arXiv:2302.04495. [Google Scholar] [CrossRef]
  53. Scaramuzza, D. Omnidirectional camera. In Computer Vision; Springer: Berlin, Germany, 2014; pp. 552–560. [Google Scholar]
  54. Herban, S.; Costantino, D.; Alfio, V.S.; Pepe, M. Use of low-cost spherical cameras for the digitisation of cultural heritage structures into 3d point clouds. J. Imaging 2022, 8, 13. [Google Scholar] [CrossRef]
  55. Pérez Ramos, A.; Robleda Prieto, G. Only image based for the 3D metric survey of gothic structures by using frame cameras and panoramic cameras. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2016, XLI-B5, 363–370. [Google Scholar]
  56. Barazzetti, L.; Previtali, M.; Roncoroni, F. Fisheye lenses for 3D modeling: Evaluations and considerations. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W3, 79–84. [Google Scholar]
  57. Sun, Z.; Zhang, Y. Accuracy evaluation of videogrammetry using a low-cost spherical camera for narrow architectural heritage: An observational study with variable baselines and blur filters. Sensors 2019, 19, 496. [Google Scholar] [CrossRef]
Figure 1. Methodology proposed in this study.
Figure 2. The 360-degree camera used in this study: (a) general view of the Kandao Obsidian Go camera; (b) top view scheme; (c) view of the stitching menu of Kandao Studio v2.7.
Figure 3. Types of images used in this study: fisheye images (FEI), spherical images by fast stitching (SI-FS), spherical images by high-quality stitching (SI-HQS) and spherical images using depth map stitching (SI-DMS).
Figure 4. The Necropolis of Qubbet el-Hawa (Aswan, Egypt): (a) location; (b) general view of the hill; (c) access courtyard of a burial structure; (d) burial chamber.
Figure 5. Scenes used in this study: (a) QH23; (b) QH33SB; (c) QH33SP; (d) QH35P.
Figure 6. Examples of the illumination system mounted on the camera.
Figure 7. Calibration pattern: (a) top view of the distribution of GCPs, camera positions and fisheye sensors; (b) distribution of GCPs in one fisheye image captured from one position.
Figure 8. Displacement vectors obtained between homologous points extracted from SI-HQS and SI-DMS: (a) SI-HQS; (b) detailed view of SI-HQS; (c) SI-DMS; (d) detailed view of SI-DMS.
Figure 9. Histogram of distances between SI-HQS and SI-DMS.
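Figures 8 and 9 quantify the displacements between homologous points identified on the spherical images produced by the two stitching modes. The following is a minimal sketch of this kind of check, assuming OpenCV's SIFT detector with a brute-force matcher and Lowe's ratio test; the file names and thresholds are illustrative and not those used in the study.

```python
import cv2
import numpy as np

# Load the two spherical images produced by different stitching modes
# (file names are placeholders).
img_hqs = cv2.imread("pano_hqs.jpg", cv2.IMREAD_GRAYSCALE)
img_dms = cv2.imread("pano_dms.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe features on both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_hqs, None)
kp2, des2 = sift.detectAndCompute(img_dms, None)

# Match descriptors and keep only unambiguous matches (ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Displacement vector of each homologous point between the two stitchings.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
vectors = pts2 - pts1
distances = np.linalg.norm(vectors, axis=1)

# Summary and histogram of displacement magnitudes (pixels), as in Figure 9.
print("mean displacement (px):", distances.mean())
print("max displacement (px):", distances.max())
hist, edges = np.histogram(distances, bins=20)
```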
Figure 10. RMSE at the CPs for the different types of images and study areas considered: (a) all types of images; (b) SI-DMS and FEI.
Figure 11. Photogrammetric orientation procedure using scale bars in Agisoft Metashape v2 software: (a) photogrammetric block; (b) scale bars in two captures.
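Figure 11 illustrates the use of scale bars, i.e., distance constraints between the projection centres of the fisheye sensors of a capture. The snippet below is a minimal sketch of how such a constraint could be set up with the Agisoft Metashape Python API, assuming that addScalebar accepts two cameras as endpoints; the camera labels and numeric values are placeholders loosely based on Table 3, not the actual script used in the study.

```python
import Metashape

# Minimal sketch (see lead-in): a scale bar between the projection centres
# of two fisheye sensors belonging to the same capture.
doc = Metashape.app.document
chunk = doc.chunk

# Placeholder labels for two sensor images of the same capture.
cams = {c.label: c for c in chunk.cameras}
cam_a = cams["capture01_sensor1"]
cam_b = cams["capture01_sensor2"]

# Create the scale bar and assign the calibrated distance and its precision.
sb = chunk.addScalebar(cam_a, cam_b)
sb.reference.distance = 0.0655   # metres, from the extrinsic calibration
sb.reference.accuracy = 0.0003   # metres, standard deviation of that distance

chunk.updateTransform()
```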
Figure 12. Distances obtained after transformation.
Figure 13. Example of meshes obtained using FEI-GCPs and FEI-SBs: (a) translation; (b) rotations; (c) automatic adjustment; (d) distances between meshes.
Figure 14. Histograms of distances between meshes.
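Figures 13 and 14 compare the meshes obtained with the two FEI configurations once both are expressed in the same reference frame. The sketch below approximates that comparison with a simple nearest-vertex distance computed through a k-d tree; the file names are placeholders, and a true cloud-to-mesh distance would evaluate distances to the triangles rather than to the vertices.

```python
import numpy as np
from scipy.spatial import cKDTree

# Vertex arrays (N, 3) of the two meshes, already in the same reference frame;
# how they are exported/loaded is outside this sketch (placeholder files).
verts_gcp = np.loadtxt("mesh_fei_gcps_vertices.txt")
verts_sb = np.loadtxt("mesh_fei_sbs_vertices.txt")

# Nearest-neighbour distance from every vertex of one mesh to the other.
tree = cKDTree(verts_gcp)
dist, _ = tree.query(verts_sb)

# Summary statistics and histogram, comparable in spirit to Figure 14.
print("mean (m):", dist.mean())
print("std  (m):", dist.std())
print("95th percentile (m):", np.percentile(dist, 95))
hist, edges = np.histogram(dist, bins=30)
```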
Table 1. Characteristics of the scenes used in this study.

Scene     Captures   Average Distance (m)   GCPs   CPs
QH23      19         1.9                    14     4
QH33SB    20         3.4                    14     5
QH33SP    14         1.6                     9     4
QH35P     28         2.1                    19     4
Table 2. Intrinsic parameters.

Sensor   f (pixels)   cx (pixels)   cy (pixels)   K1              K2              K3              K4
1        1110.667      23.296        19.106       −5.9095 × 10⁻²   −1.6487 × 10⁻³    1.2992 × 10⁻⁴   −8.3588 × 10⁻⁷
2        1112.981     −25.230         4.259       −6.0611 × 10⁻²   −6.6440 × 10⁻⁴   −3.1994 × 10⁻⁴    6.4145 × 10⁻⁵
3        1106.498     −25.919        34.177       −5.8917 × 10⁻²   −2.0134 × 10⁻³    1.3292 × 10⁻⁴    1.7086 × 10⁻⁵
4        1106.968     −26.215       −39.301       −5.9852 × 10⁻²   −6.9513 × 10⁻⁴   −4.8518 × 10⁻⁴    1.1370 × 10⁻⁴
5        1118.038     −57.224         0.222       −6.0680 × 10⁻²   −1.3719 × 10⁻³    7.0947 × 10⁻⁵    7.0671 × 10⁻⁷
6        1110.727      24.278       −60.529       −5.9773 × 10⁻²   −2.6219 × 10⁻³    6.0588 × 10⁻⁴   −7.5270 × 10⁻⁵
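The parameters in Table 2 define the interior orientation of each fisheye sensor: principal distance f, principal point offsets (cx, cy) and radial coefficients K1 to K4. As a reading aid, the sketch below projects a 3D point with an equidistant fisheye model and odd-order polynomial distortion in the incidence angle; this is an assumed, simplified formulation, and the model applied by the processing software may differ. The image size used in the example is also illustrative.

```python
import numpy as np

def fisheye_project(X, f, cx, cy, K, image_size):
    """Project a 3D point in the camera frame to pixel coordinates.

    Equidistant fisheye projection with polynomial distortion in the incidence
    angle theta (a simplification; the software's own model may differ).
    X = (Xc, Yc, Zc); f in pixels; (cx, cy) principal point offsets in pixels;
    K = (K1, K2, K3, K4).
    """
    Xc, Yc, Zc = X
    theta = np.arctan2(np.hypot(Xc, Yc), Zc)   # angle from the optical axis
    phi = np.arctan2(Yc, Xc)                   # azimuth around the axis
    k1, k2, k3, k4 = K
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    r = f * theta_d                            # radial distance in pixels
    w, h = image_size
    u = w / 2.0 + cx + r * np.cos(phi)
    v = h / 2.0 + cy + r * np.sin(phi)
    return u, v

# Example with the sensor-1 parameters of Table 2 (image size is illustrative).
u, v = fisheye_project(
    X=(0.5, 0.2, 1.0),
    f=1110.667, cx=23.296, cy=19.106,
    K=(-5.9095e-2, -1.6487e-3, 1.2992e-4, -8.3588e-7),
    image_size=(3008, 2256),
)
print(round(u, 1), round(v, 1))
```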
Table 3. Average distances between sensors and standard deviation.

Sensor pair   Distance (m)   STD (m)
1–2           0.0655         0.0002
1–3           0.1132         0.0002
1–4           0.1307         0.0002
1–5           0.1134         0.0003
1–6           0.0655         0.0003
2–3           0.0651         0.0002
2–4           0.1131         0.0002
2–5           0.1308         0.0005
2–6           0.1133         0.0001
3–4           0.0655         0.0002
3–5           0.1134         0.0003
3–6           0.1307         0.0003
4–5           0.0653         0.0003
4–6           0.1130         0.0001
5–6           0.0654         0.0002
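The values in Table 3 are the inter-sensor distances derived from the extrinsic calibration, which are later used as distance constraints. Below is a minimal sketch of how such pairwise means and standard deviations could be tabulated over repeated calibration captures; the hexagonal sensor layout, number of captures and noise level are illustrative placeholders, not the calibration results of the study.

```python
import numpy as np
from itertools import combinations

# Illustrative sensor layout: six projection centres on a ring of radius
# 0.0655 m (consistent in order of magnitude with Table 3).
r = 0.0655
hexagon = np.array([[r * np.cos(a), r * np.sin(a), 0.0]
                    for a in np.radians(np.arange(0, 360, 60))])

# Simulated estimates of the centres for 20 calibration captures,
# shape (n_captures, 6, 3), with small placeholder noise.
rng = np.random.default_rng(0)
centres = hexagon + rng.normal(scale=0.0002, size=(20, 6, 3))

# Mean distance and standard deviation for every sensor pair over all captures.
for i, j in combinations(range(6), 2):
    d = np.linalg.norm(centres[:, i, :] - centres[:, j, :], axis=1)
    print(f"sensors {i + 1}-{j + 1}: mean = {d.mean():.4f} m, std = {d.std():.4f} m")
```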
Table 4. Summary of processing times.

Case       Number of Images   Number of GCPs   Stitching Time (s)   Orientation Time (s)   Marker Projections Time (s)   Total (s)
SI-FS       22                15                 76                   63                    568                           707
SI-HQS      22                15                137                   63                    568                           768
SI-DMS      22                15                143                   63                    568                           774
FEI-GCPs   134                15                  0                 1084                   1980                          3064
FEI-SBs    134                 0                  0                 1084                      0                          1084
Table 5. Summary of the advantages and disadvantages of using each configuration.

Stage                               SI                   FEI-GCPs                                                          FEI-SBs
Pre-processing (obtaining images)   Stitching            No                                                                No
Calibration                         No                   Intrinsic                                                         Complete
Orientation                         Without redundancy   Higher redundancy; problems completing the relative orientation   Higher redundancy
GCPs (photogrammetric)              Yes                  Yes                                                               No
Transformation from a local CRS     No                   No                                                                Yes