Review

UAV Platforms and the SfM-MVS Approach in the 3D Surveys and Modelling: A Review in the Cultural Heritage Field

by Massimiliano Pepe 1, Vincenzo Saverio Alfio 2 and Domenica Costantino 2,*

1 Department of Engineering and Geology (InGeo), “G. d’Annunzio” University of Chieti-Pescara, Viale Pindaro, 42, 65127 Pescara, Italy
2 Dipartimento di Ingegneria Civile, Ambientale, del Territorio, Edile e di Chimica, Polytechnic University of Bari, Via E. Orabona 4, 70125 Bari, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(24), 12886; https://doi.org/10.3390/app122412886
Submission received: 7 November 2022 / Revised: 6 December 2022 / Accepted: 12 December 2022 / Published: 15 December 2022
(This article belongs to the Special Issue 3D Virtual Reconstruction for Archaeological Sites)

Abstract:
In recent years, structure from motion (SfM) and multi-view stereo (MVS) algorithms have been successfully applied to stereo images generated by cameras mounted on unmanned aerial vehicle (UAV) platforms to build 3D models. Indeed, the approach based on the combination of SfM-MVS and UAV-generated images allows for cost-effective acquisition, fast and automated processing, and detailed and accurate reconstruction of 3D models. As a consequence, this approach has become very popular for representation, management, and conservation in the field of cultural heritage (CH). Therefore, this review paper discusses the use of UAV photogrammetry in CH environments with a focus on state-of-the-art trends and best practices in image acquisition technologies and 3D model-building software. In particular, this paper emphasises the different techniques of image acquisition and processing in relation to the different platforms and navigation systems available, and analyses in depth the aspects of 3D reconstruction that efficiently describe the entire photogrammetric process, providing further insights for new applications in different fields, such as structural engineering and the conservation, maintenance, and restoration of sites and structures belonging to the CH field.

1. Introduction

The use of unmanned aircraft systems (UASs) for photogrammetric purposes has become increasingly popular due to the possibility of acquiring geospatial information in a flexible, fast, and detailed way. UASs are known under various names, such as “Unmanned Aerial Vehicle” (UAV), “Remotely-Piloted Aerial System” (RPAS), “drone”, etc. In general, a UAS is a system comprising an unmanned aircraft (UA), a ground control station (GCS), and a communications data link for UA command and control (C2) from the GCS [1]. For this reason, the term UAV refers to the aerial platform, while UAS refers to the complete system of sensors and tools that manages the vehicle and, at the same time, is capable of acquiring geospatial information.
Currently, UASs are applied in numerous fields, such as archaeology, agriculture, civil engineering, architecture, etc. [2,3,4,5], because this aerial platform has, in many cases, replaced traditional sensor installation on board manned aircraft. In this way, the expert has also become a pilot capable of acquiring geospatial information autonomously, making the data acquisition process more efficient. Obviously, the operator must take into account the purposes and characteristics of the work area in order to choose the best UAV platform to be used in the photogrammetric survey phase. Furthermore, if the area to be surveyed is rather large, the use of other aerial platforms, such as ultra-light or light aircraft, must be considered. Already in the data acquisition phase, it is possible to monitor the quality of the photogrammetric process. In addition, the great success of UASs in the photogrammetric field is also due to the development of structure from motion (SfM) and multi-view stereo (MVS) algorithms, which, implemented mainly in commercial software, have made the construction of three-dimensional models and/or high-quality orthophotos easy, intuitive, and fast [6]. The use of UAV photogrammetry in the field of cultural heritage has become a very important approach for the 3D documentation of archaeological excavations, monuments, historical buildings, archaeological sites, and landscapes [7]. The opportunity to survey archaeological structures with UAV platforms, without the need for direct access to the site or the remaining parts of ancient structures, allows for the best possible preservation of archaeological assets and, at the same time, acquisition at high geometric resolution. In addition, images collected by UAVs can be integrated with terrestrial surveying [8].
This manuscript aims to describe an appropriate methodological workflow in the field of UAV surveying, taking into account photogrammetric aspects and technological developments related to aerial platforms and their sensors. In particular, this review paper discusses: (i) the relationship between the UAV platform and the computer vision approach, (ii) techniques and algorithms for reconstructing 3D models, (iii) aspects concerning satellite navigation, with particular regard to automatic planning and geospatial data acquisition, and (iv) the ability to rapidly produce models useful for subsequent analysis in the various fields of application.

1.1. Applications of UAV Photogrammetry in CH Field

Three-dimensional models or orthophotos in high geometric resolution, generated from images acquired by UAS, are presented in many papers. Saleri et al., 2013 [9], described a special procedure applied to the Area of the Theatres in Pompeii (Italy), processing aerial data and reconstructing the 3D model using innovative geomatic technologies; in this latter study, a copter 4 drone (rotary wing/helicopter) was used for the research. Mouget and Lucet [10] described a methodology to obtain information on the architecture of pre-Columbian archaeological sites in Mexico using a Wings S800 hexacopter with a Sony Nex7 camera. Adami et al. [11] described an aerial survey carried out by a DJI Spark drone during the excavations of the Naples (Italy) underground. Indeed, numerous archaeological finds, attributable to different periods of the city of Naples (medieval, Roman, and Greek), were found in the excavations of the “Toledo”, “Municipio”, “Università”, and “Duomo” stations. Kadhim et al. [12] compared two types of survey, LiDAR and UAV photogrammetry with an SfM approach; the experiment was conducted on one of the most remarkable ancient sites in the county of Cornwall (South West England), namely Chun Castle. Dasari et al. [13] described UAV photogrammetry applied to the 20 Ghanpur temples, a group of 12th-century stone temples of different design and size located near Warangal in Telangana (India). To build a 3D model of this structure, images acquired by a DJI Phantom 4 Pro UAS were processed with Pix4Dmapper Desktop software. Kanun et al. reconstructed an ancient Kanytellis village house three-dimensionally using a photogrammetric method with an Anafi Parrot HDR, comparing the results obtained from two different software packages, Agisoft Metashape (Agisoft LLC, St. Petersburg, Russia) and ContextCapture (Bentley Systems, Exton, PA, USA) [14].
Additionally, in the field of architecture, many works have been presented. Pepe et al. [15] described a UAS survey methodology adapted to build, in a simple and fast way, 3D models of complex structures; in particular, they showed a case study of a historical masonry bridge built in the mid-1800s and located in southern Italy. The authors analysed the potential of the developed method and found that it could enable the realisation of a maintenance or restoration project. Baiocchi et al. [16] discussed the integration of geomatic techniques for the location of lost hermitages. To achieve this aim, SfM algorithms were applied to UAV images to produce an orthophoto of seven hermitages belonging to the monastery of Fara in Sabina (Italy). In this latter case, the orthophoto was subsequently compared with the 1820 map belonging to the Gregorian Cadastre. The comparison resulted in the location of two lost hermitages, while the other two are yet to be discovered. In the paper by Ozimek et al. [17], a method aimed at creating an accurate model of historical architecture and landscapes of complex geometry was presented. The authors, through terrestrial and UAV photogrammetric techniques, proposed methods for combining photographs, highlighting the defects resulting from the automatic combination process and possible corrections to be implemented. In addition, this latter paper presented a verification of the reconstruction of a monument with a complex geometric shape, comparing it with a model obtained using a LiDAR point cloud. Photogrammetry from UASs has also been used in the case of the Isabel II dam, a monumental hydraulic structure built in the mid-19th century and located in Spain, in order to investigate the current state of the dam and adjacent constructions; the point cloud obtained from the photogrammetric process, combined with historical data, was used as the basis for generating a historical building information model (HBIM) [18]. Sabil et al. [19] used two types of UAV for the 3D survey of Masjid Tanjung Sembrong and the Teratak Selari Bonda: specifically, a DJI Tello for low-altitude, short-range flights and a DJI Phantom 4 Pro for higher-altitude, longer-range flights were employed to capture the desired points of the aerial images and generate a 3D model of the building.
Therefore, the use of photogrammetry from UAVs represents, as shown in the case studies described above, an increasingly used approach in the documentation, representation, and 3D reconstruction of sites and structures belonging to CH.

1.2. Brief History of UAV Platform

The first functioning UAV in history was built in 1907 by brothers Jacques and Louis Bréguet (one of the founders of Air France) and Professor Charles Richet (an eclectic physician who received the Nobel Prize for his research on anaphylactic shock). This UAV, called the Gyroplane, had major limitations: it was cumbersome, it required four men to stop it, and during its first flight, it lifted just 60 centimetres off the ground.
In the following decades, new unmanned aircraft were developed in Europe, radio-controlled like the Ruston Proctor Aerial Target [20] or launched from a truck using compressed air [21]. In the same years, prototypes of aircraft for military use were also developed in America. The etymology of the word “drone” may go back to the Old English “drān”, meaning a male bee. This original usage was related to the noise that bees produce, and for this reason the word drone also came to indicate a continuous, monotonous, low sound [22].
During the Second World War years [23] and later during the Vietnam War, new drones were produced for military purposes [24].
Since the 2000s, drones have no longer been considered platforms for exclusive military use, and for this reason, the Federal Aviation Administration (FAA) issued the first authorisations for drones for commercial use. These authorisations removed some restrictions imposed on consumer drones, opening up new possibilities for companies and professionals. This opened up new scenarios in the field of aerial platforms, and in 2010, the French company Parrot released its Parrot AR Drone, the first ready-to-fly drone that could be controlled entirely via Wi-Fi using a smartphone (Digital Trends).
In recent years, interest in various purposes, such as land mapping, surveillance and control, and intelligent transport, has grown significantly, and, as a result, new, higher-performance platforms have been developed [25,26].
Figure 1 below shows the historical evolution of UAV platforms.

1.3. Structure and Configuration of UAS

Recent technological development has enabled the construction and implementation of various UAV platforms. The numerous classifications of UASs are, in general, defined according to the nomenclature adopted (civil and scientific use) and to their capacity, size, and flight endurance. In 2010, the U.S. Department of Defense (DoD) classified UASs into five categories, as shown in Table 1.
Austin [28] classified UAVs, according to several parameters such as endurance, range, altitude, and weight, into high altitude long endurance (HALE) UAVs, medium altitude long endurance (MALE) UAVs, tactical UAVs, small UAVs, and mini and micro UAVs. In addition, Eisenbeiss and Sauerbier [29] proposed a further classification considering UAV platforms as unpowered and powered [30,31], as well as lighter or heavier than air (Table 2).
Therefore, as shown in Table 1, several aerial platforms can be used in the field of photogrammetry; the most widely used platforms for acquiring photogrammetric datasets are single-rotor helicopters, multicopters, and fixed-wing aircraft. A multicopter, also called a multirotor, can be considered a type of helicopter with three or more propellers [32]. This basic design can give rise to various configurations. The possible configurations must be realised in such a way as to balance the reaction torque by suitably alternating clockwise and anticlockwise rotating motors; furthermore, the configuration must not create problems with the centre of gravity and mass arrangement. The most popular multicopter is the quadcopter; it has four control inputs, namely the four propeller angular speeds.
In general, the task of being able to take off and move through the air is entrusted to the propellers, usually two-bladed (sometimes three- and four-bladed). The propellers are then operated by means of a brushless-type DC motor that is controlled by a microprocessor and regulates the rotation speed; in the case of quadcopters, two motors move clockwise, and the other two anticlockwise.
Other types of multicopter are hexacopters and octocopters: in the first case, six motors are used, three rotating clockwise and three anticlockwise, while in the second case, power is provided by eight motors driving eight propellers. These configurations provide greater lifting power and stability than quadcopters, with the advantage of being able to land extremely safely [33]. A completely different category is fixed-wing UAVs, which are very similar in shape to an aeroplane and are characterised by flight mechanics based on the lift of their wings. One of the advantages of this type of UAV is longer flight times compared to multicopters, as well as stability even in unfavourable wind conditions; in fact, they are mainly used in the field of land surveying and mapping, where flights can last up to an hour.
Compared to multicopters, fixed-wing models have quite different take-off and landing systems. Apart from hybrid models capable of vertical take-off and landing, fixed-wing models are manually launched or pushed along a runway launch system during take-off (horizontal take-off and landing, HTOL), while during landing, they require small parachutes, nets, or prior planning of the landing point by assigning a correct GNSS (global navigation satellite system) position.
Over the years, various systems have been developed that combine the characteristics of UAVs capable of covering large areas in a short time with the vertical take-off characteristics of quadcopters. This has led to the development of so-called vertical take-off and landing (VTOL) drones. VTOL drones take off and land vertically in any location, thus drastically reducing the space problem for such delicate operations as take-off and landing. In general, a classification of UAVs according to landing, aerodynamics, and weight is depicted in Figure 2 below.

1.4. Organization of the Paper

The manuscript, after analysing the scientific literature and briefly discussing the historical evolution, structure, and configuration of UASs, describes aspects related to the basic components of the UAS navigation system (Section 2) and approaches related to aerial surveying from a UAV platform (Section 3). In fact, the choice of the appropriate UAV platform to be used in the photogrammetric survey of CH must necessarily take into account criteria of flexibility and ease of use, which are related to the nature of the object or structure to be surveyed. In particular, this section presents an overview of the main UAVs on the market and the usual procedures for planning the flight according to the principles of photogrammetry and for verifying the execution of the aerial survey in compliance with aeronautical regulations.
Another aspect of particular importance is the type of dataset acquired and, consequently, the data processing workflow based on SfM-MVS algorithms. Section 4 analyses the aspects of direct and indirect georeferencing, describes the main processes used in 3D reconstruction, and lists some of the photogrammetric software packages available nowadays.
After describing the methodological workflow, analysing UAV platforms and photogrammetric processing, Section 5 finally illustrates some experiences in the construction of 3D point clouds, as well as the potential of these three-dimensional models in the production of outputs for the analysis, representation, management, and valorisation of structures belonging to cultural heritage. Conclusions are provided at the end of the document.

2. Navigation System

2.1. GNSS

UASs are systems capable of precise navigation due to the presence of GNSS receivers on board. To date, only the following GNSS systems provide global coverage: the Global Positioning System (US system), GLONASS (Russian system), Galileo (European system), and BeiDou (Chinese system). There are also regional systems, called SBASs (satellite-based augmentation systems), that improve positioning accuracy through a network of fixed stations that constantly monitor GNSS signals and allow errors to be kept within the thresholds required for navigation applications even in critical situations. In particular, GPS signals, and in some cases GLONASS, are ‘augmented’ at the regional level by services such as WAAS (US regional service), SDCM (Russian regional service), EGNOS (European regional service), MSAS (Japanese regional service), GAGAN (Indian regional service), and BeiDou (Chinese regional service).
GNSS positioning can be achieved in different ways. In the field of aerial photogrammetry, two techniques are most widely used, namely, so-called “Point Positioning” and “Differential Positioning”. The point-positioning technique (absolute positioning of a single point in the assigned reference system, also known as “Single Point Positioning”, SPP) is a method in which observations are made by a single receiver acquiring data from different satellite constellations; the position of the receiver is determined through the use of coded pseudorange measurements [35]. In the last decade, a technique called “Precise Point Positioning” (PPP) has emerged that provides position accuracies at the decimetre level. In PPP systems, the user’s receiver does not use the clock and ephemeris data transmitted by GPS satellites but precise clock and satellite position estimates calculated and provided by an external network, such as that organised by the International GPS Service [36]. The differential positioning technique requires observations to be carried out simultaneously by at least two receivers. In this method, the so-called “baseline”, i.e., the vector connecting the antenna centres of the two receivers, is calculated. In the acquisition process, the two receivers are referred to as the base receiver and the rover receiver. When the GNSS system is mounted on the UAV, it represents the “rover”, while the “base” can be a single GNSS station or a multi-GNSS network. In recent years, dedicated base stations have often been replaced by continuously operating reference station (CORS) services [37,38]. With differential kinematic GNSS solutions, rover positioning can reach an accuracy of a few centimetres [39]. To achieve high-precision real-time positioning using GNSS, several signal enhancement techniques have been developed, such as real-time kinematic (RTK) and post-processed kinematic (PPK). These two approaches, using different frequencies (L1-L2-L5 for GPS, L1-L2 for GLONASS, E1-E5a-E5b for Galileo, B1I-B2I for BeiDou), allow phase ambiguities, atmospheric propagation delays, and receiver clock errors to be resolved. RTK positioning also requires a radio or Internet link between the base and the UAV platform, which is not always available due to link interruptions. On the other hand, in PPK positioning, data processing takes place at the end of the aerophotogrammetric flight mission and allows the accuracy to be improved through post-processing of the GNSS data collected directly on site [40]. Recently, hybrid variants have been developed (e.g., PPK-RTK) with the advantage that, when the drone is no longer receiving real-time corrections [41], it continues the pre-set flight mission by taking all the planned photos; once the drone mission is completed, the photos are stored on a digital device, together with a *.RINEX (Receiver Independent Exchange Format) file, i.e., a file containing the GNSS observations and navigation data of the entire route taken by the drone. The data are then post-processed to obtain the photo coordinates with high accuracy. Several strategies can be used to obtain the position of each UAV image, as shown in Table 3.

2.2. IMU

IMU stands for inertial measurement unit, one of the most important parts of the drone for navigation, since it is able to measure translational, vertical, and rotational movements. The rotational movements are called pitch, yaw, and roll; specifically, pitch is the upward or downward movement of the nose about an axis from wing to wing, yaw is the leftward or rightward movement of the nose about an axis that goes up and down, and roll is the rotation about an axis from the nose to the tail [43]. While the identification of these axes and their rotation is simple in the case of fixed-wing UAVs (Figure 3a,c), it is somewhat more complex in the case of multicopters due to their geometry (Figure 3b,c). In addition to measuring trim (roll, pitch, and yaw), the IMU is able to measure velocity, changes in altitude, and gravitational forces acting on a UAV. Indeed, the IMU typically contains three orthogonal rate-gyroscopes and three orthogonal accelerometers, measuring angular velocity and linear acceleration, respectively [44]. Recent advances in the construction of MEMS (microelectromechanical systems) devices have made it possible to produce inertial navigation systems that are small and lightweight and, consequently, can be efficiently adapted to UAV platforms [45].

2.3. Integrated Navigation Systems

Currently, there are a large number of navigation sensors (GPS, GNSS, and IMUs based on microelectromechanical systems) on the market that are used on board UAVs for autonomous flight. The navigation solution (position and trim, usually stored in a GPX file, GPS eXchange format) estimated by the UAV’s internal sensors (metric accuracy on position and approximately ±2° on trim) is associated with the images [46]. INS/GNSS integration architectures can be classified as: uncoupled, loosely coupled (LC), tightly coupled (TC), and deeply coupled or ultra-tightly coupled [47,48]. A widely used integration scheme for positioning UAV-borne sensors is LC, due to the simplicity of the integration architecture [49]. In this integration strategy, the position and velocity estimated by the GNSS Kalman filter (KF) are processed in the navigation KF to assist the INS (decentralised or cascaded filtering). However, this architecture contains errors because the position and velocity information provided by the GNSS KF is correlated over time; this problem can cause performance degradation or even instability of the navigation KF, so it is necessary to compensate for these correlations [50]. This means that, from an operational point of view, special care must be taken not to lose the lock with the satellites, i.e., the flight mission must be planned carefully by verifying good GNSS satellite coverage and avoiding sharp turns or sudden movements due to strong winds. Indeed, it is necessary not to lose the GNSS raw measurements, especially if the acquisition is based on phase measurements.
A commercial example of a GNSS-assisted inertial solution is the APX-15 UAV from Applanix (Richmond Hill, ON, Canada), which is a platform capable of achieving centimetre accuracy on control points using next-generation GNSS-inertial MEMs sensors and implemented in a single board that is small in size, low power, and weighs only 60 g [51].

2.4. Further Fundamental Components (Compass and Barometers)

The navigation of a UAS is carried out not only with GNSS and inertial navigation but also with a barometer and compass [52].
The barometer sensor recognises the pressure difference between the point where the drone is flying and a reference point and is, therefore, able to estimate the altitude of the UAV. For example, the new generation of barometers for UAVs, such as the MS5611-01BA03, developed by the TE Connectivity company, allows an altitude resolution of 10 cm.
The compass provides accurate readings of magnetic north. Knowing the orientation of the drone is essential, particularly when the drone is distant and it is difficult or impossible to tell by sight which direction it is facing. It is important to check that the compass provides consistent results before each flight. A test to check correct alignment with the direction of north consists of positioning the pilot behind the drone, on the line between the bow and the stern. Once in position, it is necessary to check on the UAV’s management app that the compass is aligned consistently: if the two alignments deviate by more than a few degrees, the initialisation has failed, and consequently, this operation must be repeated until correct alignment is achieved.

3. Aerial Survey by UAS

3.1. Review of Commercial UAS

Today, more and more high-performance UASs are available on the market. In addition, very competitive prices and the improving aerial survey performance of UASs have increased user accessibility. Table 4 shows some of the UASs used in the field of photogrammetry; in particular, for each of them, the type, manufacturer, weight, maximum flight time per battery, maximum speed, type of navigation, and the camera supplied with the drone are reported.

3.2. Mission Planning

Mission planning in UAV photogrammetry applications is the first and essential step to ensure the success of a survey mission. Photogrammetric mission planning can be defined as the process of planning the places to be flown over (waypoints, i.e., reference points in space) and the actions to be performed, such as taking photos [53].
The main steps to perform a survey by UAS for photogrammetry purposes are [54]:
(i) selection of a sensor and platform suitable for the purpose;
(ii) flight plan design;
(iii) assessment of factors to be controlled during the flight mission.
The sensors, initially designed and adapted for installation aboard aircraft (single- or twin-engine) or helicopters, have been adapted over the years (in terms of size and weight) to UAV platforms thanks to technological development.
Today, many UAS solutions are present on the market for photogrammetry purposes, as described in Section 3.1. In the photogrammetric environment, an important parameter to take into account is the ground sample distance (GSD), which is the distance between the centre points of two consecutive pixels of an image, measured on the ground (Figure 4a).
Using a pinhole camera model (sometimes called the projective camera model), i.e., a model that relates the physical image space to the physical workspace through a projective transformation [55], it is possible to calculate the GSD [56,57]:

$$GSD = \frac{H}{f}\, s_x,$$

where $H$ is the normal distance from the ground to the perspective centre, $f$ is the focal length of the camera, and $s_x$ is the pixel size.
Graphically, these parameters can be sketched in the following picture (Figure 4b), where the focal length is exaggerated in order to identify the key elements of the photogrammetric approach; details of the focal length relative to the sensor are sketched in Figure 4c.
In fact, from the point of view of flight operations, it is necessary to calculate the absolute height above sea level (ASL) of the flight from knowledge of the mean height above ground level (AGL). This means adding the ground elevation, which can be determined using regional or global digital terrain models, to the flight height determined from the design GSD. The more accurate the digital terrain model, the more closely the design GSD is respected.
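As a simple numerical illustration of these relations, the following Python sketch computes the GSD for a given flight height and, inversely, the AGL height needed for a design GSD; the sensor values (20 mm focal length, 2.4 µm pixel size) and the terrain elevation are assumptions chosen for the example, not values from the text.

```python
# Minimal sketch of the GSD relation GSD = (H / f) * s_x; the numeric
# values below are illustrative assumptions, not a specific sensor.

def gsd(flight_height_m: float, focal_length_mm: float, pixel_size_um: float) -> float:
    """Ground sample distance in metres."""
    return flight_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)

def height_for_gsd(target_gsd_m: float, focal_length_mm: float, pixel_size_um: float) -> float:
    """Invert the relation: AGL flight height needed for a target GSD."""
    return target_gsd_m * (focal_length_mm * 1e-3) / (pixel_size_um * 1e-6)

h_agl = height_for_gsd(0.02, focal_length_mm=20.0, pixel_size_um=2.4)  # 2 cm GSD
ground_elevation = 450.0  # terrain height from a DTM (assumed value, m ASL)
print(f"AGL: {h_agl:.1f} m, ASL: {h_agl + ground_elevation:.1f} m")
print(f"check GSD: {gsd(h_agl, 20.0, 2.4) * 100:.2f} cm")
```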
Once the sensor and platform suitable for the type of survey or investigation have been identified, flight planning can be designed.
To cover the entire area, it is necessary to acquire multiple images. In fact, by acquiring the same scene from two different viewpoints, i.e., from two metric photographs, it is possible to obtain a stereoscopic 3D view, where the distance between the perspective centres of two consecutive photos (in the direction of UAV movement) is defined as the “baseline”.
By varying the length of the baseline, different longitudinal overlaps (i.e., along the flight line) can be obtained. Generally, traditional longitudinal overlap values range from 70% to 90%. The overlap between adjacent parallel strips, called sidelap, takes on values ranging from 50% to 80%.
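Under these definitions, the baseline and the strip spacing follow directly from the image footprint and the chosen overlaps. The sketch below, with an assumed sensor of 5472 × 3648 pixels, is one way to derive them:

```python
# Hedged sketch: baseline between consecutive exposures and spacing
# between adjacent strips from the design overlaps. The sensor size
# (5472 x 3648 px) is an assumed example.

def footprint_m(gsd_m: float, pixels: int) -> float:
    """Ground footprint of the image along one side."""
    return gsd_m * pixels

def baseline_m(gsd_m: float, px_along: int, forward_overlap: float) -> float:
    """Baseline B between consecutive photos for a given forward overlap."""
    return footprint_m(gsd_m, px_along) * (1.0 - forward_overlap)

def strip_spacing_m(gsd_m: float, px_across: int, sidelap: float) -> float:
    """Distance between adjacent parallel strips for a given sidelap."""
    return footprint_m(gsd_m, px_across) * (1.0 - sidelap)

print(baseline_m(0.02, 3648, 0.80))       # ~14.6 m between exposures
print(strip_spacing_m(0.02, 5472, 0.80))  # ~21.9 m between strips
```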
One of the effects that influences the correct planning of a UAV survey is the so-called “motion blur” phenomenon, i.e., the blurring of the scene in a photograph or sequence of frames.
This effect occurs when the drone flies too fast relative to the shutter speed, causing the scene to change substantially during a single camera exposure. In general, the motion blur ($MB$) effect can be defined by the following relationship:

$$MB = CSI \cdot v_{UAV},$$

where $CSI$ is the camera shutter interval and $v_{UAV}$ is the UAV speed.
Although it is difficult to completely eliminate motion blur during a 3D mapping/modelling mission, it is possible to reduce it considerably to obtain better results in post-processing by setting a correct drone speed beforehand to minimise this effect.
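A back-of-the-envelope check of this relation can be coded as follows; the half-pixel blur tolerance used here is a common rule of thumb and an assumption of this sketch, not a value from the text:

```python
# Sketch of the motion-blur relation MB = CSI * v_UAV and of the
# maximum speed keeping the blur below a fraction of the GSD.

def motion_blur_m(shutter_interval_s: float, uav_speed_ms: float) -> float:
    """Ground-space blur accumulated during one exposure."""
    return shutter_interval_s * uav_speed_ms

def max_speed_ms(shutter_interval_s: float, gsd_m: float, max_blur_px: float = 0.5) -> float:
    """Maximum UAV speed keeping blur under max_blur_px pixels of GSD."""
    return max_blur_px * gsd_m / shutter_interval_s

print(motion_blur_m(1 / 500, 10.0))  # 2 cm of blur at 10 m/s with a 1/500 s exposure
print(max_speed_ms(1 / 500, 0.02))   # 5 m/s for half-pixel blur at 2 cm GSD
```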
According to the project, several types of flight plans can be realized: polygon, grid, double grid, circular, and free flight. Polygon and grid flight plans are particularly suitable for surveying generally 2D objects, while double grid and circular plans are used for surveying 3D objects.
An example of flight planning, performed using Pix4D software, concerns the archaeological area of Iuvanum, a Roman-period site located in the countryside of Montenerodomo (41°59′54.26″ N–14°14′59.60″ E), Chieti (Italy), covering an area of approximately 4 hectares.
The advantage of using flight planning software, such as Pix4D, is that it displays the high-resolution geometric orthophoto of the site of interest and the digital terrain model on the display of one’s device (smartphone, tablet, etc.); in this way, the spatial coordinates of the waypoints can be determined automatically and quickly.
Table 5 shows schematically the different flight types applied over the same archaeological area. The same settings were used for the different flight types; in particular, an overlap and sidelap value of 80% and a flight altitude of 80 metres were set in the planning step of the aerial flight mission. In addition, the flight plans were carried out in order to obtain a GSD of approximately 2 cm for nadiral acquisition and roughly 4 cm for oblique acquisition (circular mode).
A review of software for flight and mission planning is reported in Table 6.
Before starting the flight mission, a good GNSS satellite configuration and suitable weather conditions must be checked [63]. In order to check for suitable weather conditions for a photogrammetric flight, several websites and apps are available, such as the uavforecast.com service. However, before executing the flight, in addition to checking the weather and GNSS conditions, it is necessary to verify that the airspace where the photogrammetric mission is to be carried out is free of any restrictions or prohibitions, as described in more detail in the following paragraph (Section 3.3). Once favourable weather and GNSS conditions have been verified and the regular procedures (air rules) followed, the flight plan can be carried out by the UAS. Flight planning operations are organised and controlled by the ground control station (GCS); the GCS manages the launch, landing, flight, and eventual recovery, and handles any emergency situations. Furthermore, the GCS (installed on a dedicated remote controller or implemented on laptop devices, smartphones, and tablets) operates as an interface between the UAS and the outside world, managing communication with the various sensors [64].

3.3. UAV Regulation

Increasingly high-performance and easy to pilot, drones are widely used in photogrammetric surveying for the representation of cultural heritage. Despite such simplifications, there are numerous safety issues; in fact, regulations and laws that restrict the areas of use have been drawn up and passed worldwide. In some countries, such as Syria, Morocco, and Iraq, the use of UASs is banned outright. This issue raises important questions for the protection and preservation of the cultural heritage contained in these countries.
Regulations and laws in different states and continents are constantly changing. A worldwide overview is offered by the map created by Surfshark, a virtual private network service provider, which has collected and catalogued the different drone regulations in force around the world [65]. Nations are coloured according to the approach of the regulations in force in that country, divided into seven different categories: (i) outright ban, (ii) effective ban, (iii) restrictions apply, (iv) visual line of sight required, (v) experimental visual line of sight, (vi) unrestricted (when flying away from private property and airports, under 500 ft/150 m height and with drones weighing less than 250 g), and (vii) no drone-related legislation.
A WebGIS listing the laws, regulations, and restrictions of each country can be found on the droneregulations.info website [66]. Indeed, the user can search by country or directly on the map (Figure 5a) and consult the rules in force in that country (Figure 5b).

4. Three-Dimensional Modelling by the SfM-MVS Approach

4.1. Direct and Indirect Georeferencing

To build 3D models using a pinhole camera model, it is necessary to know the interior orientation (IO) and exterior orientation (EO) of each image. The IO parameters (focal length and principal point offsets) of the camera can be provided directly by the manufacturer or determined through calibration processes. However, the pinhole camera model is valid only when lenses with long focal lengths are used. If more accurate measurements are required, image coordinate perturbations (radial and tangential distortions) must be taken into account [67]. A widely used model in photogrammetry is that proposed by Brown [68], which is based on a polynomial approximation. Knowing the EO of a photo means establishing its spatial position at the moment the picture was taken and, as a consequence, the movements that define the position of a rigid body in space, i.e., three translations (x, y, z) and three trim rotations of the camera (ω, φ, κ) [69]. The EO can be determined by the use of direct or indirect georeferencing [70,71].
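For illustration, the following sketch applies a Brown-type polynomial distortion model to normalised image coordinates; the coefficient values are invented for the example, and in practice they come from camera calibration:

```python
# Minimal sketch of the Brown polynomial distortion model: radial
# coefficients k1..k3 and tangential coefficients p1, p2 applied to
# normalised image coordinates. Coefficient values are illustrative.

def brown_distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Return the distorted image coordinates (x_d, y_d)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Example: a point near the image corner under mild barrel distortion.
print(brown_distort(0.4, 0.3, k1=-0.1, p1=1e-4, p2=1e-4))
```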
In order to provide an accurate external orientation of the collected data, image acquisition systems must be equipped with integrated GPS/INS devices. If the georeferencing process takes place using such devices, this is referred to as direct georeferencing (DG). However, if acquisition systems without integrated devices are available, georeferencing can be carried out through the acquisition of control points in a predefined reference system and post-processing in order to determine the external orientation parameters of the photogrammetric processes. In the latter case, the indirect georeferencing process will be implemented [72].
The DG of the data from the sensor mounted on the drone is conducted through the determination of position and orientation; in this way, each pixel or element acquired can be georeferenced, without any field survey, using the data collected by the GNSS integrated with the measurements of the inertial sensors mounted on the drone. The data collected by GNSS sensors and inertial sensors mounted on UAV platforms can be processed in real time or after the mission; in either case, it is necessary to refer to a GNSS station or a network of GNSS reference stations in order to calculate the position and orientation of the images with high accuracy [73].
It is possible, knowing the various systematic parameters a priori, to determine the coordinates of a map element directly from the coordinates of the acquired images, as shown in the following relation:

$$r_{oA}^{l} = r_{ob}^{l}(t) + R_b^l(t)\left(s_A\, R_c^b\, r_{ca}^c + r_{bc}^b\right).$$

In the formula, $r$ denotes a vector and $R$ a rotation matrix; their superscripts indicate the reference frame, and the subscripts indicate the start and end points of the vector:
$r_{oA}^{l}$ — vector of coordinates of point $A$ in the local level frame (LLF, l-frame);
$r_{ob}^{l}(t)$ — vector of coordinates of the navigation sensors (INS/GPS) in the l-frame;
$s_A$ — scaling factor;
$R_b^l(t)$ — rotation matrix from the b-frame to the l-frame, interpolated at time $t$;
$t$ — exposure time, i.e., the image acquisition time;
$R_c^b$ — rotation matrix from the camera frame (c-frame) to the b-frame, determined by calibration;
$r_{ca}^{c}$ — coordinate vector of point $a$ in the c-frame (i.e., the image coordinate);
$r_{bc}^{b}$ — vector between the IMU centre and the perspective centre of the camera in the b-frame, determined by calibration.
The calibration procedure allows the determination of the rotation matrix $R_c^b$, which can be obtained as the product of the inverse of the rotation matrix provided by the IMU ($R_l^b = (R_b^l)^{T}$) and the rotation matrix $R_c^l$ provided by the conventional bundle adjustment using the control field in the l-frame. Specifically, the rotation matrix $R_c^b$ is expressed by the following relationship [74]:

$$R_c^b = R_l^b\, R_c^l.$$
The vector between the GPS phase centre and the IMU centre is determined through a survey process. The lever-arm vector and boresight angles can be calculated by comparing the position and trim differences between the EO parameters and the interpolated INS/GPS solutions:

$$r_{bc}^{b} = R_l^b \begin{pmatrix} X_{oc}^l - X_{ob}^l \\ Y_{oc}^l - Y_{ob}^l \\ Z_{oc}^l - Z_{ob}^l \end{pmatrix},$$

where:
$r_{bc}^{b}$ — vector to be estimated;
$(X_{ob}^l, Y_{ob}^l, Z_{ob}^l)$ — position vector of the INS centre in the l-frame provided by the integrated INS/GPS POS solution;
$(X_{oc}^l, Y_{oc}^l, Z_{oc}^l)$ — position vector of the camera centre in the l-frame provided by the bundle adjustment.
A sketch of the DG acquisition scheme and several frames is shown in Figure 6.
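To make the direct-georeferencing relation concrete, the following numerical sketch evaluates it with numpy; all the input values (position, rotations, lever arm, scale) are made-up examples, whereas in practice they come from the INS/GNSS solution and the calibration:

```python
import numpy as np

# Hedged sketch of r_oA^l = r_ob^l(t) + R_b^l(t) (s_A R_c^b r_ca^c + r_bc^b).

def direct_georef(r_ob_l, R_b_l, s_A, R_c_b, r_ca_c, r_bc_b):
    """Object coordinates of an image point in the l-frame."""
    return r_ob_l + R_b_l @ (s_A * (R_c_b @ r_ca_c) + r_bc_b)

r_ob_l = np.array([500000.0, 4600000.0, 120.0])  # INS/GPS position (l-frame)
R_b_l  = np.eye(3)                               # b-frame to l-frame rotation at time t
R_c_b  = np.eye(3)                               # camera-to-body (boresight) matrix
r_ca_c = np.array([0.012, -0.008, -0.020])       # image point in the c-frame (m)
r_bc_b = np.array([0.05, 0.00, -0.10])           # IMU-to-camera lever arm (m)
s_A    = 4000.0                                  # scaling factor for this ray

print(direct_georef(r_ob_l, R_b_l, s_A, R_c_b, r_ca_c, r_bc_b))
```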
Indirect georeferencing is traditionally the most widely used procedure, and it is a particularly robust method that uses a series of ground points of known coordinates, called ground control points (GCPs), together with “tie points” between individual frames, to perform the orientation. In general, GCPs are reference points easily recognisable in the images, such as intersections, manholes, antennas (Figure 7a), marks made by tracing an X shape on the ground with spray paint (Figure 7b), panels made of waterproof, high-contrast material (black and white or yellow and black) with a matt finish to reduce reflections and improve visibility (Figure 7c), and coded targets (Figure 7d). Target sizes vary depending on the design GSD; however, their size must be such as to ensure that they can be identified at the flight height from which they are detected. GCPs should be distributed evenly over the area to be surveyed and, depending on the terrain morphology, positioned at different altitudes. More information regarding the use and impact of GCPs in UAV photogrammetry can be found in refs. [75,76]. Subsequently, within the photogrammetric processing software, GCPs must be manually identified on the individual photos and associated with their coordinates in the appropriate reference system.
Currently, various photogrammetric processing software can automatically identify GCPs (coded targets) within the project area through the use of machine learning and computer vision algorithms.
The use of GCPs makes it possible to define a relationship between the reference system of the object and that of the image. In fact, GCPs are points of known coordinates in both reference systems, making the parameters within the collinearity equations the only unknown elements.
The collinearity equations are the constitutive equations of photogrammetry, used in the transition from image space to object space, and express the alignment between the object point, the image point, and the perspective centre of the camera:

$$x_0 = -c\,\frac{\hat r_{11}(\hat X - \hat X_0) + \hat r_{12}(\hat Y - \hat Y_0) + \hat r_{13}(\hat Z - \hat Z_0)}{\hat r_{31}(\hat X - \hat X_0) + \hat r_{32}(\hat Y - \hat Y_0) + \hat r_{33}(\hat Z - \hat Z_0)}, \qquad y_0 = -c\,\frac{\hat r_{21}(\hat X - \hat X_0) + \hat r_{22}(\hat Y - \hat Y_0) + \hat r_{23}(\hat Z - \hat Z_0)}{\hat r_{31}(\hat X - \hat X_0) + \hat r_{32}(\hat Y - \hat Y_0) + \hat r_{33}(\hat Z - \hat Z_0)},$$

where:
$c$ — focal length;
$\hat r_{ij}$ — elements of the rotation matrix;
$\hat X, \hat Y, \hat Z$ — object coordinates of the point;
$\hat X_0, \hat Y_0, \hat Z_0$ — coordinates of the perspective centre in the ground coordinate system.
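A minimal numerical sketch of these equations is given below: it builds a rotation matrix from (ω, φ, κ) and projects an object point into image coordinates; all numeric values are invented for the example, and the sign and rotation conventions are one common choice among several:

```python
import numpy as np

# Sketch of the collinearity equations: image coordinates of an object
# point X seen from a perspective centre X0 with rotation R and focal
# length c. Values and conventions are illustrative.

def rotation_opk(omega, phi, kappa):
    """Rotation matrix from omega, phi, kappa angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

def collinearity(c, R, X, X0):
    """Image coordinates (x, y) of object point X."""
    d = R.T @ (X - X0)      # object point expressed in the camera frame
    return -c * d[0] / d[2], -c * d[1] / d[2]

R  = rotation_opk(0.01, -0.02, 0.50)   # small tilts, 0.5 rad heading
X  = np.array([100.0, 200.0, 10.0])    # object point (ground system)
X0 = np.array([90.0, 195.0, 90.0])     # perspective centre
print(collinearity(0.020, R, X, X0))   # c = 20 mm focal length
```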
The collinearity equations have nine unknown parameters (the orientation angles of the sensor, the coordinates of the perspective centre, and the coordinates of the object point). Once the control points have been chosen, the accuracy of that selection can be assessed through the difference between the image coordinates of the GCPs and the actual coordinates, called the residual. The residuals allow one to estimate both the accuracy of individual GCPs and the overall accuracy of the transformation, expressed as the root mean square (RMS) error. The error can be calculated in map units (metres, degrees) or image units (pixels). The operator must be able to correctly interpret the residuals in order to obtain correct georeferencing of the image. The distribution of GCPs should be as uniform as possible; the best selection is a homogeneous distribution. Starting from the four vertices and gradually proceeding according to a regular mesh will prevent the rectified image from being nonuniform.
The number of ground points to be surveyed must be such that, together with the additional information represented by the tie points, the number of equations exceeds the number of unknowns. The combined presence of GCPs and tie points makes it possible to solve the system of equations, finding the optimal unknown parameters through least squares estimation and ensuring high redundancy and accuracy in the process. This is generally carried out by photogrammetric software in two steps. First, a relative alignment between images is performed, taking advantage of the tie points. Subsequent use of the GCPs allows the relationship between image and object to be identified, binding the photogrammetric block to the object reference system and performing what is called absolute orientation. The equations used in these two operations are entered into a single system called the “bundle block adjustment”, the resolution of which allows the estimation of the optimal unknown parameters and the completion of the orientation. This is a procedure whose objective is to optimise the parameters of the collinearity equations, finding a configuration that ensures the best possible accuracy for the photogrammetric block. Once the EO parameters have been determined with this procedure and the IO parameters have been determined through camera calibration, the collinearity equations can be applied and, as a result, the metric characteristics of the object can be determined with high accuracy.
However, although indirect georeferencing is a robust and established procedure, it shows some disadvantages. Firstly, the estimated EO parameters do not necessarily agree with the physical position and orientation of the camera during image exposure [77]. In addition, the survey execution time must be considered, which combines not only the drone flight time but also the time needed to measure the GCPs on the ground. Moreover, such operations can be very difficult in certain environments due to the complex and diverse morphological characteristics of the terrain, as well as in emergency situations where the rapid determination of image EO parameters is required.
In photogrammetric applications, direct georeferencing offers several advantages over indirect georeferencing: (i) since it is no longer necessary to survey GCPs in the field, there is a significant advantage in terms of time and cost of surveying, (ii) concerning the efficiency of the flight mission, since a significant lateral overlap (sidelap) is not required, it is possible to reduce the number of flight lines per area, (iii) efficiency in integrated orientation and point matching during photogrammetric processing, and (iv) real-time mapping in emergency situations where an urgent response is required.

4.2. Images Acquisition in SfM and MVS Approach

SfM and MVS approaches have become very popular since they can provide fully automated 3D modelling for arbitrary images without any prior knowledge or onsite measurements. Indeed, starting from a collection of stereoscopic images, it is possible to build a 3D point cloud. Figure 8 shows an example of a 3D reconstruction where, for each captured image, location and file name are reported.
The standard error $\sigma_{X,Y,Z}$ of the X, Y, and Z object coordinates of a generic 3D point may be evaluated by the following relation [78]:

$$\sigma_{X,Y,Z} = \frac{q\,Z}{c\,\sqrt{k}}\,\sigma_{p\xi},$$

where:
$q$ — design factor expressing the strength of the camera network (generally between 0.4 and 2);
$Z$ — distance between the camera and the object;
$c$ — focal length;
$k$ — number of images used to determine the same point;
$\sigma_{p\xi}$ — measurement precision of the image coordinates.
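As a quick numerical illustration of this relation, the sketch below evaluates the expected object-space precision; all values (q = 1, Z = 80 m, c = 20 mm, k = 6, half-pixel image precision with a 2.4 µm pixel) are assumptions chosen for the example:

```python
import math

# Sketch of sigma_XYZ = (q * Z) / (c * sqrt(k)) * sigma_image.

def object_precision_m(q, Z, c, k, sigma_image_m):
    """Expected standard error of the object coordinates, in metres."""
    return q * Z / (c * math.sqrt(k)) * sigma_image_m

sigma_img = 0.5 * 2.4e-6  # half a 2.4 um pixel, in metres (assumed)
print(object_precision_m(q=1.0, Z=80.0, c=0.020, k=6, sigma_image_m=sigma_img))
# ~0.002 m, i.e. about 2 mm under these (optimistic) assumptions
```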
In order to obtain a high-quality three-dimensional model, the base-to-depth ratio (B/D) must be increased during image acquisition, and a set of images with converging rather than parallel optical axes must be achieved [78].
In general, the SfM-based approach offers enormous redundancy to compensate for the potentially modest loss of geometric strength. Consequently, in UAV photogrammetry, in order to create a robust and reliable network, it might be useful to create parallel flight planes at different heights [79]. For dealing with SfM problems in computer vision, projective geometry is a fundamental tool, especially in multiple-view geometry. For this reason, before explaining the SfM and MVS process, a brief introduction to epipolar geometry is illustrated.

4.3. Multiple View Geometry

In a pinhole model, a 3D point $X$ is projected to an image point $x$ as:

$$\lambda x = P X,$$

where $\lambda$ is the scalar that represents the inverse depth of the 3D point, and $P$ is the camera matrix, which can be decomposed as:

$$P = K\,[R \mid t],$$

where $R$ is the rotation matrix describing the camera orientation, $t$ is the 3D translation vector describing the position of the camera centre, and $K$ is the intrinsic calibration matrix representing the projection properties of the camera [80].
Thus, epipolar geometry describes the geometric relationships that exist between the stereo images and that depend only on the internal parameters of the cameras and their poses [81].
Considering a point $X$ in 3D space imaged in two views, at $x_1$ in the first and at $x_2$ in the second, it is necessary to establish the relationship between these two points. Figure 9 shows a representation of epipolar geometry: the distance between the two camera centres, $C_1$ and $C_2$, is called the baseline; the intersections of the baseline with the image planes generate the epipoles, $e_1$ and $e_2$. The lines $l_1$ and $l_2$ are called epipolar lines, and the plane containing the baseline and the epipolar lines is called the epipolar plane.
In two-view geometry, the cameras can always be transformed by a homography to simplify the treatment; a common choice is the rigid transformation that places the origin of the reference system and the axes in the first camera. Under this condition, it is possible to write the epipolar constraint equation:

$$x_2^{T}\, F\, x_1 = 0,$$

where $F$, the fundamental matrix, is $K_2^{-T}\, S_t\, R\, K_1^{-1}$, with $S_t$ the skew-symmetric matrix associated with the translation.
If the internal calibration parameters ($K_1$ and $K_2$) are not known, the camera matrices are definable only up to a projective transformation (angles and distances not preserved); otherwise, with a known calibration, the reconstruction will be metric.
In the latter case, the fundamental matrix for the coordinates normalised by the calibration is called the essential matrix and is usually denoted $E$ instead of $F$ to emphasise that it is the calibrated case; with normalised image coordinates, the epipolar constraint becomes $x_{k2}^{T}\, E\, x_{k1} = 0$.
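The following numpy sketch verifies the calibrated epipolar constraint numerically: it builds an essential matrix $E = S_t R$ from an invented relative pose and checks that corresponding normalised points satisfy $x_2^T E x_1 = 0$:

```python
import numpy as np

# Sketch of the epipolar constraint in the calibrated case: E = S_t R,
# with S_t the skew-symmetric matrix of the translation. The relative
# pose and the 3D point are invented for illustration.

def skew(t):
    """Skew-symmetric matrix S_t such that S_t @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

R = np.eye(3)                      # relative rotation (identity here)
t = np.array([1.0, 0.0, 0.0])      # baseline along the x axis
E = skew(t) @ R                    # essential matrix

X = np.array([0.3, 0.2, 5.0])      # a 3D point in the first camera frame
x1 = X / X[2]                      # normalised coordinates, first camera
x2 = R @ X + t
x2 = x2 / x2[2]                    # normalised coordinates, second camera

print(x2 @ E @ x1)                 # ~0: the constraint x2^T E x1 = 0 holds
```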

4.4. SfM and MVS Pipeline

The main steps of a typical SfM pipeline can be summarized as follows: (i) feature extraction for an individual image, (ii) feature matching for each image pair and geometric verification, and (iii) reconstruction initialization [82].
The feature extraction task allows us to identify local features on each image, the so-called key points; the most common algorithm used in SfM software is the scale-invariant feature transform (SIFT) feature detector [83].
The second step, the correspondence search task, can be divided into the following stages:
- Feature matching: determination of the images that show common parts of the scene through the analysis of the key points obtained in the feature extraction phase.
- Geometric verification: using a robust estimation technique such as RANSAC (RANdom SAmple Consensus), the image pairs that potentially overlap are verified [84].
Regarding reconstruction initialization, the main steps are [85]:
- Initialization: once a geometrically verified image pair has been selected, the points in common between the two images are used as input for the construction of the point cloud.
- Image registration: given a three-dimensional set of points and the corresponding two-dimensional projections in the image, the pose of a calibrated camera is estimated (Perspective-n-Point, PnP) in order to register new images to the current model [84].
- Triangulation: this task determines the 3D locations of the matched points via triangulation, which is based on the direct linear transformation (DLT) algorithm [81].
- Bundle adjustment: optimisation of the 3D structure and camera motion of each view using the Levenberg–Marquardt (LM) algorithm [86,87].
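A compressed two-view version of this pipeline can be sketched with OpenCV as follows; the image paths and the intrinsic matrix K are placeholders to be replaced with real data, and a full reconstruction would iterate registration, triangulation, and bundle adjustment over many images:

```python
import cv2
import numpy as np

# Two-view SfM sketch: SIFT features, ratio-test matching, RANSAC
# essential-matrix verification, pose recovery, and DLT triangulation.

K = np.array([[3000.0, 0, 2000.0],
              [0, 3000.0, 1500.0],
              [0, 0, 1.0]])  # assumed intrinsics (calibrated case)

img1 = cv2.imread("uav_001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("uav_002.jpg", cv2.IMREAD_GRAYSCALE)

# (i) feature extraction: SIFT key points and descriptors per image
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# (ii) feature matching with Lowe's ratio test
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# geometric verification: essential matrix estimated with RANSAC
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)

# (iii) initialization: recover the relative pose of the second camera
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# triangulation (DLT) of the verified matches into a sparse point cloud
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape)  # sparse cloud; bundle adjustment would refine it
```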
Multi-view stereo algorithms allow the density of the point cloud previously generated during the SfM process to be increased. These algorithms take a possibly very large set of images and construct a plausible 3D geometry that explains the images under some reasonable assumptions, the most important being scene rigidity [88]. Several MVS algorithms exist in the literature [89], such as volumetric representations [90], Furukawa’s PMVS implementation [91], and Goesele’s multi-view stereo for community photo collections [92]. A very efficient algorithm, able to handle numerous images with high accuracy, was proposed in ref. [93]; it is based on four distinguishable steps: (i) stereo pair selection, (ii) depth-map computation, (iii) depth-map refinement, and (iv) depth-map merging.

4.5. Software for UAV Photogrammetry

Using software based on SfM-MVS algorithms, it is possible to obtain 3D point clouds useful to build realistic 3D models, digital surface models, and other derived technical representations such as orthophotos, features, contours, etc.
In recent years, there has been an increase in the development of new photogrammetric commercial software; as a consequence, more and more high-performance software packages are available on the market. In the following Table 7, a review of the main commercial photogrammetric software is reported.
In addition, open-source software is being continuously developed by the scientific community [110]. Indeed, the development of open-source software is an important research direction aimed at maximising efficiency in the construction of 3D models, making interfaces user-friendly, reducing the need for further processing in other software to obtain the desired outputs, and improving the performance of 3D reconstruction in the case of objects characterised by complex geometries [111]. Table 8 shows several open-source software packages.

4.6. Three-Dimensional Accuracy Assessment Based on GCPs in SfM Photogrammetry

The evaluation of the accuracy of the point cloud obtained from the photogrammetric process is carried out using check points (CPs), which, in the same way as GCPs, are materialised on the area to be surveyed. Indeed, GCPs and CPs can be acquired by GNSS survey or Total Station (TS).
TS is an electronic surveying instrument that combines an electronic distance measuring device (EDM) with an electronic theodolite, integrated with a microprocessor, electronic data collector, and storage system. The TS measures horizontal angles, vertical angles, and slope distance from the TS to the reference point.
Accuracy measurements are based on the variation between the value obtained from the UAV photogrammetric solution and the reference value at the CP considered.
The parameter that quantifies the deviations between the (more accurate) reference data and the UAV-derived data is the root mean square error (RMSE), a statistical index that summarises the magnitude of the differences between estimated and reference values.
In UAV photogrammetry, the evaluation of accuracy is defined by the following relationships [125]:

$$RMSE_x = \sqrt{\frac{\sum_{i=1}^{n}(x_{UAS} - x_S)^2}{n}},$$
$$RMSE_y = \sqrt{\frac{\sum_{i=1}^{n}(y_{UAS} - y_S)^2}{n}},$$
$$RMSE_z = \sqrt{\frac{\sum_{i=1}^{n}(z_{UAS} - z_S)^2}{n}},$$

where:
$n$ — number of CPs tested for each project;
$x_{UAS}, y_{UAS}, z_{UAS}$ — coordinates estimated in the bundle adjustment for the $i$-th CP;
$x_S, y_S, z_S$ — coordinates obtained from the topographic survey for the $i$-th CP.
These RMSE values (see Equations (11)–(13)) show the accuracy assessment in East (X), North (Y), and height (Z) and are carried out on surveyed points that were not used for georeferencing.
The horizontal error (XY), i.e., the error distributed in the two planimetric dimensions, is defined by the relationship [126]:

$$RMSE_{xy} = \sqrt{(RMSE_x)^2 + (RMSE_y)^2}.$$
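These relations translate directly into code; in the sketch below, the check-point coordinates are fabricated example numbers, not survey data:

```python
import numpy as np

# Sketch of the RMSE accuracy assessment on check points (CPs).

uas = np.array([[10.02, 20.01, 5.03],   # x, y, z from the bundle adjustment
                [30.01, 39.98, 7.99],
                [49.97, 60.03, 12.02]])
ref = np.array([[10.00, 20.00, 5.00],   # x, y, z from the topographic survey
                [30.00, 40.00, 8.00],
                [50.00, 60.00, 12.00]])

rmse = np.sqrt(np.mean((uas - ref) ** 2, axis=0))  # RMSE_x, RMSE_y, RMSE_z
rmse_xy = np.hypot(rmse[0], rmse[1])               # horizontal (XY) error
print(f"RMSE x/y/z: {rmse}, RMSE xy: {rmse_xy:.4f} m")
```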

5. Experience in Building 3D Point Cloud and Orthophoto

5.1. Three-Dimensional Reconstruction Using UAV Photogrammetry

Experiences of photogrammetry from UAV platforms are increasingly numerous due to the ability to build detailed and accurate three-dimensional models, even of complex structures.
The prior management and organisation of a photogrammetric survey from UAVs play a fundamental role in the subsequent processing phase of the dataset [127]; in fact, before any aerial survey operation, it is necessary to know in detail the structure to be surveyed and the objective and degree of detail to be achieved in the 3D reconstruction phase. This is due to the fact that in the CH field, the structures being surveyed almost always show different historical–morphological characteristics.
A spire rich in ornaments, statues, and fine architectural details, for example, requires a UAV photogrammetric survey that integrates different acquisition schemes (polygonal, grid, circular, free flight mode) at different heights and with different camera angles; in this way, it is possible to acquire the greatest number of details necessary to obtain an accurate 3D reconstruction.
Table 9 shows some experiences of using UAV photogrammetric surveys of structures belonging to CH.
In particular, some 3D models obtained through an SfM-MVS photogrammetric process are illustrated, indicating characteristics such as the UAS used for the survey, the number of images (camera stations), the type of 3D reconstruction software used, and the number of points in the dense cloud. Each survey was carried out taking into account the objectives of the project and, consequently, with different characteristics.
Another output that can be obtained from the photogrammetric process and is widely used in the representation of CH is the orthophoto. Once a dense, coloured point cloud has been constructed, a continuous 3D model (triangular or tetrahedral mesh) can be produced. From this model, an orthographic projection can be realised, i.e., 3D objects can be represented in two dimensions on the desired plane. Lastly, the orthophoto can be edited with raster graphics software in order to improve the graphic quality and, if necessary, remove elements outside the survey area. An example of orthophotos generated from the 3D mesh model, using different projection planes, concerns a Baroque building erected in the second half of the 18th century in the town of Nardò (Italy) (Figure 10).
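To make the idea of orthographic projection concrete, here is a minimal Python sketch (our own simplification, not the pipeline used in the works reviewed) that projects a coloured point cloud onto the XY plane and rasterises it at a given pixel size; production software projects the textured mesh and handles occlusions and resampling far more rigorously:

```python
import numpy as np

def ortho_project(points, colors, pixel_size, depth_axis=2):
    """Crude orthographic projection of a coloured point cloud onto a plane.

    points     -- (n, 3) coordinates; colors -- (n, 3) RGB values in 0..255
    pixel_size -- pixel size on the projection plane, same unit as points
    depth_axis -- coordinate dropped by the projection (2 = XY plane, top view)
    The highest point wins in each pixel (a minimal z-buffer).
    """
    points = np.asarray(points, dtype=float)
    colors = np.asarray(colors, dtype=np.uint8)
    plane_axes = [a for a in (0, 1, 2) if a != depth_axis]
    uv, depth = points[:, plane_axes], points[:, depth_axis]
    ij = np.floor((uv - uv.min(axis=0)) / pixel_size).astype(int)
    h, w = ij.max(axis=0) + 1
    image = np.zeros((h, w, 3), dtype=np.uint8)
    zbuf = np.full((h, w), -np.inf)
    for (i, j), z, c in zip(ij, depth, colors):
        if z > zbuf[i, j]:               # keep only the topmost point per pixel
            zbuf[i, j] = z
            image[i, j] = c
    return image
```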

5.2. Beyond 3D Modelling

UAV photogrammetry enables the construction of 3D models even in critical scenarios, such as the reconstruction of a masonry bridge from the late 1800s in a hard-to-reach area [128]. Once the 3D model has been built as a mesh from the point cloud, it is possible to create models useful for heritage building information modelling (HBIM). For example, Karachaliou et al. [129], through a UAV photogrammetric approach and the implementation of a set of architectural building information, developed an HBIM model of the ‘Averof Museum of Neo-Hellenic Art’ located in Metsovo (Greece). Donato et al. [130] discuss the construction of a BIM model of the Giorgini Bridge in Castiglione della Pescaia, Grosseto (Italy), through the integration of UAV photogrammetry and terrestrial laser scanning. Themistocleous et al. [131] describe an accurate, simple, and cost-effective method to document CH sites and generate 3D digital models from images acquired by the UAV platform; the 3D model was managed in the Autodesk Revit software environment to generate a BIM of the church under consideration.
Other applications concern the transformation of the 3D model generated from UAV photogrammetry into a finite element model (FEM) for finite element analysis (FEA) [132,133]. Meschini et al. [134] describe an experiment conducted at the Convent of San Francesco in the centre of Monterubbiano (Marche, Italy). In particular, a 3D survey integrating two different acquisition techniques was carried out: a TOF laser scanner and an image-based model from a UAV platform equipped with a digital camera. This approach was developed in order to obtain a dense 3D model from which high-poly and low-poly 3D models could be generated, suitable both for historical documentation and photorealistic representation and as valid support for risk assessment based on the finite element method.
Pepe et al. [135] describe an efficient method for obtaining a FEM model of a bridge from a point cloud generated by UAV photogrammetry. In particular, the 3D point cloud was generated using a Xiaomi Mi 4K UAS, a multicopter equipped with a 12 MP camera, and processed in Agisoft Metashape software. The point cloud was first imported into Rhinoceros software and, thanks to some plug-ins developed in Grasshopper, the parametric model was built; subsequently, the bridge model was imported into Midas software to build a model suitable for FEM analysis (Figure 11).
A widely used 2D product derived from orthophotos is the CAD drawing, which is particularly useful in the fields of archaeology and architecture [136]. An architectural application was realised for the CAD representation of the staircase of the church of San Domenico in Taranto (Italy), as shown in Figure 12.
This photogrammetric survey was carried out with a Parrot Anafi drone in order to cover each element of the staircase, taking zonal photos of the structural elements where necessary.
The post-processing, which led to the construction of an orthophoto with a pixel size of 5 mm, was carried out with Metashape software; from this orthophoto, the CAD restitution was produced in such a way as to highlight the architectural elements that characterise the structure.

6. Conclusions

This review comprehensively examined both the hardware component of the UAV platform (drone design, sensors for navigation, and image acquisition) and the software component, covering the design and realisation of the flight as well as the image processing phase using SfM-MVS algorithms. Many applications of UAV photogrammetry in the field of CH have been shown; each case study differs in morphological characteristics, extent, and complexity of the survey operations.
The application of SfM-MVS algorithms to images acquired with UAVs is a highly effective combination for the construction of accurate and detailed 3D models. Indeed, UAV photogrammetry for the representation of CH is a scientifically growing field, as shown by the increase in manuscripts in the various databases. A bibliometric analysis carried out on the Scopus database showed that the use of the UAV platform is constantly increasing, as also shown by Nex et al. [137], who highlighted the presence of more than 80,000 manuscripts since 2001. As far as photogrammetry from UAVs is concerned, a rather linear growth trend emerged, with approximately 2400 manuscripts published since 2012. This trend is also reflected in the field of UAV photogrammetry for CH documentation, with a clear increase from 2016 onwards; in recent years, UAV photogrammetry has therefore become more and more widespread in CH applications, as attested by the quality of the 3D models and orthophotos generated from UAV images.
In this paper, various 3D models and orthophotos applied to CH structures were shown, and different applications and processing steps were derived from these models. For example, from point clouds it is possible to model elements and structures belonging to the CH field in order to obtain parametric 3D models useful for subsequent analyses in the BIM and FEM fields. Following this line of research, the photogrammetric survey from UAV applied to CH plays a fundamental role in the management, valorisation, and conservative maintenance of the historical–architectural heritage.
From this point of view, the UAV photogrammetric approach not only reduces acquisition times but also makes it possible to map areas that are difficult to access, and it is more versatile in situations where the use of other sensors is strongly constrained or inadvisable. For example, UAV photogrammetry allows acquisition at modest heights (4 to 5 m), making it highly competitive compared with a survey performed with TLS, which requires platforms or instruments capable of reaching the necessary heights. At greater heights, on the other hand, the UAV platform becomes an even more competitive tool than traditional aerial photogrammetry, with the advantage of acquiring images at relatively low altitudes and, consequently, with high geometric resolution. Furthermore, the possibility of equipping UAVs with a wide range of other sensors (such as ultra-high-resolution digital cameras, thermal imaging cameras, multispectral sensors, and laser scanner sensors) makes this solution useful in the fields of photogrammetry and remote sensing.
A bibliographic analysis of the documents indexed in the Scopus database, using the keywords “UAV”, “UAS”, “Drones”, and “Cultural Heritage”, was carried out in order to highlight the type of sensor (or multi-sensor payload), the UAV platform, and the purposes of the photogrammetric surveys of CH.
Regarding the sensors, it was found that, in the CH field, in 80% of the cases examined the payload consisted of an RGB camera, usually integrated into the platform itself; in the remaining 20%, the UAV platform was equipped with an RGB camera together with hyperspectral, thermal, or LiDAR sensors.
Bibliographic analysis of the type of UAV showed that multicopters (in most cases quadricopters) represent the most widely used platform. In fact, as highlighted in the pie chart (Figure 13a), only a rather modest share (2%) of studies employs hybrid UAV platforms (generally self-built and equipped with multi-sensor systems), and approximately 7% use fixed-wing drones. The latter type is widely used in contexts where the areas to be surveyed for mapping and 3D reconstruction, especially in the archaeological field, are extremely extensive. The analysis also showed that approximately 50% of the research analysed focuses on 3D reconstruction using the photogrammetric technique from UAVs, recognising this approach as automatic, versatile, and economical. The remaining studies, on the other hand, start from the photogrammetric 3D model to obtain parametric models (HBIM, FEM, etc.), mapping and remote sensing products, or virtual tours and augmented reality applications. Figure 13b shows the purposes of photogrammetric surveying applied to CH found in the analysed manuscripts.
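For readers who wish to reproduce a chart in the style of Figure 13a, here is a minimal matplotlib sketch based on the percentages quoted above (the 91% multicopter share is simply inferred as the remainder of the other two):

```python
import matplotlib.pyplot as plt

# Platform shares from the bibliographic analysis: 2% hybrid, ~7% fixed-wing,
# remainder assumed multicopter
labels = ["Multicopter", "Fixed-wing", "Hybrid"]
shares = [91, 7, 2]

plt.pie(shares, labels=labels, autopct="%1.0f%%", startangle=90)
plt.title("UAV platform types in CH photogrammetry literature")
plt.axis("equal")   # keep the pie circular
plt.show()
```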
This trend in the use of UAVs for academic purposes is also reflected in the UAV market: the global UAV drones market size was USD 19,784.8 million in 2021 and is predicted to grow at a compound annual growth rate (CAGR) of 19.6%, generating a revenue of USD 102,466.7 million by 2030 [138]. This growing market and interest in UAV photogrammetry are due to the ability to perform three-dimensional and multi-temporal surveying in a rapid and easy way; this is also made possible by the small size of UAVs and the possibility of equipping them with different types of sensors. A very important aspect of the technological development of such platforms is that of making them perform even better in conditions where power, size, and mass can negatively influence the final result (emergency conditions, reduced survey spaces, adverse weather conditions).
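As a quick sanity check of these figures (our own arithmetic, assuming the nine-year 2021–2030 horizon):

$$\mathrm{CAGR}=\left(\frac{V_{2030}}{V_{2021}}\right)^{1/9}-1=\left(\frac{102{,}466.7}{19{,}784.8}\right)^{1/9}-1\approx 0.20,$$

i.e., roughly 20% per year, consistent with the reported 19.6% once rounding and the exact base year are taken into account.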
In this manuscript, a methodological approach was described for UAV surveying aimed at the reconstruction of 3D models in the CH field, highlighting the limitations and criticalities of the methods discussed, so as to help the various actors operating in these fields obtain high-quality, realistic 3D models. Furthermore, the manuscript aims to provide techniques and methods for the efficient 3D acquisition and reconstruction of cultural heritage assets. Therefore, this review can contribute to future research, markets, and applications of UAVs for photogrammetric purposes in the field of cultural heritage.
In the future, it is hoped that technological innovation will contribute to the creation of platforms in which passive sensors (high-resolution cameras) and active sensors (LiDAR) can be easily integrated with thermal sensors, in order to obtain 3D models that not only have realistic content but can also meet current energy-efficiency requirements, which are of particular importance today in the field of cultural heritage.

Author Contributions

Conceptualisation, M.P., V.S.A., and D.C.; methodology, M.P., V.S.A., and D.C.; software, M.P., V.S.A., and D.C.; validation, M.P., V.S.A., and D.C.; formal analysis, M.P., V.S.A., and D.C.; data curation, M.P., V.S.A., and D.C.; writing—review and editing, M.P., V.S.A., and D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the reviewers for their suggestions, G.A. Restuccia for his contribution to the drawing of Figure 9, and S. Brescia and M. Chiricallo for the drawing of Figure 10.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CH: Cultural Heritage
BIM: Building Information Modelling
HBIM: Heritage Building Information Modelling
FEA: Finite Element Analysis
FEM: Finite Element Model
LiDAR: Light Detection and Ranging
SfM: Structure from Motion
MVS: Multi-View Stereo
SIFT: Scale-Invariant Feature Transform
UAV: Unmanned Aerial Vehicle
UAS: Unmanned Aircraft System
HTOL: Horizontal TakeOff and Landing
VTOL: Vertical TakeOff and Landing
HALE: High Altitude Long Endurance
MALE: Medium Altitude Long Endurance
GCP: Ground Control Point
CP: Check Point
GCS: Ground Control Station
GSD: Ground Sample Distance
ASL: Above Sea Level
AGL: Above Ground Level
MB: Motion Blur
GNSS: Global Navigation Satellite Systems
GPS: Global Positioning System
SPP: Single Point Positioning
PPP: Precise Point Positioning
RTK: Real-Time Kinematic
PPK: Post-Processed Kinematic
RINEX: Receiver Independent Exchange Format
CORS: Continuously Operating Reference Station Services
KF: Kalman Filter
IMU: Inertial Measurement Unit
INS: Inertial Navigation System
TC: Tightly Coupled
LC: Loosely Coupled
EDM: Electronic Distance Measurement
TS: Total Station
TOF: Time-of-Flight
RMS: Root Mean Square
RMSE: Root Mean Square Error
IO: Internal Orientation
EO: Exterior Orientation
DG: Direct Georeferencing
RANSAC: RANdom Sample Consensus
PnP: Perspective-n-Point
DLT: Direct Linear Transformation
LM: Levenberg–Marquardt

References

1. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
2. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A review on UAV-based applications for precision agriculture. Information 2019, 10, 349.
3. Seo, J.; Duque, L.; Wacker, J.P. Field application of UAS-based bridge inspection. Transp. Res. Rec. 2018, 2672, 72–81.
4. Esposito, S.; Fallavollita, P.; Melis, M.G.; Balsi, M.; Jankowski, S. UAS imaging for archaeological survey and documentation. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2013, Wilga, Poland, 27 May–2 June 2013; Volume 8903, pp. 147–153.
5. Scianna, A.; La Guardia, M. Survey and photogrammetric restitution of monumental complexes: Issues and solutions—The case of the Manfredonic Castle of Mussomeli. Heritage 2019, 2, 774–786.
6. Girelli, V.A.; Borgatti, L.; Dellapasqua, M.; Mandanici, E.; Spreafico, M.C.; Tini, M.A.; Bitelli, G. Integration of geomatics techniques for digitizing highly relevant geological and cultural heritage sites: The case of San Leo (Italy). In Proceedings of the 26th International CIPA Symposium 2017, Ottawa, ON, Canada, 28 August–1 September 2017; Volume 42.
7. Campana, S. Drones in archaeology. State-of-the-art and future perspectives. Archaeol. Prospect. 2017, 24, 275–296.
8. Opitz, R.S.; Cowley, D.C. Interpreting Archaeological Topography: 3D Data, Visualisation, and Observation; Occasional Publications of the Aerial Archaeology Research Group 5; Oxbow: Oxford, UK, 2013; pp. 115–122.
9. Saleri, R.; Cappellini, V.; Nony, N.; De Luca, L.; Pierrot-Deseilligny, M.; Bardiere, E.; Campi, M. UAV photogrammetry for archaeological survey: The Theaters area of Pompeii. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; Volume 2, pp. 497–502.
10. Mouget, A.; Lucet, G. Photogrammetric archaeological survey with UAV. In Proceedings of the ISPRS Technical Commission V Symposium, Riva del Garda, Italy, 23–25 June 2014; Volume 2.
11. Adami, A.; Fregonese, L.; Gallo, M.; Helder, J.; Pepe, M.; Treccani, D. Ultra light UAV systems for the metrical documentation of cultural heritage: Applications for architecture and archaeology. In Proceedings of the 6th International Workshop LowCost 3D–Sensors, Algorithms, Applications, Strasbourg, France, 2–3 December 2019; Volume 42, pp. 15–21.
12. Kadhim, I.; Abed, F.M. The potential of LiDAR and UAV-photogrammetric data analysis to interpret archaeological sites: A case study of Chun Castle in South-West England. ISPRS Int. J. Geo-Inf. 2021, 10, 41.
13. Dasari, S.; Mesapam, S.; Kumarapu, K.; Mandla, V.R. UAV in development of 3D heritage monument model: A case study of Kota Gullu, Warangal, India. J. Indian Soc. Remote Sens. 2021, 49, 1733–1737.
14. Kanun, E.; Alptekin, A.; Karataş, L.; Yakar, M. The use of UAV photogrammetry in modeling ancient structures: A case study of “Kanytellis”. Adv. UAV 2022, 2, 41–50.
15. Pepe, M.; Costantino, D. UAV photogrammetry and 3D modelling of complex architecture for maintenance purposes: The case study of the masonry bridge on the Sele river, Italy. Period. Polytech. Civ. Eng. 2021, 65, 191–203.
16. Baiocchi, V.; Onori, M.; Scuti, M. Integrated geomatic techniques for the localization and georeferencing of ancient hermitages. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 46, 31–37.
17. Ozimek, A.; Ozimek, P.; Skabek, K.; Labędź, P. Digital modelling and accuracy verification of a complex architectural object based on photogrammetric reconstruction. Buildings 2021, 11, 206.
18. Martínez-Carricondo, P.; Carvajal-Ramírez, F.; Yero-Paneque, L.; Agüera-Vega, F. Combination of HBIM and UAV photogrammetry for modelling and documentation of forgotten heritage. Case study: Isabel II dam in Níjar (Almería, Spain). Herit. Sci. 2021, 9, 95.
19. Sabil, A.; Mahmud, N.A.A.; Utaberta, N.; Amin, N.D.N.; Asif, N.; Yusof, H. The application of photogrammetry in architecture historical documentation: The measured drawing of Tanjung Sembrong Mosque and Teratak Selari Bonda. IOP Conf. Ser. Earth Environ. Sci. 2022, 1022, 012007.
20. Prisacariu, V. The history and the evolution of UAVs from the beginning till the 70s. J. Def. Resour. Manag. JoDRM 2017, 8, 181–189.
21. Udeanu, G.; Dobrescu, A.; Oltean, M. Unmanned aerial vehicle in military operations. Sci. Res. Educ. Air Force 2016, 18, 199–206.
22. Vogler, L.C.A.; Hughes, T. Anything but ‘drone’: Why Naming Matters.
23. Birtchnell, T.; Gibson, C. Less talk more drone: Social research with UAVs. J. Geogr. High. Educ. 2015, 39, 182–189.
24. Major, R. RQ-2 Pioneer: The Flawed System that Redefined US Unmanned Aviation; Air Command and Staff College, Maxwell AFB: Montgomery, AL, USA, 2012.
25. Saboor, A.; Coene, S.; Vinogradov, E.; Tanghe, E.; Joseph, W.; Pollin, S. Elevating the future of mobility: UAV-enabled Intelligent Transportation Systems. arXiv 2021, arXiv:2110.09934.
26. Mohsan, S.A.H.; Othman, N.Q.H.; Khan, M.A.; Amjad, H.; Żywiołek, J. A comprehensive review of micro UAV charging techniques. Micromachines 2022, 13, 977.
27. Barnhart, R.K.; Marshall, D.M.; Shappee, E. Introduction to Unmanned Aircraft Systems; CRC Press: Boca Raton, FL, USA, 2021.
28. Austin, R. Unmanned Aircraft Systems: UAVS Design, Development and Deployment; John Wiley & Sons: Hoboken, NJ, USA, 2011.
29. PS, R.; Jeyan, M.L. Mini Unmanned Aerial Systems (UAV)-A review of the parameters for classification of a mini UAV. Int. J. Aviat. Aeronaut. Aerosp. 2020, 7, 5.
30. Eisenbeiß, H. UAV Photogrammetry. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2009.
31. Eisenbeiss, H.; Sauerbier, M. Investigation of UAV systems and flight modes for photogrammetric applications. Photogramm. Rec. 2011, 26, 400–421.
32. Quan, Q. Introduction to Multicopter Design and Control; Springer: Berlin/Heidelberg, Germany, 2017.
33. Verbeke, J.; Hulens, D.; Ramon, H.; Goedeme, T.; De Schutter, J. The design and construction of a high endurance hexacopter suited for narrow corridors. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; pp. 543–551.
34. Singhal, G.; Bansod, B.; Mathew, L. Unmanned aerial vehicle classification, applications and challenges: A review. Preprints 2018.
35. Zhao, S.; Cui, X.; Lu, M. Single point positioning using full and fractional pseudorange measurements from GPS and BDS. Surv. Rev. 2021, 53, 27–34.
36. Kaplan, E.D.; Hegarty, C. Understanding GPS/GNSS: Principles and Applications; Artech House: Norwood, MA, USA, 2017.
37. Pepe, M. CORS architecture and evaluation of positioning by low-cost GNSS receiver. Geod. Cartogr. 2018, 44, 36–44.
38. Schwieger, V.; Lilje, M.; Sarib, R. GNSS CORS-Reference frames and services. In Proceedings of the 7th FIG Regional Conference, Hanoi, Vietnam, 19–22 October 2009; Volume 19.
39. Tomaštík, J.; Mokroš, M.; Surovỳ, P.; Grznárová, A.; Merganič, J. UAV RTK/PPK method—An optimal solution for mapping inaccessible forested areas? Remote Sens. 2019, 11, 721.
40. Zhang, H.; Aldana-Jague, E.; Clapuyt, F.; Wilken, F.; Vanacker, V.; Van Oost, K. Evaluating the potential of post-processing kinematic (PPK) georeferencing for UAV-based structure-from-motion (SfM) photogrammetry and surface change detection. Earth Surf. Dyn. 2019, 7, 807–827.
41. Pepe, M.; Costantino, D.; Vozza, G.; Alfio, V.S. Comparison of two approaches to GNSS positioning using code pseudoranges generated by smartphone device. Appl. Sci. 2021, 11, 4787.
42. Jason. Available online: https://jason.docs.rokubun.cat/strategies/ (accessed on 5 September 2022).
43. Krajník, T.; Vonásek, V.; Fišer, D.; Faigl, J. AR-drone as a platform for robotic research and education. In Proceedings of the International Conference on Research and Education in Robotics, Prague, Czech Republic, 15–17 June 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 172–186.
44. Woodman, O.J. An Introduction to Inertial Navigation; University of Cambridge, Computer Laboratory: Cambridge, UK, 2007.
45. de Alteriis, G.; Conte, C.; Moriello, R.S.L.; Accardo, D. Use of consumer-grade MEMS inertial sensors for accurate attitude determination of drones. In Proceedings of the 2020 IEEE 7th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Pisa, Italy, 22–24 June 2020; pp. 534–538.
46. Turner, D.; Lucieer, A.; Wallace, L. Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2738–2745.
47. Petovello, M.G. Real-Time Integration of a Tactical-Grade IMU and GPS for High-Accuracy Positioning and Navigation. Ph.D. Thesis, The University of Calgary, Calgary, AB, Canada, 2003.
48. Cazzaniga, N.E. Sviluppo e Implementazione di Algoritmi per la Navigazione Inerziale Assistita (Development and Implementation of Algorithms for Aided Inertial Navigation). Ph.D. Thesis, DIIAR-Sezione Rilevamento, Politecnico di Milano, Milan, Italy, 2007. (In Italian)
49. Chiang, K.-W.; Tsai, M.-L.; Naser, E.-S.; Habib, A.; Chu, C.-H. A new calibration method using low cost MEM IMUs to verify the performance of UAV-borne MMS payloads. Sensors 2015, 15, 6560–6585.
50. Wendel, J.; Trommer, G.F. Tightly coupled GPS/INS integration for missile applications. Aerosp. Sci. Technol. 2004, 8, 627–634.
51. Mian, O.; Lutes, J.; Lipa, G.; Hutton, J.J.; Gavelle, E.; Borghini, S. Direct georeferencing on small unmanned aerial platforms for improved reliability and accuracy of mapping without the need for ground control points. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 397.
52. Chung, P.-H.; Ma, D.-M.; Shiau, J.-K. Design, manufacturing, and flight testing of an experimental flying wing UAV. Appl. Sci. 2019, 9, 3043.
53. Gandor, F.; Rehak, M.; Skaloud, J. Photogrammetric mission planner for RPAS. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 61.
54. Pepe, M.; Fregonese, L.; Scaioni, M. Planning airborne photogrammetry and remote-sensing missions with modern platforms and sensors. Eur. J. Remote Sens. 2018, 51, 412–436.
55. Moustris, G.; Tzafestas, C. Image-guided motion compensation for robotic-assisted beating heart surgery. In Handbook of Robotic and Image-Guided Surgery; Elsevier: Amsterdam, The Netherlands, 2020; pp. 363–374.
56. Neumann, K.J. Digital Aerial Cameras; Intergraph Z/I Deutschland GmbH: Aalen, Germany, 2005; pp. 1–5.
57. Hernandez-Lopez, D.; Felipe-Garcia, B.; Gonzalez-Aguilera, D.; Arias-Perez, B. An automatic approach to UAV flight planning and control for photogrammetric applications. Photogramm. Eng. Remote Sens. 2013, 79, 87–98.
58. Pix4D Capture. Available online: https://www.pix4d.com/product/pix4dcapture (accessed on 13 September 2022).
59. UgCS UAV Mission Planning. Available online: https://www.geometrics.com/software/ugcs-uav-mission-planning-software/ (accessed on 13 September 2022).
60. Uav Flight Map Pro Planner. Available online: www.uavflightmap.com (accessed on 13 September 2022).
61. DJI GS PRO. Available online: https://www.dji.com/it/ground-station-pro (accessed on 13 September 2022).
62. eMotion. Available online: https://www.sensefly.com/drone-software/emotion/ (accessed on 13 September 2022).
63. Şasi, A.; Yakar, M. Photogrammetric modelling of Sakahane Masjid using an unmanned aerial vehicle. Turk. J. Eng. 2017, 1, 82–87.
64. Hong, Y.; Fang, J.; Tao, Y. Ground control station development for autonomous UAV. In Proceedings of the International Conference on Intelligent Robotics and Applications; Springer: Berlin/Heidelberg, Germany, 2008; pp. 36–44.
65. License To Fly. Available online: https://surfshark.com/drone-privacy-laws (accessed on 2 September 2022).
66. Global Drone Regulations Database. Available online: https://www.droneregulations.info/ (accessed on 2 September 2022).
67. Kutila, M.; Korpinen, J.; Viitanen, J. Camera calibration in machine automation. In Proceedings of the 2nd International Conference on Machine Automation, ICMA 2000, Osaka, Japan, 27–29 September 2001; pp. 211–216.
68. Duane, C.B. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866.
69. Verhoeven, G.; Wieser, M.; Briese, C.; Doneus, M. Positioning in time and space: Cost-effective exterior orientation for airborne archaeological photographs. In Proceedings of the XXIV International CIPA Symposium, Strasbourg, France, 2–6 September 2013; Volume 2, pp. 313–318.
70. Jozkow, G.; Toth, C. Georeferencing experiments with UAS imagery. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 25.
71. Gabrlik, P. The use of direct georeferencing in aerial photogrammetry with micro UAV. IFAC-Pap. 2015, 48, 380–385.
72. Bujakiewicz, A.; Podlasiak, P.; Zawieska, D. Georeferencing of close range photogrammetric data. Arch. Fotogram. Kartogr. Teledetekcji 2011, 22, 91–104.
73. Hutton, J.; Mostafa, M.M. 10 years of direct georeferencing for airborne photogrammetry. GIS Bus. GeoBit 2005, 11, 33–41.
74. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159.
75. Villanueva, J.K.S.; Blanco, A.C. Optimization of ground control point (GCP) configuration for unmanned aerial vehicle (UAV) survey using structure from motion (SfM). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 167–174.
76. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Mesas-Carrascosa, F.-J.; García-Ferrer, A.; Pérez-Porras, F.-J. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 1–10.
77. Cramer, M.; Stallmann, D.; Haala, N. Direct georeferencing using GPS/inertial exterior orientations for photogrammetric applications. Int. Arch. Photogramm. Remote Sens. 2000, 33, 198–205.
78. Fraser, C.S. Network design in close-range photogrammetry and machine vision. In Proceedings of the 26th International CIPA Symposium 2017, Ottawa, ON, Canada, 28 August–1 September 2017; pp. 256–282.
79. Fraser, C. Camera calibration considerations for UAV photogrammetry. In Proceedings of the ISPRS TC II Symposium: Towards Photogrammetry, Riva del Garda, Italy, 3–7 June 2018.
80. Solem, J.E. Programming Computer Vision with Python: Tools and Algorithms for Analyzing Images; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2012.
81. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
82. Jiang, S.; Jiang, C.; Jiang, W. Efficient structure from motion for large-scale UAV images: A review and a comparison of SfM tools. ISPRS J. Photogramm. Remote Sens. 2020, 167, 230–251.
83. Lowe, G. Sift-the scale invariant feature transform. Int. J. 2004, 2, 2.
84. Fischler, M.A.; Bolles, R.C. Perceptual organization and curve partitioning. In Readings in Computer Vision; Elsevier: Amsterdam, The Netherlands, 1987; pp. 210–215.
85. Bianco, S.; Ciocca, G.; Marelli, D. Evaluating the performance of structure from motion pipelines. J. Imaging 2018, 4, 98.
86. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: Berlin/Heidelberg, Germany, 1999.
87. Wei, X.; Zhang, Y.; Li, Z.; Fu, Y.; Xue, X. DeepSFM: Structure from motion via deep bundle adjustment. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 230–247.
88. Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends Comput. Graph. Vis. 2015, 9, 1–148.
89. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 1, pp. 519–528.
90. Fraser, C.S. Automatic camera calibration in close range photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388.
91. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376.
92. Goesele, M.; Snavely, N.; Curless, B.; Hoppe, H.; Seitz, S.M. Multi-view stereo for community photo collections. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
93. Shen, S. Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes. IEEE Trans. Image Process. 2013, 22, 1901–1914.
94. Agisoft Metashape. Available online: https://www.agisoft.com/ (accessed on 5 September 2022).
95. Tinkham, W.T.; Swayze, N.C. Influence of Agisoft Metashape parameters on UAS structure from motion individual tree detection from canopy height models. Forests 2021, 12, 250.
96. 3DF Zephyr. Available online: https://www.3dflow.net/it (accessed on 5 September 2022).
97. Oniga, V.-E.; Breaban, A.-I.; Pfeifer, N.; Chirila, C. Determining the suitable number of ground control points for UAS images georeferencing by varying number and spatial distribution. Remote Sens. 2020, 12, 876.
98. Autodesk ReCap. Available online: https://www.autodesk.com/products/recap/overview?term=1-YEAR&tab=subscription (accessed on 5 September 2022).
99. Jones, C.A.; Church, E. Photogrammetry is for everyone: Structure-from-motion software user experiences in archaeology. J. Archaeol. Sci. Rep. 2020, 30, 102261.
100. Pix4D. Available online: https://www.pix4d.com/ (accessed on 5 September 2022).
101. Barbasiewicz, A.; Widerski, T.; Daliga, K. The analysis of the accuracy of spatial models using photogrammetric software: Agisoft Photoscan and Pix4D. E3S Web Conf. 2018, 26, 00012.
102. PhotoModeler. Available online: https://www.photomodeler.com/ (accessed on 5 September 2022).
103. Irschara, A.; Kaufmann, V.; Klopschitz, M.; Bischof, H.; Leberl, F. Towards fully automatic photogrammetric reconstruction using digital images taken from UAVs. In Proceedings of the ISPRS TC VII Symposium—100 Years ISPRS, Vienna, Austria, 5–7 July 2010.
104. Reality Capture. Available online: https://www.capturingreality.com/ (accessed on 5 September 2022).
105. Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157.
106. Trimble Inpho. Available online: https://geospatial.trimble.com/products-and-solutions/trimble-inpho (accessed on 5 September 2022).
107. Lumban-Gaol, Y.A.; Murtiyoso, A.; Nugroho, B.H. Investigations on the bundle adjustment results from SfM-based software for mapping purposes. In Proceedings of the ISPRS TC II Mid-term Symposium “Towards Photogrammetry 2020”, Riva del Garda, Italy, 4–7 June 2018.
108. WebODM. Available online: https://www.opendronemap.org/webodm/ (accessed on 5 September 2022).
109. Vacca, G. WEB Open Drone Map (WebODM) a software open source to photogrammetry process. In Proceedings of the FIG Working Week 2020, Smart Surveyors for Land and Water Management, Amsterdam, The Netherlands, 10–14 May 2020.
110. Bartoš, K.; Pukanská, K.; Sabová, J. Overview of available open-source photogrammetric software, its use and analysis. Int. J. Innov. Educ. Res. 2014, 2, 62–70.
111. Stathopoulou, E.K.; Welponer, M.; Remondino, F. Open-source image-based 3D reconstruction pipelines: Review, comparison and evaluation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W17, 331–338.
112. Regard 3D. Available online: http://www.regard3d.org/ (accessed on 5 September 2022).
113. Palestini, C.; Basso, A. Low-cost technological implementations related to integrated application experiments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W17, 241–248.
114. MicMac. Available online: https://micmac.ensg.eu/index.php/Accueil (accessed on 5 September 2022).
115. Rupnik, E.; Daakir, M.; Pierrot Deseilligny, M. MicMac–a free, open-source solution for photogrammetry. Open Geospat. Data Softw. Stand. 2017, 2, 14.
116. Jaud, M.; Passot, S.; Allemand, P.; Le Dantec, N.; Grandjean, P.; Delacourt, C. Suggestions to limit geometric distortions in the reconstruction of linear coastal landforms by SfM photogrammetry with PhotoScan® and MicMac® for UAV surveys with restricted GCPs pattern. Drones 2018, 3, 2.
117. Meshroom. Available online: https://alicevision.org/#meshroom (accessed on 5 September 2022).
118. Griwodz, C.; Gasparini, S.; Calvet, L.; Gurdjos, P.; Castan, F.; Maujean, B.; De Lillo, G.; Lanthony, Y. AliceVision Meshroom: An open-source 3D reconstruction pipeline. In Proceedings of the 12th ACM Multimedia Systems Conference, Istanbul, Turkey, 28 September–1 October 2021; pp. 241–247.
119. Colmap. Available online: https://colmap.github.io/ (accessed on 5 September 2022).
120. Rahaman, H.; Champion, E. To 3D or not 3D: Choosing a photogrammetry workflow for cultural heritage groups. Heritage 2019, 2, 1835–1851.
121. Schonberger, J.L.; Frahm, J.-M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
122. Schönberger, J.L.; Zheng, E.; Frahm, J.-M.; Pollefeys, M. Pixelwise view selection for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 501–518.
123. Visual SFM. Available online: http://ccwu.me/vsfm/index.html (accessed on 5 September 2022).
124. Morgan, J.A.; Brogan, D.J. How to VisualSFM; Department of Civil & Environmental Engineering, Colorado State University: Fort Collins, CO, USA, 2016.
125. Clay, E.R.; Lee, K.S.; Tan, S.; Truong, L.N.H.; Mora, O.E.; Cheng, W. Assessing the accuracy of georeferenced point clouds from UAS imagery. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 59–64.
126. Elkhrachy, I. Accuracy assessment of low-cost Unmanned Aerial Vehicle (UAV) photogrammetry. Alex. Eng. J. 2021, 60, 5579–5590.
127. Pepe, M.; Alfio, V.S.; Costantino, D.; Scaringi, D. Data for 3D reconstruction and point cloud classification using machine learning in cultural heritage environment. Data Brief 2022, 42, 108250.
128. Pepe, M.; Costantino, D.; Crocetto, N.; Garofalo, A.R. 3D modeling of Roman bridge by the integration of terrestrial and UAV photogrammetric survey for structural analysis purpose. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, W17.
129. Karachaliou, E.; Georgiou, E.; Psaltis, D.; Stylianidis, E. UAV for mapping historic buildings: From 3D modelling to BIM. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 397–402.
130. Donato, V.; Biagini, C.; Bertini, G.; Marsugli, F. Challenges and opportunities for the implementation of H-BIM with regard to historical infrastructures: A case study of the Ponte Giorgini in Castiglione della Pescaia (Grosseto, Italy). In Proceedings of the Geomatics & Restoration—Conservation of Cultural Heritage in the Digital Era, Florence, Italy, 22–24 May 2017.
131. Themistocleous, K.; Agapiou, A.; Hadjimitsis, D. 3D documentation and BIM modeling of cultural heritage structures using UAVs: The case of the Foinikaria church. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 42, 45.
132. Mishra, M.; Barman, T.; Ramana, G.V. Artificial intelligence-based visual inspection system for structural health monitoring of cultural heritage. J. Civ. Struct. Health Monit. 2022, 1–18.
133. Shabani, A.; Skamantzari, M.; Tapinaki, S.; Georgopoulos, A.; Plevris, V.; Kioumarsi, M. 3D simulation models for developing digital twins of heritage structures: Challenges and strategies. Procedia Struct. Integr. 2022, 37, 314–320.
134. Meschini, A.; Petrucci, E.; Rossi, D.; Sicuranza, F. Point cloud-based survey for cultural heritage. An experience of integrated use of range-based and image-based technology for the San Francesco Convent in Monterubbiano. In Proceedings of the ISPRS Technical Commission V Symposium, Riva del Garda, Italy, 23–25 June 2014.
135. Pepe, M.; Costantino, D.; Restuccia Garofalo, A. An efficient pipeline to obtain 3D model for HBIM and structural analysis purposes from 3D point clouds. Appl. Sci. 2020, 10, 1235.
136. Ebolese, D.; Brutto, M.L. Study and 3D survey of the Roman baths in the archaeological site of Lylibaeum (Marsala, Italy). IOP Conf. Ser. Mater. Sci. Eng. 2020, 949, 012103.
137. Nex, F.; Armenakis, C.; Cramer, M.; Cucci, D.A.; Gerke, M.; Honkavaara, E.; Kukko, A.; Persello, C.; Skaloud, J. UAV in the advent of the twenties: Where we stand and what is next. ISPRS J. Photogramm. Remote Sens. 2022, 184, 215–242.
138. UAV Drones Market. Available online: https://www.researchdive.com/8348/unmanned-aerial-vehicle-uav-drones-market (accessed on 29 September 2022).
Figure 1. Historical evolution of drones.
Figure 2. Classification of UAVs according to landing, aerodynamics, and multirotor [34].
Figure 3. Identification of the three axes and three rotations on a fixed-wing UAV (a), on a quadcopter (b), and schematisation of axes and rotations (c).
Figure 4. GSD calculation: pixel size and its relation to the GSD (a), geometric acquisition (b), and detail of main elements of the camera (c).
Figure 5. Global drone regulations database: choosing a country (colour blue), for example, Italy (a), consulting the regulations (b).
Figure 6. DG: Acquisition scheme (a) and representation of several frames (b).
Figure 7. Ground control design: natural points (a), marks made with spray paint (b), high-contrast targets (c), and coded targets (d).
Figure 8. From a collection of stereoscopic images to a 3D model.
Figure 9. Fundamentals of epipolar geometry.
Figure 10. Several orthophotos of the spire generated from the 3D model.
Figure 11. Masonry bridge from Roman times incorporated into a bridge from the 1800s and located in South of Italy: image of step acquisition (a) and FEM model of the arches of the bridge (b).
Figure 12. Two-dimensional CAD representation from orthophoto: UAS survey (a), 3D processing in SfM software (b), orthophoto (c), and 2D architectural drawing (d).
Figure 13. Representation by means of a pie chart of literature review on platform type (a) and purposes of UAV photogrammetry in CH (b).
Table 1. UAV categorization used by the American Department of Defense [27].
UAV Category | Max Takeoff Weight (Gross) | Normal Operation Altitude (ft) | Airspeed
Group 1 | <20 pounds (9.07 kg) | <1200 AGL (365.76 m) | <100 knots (<185.20 km/h)
Group 2 | 21–55 pounds (9.53–24.95 kg) | <3500 AGL (1066.8 m) | <250 knots (<463.00 km/h)
Group 3 | <1320 pounds (<598.74 kg) | <18,000 MSL (5486.4 m) | Any airspeed
Group 4 | >1320 pounds | <18,000 MSL (5486.4 m) | Any airspeed
Group 5 | >1320 pounds | >18,000 MSL | Any airspeed
Table 2. Classification of UASs according to Eisenbeiss (2009) [30] and Eisenbeiss and Sauerbier (2011) [31].
 | Lighter than Air | Heavier than Air: Flexible Wing | Heavier than Air: Fixed Wing | Heavier than Air: Rotary Wing
Unpowered | Balloon | Hang glider, Paraglider, Kites | Gliders | Rotor-kite
Powered | Airship | Paraglider | Propeller, Jet engines | Single rotors, Coaxial, Quadrotors, Multirotors
Table 3. Features of several GNSS strategies [42].
Strategy | Needs Base? | Uses Carrier-Phase? | Orbits and Clocks | Accuracy
SPP | no | no | broadcast | <5 m
PPP | no | yes | precise | cm (multi-freq), dm (single-freq)
RTK | yes | yes | broadcast | cm (multi-freq), dm (single-freq)
PPK | yes | yes | broadcast | cm (multi-freq), dm (single-freq)
Table 4. Commercial UASs.
Model | Manufacturer | Type | Weight (g) | Endurance (min) | Max Speed (m/s) | Navigation | Camera
SwitchBlade-Elite + Sony RX1R II | Vision Aerial | Multicopter (tricopter) | 2900 | 40 | 27 | RTK/PPK; GPS + GLONASS | Full-frame BSI sensor, 42 MP
Phantom 4 Pro v2.0 | DJI | Multicopter (quadricopter) | 1388 | 30 | 20 | SPP/PPK; GPS + GLONASS | 1″ CMOS sensor, 20 MP
Phantom 4 RTK | DJI | Multicopter (quadricopter) | 1391 | 30 | 13.9 | RTK; GPS + BeiDou + Galileo (Asia), GPS + GLONASS + Galileo (other regions) | 1″ CMOS sensor, 20 MP
Parrot Anafi | Parrot SA | Multicopter (quadricopter) | 320 | 25 | 15 | SPP; GPS + GLONASS | 1/2.4″ CMOS sensor, 20 MP
EVO II RTK | Autel Robotics | Multicopter (quadricopter) | 1250 | 36 | 12.5 | RTK; GPS + GLONASS + BeiDou + Galileo | 1″ CMOS sensor, 20 MP
Typhoon H3 | Yuneec and Leica | Multicopter (hexacopter) | 1985 | 25 | 20 | SPP; GPS + GLONASS + Galileo | 1″ CMOS sensor, 20 MP
Leica Aibot (customised DJI Matrice 600 + Sony a7R II) | Leica Geosystems | Multicopter (hexacopter) | 11,200 | 24 | 18 | RTK; GPS + GLONASS + Galileo | 42.4 MP
UX5 HP | Trimble | Fixed-wing | 2500 | 35 | 23.5 | PPK; GPS + GLONASS | 1″ CMOS sensor, 36.4 MP
eBee X + SenseFly S.O.D.A. 3D | SenseFly, an AgEagle company | Fixed-wing | 1300–1600 | 90 | 12.8 | RTK/PPK; GPS + GLONASS | 1″ CMOS sensor, 20 MP
WingtraOne + Sony RX1R II | Wingtra AG | Multicopter and fixed-wing | 3700 | 59 | 12 | PPK; GPS + GLONASS + BeiDou + Galileo | Full-frame BSI sensor, 42 MP
Table 5. Type of flight plans.
Location | Type | Flight Plan
(image) | Polygon | (image)
(image) | Grid | (image)
(image) | Double Grid | (image)
(image) | Circular | (image)
Table 6. Software for flight and mission planning.
Software | Manufacturer | Ref.
PIX4Dcapture | Pix4D, Switzerland | [58]
UgCS UAV Mission Planning Software | Geometrics, U.S.A. | [59]
Uav Flight Map Pro Planner | Drone Emotions srl, Italy | [60]
DJI GS PRO | SZ DJI Technology Co. Ltd., China | [61]
Mission Planner | Created by Michael Oborne for the ArduPilot open-source autopilot project | [59]
eMotion (flight planning software for eBee drones) | AgEagle Aerial Systems Inc., U.S.A. | [62]
Table 7. Main commercial photogrammetric software.
Software | Description | Reference
Agisoft Metashape | Agisoft Metashape is a professional software that can perform photogrammetric processing of digital images and videos and generate 3D spatial data for use in GIS applications, cultural heritage documentation, environmental monitoring, and more. It allows the generation and export in different formats of point clouds, textured 3D models, high-resolution orthophotos, digital elevation models, and contour lines, as well as the measurement of distances, areas, and volumes thanks to tools integrated into the software. | [94,95]
3DF Zephyr | 3DF Zephyr is produced in Italy and distributed by 3Dflow, a private consulting and software production company operating in the field of computer vision and image processing, founded in 2011 as a spin-off of the University of Verona and recognised in 2012 as a spin-off of the University of Udine. 3DF Zephyr is an automatic photogrammetry software that enables the creation of a 3D survey from photographs or digital videos. It allows users to create and export meshes and point clouds in the most common 3D formats, generate video animations, orthophotos, digital terrain models, sections, and contour lines, and calculate angles, areas, and volumes. | [96,97]
Autodesk ReCap | ReCap software, owned by Autodesk, can convert digital images into 3D models or 2D drawings from aerial photogrammetric datasets or through close-range photogrammetry. The software makes it easy to create a point cloud or mesh ready for use with other CAD software or tools. Autodesk ReCap Photo is part of the broader subscription programme and has been bundled with ReCap Pro since 2019. ReCap Photo limits the amount of processing a single user can do by providing a limited number of ‘Cloud Credits’. | [98,99]
Pix4D | Pix4D is a software developed by the Computer Vision Lab in Switzerland and implemented for both the image acquisition phase, through mobile apps available for Android and iOS, and the image processing phase. Pix4Dmapper generates point clouds, orthomosaics, and elevation models and is suitable for applications such as agriculture, surveying, architecture, and real estate, providing tools for measuring distances, areas, and volumes and enabling virtual inspections. | [100,101]
PhotoModeler | PhotoModeler is essentially a phototriangulation programme capable of performing image-based modelling to produce 3D models and measurements. The software is also used for close-range, aerial, and UAV photogrammetry (at relatively low altitudes) to perform measurements and modelling in agriculture, archaeology, architecture, biology, engineering, mining, storage volumes, etc. | [102,103]
RealityCapture | RealityCapture (RC) is photogrammetric software for creating 3D models from terrestrial and/or aerial images or laser scans. In addition to point cloud measurement and filtering functions, it enables image registration, automatic calibration, polygon mesh production, texturing, digital model creation, and the georeferencing and conversion of coordinate systems. Its fields of use range from engineering and architectural applications in CH to land mapping, as well as visual effects (VFX) and virtual reality (VR) applications. | [104,105]
Trimble Inpho | Trimble Inpho is suitable for the high-precision transformation of aerial images into point clouds and surface models, orthophotos, and 3D digital models, using various modules that can be integrated into any photogrammetric workflow. It is a standard software solution for aerial photogrammetry that has been used for large-scale metric mapping. | [106,107]
WebODM | WebODM is a user-friendly platform for producing point clouds, elevation models, textured models, and georeferenced maps from aerial digital images acquired by UAV systems. The software is a project of OpenDroneMap and supports multiple processing engines, currently ODM and MicMac. There is no cost to purchase a licence, but there is a one-off fee for installation and technical support. | [108,109]
Table 8. Main open-source software.
Software | Description | Reference
Regard3D | Regard3D is an SfM programme that can create 3D models of objects using a series of photographs taken from different viewpoints. The free, open-source software is available on various platforms (Windows, OS X, and Linux) and uses third-party tools and libraries. Several parameters enable the processing of models in a controlled and accurate manner. Furthermore, the actions necessary for processing the 3D model are implemented within the software in a logical and sequential manner. | [112,113]
MicMac | MicMac (Multi-Images Correspondances, Méthodes Automatiques de Corrélation) is a free and open-source photogrammetric product (Cecill-B license) used in various application areas such as cartography, environment, forestry, cultural heritage, etc. It allows the reconstruction of 3D models and the production of georeferenced orthophotos in local/global/absolute coordinate systems. | [114,115,116]
Meshroom | Meshroom is a free and open-source 3D reconstruction software based on the AliceVision framework. AliceVision is a photogrammetric computer vision framework that provides 3D reconstruction and camera tracking algorithms. The project is the result of a university–industry collaboration to implement robust and high-performance computer vision algorithms in a nodal environment. | [117,118]
COLMAP | COLMAP is a structure from motion (SfM) and multi-view stereo (MVS) pipeline with which it is possible, via the command line, to reconstruct 3D models from ordered and unordered image datasets. The tool has a simple graphical interface and an automatic reconstruction tool, and it includes command-line options for more advanced users. The programme is also equipped with numerous tools and settings, making it suitable for various reconstruction scenarios. | [119,120,121,122]
VisualSFM | VisualSFM is a GUI application designed by Changchang Wu and dedicated to 3D reconstruction. It integrates several algorithms required for sparse point cloud reconstruction, including SIFT on GPU (SiftGPU), multicore bundle adjustment, and towards linear-time incremental structure from motion. For the dense reconstruction phase, the application decomposes the problem into reasoned clusters and integrates Yasutaka Furukawa’s PMVS/CMVS algorithms. | [123,124]
Table 9. Examples of UAV photogrammetry applications in the CH field.
Case Study | 3D Point Cloud | Model and Features of UAV Photogrammetry Process
1 | (image) | (image)
2 | (image) | (image)
3 | (image) | (image)
4 | (image) | (image)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
