Article

Multi-Camera Imaging System for UAV Photogrammetry

by
Damian Wierzbicki
Department of Remote Sensing, Photogrammetry and Imagery Intelligence, Institute of Geodesy, Faculty of Civil Engineering and Geodesy, Military University of Technology, 01-476 Warsaw, Poland
Sensors 2018, 18(8), 2433; https://doi.org/10.3390/s18082433
Submission received: 20 June 2018 / Revised: 22 July 2018 / Accepted: 23 July 2018 / Published: 26 July 2018
(This article belongs to the Special Issue Unmanned Aerial Vehicle Networks, Systems and Applications)

Abstract

In the last few years, it has been possible to observe a considerable increase in the use of unmanned aerial vehicles (UAV) equipped with compact digital cameras for environment mapping. The next stage in the development of low-altitude photogrammetry was the processing of imagery data from UAV oblique images, acquired from side-facing directions. As in professional photogrammetric systems, it is possible to record footprints of tree crowns and other forms of the natural environment. The use of a multi-camera system significantly reduces one of the main limitations of UAV photogrammetry (especially in the case of multirotor UAV), namely the small ground coverage area, which entails a larger number of images, more flight lines, and a smaller surface imaged during one flight. The approach proposed in this paper is based on using several cameras in one head to enhance the imaging geometry during a single UAV mapping flight. As part of the research work, a multi-camera system consisting of several cameras was designed to increase the total Field of View (FOV). Thanks to this, it will be possible to increase the ground coverage area and to acquire image data effectively. The acquired images are mosaicked in order to limit the total number of images for the mapped area. As part of the research, the set of cameras was calibrated to determine the interior orientation parameters (IOPs). Next, the method of image alignment using feature image matching algorithms is presented. In the proposed approach, the images are combined in such a way that the final image has a joint centre of projection of the component images. The experimental results showed that the proposed solution is reliable and accurate for mapping purposes. The paper also presents the effectiveness of existing transformation models for images with large coverage subjected to initial geometric correction due to the influence of distortion.

1. Introduction

Multi-camera systems, and the nadir and oblique images acquired with them, are of increasing importance in professional aerial photogrammetry. In comparison with classical photogrammetry, nadir and oblique imaging technology allows for the registration of building footprints and facades. Thanks to this, it is possible to simplify the identification and interpretation of some objects that are difficult to recognize from a single perspective view [1,2]. Oblique images can be used to fill the existing gap between aerial and terrestrial images [3]. Professional photogrammetric cameras are capable of mapping large areas. Multi-head imaging systems generating virtual images, such as the Microsoft Vexcel UltraCam [4] and the ZI Imaging DMC [5], were developed. However, these solutions are very expensive and adapted to traditional aerial photogrammetry. Recently, the use of unmanned aerial vehicles (UAVs) equipped with a compact digital camera has significantly accelerated image acquisition and reduced its costs [6]. UAV photogrammetry produces extremely high spatial resolution images in a short time for topographic mapping [7,8], 3D modeling [9], or point cloud classification [10]. The next stage in the development of low-altitude photogrammetry was the processing of data from UAV oblique images, where image data were acquired from side-facing directions. Similar to professional photogrammetric systems, it became possible to register footprints and facades of buildings in an urban environment. In recent years, this technique has developed significantly and become essential for the photogrammetric community. The integration of UAV photogrammetry and oblique imaging will significantly increase the range of image data obtained from low altitudes for photogrammetric and remote sensing studies [11]. A UAV equipped with a multi-camera imaging system can obtain oblique images from almost any angle of view. In the context of the development of UAV technology and multi-image matching [12], the number of studies is gradually increasing. That is why research into the implementation of multi-camera systems in UAV photogrammetry is still valid, and research in the context of UAVs is particularly important [12,13].
In addition, the use of a multi-camera system reduces one of the main limitations of UAV photogrammetry, namely the significant lowering of the ground coverage area, which is associated with increasing the number of images, increasing the number of flight lines, and reducing the area imaged during one flight. One of the possible ways of increasing the imaging range and the associated coverage area is to use several synchronized oblique and nadir cameras. The images acquired from the multiple cameras can then be processed as oblique strips [14,15]. Another solution may be transformation, registration, and mosaicking in order to generate one large virtual image [5,15,16].
The approach proposed in this paper is to generate a larger virtual image from a five-camera head. As part of the research work, a multi-camera system consisting of five GoPro action cameras was designed to increase the total FOV while limiting the influence of the fish-eye effect. Thanks to the proposed solution, it will be possible to increase the ground coverage area and to acquire image data effectively. Virtual images will be mosaicked to limit the total number of images for the mapped area. As part of the research, the cameras were calibrated to determine the interior orientation parameters (IOPs). The next section presents the method of combining images based on the scale-invariant feature transform (SIFT) descriptor and the Fast Library for Approximate Nearest Neighbors (FLANN) based matcher, as well as geometric correction using the projective transform and the Random Sample Consensus (RANSAC) algorithm. The results of mosaicking images obtained with the multi-camera system are also presented.

Related Works

Until recently, multi-camera systems were installed only on board manned aircraft. However, for several years, multi-camera systems have also been mounted on board UAVs. One of the first research works on this topic was carried out by Grenzdörffer et al. [17]. As part of that work, a UAV with a multi-camera system consisting of one nadir and four oblique cameras was built. The geometric and radiometric corrections of the acquired images and the possibility of applying them to automatic texture mapping of 3D city models were discussed. Other research related to UAV oblique images focused on 3D modeling of buildings [18] and trees [19]. These systems usually consist of several small and medium-sized low-cost, compact digital cameras [15,20,21]. Thanks to such a solution, the FOV (Field of View) is significantly increased; when images from different cameras overlap, they can be mosaicked based on the geometric relationship between the cameras. As a result, one virtual image can be created, and a much larger area can be acquired in one UAV flight. Simple systems can consist of two low-cost digital cameras in vertical viewing. Such a solution was proposed by Tommaselli et al. [15]. In their work, they presented the subsequent steps related to platform calibration, image rectification, and registration. Another interesting solution is a multi-camera system consisting of five digital compact cameras mounted on a large UAV AL-150 UAS platform (Aeroland UAV, Hong Kong, China). The authors of previous papers [16,22] proposed generating a virtual image based on a Modified Perspective Transform. An innovative approach to the processing of images from a multi-camera system was real-time bundle adjustment of UAV images treated as two stereo pairs [23]. Another solution was proposed previously [20]. In that work, the authors presented the concept of generating virtual images from six vertical cameras mounted on the UAV board. According to this approach, it is important to mount the cameras in a vertical position in such a way as to create an integrated structure. For each of the cameras included in the system, the interior orientation parameters (IOPs) are important, and for the cameras mounted off-nadir, also the relative orientation parameters (ROPs) with respect to the nadir camera [24,25,26]. The elements of the interior orientation of each camera included in the system should be determined in an independent calibration procedure. In the case of relative orientation, it should be assumed that its elements are fixed for individual cameras. ROPs can be determined using two methods. In the first one, the elements of relative orientation are determined from the differences between the known elements of the exterior orientation (EOP) of each camera, which can be determined by observations from GNSS/IMU (Global Navigation Satellite System/Inertial Measurement Unit) sensors installed on the UAV. This method is simple, but the ROP accuracy depends directly on the accuracy of the measured EOP elements. It also depends on the misalignment between the GNSS/IMU on board the UAV and the cameras, as well as on the number and distribution of Ground Control Points (GCPs). The second method of relative orientation is based on the determination of corresponding points between the nadir camera (Master camera) and the off-nadir camera, and then on the determination of the rotation matrix and the translation vector or on the bundle adjustment. In many studies, the second method is used [15,27].
When using the second orientation method, its complexity increases with the number of cameras included in the system and their location relative to each other. The main assumption of the presented method is that the physical relations between the cameras remain unchanged during the flight. In order to increase the accuracy of the determined ROPs in low-cost multi-camera systems, it is necessary to ensure adequate mutual coverage between images from individual cameras. Thanks to such coverage, an appropriate number of tie points will allow estimating the unknown ROPs using the bundle adjustment. Calibration and generation of virtual images with a multi-camera system mounted on a UAV can also be difficult due to the low stability of this type of platform. Wind gusts, a heavy UAV load, and a relatively low flight altitude can cause the whole platform to vibrate [28,29]. Therefore, one solution to this problem is the use of the UAV platform presented in this article along with a dedicated 2-axis stabilized head. This solution compensates for the relative movements between the cameras and mechanically improves their stability. This technique can be considered a bridge between classic aerial and terrestrial image acquisition [3], and its usage in civil applications has been increasingly documented [30]. In some situations, such as the inspection of power lines [31,32,33], flexible data collection and high-resolution images are required, and manned aircraft platforms cannot meet these needs.
The contents of the paper are organized as follows. Section 1 gives the introduction and a review of related works. In Section 2, the methodology is presented: a new approach to the initial geometric correction of images obtained from the multi-camera system is proposed, and the matching technique for fish-eye images is described. Section 3 presents the research. Section 4 presents the results of the individual stages of image processing. In Section 5, the accuracy of the proposed geometric adjustment and matching method is evaluated. Finally, Section 6 discusses the results in the context of experiments carried out by other researchers. Section 7 contains conclusions from the research and plans for further scientific research.

2. Methodology

The following section describes the UAV platform and the set of cameras with fish-eye lenses that were used to obtain the image sequences. The following subsections also present the methodology of the subsequent stages of image processing carried out in order to obtain the mosaicked images.

2.1. Description of UAV Multi-Camera Imaging System

This section presents a description of the multi-camera UAV imaging system. The image data from low flying heights were obtained using the Novelty Ogar mk II platform (NoveltyRPAS, Gliwice, Poland), which can be classified in the mini multirotor category (see Figure 1).
The UAV Ogar mk II can perform air missions beyond visual line of sight (BVLOS). The maximum takeoff weight (MTOW) of this UAV platform is 4.5 kg. Its flight time (endurance) is 40 min. The maximum speed of the platform is up to 20 m/s. The system may be operated at wind speeds of up to 14 m/s and in weather conditions no worse than light precipitation. The UAV Ogar mk II can acquire images for mapping purposes in two modes: nadir and oblique imaging. Imaging in the nadir and oblique modes allows the acquisition of images for the development of orthophoto maps. The multirotor ensures a completely autonomous flight at a given altitude and with the given transverse and longitudinal coverage, among others thanks to the mounted GNSS/IMU receiver. The system equipment includes a flight controller that allows real-time flight management. The Ogar mk II can automatically control take-off, flight, and landing. The multirotor is equipped with a stabilized gimbal. The sequences of video images are acquired in a continuous mode. For the GoPro camera set, the BLh position and the Yaw, Pitch, and Roll angle values of the head are recorded. Flight safety is controlled automatically, but operator intervention is possible through emergency safety procedures.

Camera Specifications

In the research carried out, five GoPro Hero 4 Black cameras (GoPro Inc., San Mateo, CA, USA) equipped with a wide-angle lens and a rolling shutter were used (see Figure 2). The complementary metal–oxide–semiconductor (CMOS) sensor reads images row by row. The GoPro Hero 4 camera can work in photo and video modes.
In this system, the use of different resolutions and recording speeds of the video sequence with different FOV (Field of View) settings is also possible. The camera records videos in 4 K/30 fps mode with an ultra-wide FOV of up to 170°, as well as in 2.7 K/50 fps and Full HD/120 fps modes. The camera also has a fast burst mode which enables taking up to 30 pictures (12 megapixels) per second [33]. Table 1 shows the technical specification.
At present, video sensors dispense with mechanical shutter systems in favour of electronic rolling shutters. For camera synchronization, the GoPro Smart Remote and the Mission Planner UAV software were used.

2.2. Imaging Geometry for UAV Oblique Photogrammetry

For each GoPro action camera, the FOV determines the area on the surface of the Earth that is observed by a single sensor. The area is determined based on the knowledge of the Ground Sampling Distance (GSD). The GSD for the nadir camera was calculated using the following formula, Equation (1):
$$\mathrm{GSD} = \frac{p}{c_k} \, H,$$
where:
p—the CMOS sensor pixel size
ck—focal length derived from the camera calibration
H—altitude (AGL)
Table 2 shows GSD theoretical values for Nadir as a function of height and image acquisition parameters. Flight height would usually vary from 50 to 200 m for image data obtained from a low flying height. For this study a resolution of 2704 × 1520 pixels (2.7 K mode) was chosen (the central part of the image, to reduce the negative impact of image distortion caused by the camera lens) with 2.70 mm focal length.
For the oblique cameras, individual GSD values were not determined because, depending on the viewing angle and the moment of frame capture, the scale and GSD differ across different parts of each image frame. Figure 3 shows the acquisition geometry of the UAV oblique multi-camera photogrammetry system for 3D modeling and orthophotomap generation. The imaging geometry is presented for the roll angle.
The camera system has been designed so that the nominal overlap between the cameras is at least 70% across the flight direction [11,34]. For such a system, the tilt angles of the Cam2 and Cam4 cameras from the nadir are 13.2°, while for the Cam1 and Cam5 cameras the tilt angles from the nadir are 26.4°. The terrain extent of the image frames from the GoPro cameras, as a function of the tilt from the nadir, can be expressed by the following equations:
$$\tan \alpha_n = \frac{\mathrm{footprint}_n}{H},$$
where
αn—gimbal angle for each GoPro camera from n = 1 to 5
footprintn—height of photo footprint for each GoPro camera from n = 1 to 5
H—altitude
$$\mathrm{footprint}_n = H \left( \tan\!\left(\frac{\mathrm{HFOV}}{2} + \alpha_n\right) + \tan\!\left(\frac{\mathrm{HFOV}}{2} - \alpha_n\right) \right),$$
where
αn—gimbal angle for each GoPro camera from n = 1 to 5 [deg]
footprintn—height of photo footprint for each GoPro camera from n = 1 to 5
H—altitude
HFOV—horizontal angle of view [deg].
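To illustrate the GSD and footprint relations above, a short numerical sketch is given below. The pixel size and focal length follow Table 1; the flight height and the angle of view are assumed example values consistent with Table 2.

```python
import math

# Numerical sketch of the GSD and footprint relations above. Pixel size and focal
# length follow Table 1; flight height and angle of view are assumed example values.
p = 1.55e-6          # CMOS pixel size [m]
ck = 2.70e-3         # calibrated focal length [m]
H = 50.0             # flight altitude AGL [m]

gsd = p / ck * H     # GSD = (p / ck) * H
print(f"GSD at nadir: {gsd:.3f} m")          # ~0.029 m, as in Table 2

HFOV = math.radians(75.6)                    # assumed angle of view of the 2.7 K frame
for n, alpha_deg in enumerate((26.4, 13.2, 0.0, 13.2, 26.4), start=1):
    alpha = math.radians(alpha_deg)          # gimbal tilt of camera n from nadir
    # footprint_n = H * (tan(HFOV/2 + alpha_n) + tan(HFOV/2 - alpha_n))
    footprint = H * (math.tan(HFOV / 2 + alpha) + math.tan(HFOV / 2 - alpha))
    print(f"Cam{n}: tilt {alpha_deg:4.1f} deg, footprint {footprint:6.1f} m")
```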

2.3. Camera Calibration

Non-metric camera calibration allows the determination of the elements of interior orientation necessary for accurate 3D metric information extraction [35]; these are: the calibrated focal length (ck), the coordinates of the principal point of the image (xp, yp), the radial lens distortion coefficients (k1, k2, k3) [36], and the tangential distortion coefficients (p1, p2). Therefore, it is recommended to pre-calibrate action cameras in order to obtain reliable elements of interior orientation that allow for precise photogrammetric reconstructions. The calibration of cameras and the evaluation of the reliability of the determined elements of interior orientation are still an open research issue in photogrammetry [37], including UAV photogrammetry. Unknown internal geometry is the main problem of sensors equipped with wide-angle lenses [38,39]. A full review of camera calibration methods and models is given in many publications [37,40,41]. The results presented in the aforementioned articles summarize the experience associated with using digital cameras for photogrammetric measurements. They discuss different configurations, parameters, and analysis techniques associated with camera calibration. They also present well-known photogrammetric systems with implemented camera calibration models and algorithms that increase 3D accuracy through self-calibration bundle adjustment. The issues associated with camera calibration have also become a current research topic in the field of Computer Vision (CV). Research focuses on the full automation of the calibration process [42] on the basis of linear approaches with simplified imaging models [43]. The first works based on these methods concerned the pinhole camera model and included the modeling of radial distortion [43,44,45].

Camera Calibration—A Mathematical Model

Camera calibration is intended to reproduce the geometry of rays entering the camera through the projection center at the moment of exposure. The calibration parameters of the camera are:
  • calibrated focal length—ck;
  • the projection centers in relation to the pictures, determined by x0 and y0—image coordinates of the principal point;
  • lens distortion: radial (k1, k2, k3) and decentering (p1 and p2) lens distortion coefficients.
In the case of action cameras, there is one large FOV in the wide-angle viewing mode. The calibration process plays a very important role in modeling the distortion of the lens. The interior orientation model used in the research is the one implemented in OpenCV, based on the modified Brown calibration model [46].
In the case of large distortion, as in wide-angle lenses, the radial distortion model is extended by additional distortion coefficients: 1 + k4r² + k5r⁴ + k6r⁶. The radial distance has the form:
$$r^2 = x'^2 + y'^2,$$
where:
r—radial distance;
x′, y′—are measured image coordinates referenced to the principal point.
When taking into account the influence of distortion, image coordinates will take the form of:
$$x'' = x' \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + 2 p_1 x' y' + p_2 \left( r^2 + 2 x'^2 \right),$$
$$y'' = y' \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + p_1 \left( r^2 + 2 y'^2 \right) + 2 p_2 x' y',$$
$$u = f_x x'' + c_x, \qquad v = f_y y'' + c_y,$$
where:
k1, k2, k3—polynomial coefficients of radial distortion;
p1, p2—coefficients describe the impact of tangential distortion;
x″, y″—image coordinates of point repositioning based on the distortion parameters;
fx, fy—are the focal lengths expressed in pixel units;
cx, cy—are the principal point offset in pixel units;
u, v—are the coordinates of the projection point in pixels.
In the case of calibrating a camera with a fish-eye lens, the calibration model in the OpenCV library is expressed using the coordinate vector of a point P in the camera reference frame:
$$X_c = R X + T,$$
where:
R—is a rotation matrix;
X—3D coordinates of P point
The pinhole projection coordinates of P are $(a, b)^T$, where $a = x/z$, $b = y/z$, $r^2 = a^2 + b^2$, and $\theta = \operatorname{atan}(r)$.
The equation describing the fisheye distortion will take the form:
$$\theta_d = \theta \left( 1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8 \right).$$
The distorted point coordinates are $(x', y')^T$, where:
$$x' = \left(\theta_d / r\right) a, \qquad y' = \left(\theta_d / r\right) b.$$
Finally, the conversion into pixel coordinates gives the final pixel coordinate vector $(u, v)^T$, where:
$$u = f_x \left( x' + \alpha y' \right) + c_x, \qquad v = f_y y' + c_y.$$
At present, camera calibration algorithms are available as ready-made solutions in open-source libraries, e.g., OpenCV. These algorithms are based on detecting a substantial number of points on a flat test field of the 'chessboard' type [47,48]. The use of flat objects for camera calibration does not provide as high an accuracy as 3D test fields; however, in most applications the use of 2D 'chessboard' test fields is acceptable [49,50]. For photogrammetric purposes, both mentioned methods are acceptable. The proper design of the measurements, correct photography of the calibration test, image measurement, and bundle adjustment allow accurate and correct calibration for the majority of compact digital cameras.
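A minimal calibration sketch along these lines, using OpenCV chessboard detection and the rational (k4–k6) distortion model mentioned above, is shown below. The board size and the frame paths are hypothetical, and frames are assumed to have been extracted from the video beforehand.

```python
import glob
import cv2
import numpy as np

# Minimal chessboard calibration sketch with OpenCV; a 9x6 inner-corner board and
# the frame paths are assumed (hypothetical) values.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coordinates in square units

obj_points, img_points = [], []
for path in glob.glob("calib_frames/cam3/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# CALIB_RATIONAL_MODEL adds the k4, k5, k6 denominator terms used for strong distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None,
    flags=cv2.CALIB_RATIONAL_MODEL)
print("Reprojection RMS [pix]:", rms)
print("Camera matrix:\n", K)
print("Distortion coefficients (k1 k2 p1 p2 k3 k4 k5 k6):", dist.ravel())
```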

2.4. Relative Orientation

The problem of the relative orientation of the cameras is to determine the 3D rotation and translation between the various cameras included in the set. Jhan et al. [16,51] proposed that the elements of relative orientation for each camera be calculated in such a way that the nadir camera is designated the Master, while the other, oblique cameras are designated Slaves. In this case, the angular elements of the relative orientation (ΔωPitch, ΔφYaw, ΔϰRoll) and the spatial offset vectors (Vx, Vy, Vz) for all five cameras can be determined by Equations (12) and (13):
$$R_{C_S}^{C_M} = R_L^{C_M} \times R_{C_S}^{L},$$
$$r_{C_S}^{C_M} = R_L^{C_M} \times \left( r_{C_S}^{L} - r_{C_M}^{L} \right),$$
where:
$R_{C_S}^{C_M}$—rotation matrix between the two cameras;
$r_{C_S}^{C_M}$—the position vector between the perspective centres of the two cameras.
For the above equations, the relative orientation angles are calculated from $R_{C_S}^{C_M}$. This rotation matrix between the two cameras is expressed in a coordinate system under the local mapping frame L, where CM and CS denote the Master and Slave cameras. The offset vector (Vx, Vy, Vz) is derived from $r_{C_S}^{C_M}$, which represents the position vector between the perspective centres of the two cameras [16]. In the proposed approach, the elements of the relative orientation between the Master and Slave cameras were determined based on the OpenCV library and epipolar geometry [52] (Figure 4).
According to this theory, the main goal is to determine the rotation matrix R and the translation vector t. In the first stage of relative orientation, the search for homologous points takes place using the SIFT descriptor [55] and the FLANN based matcher [52]. On the basis of homologous points in a pair of images, it is possible to recover the fundamental matrices of the slave cameras. For each common point, the following condition must be met [53,54]:
$$p_1^T F p_0 = 0,$$
where
$$F = K_2^{-T} S_t R K_1^{-1}.$$
The matrix St is the skew symmetric matrix
$$S_t = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0 \end{bmatrix},$$
where:
K1, K2—are the calibration matrices
R—is the rotation of slave camera
t—is the translation of the slave camera
p0, p1—are images points (normalized image coordinates)
P—the projection point.
Then, the fundamental matrix F is determined using RANSAC and the eight-point algorithm [54,56], which defines the set of epipolar lines. The fundamental matrix can be expressed in terms of the two camera matrices (the relative orientation rotation matrix R and the translation t). The fundamental matrix has rank 2 and det(F) = 0 [53].
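A compact sketch of this matching and relative orientation step with OpenCV is given below. The file names and the calibration matrix are placeholders, and both cameras are assumed to share the same interior orientation (the five cameras are identical models).

```python
import cv2
import numpy as np

# Sketch of the master/slave relative orientation step described above.
# File names are placeholders; K is the camera matrix from calibration and is
# assumed to be the same for both (identical) cameras.
img_master = cv2.imread("cam3_frame.jpg", cv2.IMREAD_GRAYSCALE)
img_slave = cv2.imread("cam2_frame.jpg", cv2.IMREAD_GRAYSCALE)
K = np.load("cam_matrix.npy")

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_master, None)
kp2, des2 = sift.detectAndCompute(img_slave, None)

# FLANN matcher with a KD-tree index, followed by Lowe's ratio test (0.70).
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
good = [m for m, n in flann.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

p0 = np.float32([kp1[m.queryIdx].pt for m in good])
p1 = np.float32([kp2[m.trainIdx].pt for m in good])

# Fundamental matrix with RANSAC; the inlier mask removes outliers.
F, mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 3.0, 0.99)
inl = mask.ravel() == 1

# With calibrated cameras, the essential matrix E = K^T F K yields R and t of the slave camera.
E = K.T @ F @ K
_, R, t, _ = cv2.recoverPose(E, p0[inl], p1[inl], K)
print("Rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```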

2.5. Rectification of Action Camera Images

For the geometric correction (rectification) of inclined images, the projective transformation is often used. It is an eight-parameter transformation in which information about the interior and exterior orientation is contained. In order to determine the eight coefficients of this transformation, it is necessary to know a minimum of four tie points, of which no three can lie on one straight line. For homogeneous coordinates, the projective transform can be expressed as [57]:
$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} L_1 & L_2 & L_3 \\ L_4 & L_5 & L_6 \\ L_7 & L_8 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix},$$
where:
x, y stand for the image coordinates, X, Y for the image coordinates of the reference camera (master camera), and L1, ..., L8 for the projective transformation parameters [58,59]. On this basis, the equations can be written in the expanded form:
$$x = \frac{L_1 X + L_2 Y + L_3}{L_7 X + L_8 Y + 1}, \qquad y = \frac{L_4 X + L_5 Y + L_6}{L_7 X + L_8 Y + 1}.$$
These equations are the basis for the rectification of oblique images [58,59]. A characteristic feature of the projective transform is that the homography transformation has eight Degrees of Freedom (DOF). The homogeneous coordinates of corresponding points can be related by the homography matrix H, in such a way that for a pair of corresponding points p = (x, y, 1)^T and q = (u, v, 1)^T we get:
$$p \sim H q = K_1 R_1 R_2^T K_2^{-1} q.$$
The homography matrix is a 3 × 3 matrix with an ambiguous scale. It has the following form:
$$H = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & 1 \end{bmatrix}.$$
Because there are eight DOFs, the minimum number of points required to solve the homography is four, as shown in the following equation [60]:
$$\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1 x_1' & -y_1 x_1' \\
0 & 0 & 0 & x_1 & y_1 & 1 & -x_1 y_1' & -y_1 y_1' \\
x_2 & y_2 & 1 & 0 & 0 & 0 & -x_2 x_2' & -y_2 x_2' \\
0 & 0 & 0 & x_2 & y_2 & 1 & -x_2 y_2' & -y_2 y_2' \\
x_3 & y_3 & 1 & 0 & 0 & 0 & -x_3 x_3' & -y_3 x_3' \\
0 & 0 & 0 & x_3 & y_3 & 1 & -x_3 y_3' & -y_3 y_3' \\
x_4 & y_4 & 1 & 0 & 0 & 0 & -x_4 x_4' & -y_4 x_4' \\
0 & 0 & 0 & x_4 & y_4 & 1 & -x_4 y_4' & -y_4 y_4'
\end{bmatrix}
\begin{bmatrix} a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32} \end{bmatrix}
=
\begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \\ x_4' \\ y_4' \end{bmatrix},$$
where $(x_i, y_i)$ and $(x_i', y_i')$ denote corresponding points in the two images.
Image coordinates can be determined according to Equations (19) and (20):
$$x_i = \frac{a_{11} x + a_{12} y + a_{13}}{a_{31} x + a_{32} y + 1}, \qquad y_i = \frac{a_{21} x + a_{22} y + a_{23}}{a_{31} x + a_{32} y + 1}.$$
Then, the Random Sample Consensus (RANSAC) algorithm was used; it applies a distance tolerance to find correspondences between two sets of points and to determine the transformation function. If the tolerance is too low, correct correspondences may be rejected; if the tolerance value is too high, some of the retained correspondences may be inaccurate or incorrect. The selection of an appropriate tolerance value plays an essential role in the stability of RANSAC and is relevant to the quality of the classified core correspondences. The advantage of the algorithm is its simplicity and relatively high resistance to outliers, even with a large number of observations. Its limitation is that, with too much noise, it requires many iterations and its computational complexity can become very high [61].
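The homography estimation and rectification described above can be sketched with OpenCV as follows, reusing the images and matched tie points from the previous sketch; the canvas width is an assumed example value.

```python
import cv2
import numpy as np

# Sketch of the projective rectification step: estimate a homography from the tie
# points of the previous step (p1 -> slave points, p0 -> master points, Nx2 arrays)
# and warp the slave image into the master image plane.
pts_slave, pts_master = p1[inl], p0[inl]
H, inlier_mask = cv2.findHomography(pts_slave, pts_master, cv2.RANSAC, 3.0)

h, w = img_master.shape[:2]
mosaic = cv2.warpPerspective(img_slave, H, (2 * w, h))   # warp slave onto a wider canvas (assumed size)
mosaic[0:h, 0:w] = img_master                            # paste the master image (no tonal blending)
cv2.imwrite("mosaic_pair.jpg", mosaic)
```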

3. Research

3.1. Study Site and Data Set

Images from low flying heights were acquired with the camera set over the test area located in the vicinity of Gliwice (Poland) (50°17′32″ N, 18°40′03″ E). The area was flat and partly wooded, with single buildings on its surface. Image data were obtained from low flying heights in good weather and lighting conditions. Low grassy and shrubby vegetation covered the observed area. The test data consisted of five sets of fisheye video frames acquired with the Novelty Ogar mk II platform over the test area. A total of 100 video frames were selected for the tests. Image data were obtained from a height of 50 m, with a GSD equal to 0.029 m in the central part of the image.

3.2. Proposed Approach

The approach proposed in this article involves the generation of one large virtual image based on images acquired from five cameras mounted in a common head. In this configuration, the central camera is the nadir-oriented camera (Cam3), while the other cameras are tilted relative to the central camera (Master camera) by 13.2° (Cam2 and Cam4) and 26.4° (Cam1 and Cam5), respectively (Figure 5).
Figure 5 shows a diagram of rectification of images acquired from five cameras installed on the UAV. In the proposed approach, the images are combined in such a way that the final image is a mosaic of component images.
The main stages of the proposed study are:
(a) Acquiring low-level images with cameras with fish-eye lenses;
(b) Calibration of the cameras;
(c) Geometric correction of the images due to distortion (lens distortion correction);
(d) Relative orientation based on the SIFT descriptor and the FLANN matcher;
(e) Projective transformation (geometric transform);
(f) Mosaicking to generate one large image.
Figure 6 shows the scheme of geometric correction and mosaicking of images obtained from a low flying height. First, the calibration of each camera is performed to determine the interior orientation parameters (IOPs) and distortion coefficients. In the next stage, the negative effect of lens distortion is removed from each image. During the relative orientation of the images, tie points are found on the basis of the SIFT descriptor, and the matches determined using the FLANN algorithm are refined. Next, a homography matrix is determined based on the RANSAC algorithm, for which the cut-off threshold was set empirically at 0.7. In the next stage, the projective matrix is calculated, and the perspective transformation is performed. Then the geometrically corrected images are combined into a single mosaic (virtual image).
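Putting the stages of Figure 6 together, the overall pipeline can be outlined as below; the helper functions are hypothetical wrappers around the OpenCV calls sketched in Section 2.

```python
# Hedged outline of the processing chain in Figure 6; the helper functions
# (undistort_frame, match_sift_flann, estimate_homography, warp_and_paste) are
# hypothetical wrappers around the OpenCV calls sketched in Section 2.
def build_virtual_image(frames, calibrations, master_idx=2, ratio=0.7, ransac_thr=3.0):
    # 1. remove lens distortion from every frame using its own calibration
    undistorted = [undistort_frame(f, c) for f, c in zip(frames, calibrations)]
    mosaic = undistorted[master_idx]
    # 2.-4. match each slave frame to the master, estimate a homography, warp and paste
    for i, frame in enumerate(undistorted):
        if i == master_idx:
            continue
        pts_slave, pts_master = match_sift_flann(frame, undistorted[master_idx], ratio)
        H = estimate_homography(pts_slave, pts_master, ransac_thr)
        if H is not None:
            mosaic = warp_and_paste(frame, mosaic, H)
    return mosaic
```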

4. Results

4.1. Results of Camera Calibration

Video sequences were registered at different angles: from the front, from the right, from the left, from above, and from the ground, all taken from the same distance. During image acquisition, the conditions were kept such that the optical axis of each camera passed through the centre of the test field. In this research, the video mode was used to record the chessboard field at different view angles and positions. Video frames were converted to single pictures at a rate of one image per second. For each action camera, five measuring series were carried out, each series comprising a minimum of five frames at different camera locations. During the video sequencing, similar measuring conditions were ensured for the acquired samples in order to obtain the most accurate results. The results of the interior orientation for the five cameras in the 2.7 K mode are presented in Table 3. Twenty calibration images were used for this purpose.
Within the framework of the research, the five action cameras were calibrated in video mode. The results obtained in both variants of calibration for the 2.7 K mode (the central part of the image) are comparable. The determined calibrated focal length values differ, on average, by about 0.3 mm from the value given by the producer. However, the calibrated focal length and principal point coordinates are comparable with other test results [62]. The last row of Table 3 presents the reprojection errors for each of the calibrated cameras. The obtained error value for individual cameras ranges from 0.16 to 0.34 pixels. The largest error value was calculated for the Cam4 camera, and it is equal to 0.34 pixels. The obtained calibration results for the fish-eye lens cameras are comparable with the calibration results obtained by Scaramuzza et al. [63]; based on the performed calibrations, those authors obtained an average reprojection error of less than 0.30 pixels [63].
Figure 7 shows the distribution of distortion functions for an example camera that is part of the head (for other cameras, distortion functions are very similar).

4.2. Undistorted Fisheye Video Sequence

Based on the calibration performed with the OpenCV script, the camera matrix and distortion coefficients were determined. Then, using the cv2.undistort function, individual images were rectified (the negative effect of lens distortion was removed). The undistortion changes the position of the extreme pixels of the image and shifts them closer to the centre of the image. Sometimes, some pixels are placed on the edges of the image, which distorts it. The implementation from the OpenCV library allowed the minimization of this phenomenon.
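A minimal sketch of this step is shown below; the camera matrix and distortion coefficients are assumed to have been saved during the calibration stage, and the file names are placeholders.

```python
import cv2
import numpy as np

# Minimal undistortion sketch. The camera matrix K and distortion coefficients are
# assumed to come from the calibration stage; file names are placeholders.
K = np.load("cam3_camera_matrix.npy")
dist = np.load("cam3_dist_coeffs.npy")

img = cv2.imread("cam3_frame.jpg")
h, w = img.shape[:2]

# Refine the camera matrix so that the undistorted frame keeps only valid pixels (alpha=0).
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
undistorted = cv2.undistort(img, K, dist, None, new_K)
cv2.imwrite("cam3_frame_undistorted.jpg", undistorted)
```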

4.3. Visual Evaluation of the Undistortion Method

The figure below (Figure 8) shows the original image before the correction of distortion and the image after the correction of distortion.
The proposed method of initial geometric correction noticeably reduces geometric distortions in the image before the process of relative orientation (Figure 8). Moreover, the proposed method preserves angles and proportions in the scene. An advantage of the proposed method of initial geometric correction is the lack of distortion within the corrected image; the post-correction scene retains its original resolution.

4.4. Relative Orientation—Feature Image Matching

Relative orientation is performed based on corresponding points generated in every image. Corresponding points are generated based on the SIFT algorithm and the FLANN matcher implemented in the OpenCV library in the Python programming environment. In the SIFT algorithm, features are generated in the common area of the reference images. Each feature is matched by comparisons based on the Euclidean distance of the feature vectors [63]. For matching using SIFT, the ratio test threshold was set to 0.70.
The number of raw tie points (see Table 4) generated for particular camera pairs (stereograms) ranged from 2497 to 4319. The average number of points for the camera set was 3365, and the standard deviation was 648 points. After relative orientation and image matching, the next step was to calculate the geometric transformation parameters (projective transform). In the next stage, the raw matches were used to estimate the fundamental matrix. For this purpose, it was necessary to reject outliers and select inliers for the correct determination of the geometric transform.
After every matrix estimation, a validation test is performed to avoid a distorted transformation based on incorrect matches (a simple check along these lines is sketched below):
(a) The torsional (perspective) factors of the homography (H3,1, H3,2) cannot be too significant; their absolute values are usually less than 0.002.
(b) An excessive shift between images is not allowed when combining images; the homography is rejected if it changes the x and y coordinates excessively.
When combining images without prior knowledge of their arrangement, it should be decided whether two images match or not. The number of estimated matches (inliers) can be one of the criteria. However, high-resolution images often contain outliers (see Table 5). A sufficient number of iterations in RANSAC should eliminate this problem. It should also be noted that incorrect matches are often randomly placed in the image. An additional geometric criterion can further improve the accuracy of image matching [64,65].
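A simple check implementing criteria (a) and (b) might look like the sketch below; the 0.002 bound follows the text, while the maximum allowed shift is an assumed illustrative value.

```python
import numpy as np

# Sketch of the homography validation criteria (a) and (b). The 0.002 bound on the
# perspective terms follows the text; the maximum allowed shift is an assumed value.
def homography_is_valid(H, max_perspective=0.002, max_shift_px=500.0):
    H = np.asarray(H, dtype=float)
    H = H / H[2, 2]                          # normalize so that H[2, 2] = 1
    # (a) the torsional (perspective) terms H[2, 0] and H[2, 1] must stay small
    if abs(H[2, 0]) > max_perspective or abs(H[2, 1]) > max_perspective:
        return False
    # (b) the translation implied by the homography must not be excessive
    if abs(H[0, 2]) > max_shift_px or abs(H[1, 2]) > max_shift_px:
        return False
    return True
```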
As can be seen from the analysis of Figure 9, after optimization the density of tie points has been significantly reduced. On the basis of the RANSAC algorithm, only the points (inliers) meeting the cut-off criterion were used to transform the images.

5. Accuracy Assessment of Rectifying Results

5.1. Results from Multi-Camera Matching

Table 6 presents the results of the relative matching of images after transformation. Relative orientation in the proposed approach was performed at the pixel level. For each pair of images, the root mean square error (RMSE) values were determined.
Based on the analysis of the obtained results, it can be seen that the highest matching accuracy was obtained for the pair of images acquired with Cam1 and Cam2. In this case, the RMSExy value was only ±2.13 pix. In the case of the stereogram acquired from the Cam2 and Cam3 cameras, the image matching error was almost 4 pixels, more precisely RMSExy = ±3.80 pix. The largest error value for this stereogram was probably caused by the presence of an object in motion (a moving person) in the photographed scene. The average value of the RMSE error for matching all images was ±3.18 pix.
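The RMSE values of Table 6 can be reproduced from the inlier tie points and the estimated homography along the lines of the sketch below; pts_slave and pts_master are assumed to be the Nx2 inlier arrays of a stereogram.

```python
import numpy as np

# Sketch of the matching-accuracy measure reported in Table 6: residuals between the
# master-image tie points and the slave points re-projected with the homography H.
# pts_slave and pts_master are assumed Nx2 arrays of inlier tie points.
ones = np.ones((len(pts_slave), 1))
proj = (H @ np.hstack([pts_slave, ones]).T).T
proj = proj[:, :2] / proj[:, 2:3]              # back to inhomogeneous pixel coordinates

res = proj - pts_master
rmse_x = np.sqrt(np.mean(res[:, 0] ** 2))
rmse_y = np.sqrt(np.mean(res[:, 1] ** 2))
rmse_xy = np.sqrt(rmse_x ** 2 + rmse_y ** 2)   # total RMSExy as reported in Table 6
print(f"RMSEx = {rmse_x:.2f} pix, RMSEy = {rmse_y:.2f} pix, RMSExy = {rmse_xy:.2f} pix")
```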

5.2. Result of Image Stitching

Figure 10 shows mosaics for the example of two stereograms. At the initial stage of processing, the adverse influence of distortion was removed; then the relative orientation and geometric correction of the images were performed. The presented partial mosaic is an intermediate step towards the final image presented in Figure 11. The average distance between the calculated and the actual location of a point was slightly over 3 pixels. The most substantial image distortion was recorded at the edges of the mosaic.
The mosaic consisting of a nadir image and four off-nadir images is still geometrically distorted relative to an orthoimage. Therefore, the mosaicked images must be subjected to the classical orthorectification process, taking into account the influence of the relief (which was not the subject of this study), so that they can finally be presented in the nadir view. The proposed process of developing one large image can effectively increase the coverage area. A small limitation of the proposed method is the adverse ghosting of objects in motion that may appear in the mosaicked images.

6. Discussion

As part of the performed research, a method for acquiring, calibrating, geometrically correcting, and combining images obtained from a low level was proposed. The presented method allows the integration of images obtained from the multi-camera system installed on board the UAV. The proposed method of registering multiple images with the help of a multi-camera system will allow for the efficient and timely acquisition of remote sensing data, which can be successfully applied in environmental mapping and change detection. Increased efficiency in obtaining oblique images from UAVs was also observed during research work carried out previously [2]. The authors proposed using the SIFT descriptor and feature matching to orient oblique images. In their work, they stated that combining a tiling strategy with existing workflows can provide an efficient and reliable solution. Also, in a previous paper [30], the orientation process of oblique aerial images is presented based on the Binary Robust Independent Elementary Features (BRIEF) descriptor. An effective method of geometric correction of remote sensing images using the SIFT and Affine-Scale Invariant Feature Transform (ASIFT) algorithms is also presented in previous research [66], where the authors obtained geometric correction errors in the range from 0.63 to 3.74 pixels as a function of the image tilt angle from 30° to 70°. In other studies [67], similar results were obtained by mosaicking UAV images based on the SIFT descriptor and RANSAC to remove wrong matching points. Based on the experimental results, it was proved that the proposed solution could effectively reduce the impact of accumulated error and improve the precision of the mosaic while reducing the mosaicking time by up to 60%. The accuracy of the geometric correction and multi-camera mosaicking of images was also studied in a previous paper [20]. The authors used a six-camera system in their experiments and achieved an accuracy of mosaicking and geometric correction of images of 3 pixels. A similar result was achieved in the experiments presented in this article, where the average RMSE value was 3.18 pixels.
The main limitation of the proposed method is that it works effectively only for images acquired by cameras characterized by mutual stability; they should be mounted in one rigid frame. In addition, it is also sensitive to the tonal heterogeneity of the acquired images. However, a similar objection can be made about other systems acquiring image data from a low flying height. Another limitation is the fact that there should not be objects in motion in the photographed area (as in the example of the Cam2-Cam3 stereogram, in which an object in motion was photographed and the accuracy of image matching was the lowest). Also, in such cases, erratic estimation of the homography is possible for oblique images, which leads to inaccurate geometric correction of the images.

7. Conclusions

Until now, the possibility of acquiring images from a multi-camera imaging system installed on a UAV multi-copter has not been widely taken into account in environmental mapping. Basic methods of geometric correction are insufficient to accurately correct images acquired with fisheye-lens cameras, which are additionally mounted obliquely. On the basis of the above premises, a method for the geometric correction of images and their combination into one virtual image was developed. The proposed method takes into account the correction of distortions. This approach allows the effective binding of tie points and also improves the accuracy of the geometric transformation. The proposed method of image integration can increase the area imaged by the UAV multirotor. Based on the above, it is evident that the multi-camera system has a larger imaging range in relation to individual cameras equipped with normal or wide-angle lenses. The performed tests are particularly important in the context of the geometric correction of remote sensing images and in environment mapping. Future research will focus on taking into account tonal differences between component images and on fully automating the processing of large sets of images. In addition, it is planned to implement the proposed solution on a fixed-wing UAV. Thanks to this, it will be possible to increase the imaging area even more effectively. Additionally, future research work will address the use of the geometric properties of the cameras for effective 3D modeling of buildings.

Funding

This paper has been supported by a grant co-financed by the Military University of Technology, Faculty of Civil Engineering and Geodesy, Institute of Geodesy, Grant No. GB/1/2018/205/2018/DA-990.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Remondino, F.; Gerke, M. Oblique Aerial Imagery—A Review. In Proceedings of the Photogrammetric Week 2015, Stuttgart, Germany, 7–11 September 2015; pp. 75–83. [Google Scholar]
  2. Jiang, S.; Jiang, W. On-board GNSS/IMU assisted feature extraction and matching for oblique UAV images. Remote Sens. 2017, 9, 813. [Google Scholar] [CrossRef]
  3. Sun, Y.; Sun, H.; Yan, L.; Fan, S.; Chen, R. RBA: Reduced Bundle Adjustment for oblique aerial photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 121, 128–142. [Google Scholar] [CrossRef]
  4. Gruber, M.; Ladstädter, R. Results from ultracam monolithic stitching. In Proceedings of the ASPRS Annual Conference, Milwaukee, WI, USA, 1–5 May 2011; pp. 1–6. [Google Scholar]
  5. Zeitler, W.; Doerstel, C.; Jacobsen, K. Geometric Calibration of the DMC: Method and Results. ISPRS J. Photogramm. Remote Sens. 2002, 34, 324–332. [Google Scholar]
  6. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  7. Gonçalves, J.A.; Henriques, R. UAV photogrammetry for topographic monitoring of coastal areas. ISPRS J. Photogramm. Remote Sens. 2015, 104, 101–111. [Google Scholar] [CrossRef]
  8. Burdziakowski, P.; Bobkowska, K. Accuracy of a low-cost autonomous hexacopter platforms navigation module for a photogrammetric and environmental measurements. In Proceedings of the Environmental Engineering 10th International Conference, Vilnius, Lithuania, 27–28 April 2017. [Google Scholar]
  9. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  10. Xiao, J.; Gerke, M.; Vosselman, G. Building extraction from oblique airborne imagery based on robust façade detection. ISPRS J. Photogramm. Remote Sens. 2012, 68, 56–68. [Google Scholar] [CrossRef]
  11. Wierzbicki, D.; Fryskowska, A.; Kędzierski, M.; Wojtkowska, M.; Delis, P. Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle. J. Appl. Remote Sens. 2018, 12, 015008. [Google Scholar] [CrossRef]
  12. Haala, N.; Rothermel, M. Dense multiple stereo matching of highly overlapping UAV imagery. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 387–392. [Google Scholar] [CrossRef]
  13. Niemeyer, F.; Schima, R.; Grenzdörffer, G. Relative and absolute Calibration of a multihead Camera System with oblique and nadir looking Cameras for a UAS. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2013, 2, 287–291. [Google Scholar] [CrossRef]
  14. Mostafa, M.M.R.; Schwarz, K.-P. A multi-sensor system for airborne image capture and georreferencing. Photogramm. Eng. Remote Sens. 2000, 66, 1417–1423. [Google Scholar]
  15. Tommaselli, A.M.G.; Galo, M.; de Moraes, M.V.A.; Marcato, J., Jr.; Caldeira, C.R.T.; Lopes, R.F. Generating Virtual Images from Oblique Frames. Remote Sens. 2013, 5, 1875–1893. [Google Scholar] [CrossRef] [Green Version]
  16. Jhan, J.P.; Li, Y.T.; Rau, J.Y. A modified projective transformation scheme for mosaicking multi-camera imaging system equipped on a large payload fixed-wing UAS. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W2, 87–93. [Google Scholar] [CrossRef]
  17. Grenzdörffer, G.; Niemeyer, F.; Schmidt, F. Development of Four Vision Camera System for a Micro-UAV. In Proceedings of the XXII ISPRS Congress, Melbourne, Australia, 25 August–1 September 2012; pp. 369–374. [Google Scholar]
  18. Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A.M.; Noardo, F.; Spanò, A. UAV photogrammetry with oblique images: First analysis on data acquisition and processing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 835–842. [Google Scholar] [CrossRef]
  19. Lin, Y.; Jiang, M.; Yao, Y.; Zhang, L.; Lin, J. Use of UAV oblique imaging for the detection of individual trees in residential environments. Urban For. Urban Green. 2015, 14, 404–412. [Google Scholar] [CrossRef]
  20. Holtkamp, D.J.; Goshtasby, A.A. Precision registration and mosaicking of multicamera images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3446–3455. [Google Scholar] [CrossRef]
  21. Ritchie, G.L.; Sullivan, D.G.; Perry, C.D.; Hook, J.E.; Bednarz, C.W. Preparation of a low-cost digital camera system for remote sensing. Appl. Eng. Agric. 2008, 24, 885–894. [Google Scholar] [CrossRef]
  22. Rau, J.Y.; Jhan, J.P.; Li, Y.T. Development of a large-format uas imaging system with the construction of a one sensor geometry from a multicamera array. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5925–5934. [Google Scholar] [CrossRef]
  23. Schneider, J.; Läbe, T.; Förstner, W. Incremental real-time bundle adjustment for multi-camera systems with points at infinity. ISPRS Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, W2. [Google Scholar] [CrossRef]
  24. Detchev, I.; Mazaheri, M.; Rondeel, S.; Habib, A. Calibration of multi-camera photogrammetric systems. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-1, 101–108. [Google Scholar] [CrossRef]
  25. Detchev, I.; Habib, A.; Mazaheri, M.; Melia, A. Long Term Stability Analysis for a Multi-Camera Photogrammetric System. In Proceedings of the 2015 ASPRS Annual Conference, Tampa, FL, USA, 4–8 May 2015. [Google Scholar]
  26. Brunn, A.; Meyer, T. Calibration of a Multi-Camera Rover. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 445–452. [Google Scholar] [CrossRef]
  27. Habib, A.; Detchev, I.; Kwak, E. Stability analysis for a multi-camera photogrammetric system. Sensors 2014, 14, 15084–15112. [Google Scholar] [CrossRef] [PubMed]
  28. Taha, Z.; Tang, Y.R.; Yap, K.C. Development of an onboard system for flight data collection of a small-scale UAV helicopter. Mechatronics 2011, 21, 132–144. [Google Scholar] [CrossRef]
  29. Novelty RPAS Ogar MK 2. 2017. Available online: http://noveltyrpas.com/ogar-mk2/ (accessed on 25 August 2017).
  30. Hu, H.; Zhu, Q.; Du, Z.; Zhang, Y.; Ding, Y. Reliable spatial relationship constrained feature point matching of oblique aerial images. Photogramm. Eng. Remote Sens. 2015, 81, 49–58. [Google Scholar] [CrossRef]
  31. Jiang, S.; Jiang, W.; Huang, W.; Yang, L. UAV-Based Oblique Photogrammetry for Outdoor Data Acquisition and Offsite Visual Inspection of Transmission Line. Remote Sens. 2017, 9, 278. [Google Scholar] [CrossRef]
  32. Jiang, S.; Jiang, W. Efficient structure from motion for oblique UAV images based on maximal spanning tree expansion. ISPRS J. Photogramm. Remote Sens. 2017, 132, 140–161. [Google Scholar] [CrossRef]
  33. GoPro 2016. Gopro Hero 4 Black User Manual. Available online: http://cbcdn2.gp-static.com/uploads/product_manual/file/490/UM_H4Black_ENG_REVA_WEB.pdf (accessed on 3 January 2017).
  34. Zeisl, B.; Georgel, P.F.; Schweiger, F.; Steinbach, E.G.; Navab, N.; Munich, G. Estimation of Location Uncertainty for Scale Invariant Features Points. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 7–10 September 2009; pp. 1–12. [Google Scholar]
  35. Habib, A.; Pullivelli, A.; Mitishita, E.; Ghanma, M.; Kim, E.M. Stability analysis of low-cost digital cameras for aerial mapping using different georeferencing techniques. Photogramm. Rec. 2006, 21, 29–43. [Google Scholar] [CrossRef]
  36. Balletti, C.; Guerra, F.; Tsioukas, V.; Vernier, P. Calibration of action cameras for photogrammetric purposes. Sensors 2014, 14, 17471–17490. [Google Scholar] [CrossRef] [PubMed]
  37. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. International Archives of Photogrammetry. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272. [Google Scholar]
  38. Hastedt, H.; Luhmann, T. Investigations on the quality of the interior orientation and its impact in object space for UAV photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 321. [Google Scholar] [CrossRef]
  39. Kedzierski, M.; Fryskowska, A. Precise method of fisheye lens calibration. In Proceedings of the International Society for Photogrammetry and Remote Sensing (ISPRS) Congress, Beijing, China, 3–11 July 2008; pp. 765–768. [Google Scholar]
  40. Fryer, J.G. Camera calibration. In Close Range. Photogrammetry and Machine Vision; Atkinson, K.B., Ed.; Whittles Publishing: Caithness, UK, 1996; pp. 156–179. [Google Scholar]
  41. Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sen. 2016, 115, 37–46. [Google Scholar] [CrossRef]
  42. Fraser, C.S. Automatic camera calibration in close range photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388. [Google Scholar] [CrossRef]
  43. Tsai, R.Y. An efficient and accurate camera calibration technique for 3D machine vision. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, USA, 22–26 June 1986; pp. 364–374. [Google Scholar]
  44. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef]
  45. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  46. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  47. Yin, W.S.; Luo, Y.L.; Li, S.Q. Camera calibration based on OpenCV. Comput. Eng. Des. 2007, 28, 197–199. [Google Scholar]
  48. Wang, Y.M.; Li, Y.; Zheng, J.B. A camera calibration technique based on OpenCV. In Proceedings of the 3rd International Conference on Information Sciences and Interaction Sciences (ICIS), Chengdu, China, 23–25 June 2010; pp. 403–406. [Google Scholar] [CrossRef]
  49. De la Escalera, A.; Armingol, J.M. Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration. Sensors 2010, 10, 2027–2044. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Douterloigne, K.; Gautama, S.; Philips, W. Fully automatic and robust UAV camera calibration using chessboard patterns. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 2, pp. 551–554. [Google Scholar]
  51. Jhan, J.P.; Rau, J.Y.; Huang, C.Y. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS. ISPRS J. Photogramm. Remote Sens. 2016, 114, 66–77. [Google Scholar] [CrossRef]
  52. Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. In Proceedings of the Fourth International Conference on Computer Vision Theory and Applications, Lisboa, Portugal, 5–8 February 2009. [Google Scholar]
  53. Solem, J.E. Programming Computer Vision with Python: Tools and Algorithms for Analyzing Images; O’Reilly Media. Inc.: Sebastopol, CA, USA, 2012. [Google Scholar]
  54. Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; 655p. [Google Scholar]
  55. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  56. Hartley, R.I. In Defense of the Eight-Point Algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593. [Google Scholar] [CrossRef]
  57. Seedahmed, G.H. Direct retrieval of exterior orientation parameters using a 2D projective transformation. Photogramm. Rec. 2006, 21, 211–231. [Google Scholar] [CrossRef]
  58. Remondino, F.; Börlin, N. Photogrammetric calibration of image sequences acquired with a rotating camera. In Proceedings of the ISPRS Working Group V/1, Panoramic Photogrammetry Workshop, Dresden, Germany, 19–22 February 2004; Volume 34. No. 5/W16. [Google Scholar]
  59. Cho, W.; Schenk, T. Resampling Digital Imagery to Epipolar Geometry. IAPRS Int. Arch. Photogramm. Remote Sens. 1992, 418, 404–408. [Google Scholar]
  60. Redzuwan, R.; Radzi, N.A.M.; Din, N.M.; Mustafa, I.S. Affine versus projective transformation for SIFT and RANSAC image matching methods. In Proceedings of the International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 19–21 October 2015; pp. 447–451. [Google Scholar] [CrossRef]
  61. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  62. Hastedt, H.; Ekkela, T.; Luhmann, T. Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in UAV Photogrammetry. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 851–859. [Google Scholar] [CrossRef]
  63. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 5695–5701. [Google Scholar]
  64. Zhao, J.; Zhou, H.J.; Men, G.Z. A method of sift feature points matching for image mosaic. In Proceedings of the International Conference on Machine Learning and Cybernetics, Hebei, China, 12–15 July 2009; Volume 4, pp. 2353–2357. [Google Scholar] [CrossRef]
  65. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73. [Google Scholar] [CrossRef]
  66. Wang, C.; Liu, X.; Zhao, X.; Wang, Y. An Effective Correction Method for Seriously Oblique Remote Sensing Images Based on Multi-View Simulation and a Piecewise Model. Sensors 2016, 16, 1725. [Google Scholar] [CrossRef] [PubMed]
  67. Pan, X.; Zhao, X.; Gao, D.; Li, X. A multi-core parallel mosaic alorithm for multi-view UAV images. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 845–850. [Google Scholar] [CrossRef]
Figure 1. The multirotor platform adapted to carry the multi-camera system.
Figure 2. Five cameras GoPro 4 Hero Black.
Figure 3. Imaging geometry for UAV multi-camera imaging system.
Figure 4. An illustration of epipolar geometry [53,54]. P—a 3D point; p0, p1—the projections of point P onto the image planes (normalized image coordinates); C1, C2—the baseline between the two camera centers; e1, e2—epipolar points; l1, l2—epipolar lines.
Figure 5. The scheme of rectification of oblique images.
Figure 6. Workflow of the proposed raw image mosaicking.
Figure 7. The graph with the distortion function of the cameras.
Figure 8. (a) The original image with distortion; (b) Undistorted image.
Figure 9. The distribution of matching points (using the best correspondences) for images acquired by every camera.
Figure 10. The correction results after relative orientation and geometric correction of two stereograms.
Figure 11. The final mosaicking result from five cameras.
Table 1. Technical specification of GoPro 4 Hero Black camera.
Item | Description
Size [mm] | 41 × 59 × 30
Weight [g] | 88
Optical sensor type | CMOS
Digital Video Format | H.264
Nominal focal length [mm] | 3
Image Recording Format | JPEG
Max Video Resolution | 3840 × 2160
Effective Photo Resolution | 12.0 MP
Sensor size [mm] | 6.16 × 4.62
Pixel pitch [µm] | 1.55
Sensor width [mm] | 4.19
Sensor height [mm] | 2.36
Table 2. Calculated GSD and FOV for GoPro action camera in Nadir.
Flight Height [m] | GSD [m] | HFOV Nadir [m] | VFOV Nadir [m]
50 | 0.029 | 77.61 | 43.63
75 | 0.043 | 116.42 | 65.44
100 | 0.057 | 155.23 | 87.26
125 | 0.072 | 194.04 | 109.07
150 | 0.086 | 232.84 | 130.89
175 | 0.100 | 271.65 | 152.70
200 | 0.115 | 310.46 | 174.52
Note: horizontal field of view (HFOV); vertical field of view (VFOV).
Table 3. Calibration results for five cameras for 2.7 K Video mode GoPro4 Hero Black.
Parameter | CAM 1 (Mean Value, σ) | CAM 2 (Mean Value, σ) | CAM 3 (Mean Value, σ) | CAM 4 (Mean Value, σ) | CAM 5 (Mean Value, σ)
ck [mm] | 2.70, 0.015 | 2.70, 0.001 | 2.77, 0.025 | 2.72, 0.041 | 2.79, 0.019
x0 [mm] | 0.144, 0.074 | 0.150, 0.003 | 0.145, 0.050 | 0.130, 0.041 | −0.297, 0.008
y0 [mm] | −0.056, 0.007 | 0.024, 0.005 | 0.096, 0.051 | 0.069, 0.076 | 0.025, 0.107
k1 | 4.56 × 10−4, 2.31 × 10−6 | 4.59 × 10−4, 3.72 × 10−8 | 4.62 × 10−4, 2.26 × 10−6 | 4.61 × 10−4, 1.36 × 10−6 | 4.56 × 10−4, 1.42 × 10−6
k2 | 2.70 × 10−7, 1.87 × 10−8 | 2.89 × 10−7, 9.72 × 10−10 | 2.65 × 10−7, 1.98 × 10−8 | 2.57 × 10−7, 1.21 × 10−8 | 2.80 × 10−7, 1.18 × 10−8
k3 | 3.86 × 10−11, 3.98 × 10−11 | 1.75 × 10−11, 2.73 × 10−12 | 3.10 × 10−11, 4.88 × 10−11 | 1.06 × 10−10, 2.85 × 10−11 | 3.51 × 10−11, 2.60 × 10−11
k4 | −3.27 × 10−2, 2.72 × 10−3 | −9.10 × 10−2, 3.12 × 10−3 | −1.99 × 10−2, 1.09 × 10−3 | −1.17 × 10−2, 2.93 × 10−3 | −8.30 × 10−2, 1.32 × 10−3
p1 | 6.00 × 10−5, 3.49 × 10−5 | −3.46 × 10−5, 1.00 × 10−6 | 9.87 × 10−5, 3.29 × 10−5 | 2.91 × 10−5, 2.44 × 10−5 | 5.68 × 10−5, 1.30 × 10−5
p2 | 3.46 × 10−6, 4.85 × 10−6 | −6.34 × 10−5, 2.54 × 10−6 | −3.42 × 10−4, 3.09 × 10−5 | 6.04 × 10−5, 5.00 × 10−5 | −1.85 × 10−4, 6.49 × 10−5
Reprojection error [pix] | 0.29 | 0.24 | 0.16 | 0.34 | 0.26
Table 4. The number of raw matches for stereograms.
Stereograms | Cam1 and Cam2 | Cam2 and Cam3 | Cam3 and Cam4 | Cam4 and Cam5
Raw matches | 2497 | 3397 | 3246 | 4319
Table 5. Results of matching images from individual cameras.
Cameras | Inliers (RANSAC) | Fundamental Matrix Error
Cam1 and Cam2 | 249 | −0.002334
Cam2 and Cam3 | 332 | −0.000219
Cam3 and Cam4 | 380 | 0.000074
Cam4 and Cam5 | 682 | 0.032159
Table 6. Results from multi-camera matching.
Image Pairs | RMSEx [pix] | RMSEy [pix] | Total RMSExy [pix]
Cam1 and Cam2 | 1.67 | 1.32 | 2.13
Cam2 and Cam3 | 2.48 | 2.88 | 3.80
Cam3 and Cam4 | 2.59 | 2.09 | 3.32
Cam4 and Cam5 | 2.58 | 2.28 | 3.45
