Article

The Influence of UAV Altitudes and Flight Techniques in 3D Reconstruction Mapping

by Muhammad Hafizuddin Zulkifli 1 and Khairul Nizam Tahar 2,*
1 Syarikat Jurukur Jauhari, No. 7, Jalan Teluk Baharu 4, Tanjung Lumpur, Kuantan 26060, Pahang, Malaysia
2 School of Surveying Science and Geomatics, College of Built Environment, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
* Author to whom correspondence should be addressed.
Drones 2023, 7(4), 227; https://doi.org/10.3390/drones7040227
Submission received: 16 February 2023 / Revised: 13 March 2023 / Accepted: 16 March 2023 / Published: 24 March 2023
(This article belongs to the Special Issue Wireless Networks and UAV)

Abstract

Occasionally, investigating an accident is time-consuming, further compounding traffic congestion. This study aims to reconstruct a 3D model of an accident scene using an unmanned aerial vehicle (UAV). Several flight parameters were tested to check their accuracy and differences against site measurement data; the flight techniques selected were the point of interest (POI) and waypoint techniques. These flight designs can produce a 3D model good enough to achieve the study objectives. All parameters were tested for accuracy based on the root mean square error (RMSE) obtained by comparing the UAV data with the site measurement data, across five types of processing with different flight parameters. The POI technique achieved an optimal result with centimeter-level accuracy. Furthermore, using UAVs can speed up decision-making, especially in data acquisition, and offers reliable accuracy for specific applications. This study is useful for accident investigation teams seeking to expedite their data collection process.

1. Introduction

Photogrammetry is an engineering discipline heavily influenced by computer science and electronics developments. The ever-increasing use of computers has had and will continue to have a great impact on photogrammetry. The discipline, similar to many others, is constantly changing. This becomes especially evident in the shift from analogue to analytical and digital methods [1]. There is always a technological gap between the latest findings in research and the implementation of research results in manufactured products and between the manufactured product and its general use in an industrial process. In that sense, photogrammetric practice is an industrial process involving several organizations. Inventions will likely be associated with research organizations, such as universities, research institutes, and industry research departments. Developing a product based on research results is the second phase and is carried out, for example, by companies manufacturing photogrammetric equipment. There are many similarities between research and development, but the major difference is that the research results are unknown beforehand. Development goals, however, are accurately defined in terms of product specifications, time, and cost. The third partner in the chain is the photogrammetrists, who use the instruments and methods and give valuable feedback to researchers and developers [2].
Unmanned aerial vehicles (UAVs) are the latest technology in photogrammetry and are useful in data collection. UAVs do not require any pilots onboard during image acquisition. The aerial images can simply be captured using specific apps containing all the parameters the user requires. UAVs comprise many electronic parts, including the positioning sensor, to allow accurate data to be obtained. They are also useful in many applications across disciplines, such as construction, engineering, maintenance, building monitoring, land use changes, plant disease assessment, etc. They can access difficult areas that humans cannot access, especially in hilly, steep, and sloped areas [3].
UAVs are able to capture images in vertical (nadir) or oblique mode based on the user’s specific requirements. Both modes have their advantages and disadvantages, depending on the desired outcomes. UAV images can be used to produce many kinds of 3D models for any application, such as 3D building models, 3D terrain models, 3D visualizations of objects, etc. [4]. A UAV requires only a small space for take-off, hovering, and landing, making it useful for any challenging project involving limited space.
UAVs are quickly becoming one of the most exciting research fields (Deepanshu, 2021). Applications that require 3D models of real-time scenes can use UAVs to produce them. Engineers can make better decisions with real-time construction site details, particularly regarding resource scheduling and planning. There are a variety of techniques to collect on-site data. One of the most common is to use sensors, such as the global positioning system (GPS), geographic information systems (GIS), and radio frequency identification (RFID), to detect real-time environmental and structural changes and to record the positions of field equipment or personnel [5].
UAVs have been used in hazardous or dynamic construction situations that are difficult for workers to access. They play two roles: autonomous operation and on-site data collection. For the former, a UAV fitted with an automatic actuator assembles a modular building frame made of wood, concrete, steel, or masonry to increase the productivity of manual labor. In the construction industry, UAVs with cameras have also been widely used as surveillance instruments [6].
UAV technology is also used to extract online and offline traffic parameters from video data using vision processing methods to improve traffic surveillance and monitoring mechanisms. Since UAV technology has been used for civilian applications, the prime focus has been on using it for vehicle detection, tracking, and extracting traffic flow parameters, such as speed and density [7].
Using drones for accident reconstruction significantly reduces the time needed to collect sufficient data to map the scene for an investigation [8,9,10,11,12]. In older approaches, investigators used pencils and measuring tapes, manually identifying skid marks and important parts of the scene to reconstruct the cause of the accident. This reconstruction could take anywhere from six to eight hours, meaning long delays for the roadway on which the accident occurred. Even though, more recently, laser scanners have been used to help speed up the manual approach, accident mapping can still take two to three hours to complete. Previous studies compared the following factors [13]:
(a) the accuracy of the corrected camera positions,
(b) the average error of the camera locations computed in the photo-alignment and optimization process,
(c) the models’ geo-referencing errors via nine GCPs based on four scenarios, and
(d) root mean square (RMS) errors in the Z-direction for different surface types (i.e., roads, shadows, shrubs, boulders, trees, and ground).
UAVs can help capture data and record all information on the accident scene’s surrounding area instead of just the accident scene itself. It is very important to simulate the before and after conditions of the accident scene area, especially in challenging terrains with bushes, trees, and infrastructure facilities (Chiabrando, 2017). UAVs are capable of video recording, photography, evidence search, and site examinations that are difficult for humans to reach (Amin et al., 2020). UAVs can record small details and retrieve evidence that investigators might miss [14].
The scene’s photogrammetric reconstruction starts with image acquisition, where the images must meet particular requisites in terms of object coverage and the spatial distribution of camera stations, which should follow an ad hoc geometry. Extracting 3D coordinates requires the same scene to be visible from at least two points of view, and consecutive images must overlap. Accident reconstruction (AR) is the term used for the 3D mapping of traffic accident scenes. The first step in AR is characterizing the accident dimensions accurately. The final AR output is a computer-aided drawing, which can be developed into a 3D model or computer animation. The first things that must be established in this process are the 3D reconstruction workflow and the accident scene energy analysis [15].
The data acquisition must follow two protocols: the parallel protocol and the convergent protocol. Image pre-processing is required because the light condition at the time of the accident determines the presence of shadows, texture, and highly specular surfaces across the scene. The purpose is to homogenize the captured images for 3D reconstruction, improving key point extraction and matching. The result is an algorithm invariant to scale, rotation, and movement between images [16]. Moreover, the advantage of localization (using a geodetic total station) is that it is possible to create the scene’s 3D profile, including its surroundings, for process analysis and to view the conditions involved in the accident.
In this case study, the accident scene’s situation makes conventional methods irrelevant because they are too time-consuming. Additionally, the scene’s condition and the intersection shape were impaired, making it hard to use conventional methods [17]. An advanced method was used, involving a geodetic total station to localize the scene area. The localization made it possible to create the accident scene’s 3D profile, including the surroundings, which can be used for process analysis and to view the conditions involved in the accident. Reference [18] stated that the advantage of UAV images over a rectified image is that they can capture a larger accident scene or provide an overview of the whole scene rather than just limited areas, such as those used for viewing collision scenes or in difficult terrain (fields of grain, bushes, trees).
The geodetic total station was used to localize the accident scene, which took 40 min. The rectified image was applied instead of a UAV image because one of the accidents happened under a railway bridge, where UAV imaging was impossible. The documentation took less time using the advanced method and produced the best result with centimeter-level accuracy [19,20,21]. UAVs can be fitted with different types of cameras and camcorders. When an accident happens, it is possible to get photographic images or video recordings of the accident scene immediately. Additionally, documentation is possible even after the removal of vehicles from the scene, provided traces of the scene and the vehicles’ final positions are appropriately recorded.
There are multiple applications of UAVs at an investigation scene, such as photography, videotaping, searching for evidence, safety assessment, and the examination of sites that are hard to reach. Before investigators visit the scene, UAVs can be used to examine it while minimizing contamination [22]. Previously, capturing images from an aerial view involved climbing the ladder of a fire truck, shooting from a high-rise building, or using aircraft, which are costly and require professional handling. UAVs can also be used in forensics to obtain high-quality real-time scene images. UAVs shorten the period of road closures, thus avoiding traffic congestion, because they quickly assess the scene. Furthermore, UAVs reduce the cost of crime scene investigation.
Lastly, UAVs can access and retrieve evidence from accident areas inaccessible to investigators [23]. Recent advanced methods, such as laser scanning and Lidar, are expensive and require professionals to handle the instruments. Any mistakes made can be costly and require a restart of the process. Current methods, such as total station and laser scanning, also require road closures for safety. This study aims to reconstruct accident scenes using UAV photogrammetry.
The previous study used the POI technique with a flight altitude and scene radius of 15 m. An operator captured the images manually every two seconds while the drone flew around the accident scene at a horizontal speed of 1.0 m/s. The image overlap was about 90%. The first step was flight planning and simulating a mapping mission. Flight planning was important to avoid any mishaps during the work process. Flight path safety was ensured by confirming there were no obstacles along the path. Reference [24] discusses aerial photogrammetry and photo-interpretation and various types of aerial photographic cameras in detail, along with the implications of flight height, photographic orientation, and view angle for aerial photographic products. For a given focal length, the higher the camera is, the larger the area covered by each aerial photo. The scale of aerial photographs taken at higher altitudes will be smaller than that of those taken at lower altitudes. However, photographs taken at higher altitudes are more severely affected by the atmosphere.
Nowadays, UAVs are very useful because rapid technological advances keep increasing their efficiency. Forensic teams or other agencies investigating an accident scene require a 3D model with texture mapping to reconstruct the scene digitally. Accuracy is important to determine the cause and effect of the accident [9,25,26,27]. This study investigates the accuracy of 3D reconstruction mapping using different altitudes and flight techniques.

2. Materials and Methods

This study used a UAV to reconstruct a 3D map and produce a 3D model of an accident scene. The ground control points (GCPs) were surveyed with GPS/GNSS using the real-time kinematic (RTK) technique. Each GCP has at least 10 observational data points for each epoch, and there must be at least 2 epochs. The RTK observation technique was based on a circular produced by the Department of Survey and Mapping. This confirmed that the GCP accuracy was within the allowable tolerance. Each measurement at the scene should be monitored for obstructions to avoid potential problems.
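As a rough illustration of how two (or more) RTK epochs of at least ten observations each can be combined and checked against a tolerance, the sketch below averages the observations per epoch and compares the epoch means. The coordinate values, the 2 cm tolerance, and the helper name are hypothetical; they only illustrate the idea, not the Department of Survey and Mapping procedure.

```python
import numpy as np

def combine_rtk_epochs(epochs, tolerance_m=0.02):
    """Average RTK epochs for one GCP and check that the epochs agree.

    epochs: list of (n_obs, 3) arrays of local x, y, z coordinates in metres,
            one array per epoch (here: at least 2 epochs of at least 10 observations).
    tolerance_m: hypothetical allowable disagreement between epoch means.
    """
    epoch_means = np.array([np.mean(epoch, axis=0) for epoch in epochs])
    spread = np.ptp(epoch_means, axis=0)            # max - min per coordinate component
    if np.any(spread > tolerance_m):
        raise ValueError(f"Epoch means disagree by {spread} m (> {tolerance_m} m)")
    return epoch_means.mean(axis=0)                 # final GCP coordinate

# Example with two epochs of ten simulated observations around the same point
rng = np.random.default_rng(0)
truth = np.array([1000.000, 2000.000, 32.879])      # hypothetical local coordinates (m)
epochs = [truth + rng.normal(0.0, 0.005, size=(10, 3)) for _ in range(2)]
print(combine_rtk_epochs(epochs))
```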
This study used Agisoft Metashape software (version 1.8.5) to process all images acquired by the UAV. The following steps were followed during software processing to obtain the 3D model: align the photos, build a dense cloud, build a mesh, and build a texture. Before starting the operation, the photos to be used as the source for 3D reconstruction were selected. After making a detailed choice of images, the photos were loaded into the software.
The first phase is data acquisition, which involves flight planning and setting specific parameters and techniques. Then, the data were processed using Agisoft to produce a 3D model of the accident scene. The process starts with aligning the captured photos. Then, all GCPs were inserted, followed by optimizing the cameras and building the dense cloud, mesh, texture, DEM, and orthomosaic. Completing these steps with the right parameters results in a 3D model. The final phase is data analysis to evaluate the 3D model and determine the accuracy across altitudes and flight techniques. This study determined the accuracy by comparing the actual values measured in the field to those obtained from Agisoft. Using these techniques allowed us to compare and determine the best RMSE results.
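Agisoft Metashape also exposes this processing chain through its Python API. The sketch below mirrors the workflow described above (align photos, optimize cameras, dense cloud, mesh, texture, DEM, orthomosaic); it assumes the Metashape 1.8 Python module and a licensed installation, the image folder and project paths are placeholders, and the parameter choices follow the settings reported later in this section rather than any definitive recipe.

```python
import glob
import Metashape  # Agisoft Metashape 1.8 Python API (assumed available under license)

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("uav_images/*.JPG"))          # placeholder image folder

# Alignment: "high" accuracy corresponds to downscale=1, generic pair preselection
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()
chunk.optimizeCameras()                                  # refine camera parameters

# Dense cloud: high quality (downscale=2) with mild depth filtering
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

# Mesh and texture
chunk.buildModel(source_data=Metashape.DenseCloudData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.HighFaceCount)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending)

# DEM and orthomosaic from the dense cloud / elevation surface
chunk.buildDem(source_data=Metashape.DenseCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

doc.save("accident_scene.psx")                           # placeholder project path
```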

2.1. Data Acquisition (UAV Image for Car Accident Scene)

The quality of the 3D representation depends on the raw data acquired during data acquisition at the accident site. Safety factors, such as take-off and landing areas and the determination of flying height, are very important during image acquisition using UAVs. The highest object at the accident scene should be assessed to ensure there are no obstacles or disturbances during the flight. The area’s terrain profile must also be considered to determine the safety of UAV flight missions. There are many UAV flight mission apps on the market. This study used the Altizure and DJI GO 4 apps to design the point of interest (POI) and waypoint flight missions. The altitude for image acquisition was about 15 m, and the radius from the accident scene’s center point was also 15 m. The camera was triggered automatically every 2 s during the flight mission, with a horizontal speed of 1 m/s; the resulting overlap is about 90%. Figure 1 illustrates the POI technique during image acquisition. In any mapping mission, flight planning is crucial to avoid mishaps during image acquisition.
This study used the Altizure app to capture vertical photographs from the nadir angle. The images were captured autonomously with the help of the DJI SDK. The flight paths were checked to ensure there were no obstacles. Figure 2 shows the 30 m × 25 m coverage of the accident scene area. The overlap was about 80 to 90% using the programmed flight path. Strong winds and bad weather conditions should be avoided during flight missions. The waypoint technique used the grid pattern concept to eliminate systematic errors and obtain highly overlapped stereo pairs. The flying height was approximately 15 m, and the ground sampling distance was calculated to be about 6 mm. The image footprint size on the ground was about 25.6 m × 19.2 m.
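The footprint, ground sampling distance, and forward overlap quoted above can be reproduced from basic camera geometry. The sketch below assumes a small-format camera similar to a DJI Phantom 4-class sensor (focal length ≈ 3.61 mm, 1/2.3″ sensor ≈ 6.17 mm × 4.63 mm, 4000 × 3000 pixels) and that the short image side points along track; these camera parameters are assumptions, since this section does not state the exact platform.

```python
# Back-of-envelope flight geometry check. Camera parameters below are assumed
# (Phantom 4-class sensor); altitude, speed, and interval come from this study.
focal_mm = 3.61                          # assumed focal length
sensor_w_mm, sensor_h_mm = 6.17, 4.63    # assumed 1/2.3" sensor dimensions
img_w_px, img_h_px = 4000, 3000          # assumed image resolution
altitude_m = 15.0                        # flying height used in this study
speed_mps = 1.0                          # horizontal speed
interval_s = 2.0                         # capture interval

scale = altitude_m / (focal_mm / 1000.0)           # photo scale number
footprint_w = sensor_w_mm / 1000.0 * scale         # across-track footprint (m)
footprint_h = sensor_h_mm / 1000.0 * scale         # along-track footprint (m), short side assumed along track
gsd_mm = footprint_w * 1000.0 / img_w_px           # ground sampling distance (mm/pixel)
forward_overlap = 1.0 - (speed_mps * interval_s) / footprint_h

print(f"footprint ≈ {footprint_w:.1f} m x {footprint_h:.1f} m")   # ~25.6 m x 19.2 m
print(f"GSD ≈ {gsd_mm:.1f} mm/pixel")                              # ~6 mm
print(f"forward overlap ≈ {forward_overlap:.0%}")                  # ~90%
```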
Accident scene reconstruction is a systematic practice of investigating, analyzing, and drawing conclusions about the origins and sequence of events in a traffic incident. Reconstruction is used to perform in-depth collision analysis to ascertain the cause and contributing factors of the crash. For this operational evaluation, the focus was primarily on the investigation activities performed at the crash scene.
Crash scene reconstruction typically requires images of the scene from many angles to capture all relevant aspects, including the vehicles at their final rest position, evidence of the impact area, collision debris distribution, road evidence, the operator’s and witness’s views, and vehicle damage. These photographs can also be used to create scaled diagrams of the scene, model objects, and measure various distances. Traditionally, taking these photographs involves law enforcement personnel performing on-scene investigations, which consumes time and may expose them to secondary collisions. Many technologies, including UAV and GNSS, have been utilized to reduce the clearance time after a crash and personnel’s exposure to secondary collisions.

2.2. Data Processing

This study used Agisoft Metashape to process all images acquired from the UAV platform, following these steps: align the photos, build a dense cloud, build a mesh, and build texture. Before starting any operation, the photos were selected to use as a source for 3D reconstruction. The main image processing stages were performed based on the master channel that this study selected. All spectral bands were processed during the orthophoto export to form an orthophoto with the same bands as the source images.
The overall procedure for imagery processing does not differ from the usual procedure for normal photos, except for the additional master channel selection step performed after adding images to the project. The sharpest spectral band, containing as many details as possible, was selected for the best results. All images of the accident scene were selected for POI data processing. A masking process was applied to all images to select the accident scene area and exclude the surrounding area during image processing. The intelligent scissors icon is the masking tool available in this software. The invert selection button was used to select the accident scene object. Once the masking process was complete, all images were loaded into the data frame. The camera position for all images was determined, and sparse point cloud models were generated. The photo alignment setting was high, and the pair preselection was generic.
Once the photo alignment stage was completed, the camera optimization stage was performed. This optimization was used to reduce errors during image processing. Then, a dense cloud was built to obtain high data redundancy, eliminate blunders, and improve the image matching accuracy in the accident scene area. The resulting dense cloud can be visualized in the data frame. Ideally, the dense cloud points were generated from two camera positions, where a parallactic angle was formed to obtain the depth value for each point. The setting used during the dense point cloud stage was high quality with mild depth filtering. The next step was the 3D model, which comprised mesh generation, refinement, and texture mapping.
Once the dense cloud stage was completed, its outcome was used to perform mesh processing. Meshing is the process of generating surfaces over the accident scene area. In theory, every three dense cloud points are connected to create one surface, and this process is repeated for the whole set of dense cloud points. Many surfaces were therefore generated, and the outcome was a very fine and detailed surface of the accident scene area. These generated surfaces can visualize the target object’s shape. The next step is the texture mapping process, which assigns a texture to each surface of the 3D model and minimizes the misalignment between textures from different sources. The generic setting was chosen as the mapping mode for building textures.
All images from the accident scene were selected for POI data processing at each altitude (5 m, 7 m, and 10 m) and were processed separately; together with the combined processing described below, there were five processing runs. Once the photos were loaded into Metashape, they were aligned. At this stage, Metashape found each photo’s camera position and orientation and built a sparse point cloud model. High accuracy was selected for the photo alignment. When the alignment step was completed, the point cloud and estimated camera positions could be exported for processing in other software, if needed. All images were aligned and distributed according to the POI flight planning. After completing the alignment, the coordinates were imported from a text file, and the coordinate system was set to WGS 84.
This study chose the same settings when combining the POI and waypoint techniques (7 m and 10 m), processing each altitude separately, and optimized the camera alignment. The dense cloud was built using the high-quality setting with aggressive depth filtering. The surface setting for building the mesh was set to height field, which differed from the POI setting. The dense cloud was chosen as the source data, and the face count was set to high. The texture mapping mode was set to orthophoto, which also differed from the POI setting. Metashape estimated the image quality as each photo’s relative sharpness with respect to the other images in the data set; the parameter’s value was calculated based on the sharpness level of the most focused part of the picture. GCPs in the WGS84 system were inserted manually by creating markers (Figure 3). Then, the coordinate information (longitude, latitude, and altitude) was filled in. The process was repeated for all six GCPs. After inserting the GCPs, Optimize Cameras was selected to calculate the error after applying the markers to each image.
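For readers scripting this step, the sketch below shows how GCP markers with WGS84 reference coordinates can be added through the Metashape 1.8 Python API before running Optimize Cameras. It is a minimal illustration under assumptions: the project path is a placeholder, only two of the six GCPs are shown (coordinates taken from Table 1, interpreted as longitude, latitude, height), and the marker projections on individual images are still placed interactively as described above.

```python
import Metashape  # assumed Metashape 1.8 Python API, licensed installation

doc = Metashape.Document()
doc.open("accident_scene.psx")                 # placeholder project path
chunk = doc.chunk

# Reference coordinates in WGS84 (EPSG::4326); two GCPs shown as an example.
chunk.crs = Metashape.CoordinateSystem("EPSG::4326")
gcps = {
    "GCP1": (101.496230, 3.066468, 32.879),    # (longitude, latitude, height) from Table 1
    "GCP2": (101.496272, 3.066415, 32.879),
}

for name, (lon, lat, height) in gcps.items():
    marker = chunk.addMarker()
    marker.label = name
    marker.reference.location = Metashape.Vector([lon, lat, height])

# After the markers are placed on the images (done interactively in this study),
# recompute the camera parameters against the marker references.
chunk.optimizeCameras()
doc.save()
```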
The GCPs were produced to geo-reference the images during processing. A UAV has a GPS on board, but GCP post-processing gives a more accurate location for the final output. Table 1 shows the GNSS observation results for all GCPs in WGS84 coordinate system format (latitude, longitude, and elevation).
Six GCPs were established around the accident scene, using post-marks on the ground to locate the GCP points covering the accident area. A total station was used to apply the radiation method to all GCPs. The GCP point locations were crucial to ensuring the least amount of error when identifying the exact image coordinates during processing. 3D GCPs were produced because they contained elevation data along with the horizontal coordinates. The user specified the accuracy of each GCP/check point in the X, Y, and Z directions. The error in the GCP report table was the difference between the computed 3D position of the GCP and its original measured position in the X, Y, and Z directions.
The projection error was the average distance, in the images, between where the GCP/check point was marked and where it was re-projected. The verified or marked column indicated the number of images on which the GCP/check point was marked and considered for the reconstruction. The output parameters for the dense cloud and mesh were chosen according to the computer specification (medium settings). The last part of the processing was the DSM and orthomosaic. The Build DEM setting was chosen, where the resolution option specified the spatial resolution at which the DEM was generated. The dense cloud was used as the source data to build the DSM. The orthomosaic can be generated in Google Maps, TIFF, and KML formats.
This study collected and analyzed two types of data: UAV data and site measurement data using the conventional method. UAV data was obtained by making measurements using the 3D model generated from data processing. The outcome of this study was determined by comparing site measurements to UAV measurements obtained from Agisoft. The standard deviation was determined to assess the acceptable data range for this investigation. The outlier data was defined as data that fell beyond the standard deviation range and could not be included in the RMSE computation, which was used to measure the UAV product’s accuracy. The Agisoft software created the marker from the 3D model. Then, the next marker point was chosen because at least two points were needed to compute a measurement (Figure 4). The measurement must be the same as the measurement taken on site to get the best comparison so that this study can analyze and determine if our objectives have been achieved. If the point was not the same as the site measurements, it might skew the data, and our study would fail to reach its objectives.
The point location of each marker can be checked against the images in the right-hand window to ensure that the line is located in the correct position. The location was adjusted to obtain the best distance data. Each line gave the distance data generated automatically by the software. After creating the lines, the recorded distance data were compared with the actual site measurements. The data were recorded in Microsoft Excel for easy calculation. The same steps were applied to the other processing types that used different flight plans (Figure 5).
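The comparison just described (error computation, standard deviation screening of outliers, then RMSE) can be expressed compactly in code. The sketch below is a generic illustration of that procedure under one assumption about the outlier rule (samples whose error lies more than one standard deviation from the mean error are excluded); the example values are taken from the first few rows of Table 2 and stand in for the full spreadsheet.

```python
import numpy as np

def accuracy_assessment(actual_m, uav_m):
    """Screen outliers by the standard deviation of the errors, then compute RMSE."""
    actual = np.asarray(actual_m, dtype=float)
    uav = np.asarray(uav_m, dtype=float)
    errors = actual - uav
    sd = errors.std(ddof=1)
    # Assumed outlier rule: keep samples within +/- 1 standard deviation of the mean error
    keep = np.abs(errors - errors.mean()) <= sd
    rmse = np.sqrt(np.mean(errors[keep] ** 2))
    return sd, rmse, np.count_nonzero(~keep)

# Example with a few distances similar to Table 2 (illustrative subset only)
actual = [2.060, 1.600, 0.460, 0.440, 2.000]
uav    = [1.999, 1.626, 0.419, 0.503, 2.074]
sd, rmse, n_outliers = accuracy_assessment(actual, uav)
print(f"std dev = {sd:.3f} m, RMSE = {rmse:.3f} m, outliers removed = {n_outliers}")
```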

3. Results

This study’s 3D accident scene model can be used for real accidents. The results demonstrate that UAV photogrammetry produced centimeter-level accuracy for both the 3D model and the orthophoto used for visualization. For this purpose, this study established six GCPs. Since UAVs are affordable, photogrammetry offers the best alternative technique to reconstruct an accident scene. Our results show that the accident scene’s 3D model produced using UAV photogrammetry can speed up police documentation with reliable accuracy (Figure 6).
The accuracy assessment analysis used points to measure a few distance samples. The same distances were measured using tools in Agisoft Metashape, and the UAV measurements were calculated directly from the points. The standard deviation was calculated to determine the range of acceptable data for this study. Data outside the standard deviation range are known as outliers and were not used for the RMSE calculation.
The RMSE formula was used to define the accuracy assessment because it is suitable for comparing two data sets. There are five RMSE results because there are five types of processing. Table 2 shows that the RMSE of the POI technique is 0.040 m for an altitude of 5 m, 0.041 m for an altitude of 7 m, and 0.047 m for an altitude of 10 m. Table 3 shows that the RMSE when combining POI and waypoint techniques is 0.051 m for an altitude of 7 m and 0.046 m for an altitude of 10 m.
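For clarity, the RMSE reported in Tables 2 and 3 follows the standard definition, where $d_i^{\text{site}}$ is the taped site measurement, $d_i^{\text{UAV}}$ is the corresponding distance measured on the 3D model, and $n$ is the number of retained samples:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i^{\text{site}} - d_i^{\text{UAV}}\right)^{2}}$$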
The POI technique with an altitude of 5 m had the lowest RMSE value (0.040 m) compared with the other altitudes. The best technique to reconstruct the accident scene was POI at 5 m, because the differences between the actual and UAV values were very small. The factors contributing to this result were the high number of 3D densified points (8,707,567) and a clear view of the triangle mesh, which helped the data analysis. Figure 7 shows the RMSE results for all altitudes and flight techniques.
Figure 7 shows that the accuracy differences between the 5 m, 7 m, and 10 m altitudes were 1 mm to 7 mm, and the combination of POI and waypoint techniques at 7 m and 10 m altitudes gave a difference of about 5 mm. Therefore, different altitudes and combinations of flight techniques differ from one another only at the millimeter level. This study concludes that the 7 m altitude POI technique is the optimum for accident scene reconstruction. In contrast, combining POI and waypoint techniques did not improve accuracy. The standard deviation for the POI technique at a 5 m altitude is ±0.038 m, at a 7 m altitude it is ±0.037 m, and at a 10 m altitude it is ±0.046 m. Based on the standard deviation data, 7 m was the optimum altitude for accident scene reconstruction. The standard deviation for combining POI and waypoint techniques at a 7 m altitude is ±0.047 m, and at a 10 m altitude it is ±0.045 m. These standard deviation results show that combining POI and waypoint techniques did not improve the data quality. Figure 8a illustrates the error pattern for the POI technique at different altitudes, and Figure 8b illustrates the error pattern when combining POI and waypoint techniques at different altitudes.
Figure 8a shows that the R² for POI at a 5 m altitude is 0.07, at a 7 m altitude it is 0.14, and at a 10 m altitude it is 0.28. These results show that POI at a 10 m altitude has a stronger relationship than the other altitudes. Figure 8b shows the R² when combining POI and waypoint techniques: 0.14 at a 7 m altitude and 0.21 at a 10 m altitude. The results show that combining POI and waypoint techniques at a 10 m altitude provided a stronger relationship than at a 7 m altitude. Generally, the margin of error for all altitudes and technique combinations in Figure 8 is about −0.08 m to 0.08 m. Therefore, the errors are at the centimeter level for accident scene reconstruction.
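As context for these values, the sketch below shows one common way to compute R², assuming (as Figure 8 suggests) that it comes from a least-squares line fit of the measurement error against the actual distance; this is an interpretation for illustration, not a statement of the authors' exact procedure. The example values are the first few rows of the POI 10 m columns of Table 2.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * np.asarray(x) + intercept
    ss_res = np.sum((np.asarray(y) - y_hat) ** 2)
    ss_tot = np.sum((np.asarray(y) - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Example: errors from the first rows of the POI 10 m column of Table 2
actual = [2.060, 1.600, 1.600, 0.460, 0.440]
error  = [-0.070, -0.070, -0.070, 0.040, -0.060]
print(f"R^2 = {r_squared(actual, error):.2f}")
```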
This project’s final product is a 3D model produced using Agisoft Metashape; after processing, an orthophoto was also generated. Figure 9 shows the cars in the 3D model, which is similar to the actual accident scene, where two cars collided and the white car crashed into the red car.

4. Conclusions

This study used two techniques, namely POI and waypoint, at three altitudes: 5, 7, and 10 m. The 3D models developed for different altitudes and using different flight techniques showed only slight differences. This study found that 7 m is the optimum altitude for accident scene reconstruction. The best result (0.040 m) used the POI technique with an altitude of 5 m. Other good results used the POI technique with an altitude of 7 m (0.041 m), combined POI and waypoint techniques with an altitude of 10 m (0.046 m), the POI technique with an altitude of 10 m (0.047 m), and combined POI and waypoint techniques with an altitude of 7 m (0.051 m). This project produced a good 3D model of the accident scene for investigation purposes.
Additionally, UAVs can replace manual site measurements to reconstruct an accident scene, helping investigators speed up the data collection and recording process. Previous studies [19,20] also achieved centimeter-level accuracy. Another study [21] found that the difference between tape measurements of a real accident scene and a 3D accident scene generated by photogrammetric software was about 1 cm to 1.4 cm, and [9] produced a crash scene model with 3 cm accuracy. Comparing our outcomes to these previous studies, this study found that the results obtained have similar, centimeter-level accuracy.
This study has a few recommendations for future work. A future study should focus on heavy vehicles, such as lorries and buses, and on vehicles other than cars, to determine the optimal flight parameters and altitudes for different types of vehicles; there might be differences in terms of suitable techniques and accuracy. Additional UAV features, such as waterproofing, would be a good enhancement because many accidents happen while it is raining, and a normal UAV cannot fly in the rain; collecting and analyzing data under these conditions would be useful to compare the resulting accuracy. Another useful feature is a UAV with a built-in spotlight so that it is ready to fly at any time and under any conditions. Sometimes, an accident involves many vehicles, in which case GCPs are needed to cover a larger area than the one covered in this study.

Author Contributions

Conceptualization, K.N.T.; methodology, M.H.Z. and K.N.T.; software, M.H.Z.; validation, M.H.Z. and K.N.T.; formal analysis, M.H.Z. and K.N.T.; investigation, M.H.Z.; resources, K.N.T.; data curation, M.H.Z. and K.N.T.; writing—original draft preparation, M.H.Z.; writing—review and editing, K.N.T.; visualization, M.H.Z.; supervision, K.N.T.; project administration, K.N.T.; funding acquisition, K.N.T. All authors have read and agreed to the published version of the manuscript.

Funding

The Ministry of Higher Education (MOHE) is greatly acknowledged for funding under the Fundamental Research Grant Scheme (Grant No. FRGS/1/2021/WAB07/UITM/02/2). The authors also acknowledge UiTM, particularly the College of Built Environment, for providing support under YTR Grant No. 600-RMC/YTR/5/3 (004/2020) and the GPK fund (Grant No. 600-RMC/GPK 5/3 (223/2020)) and enabling the conduct of this research. The authors would also like to thank the people who were directly or indirectly involved in this research.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nex, F.; Armenakis, C.; Cramer, M.; Cucci, D.A.; Gerke, M.; Honkavaara, E.; Kukko, A.; Persello, C.; Skaloud, J. UAV in the advent of the twenties: Where we stand and what is next. ISPRS J. Photogramm. Remote Sens. 2022, 184, 215–242.
2. Chu, C. Auxiliary application of UAV oblique photogrammetry in planning completion survey. Surv. Spat. Geogr. Inf. 2020, 43, 205–208.
3. Trajkovski, K.K.; Grigillo, D.; Petrovič, D. Optimization of UAV Flight Missions in Steep Terrain. Remote Sens. 2020, 12, 1293.
4. Lu, C.; Yu, X. Construction of 3D Design Model of Urban Public Space Based on ArcGIS Water System Terrain Visualization Data. Math. Probl. Eng. 2022, 1881342, 1–11.
5. Taherdoost, H. Different Types of Data Analysis; Data Analysis Methods and Techniques in Research Projects. Int. J. Acad. Res. Manag. 2022, 9, 1–9.
6. Jiang, B.; Yang, J.; Song, H. Protecting Privacy from Aerial Photography: State of the Art, Opportunities, and Challenges. In Proceedings of the IEEE INFOCOM 2020—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Toronto, ON, Canada, 6–9 July 2020; pp. 799–804.
7. Ghamari, M.; Rangel, P.; Mehrubeoglu, M.; Tewolde, G.S.; Sherratt, R.S. Unmanned Aerial Vehicle Communications for Civil Applications: A Review. IEEE Access 2022, 10, 102492–102531.
8. Saveliev, A.; Izhboldina, V.; Letenkov, M.; Aksamentov, E.; Vatamaniuk, I. Method for automated generation of road accident scene sketch based on data from mobile device camera. Transp. Res. Procedia 2020, 50, 608–613.
9. Desai, J.; Mathew, J.K.; Zhang, Y.; Hainje, R.; Horton, D.; Hasheminasab, S.M.; Habib, A.; Bullock, D.M. Assessment of Indiana Unmanned Aerial System Crash Scene Mapping Program. Drones 2022, 6, 259.
10. Liu, Z. Research and Practice of Large-scale City Real Scene 3D Modeling Technology Based on Oblique Photography. Surv. Spat. Geogr. Inf. 2019, 42, 187–189.
11. Pádua, L.; Sousa, J.; Vanko, J.; Hruška, J.; Adão, T.; Peres, E.; Sousa, A.; Sousa, J.J. Digital Reconstitution of Road Traffic Accidents: A Flexible Methodology Relying on UAV Surveying and Complementary Strategies to Support Multiple Scenarios. Int. J. Environ. Res. Public Health 2020, 17, 1868.
12. Amin, M.A.M.; Abdullah, S.; Mukti, S.N.A.; Zaidi, M.H.A.M.; Tahar, K.N. Reconstruction of 3D accident scene from multirotor UAV platform. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2-2, 451–458.
13. Driscoll, J.O. Landscape applications of photogrammetry using unmanned aerial vehicles. J. Archaeol. Sci. Rep. 2018, 22, 32–44.
14. Zhou, T.; Lv, L.; Liu, J.; Wan, J. Application of UAV oblique photography in real scene 3D modeling. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B2-2, 413–418.
15. Dascăl, A.; Popa, M. The 3D reconstruction of a road accident used the specialized program PC Crash 12. J. Phys. Conf. Ser. 2021, 1781, 1–6.
16. Tang, W.; Jia, F.; Wang, X. Image Large Rotation and Scale Estimation Using the Gabor Filter. Electronics 2022, 11, 3471.
17. Athiappan, K.; Karthik, C.; Rajalaskshmi, M.; Subrata, C.; Dastjerdi, H.R.; Liu, Y.; Fernández-Campusano, C.; Gheisari, M. Identifying Influencing Factors of Road Accidents in Emerging Road Accident Blackspots. Adv. Civ. Eng. 2022, 2022, 1–10.
18. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A Technical Study on UAV Characteristics for Precision Agriculture Applications and Associated Practical Challenges. Remote Sens. 2021, 13, 1204.
19. Dascăl, A.; Popa, M. Possibilities of 3D reconstruction of the vehicle collision scene in the photogrammetric environment Agisoft Metashape 1.6.2. J. Phys. Conf. Ser. 2021, 1781, 1–7.
20. Morales, R.C.; Farias, E. Accuracy and Validation of 360-Degree Camera Use in Photogrammetry. SAE Tech. Pap. 2022, 0829, 1–12.
21. Topolšek, D.; Herbaj, E.; Sternad, M. The Accuracy Analysis of Measurement Tools for Traffic Accident Investigation. J. Transp. Technol. 2014, 4, 84–92.
22. Mohsan, S.A.H.; Khan, M.A.; Noor, F.; Ullah, I.; Alsharif, M.H. Towards the Unmanned Aerial Vehicles (UAVs): A Comprehensive Review. Drones 2022, 6, 147.
23. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges. IEEE Access 2019, 7, 48572–48634.
24. Srivastava, S.; Narayan, S.; Mittal, S. A survey of deep learning techniques for vehicle detection from UAV images. J. Syst. Arch. 2021, 117, 102152.
25. Galantucci, R.A.; Fatiguso, F. Advanced damage detection techniques in historical buildings using digital photogrammetry and 3D surface analysis. J. Cult. Herit. 2019, 36, 51–62.
26. Liao, Y.; Hu, Y.; Ye, T. Application of low-altitude drone oblique photography in land management. Surv. Spat. Geogr. Inf. 2019, 42, 97–100.
27. Bullock, J.L.; Hainje, R.; Habib, A.; Horton, D.; Bullock, D.M. Public Safety Implementation of Unmanned Aerial Systems for Photogrammetric Mapping of Crash Scenes. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 567–574.
Figure 1. Flight planning (POI technique).
Figure 2. Flight planning (waypoint technique).
Figure 3. Inserting GCPs.
Figure 4. Marking a two-point measurement.
Figure 5. Layout of all measurements in the Agisoft software.
Figure 6. Result of the 3D model.
Figure 7. RMSE results for all altitudes.
Figure 8. Error for each technique: (a) POI at 5 m, 7 m, and 10 m; (b) combined POI and waypoint at 7 m and 10 m.
Figure 9. 3D model from four different perspective views: (a) left, (b) right, (c) front, (d) back.
Table 1. Coordinates and elevations of the GCPs.

GCP    Latitude (Decimal Degree)    Longitude (Decimal Degree)    Altitude (m)
1      3.066468                     101.496230                    32.879
2      3.066415                     101.496272                    32.879
3      3.066365                     101.496311                    32.879
4      3.066310                     101.496250                    32.879
5      3.066374                     101.496180                    32.879
Table 2. Accuracy assessment of the POI technique at 5 m, 7 m, and 10 m.

POI Technique 5 m | POI Technique 7 m | POI Technique 10 m
Sample  Actual (m)  Software (m)  Error (m) | Sample  Actual (m)  Software (m)  Error (m) | Sample  Actual (m)  Software (m)  Error (m)
Point 1–2  2.060  1.999  0.061 | Point 1–2  2.060  1.981  0.079 | Point 1–2  2.060  2.130  −0.070
Point 3–4  1.600  1.626  −0.026 | Point 3–4  1.600  1.594  0.006 | Point 3–4  1.600  1.670  −0.070
Point 17–18  1.600  1.617  −0.017 | Point 17–18  1.600  1.598  0.002 | Point 17–18  1.600  1.670  −0.070
Point 5–6  0.460  0.419  0.041 | Point 5–6  0.460  0.430  0.030 | Point 5–6  0.460  0.420  0.040
Point 7–8  0.440  0.503  −0.063 | Point 7–8  0.440  0.480  −0.040 | Point 7–8  0.440  0.500  −0.060
Point 9–10  2.000  2.074  −0.074 | Point 9–10  2.000  2.060  −0.060 | Point 9–10  2.000  2.040  −0.040
Point 11–12  2.000  1.940  0.060 | Point 11–12  2.000  2.040  −0.040 | Point 11–12  2.000  1.960  0.040
Point 13–14  1.110  1.080  0.030 | Point 13–14  1.110  1.087  0.023 | Point 13–14  1.110  1.080  0.030
Point 15–16  0.340  0.280  0.060 | Point 15–16  0.340  0.310  0.030 | Point 15–16  0.340  0.260  0.080
Point 19–20  0.580  0.620  −0.040 | Point 19–20  0.580  0.630  −0.050 | Point 19–20  0.580  0.630  −0.050
Point 20–21  0.370  0.428  −0.058 | Point 20–21  0.370  0.414  −0.044 | Point 20–21  0.370  0.410  −0.040
Point 22–23  0.430  0.387  0.043 | Point 22–23  0.430  0.408  0.022 | Point 22–23  0.430  0.370  0.060
Point 25–26  0.640  0.618  0.022 | Point 25–26  0.640  0.609  0.031 | Point 25–26  0.640  0.610  0.030
Point 27–28  0.570  0.608  −0.038 | Point 27–28  0.570  0.604  −0.034 | Point 27–28  0.570  0.610  −0.040
Point 28–29  0.600  0.547  0.053 | Point 28–29  0.600  0.530  0.070 | Point 28–29  0.600  0.540  0.060
Point 29–30  0.320  0.298  0.022 | Point 29–30  0.320  0.280  0.040 | Point 29–30  0.320  0.270  0.050
Point 29–33  1.102  1.089  0.013 | Point 29–33  1.102  1.044  0.058 | Point 29–33  1.102  1.053  0.049
Point 32–33  1.110  1.089  0.021 | Point 32–33  1.110  1.081  0.029 | Point 32–33  1.110  1.084  0.026
Point 34–35  0.750  0.719  0.031 | Point 34–35  0.750  0.715  0.035 | Point 34–35  0.750  0.712  0.038
Point 36–37  0.520  0.487  0.033 | Point 36–37  0.520  0.486  0.034 | Point 36–37  0.520  0.498  0.022
Point 38–39  0.780  0.758  0.022 | Point 38–39  0.780  0.748  0.032 | Point 38–39  0.780  0.753  0.027
Point 40–44  0.870  0.850  0.020 | Point 40–44  0.870  0.841  0.029 | Point 40–44  0.870  0.810  0.060
Point 42–43  0.950  0.918  0.032 | Point 42–43  0.950  0.885  0.065 | Point 42–43  0.950  0.890  0.060
Point 40–46  0.660  0.604  0.056 | Point 40–46  0.660  0.596  0.064 | Point 40–46  0.660  0.599  0.061
Point 45–46  0.580  0.538  0.042 | Point 45–46  0.580  0.546  0.034 | Point 45–46  0.580  0.554  0.026
Point 47–48  0.930  0.921  0.009 | Point 47–48  0.930  0.919  0.011 | Point 47–48  0.930  0.915  0.015
Point 49–50  0.580  0.559  0.021 | Point 49–50  0.580  0.551  0.029 | Point 49–50  0.580  0.559  0.021
Point 51–53  0.580  0.552  0.028 | Point 51–53  0.580  0.546  0.034 | Point 51–53  0.580  0.549  0.031
Point 49–52  1.200  1.189  0.011 | Point 49–52  1.200  1.175  0.025 | Point 49–52  1.200  1.181  0.019
Point 53–54  0.810  0.792  0.018 | Point 53–54  0.810  0.784  0.026 | Point 53–54  0.810  0.777  0.033
RMSE  0.040 | RMSE  0.041 | RMSE  0.047
Table 3. Accuracy assessment of the combination of POI and waypoint techniques.

POI + Waypoint Technique 7 m | POI + Waypoint Technique 10 m
Sample  Actual (m)  Software (m)  Error (m) | Sample  Actual (m)  Software (m)  Error (m)
Point 1–2  2.060  1.988  0.072 | Point 1–2  2.060  2.110  −0.050
Point 3–4  1.600  1.657  −0.057 | Point 3–4  1.600  1.640  −0.040
Point 17–18  1.600  1.667  −0.067 | Point 17–18  1.600  1.640  −0.040
Point 5–6  0.460  0.410  0.050 | Point 5–6  0.460  0.430  0.030
Point 7–8  0.440  0.490  −0.050 | Point 7–8  0.440  0.520  −0.080
Point 9–10  2.000  2.080  −0.080 | Point 9–10  2.000  2.080  −0.080
Point 11–12  2.000  1.940  0.060 | Point 11–12  2.000  1.920  0.080
Point 13–14  1.110  1.067  0.043 | Point 13–14  1.110  1.060  0.050
Point 15–16  0.340  0.280  0.060 | Point 15–16  0.340  0.290  0.050
Point 19–20  0.580  0.625  −0.045 | Point 19–20  0.580  0.640  −0.069
Point 20–21  0.370  0.440  −0.070 | Point 20–21  0.370  0.420  −0.059
Point 22–23  0.430  0.364  0.066 | Point 22–23  0.430  0.390  0.049
Point 25–26  0.640  0.589  0.051 | Point 25–26  0.640  0.610  0.039
Point 27–28  0.570  0.599  −0.029 | Point 27–28  0.570  0.610  −0.040
Point 28–29  0.600  0.560  0.040 | Point 28–29  0.600  0.550  0.050
Point 29–30  0.320  0.250  0.070 | Point 29–30  0.320  0.300  0.020
Point 29–33  1.102  1.039  0.063 | Point 29–33  1.102  1.033  0.069
Point 32–33  1.110  1.088  0.022 | Point 32–33  1.110  1.097  0.013
Point 34–35  0.750  0.719  0.031 | Point 34–35  0.750  0.708  0.042
Point 36–37  0.520  0.489  0.031 | Point 36–37  0.520  0.495  0.025
Point 38–39  0.780  0.738  0.042 | Point 38–39  0.780  0.743  0.037
Point 40–44  0.870  0.820  0.050 | Point 40–44  0.870  0.830  0.040
Point 42–43  0.950  0.902  0.048 | Point 42–43  0.950  0.940  0.010
Point 40–46  0.660  0.587  0.073 | Point 40–46  0.660  0.593  0.067
Point 45–46  0.580  0.538  0.042 | Point 45–46  0.580  0.544  0.036
Point 47–48  0.930  0.912  0.018 | Point 47–48  0.930  0.923  0.007
Point 49–50  0.580  0.543  0.037 | Point 49–50  0.580  0.568  0.012
Point 51–53  0.580  0.544  0.036 | Point 51–53  0.580  0.557  0.023
Point 49–52  1.200  1.178  0.022 | Point 49–52  1.200  1.184  0.016
Point 53–54  0.810  0.788  0.022 | Point 53–54  0.810  0.774  0.036
RMSE  0.051 | RMSE  0.046
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
