Article

Identifying Tree Species in a Warm-Temperate Deciduous Forest by Combining Multi-Rotor and Fixed-Wing Unmanned Aerial Vehicles

1 Hubei Key Laboratory of Regional Ecology and Environment Change, School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
2 State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
3 Key Laboratory of Ecosystem Network Observation and Modelling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
4 College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
5 Key Laboratory of Low Altitude Geographic Information and Air Route, Civil Aviation Administration of China, Beijing 100101, China
6 Institute of Geographical Sciences, Hebei Academy of Sciences, Shijiazhuang 050011, China
7 Hebei Technology Innovation Center for Geographic Information Application, Shijiazhuang 050011, China
8 China TOPRS Technology Co., Ltd., Beijing 100039, China
9 China Aero Geophysical Survey and Remote Sensing Center for Natural Resources, Beijing 100083, China
* Authors to whom correspondence should be addressed.
Drones 2023, 7(6), 353; https://doi.org/10.3390/drones7060353
Submission received: 1 May 2023 / Revised: 21 May 2023 / Accepted: 26 May 2023 / Published: 27 May 2023
(This article belongs to the Section Drones in Agriculture and Forestry)

Abstract

Fixed-wing unmanned aerial vehicles (UAVs) and multi-rotor UAVs are widely utilized in large-area (>1 km²) environmental monitoring and small-area (<1 km²) fine vegetation surveys, respectively, as they differ in flight cost, operational efficiency, and landing and take-off methods. However, large-area fine mapping in complex forest environments remains a challenge in UAV remote sensing. Here, we developed a method that combines a multi-rotor UAV and a fixed-wing UAV to address this challenge at a low cost. First, we acquired small-area, multi-season ultra-high-resolution red-green-blue (RGB) images and large-area RGB images with a multi-rotor UAV and a fixed-wing UAV, respectively. Second, we combined reference data from visual interpretation with the multi-rotor UAV images to construct a semantic segmentation model and used the model to expand the reference data. Finally, we classified the fixed-wing UAV images using the expanded, large-area reference data combined with the semantic segmentation model and discussed the effects of different tile sizes. Our results show that combining multi-rotor and fixed-wing UAV imagery provides an accurate prediction of tree species. The model for fixed-wing images had an average F1 score of 92.93%, with 92.00% for Quercus wutaishanica and 93.86% for Juglans mandshurica. Using a larger tile size slightly improved the accuracy of the semantic segmentation model, and tile size had a greater effect on the accuracy of Quercus wutaishanica. The new method exploits the complementary characteristics of multi-rotor and fixed-wing UAVs to achieve fine mapping of large areas in complex environments. These results also highlight the potential of the synergy between multi-rotor UAVs and fixed-wing UAVs.

1. Introduction

As important terrestrial ecosystems on Earth, forests provide key ecological functions, such as biodiversity conservation, carbon sequestration and oxygen release, climate regulation, water retention, soil and water conservation, and environmental purification [1,2,3]. Forest ecosystems contribute greatly to regional and even global ecological diversity and the carbon cycle [2]. The Dongling Mountains region is one of the best-restored and best-protected areas of warm-temperate deciduous secondary forest in China and one of the most distinctive and valuable areas for research in this vegetation belt [4]. The vegetation community structure in this area is relatively complex, with typical characteristics of the northern subzone of the warm-temperate deciduous broad-leaved forest region [5].
The forests in the Dongling Mountains region have not yet fully recovered [6], and understanding the distribution of forest vegetation is important for the management and protection of forests in the area. In the past, field surveys were often used to obtain the distribution of vegetation. However, this method is not only costly but also dangerous in mountainous areas with complicated terrain. With the continuous improvement of Earth observation technology, remote sensing has become an important means of obtaining the distribution of forest vegetation and is widely utilized in forestry [7,8]. Traditional satellite data, such as Landsat and SPOT series images, are often only used for large-scale vegetation mapping because of their low spatial resolution and inability to provide sufficient spatial detail [9]. High-resolution satellite data, such as QuickBird, WorldView, and Planet images, can provide meter-level or even submeter-level data, but they suffer from poor timeliness and high cost [10,11].
Recently, as the endurance and carrying capacity of UAVs have improved, UAVs have become a new type of remote sensing observation platform and are widely employed in various fields [12,13,14,15,16]. Compared with traditional satellites, UAVs are flexible, fast, and capable of operating in complex scenarios and events [15] and can quickly obtain observation data with high temporal and spatial resolution [17]. UAV remote sensing technology has been applied to forest fire [18] and pest monitoring [19], forest structure parameter estimation [20,21,22], forest vegetation observation [23] and classification [24,25], and tree species diversity monitoring [26].
Classified by mechanical structure and flight principle, UAVs mainly comprise fixed-wing UAVs and multi-rotor UAVs. Multi-rotor UAVs have the advantages of low cost, simple operation, flexibility, and convenience, and they have low requirements for landing and takeoff environments [12,27]. Owing to their flight principle, fixed-wing UAVs can fly faster and higher than multi-rotor UAVs and offer larger payloads and longer endurance [28,29]. Traditional fixed-wing UAVs have certain requirements for takeoff and landing sites. Although vertical takeoff and landing fixed-wing UAVs have emerged in recent years [30], traditional fixed-wing UAVs still dominate due to cost. Therefore, fixed-wing UAVs are generally used for large-area (>1 km²) environmental monitoring, while multi-rotor UAVs are mainly used for small-area (<1 km²) fine vegetation surveys.
Fixed-wing UAVs and multi-rotor UAVs have been gradually used in many studies [15,27,28]. Gonçalves et al. [31] used fixed-wing and multi-rotor UAVs for the 3D reconstruction of coastal cliffs to analyze the effects of SfM-MVS processing parameters, image redundancy, and acquisition geometry at the central Portuguese coast. Wandrie et al. [32] evaluated the effectiveness of fixed-wing and multi-rotor UAVs at different flight altitudes in protecting crops from blackbird damage. Boon et al. [33] assessed the performance of fixed-wing UAVs versus multi-rotor UAVs in environmental mapping applications in terms of vegetation mapping, gully erosion characterization, and wetland slope and contours. However, most studies only compared the capabilities of fixed-wing and multi-rotor UAVs in their applications but rarely combined the characteristics of the two types of UAVs.
In this paper, we aimed to achieve large-area tree species mapping at a low cost within a forest area with high heterogeneity and complex topographic conditions. We used a low-cost multi-rotor UAV to acquire multi-seasonal ultra-high-resolution image data and derived accurate a priori data from the tree species characteristics of the region via visual interpretation; we then combined the wide-range imagery from a fixed-wing UAV with a multi-class semantic segmentation method (U-Net) to conduct large-area tree species mapping. To further explore the potential of combining fixed-wing and multi-rotor UAV imagery, we also analyzed the impact of different tile sizes on model accuracy.

2. Materials and Methods

2.1. Study Area

The study area is located in Dongling Mountain (Figure 1a), Mentougou District, Beijing, China (115°26′–115°30′ E, 40°00′–40°02′ N). The area belongs to the remnants of the Little Wutai Mountains in the Taihang Mountains, with an elevation range of 400–2303 m and an average elevation of 1100 m [34]. The area has a warm-temperate continental monsoon climate with four distinct seasons: cold, dry winters and warm, humid summers [35]. The average annual temperature in the region is 2–8 °C, and the annual accumulated temperature (≥0 °C) is 3600–3800 °C; annual precipitation is concentrated in summer, with an average of approximately 600 mm, of which 300–400 mm falls from June to August; and the frost-free period is less than 160 days [6]. The terrain is dominated by mountain erosion structures, and the soil type is mainly brown loam [4].
The vegetation is a typical warm-temperate deciduous broad-leaved secondary forest, and the community is well developed. There are many shrubs in the community, the tree layer can reach 20 m, the vertical structure is complex, and stratification is obvious. Generally, the vegetation is divided into 4 layers: the main forest layer (≥9 m), the secondary forest layer (≥4 m and <9 m), the shrub layer (≥0.5 m and <4 m), and the herbaceous layer (<0.5 m) [4]. In the forests of the Dongling Mountains region, the Beijing Forest Ecosystem Orientation Research Station has established several research plots of different sizes. In this study, we selected a 20 ha sample plot and its surrounding area in the Dongling Mountains as the study sample site (Figure 1c).

2.2. Workflow Description

The workflow of this paper is shown in Figure 2. First, we obtained accurate reference data through visual interpretation of the multi-rotor UAV images using the multi-seasonal features of the vegetation and combined them with convolutional neural networks to construct a high-precision classification model. Second, we used the high-precision predictions of the multi-rotor classification model, together with visual interpretation, to expand the sample data and built a sample set on the fixed-wing UAV images using the location information; the U-Net model was then trained on this fixed-wing sample set and used to perform large-area tree species prediction.

2.3. Data Acquisition

2.3.1. Fixed-Wing UAV Platform and Data Acquisition

On 16 August 2018, we conducted UAV flights in Dongling Mountain using a fixed-wing FEIMA F200 (SZ Feima Robotics Co., Shenzhen, China) equipped with a Sony RX1R II digital camera (Table 1). This fixed-wing UAV has a flight altitude range of 150–1500 m, a maximum endurance of 1.5 h, and a cruising speed of 60 km/h. The Sony RX1R II camera has 42.4 million effective pixels, a sensor size of 35.9 × 24.0 mm, and a focal length of 35 mm.
Since fixed-wing UAV flights are subject to local laws and regulations, we applied for and received airspace approval from the local air traffic control department. It was difficult to find a site suitable for a fixed-wing rolling takeoff in the mountainous area, such as a flat open space or a wide road surface, so we launched the fixed-wing UAV by hand. The flight was conducted in relatively clear weather with wind speeds below 8 m/s and at an altitude of 800 m. Both the frontal overlap and the side overlap of the images were set to 80% to ensure mapping without gaps or blind spots. The images acquired during the fixed-wing flight were processed with Agisoft PhotoScan. After a series of operations, such as image matching and dense point cloud generation, we derived an orthomosaic covering 10.8 km² with a resolution of 10 cm (Figure 1b and Figure 3a).
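As a rough cross-check of the reported 10 cm resolution, the ground sampling distance (GSD) can be estimated from the camera geometry described above; this is only an illustrative calculation, and the image width of 7952 pixels for the Sony RX1R II is an assumption not stated in the text.

```python
# Minimal ground-sampling-distance (GSD) estimate for the fixed-wing flight.
# Assumed value: a Sony RX1R II image width of 7952 pixels (42.4 MP frame);
# sensor width, focal length, and flight altitude are taken from the text.
sensor_width_mm = 35.9    # Table 1
focal_length_mm = 35.0    # Table 1
image_width_px = 7952     # assumption, not given in the text
flight_height_m = 800.0   # flight altitude above ground

pixel_pitch_mm = sensor_width_mm / image_width_px
gsd_m = (pixel_pitch_mm / focal_length_mm) * flight_height_m  # similar-triangles relation

print(f"Estimated GSD: {gsd_m * 100:.1f} cm/pixel")  # ~10.3 cm, consistent with the 10 cm orthomosaic
```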

2.3.2. Multi-Rotor UAV Platform and Data Acquisition

On 22 July 2022, we used the multi-rotor DJI Phantom 4 RTK (SZ DJI Technology Co., Shenzhen, China) to conduct a flight over the 20 ha sample site at the Beijing Forest Station. The Phantom 4 RTK is an industry-grade UAV designed for high-precision mapping, with an RTK positioning and navigation system. The multi-rotor UAV has a 1-inch CMOS sensor with 20 million effective pixels, a maximum flight time of 30 min, and a flight altitude between 0 and 500 m (Table 2).
The maximum elevation drop in the study area is up to 250 m. If the multi-rotor UAV flew at a fixed absolute altitude, its distance to the ground could not be kept constant, which would affect image quality. Therefore, we adopted a terrain-following strategy for the flight [36]. First, we flew the multi-rotor UAV over the entire area at a fixed altitude determined from the altitude of the highest point in the area and the altitude of the landing site. Then, we used DJI Terra software to obtain DSM data for the sample site and imported them into the remote controller. Finally, in the actual flight, we set the flight altitude on the remote controller to 200 m and the frontal and side overlap rates to 80%, so that the multi-rotor UAV flew at a relative altitude of 200 m above the ground. Because we utilized the RTK navigation system of the multi-rotor UAV, we did not set additional ground control points. We imported the obtained images into DJI Terra software to generate orthomosaics with a resolution of 6 cm (Figure 3b).
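The terrain-following idea can be sketched as follows: for each planned waypoint, the ground elevation is read from the DSM and the commanded altitude is set 200 m above it. This is a minimal illustration, not the DJI workflow itself; the DSM file name and waypoint coordinates are hypothetical, and the coordinates are assumed to share the DSM's coordinate reference system.

```python
# Sketch of terrain-following altitude planning from a DSM (hypothetical inputs).
import rasterio

RELATIVE_ALTITUDE_M = 200.0  # constant height above ground, as in the flight described above

def terrain_following_altitudes(dsm_path, waypoints_xy):
    """Return (x, y, absolute flight altitude) for each waypoint."""
    with rasterio.open(dsm_path) as dsm:
        # rasterio samples the DSM at each (x, y); band 1 holds the surface elevation.
        ground = [vals[0] for vals in dsm.sample(waypoints_xy)]
    return [(x, y, g + RELATIVE_ALTITUDE_M) for (x, y), g in zip(waypoints_xy, ground)]

# Example usage with made-up projected coordinates (metres):
# plan = terrain_following_altitudes("dongling_dsm.tif", [(451200.0, 4428900.0)])
```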
On 1 October 2022, we used the multi-rotor DJI Mavic 2 Pro (SZ DJI Technology Co., Shenzhen, China) for the sample plot flight. The DJI Mavic 2 Pro is a lightweight consumer-grade drone that is well suited to taking photos in forest environments. We divided the sample area into 3 flight zones and set the flight altitude to 150 m above the highest point in each flight zone. We used the same data processing as for the Phantom 4 RTK to generate orthomosaics with a resolution of 4 cm (Figure 3c).

2.4. Reference Data Extraction

The topography of the study sample site is complex, with a maximum elevation of 1509 m, a minimum elevation of 1290 m, an average elevation of 1395 m, a maximum elevation difference of 219 m, and steep terrain with slopes of 20 to 80° [4,37,38]. The high canopy closure of this sample site makes it difficult to obtain high-accuracy positioning under dense forest conditions and leads to large geolocation errors [39]. Such errors are usually in the range of decimeters to meters when using differential GNSS and may exceed several meters when using standalone GNSS, especially under dense tree canopies [40,41]. Ground data are usually point observations, broadleaf trees can shade each other, and trees do not grow straight, so it is very difficult to establish a spatially explicit link between ground data and tree species information in UAV imagery. Based on these considerations, we did not use forest inventory data in this study.
Instead of using in situ observations, reference data in CNN-based studies are typically (in 62% of cases) obtained directly from remote sensing data (e.g., higher-resolution imagery) by visual interpretation [42]. Reference data obtained by visual interpretation are usually spatially explicit, as they are derived directly from the images [19,43]. In addition, there are no location errors if the same input data are used for the CNN and for visual interpretation. Visual interpretation is a very efficient way of generating reference data, provided that the variable of interest can be clearly identified in the image. Therefore, this mode of reference data collection is particularly suitable for tree species annotation in high-resolution UAV imagery [41,42].
Based on Liu's studies at this sample site (Table 3), both Quercus wutaishanica and Juglans mandshurica are dominant tree species here. Quercus wutaishanica is one of the main dominant and constructive species of warm-temperate deciduous broad-leaved forests in China and has an important influence on the appearance, structure, dynamics, and species composition of the forests in the area [37,44]. Juglans mandshurica is a dominant species in the Dongling Mountains, a major constructive species of warm-temperate forests, and one of the most representative deciduous trees in temperate regions [38]. Therefore, we selected these two dominant species for identification.
Notably, the leaves of Quercus wutaishanica are rich green in summer and green with red and yellow spots in autumn (Figure 4). The crown of Juglans mandshurica is flat and rounded, with leaves that are soft green in summer and already shed by October (Figure 4). As shown in Figure 4, the spatial resolution of the fixed-wing UAV images is lower, making it difficult to accurately distinguish tree species from the limited spatial detail. The spatial resolution of the multi-rotor UAV images is higher (below 5 cm), providing sufficient spatial detail with distinct canopy textures and colors. We therefore used the seasonal features of the tree species and the multi-seasonal, high-resolution multi-rotor UAV images for visual interpretation to obtain the labeled data. The chosen samples were further checked by cross-referencing other data sources, such as https://www.plantplus.cn/cn (accessed on 20 April 2023) and https://www.zhiwutong.com/ (accessed on 20 April 2023).
Liu et al. [38,45] analyzed the distribution of Quercus wutaishanica and Juglans mandshurica in relation to topography (Table 3). The results showed obvious differences in the growth environments of these two species within the sample site: Quercus wutaishanica is mainly distributed at high elevations with good light, and Juglans mandshurica is mainly distributed at low elevations with good moisture conditions. This pattern is consistent with the distribution of our sample labels, which further supports the reliability of the labeled data. Considering that the cropping size of remote sensing images influences model training, we chose two tile sizes to crop the UAV images: 256 × 256 pixels and 128 × 128 pixels. We used 90% of the data as the training set and 10% as the validation set and performed data augmentation to improve the model's capability.
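The tiling and 90/10 split can be expressed as a short sketch; it assumes the orthomosaic and the label map have already been loaded as NumPy arrays of matching height and width (for example with rasterio), which is an assumption about an implementation the paper does not publish.

```python
# Cut an orthomosaic and its label map into non-overlapping tiles (256 or 128 px)
# and hold out 10% of the tiles for validation, as described in the text.
import numpy as np

def make_tiles(image, labels, tile=256):
    """image: (H, W, 3) array; labels: (H, W) array of class codes."""
    h, w = labels.shape
    pairs = []
    for row in range(0, h - tile + 1, tile):
        for col in range(0, w - tile + 1, tile):
            pairs.append((image[row:row + tile, col:col + tile],
                          labels[row:row + tile, col:col + tile]))
    return pairs

def split_train_val(pairs, val_fraction=0.1, seed=0):
    """Shuffle the tiles and keep 10% as the validation set."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(pairs))
    n_val = int(len(pairs) * val_fraction)
    val_idx, train_idx = order[:n_val], order[n_val:]
    return [pairs[i] for i in train_idx], [pairs[i] for i in val_idx]
```

Data augmentation mentioned above would then be applied to the training tiles only, leaving the validation tiles untouched.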

2.5. Classification Method

The U-Net network model, proposed by Ronneberger et al. [46], is an end-to-end network structure. The model follows an encoder–decoder structure with skip connections, enabling the fusion of high-level semantic information with shallow features, fully utilizing contextual and detailed information, and producing accurate feature maps even on a small training set.
The U-Net network model (Figure 5) mainly consists of a contracting path and a symmetric expanding path. The contracting path extracts the contextual features of the image and has four blocks, each consisting of two 3 × 3 convolutional layers and a 2 × 2 pooling layer, where the convolutional layers use the rectified linear unit (ReLU) as the activation function. The pooling layer halves the image size after each block, while the convolutional layers double the number of feature channels. The expanding path maps the contextual information back to the original scale and also has four blocks. Each block passes through an up-convolution layer, concatenates the result with the corresponding feature map from the contracting path, and then applies two 3 × 3 convolutional layers with ReLU activations. Each block doubles the spatial size of the feature map and halves the number of feature channels while incorporating the feature information from the contracting path. The final feature map is mapped to the target classes by a 1 × 1 convolution. This study uses a deep learning framework based on PyTorch. We chose stochastic gradient descent (SGD) as the optimizer, with a batch size of 8 and 100 training epochs. To mitigate overfitting, we used data augmentation, early stopping, and batch normalization. If the validation loss does not decrease within 5 epochs, training stops.
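The training setup described above (PyTorch, SGD, batch size 8, up to 100 epochs, early stopping after 5 epochs without improvement) can be sketched as follows. The UNet class, the dataset objects, the learning rate, and the momentum value are assumptions, since these implementation details are not published with the paper.

```python
# Training-loop sketch matching the stated setup; hyperparameters other than the
# optimizer type, batch size, epoch count, and patience are assumed, not from the paper.
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model, train_set, val_set, device="cuda"):
    # Masks are expected as LongTensors of class indices for CrossEntropyLoss.
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # lr/momentum assumed
    train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=8)

    best_val, patience = float("inf"), 0
    for epoch in range(100):
        model.train()
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        # Validation loss drives early stopping.
        model.eval()
        val_loss, n_batches = 0.0, 0
        with torch.no_grad():
            for images, masks in val_loader:
                images, masks = images.to(device), masks.to(device)
                val_loss += criterion(model(images), masks).item()
                n_batches += 1
        val_loss /= max(n_batches, 1)

        if val_loss < best_val:
            best_val, patience = val_loss, 0
            torch.save(model.state_dict(), "best_unet.pt")
        else:
            patience += 1
            if patience >= 5:  # stop if the loss has not improved for 5 epochs
                break
```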

2.6. Classification Accuracy Assessment

In this study, we evaluated the accuracy of the model based on precision (P), recall (R), and the F1 score (F1). The F1 score combines the precision and recall of the model. The formulas are defined as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \times P \times R}{P + R}$$
where TP (true positives) are positive samples predicted as positive, FP (false positives) are negative samples predicted as positive, and FN (false negatives) are positive samples predicted as negative.
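For a segmentation output, these scores are computed per class by counting pixels; a minimal sketch, assuming the predicted and reference label maps are integer arrays of the same shape, is given below.

```python
# Per-class precision, recall, and F1 from a predicted and a reference label map.
import numpy as np

def per_class_scores(pred, ref, class_id):
    """Treat `class_id` as the positive class and all other labels as negative."""
    pred_pos = pred == class_id
    ref_pos = ref == class_id
    tp = np.sum(pred_pos & ref_pos)    # positives predicted as positive
    fp = np.sum(pred_pos & ~ref_pos)   # negatives predicted as positive
    fn = np.sum(~pred_pos & ref_pos)   # positives predicted as negative
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```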

3. Results

3.1. Model Accuracy Using Multi-Rotor UAV Imagery and Fixed-Wing UAV Imagery

The classification models trained using UAV imagery and CNNs achieved effective classification results (Table 4). The model trained on multi-rotor UAV imagery with a tile size of 256 × 256 pixels had the highest accuracy (mean F1 score = 96.06%). Within this model, the accuracy for Quercus wutaishanica (F1 score = 95.52%), the main tree species in the region, was slightly lower than that for Juglans mandshurica (F1 score = 96.59%). The accuracy of the fixed-wing UAV imagery models was lower. The best-performing fixed-wing model had a tile size of 256 × 256 pixels, with an average F1 score of 92.93%, including 92.00% for Quercus wutaishanica and 93.86% for Juglans mandshurica.

3.2. Effect of Tile Size on the UAV Image Models

For multi-rotor UAV imagery, the accuracy of the model with a tile size of 128 × 128 pixels was 8.5% lower than that of the model with a tile size of 256 × 256 pixels in terms of the mean F1 score. The F1 score of Juglans mandshurica decreased by only 5.05%, whereas that of Quercus wutaishanica decreased by 12.12%.
For fixed-wing UAV imagery, the accuracy of the model with the large tile size was 6.54% higher than that of the model with the small tile size in terms of the mean F1 score. The accuracy of Quercus wutaishanica was reduced more in terms of precision, recall, and the F1 score, with an 8.35% reduction in the F1 score, whereas Juglans mandshurica was less affected, with its F1 score reduced by only 4.74%.

3.3. Model Prediction

We used the trained models to predict the UAV imagery; local enlargements of the predicted images are shown in Figure 6. The multi-rotor imagery model accurately predicted the tree species distribution and rarely misidentified the forest floor as a tree species. It predicted contiguous stands of tree species adequately, and their edges were effectively detected. Fragmented patches could also be predicted well, although their edges were slightly worse.
The fixed-wing UAV imagery model adequately predicted the distribution of tree species but more often misidentified the forest floor inside contiguous polygons as a tree species. The model predicted well in areas with contiguous tree species, although edge detection was only moderate. Fragmented patches could also be predicted, but edge detection was worse.
We selected an area of 120 ha on the fixed-wing UAV image as the scene for model prediction (Figure 7). We used the 256 × 256 pixel model, which had the highest accuracy, to predict the scene. Overall, the prediction ability of the model was effective: it made accurate predictions for Juglans mandshurica, but its predictions for the fragmented Quercus wutaishanica patches were only moderate.
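Prediction over a scene larger than one network input is done tile by tile; the sketch below illustrates a simple non-overlapping version, assuming `model` is the trained U-Net and `image` is a (H, W, 3) float32 array scaled to [0, 1]. The actual study may have handled tile borders differently (for example, with overlapping tiles and blending).

```python
# Tile-by-tile prediction over a large orthomosaic, as used for the 120 ha scene.
import numpy as np
import torch

def predict_scene(model, image, tile=256, device="cuda"):
    model = model.to(device).eval()
    h, w, _ = image.shape
    out = np.zeros((h, w), dtype=np.uint8)  # predicted class code per pixel
    with torch.no_grad():
        for row in range(0, h - tile + 1, tile):
            for col in range(0, w - tile + 1, tile):
                patch = image[row:row + tile, col:col + tile]
                x = torch.from_numpy(patch).permute(2, 0, 1).unsqueeze(0).to(device)
                logits = model(x)  # (1, n_classes, tile, tile)
                out[row:row + tile, col:col + tile] = (
                    logits.argmax(dim=1).squeeze(0).cpu().numpy().astype(np.uint8)
                )
    return out
```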

4. Discussion

4.1. Comparison of Multi-Rotor and Fixed-Wing Model Capabilities

We achieved high classification accuracy (F1 score > 90%) for tree species using both multi-rotor and fixed-wing UAV images combined with CNNs. Our results show that trees of the same species but with different genetic lines can be consistently identified, because CNNs can fully exploit the spatial features of high-resolution UAV images and distinguish tree species based on differences in canopy morphology. In future research, we will add infrared and thermal infrared bands to enrich the spectral information of the UAV images, thereby improving the transferability of the model and expanding the scope of our study.
Both multi-rotor and fixed-wing UAVs have proven effective for the fine classification of ground objects. Revuelto et al. [47] used two multi-rotor drones and one fixed-wing drone to map snow depth distribution and obtained effective results with both types under good lighting conditions. Liu et al. [48] utilized multiple types of UAV data and an object-oriented approach to classify mangroves, achieving classification accuracies of 81.1% and 84.8% for fixed-wing UAV data and multi-rotor UAV data, respectively.
For both UAV image models, the prediction accuracy of Juglans mandshurica was higher than that of Quercus wutaishanica. The leaves of Quercus wutaishanica appear dark green in summer and are easily confused with those of other tree species. In contrast, the leaves of Juglans mandshurica are tender green in summer and can thus be effectively identified. Zheng et al. [19] used an improved deep learning approach to classify plantations in Indonesia; their results showed that the accuracy for the less numerous yellowish palms, which have distinct features, was 88.92%, while the accuracy for the more numerous smallish palms was 77.06%. These results show that acquiring data when the tree species' features are distinct improves classification accuracy.
The multi-rotor UAV imagery model was more accurate than the fixed-wing UAV imagery model, probably because of the difference in image resolution. The multi-rotor UAV applied a terrain-following strategy, which allowed it to work continuously at a relative altitude of 200 m, whereas the fixed-wing UAV operated at an altitude of 800 m. These factors make fixed-wing UAV imagery coarser in spatial resolution than multi-rotor UAV imagery. Schiefer et al. [41] also found that higher spatial resolution contributes to the accuracy of CNN models. Lin et al. [49] compared the classification results of images acquired at different camera heights and found that the correct classification rate decreased as the camera height increased.

4.2. Combined Multi-Rotor and Fixed-Wing UAV for Large-Area Mapping

We combined a multi-rotor UAV and fixed-wing UAV to map tree species over a large area in a forest with complex topography and high heterogeneity at a low cost. In our study, we used only one fixed-wing UAV dataset and two multi-rotor UAV datasets. This method not only substantially reduces the cost of fixed-wing UAV flights but also improves the efficiency of tree species mapping.
It is difficult to conduct fine vegetation mapping in complex forest areas using fixed-wing UAVs alone. First, most fixed-wing UAVs have certain requirements for take-off and landing sites [30] and require professional, skilled personnel to operate. Second, since their flight altitude can exceed 1000 m, fixed-wing UAVs are restricted by local laws and regulations, and each flight must be reported to the air traffic control department. Last, the weather in mountainous areas is changeable and often accompanied by high winds, while the wind resistance of UAVs is limited. These characteristics make the use of fixed-wing UAVs in forest areas risky and costly [48]. Our study area is a warm-temperate deciduous broadleaf forest with complex topography and high heterogeneity, and the texture and canopy morphology of the tree species are similar, making it difficult to obtain accurate reference data from lower-resolution fixed-wing imagery. Multi-rotor UAVs have the advantages of low cost, easy operation, and high stability and can fly at a specified speed to ensure a suitable photo interval and image quality [12]. Therefore, we used a multi-rotor UAV to address this challenge by collecting data across multiple seasons and generating sample sets using location information.
The synergy between multi-rotor UAVs and fixed-wing UAVs has great value for further improving the application capability of UAVs. On the one hand, multi-rotor UAVs and fixed-wing UAVs have different advantages in terms of flight cost, operational efficiency, and so on. On the other hand, they also have different acquisition characteristics related to the signal-to-noise ratio, spatial resolution, spectral resolution, and so on. Multi-rotor UAVs and fixed-wing UAVs are therefore highly complementary, with strong and promising potential for synergy. However, most studies investigate them separately or only compare their capabilities [47,48], leaving the synergy between them largely unexplored.
Our study focuses on data complementation using the respective advantages of fixed-wing and multi-rotor UAVs, which constitutes a multiscale observation approach [50]. Since the multi-rotor UAVs covered only a small number of sites, we predicted only 120 ha of fixed-wing UAV imagery near the sample site. The prediction ability of the model may be lower for areas far from the sample sites, so in future studies, the area and number of sample sites should be chosen based on the coverage of the fixed-wing UAV imagery.

4.3. Tile Size

For both fixed-wing and multi-rotor images, the accuracy of the 128 × 128 pixel model was lower than that of the 256 × 256 pixel model. Since both Quercus wutaishanica and Juglans mandshurica are major tree species in the region, the larger tile size incorporates more spatial context information and facilitates the training of the semantic segmentation model. This is similar to the finding of Schiefer et al. [41], who reported that the F1 score for Abies alba was 0.86 at a tile size of 512 × 512 pixels but only 0.79 at 128 × 128 pixels. Notably, the model accuracy of Quercus wutaishanica decreases more than that of Juglans mandshurica when the tile size is reduced, which may be related to the distribution pattern of the tree species. Quercus wutaishanica is very adaptable and can survive on steep slopes and even cliffs, so it shows a clustered or fragmented distribution, whereas Juglans mandshurica mostly grows on the sides of gullies and valleys and has a more concentrated distribution.

5. Conclusions

We have developed a method that combines a multi-rotor UAV and a fixed-wing UAV to achieve tree species mapping over a large area at a low cost. This new method combines the low-cost, flexible characteristics of multi-rotor UAVs with the high efficiency of fixed-wing UAVs to achieve fine-grained mapping in complex scenarios. We also tested the effect of different tile sizes on the semantic segmentation model and found that lower-resolution UAV images are more affected by tile size. These findings have significant implications for tree species classification, especially for regional forest resource surveys. In future research, we aim to improve the spatial resolution of fixed-wing UAV images while balancing operational efficiency and will use multi-rotor UAVs for data collection in multiple regions, further exploring the synergistic potential of both types of UAVs.

Author Contributions

W.S.: Conceptualization, methodology, formal analysis, writing—original draft, writing—review and editing. S.W.: Conceptualization, writing—review and editing, funding acquisition. H.Y. (Huanyin Yue): Writing—review and editing. D.W.: Writing—review and editing. H.Y. (Huping Ye): Writing—review and editing. L.S.: Writing—review and editing, funding acquisition. J.S.: Writing—review and editing, funding acquisition. J.L.: Writing—review and editing, data curation. Z.D.: Data curation. Y.R.: Data curation. Z.H.: Data curation. X.S.: Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA19040104, No. XDA19050501), the Scientific Research Foundation of China University of Geosciences (Wuhan) (No. 162301192642), the National Natural Science Foundation of China (42001314) and the Key Research and Development Project of Hebei Academy of Sciences (22A03).

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors are thankful to the staff who work at the Beijing Forest Ecosystem Orientation Research Station.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Perera, A.H.; Peterson, U.; Pastur, G.M.; Iverson, L.R. Ecosystem Services from Forest Landscapes: Where We Are and Where We Go. Ecosyst. Serv. For. Landsc. 2018, 249–258.
2. Pan, Y.; Birdsey, R.A.; Fang, J.; Houghton, R.; Kauppi, P.E.; Kurz, W.A.; Phillips, O.L.; Shvidenko, A.; Lewis, S.L.; Canadell, J.G.; et al. A Large and Persistent Carbon Sink in the World’s Forests. Science 2011, 333, 988–993.
3. Bonan, G.B. Forests and climate change: Forcings, feedbacks, and the climate benefits of forests. Science 2008, 320, 1444–1449.
4. Liu, H.; Li, L.; Sang, W. Species composition and community structure of the Donglingshan forest dynamic plot in a warm temperate deciduous broad-leaved secondary forest, China. Biodivers. Sci. 2011, 19, 232–242.
5. Zhu, Y.; Bai, F.; Liu, H.; Li, W.; Li, L.; Li, G.; Wang, S.; Sang, W. Population distribution patterns and interspecific spatial associations in warm temperate secondary forests, Beijing. Biodivers. Sci. 2011, 19, 252–259.
6. Bai, F.; Zhang, W.; Wang, Y. A dataset of seasonal dynamics of the litter fall production of deciduous broad-leaf forest in the warm temperate zone of Beijing Dongling Mountain (2005–2015). China Sci. Data 2020, 5, 1-8–8-8.
7. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87.
8. Pu, R. Mapping Tree Species Using Advanced Remote Sensing Technologies: A State-of-the-Art Review and Perspective. J. Remote Sens. 2021, 2021, 9812624.
9. Stoffels, J.; Hill, J.; Sachtleber, T.; Mader, S.; Buddenbaum, H.; Stern, O.; Langshausen, J.; Dietz, J.; Ontrup, G. Satellite-Based Derivation of High-Resolution Forest Information Layers for Operational Forest Management. Forests 2015, 6, 1982–2013.
10. Ke, Y.; Quackenbush, L.J.; Im, J. Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141–1154.
11. Immitzer, M.; Atzberger, C.; Koukal, T. Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data. Remote Sens. 2012, 4, 2661–2693.
12. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
13. Zhang, J.; Hu, J.; Lian, J.; Fan, Z.; Ouyang, X.; Ye, W. Seeing the forest from drones: Testing the potential of lightweight drones as a tool for long-term forest monitoring. Biol. Conserv. 2016, 198, 60–69.
14. Pajares, G. Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–329.
15. Liao, X.; Xiao, Q.; Zhang, H. UAV remote sensing: Popularization and expand application development trend. J. Remote Sens. 2019, 23, 1046–1052.
16. Liao, X.; Zhou, C.; Su, F.; Lu, H.; Yue, H.; Gou, J. The Mass Innovation Era of UAV Remote Sensing. J. Geo-Inf. Sci. 2016, 18, 1439–1447.
17. Kiyak, E.; Unal, G. Small aircraft detection using deep learning. Aircr. Eng. Aerosp. Technol. 2021, 93, 671–681.
18. He, C.; Zhang, S.; Yao, S. Forest Fires Locating Technology Based on Rotor UAV. Bull. Surv. Mapp. 2014, 12, 24–27.
19. Zheng, J.; Fu, H.; Li, W.; Wu, W.; Yu, L.; Yuan, S.; Tao, W.Y.W.; Pang, T.K.; Kanniah, K.D. Growing status observation for oil palm trees using Unmanned Aerial Vehicle (UAV) images. ISPRS J. Photogramm. Remote Sens. 2021, 173, 95–121.
20. Yu, J.-W.; Yoon, Y.-W.; Baek, W.-K.; Jung, H.-S. Forest Vertical Structure Mapping Using Two-Seasonal Optic Images and LiDAR DSM Acquired from UAV Platform through Random Forest, XGBoost, and Support Vector Machine Approaches. Remote Sens. 2021, 13, 4282.
21. Liu, J.; Liao, X.; Ni, W.; Wang, Y.; Ye, H.; Yue, H. Individual Tree Recognition Algorithm of UAV Stereo Imagery Considering Three-dimensional Morphology of Tree. J. Geo-Inf. Sci. 2021, 23, 1861–1872.
22. Onwudinjo, K.C.; Smit, J. Estimating the performance of multi-rotor unmanned aerial vehicle structure-from-motion (UAVsfm) imagery in assessing homogeneous and heterogeneous forest structures: A comparison to airborne and terrestrial laser scanning. S. Afr. J. Geomat. 2022, 11, 1.
23. Chandrasekaran, A.; Shao, G.; Fei, S.; Miller, Z.; Hupy, J. Automated Inventory of Broadleaf Tree Plantations with UAS Imagery. Remote Sens. 2022, 14, 1931.
24. Feng, Q.; Yang, J.; Liu, Y.; Ou, C.; Zhu, D.; Niu, B.; Liu, J.; Li, B. Multi-Temporal Unmanned Aerial Vehicle Remote Sensing for Vegetable Mapping Using an Attention-Based Recurrent Convolutional Neural Network. Remote Sens. 2020, 12, 1668.
25. Shi, W.; Liao, X.; Sun, J.; Zhang, Z.; Wang, D.; Wang, S.; Qu, W.; He, H.; Ye, H.; Yue, H.; et al. Optimizing Observation Plans for Identifying Faxon Fir (Abies fargesii var. Faxoniana) Using Monthly Unmanned Aerial Vehicle Imagery. Remote Sens. 2023, 15, 2205.
26. Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M.; Luoma, V.; Tommaselli, A.; Imai, N.; et al. Assessing Biodiversity in Boreal Forests with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2018, 10, 338.
27. Zhang, J.; Sun, Q.; Ye, Z.; Yang, M.; Zhao, X.; Ju, Y.; Hu, T.; Guo, Q. New Technology for Ecological Remote Sensing: Light, Small Unmanned Aerial Vehicles (UAV). Trop. Geogr. 2019, 39, 604–615.
28. Guo, Q.; Wu, F.; Hu, T.; Chen, L.; Liu, J.; Zhao, X.; Gao, S.; Pang, S. Perspectives and prospects of unmanned aerial vehicle in remote sensing monitoring of biodiversity. Biodivers. Sci. 2016, 24, 1267–1278.
29. Jayathunga, S.; Owari, T.; Tsuyuki, S. Evaluating the Performance of Photogrammetric Products Using Fixed-Wing UAV Imagery over a Mixed Conifer–Broadleaf Forest: Comparison with Airborne Laser Scanning. Remote Sens. 2018, 10, 187.
30. Zhou, M.; Zhou, Z.; Liu, L.; Huang, J.; Lyu, Z. Review of vertical take-off and landing fixed-wing UAV and its application prospect in precision agriculture. Int. J. Precis. Agric. Aviat. 2020, 3, 8–17.
31. Gonçalves, G.; Gonçalves, D.; Gómez-Gutiérrez, Á.; Andriolo, U.; Pérez-Alvárez, J.A. 3D Reconstruction of Coastal Cliffs from Fixed-Wing and Multi-Rotor UAS: Impact of SfM-MVS Processing Parameters, Image Redundancy and Acquisition Geometry. Remote Sens. 2021, 13, 1222.
32. Wandrie, L.J.; Klug, P.E.; Clark, M.E. Evaluation of two unmanned aircraft systems as tools for protecting crops from blackbird damage. Crop Prot. 2019, 117, 15–19.
33. Boon, M.A.; Drijfhout, A.P.; Tesfamichael, S. Comparison of a Fixed-Wing and Multi-Rotor UAV for Environmental Mapping Applications: A Case Study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 47–54.
34. Zhang, Q.; Zhang, J.; Zhang, B.; Cheng, J.; Tian, S.; Suriguga. Pattern of Larix principis-rupprechtii Plantation and Its Environmental Interpretation in Dongling Mountain. J. Wuhan Bot. Res. 2010, 28, 577–582.
35. Li, Z.; Chen, W.; Wei, J.; Maierdang, K.; Zhang, Y.; Zhang, S.; Wang, X. Tree-ring growth responses of Liaodong Oak (Quercus wutaishanica) to climate in the Beijing Dongling Mountain of China. Acta Ecol. Sin. 2021, 41, 27–37.
36. Wu, K.; Sun, X.; Zhang, J.; Chen, F. Terrain Following Method of Plant Protection UAV Based on Height Fusion. Trans. Chin. Soc. Agric. Mach. 2018, 49, 17–23.
37. Ma, F.; Wang, S.; Feng, J.; Sang, W. The study of the effect of tree death on spatial pattern and habitat associations in dominant populations of Dongling Mountains in Beijing. Acta Ecol. Sin. 2018, 38, 7669–7678.
38. Liu, H.; Sang, W.; Xue, D. Topographical habitat variability of dominant species populations in a warm temperate forest. Chin. J. Ecol. 2013, 32, 795–801.
39. Tan, W.; Wang, K.L.; Luo, X.; Wang, Z.J. Positioning precision with handset GPS receiver in different stands. J. Beijing For. Univ. 2008, 30, 163–167.
40. Kaartinen, H.; Hyyppä, J.; Vastaranta, M.; Kukko, A.; Jaakkola, A.; Yu, X.; Pyörälä, J.; Liang, X.; Liu, J.; Wang, Y.; et al. Accuracy of Kinematic Positioning Using Global Satellite Navigation Systems under Forest Canopies. Forests 2015, 6, 3218–3236.
41. Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; Schmidtlein, S. Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2020, 170, 205–215.
42. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49.
43. Zhang, C.; Atkinson, P.M.; George, C.; Wen, Z.; Diazgranados, M.; Gerard, F. Identifying and mapping individual plants in a highly diverse high-elevation ecosystem using UAV imagery and deep learning. ISPRS J. Photogramm. Remote Sens. 2020, 169, 280–291.
44. Wang, M.; Sang, W. The change of phenology of tree and shrub in warm temperate zone and their relationships with climate change. Ecol. Sci. 2020, 39, 164–175.
45. Liu, H.; Xue, D.; Sang, W. Effect of topographic factors on the relationship between species richness and aboveground biomass in a warm temperate forest. Ecol. Environ. Sci. 2012, 21, 1403–1407.
46. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015.
47. Revuelto, J.; Alonso-Gonzalez, E.; Vidaller-Gayan, I.; Lacroix, E.; Izagirre, E.; Rodríguez-López, G.; López-Moreno, J.I. Intercomparison of UAV platforms for mapping snow depth distribution in complex alpine terrain. Cold Reg. Sci. Technol. 2021, 190, 103344.
48. Liu, K.; Gong, H.; Cao, J.; Zhu, Y. Comparison of Mangrove Remote Sensing Classification Based on Multi-type UAV Data. Trop. Geogr. 2019, 39, 492–501.
49. Lin, Z.; Ding, Q.; Huang, J.; Tu, W.; Hu, D.; Liu, J. Study on Tree Species Classification of UAV Optical Image based on DenseNet. Remote Sens. Technol. Appl. 2019, 34, 704–711.
50. Alvarez-Vanhard, E.; Corpetti, T.; Houet, T. UAV & satellite synergies for optical remote sensing applications: A literature review. Sci. Remote Sens. 2021, 3, 100019.
Figure 1. Overview of the study area: (a) Geographical location of the Dongling Mountains region, (b) fixed-wing RGB orthomosaics, and (c) Multi-rotor RGB orthomosaics.
Figure 2. Analysis workflow of the study.
Figure 3. Partial diagram of visible light images of multi-rotor UAV and fixed-wing UAV: (a) Partial view of fixed-wing image in August; (b) Partial view of multi-rotor images in July; (c) Partial view of multi-rotor images in October.
Figure 4. Examples of classified tree species on different UAV imagery: (a) Image data acquired by Feima F200 in August. (b) Image data acquired by DJI Phantom 4 RTK in July. (c) Image data acquired by DJI Mavic 2 Pro in October.
Figure 5. Architecture of U-Net for tree species segmentation.
Figure 6. Prediction of UAV images using trained models: (a) Multi-rotor UAV RGB imagery. (b) Fixed-wing UAV RGB imagery. (c) Reference data. (d) Prediction map of multi-rotor UAV imagery model. (e) Prediction map of the fixed-wing UAV imagery model. MR and FW represent multi-rotor UAV imagery model and fixed-wing UAV imagery model, respectively. The black rectangle represents incorrect classification results.
Figure 7. Prediction of fixed-wing UAV imagery: (a) RGB orthomosaics over 120 ha acquired with Feima F200 and (b) Prediction map based on 256 × 256 pixels.
Table 1. Drone equipment parameters.
Parameter | DJI Phantom 4 RTK (Multi-Rotor) | DJI Mavic 2 Pro (Multi-Rotor) | FEIMA F200 (Fixed-Wing)
Takeoff weight | 1391 g | 907 g | 3193 g
Flight speed | 50 km/h (max speed) | 50 km/h (max speed) | 60 km/h (cruising speed)
Max flight time | 30 min | 30 min | 1 h 30 min
Flight altitude | 0–500 m | 0–500 m | 150–1500 m
Operating temperature | 0 to 40 °C | 0 to 40 °C | Above −10 °C
Camera model | SONY FC6310 | Hasselblad L1D-20c | SONY DSC-RX1R II
Sensor size | 1″ CMOS | 1″ CMOS | 35.9 × 24.0 mm
Effective pixels | 20 million | 20 million | 42.4 million
Table 2. Flight parameters and image information of UAVs on different dates.
Parameter | DJI Phantom 4 RTK | DJI Mavic 2 Pro | FEIMA F200
Flight date | 22 July | 1 October | 16 August
Flight time | 9:15 | 16:12 | 15:12
Flight height | 200 m | 150 m | 800 m
Flight strategy | Terrain following | Zonal flights | Fixed height
Total images | 270 | 552 | 238
Spatial resolution | 6 cm | 4 cm | 10 cm
Table 3. Literature on the distribution of vegetation in the 20 ha sample plot.
Literature | Vegetation Condition
Liu et al. [4], Ma et al. [37] | Dominant species in the canopy layer were mainly Quercus wutaishanica, Betula dahurica, Populus davidiana, Juglans mandshurica, and other tall light-demanding trees.
Liu et al. [38], Liu et al. [45] | Distribution of Quercus wutaishanica and Juglans mandshurica in relation to topography.
Table 4. Tree species mapping model accuracies using multi-rotor UAV imagery and fixed-wing UAV imagery. P, R, and F1 represent precision, recall, and F1 score, respectively.
Tree Species | DJI Phantom 4 RTK, 256 × 256 (P / R / F1) | DJI Phantom 4 RTK, 128 × 128 (P / R / F1) | FEIMA F200, 256 × 256 (P / R / F1) | FEIMA F200, 128 × 128 (P / R / F1)
Quercus wutaishanica | 95.23% / 95.80% / 95.52% | 84.16% / 82.65% / 83.40% | 91.73% / 95.11% / 92.00% | 83.68% / 83.63% / 83.65%
Juglans mandshurica | 96.29% / 96.90% / 96.59% | 90.20% / 92.92% / 91.54% | 92.65% / 95.10% / 93.86% | 87.47% / 90.84% / 89.12%
Mean F1 | 96.06% | 87.47% | 92.93% | 86.39%

Share and Cite

Shi, W., Wang, S., Yue, H., Wang, D., Ye, H., Sun, L., Sun, J., Liu, J., Deng, Z., Rao, Y., Hu, Z., & Sun, X. (2023). Identifying Tree Species in a Warm-Temperate Deciduous Forest by Combining Multi-Rotor and Fixed-Wing Unmanned Aerial Vehicles. Drones, 7(6), 353. https://doi.org/10.3390/drones7060353
