Article

Satellite Remote Sensing Images of Crown Segmentation and Forest Inventory Based on BlendMask

1 College of Science, Beijing Forestry University, Beijing 100083, China
2 Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
* Author to whom correspondence should be addressed.
Forests 2024, 15(8), 1320; https://doi.org/10.3390/f15081320
Submission received: 8 July 2024 / Revised: 26 July 2024 / Accepted: 26 July 2024 / Published: 29 July 2024

Abstract
This study proposes a low-cost method for crown segmentation and forest inventory based on satellite remote sensing images and the deep learning model BlendMask. Taking Beijing Jingyue ecoforestry as the experimental area, we combined field survey data with satellite images and independently constructed a dataset for model training. The experimental results show that the F1-scores for Sophora japonica, Pinus tabulaeformis, and Koelreuteria paniculata reached 87.4%, 85.7%, and 86.3%, respectively. We then applied the model to the whole study area of 146 ha, identifying 27,403 trees in nine categories with a total crown projection area of 318,725 m2. We also fitted a biomass calculation model for oil pine (Pinus tabulaeformis) based on field measurements and estimated 205,199.69 kg of carbon for this species across the study area. Additionally, a comparison with U-net showed that BlendMask has stronger crown-segmentation capability. This study demonstrates that BlendMask can effectively perform crown segmentation and forest inventory in large-scale complex forest areas, showing its great potential for forest resource management.

1. Introduction

As the largest terrestrial ecosystem on Earth, forests play a crucial role in protecting biodiversity, maintaining carbon balance, and mitigating global warming [1,2]. Forest resources are an important cornerstone of sustainable development, and sound, scientific forest management can not only promote local economic development but also protect the ecological environment and slow climate change [3,4]. Therefore, efficiently and accurately obtaining key information such as forest species composition, growth conditions, and distribution is crucial for effective forest management.
At present, many researchers use unmanned aerial vehicles (UAVs) to survey forest ecosystems in detail; thanks to their compactness, convenience, high resolution, and manoeuvrability, UAVs can identify tree species and collect species information over a limited range of forested areas [5]. A common approach is to pair UAVs with sensors such as high-resolution cameras [6], airborne LiDAR [7], and hyperspectral sensors [8]. To address the limitations of single data sources in the detailed detection of complex forest scenes, scholars have turned to fusing multi-source data, including UAV and LiDAR data [9] and airborne hyperspectral and LiDAR data [10]. Aarne Hovi et al. measured forest reflectance with the help of multispectral and hyperspectral remote sensing data and airborne LiDAR [11]. However, owing to the limited endurance and fixed flight altitude of UAVs, it is difficult to complete macro-scale detection of forest areas, and large-scale information on species categories, crown distribution, etc., cannot be obtained directly. Although airborne radar and hyperspectral equipment offer higher detection accuracy, applying them to large-scale forestry scenarios requires more survey time and labour cost. To carry out large-scale forest detection at lower cost, many scholars have instead used remotely sensed satellite data [12]. James R. Hastings et al. utilized airborne remote sensing for forest species classification [13], while Virpi Junttila and Tuomo Kauranne introduced a remote sensing-based post-processing approach for forest inventory prediction [14], although that study relied only on linear models with limited expressiveness.
In recent decades, the development of machine learning has provided new means for forest detection. Classical, popular machine learning algorithms include Support Vector Machines (SVMs) [15,16,17], Random Forest (RF) [17,18], and the Multilayer Perceptron (MLP) [19,20]. By creating sample datasets, researchers have used these algorithms to acquire detailed information on forest systems, such as tree species classification [21,22] and crown segmentation [23,24]. However, given the rich detail of forest remote sensing images, such algorithms struggle to extract features comprehensively, with limited capacity for information acquisition and insufficient generalisation. In recent years, the rapid development of deep learning has effectively compensated for these deficiencies. Consequently, some researchers have used remote sensing images as the basis for tree species classification and crown segmentation with deep learning methods; the main popular algorithms are CNNs [25,26], GCNs [27], U-Net [28], etc. Many scholars continuously optimise and improve models built on these mainstream algorithms for crown detection and segmentation. Wu et al. used the Fast-RNN algorithm to identify apple tree crowns in UAV images [29], and Debarun Chakraborty et al. similarly extracted lychee tree crowns from UAV images [30]. Guillaume Lassalle et al. moved beyond the localised nature of drone surveys and used large-scale high-resolution satellite images to detect individual mangrove tree canopies [31]. I Nurhabib used the YOLO algorithm to detect and count oil palm trees in satellite imagery [32]. Juan Camilo Rivera Palacio et al. counted coffee cherries at geographic scale with the help of deep learning [33].
However, these studies each identified a single tree species, and the models' ability to generalise was insufficient, so their practical applicability and extensibility need to be strengthened. Some scholars have tried combining image enhancement with object detection, aiming to achieve more accurate crown recognition on top of improved image resolution. Zhafri Hariz Roslan et al. used a super-resolution GAN together with the object detection model RetinaNet for crown detection in tropical forests [34]. Huang et al. used the Real-ESRGAN technique to detect and count oil palm trees in images [35]. Unfortunately, despite the super-resolution preprocessing of remote sensing images before crown detection, the final detection accuracy improved by only about 1 per cent, which is not satisfactory. So far, there are few studies applying convolutional neural networks to multi-species, large-area, individual-tree crown segmentation and species identification for mixed forests in complex environments. Existing studies have limitations in tree species coverage, recognition accuracy, study area, and segmentation quality. Therefore, adopting a more refined instance-segmentation model is one effective way to address these problems.
As technology advances, researchers have crafted a sophisticated deep learning model known as BlendMask [36] for image segmentation. This model harnesses attention mechanisms to precisely delineate objects within an image. At its core, BlendMask leverages these mechanisms to discern the relationships between image regions. By learning the global and local features of the image, the model can distinguish foreground from background, as well as the boundaries between different objects. Many scholars have used the BlendMask model for image segmentation in specific fields: Geng et al. used it for tunnel water leakage [37], Gao et al. used a dual BlendMask network to assess the severity of wheat fusarium head blight [38], Quan et al. used BlendMask for obtaining information about weeds in complex field environments [39], Yang et al. used it to distinguish the feeding level of fish in aquaculture [40], and Wang et al. used instance segmentation for lesion cell nuclei in medical images [41]. These studies show that the original BlendMask model delivers strong image segmentation capability and high accuracy in various fields, but its application to multi-species classification and crown segmentation in large, complex forest areas has not yet been reported. The Segment Anything Model (SAM) is an efficient image segmentation model known for its strong generalization ability and high segmentation accuracy [42]. Thanks to training on a large dataset, it is widely used to segment various objects in images. For example, Moghimi et al. successfully employed SAM to segment river water accurately from close-range remote sensing images [43].
However, SAM encounters challenges when applied to forest detection from spaceborne remote sensing images, particularly in recognizing multiple species and delineating object boundaries, as its backbone was trained on close-range rather than spaceborne remote sensing images. We aim to leverage the efficient segmentation performance of BlendMask on low-cost remote sensing satellite imagery to support tree species classification, canopy segmentation, and the acquisition of forest information for long-term forest management and monitoring over large areas.
In this paper, building on the use of convolutional neural networks in satellite remote-sensing tree crown recognition, we design a method for tree crown segmentation and tree-species recognition based on high-resolution satellite remote sensing images and the BlendMask model. Combining the large-scale coverage of satellite remote sensing images with the efficient recognition performance of the BlendMask model, the method can effectively extract crown information over a wide range of forests, accurately identify the tree categories in forests, accurately locate the centre of gravity of individual trees, and compile the relevant statistics. The specific research objectives include:
  • Use the satellite remote sensing images of Beijing Jingyue ecoforestry as experimental data, apply the BlendMask network to segment the crowns and identify the species of the trees in the forest, and count the number of each tree species.
  • Evaluate the inference ability of the model using relevant accuracy-evaluation indexes.
  • Position the centre of gravity of individual trees and calculate crown projection area, canopy cover and other indicators based on the model test results.
  • Establish a biomass regression model for carbon estimation in oil pine (Pinus tabulaeformis).

2. Materials and Methods

2.1. Study Area

The study area of this project is located in Jingyue ecoforestry, Baishan Town, Changping District, Beijing, with a total area of about 146 ha. The geographical coordinates are 116°19′7.2192″ E, 40°11′32.002″ N (Figure 1). The local climate is characterized as a warm temperate, semi-humid continental monsoon, featuring a dry and windy spring, a hot and rainy summer, a cool autumn, and a cold and dry winter. It has an average annual sunshine of 2684 h. The average annual temperature is approximately 11.8 °C, the average annual precipitation is 550.3 mm, and the humidity is 60%. The soil type is yellow soil, which is conducive to the growth of warm-temperate broad-leaved and coniferous forests. The forest contains a variety of tree species such as Pinus tabulaeformis, Sophora japonica, Ginkgo biloba, etc., which provides an excellent opportunity for crown segmentation and species identification.

2.2. Collection of Remote Sensing Images and Field Survey

To enhance the accuracy of tree crown segmentation, we selected remote sensing images from the WorldView-3 satellite acquired in June 2022, sourced from Google historical imagery. These images, with a high spatial resolution of 0.3 m in the WGS84 geographic coordinate system, were chosen to capture the luxuriant branches and leaves of trees in the spring and summer seasons. Additionally, considering the influence of cloud cover on image quality, images were selected to minimize cloud contamination over the study area.
In alignment with the timing of the remote sensing data, a field survey was conducted in August 2023 to ensure temporal and seasonal consistency. Utilizing professional tree identification software, we authenticated the species of trees through photographs taken in the field. This process was complemented by on-site appraisals and discussions with experts to guarantee the precision of our identifications. Ultimately, this comprehensive approach facilitated the accurate documentation of tree species within the forest field. In addition, we selected a sample plot from the same age stand, numbered all the oil pines in the plot, and measured the diameter-at-breast-height values at 1.2 m for all the oil pines in the plot. These data were later used to estimate biomass as well as to fit correlation functions.

2.3. Data Set Construction

We divided the study area into 3 parts, labelled A–C (Figure 1). Since block A contains nearly all the tree species in the study area, we used the remote sensing images in block A as the training data for the neural network, the images in block B as the accuracy-validation data for the BlendMask model, and the images in block C as the inference data for the model.
We segmented multiple remote sensing images selected within block A into images of 512 × 512 size, totalling 200 images. Based on the knowledge of the tree species information in block A obtained from the previous field survey, we used Labelme (version 5.2.1) software to manually interpret all the trees in each image for species labelling. To precisely delineate the crown boundaries of individual trees on RGB images, we employed the Create Polygon tool within the Labelme software. Subsequently, we assigned tree labels in accordance with the species data gathered during our comprehensive field survey (Figure 2). The sample set of species labels for each image is stored in JSON file format. About 80% of the sample sets are randomly selected as the training set, and the remaining sample sets are used as the validation set. Due to the random location of the selected images and the existence of overlapping areas, the number of labelled samples of each tree species is larger than the actual number of tree species in Block A. Given the vast expanse of the research area, it is challenging for us to ascertain the precise species of some trees that are part of the exotic plantations. However, these species are not only abundant in number but also exhibit distinct crown characteristics and are extensively distributed throughout the forest. Therefore, we have labelled the crowns of these unknown species and named them Unknown 1, Unknown 2 and Unknown 3. The detailed sample-set label information is shown in Table 1.
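The tiling and the random 80/20 split described above can be sketched as follows; the function names and the use of NumPy arrays are illustrative, not the study's actual tooling.

```python
import random
import numpy as np

def tile_image(img, tile=512):
    """Cut an H x W x C image array into non-overlapping tile x tile patches."""
    h, w = img.shape[:2]
    return [img[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

def split_dataset(label_files, train_frac=0.8, seed=42):
    """Randomly assign Labelme JSON label files to training/validation sets."""
    files = list(label_files)
    random.Random(seed).shuffle(files)  # fixed seed for a reproducible split
    n_train = int(len(files) * train_frac)
    return files[:n_train], files[n_train:]
```

For the 200 labelled tiles described above, `split_dataset` would yield 160 training and 40 validation samples.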

2.4. Overall Workflow

The overall workflow contains several parts: data preprocessing, network training, parameter optimisation, model accuracy checking, model prediction and result calculation. Firstly, the remote sensing image of block A is segmented into images and labelled with the tree species to construct the dataset. Secondly, the images and their corresponding tree-species labels are imported into the BlendMask network for training, and the model parameters are updated by gradient back-propagation to obtain the optimal model; the model's predictive ability is evaluated using the accuracy indexes as well as the tree-species and crown-detection results on block B. The resulting model is then used for species identification and crown segmentation across the whole study area, providing support for the subsequent classification and counting of tree species, the positioning of the centre of gravity of individual trees, and the calculation of crown projection area and canopy cover (Figure 3).

2.5. BlendMask Algorithm

The BlendMask algorithm consists of the Detector module and the BlendMask module. The Detector module is based on the target detection algorithm FCOS (Fully Convolutional One-Stage Object Detection) as the base framework, with minor modification. The BlendMask module consists of three modules: the Bottom module, the Top Layer module, and the Blender module.
Bottom Module: The module takes as input the low-level features output from the backbone network and the high-level features output from the FPN (Feature Pyramid Network) to generate the base. The module uses the DeepLabV3+ decoder, which contains convolutions. The bottom module shape size is the following:
N × K × (H/S) × (W/S),
where N is the batch size, K is the number of bases, H × W is the input image size and S is the output stride of the score map.
Top Layer Module: BlendMask adds a convolutional layer on top of each detection pyramid level for predicting the top-level attention A. The shape of A is
N × (K·M·M) × H_l × W_l,
where H_l × W_l is the resolution of pyramid level l and M × M is the attention resolution, i.e., the weight of each pixel of the corresponding base.
Blend Module: The inputs of this module are the bases (B) generated by the bottom module, the top-level attention (A) generated by the top layer, and the proposal boxes (P) generated by the detector. The module first crops the region corresponding to each proposal p_d on B with the help of the RoIPooler from Mask R-CNN, resizing it to a feature map r_d of fixed size R × R; the RoIPooler can be regarded as a pooling operation that reduces the spatial dimensions of the features.
r_d = RoIPool_(R×R)(B, p_d), d ∈ {1, …, D},
The top layer module then performs a post-processing operation, drawing on the post-processing method in FCOS, by selecting the first D detection boxes and the corresponding A, and resizing A to K × M × M. Since the attention size M is smaller than R, interpolating A from M × M to R × R yields a tensor of dimension K × R × R, denoted a′_d.
a′_d = interpolate_(M×M → R×R)(a_d), d ∈ {1, …, D},
Afterwards, a′_d is normalised along the K dimension using the softmax activation function to obtain a series of score maps, denoted s_d.
s_d = softmax(a′_d), d ∈ {1, …, D},
Both r_d and s_d have dimension K × R × R. The two are multiplied element-wise and summed over the channels to obtain the final mask, denoted m_d, where k is the index of the base.
m_d = ∑_{k=1}^{K} s_d^k · r_d^k, d ∈ {1, …, D},
A set of RGB remote sensing images, each 512 × 512 pixels, is first fed into the network for feature extraction to generate the corresponding feature maps. In the bottom module, the low-level features of the image are processed to form score maps (bases), while the top module is attached to the box head to generate the corresponding top-level attention (A). Finally, the bases are fused with A through the blender module to output the prediction map (Figure 4).
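The blend step for a single proposal can be sketched in NumPy as below; nearest-neighbour upsampling stands in for the interpolation used by the actual model, and all names and shapes are illustrative.

```python
import numpy as np

def blend(bases, attn):
    """Blend cropped bases with top-level attention for one proposal.

    bases: (K, R, R) feature crops from the bottom module (after the RoIPooler)
    attn:  (K, M, M) attention map from the top layer, with M < R
    Returns the (R, R) instance mask m_d = sum_k softmax_k(a'_d) * r_d.
    """
    K, R, _ = bases.shape
    M = attn.shape[-1]
    f = R // M  # nearest-neighbour upsampling factor (assumes M divides R)
    attn_up = attn.repeat(f, axis=1).repeat(f, axis=2)        # (K, R, R)
    e = np.exp(attn_up - attn_up.max(axis=0, keepdims=True))  # stable softmax
    scores = e / e.sum(axis=0, keepdims=True)                 # normalise over K
    return (scores * bases).sum(axis=0)                       # (R, R) mask
```

With uniform (all-zero) attention the softmax weights each base equally, so the output reduces to the mean of the bases, which is a useful sanity check.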

2.6. Model Training

The network training parameters are crucial for the recognition accuracy and prediction performance of the model; BlendMask is trained using the Stochastic Gradient Descent (SGD) algorithm. In this study, the batch size is set to 4, the initial learning rate to 0.003, and the training process spans 1000 epochs with 200 iterations per epoch.
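BlendMask reference implementations typically build on detectron2-style configs; the following is only a sketch of how the stated hyperparameters might be expressed (the key names are assumptions, not taken from the paper):

```python
# Hypothetical detectron2-style solver settings mirroring the stated values.
training_config = {
    "SOLVER.OPTIMIZER": "SGD",      # Stochastic Gradient Descent
    "SOLVER.IMS_PER_BATCH": 4,      # batch size
    "SOLVER.BASE_LR": 0.003,       # initial learning rate
    "SOLVER.MAX_ITER": 1000 * 200,  # 1000 epochs x 200 iterations per epoch
}
```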
The workstation hardware configuration is Intel Xeon Processor E5-2640 v3 (the manufacturer is INTEL, located in Santa Clara, CA, USA), Teclast A850 2TB Solid State Drive (SSD) (the manufacturer is SAMSUNG, located in Suwon, Republic of Korea) and NVIDIA Quadro RTX 6000 with 24 GB GDDR6 (the manufacturer is NVIDIA, located in Santa Clara, CA, USA); the operating system is 64-bit Windows 10 Professional.

2.7. Precision Evaluation

To ensure the accuracy of crown detection and tree species identification, we use Precision (P), Recall (R), F1-score and other related metrics to evaluate the model accuracy and performance from different perspectives. The higher P means that the model makes fewer errors in predicting positive classes. The higher R means that the model misses fewer positive-class samples. In practice, there is often a trade-off between recall and precision, and the F1-score is used to strike a balance between the two.
P = TP / (TP + FP) × 100%,
R = TP / (TP + FN) × 100%,
F1-score = 2 × P × R / (P + R) × 100%,
Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100%,
where TP is the number of trees correctly identified by the model, TN is the number of other tree species correctly identified as not being of that species, FP is the number of trees incorrectly identified by the model and FN is the number of trees not identified by the model [44].
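These definitions translate directly into code; a minimal sketch (the function name is illustrative), with P and R expressed as percentages so the F1-score comes out on the same scale:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute Precision, Recall, F1-score and Accuracy as percentages."""
    precision = tp / (tp + fp) * 100
    recall = tp / (tp + fn) * 100
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P, R
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    return precision, recall, f1, accuracy
```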

2.8. Biomass Assessment

There is a direct relationship between biomass and carbon assessment. In order to carry out the carbon assessment of single tree species, we carried out the biomass assessment with the help of classical models.
ln(B) = a + b·ln(D),
In this model, B denotes biomass, D denotes diameter at breast height (DBH), and a and b are model parameters that together determine the relationship between biomass and diameter at breast height.
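Fitting this allometric model reduces to ordinary least squares on log-transformed data; a sketch with NumPy (function names are illustrative, and any coefficient values shown are hypothetical, not the study's fitted parameters):

```python
import numpy as np

def fit_biomass_model(dbh, biomass):
    """Fit ln(B) = a + b*ln(D) by least squares on log-transformed data."""
    ln_d = np.log(dbh)
    ln_b = np.log(biomass)
    b, a = np.polyfit(ln_d, ln_b, 1)  # degree-1 fit returns (slope b, intercept a)
    return a, b

def predict_biomass(dbh, a, b):
    """Back-transform the log-linear model to biomass units."""
    return np.exp(a + b * np.log(dbh))
```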

3. Results

3.1. Model Training Losses

We trained the model using the annotated data of block A and obtained a series of loss curves: in order, the offset loss between the bounding-box and ground-truth-box centres, the class loss, the model-segmentation loss, the bounding-box-position offset loss, the mask-generation loss, and the total loss, where the total loss is the sum of the previous five (Figure 5). Before 25,000 iterations, each loss curve decreases and the trends remain consistent. By 50,000 iterations, the FCOS centre loss fluctuates within a small range while the remaining curves have stabilised. By 100,000 iterations, all losses have converged and reached a stable state, indicating that the trained model has converged.

3.2. Validation Accuracy

To validate the model's accuracy, we use the satellite images of block B for crown detection and tree species classification. We use remote sensing images of 512 × 512 pixels for tree crown detection and assign differently coloured masks to the crowns of different tree species (Figure 6). For better visual presentation, we extracted the colour masks and superimposed them on the corresponding original images to judge the detection effect intuitively. Since mask colours change when superimposed on the original images, the mask colours in b1–b5 and c1–c5 deviate slightly from the originals. In addition, we calculate the centre of gravity of each tree based on the model-detected closed crown masks, assign to it the colour of the corresponding tree species, and superimpose it on the original image; this processing also facilitates the subsequent classification and counting of tree species.
We compare the model detection results with the manual visual interpretation results and carry out the result statistics and accuracy calculation. The precision of all six tree species is above 90% (Table 2), which indicates that the model has high accuracy, and at the same time, it also indicates that the model is stable in crown detection of different tree species. The F1-score of five tree species is approximately 80%, which indicates that the model performs well in the classification of these species. In addition, the accuracy for all tree species is above 90%, which indicates that the overall performance of the model in identifying tree species is quite accurate.

3.3. Model Detection in the Study Area

We used the trained model for crown detection over the whole study area, and different coloured masks were assigned based on tree species categories. Then we located the centre of gravity of a single tree based on the coloured mask (Figure 7).
Our calculations, based on the scale of the maps, give the study area a size of approximately 146 hectares. We carried out crown-mask extraction, centre-of-gravity positioning, crown-projection-area calculation, and canopy-cover calculation based on the results of the model crown identification. Canopy cover is an indicator used in ecology to measure the extent to which a tree’s canopy covers the ground horizontally. It is usually expressed as a percentage, reflecting the extent of canopy cover on the ground, and is one of the most important parameters for evaluating the structure of forests and the functioning of ecosystem services.
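Canopy cover as defined here is simply the ratio of total crown projection area to ground area; with the study's own figures (318,725 m² of crown over 146 ha) this arithmetic reproduces the reported value of 21.83% (the function name is illustrative):

```python
def canopy_cover(crown_area_m2, study_area_ha):
    """Canopy cover (%) = total crown projection area / ground area."""
    ground_area_m2 = study_area_ha * 10_000  # 1 ha = 10,000 m2
    return crown_area_m2 / ground_area_m2 * 100

cover = canopy_cover(318_725, 146)  # the study's crown area over 146 ha
```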
The model finally detected 27,468 trees, with a total crown projection area of 318,725 m2 and a total canopy cover of 21.83% (Figure 8), and the statistical results of various tree species in the study area are shown in Table 3. Among them, Sophora japonica is the dominant tree species in terms of number and crown projection area. Ginkgo biloba, although more than Sophora japonica in terms of number, has a lower crown projection area and canopy cover. Statistics on the above relevant information can help to provide a more comprehensive understanding of the structure, functions and ecological services of forests, and provide a scientific basis for forest management and conservation.

3.4. Carbon Assessment of Single Tree Species

Understanding and managing carbon stocks is critical for mitigating climate change. We measured the DBH of 164 oil pines in the field and calculated the corresponding biomass of each oil pine with the help of a classical nonlinear regression model [45]. Parameter values and associated precision are shown in Table 4.
Due to the lack of empirical formulas that directly correspond to the relationship between crown projection area (CPA) and biomass, we conducted linear and nonlinear regressions for the CPA and biomass (Figure 9). The R2 of the non-linear regression is higher than the linear regression, and the mean absolute error (MAE) and root mean square error (RMSE) of the non-linear regression are lower than the linear regression. The results are shown in Table 5. Therefore, we used the nonlinear regression model with higher accuracy for biomass prediction of oil pine in the study area.
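The error metrics used to compare the linear and nonlinear fits can be computed as follows (a generic sketch, not the study's code):

```python
import numpy as np

def mae_rmse(y_true, y_pred):
    """Mean absolute error and root mean square error of a fitted model."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    return mae, rmse
```

The candidate model with the lower MAE/RMSE (and higher R2) is preferred, which is how the nonlinear CPA-biomass fit was selected here.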
Based on the results of the model detection, we obtained the average vertical crown projection area of a single oil pine by dividing the total vertical canopy area of oil pines by the number of oil pine plants, and then calculated the average biomass of a single oil pine with the help of the fitted model, which in turn gave us the biomass of all oil pines in the study area. Typically, the carbon stock of a single tree is 50% of its biomass [46]. The results are shown in Table 6.
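The stand-level carbon estimate chains the steps above: mean CPA per tree, mean biomass via the fitted model, total biomass, and a 50% carbon fraction. A sketch with hypothetical coefficients (a and b below are placeholders, not the study's fitted values):

```python
import math

def stand_carbon(total_cpa_m2, n_trees, a, b):
    """Estimate stand carbon (kg) from total crown projection area (CPA).

    Applies a fitted allometric model B = exp(a + b*ln(mean CPA)) per tree;
    carbon stock is taken as 50% of biomass [46].
    """
    mean_cpa = total_cpa_m2 / n_trees                    # average CPA per tree (m2)
    mean_biomass = math.exp(a + b * math.log(mean_cpa))  # average biomass per tree (kg)
    return 0.5 * mean_biomass * n_trees                  # kg C for the whole stand
```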

4. Discussion

4.1. Comparison of BlendMask with U-Net Network

To compare the performance of different models for crown segmentation and tree species classification in satellite imagery, we compared BlendMask [36] against the classical U-net [47]. Note that, for clearer visual contrast, the mask colour assigned to each tree species in U-net differs from that in BlendMask (Figure 10). For densely distributed tree species such as Sophora japonica and Populus nigra, U-net offers good region-level recognition but, compared to BlendMask, struggles to clearly segment individual tree crowns, which poses a great challenge for accurately counting trees. For species such as oil pine, which have distinctive crown features and a scattered distribution, the segmentation quality of U-net is similar to that of BlendMask, although neighbouring crowns still sometimes fail to be separated. In addition, U-net tends to misidentify tree species and cannot correctly identify individual trees, so its species-recognition error rate is higher than that of BlendMask. BlendMask outperforms U-net in terms of precision and accuracy [44], indicating that it is more reliable and has a lower error rate in recognising trees. The overall performances are shown in Table 7.
The results show that BlendMask can achieve better results of crown segmentation and tree species recognition, because its top module is sufficient to support the acquisition of more detailed information of crown instances. BlendMask can better achieve the task of crown segmentation in remote sensing images of forested areas, which can provide convenience for forest-inventory and forest-resource management.

4.2. Effectiveness of Model Detection in the Study Area

We performed crown detection, species identification, single-tree centre-of-gravity localisation, and calculation of crown projection area for nine tree species across the entire study area. The model effectively detected individual tree crowns of various species with a high degree of accuracy, achieving a 90% correct identification rate and producing crown masks with smooth, precise edges closely aligned with the actual tree crowns in the images. In addition, differentiating categories by mask colour served the purpose of tree species identification well.
However, the growth of trees is affected by various conditions such as light, soil, etc., which results in the shape of tree crowns varying widely. The canopies of different tree species are even more complex in distribution and shape, which is one of the main difficulties hindering the improvement of model-detection accuracy. For the whole study area, there are many differences in recognition effects between different tree species. Tree species such as Sophora japonica and Populus nigra, which have lush crowns and close arrangement between individuals, have a higher accuracy of crown recognition than species such as Pinus tabulaeformis, which have smaller crown projection areas and a dispersed arrangement of individuals. For the model, the crown with a large area and regular shape is easier to learn and recognise, while the crown with scattered individuals has variable shapes and requires the model to capture more features. In addition, due to the limited number of tree species and tree crowns contained in Block A, the training samples selected therein could not completely cover the tree species and tree crowns in the entire study area, and thus the existence of individual tree species was not recognised, some tree crowns had low recognition accuracy, and the misclassification of tree species could not be avoided.
In addition, there is a certain difference in recognition quality between tree species with lush crowns and compact arrangements. Take Sophora japonica and Populus nigra as an example: across the whole study area, the crown recognition rate of Sophora japonica is significantly higher than that of Populus nigra, mainly because there are significantly more Sophora japonica samples than Populus nigra samples in training block A. Crown colour and brightness also affect the recognition rate to some extent. In the remote sensing images, the crown of Sophora japonica is a rich green, while the crown of Populus nigra is dark green and less bright; the former has a more distinctive outline and features that are more easily captured in both manual labelling and model detection. The compactness of the distribution of individual trees is another important factor affecting model identification, and its influence is particularly evident between different tree species. Ginkgo biloba, Juniperus chinensis and other species have large spacing between individuals and complex, variable crown edges, and their crown recognition rates are lower than those of Sophora japonica and similar species. Within the same species, however, the effect of crown compactness on the recognition rate may be just the opposite: the recognition rate of Populus nigra at the boundary of the block (cyan mask) is higher than inside the block, because the spacing of individual Populus nigra trees at the boundary is larger than that of individuals inside the block.

5. Conclusions

In this study, BlendMask was successfully applied to high-resolution satellite remote sensing images for crown segmentation and tree species identification in Beijing Jingyue ecoforestry. By combining advanced deep learning and remote sensing technologies, this study demonstrates an efficient and accurate new approach to forest management. Through its attention mechanism, the BlendMask model effectively captures the inter-relationships between different regions of an image. The results show that BlendMask performs well in crown segmentation and species identification, with high accuracy and stability. The training loss decreases markedly with the number of iterations and eventually stabilises, demonstrating the effectiveness and convergence of the model. On the Block B validation set, the model's precision for tree species such as Ginkgo biloba, Sophora japonica, Pinus tabulaeformis, Populus nigra, and Koelreuteria paniculata is above 90%, with F1-scores of roughly 80% and above, confirming its classification performance. Applied to the whole study area, the model efficiently detected single-tree crowns of different species and accurately computed related indices such as crown projection area and canopy cover. In addition, we established a biomass regression model for Pinus tabulaeformis that enabled an accurate and efficient carbon assessment of this species across the study area, providing a low-cost and efficient means of assessing forest carbon in large forest areas and strong technical support for the precise management and protection of forest resources.
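The carbon-assessment step can be sketched numerically. The sketch below assumes the fitted nonlinear model takes the standard allometric log-log form ln(W) = a + b·ln(D) with the Table 4 coefficients (a = −1.659, b = 2.335), and a biomass-to-carbon conversion factor of 0.5 consistent with the Table 6 values; both the exact model form and the 0.5 factor are our assumptions, not a statement of the paper's precise procedure.

```python
import math

# Coefficients of the fitted nonlinear biomass model (Table 4).
# The log-log allometric form ln(W) = a + b * ln(D) is an assumption about
# the model structure; D is taken to be breast-height diameter (cm).
A, B = -1.659, 2.335
CARBON_FRACTION = 0.5  # assumed biomass-to-carbon factor, consistent with Table 6

def single_tree_biomass(dbh_cm: float) -> float:
    """Predicted single-tree biomass (kg) under the assumed allometric form."""
    return math.exp(A + B * math.log(dbh_cm))

def stand_carbon(n_trees: int, mean_biomass_kg: float) -> float:
    """Scale mean per-tree biomass (kg) to a stand-level carbon stock (kg)."""
    return n_trees * mean_biomass_kg * CARBON_FRACTION

# With the Table 6 values (6221 oil pines, 65.97 kg mean biomass), this
# reproduces the study-area carbon stock of roughly 2.05e5 kg.
print(f"{stand_carbon(6221, 65.97):,.2f} kg C")
```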
However, the study also revealed some challenges and limitations. The complexity and diversity of tree crowns, together with the close arrangement of different species, challenge the recognition accuracy of the model, and the limited training samples cause errors in the recognition of certain species. Moreover, although we achieved a low-cost carbon assessment for a single species, biomass estimates differ considerably between parts of a tree, so more accurate carbon-assessment methods and systems deserve further research.
Future work can focus on expanding the diversity of training samples to cover a wider range of tree species and crown conditions, further improving the generalisation ability and recognition accuracy of the model. It can also explore models and methods for carbon assessment of more tree species to achieve more accurate and comprehensive forest carbon assessments, and investigate how multi-source data and advanced image-processing techniques can be combined to enhance the efficiency and accuracy of forest remote sensing detection.
In conclusion, this study underscores the substantial potential of BlendMask in forest remote sensing for crown segmentation and tree species identification, furnishing a novel technological tool for forest resource inventory and conservation. With ongoing technological advancements and optimizations, this methodology is poised for wider application and promotion across diverse domains.

Author Contributions

Conceptualization, L.Z.; methodology, Z.J.; software, Z.J. and J.X.; validation, Z.J. and L.Y.; formal analysis, Z.J.; investigation, J.X., Z.J. and L.Z.; resources, L.Z.; data curation, Z.J., J.M., B.C. and Y.Z.; writing—original draft preparation, Z.J.; writing—review and editing, L.Z.; visualization, Z.J.; supervision, L.Z. and P.W.; project administration, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National College Student Innovation and Entrepreneurship Training Program, grant number 202310022103, and the Beijing Municipal Natural Science Foundation of China, grant number 6232031.

Data Availability Statement

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

We are very grateful to all the students who assisted with data collection and the experiments. We also thank the anonymous reviewers for their helpful comments and suggestions on this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Costa, G.; Silva, C.; Mendes, K.R.; Bezerra, B.; Rodrigues, T.R.; Silva, J.B.; Dalmagro, H.J.; Nunes, H.; Gomes, A.; Silva, G.; et al. The Relevance of Maintaining Standing Forests for Global Climate Balance: A Case Study in Brazilian Forests. In Tropical Forests—Ecology, Diversity and Conservation Status; IntechOpen: London, UK, 2023; ISBN 978-1-83768-575-2. [Google Scholar]
  2. Lindner, M.; Maroschek, M.; Netherer, S.; Kremer, A.; Barbati, A.; Garcia-Gonzalo, J.; Seidl, R.; Delzon, S.; Corona, P.; Kolström, M.; et al. Climate Change Impacts, Adaptive Capacity, and Vulnerability of European Forest Ecosystems. For. Ecol. Manag. 2010, 259, 698–709. [Google Scholar] [CrossRef]
  3. Findlay, A. Climate Mitigation through Indigenous Forest Management. Nat. Clim. Chang. 2021, 11, 371–373. [Google Scholar] [CrossRef]
  4. Li, L.; Lin, J.; Zhou, J. The Brief Analysis of Problems and Countermeasures for Forest Resources Management. World J. For. 2021, 10, 60. [Google Scholar] [CrossRef]
  5. Terryn, L.; Calders, K.; Bartholomeus, H.; Bartolo, R.E.; Brede, B.; D’hont, B.; Disney, M.; Herold, M.; Lau, A.; Shenkin, A.; et al. Quantifying Tropical Forest Structure through Terrestrial and UAV Laser Scanning Fusion in Australian Rainforests. Remote Sens. Environ. 2022, 271, 112912. [Google Scholar] [CrossRef]
  6. Dainelli, R.; Toscano, P.; Di Gennaro, S.F.; Matese, A. Recent Advances in Unmanned Aerial Vehicle Forest Remote Sensing—A Systematic Review. Part I: A General Framework. Forests 2021, 12, 327. [Google Scholar] [CrossRef]
  7. Harikumar, A.; Bovolo, F.; Bruzzone, L. A Local Projection-Based Approach to Individual Tree Detection and 3-D Crown Delineation in Multistoried Coniferous Forests Using High-Density Airborne LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1168–1182. [Google Scholar] [CrossRef]
  8. Sothe, C.; La Rosa, L.E.C.; De Almeida, C.M.; Gonsamo, A.; Schimalski, M.B.; Castro, J.D.B.; Feitosa, R.Q.; Dalponte, M.; Lima, C.L.; Liesenberg, V.; et al. Evaluating a Convolutional Neural Network for Feature Extraction and Tree Species Classification Using UAV-Hyperspectral Images. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2020. [Google Scholar] [CrossRef]
  9. Briechle, S.; Krzystek, P.; Vosselman, G. Silvi-Net—A Dual-CNN Approach for Combined Classification of Tree Species and Standing Dead Trees from Remote Sensing Data. Int. J. Appl. Earth Obs. Geoinf. 2021, 98, 102292. [Google Scholar] [CrossRef]
  10. Liu, L.; Pang, Y.; Sang, G.; Li, Z.; Hu, B. Applying High Resolution Remote Sensing to Assess Tree Species Diversity of Monsoonal Broad-Leaved Evergreen Forest in Pu’er City. Acta Ecol. Sin. 2022, 42, 8398–8413. [Google Scholar] [CrossRef]
  11. Hovi, A.; Schraik, D.; Kuusinen, N.; Fabiánek, T.; Hanuš, J.; Homolová, L.; Juola, J.; Lukeš, P.; Rautiainen, M. Synergistic Use of Multi- and Hyperspectral Remote Sensing Data and Airborne LiDAR to Retrieve Forest Floor Reflectance. Remote Sens. Environ. 2023, 293, 113610. [Google Scholar] [CrossRef]
  12. Denisova, A.; Kavelenova, L.; Korchikov, E.; Prokhorova, N.; Terentieva, D.; Fedoseev, V. Tree Species Classification in Samara Region Using Sentinel-2 Remote Sensing Images and Forest Inventory Data. Sovrem. Probl. Distantsionnogo Zondirovaniya Zemli Iz Kosmosa 2019, 16, 86–101. [Google Scholar] [CrossRef]
  13. Hastings, J.; Sullivan, F.; Ollinger, S.V.; Ouimette, A.; Palace, M.W. Using Aircraft Remote Sensing to Map Tree Species Distribution at Harvard Forest, Massachusetts, USA; American Geophysical Union: Washington, DC, USA, 2018; Volume 2018, p. B33K-2823. [Google Scholar]
  14. Junttila, V.; Kauranne, T. Distribution Statistics Preserving Post-Processing Method With Plot Level Uncertainty Analysis for Remotely Sensed Data-Based Forest Inventory Predictions. Remote Sens. 2018, 10, 1677. [Google Scholar] [CrossRef]
  15. Hartling, S.; Sagan, V.; Maimaitijiang, M. Urban Tree Species Classification Using UAV-Based Multi-Sensor Data Fusion and Machine Learning. GIScience Remote Sens. 2021, 58, 1250–1275. [Google Scholar] [CrossRef]
  16. Song, G.; Wang, Q. Species Classification from Hyperspectral Leaf Information Using Machine Learning Approaches. Ecol. Inform. 2023, 76, 102141. [Google Scholar] [CrossRef]
  17. Jain, H. Tree Species Classification Based on Machine Learning Techniques: Mapping Chir Pine in Indian Western Himalayas. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XXIV; Neale, C.M., Maltese, A., Eds.; SPIE: Berlin, Germany, 2022; p. 21. [Google Scholar]
  18. Wang, N.; Wang, G. Tree Species Classification Using Machine Learning Algorithms with OHS-2 Hyperspectral Image. Sci. For. 2023, 51, e3991. [Google Scholar] [CrossRef]
  19. Cetin, Z.; Yastikli, N. The Use of Machine Learning Algorithms in Urban Tree Species Classification. ISPRS Int. J. Geo-Inf. 2022, 11, 226. [Google Scholar] [CrossRef]
  20. Sumsion, G.R.; Bradshaw, M.S.; Hill, K.T.; Pinto, L.D.G.; Piccolo, S.R. Remote Sensing Tree Classification with a Multilayer Perceptron. PeerJ 2019, 7, e6101. [Google Scholar] [CrossRef]
  21. Hologa, R.; Scheffczyk, K.; Dreiser, C.; Gärtner, S. Tree Species Classification in a Temperate Mixed Mountain Forest Landscape Using Random Forest and Multiple Datasets. Remote Sens. 2021, 13, 4657. [Google Scholar] [CrossRef]
  22. Li, Q.; Hu, B.; Shang, J.; Li, H. Fusion Approaches to Individual Tree Species Classification Using Multisource Remote Sensing Data. Forests 2023, 14, 1392. [Google Scholar] [CrossRef]
  23. Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309. [Google Scholar] [CrossRef]
  24. Jing, L.; Hu, B.; Noland, T.; Li, J. An Individual Tree Crown Delineation Method Based on Multi-Scale Segmentation of Imagery. ISPRS J. Photogramm. Remote Sens. 2012, 70, 88–98. [Google Scholar] [CrossRef]
  25. Guo, X.; Li, H.; Jing, L.; Wang, P. Individual Tree Species Classification Based on Convolutional Neural Networks and Multitemporal High-Resolution Remote Sensing Images. Sensors 2022, 22, 3157. [Google Scholar] [CrossRef] [PubMed]
  26. Zhao, H.; Morgenroth, J.; Pearse, G.; Schindler, J. A Systematic Review of Individual Tree Crown Detection and Delineation with Convolutional Neural Networks (CNN). Curr. For. Rep. 2023, 9, 149–170. [Google Scholar] [CrossRef]
  27. Wang, X.; Wang, J.; Lian, Z.; Yang, N. Semi-Supervised Tree Species Classification for Multi-Source Remote Sensing Images Based on a Graph Convolutional Neural Network. Forests 2023, 14, 1211. [Google Scholar] [CrossRef]
  28. Li, S.; Brandt, M.; Fensholt, R.; Kariryaa, A.; Igel, C.; Gieseke, F.; Nord-Larsen, T.; Oehmcke, S.; Carlsen, A.H.; Junttila, S.; et al. Deep Learning Enables Image-Based Tree Counting, Crown Segmentation, and Height Prediction at National Scale. PNAS Nexus 2023, 2, pgad076. [Google Scholar] [CrossRef] [PubMed]
  29. Wu, J.; Yang, G.; Yang, H.; Zhu, Y.; Li, Z.; Lei, L.; Zhao, C. Extracting Apple Tree Crown Information from Remote Imagery Using Deep Learning. Comput. Electron. Agric. 2020, 174, 105504. [Google Scholar] [CrossRef]
  30. Chakraborty, D.; Deka, B. UAV Sensing-Based Semantic Image Segmentation of Litchi Tree Crown Using Deep Learning. In Proceedings of the 2023 IEEE Applied Sensing Conference (APSCON), Bengaluru, India, 23–25 January 2023; pp. 1–3. [Google Scholar]
  31. Lassalle, G.; Ferreira, M.P.; La Rosa, L.E.C.; De Souza Filho, C.R. Deep Learning-Based Individual Tree Crown Delineation in Mangrove Forests Using Very-High-Resolution Satellite Imagery. ISPRS J. Photogramm. Remote Sens. 2022, 189, 220–235. [Google Scholar] [CrossRef]
  32. Nurhabib, I.; Seminar, K.B. Recognition and Counting of Oil Palm Tree with Deep Learning Using Satellite Image. IOP Conf. Ser. Earth Environ. Sci. 2022, 974, 012058. [Google Scholar] [CrossRef]
  33. Palacio, J.C.R.; Bunn, C.; Rahn, E.; Little-Savage, D.; Schmidt, P.G.; Ryo, M. Geographic-Scale Coffee Cherry Counting with Smartphones and Deep Learning. Plant Phenomics 2024, 6, 0165. [Google Scholar] [CrossRef] [PubMed]
  34. Roslan, Z.; Long, Z.A.; Ismail, R. Individual Tree Crown Detection Using GAN and RetinaNet on Tropical Forest. In Proceedings of the 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM), Seoul, Republic of Korea, 4–6 January 2021; pp. 1–7. [Google Scholar] [CrossRef]
  35. Huang, Y.; Wen, X.; Gao, Y.; Zhang, Y.; Lin, G. Tree Species Classification in UAV Remote Sensing Images Based on Super-Resolution Reconstruction and Deep Learning. Remote Sens. 2023, 15, 2942. [Google Scholar] [CrossRef]
  36. Chen, H.; Sun, K.; Tian, Z.; Shen, C.; Huang, Y.; Yan, Y. BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8570–8578. [Google Scholar]
  37. Geng, P.; Jia, M.; Ren, X. Tunnel Lining Water Leakage Image Segmentation Based on Improved BlendMask. Struct. Health Monit. 2023, 22, 865–878. [Google Scholar] [CrossRef]
  38. Gao, Y.; Wang, H.; Li, M.; Su, W.-H. Automatic Tandem Dual BlendMask Networks for Severity Assessment of Wheat Fusarium Head Blight. Agriculture 2022, 12, 1493. [Google Scholar] [CrossRef]
  39. Longzhe, Q.; Wu, B.; Mao, S.R.; Feng, H.; Yang, C.; Wei, J.; Li, H. Instance Segmentation Based Method to Obtain the Phenotypic Information of Weeds in Complex Field Environments. Res. Sq. 2020. [Google Scholar] [CrossRef]
  40. Yang, L.; Chen, Y.; Shen, T.; Yu, H.; Li, D. A BlendMask-VoVNetV2 Method for Quantifying Fish School Feeding Behavior in Industrial Aquaculture. Comput. Electron. Agric. 2023, 211, 108005. [Google Scholar] [CrossRef]
  41. Wang, J.; Zhang, Z.; Wu, M.; Ye, Y.; Wang, S.; Cao, Y.; Yang, H. Improved BlendMask: Nuclei Instance Segmentation for Medical Microscopy Images. IET Image Process. 2023, 17, 2284–2296. [Google Scholar] [CrossRef]
  42. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y.; et al. Segment Anything. arXiv 2023, arXiv:2304.02643. [Google Scholar]
  43. Moghimi, A.; Welzel, M.; Celik, T.; Schlurmann, T. A Comparative Performance Analysis of Popular Deep Learning Models and Segment Anything Model (SAM) for River Water Segmentation in Close-Range Remote Sensing Imagery. IEEE Access 2024, 12, 52067–52085. [Google Scholar] [CrossRef]
  44. Jiang, Z. A Comparative Evaluation of Machine Learning Algorithms for Network Anomaly Detection. Appl. Comput. Eng. 2023, 19, 234–240. [Google Scholar] [CrossRef]
  45. Cheng, X.; Han, H.; Kang, F.; Song, Y.; Liu, K. Variation in Biomass and Carbon Storage by Stand Age in Pine (Pinus tabulaeformis) Planted Ecosystem in Mt. Taiyue, Shanxi, China. J. Plant Interact. 2013, 9, 521–528. [Google Scholar] [CrossRef]
  46. Ogbemudia, F.; Ita, R.E.; Anwana, E. Tree-Based Carbon Sequestration and Storage Abilities Vary in Natural and Plantation Forest Ecosystems. World J. Appl. Sci. Technol. 2023, 15, 19–25. [Google Scholar] [CrossRef]
  47. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer: Cham, Switzerland, 2015. [Google Scholar]
Figure 1. General information on the study area. (Left) Geographical location of the study area. (Right) The remote sensing image of the study plots.
Figure 2. Example of original images (top) and labelled images (bottom). From left to right, Pinus tabulaeformis (yellow edge), Sophora japonica (blue edge), Unknown 1 (pink edge).
Figure 3. Overall workflow.
Figure 4. BlendMask overall workflow.
Figure 5. Loss curves during BlendMask model training. (a) Bounding-box position-offset loss. (b) Classification loss. (c) Overall segmentation loss. (d) Centre-offset loss between predicted bounding box and ground-truth box. (e) Mask-generation loss. (f) Overall loss.
Figure 6. (a1–a5) 512 × 512 images of part of Block B. (b1–b5) Model-detected crown colour-mask images. (c1–c5) Block B images with superimposed crown colour masks. (d1–d5) Block B images with superimposed crown centre-of-gravity localisation.
Figure 7. (Top) Crown colour masks for the study area. (Middle) Crown colour mask with magnification of red rectangular area. (Bottom) Positioning of the centre of gravity of trees with magnification of red rectangular area.
Figure 8. (a) Double bar chart of the number of each tree species versus the crown projection area; (b) canopy cover of tree species in the study area.
Figure 9. (a) Prediction of linear regression; (b) prediction of nonlinear regression.
Figure 10. Comparison of U-net and BlendMask crown segmentation. Partial crown-segmentation errors are marked with solid red circles, and species-classification errors are marked with dashed pink circles.
Table 1. The number of trees labelled in all image tiles of training set for each species.
| Scientific Name | Common Name | Abbreviation | Training Instance Count |
|---|---|---|---|
| Ginkgo biloba | Ginkgo | Gb | 2455 |
| Sophora japonica | Locust tree | Sj | 3421 |
| Pinus tabulaeformis | Oil pine | Pt | 2847 |
| Populus nigra | Black poplar | Pn | 1425 |
| Juniperus chinensis | Chinese juniper | Jc | 420 |
| Koelreuteria paniculata | Golden rain tree | Kp | 75 |
| Unknown 1 | \ | U1 | 1454 |
| Unknown 2 | \ | U2 | 1265 |
| Unknown 3 | \ | U3 | 154 |
Table 2. Accuracy evaluation of the number prediction in different tree species.
| Tree Species | Mask Colour | Predict Quantity | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|---|---|
| Ginkgo biloba | Red | 463 | 94.3% | 72.1% | 81.6% | 93.3% |
| Sophora japonica | Green | 905 | 90.7% | 84.1% | 87.4% | 94.8% |
| Pinus tabulaeformis | Blue | 746 | 94.0% | 78.8% | 85.7% | 92.2% |
| Populus nigra | Cyan | 661 | 97.3% | 67.3% | 79.6% | 88.6% |
| Koelreuteria paniculata | Purple | 201 | 93.3% | 80.1% | 86.3% | 98.5% |
| Juniperus chinensis | Orange | 197 | 98.5% | 61.7% | 75.8% | 95.7% |
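As a sanity check on Table 2, the F1-scores can be reproduced from the reported precision and recall with the standard harmonic-mean formula; small (about ±0.1 percentage point) discrepancies arise because the reported precision and recall are themselves rounded. A minimal sketch:

```python
def f1_score(precision_pct: float, recall_pct: float) -> float:
    """Harmonic mean of precision and recall (inputs and output in percent)."""
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct)

# Precision/recall pairs as reported in Table 2 (percent).
table2 = {
    "Ginkgo biloba": (94.3, 72.1),
    "Sophora japonica": (90.7, 84.1),
    "Pinus tabulaeformis": (94.0, 78.8),
    "Populus nigra": (97.3, 67.3),
    "Koelreuteria paniculata": (93.3, 80.1),
    "Juniperus chinensis": (98.5, 61.7),
}
for species, (p, r) in table2.items():
    print(f"{species}: F1 = {f1_score(p, r):.1f}%")
```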
Table 3. Detection statistics for various tree species in the study area.
| Tree Species | Mask Colour | Tree Count (stems) | Crown Projection Area (m2) | Canopy Cover |
|---|---|---|---|---|
| Ginkgo biloba | Red | 6952 | 24,382 | 1.67% |
| Sophora japonica | Green | 6323 | 146,762 | 10.05% |
| Pinus tabulaeformis | Blue | 6221 | 53,947 | 3.70% |
| Populus nigra | Cyan | 4226 | 40,880 | 2.80% |
| Koelreuteria paniculata | Purple | 672 | 7781 | 0.53% |
| Juniperus chinensis | Orange | 405 | 2014 | 0.14% |
| Unknown 1 | Pink | 1790 | 26,708 | 1.83% |
| Unknown 2 | Yellow | 777 | 15,359 | 1.05% |
| Unknown 3 | Brown | 39 | 892 | 0.06% |
| Total | \ | 27,403 | 318,725 | 21.83% |
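The canopy-cover column in Table 3 is each species' crown projection area divided by the 146 ha study area. A minimal sketch reproducing two entries:

```python
# Study area from the paper: 146 ha = 1,460,000 m2.
STUDY_AREA_M2 = 146 * 10_000

def canopy_cover_pct(crown_projection_m2: float) -> float:
    """Canopy cover (%) = crown projection area / total study area."""
    return 100.0 * crown_projection_m2 / STUDY_AREA_M2

# Reproduce two Table 3 entries.
print(f"Sophora japonica: {canopy_cover_pct(146_762):.2f}%")  # 10.05%
print(f"All species:      {canopy_cover_pct(318_725):.2f}%")  # 21.83%
```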
Table 4. Evaluation of parameter values and accuracy of biomass-breast-diameter nonlinear regression model.
| a | b | Standard Error of the Estimate (SEE) | Coefficient of Determination (R²) |
|---|---|---|---|
| −1.659 | 2.335 | 0.118 | 0.983 |
Table 5. Evaluation indicators of different methods.
| Method | Mean Absolute Error (MAE) | Root Mean Square Error (RMSE) | Coefficient of Determination (R²) |
|---|---|---|---|
| Linear regression | 9.581 | 12.199 | 0.964 |
| Nonlinear regression | 5.574 | 8.026 | 0.984 |
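The metrics in Table 5 follow their standard definitions. The sketch below shows how MAE, RMSE, and R² would be computed for any predicted-versus-measured biomass series; the toy numbers are purely illustrative and are not the study's data:

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SSE/SST."""
    mean = sum(y_true) / len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sst = sum((t - mean) ** 2 for t in y_true)
    return 1 - sse / sst

# Illustrative measured vs predicted biomass values (kg), not the study's data.
measured = [52.0, 61.5, 70.3, 84.9]
predicted = [50.1, 63.0, 68.8, 86.2]
print(mae(measured, predicted), rmse(measured, predicted), r2(measured, predicted))
```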
Table 6. Carbon assessment of oil pines in the study area.
| Tree Species | Common Name | Average Single-Tree Crown Projection (m2) | Biomass of a Single Plant (kg) | Total Biomass (kg) | Carbon Assessment (kg) |
|---|---|---|---|---|---|
| Pinus tabulaeformis | Oil pine | 8.67 | 65.97 | 410,399.37 | 205,199.69 |
Table 7. Comparison of overall performances of U-net and BlendMask.
| Network | Precision | F1-Score | Accuracy |
|---|---|---|---|
| U-net | 84.92% | 74.10% | 86.04% |
| BlendMask | 93.83% | 81.84% | 92.68% |