Article

Image to Image Deep Learning for Enhanced Vegetation Height Modeling in Texas

Department of Ecology and Conservation Biology, Texas A&M University, College Station, TX 77843, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(22), 5391; https://doi.org/10.3390/rs15225391
Submission received: 13 August 2023 / Revised: 16 September 2023 / Accepted: 15 November 2023 / Published: 17 November 2023
(This article belongs to the Section Biogeosciences Remote Sensing)

Abstract

Vegetation canopy height mapping is vital for forest monitoring. However, the high cost and inefficiency of manual tree measurements, coupled with the irregular and limited local-scale acquisition of airborne lidar data, continue to impede its widespread application. The increasing availability of high spatial resolution imagery is creating opportunities to characterize forest attributes at finer resolutions over large regions. In this study, we investigate the synergy of airborne lidar and high spatial resolution USDA-NAIP imagery for detailed canopy height mapping using an image-to-image deep learning approach. Our main inputs were 1 m NAIP image patches, which served as predictor layers, and corresponding 1 m canopy height models derived from airborne lidar data, which served as output layers. We adapted a U-Net model architecture for canopy height regression, training and validating the models with 10,000 256-by-256 pixel image patches. We evaluated three settings for the U-Net encoder depth and used both 1 m and 2 m datasets to assess their impact on model performance. Canopy height predictions from the fitted models were highly correlated (R2 = 0.70 to 0.89), precise (MAE = 1.37–2.21 m), and virtually unbiased (Bias = −0.20 to 0.07 m) with respect to the validation data. The trained models also performed adequately well on the independent test data (R2 = 0.62–0.78, MAE = 3.06–4.10 m). Models with higher encoder depths (3, 4) trained on 2 m data provided better predictions than models with an encoder depth of 2 trained on 1 m data. Inter-comparisons with existing canopy height products also showed that our canopy height map agreed better with reference airborne lidar canopy height estimates. This study shows the potential of developing regional canopy height products using airborne lidar and NAIP imagery to support forest productivity and carbon modeling at spatially detailed scales. The 30 m canopy height map generated over Texas holds promise for advancing economic and sustainable forest management goals and enhancing decision-making in natural resource management across the state.


1. Introduction

The state of Texas is endowed with a rich diversity of vegetation ecosystems, ranging from the thick Piney woods in the east to the Juniper–oak woodlands in the central region and the sprawling desert grasslands and shrubs in the west. These ecosystems play a vital role in supporting a myriad of plant and animal species, contributing to the overall ecological health and biodiversity of the region [1,2]. Forest canopy height measurements play a crucial role in understanding and managing ecosystems effectively, providing valuable insights into the vertical structure of forests and enabling assessment of forest health and productivity [3,4]. In parts of Texas, such as the Edwards Plateau, woody plant encroachment is causing fundamental ecological shifts. This phenomenon, whereby herbaceous-dominated landscapes are converted to landscapes more similar to forests and dense shrublands, is having a significant impact on the region’s ecosystems [5]. Accurate canopy height estimates together with comprehensive species mapping are critical to support a better understanding of the role such shifts are playing in the ecological processes in the region. However, collecting forest canopy height measurements over extensive regions such as Texas remains a challenging task due to huge labor costs and inefficiencies in data collection methods. Despite these challenges, the growing availability of airborne lidar and regularly collected high-resolution National Agriculture Imagery Program (NAIP) [6] imagery presents opportunities for characterizing forests and scrublands in the state at local and regional scales.
Large-scale forest canopy height mapping has typically relied on multi-sensor fusion approaches integrating airborne or terrestrial lidar data with spatially explicit image datasets [3]. Lidar data provides highly accurate and detailed vertical structure measurements which enable reliable estimates of canopy height but is often spatially restricted and infrequently collected. The modeling synergy with spatially explicit datasets extends mapping to areas and times not covered by airborne lidar. Previous studies have applied various methods ranging from simpler multiple regression models to more complex machine learning methods [3,7,8,9]. Due to the high intercorrelation of ancillary image variables, there has been a shift toward applying non-parametric machine learning methods such as decision trees [3,4] and neural networks [10] to limit the onerous statistical distributional assumptions of traditional regression methods. However, there are still challenges with these approaches owing to limited model generalization and reduced precision due to spectral saturation at higher canopy heights [11,12]. Here, data-driven deep learning approaches could offer a promising solution. Deep learning algorithms, with their ability to automatically learn patterns and extract features from large datasets, have already shown great potential in accurately estimating forest canopy height [13]. By training deep learning models on extensive lidar and optical imagery datasets, these algorithms can learn to recognize complex relationships between the data inputs and the corresponding canopy height values. The application of deep learning in forest canopy height estimation thus holds significant promise for improving accuracy, efficiency, and scalability, overcoming the limitations of traditional methods, and enabling more effective forest management and conservation [11].
Deep learning models have demonstrated remarkable effectiveness in various land cover classification tasks, leveraging their ability to learn complex spatial patterns from remote sensing data [14,15]. Convolutional neural networks (CNNs) in particular, have achieved high accuracies in identifying and categorizing land cover types from drone, aerial, and satellite imagery [16,17]. In a vast number of applications, land cover characterization is cast as a semantic segmentation problem where the goal is to assign pixel-level labels enabling the identification and delineation of specific land cover within the scene. Examples of deep neural models applied in previous studies for similar tasks include U-Net [18], a popular modeling architecture used for semantic segmentation, segmentation models based on the VGG-16 model [17,19], and other more recent segmentation models such as DeepLab [20]. By adapting deep neural models for semantic segmentation to regression problems, researchers can effectively map and estimate forest canopy height at a fine-grained spatial resolution.
While deep learning models like U-Net are increasingly used in remote sensing applications, little research has been conducted on the influence of model architectural parameters on their effective utilization. Although hyperparameter tuning is computationally demanding in deep learning, it is crucial to understand the influence of critical architectural parameters and identify configurations that strike a balance between modeling accuracy and training time. The U-Net model consists of encoder and decoder blocks, and the depth of these blocks (encoder depth) can significantly affect the model size, training duration, and performance [21]. Additionally, data-specific factors such as grid resolution can impact modeling performance [22]. When modeling canopy height, the usual approach is to regress image variables against target heights derived from airborne lidar. However, in an image-to-image regression framework, the grid resolution can play a vital role as it influences the local spatial variability relied upon by convolutional neural networks (CNNs). Therefore, it is essential to investigate the influence of these modeling parameters to gain a better understanding of their impact on U-Net model performance, optimize its configuration, and develop a reliable and efficient solution for accurate canopy height mapping. Thus, this study had three main objectives: (1) to assess the impact of encoder depth and input grid resolution on U-Net model performance for forest canopy height prediction, providing insights into the optimal depth configuration for maximizing prediction accuracy; (2) to compare the generated forest canopy height maps with existing canopy height products; and (3) to map forest canopy height across Texas at a 30 m pixel scale using the developed U-Net model.

2. Materials and Methods

2.1. Study Area

2.1.1. Location, Species, Physiography

This study was conducted in the State of Texas, United States (latitude 31.78°N, longitude 98.90°W, Figure 1), which has a varied physical geography with different landforms and climatic zones. Texas is characterized by diverse vegetation biomes [2], leading to a wide range of tree species and challenges for canopy height modeling. Our canopy height modeling study focused on five main biomes. These included the temperate conifer forests (TCF) and temperate broadleaf and mixed forests (TBMF), known for having the tallest trees in the state. The Piney woods ecoregion within the TCF biome is recognized for its variety of pine tree species, including Pinus taeda (loblolly pine), Pinus echinata (shortleaf pine), and Pinus elliottii (slash pine), and diverse hardwood species, including the oaks Quercus stellata (post oak), Quercus alba (white oak), and Quercus falcata (southern red oak), as well as Carya texana (black hickory) [23].
We also assessed canopy heights in the temperate grasslands, savannas and shrublands; tropical and subtropical grasslands, savannas and shrublands; and deserts and xeric shrublands biomes. The Edwards Plateau stands out among these sparser biomes as a significant savanna woodland with a mix of tree species such as live oak (Quercus spp.), Ashe juniper (Juniperus ashei), and mesquite (Prosopis glandulosa), along with grasslands [2,5,24]. To the south, the South Texas Plains ecoregion features arid brushlands and thorn scrub forest ecosystems, which support tree species such as mesquite and huisache (Vachellia farnesiana) and various cacti adapted to semi-arid conditions. Along the coastal regions of Texas in the tropical and subtropical grasslands, savannas and shrublands biome lies the Gulf Coast Prairies and Marshes ecoregion. This area consists of extensive coastal prairies and tidal marshes that are vital habitats for waterfowl, shorebirds, and other wetland species [2]. For our canopy height modeling, we selected data from 15 sites encompassing all 5 main vegetation biomes described above.

2.2. Data

2.2.1. National Agriculture Imagery Program (NAIP) Data

The NAIP imagery program collects high-resolution aerial imagery during the agricultural growing seasons across the United States with the primary goal of producing digital orthophotography to support governmental agencies and the public [6]. NAIP imagery is acquired in four spectral bands (red, green, blue, and near-infrared) at a 60 cm to 1 m ground sample distance depending on the state in the US.
To support the modeling of canopy height using deep learning, we collected 1 m NAIP data from the Google Earth Engine platform in 15 selected sites (Figure 1). The selection of the sites was guided by the availability of airborne lidar data and general ecoregion coverage in the study area. We selected 15 sites equally stratified across the 5 major biomes in the state. Each site measured approximately 3 km by 3 km providing adequate coverage for training patch sub-sampling for our deep learning models (see Section 2.3.2).
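As an illustration of this collection step, the sketch below pulls a 1 m four-band NAIP mosaic over a single site with the Earth Engine Python API. The point coordinates and export settings are hypothetical and not the authors' exact workflow; only the USDA/NAIP/DOQQ catalog entry is taken from the paper's data statement.

```python
# Illustrative Earth Engine Python API sketch for pulling a 1 m four-band NAIP
# mosaic over one hypothetical site; coordinates and export settings are assumed.
import ee

ee.Initialize()

# ~3 km x 3 km box around a hypothetical site center
site = ee.Geometry.Point([-98.90, 31.78]).buffer(1500).bounds()

naip = (ee.ImageCollection('USDA/NAIP/DOQQ')
        .filterBounds(site)
        .filterDate('2016-01-01', '2018-12-31')
        .mosaic()
        .select(['R', 'G', 'B', 'N']))      # red, green, blue, near-infrared

task = ee.batch.Export.image.toDrive(
    image=naip, description='naip_site_export',
    region=site, scale=1, maxPixels=1e9)    # 1 m ground sample distance
task.start()
```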

2.2.2. Airborne Lidar Data and Canopy Height Models (CHM)

We collected airborne lidar data in each of the 15 sites to provide reference canopy height data. The airborne lidar data, collected between 2016 and 2018 under the 3D Elevation Program (3DEP), were obtained through the OpenTopography [25] web portal. The pre-classified airborne lidar data had average point densities ranging from 8.69 to 9.44 points per square meter and were georeferenced to a NAD83 (2011) UTM zone (zones 14 and 15N in our case).
All the airborne lidar data were manually processed to remove outliers and non-vegetation surfaces such as buildings and powerlines using the Quick Terrain Modeler® lidar software (v8.0) (Applied Imagery, Chevy Chase, MD, USA). Having cleaned the data, we created aboveground point clouds by subtracting the ground-level elevation from all points and removing any points with negative canopy height values. The aboveground point cloud data were gridded to create canopy height models with a 1 m grid cell size matching the NAIP imagery.
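A minimal sketch of this gridding step, assuming the cleaned, height-normalized points are already loaded as NumPy arrays (the function name and array layout are our own assumptions):

```python
# Grid height-normalized lidar points into a 1 m CHM by taking the maximum
# aboveground height per cell; a sketch, not the authors' exact tooling.
import numpy as np

def make_chm(x, y, z_agl, cell=1.0):
    """x, y: point coordinates (m); z_agl: aboveground heights (m)."""
    keep = z_agl >= 0                          # drop negative-height points
    x, y, z_agl = x[keep], y[keep], z_agl[keep]
    col = ((x - x.min()) // cell).astype(int)
    row = ((y.max() - y) // cell).astype(int)  # row 0 at the northern edge
    chm = np.zeros((row.max() + 1, col.max() + 1), dtype=np.float32)
    np.maximum.at(chm, (row, col), z_agl)      # per-cell maximum height
    return chm
```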
Figure 1. Study area in eastern Texas, USA classified by ecoregion. The ecoregion classification is based on the RESOLVE Ecoregions 2017 map [26]. Topographic base maps courtesy of ESRI ArcGIS®.

2.2.3. Existing Canopy Height Models (CHM) Datasets

For comparative analyses of the canopy height maps generated in this study, we collected 3 existing canopy height datasets, ranging in spatial resolution from 10 m to 30 m:
The 2020 LANDFIRE Forest Canopy Height (CH): LANDFIRE (Landscape Fire and Resource Management Planning Tools Project, http://www.landfire.gov, accessed on 20 October 2022) program is a national multi-partner project that provides comprehensive datasets that describe a variety of landscape-level factors including vegetation, wildland fuel, fire regimes and ecological disturbances across the United States [27]. CH represents the average height of the top of the vegetated canopy in a 30 m pixel and is estimated through fusion of Landsat imagery and airborne lidar in forested areas only.
The 2019 Global Land Analysis and Discovery (GLAD) GEDI Global Forest Canopy Height [7]: Global Forest Canopy Height is generated with a 30 m spatial resolution through fusion of GEDI canopy heights (95th percentile relative height) and multitemporal Landsat analysis-ready data.
The 2020 Global Canopy Height Model [11]: This 10 m spatial resolution height map was generated using a deep convolutional network with Sentinel-2 reflectance data as input and height estimates from the GEDI mission as the reference height data.

2.3. Model Development and Canopy Height Mapping

2.3.1. U-Net Model Architecture

For the canopy height regression task, we fitted deep learning models using a U-Net architecture adapted for regression. The U-Net model is increasingly finding applications in image-to-image regression problems, with promising results [13]. Its architecture consists of an encoder, a decoder, and several skip connections depending on the depth of the model. The encoder captures hierarchical features from the input image patches. It achieves this through successive convolutional and pooling layers, which help extract low-level to high-level features. The decoder, on the other hand, uses transposed convolutional layers to reconstruct high-resolution output maps [18]. To pass contextual information learned in the encoder network to the decoder layers, the U-Net model uses skip connections that link neural network features between the two networks. This architecture enables the U-Net model to effectively learn and represent the relationship between the image patches and the corresponding output values [13,18]. In this study, we evaluated a variety of model setups, varying the encoder depth, to assess modeling performance.
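The paper does not state the deep learning framework used. The sketch below illustrates, in PyTorch, how a U-Net with a configurable encoder depth can be adapted for height regression by replacing a classification head with a single-channel linear output; layer widths and other details are our assumptions.

```python
# Illustrative U-Net regressor with configurable encoder depth (a sketch,
# assuming 4-band 256 x 256 inputs; not the authors' exact implementation).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UNetRegressor(nn.Module):
    def __init__(self, in_ch=4, base=64, depth=3):
        super().__init__()
        self.enc = nn.ModuleList()
        ch = in_ch
        for d in range(depth):                        # encoder: conv + pool
            self.enc.append(conv_block(ch, base * 2 ** d))
            ch = base * 2 ** d
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(ch, ch * 2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for d in reversed(range(depth)):              # decoder: upsample + conv
            cout = base * 2 ** d
            self.up.append(nn.ConvTranspose2d(cout * 2, cout, 2, stride=2))
            self.dec.append(conv_block(cout * 2, cout))
        self.head = nn.Conv2d(base, 1, 1)             # single height channel

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)                           # skip connections
            x = self.pool(x)
        x = self.bottom(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))      # concatenate skip features
        return self.head(x)                           # (N, 1, H, W) height map

model = UNetRegressor(in_ch=4, depth=3)               # 4-band NAIP input
out = model(torch.randn(1, 4, 256, 256))              # -> (1, 1, 256, 256)
```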

2.3.2. Training Data Preparation

We utilized the 1 m NAIP (3546 × 3546 × 4) images and the corresponding airborne lidar CHM (3546 × 3546) data collected in the 15 sites to extract image patches to train and test the U-Net models. Our target input image size for the U-Net model was 256 × 256 pixels. For each site-level dataset (NAIP and CHM), we conducted random sub-sampling to extract multiple 256 × 256 pixel image patches from 10 of the 15 sites, randomly stratified across biomes, for training and validation. The data for the remaining 5 sites were used to extract an independent test set. The data from the 15 sites were also resampled using the nearest neighbor method to generate 2 m grid cell data for testing models at a 2 m grid resolution. The same sub-sampling procedure was used to extract 2 m 256 × 256 pixel image patches.
Further, we conducted a masking operation on the data to improve the training of our deep learning models, for two main reasons (Figure 2). Firstly, our aim was to restrict canopy height estimation to vegetated areas. Secondly, we wanted to minimize inconsistencies caused by differences in the timing of data acquisition between the lidar and NAIP data. To target vegetated areas accurately, we used a combination of a vegetation mask derived from the normalized difference vegetation index (NDVI) calculated from the NAIP data and a canopy height model (CHM) mask. The NDVI mask was created by thresholding the NDVI image using the Otsu method, while the CHM mask was created by manually setting a threshold of 0.5 m. To improve the quality of the combined mask and ensure spatial consistency, we performed morphological opening with a 3 pixel disk structuring element to derive a clean mask, as shown in Figure 2d. Each generated mask was then used to exclude non-vegetation pixels and pixels with heights below 0.5 m from the extracted CHM patches.
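A possible implementation of this masking step with scikit-image, assuming the NAIP patch and matching CHM are NumPy arrays (variable names and band order are our own):

```python
# Vegetation/height masking: Otsu-thresholded NDVI combined with a 0.5 m CHM
# threshold, cleaned by morphological opening with a 3 px disk (a sketch).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk

def build_mask(naip, chm, height_thresh=0.5):
    """naip: (H, W, 4) float array (R, G, B, NIR); chm: (H, W) heights in m."""
    red, nir = naip[..., 0], naip[..., 3]
    ndvi = (nir - red) / (nir + red + 1e-6)
    veg = ndvi > threshold_otsu(ndvi)           # Otsu-thresholded NDVI mask
    tall = chm > height_thresh                  # drop pixels below 0.5 m
    return binary_opening(veg & tall, disk(3))  # clean mask, 3 px disk element

# Masked CHM used as the regression target:
# chm_masked = np.where(build_mask(naip, chm), chm, 0.0)
```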
We then extracted 10,000 256 × 256 image patches from the corresponding NAIP and CHM datasets, at 1 m and 2 m resolutions, from the training sites. Random sampling was controlled to ensure a diverse representation of the canopy height variations within the dataset by enforcing a minimum distance of 50 m between any two sample patch center locations. Nevertheless, some partial overlap between patches was still inevitable. The partial overlap is not a problem for convolutional networks and could be considered a form of data augmentation, which is usually recommended in training deep neural networks [28].
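One possible implementation of this distance-controlled sampling, under our reading that patch centers were kept at least 50 m (50 pixels at 1 m) apart; the rejection-sampling scheme, counts, and names are illustrative:

```python
# Rejection sampling of patch centers with a minimum pairwise distance (sketch).
import numpy as np

def sample_patch_centers(h, w, n, patch=256, min_dist=50, seed=0):
    rng = np.random.default_rng(seed)
    half = patch // 2
    centers, attempts = [], 0
    while len(centers) < n and attempts < 100_000:
        attempts += 1
        r = int(rng.integers(half, h - half))
        c = int(rng.integers(half, w - half))
        # reject candidates closer than min_dist to an accepted center
        if all((r - r0) ** 2 + (c - c0) ** 2 >= min_dist ** 2
               for r0, c0 in centers):
            centers.append((r, c))
    return centers

# e.g., ~1000 centers per 3546 x 3546 training site:
# centers = sample_patch_centers(3546, 3546, n=1000)
```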

2.3.3. Model Training and Validation

We fitted several U-Net models to assess the impact of encoder depth and spatial resolution on canopy height modeling. We trained U-Net models with encoder depths of 2, 3, and 4 and patch grid resolutions of 1 m and 2 m, resulting in 6 modeling combinations. In training each model, we split the 10,000 image patch dataset into training and validation sets using a 96:4 ratio to expose the model to adequate and varied training samples. During training, the training set was used to optimize the model parameters, while the validation set allowed us to monitor the model's performance and detect potential overfitting.
We trained each model for 150 epochs with a 0.001 learning rate and an 8 sample batch size on a 64 bit Dell Workstation (Intel® Xeon® Processor with 256 GB RAM, NVIDIA™ Quadro K5200 GPU with 8 GB RAM). We employed the ADAM (adaptive moment estimation) optimizer to minimize the mean squared error (MSE) loss. The 0.001 learning rate and the optimizer selection were determined through preliminary hyperparameter tuning. We did not apply data augmentation to the training data, but we expect the sub-sampling approach used in selecting image patches to serve a similar purpose.
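A minimal training-loop sketch matching the stated settings (Adam optimizer, MSE loss, learning rate 0.001, batch size 8, 150 epochs), reusing the UNetRegressor sketch above; the random tensors are placeholders for the real NAIP/CHM patch datasets, and the framework choice is our assumption:

```python
# Training loop sketch: Adam, MSE loss, lr 0.001, batch size 8, 150 epochs.
import torch
from torch.utils.data import DataLoader, TensorDataset

x = torch.randn(64, 4, 256, 256)            # stand-in NAIP patches
y = torch.rand(64, 1, 256, 256) * 30.0      # stand-in masked CHM targets (m)
loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)

model = UNetRegressor(in_ch=4, depth=3)     # class from the sketch above
opt = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = torch.nn.MSELoss()

for epoch in range(150):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)       # per-pixel squared height error
        loss.backward()
        opt.step()
```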

2.3.4. Model Testing on an Independent Test Set

After training the models, we evaluated their performance on an independent test set. This independent dataset enabled us to assess the generalization ability of the trained models in predicting canopy height values in unseen areas. The image patches in the test set were generated using a different sub-sampling approach from the training set; the main difference was that test patches did not overlap. From the 5 site images set aside, we sub-sampled 100 patches per site for a total of 500 patches.
Performance metrics used for assessing the developed models at the validation and testing stages included the coefficient of determination (R2), mean bias (Bias), and mean absolute error (MAE), as shown in Equations (1) through (3), along with their equivalent percent metrics, percent bias (pBias) and percent MAE (pMAE).
$$R^2 = \frac{\left(n\sum_{i=1}^{n} h_i r_i - \sum_{i=1}^{n} h_i \sum_{i=1}^{n} r_i\right)^2}{\left[n\sum_{i=1}^{n} h_i^2 - \left(\sum_{i=1}^{n} h_i\right)^2\right]\left[n\sum_{i=1}^{n} r_i^2 - \left(\sum_{i=1}^{n} r_i\right)^2\right]} \tag{1}$$

$$\mathrm{Bias} = \frac{1}{n}\sum_{i=1}^{n}\left(h_i - r_i\right) \tag{2}$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|h_i - r_i\right| \tag{3}$$
where $h_i$ is the ith predicted canopy height, $r_i$ is the ith reference canopy height, and n is the total number of pixels used for the assessment. The coefficient of determination (R2) captured the correlation, while the other metrics captured bias trends and the precision of the estimated heights against the reference canopy heights.
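The metrics in Equations (1)–(3) can be computed as below. The percent metrics are shown normalized by the mean reference height; the paper does not spell out their denominators, so that normalization is our assumption.

```python
# Evaluation metrics for predicted (h) and reference (r) heights, 1-D arrays.
import numpy as np

def eval_metrics(h, r):
    n = h.size
    num = (n * np.sum(h * r) - np.sum(h) * np.sum(r)) ** 2
    den = ((n * np.sum(h ** 2) - np.sum(h) ** 2) *
           (n * np.sum(r ** 2) - np.sum(r) ** 2))
    bias = np.mean(h - r)                  # Equation (2)
    mae = np.mean(np.abs(h - r))           # Equation (3)
    rbar = np.mean(r)                      # normalizer for percent metrics (assumed)
    return {'R2': num / den,               # Equation (1)
            'Bias': bias, 'MAE': mae,
            'pBias': 100 * bias / rbar, 'pMAE': 100 * mae / rbar}
```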

2.4. Comparative Analyses with Existing Canopy Height Products

We generated the canopy height maps in the 5 test sites using the best performing model from the modeling combinations for comparative analyses with existing canopy height products (Section 2.2.3). We conducted the comparative analyses at the same 30 m grid resolution given the resolution differences among the products. Thus, the generated canopy height maps, together with the 10 m product in [11], were aggregated to a 30 m resolution. Canopy height values from the respective products were extracted and compared with heights from the airborne lidar CHMs using the metrics listed in the previous section. We limited the comparison to areas with canopy heights greater than 1 m. We also filtered out any changed areas, voids, and all non-vegetation areas including bare earth, water, and developed areas.
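A sketch of the aggregation step, assuming simple block averaging of the finer grids to 30 m (the paper does not name the aggregation operator, so the mean is our assumption):

```python
# Aggregate a fine-resolution height grid to a coarser cell size by block
# averaging; crops to an exact multiple of the factor for simplicity.
import numpy as np

def aggregate(chm, factor):
    h, w = chm.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = chm[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))        # mean height per coarse cell

# e.g., a 1 m prediction grid to 30 m: chm30 = aggregate(chm1m, 30)
```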

2.5. Canopy Height Mapping across Texas

The best performing model was applied to generate a canopy height map across the study area. To manage the processing load, the image data were split into 512 by 512 pixel tiles and run in a batch process. The processing lasted about 12 days on a 64 bit Dell machine. After processing all the tiles, we mosaicked them into one final canopy height map for the study area.
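A sketch of this tiled batch inference, assuming the mosaic is fed tile-by-tile through the trained model; edge blending between tiles and remainder handling are omitted for brevity, and the function names are our own:

```python
# Tiled inference over a large mosaic: predict 512 x 512 tiles and reassemble.
import numpy as np
import torch

def predict_tiles(model, image, tile=512):
    """image: (4, H, W) float tensor; returns an (H, W) height array."""
    _, H, W = image.shape
    out = np.zeros((H, W), dtype=np.float32)
    model.eval()
    with torch.no_grad():
        for r in range(0, H - tile + 1, tile):
            for c in range(0, W - tile + 1, tile):
                patch = image[:, r:r + tile, c:c + tile].unsqueeze(0)
                out[r:r + tile, c:c + tile] = model(patch).squeeze().numpy()
    return out
```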

3. Results

3.1. Validation Model Performance

The quantitative assessment of the trained U-Net models on the validation and independent test data showed promising results (refer to Table 1). On the 4% hold-out validation set, equivalent to 400 256-by-256 image patches, the trained models achieved R2 values ranging from 0.70 to 0.89, indicating a high correlation between predicted and reference canopy heights. As Figure 3 illustrates, canopy height predictions are concentrated around the 1:1 line, implying model predictions were generally in line with expected heights. The models also showed high precision with respect to the reference canopy heights, with MAE values ranging from 1.34 m to 2.21 m. Overall, model canopy height predictions were virtually unbiased against the corresponding reference canopy heights (Bias = −0.20 to 0.07 m).
Figure 4 shows examples of canopy height maps generated by the model for three image patches in distinct environments: DXS (deserts and xeric shrublands), TGSS (temperate grasslands, savannas and shrublands), and TBMF (temperate broadleaf and mixed forest) biomes (refer to Section 2.1.1). The model generally reproduces the spatial distribution of canopy heights as seen in the reference canopy height models. The main distinction between the predicted and reference canopy height models lies in their spatial textures. The predicted maps show a relatively smoother appearance, which is a consequence of the convolutional modeling used in the U-Net model [18,19].
Different combinations of image resolution (1 m vs. 2 m) and model encoder depth (2, 3, and 4) led to variations in model performance against the validation set. In general, models trained with 2 m data performed better (R2 = 0.85–0.89, MAE = 1.34–1.60 m) than models trained with 1 m data (R2 = 0.70–0.75, MAE = 2.00–2.21 m), in both correlation and precision. As shown in Figure 3, the concentrations of points around the 1:1 line are much tighter in the 2 m plots than in the 1 m plots, indicative of higher precision. However, models trained with 1 m data showed a slight overall underestimation of canopy height (average bias = −0.01 m), whereas models trained on 2 m data showed a slight opposite trend (average bias = 0.02 m). Increases in model encoder depth were also associated with better correlation and precision against reference canopy heights. Figure 3 also shows that models exhibited less saturation at higher encoder depth settings. Specifically, the models with the lowest encoder depth were only capable of estimating heights up to approximately 25 m, whereas the deepest models could predict heights closer to the maximum reference height. However, increasing the encoder depth also significantly increased the model training time, from 47 h at an encoder depth of 2 to 98 h at an encoder depth of 4 (Table 1).

3.2. Model Performance on Independent Test Set

The trained models also performed adequately well on the independent test sets (R2 = 0.82–0.90, MAE = 1.78–2.57 m), though precision was relatively lower than on the validation sets. As with the validation set, the correlation and precision of the predicted canopy heights were better for the 2 m than the 1 m dataset. For both datasets, a higher encoder depth generally led to better precision and bias. The observed relative drop in precision on the independent test set is a common expectation given that the model is not as well tuned to new data as to the validation set, which is indirectly used to tune model hyperparameters during training. The hold-out splitting could also have placed similar samples in the training and validation sets, leading to better performance on the validation set than on the test set.
When stratified by biome, the models showed the best performances, in terms of the R2 and MAE metrics, in the TGSS (temperate grasslands, savannas and shrublands) and DXS (deserts and xeric shrublands) biomes (Table 2, Figure 5). These were followed by the TBMF (temperate broadleaf and mixed forests), TSGSS (tropical and subtropical grasslands, savannas and shrublands), and TCF (temperate conifer forests) biomes. In the TGSS and DXS biomes, high R2 values ranging from 0.61 to 0.84 and low MAE values ranging from 0.62 m to 1.26 m were observed. Predictions in these two biomes were virtually unbiased against the reference canopy heights (bias = −0.14 to 0.32 m). Again, models showed better performance with 2 m data and higher encoder depths. In the other biomes, R2 ranged from 0.28 to 0.75 and MAE from 2.29 m to 4.41 m, with the worst precision observed in the TCF biome. These results indicate that the fitted models were more effective in capturing canopy heights in relatively sparse canopy environments compared to denser ones.

3.3. Comparison with Existing CHMs

We compared canopy height estimates from our study and several existing canopy height products (refer to Section 2.2.3) with canopy heights derived from airborne lidar to assess the relative performance of the models. These products included the 2020 LANDFIRE Forest Canopy Height (hereafter LFCH), the 2019 Global Land Analysis and Discovery (GLAD) GEDI Global Forest Canopy Height (hereafter GLAD), and the 2020 Global Canopy Height Model by [11] (hereafter GCHM). Based on 22,832 random point locations, the overall R2, bias, and MAE metrics ranged from 0.55 to 0.92, −6.04 to 1.78 m, and 2.66 m to 7.17 m, respectively. The product developed in this study (R2 = 0.92, Bias = 1.78 m, MAE = 2.66 m) and the GCHM product (R2 = 0.89, Bias = −1.08 m, MAE = 2.99 m) generally performed better than GLAD (R2 = 0.75, Bias = −6.04 m, MAE = 6.34 m) and LFCH (R2 = 0.55, Bias = −3.42 m, MAE = 7.17 m). Based on the overall metrics, the study CHM overestimated while the other products underestimated airborne lidar heights.
The assessment of the products' performance in the different biomes also showed wide variations in agreement between the respective products and the airborne lidar canopy height estimates. The 22,832 points used in the evaluation comprised 4898, 3683, 4866, 4694, and 4691 points in the DXS, TBMF, TCF, TGSS, and TSGSS biomes, respectively (refer to Appendix A). The study and GCHM products showed comparable performance across biomes and were the best performing among all products evaluated. Our canopy height product showed better R2 values than the GCHM product in all biomes except TCF and TBMF. Both products had their lowest precision (MAE = 3.1–5.1 m) in the TCF and TSGSS biomes, with our product showing markedly better precision in the TCF biome. The GLAD product, the third best performing product, showed its highest R2 value in DXS (R2 = 0.48), while correlations in the other biomes were lower (R2 < 0.3). Precision also varied by biome, with the lowest MAE observed in the TGSS biome (MAE = 2.40 m) and the worst in the TCF biome (MAE = 9.03 m). The LFCH product showed similar variations across biomes, though its correlations and precisions were lower than those of the GLAD product.

3.4. Canopy Height Product across Texas

We generated a 30 m gridded canopy height product across Texas by applying the U-Net model trained on 2 m data with an encoder depth of 3, as it performed better than the other modeling combinations on the test data. The NAIP image data were resampled to 30 m to facilitate mapping at this large regional scale.
Figure 6 shows the gridded canopy height product generated over the study area. Predicted canopy height ranged from 0 to 71.0 m (mean height = 13.7 m), with higher canopy heights in the TCF biome, comprising the Piney woods ecoregion and several national forests, where large pine species are predominant. The generated map showed relatively lower canopy heights in the sparser savanna and scrub environments in the central part of Texas. The generated canopy height map also shows an improvement in the number of height estimates compared to the existing LFCH and GLAD products, which provided fewer height estimates in scrub environments.

4. Discussion

Machine learning and deep learning have become invaluable tools in the field of remote sensing, particularly for critical tasks such as forest canopy height modeling. Through these techniques, researchers have been able to scale canopy height measurements collected with traditional methods, which are often expensive, time-consuming, and limited in spatial coverage, to cover large areas. Such large-scale mapping is crucial for understanding and mitigating the impacts of climate change and human activities on global ecosystems [29,30]. While deep learning approaches are seeing wide application in remote sensing studies, further research is still needed on the impact of model architecture parameters on model performance and generalization. This study tested the impact of data grid cell size and encoder depth on canopy height modeling performance using the U-Net model. We observed variations in model performance with different combinations of grid cell size (1 m vs. 2 m) and model encoder depth (2, 3, and 4), with better performance on the validation set associated with models with higher encoder depths trained on 2 m data (R2 = 0.85–0.89, MAE = 1.34–1.60 m) than on 1 m data (R2 = 0.70–0.75, MAE = 2.00–2.21 m). Results for the test set were generally similar, but we observed the best performance with the models trained with encoder depths of 3 and 4. We attribute the better performance with 2 m data to the enhanced local spatial consistency between the NAIP imagery and the airborne lidar canopy height models at the larger grid cell size.
Our biome-level assessments showed that the fitted models were more effective in capturing canopy heights in relatively sparse (DXS, TGSS) canopy environments compared to denser ones (TCF, TBMF, TSGSS). The difference in performance between sparse and dense environments in canopy modeling is attributed to the varying importance of spatial texture in the U-Net model. In sparse environments, spatial information is highly effective in discriminating trees or scrub from bare ground, making height differences readily inferable. However, in dense environments, spatial information, while crucial, may not efficiently differentiate trees of different heights due to their similar spectral signals. Spectrally, dense environments are also associated with high saturation effects that have been shown to reduce the prediction of various vegetation parameters. In their canopy height estimation study in Switzerland and Gabon, Lang et al. [12] observed lower prediction performance in dense canopy cover sites in Gabon (MAE = 4.3 m) but higher performance in relatively sparse environments in Switzerland (MAE = 1.7 m). Similar observations were reported previously in [4,31] when mapping forest heights across Africa and the globe.
While canopy height predictions from the fitted models showed generally high agreement with the airborne lidar data, they also diverged for a number of observations, as shown in Figure 3. The observed disagreements are attributed to co-registration errors, cover changes between the airborne lidar and NAIP imagery acquisitions, limitations of the training data used, and the general limitations of modeling forest structure parameters with optical imagery [3,31]. We took steps to ensure the NAIP and CHM data were adequately co-registered. However, even with adequate co-registration, movements in tree crowns and different sun-angle geometries between the lidar and image acquisitions can lead to significant differences at high spatial resolutions. For instance, shadows in NAIP imagery may occlude some trees whose heights are still recorded in the airborne lidar data, leading to inconsistencies between the model's input and output values. At the larger grid cell size, some of these issues are alleviated, which led to better model performance compared to the 1 m data. Changes due to growth are also difficult to isolate. In high-growth environments such as pine stands [32], the land cover might not change and would have consistent spectral responses over time, but the underlying forest structure as captured by the lidar would still be different. Disagreement between predicted and reference canopy heights likely also reflects inadequacies in the training data, which might not have adequately captured the canopy height variations across the study area.
Overall, canopy heights estimated in this study showed better agreement with matching heights derived from airborne lidar data than several existing products. The GCHM product, also generated using a deep convolutional model, matched our product and in some biomes (TBMF and TSGSS) showed better performance. The differences in agreement between the products and the airborne lidar heights reflect the modeling approaches used (deep learning vs. traditional machine learning), vegetation structural changes between the airborne lidar data acquisition and the production dates of the products, and the datasets used (NAIP for this study, Sentinel-2 for GCHM, and Landsat for the GLAD and LFCH products). Deep convolutional neural networks, as applied in this study and the GCHM study, are designed to consider both the spatial distribution of image values and other high-level representations. This design makes them more robust in tackling such regression problems compared to traditional machine learning models that usually rely on non-spatial variables. This reinforces observations in previous studies that reported better canopy height prediction with spatial interpolation approaches such as kriging than with non-spatial approaches [8,9]. Further, this study and the GCHM product used higher spatial resolution data (1 m and 10 m, respectively) compared to the 30 m medium-resolution Landsat data used for the GLAD and LFCH products. These differences in input image resolution also drove differences among the products assessed.
There is still room for further investigation of the impact of different model parameter combinations on canopy height predictions. Here, we focused on encoder depth at three levels and two data grid sizes. While our investigations provided insight into the impact of these parameters, future studies should evaluate more levels, especially for grid size, to offer a better view of trends in modeling performance. We also applied a fixed 256 × 256 patch size based on previous studies. However, there is still a need to investigate the impact of patch size, as it may affect the operational development of models over large areas.

5. Conclusions

In this study, we demonstrated the utility of the U-Net deep learning model for canopy height mapping. We assessed the impact of the U-Net model encoder depth and grid cell size on the performance of canopy height modeling with NAIP imagery and airborne lidar CHMs. Higher encoder depths in the U-Net model were associated with better model performance but resulted in significantly longer model training times. Thus, a compromise between model performance and training time needs to be struck in operational large-scale canopy height mapping. In our case, the models with an encoder depth of 3 performed adequately well on the validation and independent test data, and we would recommend this depth as a good compromise. While high-resolution imagery may be attractive for detailed canopy height mapping, better performance could be achieved at slightly coarser resolutions, which improve the consistency between the input imagery and the output canopy height models. More research is still needed to assess various grid sizes and training image patch sizes to inform the use of deep learning models in forest canopy height modeling.
Our model canopy height predictions showed generally better agreement with reference airborne lidar canopy height estimates than the existing canopy height products we evaluated. Our modeling performance was comparable in some respects with the GCHM product, which was also developed with deep convolutional networks, underscoring the strength of deep learning approaches over traditional regression methods. Further work is needed to address the relatively lower precision in denser environments such as temperate conifer forests. Alternative CNN model architectures should also be evaluated in such dense cover environments to examine their performance relative to U-Net-type architectures. Dilated convolutions, which expand the receptive field of a neural network for improved spatial structural representation [20], could be one avenue for improved mapping in dense canopy environments.
The canopy height map generated over Texas provides improved and spatially detailed information on the distribution of vegetation height across the state. These enhanced data can support improved targeting and decision-making at a 30 m spatial resolution for applications ranging from economic activities such as timber scouting and harvesting to ecological and climate-related tasks such as carbon accounting. In the future, there is an opportunity to generate even more detailed 2–5 m canopy height maps, which would capture the various ecosystems across the state in greater detail.

Author Contributions

Conceptualization, L.M.; methodology, L.M.; validation, L.M.; formal analysis, L.M.; investigation, L.M.; resources, S.P.; data curation, L.M.; writing—original draft, L.M.; writing—review and editing, L.M. and S.P.; visualization, L.M.; funding acquisition, L.M. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by an International Paper Research Grants—Forest Sustainability grant and by funding from the NASA ICESat-2 Science Team, Studies with ICESat-2 (NNH19ZDA001N) grant.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be accessed at https://developers.google.com/earth-engine/datasets/catalog/USDA_NAIP_DOQQ (accessed on 20 October 2022).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

This section summarizes the performance of the canopy height maps evaluated in this study against airborne lidar heights in the various biomes across the state of Texas, including the 2020 LANDFIRE Forest Canopy Height (LFCH), the 2019 Global Land Analysis and Discovery (GLAD) GEDI Global Forest Canopy Height (GLAD), and the 2020 Global Canopy Height Model by [11] (GCHM).
Table A1. Summary of canopy height product performance against airborne lidar data heights. n = number of samples. Bias and MAE are in meters.

Product  Metric   Overall   DXS     TBMF    TCF     TGSS    TSGSS
STUDY    n        22,832    4898    3683    4866    4694    4691
         R2       0.92      0.78    0.42    0.26    0.56    0.56
         Bias     1.78      1.19    1.58    1.49    1.95    2.67
         MAE      2.66      1.48    3.05    3.08    2.21    3.63
GCHM     n        22,832    4898    3683    4866    4694    4691
         R2       0.89      0.53    0.43    0.34    0.38    0.35
         Bias     −1.08     −0.20   −0.33   −4.88   1.65    −1.39
         MAE      2.99      1.54    2.56    5.08    2.24    3.41
GLAD     n        22,832    4898    3683    4866    4694    4691
         R2       0.75      0.48    0.19    0.22    0.00    0.20
         Bias     −6.04     −5.89   −6.67   −9.02   −1.84   −6.80
         MAE      6.34      5.89    7.03    9.03    2.40    7.40
LFCH     n        22,832    4898    3683    4866    4694    4691
         R2       0.55      0.08    0.16    0.06    0.02    0.12
         Bias     −3.42     −7.07   −5.47   −1.97   1.54    −4.49
         MAE      7.17      7.11    6.71    8.98    2.88    10.00

References

1. McMahan, C.A.; Frye, R.G.; Brown, K.L. The Vegetation Types of Texas; Texas Parks and Wildlife Department: Austin, TX, USA, 1984.
2. Elliott, L. Descriptions of systems, mapping subsystems, and vegetation types for Texas. In Texas Parks and Wildlife Ecological Systems Classification and Mapping Project; Texas Parks and Wildlife Department: Austin, TX, USA, 2014.
3. Malambo, L.; Popescu, S.; Liu, M. Landsat-scale regional forest canopy height mapping using ICESat-2 along-track heights: Case study of eastern Texas. Remote Sens. 2023, 15, 1.
4. Simard, M.; Pinto, N.; Fisher, J.B.; Baccini, A. Mapping forest canopy height globally with spaceborne lidar. J. Geophys. Res. Biogeosci. 2011, 116, G4.
5. Olariu, H.G.; Malambo, L.; Popescu, S.C.; Virgil, C.; Wilcox, B.P. Woody plant encroachment: Evaluating methodologies for semiarid woody species classification from drone images. Remote Sens. 2022, 14, 1665.
6. USGS EROS Center. USGS EROS Archive—Aerial Photography—National Agriculture Imagery Program (NAIP). Available online: https://www.usgs.gov/centers/eros/science/usgs-eros-archive-aerial-photography-national-agriculture-imagery-program-naip?qt-science_center_objects=0#qt-science_center_objects (accessed on 21 January 2023).
7. Potapov, P.; Li, X.; Hernandez-Serna, A.; Tyukavina, A.; Hansen, M.C.; Kommareddy, A.; Pickens, A.; Turubanova, S.; Tang, H.; Silva, C.E. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens. Environ. 2021, 253, 112165.
8. Liu, X.; Su, Y.; Hu, T.; Yang, Q.; Liu, B.; Deng, Y.; Tang, H.; Tang, Z.; Fang, J.; Guo, Q. Neural network guided interpolation for mapping canopy height of China's forests by integrating GEDI and ICESat-2 data. Remote Sens. Environ. 2022, 269, 112844.
9. Hudak, A.T.; Lefsky, M.A.; Cohen, W.B.; Berterretche, M. Integration of lidar and Landsat ETM+ data for estimating and mapping forest canopy height. Remote Sens. Environ. 2002, 82, 397–416.
10. Xiao, R.; Carande, R.; Ghiglia, D. A neural network approach for tree height estimation using IFSAR data. In Proceedings of IGARSS'98, 1998 IEEE International Geoscience and Remote Sensing Symposium, Seattle, WA, USA, 6–10 July 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 1565–1567.
11. Lang, N.; Kalischek, N.; Armston, J.; Schindler, K.; Dubayah, R.; Wegner, J.D. Global canopy height regression and uncertainty estimation from GEDI lidar waveforms with deep ensembles. Remote Sens. Environ. 2022, 268, 112760.
12. Lang, N.; Schindler, K.; Wegner, J.D. Country-wide high-resolution vegetation height mapping with Sentinel-2. Remote Sens. Environ. 2019, 233, 111347.
13. Illarionova, S.; Shadrin, D.; Ignatiev, V.; Shayakhmetov, S.; Trekin, A.; Oseledets, I. Estimation of the canopy height model from multispectral satellite imagery with convolutional neural networks. IEEE Access 2022, 10, 34116–34132.
14. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782.
15. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119.
16. Al-Najjar, H.A.; Kalantar, B.; Pradhan, B.; Saeidi, V.; Halin, A.A.; Ueda, N.; Mansor, S. Land cover classification from fused DSM and UAV images using convolutional neural networks. Remote Sens. 2019, 11, 1461.
17. Malambo, L.; Popescu, S.; Ku, N.-W.; Rooney, W.; Zhou, T.; Moore, S. A deep learning semantic segmentation-based approach for field-level sorghum panicle counting. Remote Sens. 2019, 11, 2939.
18. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
19. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
20. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
21. Lu, H.; She, Y.; Tie, J.; Xu, S. Half-UNet: A simplified U-Net architecture for medical image segmentation. Front. Neuroinform. 2022, 16, 911679.
22. Fan, Y.; Ding, X.; Wu, J.; Ge, J.; Li, Y. High spatial-resolution classification of urban surfaces using a deep learning method. Build. Environ. 2021, 200, 107949.
23. Engle, D. Oak Ecology. Available online: https://texnat.tamu.edu/library/symposia/brush-sculptors-innovations-for-tailoring-brushy-rangelands-to-enhance-wildlife-habitat-and-recreational-value/oak-ecology/ (accessed on 12 December 2021).
24. Tolleson, D.R.; Rhodes, E.C.; Malambo, L.; Angerer, J.P.; Redden, R.R.; Treadwell, M.L.; Popescu, S.C. Old school and high tech: A comparison of methods to quantify Ashe juniper biomass as fuel or forage. Rangelands 2019, 41, 159–168.
25. Krishnan, S.; Crosby, C.; Nandigam, V.; Phan, M.; Cowart, C.; Baru, C.; Arrowsmith, R. OpenTopography: A services oriented architecture for community access to lidar topography. In Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, Washington, DC, USA, 23–25 May 2011; pp. 1–8.
26. Dinerstein, E.; Olson, D.; Joshi, A.; Vynne, C.; Burgess, N.D.; Wikramanayake, E.; Hahn, N.; Palminteri, S.; Hedao, P.; Noss, R.; et al. An ecoregion-based approach to protecting half the terrestrial realm. BioScience 2017, 67, 534–545.
27. Rollins, M.G. LANDFIRE: A nationally consistent vegetation, wildland fire, and fuel assessment. Int. J. Wildland Fire 2009, 18, 235–249.
28. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60.
29. Malambo, L.; Popescu, S.C. Assessing the agreement of ICESat-2 terrain and canopy height with airborne lidar over US ecozones. Remote Sens. Environ. 2021, 266, 112711.
30. Schimel, D.; Pavlick, R.; Fisher, J.B.; Asner, G.P.; Saatchi, S.; Townsend, P.; Miller, C.; Frankenberg, C.; Hibbard, K.; Cox, P. Observing terrestrial ecosystems and the carbon cycle from space. Glob. Chang. Biol. 2015, 21, 1762–1776.
31. Hansen, M.C.; Potapov, P.V.; Goetz, S.J.; Turubanova, S.; Tyukavina, A.; Krylov, A.; Kommareddy, A.; Egorov, A. Mapping tree height distributions in sub-Saharan Africa using Landsat 7 and 8 data. Remote Sens. Environ. 2016, 185, 221–232.
32. Borders, B.E.; Bailey, R.L. Loblolly pine—Pushing the limits of growth. South. J. Appl. For. 2001, 25, 69–74.
Figure 2. Masking low-height and non-forest pixels: (a) A sample NAIP 256 by 256 pixel image patch, (b) A matching canopy height model raster, (c) Vegetation mask generated by thresholding NDVI data, (d) Final mask after height masking and morphological opening overlaid on the NAIP image.
Figure 3. Scatterplots of predicted versus reference canopy height colored by point density (blue hues correspond with low point density, yellow hues indicate high density): (a–c) Models fit with 1 m data and encoder depth 2, 3, and 4, respectively, (d–f) Models fit with 2 m data and encoder depth 2, 3, and 4, respectively. The high density of points around the red dashed 1:1 line shows general agreement between predicted and reference canopy heights.
Figure 4. Comparison of predicted versus reference canopy height models: (a–c) DXS environment: false color 256 × 256 pixel NAIP image patch, 256 × 256 pixel reference canopy height model, 256 × 256 pixel predicted canopy height model, respectively. (d–f) TGSS environment: false color 256 × 256 pixel NAIP image patch, 256 × 256 pixel reference canopy height model, 256 × 256 pixel predicted canopy height model, respectively. (g–i) TBMF environment: false color 256 × 256 pixel NAIP image patch, 256 × 256 pixel reference canopy height model, 256 × 256 pixel predicted canopy height model, respectively.
Figure 5. CHM product comparison with airborne lidar height estimates: (a) R2 values, (b) mean biases, and (c) MAE values achieved with respect to airborne lidar canopy heights for the CHM products evaluated.
Figure 6. Thirty-meter gridded canopy height product over Texas. Topographic base maps courtesy of ESRI ArcGIS®.
Table 1. Summary of model performance on validation and independent test sets.

Training Time   Image        Encoder   Validation Set                       Independent Test Set
(h:min:s)       Resolution   Depth     N Patches  R2    Bias (m)  MAE (m)   N Patches  R2    Bias (m)  MAE (m)
46:47:49        1 m          2         400        0.70  −0.03     2.21      500        0.82  −0.13     2.57
60:18:24        1 m          3         400        0.75  −0.04     2.00      500        0.82  −0.28     2.49
77:55:24        1 m          4         400        0.74  0.04      2.01      500        0.82  −0.10     2.49
50:01:24        2 m          2         400        0.85  0.06      1.60      500        0.88  −0.46     2.18
77:32:56        2 m          3         400        0.88  −0.01     1.40      500        0.90  −0.28     1.87
98:01:12        2 m          4         400        0.89  0.01      1.34      500        0.90  0.12      1.78
Table 2. Summary of modeling performance by biome (pixel size × encoder depth parameter combinations).

Biome   Metric     1 m × 2   1 m × 3   1 m × 4   2 m × 2   2 m × 3   2 m × 4
DXS     R2         0.65      0.69      0.69      0.81      0.83      0.84
        Bias (m)   0.13      −0.14     0.05      0.22      −0.07     0.01
        MAE (m)    1.26      1.18      1.18      0.83      0.75      0.73
TBMF    R2         0.46      0.54      0.55      0.69      0.74      0.74
        Bias (m)   −0.10     −0.20     0.21      −0.13     −0.02     −0.12
        MAE (m)    3.43      3.08      3.00      2.56      2.29      2.32
TCF     R2         0.44      0.42      0.41      0.64      0.68      0.69
        Bias (m)   −0.68     −0.35     −0.26     −2.50     −0.97     0.49
        MAE (m)    4.41      4.37      4.39      4.27      3.43      3.12
TGSS    R2         0.68      0.61      0.66      0.67      0.69      0.75
        Bias (m)   0.10      0.08      −0.12     0.08      0.32      0.08
        MAE (m)    0.80      0.84      0.78      0.77      0.76      0.62
TSGSS   R2         0.28      0.58      0.57      0.57      0.75      0.77
        Bias (m)   −2.40     0.05      −0.58     −0.50     0.22      −0.26
        MAE (m)    4.70      3.12      3.17      3.19      2.51      2.39
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
