Letter
Peer-Review Record

Comparison of Smartphone and Drone Lidar Methods for Characterizing Spatial Variation in PAI in a Tropical Forest

Remote Sens. 2020, 12(11), 1765; https://doi.org/10.3390/rs12111765
by Tamara E. Rudic 1,2,*, Lindsay A. McCulloch 2,3 and Katherine Cushman 2,3,4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 2 April 2020 / Revised: 26 May 2020 / Accepted: 28 May 2020 / Published: 30 May 2020
(This article belongs to the Special Issue She Maps)

Round 1

Reviewer 1 Report


The manuscript presents results of canopy structure index estimation using drone lidar data and two smartphones with an attachable hemispherical lens. The manuscript is submitted as a "Technical Letter", but a lot of technical information is missing (see comments below). The second concern is related to the loose usage of terminology and abbreviations concerning leaf area index. This is not too difficult to fix. The third and most critical issue is related to the data processing and analysis. The study skips analysis of simple variables like vertical canopy cover, angular canopy closure, gap fraction angular dependence and gap size distribution. Instead, effective plant area index is calculated, which requires different software and methods and therefore includes additional assumptions and possible errors. The effective plant area indices are finally calculated for old forest and secondary forest and compared, but it is not taken into account that the distribution of forest classes was probably different on the plots measured with different devices and that the results depended on the smartphone devices.

My estimate is that the current manuscript has to be rejected. However, I suggest that the authors study more carefully the collected dataset to describe the components of uncertainty depending on devices and methods and resubmit the results.

** Details **

Abstract is not informative.
What is the reason to believe that drone lidar data based estimates of the value are correct?
"hemispherical canopy images" is not method. What method was used to process the images?
Give the value ranges in abstract.
What was actually the estimated variable? There is no information in the abstract about how woody part was accounted for and how clumping was corrected.

R34 - calculating LAI
Do you mean: calculating by using the measured one-sided area and the area under the plant canopy?

R48 - non-photosynthetic woody material
This does not cause underestimation of LAI_eff.

R51 - using site specific level destructive sampling
What level?

R62-R64 - They found that while LAI-2000 yielded more accurate LAI estimations, DHPs are a more practical method and can be as effective in estimating and characterizing landscape level tropical forest LAI if corrections are made for leaf clumping.
: Do you mean that LAI-2000 measurements based values that are output from the corresponding software do not require clumping correction?

 

R98 - Overview of study area at La Selva, Costa Rica.
What is the point of the sentence?

R98-R104 - Lines denote trails through the research reserve, and points denote locations where smartphone images were taken within 2 weeks of lidar collection (top). Example smartphone fisheye image taken at trail location CES750 (location indicated on overview map with white ring; bottom left). Example lidar point cloud data centered at trail location CES750 (bottom right). The station includes a mixture of evergreen old growth and secondary tropical wet forest.

: This is a copy of the Fig 1 caption. Instead, please give a description of the forest structure (sampling points), species, topography and other relevant information that may explain your results and make them comparable with other publications.

R109 - Lidar data from BPAR have a location error < 5 cm; this and other characteristics of BPAR are provided in Kellner et al. (2019) [25].
I do not understand - did you use the same dataset as [25] or is this just a reference to BPAR characteristics?


R112 - Lidar were collected using two sets of orthogonal fl...
How do you collect lidar?

R113 -R115
: Maximum scan angle?
: Beam footprint size at canopy level?
: Share of 1st, 2nd etc. returns in dataset?
: 3500 pts/m². This means 1.7 cm distance on a flat surface between return locations. The position (horizontal or vertical?) error is given above as 5 cm. So there is a lot of random noise in the data at decimetre scale.

R118 - intercept =-1.406
: So, there is 1.4 m systematic difference. In which direction was the BPAR dataset shifted and how did you account for this when subtracting ground elevation model?


R126 within a given radius around hemispherical image locations
: How was the radius calculated?

R131 We will refer to LAI estimated using drone based discrete return lidar data as “lidar LAI”
: Here and elsewhere in the text. In rows R31-R32 you give "LAI is a dimensionless quantity defined as the one-sided total leaf area (m²) per unit horizontal ground area (m²)". Since the lidar-data-based values were not corrected for the woody part, use some other variable or a subscript for the indirect estimates. Something like L_eff,lidar will avoid confusion.


R137-R138 - ...we will use the term LAI to be consistent with other literature using indirect methods to quantify LAI
: No, you just carry over incorrect terminology and confusion. In R1 it says "Technical Letter", so please keep your abbreviations clear and consistent.


R141 on a 39.5-inch tripod stand
: Use metric units. Was this the height of the sensor over ground surface?


R145 Trail locations SAZ150-950 and STR600-STR1000 were taken with the iPhone X model and the remaining were taken with the SE model.
: So, you do not have comparable measurements in same locations with smartphones. How much was the forest different on the tracks?

 

R149 - We will refer to LAI estimated using smartphone fisheye images as “smartphone LAI”
: See previous comment on using abbreviation LAI.

R140-R150
: There is no information about camera settings used during photography.
: What was the radius in pixels corresponding to 60 deg view zenith?

R150-R170
: How was the operator subjectivity handled in image processing in CAN_EYE?

R167 "but given that our calculated optical centre was very close to the theoretical (assuming a perfect lens) optical centre, we did not perform the suggested calibration and instead used the theoretical projection function."
: So you mean that optical centre determines the projection mode of lens? This is not true.

:: The lens is removable. How much did the centre position change when lens was repeatedly attached?


R206 - Figure 2. (a) MAE and (b) Pearson’s
: Is this for a single point? If not, then plot 95% confidence intervals.
: Which smartphone model?


R209 - Figure 3. Relationship between smartphone
: Use the same range for both axes or add a 1:1 line.

R244 - R245 - Specifically, the iPhone X consistently underestimated LAI (positive
MSE) compared to lidar LAI, while the iPhone SE consistently ...
: I do not understand, was the metric obtained from smartphone (screen) or after processing the images with CAN_EYE ?

R249 Table 3. MAE, MSE, Pearson’s c
: The results show that the image data processing has systematic errors. Please take a look at your image data, the image projection calibration in CAN_EYE and also the classification in CAN_EYE.
:: The different smartphone models were used on different tracks. There is no information in the manuscript about the number of old-growth and secondary forest plots measured by each smartphone. The estimated index values from one of the smartphone models were systematically biased. Unless the errors are removed, it is not correct to calculate mean values for forest age classes. The figure in the supplementary data, "Fig S1: Map of forest types.", hints that different smartphone models may have been used for different forest age classes.


R254-R256 - For both lidar and smartphone estimates, average LAI was greater in old growth forest locations compared to secondary forest locations, however, the magnitude of the difference between old growth and secondary forest LAI was smaller for lidar LAI than ...
: Here is one example of why PAI must be used instead of "LAI": the leaf area index of rainforests is of interest to many researchers, and it is easy to misinterpret the results.


R270-272 "Therefore, our results indicate that inexpensive smartphone methods are valid for characterizing relative spatial variation in LAI within a tropical forest canopy but do not support the application of smartphone fisheye images for accurately estimating the magnitude of LAI."

: How do you know that the point cloud-based values gave the correct magnitude? There was no attempt to analyse possible causes of differences between smartphone models. The forest at the test site seems to be well studied (https://en.wikipedia.org/wiki/La_Selva_Biological_Station) - what is the magnitude of the leaf area index in the forests where the sample points were established?

Author Response

Reviewer's comments are in bold and authors' responses are in regular text. R#* denotes rows in the revised manuscript.

The manuscript presents results of canopy structure index estimation using drone lidar data and two smartphones with an attachable hemispherical lens. The manuscript is submitted as a "Technical Letter", but a lot of technical information is missing (see comments below). The second concern is related to the loose usage of terminology and abbreviations concerning leaf area index. This is not too difficult to fix. The third and most critical issue is related to the data processing and analysis. The study skips analysis of simple variables like vertical canopy cover, angular canopy closure, gap fraction angular dependence and gap size distribution. Instead, effective plant area index is calculated, which requires different software and methods and therefore includes additional assumptions and possible errors. The effective plant area indices are finally calculated for old forest and secondary forest and compared, but it is not taken into account that the distribution of forest classes was probably different on the plots measured with different devices and that the results depended on the smartphone devices.

My estimate is that the current manuscript has to be rejected. However, I suggest that the authors study more carefully the collected dataset to describe the components of uncertainty depending on devices and methods and resubmit the results.

We thank the reviewer for the very thorough comments. We have implemented the majority of the suggested changes/clarifications, and we feel that the manuscript is improved. 

We would like to clarify the issue of Remote Sensing’s “Technical Letter” format. It appears that what was a single “Technical Letter” format has now been separated into two separate formats, “Letter” and “Technical Note”. We believe that this manuscript is appropriate as a “Letter”, which is described as “typically brief (less than 10 pages) explanations of a single concept, technique, or study. They contain less information than an article and are suitable for rapid dissemination of results.” We think that, after formatting, this manuscript will be approximately 10 pages long, even after adding material to address reviewer concerns.

We have clarified the terminology throughout, as suggested by the reviewer. Specifically, we use “lidar PAIeff” and “smartphone PAIeff” instead of LAI to clarify that our estimates contain contributions from non-photosynthetic vegetation (PAI instead of LAI) and that our methods cannot fully account for leaf clumping and angle (hence PAIeff, for effective PAI).

In this revision, we still choose to focus on the metric of PAIeff to evaluate the efficacy of smartphone methods (as opposed to vertical canopy cover, angular canopy closure, gap fraction angular dependence and gap size distribution). We feel that this metric is most intuitive to readers and is an appropriate decision for the short “Letter” format. However, we have incorporated many of the specific data processing clarifications/revisions suggested below.

Abstract is not informative. 

We have made edits throughout the abstract and believe it is now more informative.

What is the reason to believe that drone lidar data based estimates of the value are correct? 

The goal of our manuscript was to explore the efficacy of using the inexpensive clip-on lens to estimate LAI. The only way to directly validate LAI estimates is to harvest and measure leaf area, which is not feasible at the scale of hemispherical images in a tropical forest. Instead, we chose to use the lidar-based algorithm of Detto et al. (2015) because its efficacy has been demonstrated using simulated lidar data. Further, this algorithm (unlike simpler algorithms based on the MacArthur-Horn method) is based in radiative transfer theory, incorporating information about lidar return number and angle to estimate total (not projected) leaf area. Using a drone-based platform, around each fisheye image we were able to collect well above the number of lidar pulses necessary to reduce the expected relative error and bias below 5% (~1000 pulses, detailed in Detto et al. 2015).

We appreciate the reviewer’s feedback that more information should be included to justify this choice. We added additional text to the abstract and the end of the introduction to clarify (R22-25* and R115-119*).
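For readers unfamiliar with gap-fraction-based lidar estimation, the underlying relation is the standard Beer-Lambert gap-fraction law (given here only as background; the Detto et al. (2015) algorithm additionally uses return number and scan angle and is not reproduced here):

\[ P_{\mathrm{gap}}(\theta) = \exp\!\left(-\frac{G(\theta)\,\mathrm{PAI_{eff}}}{\cos\theta}\right) \quad\Longrightarrow\quad \mathrm{PAI_{eff}} = -\frac{\cos\theta}{G(\theta)}\,\ln P_{\mathrm{gap}}(\theta) \]

where P_gap(θ) is the fraction of pulses at zenith angle θ that reach the ground and G(θ) is the foliage projection function (0.5 for a spherical leaf angle distribution).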

"hemispherical canopy images" is not method. What method was used to process the images? 

We have edited this sentence to clarify that the image processing software CAN-EYE was used to estimate PAIeff from fisheye images acquired with a smartphone (R19-22*).

Give the value ranges in abstract. 

The abstract now includes the range of PAIeff values from our study, as well as the agreement with lidar based values (R27-28*).

What was actually the estimated variable? There is no information in the abstract about how woody part was accounted for and how clumping was corrected.

We have changed our estimated variable from LAI to effective PAI (PAIeff) to reflect the fact that neither method (smartphone nor lidar) in our paper corrects for non-photosynthetic material. 

R34 - “calculating LAI”

Do you mean: calculating by using the measured one-sided area and the area under the plant canopy?

We have revised this phrasing to clarify that one-sided forest canopy LAI is the variable of interest (R48*).

R48 - “non-photosynthetic woody material”

This does not cause underestimation of LAI_eff.

Bréda’s findings were that “Analysis of the literature shows that most cross-validations between direct and indirect methods have pointed to a significant underestimation of LAI with the latter techniques, especially in forest stands. The two main causes for the discrepancy, clumping and contribution of stem and branches, are discussed.” We have revised the sentence in our manuscript to be more accurate with this statement. (R65-68*) 

R51 - using site specific level destructive sampling

What level?

We clarified this sentence to specify that the level of destructive sampling was within scaffolding towers; the sentence now reads: “LVIS (Laser Vegetation Imaging Sensor) airborne waveform lidar LAI measurements, which have been validated using destructive sampling within scaffolding towers”. (R69-71*)

R62-R64 - They found that while LAI-2000 yielded more accurate LAI estimations, DHPs are a more practical method and can be as effective in estimating and characterizing landscape level tropical forest LAI if corrections are made for leaf clumping.

: Do you mean that LAI-2000 measurements based values that are output from the corresponding software do not require clumping correction?

The LAI-2000 software assumes that foliage is randomly distributed (i.e. no clumping). Indeed, in this reference the authors did not apply any additional clumping correction to LAI-2000 results. They found that even without a clumping correction, LAI estimates from the LAI-2000 software were relatively accurate (underestimated by < 1 LAI)--DHPs could be more accurate if the clumping parameter was tuned in the LAI estimation. We clarified this in the text, which now reads: “They found that while LAI-2000 yielded more accurate LAI estimations without including a leaf clumping parameter...” (R82-85*)
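As background (a standard formulation, not a quotation from either manuscript): the LAI-2000 ring method, which CAN-EYE emulates, derives effective PAI from the gap fraction P(θ) measured in several zenith-angle rings via Miller's (1967) integral, which assumes randomly distributed (unclumped) foliage:

\[ \mathrm{PAI_{eff}} = 2\int_{0}^{\pi/2} -\ln P(\theta)\,\cos\theta\,\sin\theta\,\mathrm{d}\theta \]

Clumping makes the measured P(θ) larger than randomly distributed foliage of the same area would produce, so this integral underestimates true LAI unless a clumping correction is applied.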

R98 - “Overview of study area at La Selva, Costa Rica.” 

What is the point of the sentence?

We apologize for this editing oversight and thank the reviewer for bringing it to our attention.

R98-R104 - Lines denote trails through the research reserve, and points denote locations where smartphone images were taken within 2 weeks of lidar collection (top). Example smartphone fisheye image taken at trail location CES750 (location indicated on overview map with white ring; bottom left). Example lidar point cloud data centered at trail location CES750 (bottom right).The station includes a mixture of evergreen old growth and secondary tropical wet forest.

: This is a copy of the Fig 1 caption. Instead, please give the description of the forest structure (sampling points), species topography and other relevant information that may explain your results and make them comparable with other publications.

We again apologize for this editing oversight and have edited this section to include more detailed information about forest structure in our study and to specify the distribution of forest types among our trail location data: "Our study area was La Selva Biological Station, situated within an intact lowland tropical forest of northeastern Costa Rica at 10°26’ N and 83°59’ W. The mean annual temperature is 26 °C and the mean annual rainfall is 4000 mm [24]. Average daytime temperature remains fairly constant year-round while the months of January through April and September and October see drier conditions – however, even during the drier period, monthly total rainfall rarely fails to exceed 100 mm [25]. La Selva forests are multilayered and biodiverse, containing species of trees, lianas, epiphytes, and broad-leafed monocots; the leguminous tree species Pentaclethra macroloba is particularly abundant in old growth regions [25]. The station includes a mixture of evergreen old growth and secondary tropical wet forest, and our data were acquired from both forest age classes (Figure 1a). The CES trail, along with portions of the CEN trail, traverses old terrace primary forest. In addition to Pentaclethra, the upper canopy in this forest type is characterized by large emergent trees such as Dipteryx panamensis and Hymenolobium mesoamericanum (both members of Fabaceae), the subcanopy is abundant in Warscewiczia coccinea (Rubiaceae), and the understory is dominated by Capparis pittieri (Caparaceae) and the palm Bactris porschiana. Secondary forest, which encompasses most of the STR and SAZ trail locations included here, is dominated by tree species such as Cecropia insignis and C. obtusifolia (both Cecropiaceae), Goethalsia meiantha (Tiliaceae), and Laetia procera (Flacourtiaceae) [25].” (R132-153*)

R109 - Lidar data from BPAR have a location error < 5 cm; this and other characteristics of BPAR are provided in Kellner et al. (2019) [25].

I do not understand - did you use the same dataset as [25] or is this just a reference to BPAR characteristics?

This is a reference to the BPAR characteristics--data described in that reference are from a different campaign but were collected using analogous flight conditions. We clarified this to read: “Lidar data from BPAR have a location error < 5 cm; this and other characteristics of BPAR are provided in Kellner et al. (2019), which describes data collected by this platform with analogous flight design at a different location.” (R158-160*)

R112 - Lidar were collected using two sets of orthogonal fl...

How do you collect lidar?

Lidar data were collected by the BPAR drone platform, as described above. We clarified this sentence to read: “Lidar were collected from the BPAR…” (R162*)

R113 -R115

: Maximum scan angle?

We describe the maximum scan angle used in our analysis in the lidar PAIeff estimation section below. (R182*)

: Beam footprint size at canopy level?

We added that the footprint size at canopy level is ~ 5 cm. (R163*)

: Share of 1st, 2nd etc. returns in dataset? 

These characteristics are described in detail in the given Kellner et al. (2019) reference. We chose not to include them here because this is a short-format manuscript and we feel that they are not particularly important for the analysis at hand.

: 3500 pts/m². This means 1.7 cm distance on a flat surface between return locations. The position (horizontal or vertical?) error is given above as 5 cm. So there is a lot of random noise in the data at decimetre scale.

Yes, there is considerable random noise at the decimeter scale. However, as described below, our lidar PAIeff estimates are carried out over scales orders of magnitude greater--over areas of > 300 m² in size, in vertical bins of 1 m height. Therefore, noise at the decimeter scale will have negligible effects on our PAIeff estimates. 
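As a quick back-of-the-envelope check (illustrative arithmetic only, written in R; the footprint radius below is a placeholder of the same order as the radii considered in our analysis):

density <- 3500                       # lidar returns per m^2
spacing <- 1 / sqrt(density)          # ~0.017 m, i.e. roughly 1.7 cm between returns
radius  <- 20                         # illustrative footprint radius in m
n_returns <- density * pi * radius^2  # ~4.4 million returns within one footprint
c(spacing = spacing, n_returns = n_returns)

With millions of returns contributing to each footprint, decimetre-scale position noise on individual returns averages out in the binned profiles, and the pulse count far exceeds the roughly 1000 pulses mentioned earlier in our responses as sufficient to keep relative error and bias below 5%.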

R118 - intercept =-1.406

: So, there is 1.4 m systematic difference. In which direction was the BPAR dataset shifted and how did you account for this when subtracting ground elevation model?

In re-checking this information, we found an error in this sentence; the systematic difference should be -0.406 and we corrected this value. Given the uncertainty in ground-based estimates of elevation, it is impossible to say whether the DTM or the ground-based control points were biased in this previous study. However, given that the total bias (now corrected – R168*) was < 1 m (the vertical resolution of our LAI calculation) and that the R² value of the ground model verification was 0.994 with a slope of 0.999, we do not expect that this affects our results on spatial variation in PAIeff. We further verified by visual inspection (e.g. Fig. 1c) that the ground pulse in the lidar density is reasonable. 
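For transparency, the kind of ground-model check described above can be sketched in R as follows (one plausible form only; the data frame and variable names are hypothetical, not our actual verification script):

fit <- lm(control_z ~ dtm_z, data = gcp)  # ground control elevations vs. DTM elevations
coef(fit)                                 # slope and intercept (about 0.999 and -0.406 m in the check described above)
summary(fit)$r.squared                    # coefficient of determination (about 0.994 in the check described above)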

R126 - “within a given radius around hemispherical image locations”

How was the radius calculated?

We have added the following sentence to explain the calculation of the radius used for identifying close points: “Initially, the radius was calculated from the mean forest canopy height, 20 m, and the 60° field of view of the fisheye lens used to capture the smartphone images.” (R178-180*)
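For reference, the standard hemispherical-image footprint geometry (given here only as an illustration; the exact expression used appears in R178-180* of the revised manuscript) projects a maximum view zenith angle θ_max over a canopy of mean height h to a ground radius of

\[ r = h \tan\theta_{\max} = 20\,\mathrm{m} \times \tan 60^{\circ} \approx 34.6\,\mathrm{m}. \]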

R131 - “We will refer to LAI estimated using drone based discrete return lidar data as “lidar LAI””: Here and elsewhere in the text. In rows R31-R32 you give "LAI is a dimensionless quantity defined as the one-sided total leaf area (m²) per unit horizontal ground area (m²)". 

Since the lidar data - based values were not corrected for woody part then use some other variable or a subscript for the indirect estimates. Something like L_eff,lidar will avoid confusion.

We agree with the reviewer that a less ambiguous term should be used to refer to the LAI estimates in our paper and have therefore changed our variable name from LAI to PAIeff. We have added the following sentence to the introduction where we introduce our measurements: “We choose to use the term PAIeff to indicate that we have not attempted to tune models for local leaf clumping values or remove contributions from non-photosynthetic material.” (R110-112*)

R141 on a 39.5-inch tripod stand

: Use metric units. Was this the height of the sensor over ground surface?

We have changed our units to meters and clarified that this height was the height of the smartphone that took the canopy images over the ground. (R200-202*)

R145 Trail locations SAZ150-950 and STR600-STR1000 were taken with the iPhone X model and the remaining were taken with the SE model.

: So, you do not have comparable measurements in same locations with smartphones. How much was the forest different on the tracks?

We thank the reviewer for pointing out this flaw in our explanations. We now explain that this limitation of the methods was not by intention (to compare phones) but by fieldwork necessity. (R202-203*)

To further clarify, we have added the following sentences to describe the distribution of old growth and secondary growth forest locations between the two iPhone models (we realized we had used an iPhone XR, not an X) used to acquire hemispherical images: “Of the 22 images acquired by the iPhone XR, 1 was taken in the old growth forest type and 21 were taken in the secondary growth forest type. Of the 20 images acquired by the iPhone SE, 13 were taken in the old growth forest type and 7 were taken in the secondary growth forest type. In total, we took 14 hemispherical photos of old growth canopy and 28 hemispherical photos of secondary growth canopy.” (R207-211*) Furthermore, while we acknowledge that the differences in these phone models may introduce error between PAIeff estimated with CAN-EYE from images taken with the XR versus images taken with the SE, we also point out that the major camera features of both models are mostly the same: “Both the iPhone SE and the iPhone XR have a single 12-megapixel wide camera with an f/1.8 aperture and an optical image stabilization feature.” (R114-115*)

Finally, we have modified all figures to explicitly indicate which data points come from each phone model and forest type.

R140-R150

: There is no information about camera settings used during photography. 

We have addressed this comment by adding the following sentence to our paragraph on hemispherical image acquisition: “Camera settings were set to the default modes, and neither flash photography nor digital zoom were used during image acquisition.” (R117-119*)

: What was the radius in pixels corresponding to 60 deg view zenith?

The radius in pixels is 1580 pixels--we have added this to the text. (R119*)

R167 - "but given that our calculated optical centre was very close to the theoretical (assuming a perfect lens) optical centre, we did not perform the suggested calibration and instead used the theoretical projection function."

: So you mean that optical centre determines the projection mode of lens? This is not true.

We agree with the reviewer that this justification was incorrect and have revised our explanation of the projection function estimation as follows: “We determined the projection function (i.e. the function that relates view angle to distance from the optical center) assuming a perfect lens, but CAN-EYE also proposes a more complicated method for manual calibration of the projection function. Assuming a perfect lens, the projection function is assumed to be a first order polynomial where the coefficient is found as the maximum field of view of the lens divided by the length (in pixels) of the diagonal of the hemispherical image.” (R249-256*) While it would be ideal to perform a calibration of the projection function and we acknowledge that the lack of such calibration may affect the accuracy of our image processing, we are unable to carry this out at this time because the lens used in this study is currently located in our office building on our University’s campus, which is closed to all traffic during the current pandemic.
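As an illustration of the perfect-lens assumption described above, a minimal R sketch is given below (the numerical values are placeholders, not the calibration values of our lens, and this is not the CAN-EYE calibration code itself):

# Perfect-lens assumption: view zenith angle increases linearly with radial
# distance from the optical centre (first-order polynomial, no higher-order terms).
fov     <- 120   # placeholder: full angular field of view of the lens, degrees
diag_px <- 3160  # placeholder: diagonal of the hemispherical image, pixels
coef    <- fov / diag_px               # degrees of view angle per pixel
theta   <- function(r_px) coef * r_px  # view zenith angle at radial distance r_px
theta(1580)                            # 60 degrees with these placeholder values

Under this assumption, only the single coefficient above needs to be specified.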

:: The lens is removable. How much did the centre position change when lens was repeatedly attached?

We have added the following sentence to explain our process for reducing variability in lens position and thank the reviewer for pointing out this important potential source of error: “The position of the lens was carefully marked on both smartphones during the image acquisition process to reduce error that might be introduced into the optical center calibration by reattaching the lens in the wrong position.” (R246-249*)

R150-R170 

: How was the operator subjectivity handled in image processing in CAN_EYE?

We thank the reviewer for pointing out the need to address this issue in our manuscript and have added the following sentences to explain our efforts to reduce operator subjectivity during the pixel classification process: “Random error introduced by operator subjectivity is not completely avoidable, but all images were processed in CAN-EYE by the same user to avoid issues of subjectivity between users. Optical distortions were carefully classified in the same manner for each photo...” (R228-230*) Further, we include an additional figure in the supplemental material to show an example classification scheme around distorted leaves and explain that the classification guidelines shown in this figure were applied to each subsequent image. (R230-232*)

R206 - Figure 2. (a) MAE and (b) Pearson’s

: Is this for a single point? If not, then plot 95% confidence intervals. 

: Which smartphone model?

We have updated this figure to include 95% confidence intervals for both plots, and we have clarified in the figure caption that all 42 trail location images, including all images taken by both iPhone models, were used in the calculations displayed in the images. (R342-347*)
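For readers who wish to reproduce interval estimates of this kind, one common approach is a nonparametric bootstrap over the paired locations; a minimal R sketch follows (variable names are placeholders, and this is not necessarily the exact procedure used for the revised figure):

# smartphone_pai and lidar_pai are paired PAIeff estimates at the 42 trail locations
set.seed(1)
boot_ci <- function(x, y, n_boot = 2000) {
  stats <- replicate(n_boot, {
    i <- sample(seq_along(x), replace = TRUE)        # resample locations with replacement
    c(mae = mean(abs(x[i] - y[i])), r = cor(x[i], y[i]))
  })
  apply(stats, 1, quantile, probs = c(0.025, 0.975)) # 95% percentile intervals for MAE and r
}
# boot_ci(smartphone_pai, lidar_pai)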

R209 - Figure 3. Relationship between smartphone

: Use the same range for both axes or add a 1:1 line.

We have updated this figure to include a 1:1 line. As noted above, we have additionally updated the points to differentiate between old growth and secondary growth locations as well as between PAIeff estimated from images taken with an iPhone SE and an iPhone XR. (R348-352*)

R244 - R245 - “Specifically, the iPhone X consistently underestimated LAI (positive MSE) compared to lidar LAI, while the iPhone SE consistently …”

: I do not understand, was the metric obtained from smartphone (screen) or after processing the images with CAN_EYE ?

This section (originally 3.3) has been deleted.

R249 Table 3. MAE, MSE, Pearson’s c

: The results show that the image data processing has systematic errors. Please take a look at your image data, the image projection calibration in CAN_EYE and also the classification in CAN_EYE. 

We examined our image processing methods and discovered an error in our initial projection function estimation. While it would be more accurate to do a formal projection function calibration, we are not able to access the lens used in this study due to current travel restrictions. However, refining our projection function estimation using the methods outlined in the CAN-EYE user manual and more carefully minimizing user subjectivity during the pixel classification process did increase the correlation between the smartphone and lidar PAIeff values.

We did not expect the actual magnitude of PAIeff estimations to be equivalent for two main reasons: 1) although both the fisheye image processing and lidar analysis include non-photosynthetic material in PAIeff estimations, fisheye images are acquired from below the canopy while lidar data are acquired from above the canopy, which very likely introduces systematic bias between the two sets of calculations, and 2) smartphone fisheye images introduce distortions that more sophisticated equipment such as digital cameras do not, and therefore are likely to give less accurate PAIeff estimations.

We think the value of our study is that, given the inherent limitations of smartphone photography for estimating PAIeff, the significant positive correlation between smartphone PAIeff and lidar PAIeff provides evidence that smartphone fisheye photography can still be used to assess spatial variation in PAIeff in a complex and heterogeneous tropical forest canopy. This accessible method of PAIeff spatial variation assessment can be used to supplement a variety of tropical forest studies that otherwise may be hindered by a lack of more expensive LAI estimation equipment; however, it is acknowledged here that the smartphone fisheye PAIeff estimation using the CAN-EYE program has not been validated with direct LAI measurements in tropical forests, and this is still a limitation to the method.

Additionally, we discovered in our re-analysis that the correlation between smartphone and lidar PAIeff values increased substantially when the leaf angle distribution (LAD) used in the lidar PAI calculation was optimized for each forest age class (spherical for old growth and planophile for secondary), and we explain in the discussion why we believe this to be ecologically appropriate. (R525-527*) We have also edited Table 3 and added Figure 4 to display the results of comparing LAD – we show that when LAD is accordingly optimized for both forest types, the correlation rises to 0.77 and the mean absolute and mean signed errors are minimized (compared to other LADs).
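For clarity about how we interpret the agreement metrics reported in Table 3, a short R sketch is given below (variable names are placeholders; the sign convention for the mean signed error is an assumption stated in the comments, not a quotation from the manuscript):

agreement <- function(smartphone_pai, lidar_pai) {
  c(MAE = mean(abs(smartphone_pai - lidar_pai)),  # mean absolute error
    MSE = mean(lidar_pai - smartphone_pai),       # mean signed error; positive when the smartphone underestimates (assumed convention)
    r   = cor(smartphone_pai, lidar_pai))         # Pearson's correlation coefficient
}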

:: The different smartphone models were used on different tracks. There is no information in the manuscript about the number of old-growth and secondary forest plots measured by each smartphone. The estimated index values from one of the smartphone models were systematically biased. Unless the errors are removed, it is not correct to calculate mean values for forest age classes. The figure in the supplementary data, "Fig S1: Map of forest types.", hints that different smartphone models may have been used for different forest age classes.

We have added information about the comparison of the two iPhone model cameras (R214-215*) and a description of the number of old growth and secondary growth trail location images acquired by each model (R207-211*).

In response to comments from reviewer 3, we added an additional model comparison aspect that more thoroughly tests for and describes residual bias due to phone model and forest type. We describe this analysis in our methods section 2.6 and present the results in section 3.3.

R254-R256 - For both lidar and smartphone estimates, average LAI was greater in old growth forest locations compared to secondary forest locations, however, the magnitude of the difference between old growth and secondary forest LAI was smaller for lidar LAI than ...

: Here is one example of why PAI must be used instead of "LAI": the leaf area index of rainforests is of interest to many researchers, and it is easy to misinterpret the results.

As described above, we now use “lidar PAIeff” and “smartphone PAIeff” throughout. 

R270-272 "Therefore, our results indicate that inexpensive smartphone methods are valid for characterizing relative spatial variation in LAI within a tropical forest canopy but do not support the application of smartphone fisheye images for accurately estimating the magnitude of LAI."

: How do you know that the point cloud-based values gave the correct magnitude? There was no attempt to analyse possible causes of differences between smartphone models. The forest at the test site seems to be well studied (https://en.wikipedia.org/wiki/La_Selva_Biological_Station) - what is the magnitude of the leaf area index in the forests where the sample points were established?

We agree with the reviewer on this point--while we are confident that the lidar method captures spatial variation in PAIeff, it is possible that the magnitude of lidar PAIeff is biased. We edited this section to clarify and provide further context to our research site at La Selva: “While we are confident that our lidar method is appropriate for estimating spatial variability in PAIeff, without destructive measurements of leaf area we cannot know if the magnitude of lidar PAIeff is itself biased. Both lidar and smartphone PAIeff estimates in this study are smaller than those reported previously from DHP (3.76 ± 0.11 SE) for 18 0.5 ha plots in the old growth forest at La Selva [34], but our measurements were taken following an unprecedented blowdown disturbance that caused widespread mortality at La Selva in May 2019 so our lower PAIeff values may be realistic [35].” (R486-491*)

Reviewer 2 Report

The paper investigates the possibility of employing images taken with relatively inexpensive smartphone fisheye cameras to derive spatial variation in leaf area index in a heterogeneous tropical forest canopy. The paper is well written and interesting! However, there are some concerns that need to be addressed.

Please find the comments below:

Page 4, Line 153: How are the CAN-EYE parameters selected in this paper?

Page 4, Line 124: "The algorithm was originally written in MatLab, and we translated the code for R [27]" This information is irrelevant here, as the importance is on the method used, and not how/where it was implemented. I suggest removing it.

Page 4, Line 161: "We carried out our initial LAI estimations using all three ring methods, which we call smartphone LAI3, LAI4, and LAI5.". There is some error here! Did you mean "We carried out our initial LAI estimations using all the three ring methods, which we call smartphone LAI3, LAI4, and LAI5, respectively."?

Page 6, Figure 3: It would be informative to show the R^2 value and fitted line in the figure.

Page 7, Line 217: The gradual change in MAE and correlation broke down at radii less than 16 m and greater than 47 m. Please reason a bit on the possible cause of the breakdown!

Page 8, Line 266: Were the data acquired from a helicopter (page 3, Line 107) or a drone? I suggest using terminologies consistently.

Author Response

Reviewer's comments are in bold and authors' responses are in regular text. R#* denotes row number in revised manuscript.

 

The paper investigates the possibility of employing images taken with relatively inexpensive smartphone fisheye cameras to derive spatial variation in leaf area index in a heterogeneous tropical forest canopy. The paper is well written and interesting! However, there are some concerns that need to be addressed.

Please find the comments below:

Page 4, Line 153: How are the CAN-EYE parameters selected in this paper?

We used the calibration method in the CAN-EYE user manual to find the optical center and estimated the projection function coefficient using a method in the user manual assuming a perfect lens. We initially used all three LAI2000 PAIeff estimations (PAI3, PAI4, and PAI5) and found that the PAI4 method consistently reduced mean absolute error when compared to lidar PAIeff, so we used this method for all subsequent analyses. All other parameters were automatically set to CAN-EYE’s default mode and are listed in Table 1. We decided not to discuss these parameters in more detail here because we feel this detail is beyond the scope of a Letter format, which aims to be a quick dissemination of results, and the information is available in full detail in the CAN-EYE user manual.

Page 4, Line 124: "The algorithm was originally written in MatLab, and we translated the code for R [27]" This information is irrelevant here, as the importance is on the method used, and not how/where it was implemented. I suggest removing it.

We agree with the reviewer and have removed this sentence from the manuscript.

Page 4, Line 161: "We carried out our initial LAI estimations using all three ring methods, which we call smartphone LAI3, LAI4, and LAI5.". There is some error here! Did you mean "We carried out our initial LAI estimations using all the three ring methods, which we call smartphone LAI3, LAI4, and LAI5, respectively."?

We have made this correction.

Page 6, Figure 3: It would be informative to show the R^2 value and fitted line in the figure.

After further analysis, we added a new figure (Figure 4) that displays the correlation between smartphone and lidar PAIeff when the leaf angle distribution (LAD) is optimized for both forest classes, which leads to a stronger correlation than the one depicted in Figure 3 (r = 0.77 when LAD is optimized versus 0.62 when it is not). This new figure does include the fitted line as suggested, and the r² value is given in the text immediately below.

Page 7, Line 217: The gradual change in MAE and correlation broke down at radii less than 16 m and greater than 47 m. Please reason a bit on the possible cause of the breakdown!

We have added the following sentence to address this comment: “We suspect this is due to the spatial structure of the forest canopy not being consistent at radii much smaller or larger than the footprint radii of the smartphone images.” (R334-336*)

Page 8, Line 266: Were the data acquired from a helicopter (page 3, Line 107) or a drone? I suggest using terminologies consistently.

We have clarified in R156* that data was acquired by a helicopter-style drone to be more consistent with subsequent references to “drone lidar”.  

Reviewer 3 Report

1) Why did you examine the relationships between LI-COR LAI-2000-based LAI and smartphone LAI? Are you sure that drone lidar-based LAI is an accurate (reference) LAI? Response variable? Please provide some more info about the LAI estimation accuracy of the drone-based lidar system.

2) I would want to see a model-based statistical approach to understand the relationships between drone lidar LAI and smartphone LAI. Yes, the correlation coefficient provides some insight into estimating LAI from smartphone fisheye photos. However, a robust statistical approach together with a validation test could be applied to better understand the predictive capacity of smartphone fisheye photos.

3) Why did you take 42 plots? Do the 42 plots cover all forest stand types in the study area? I recommend at least 60 plots (40 for model training and 20 for model validation) for such research.

4) The graphics in Figure 3 show that LAI estimation from a smartphone is somewhat difficult. Yes, a weak relation exists between smartphone LAI and lidar LAI; however, we cannot judge from the correlation coefficient alone. A model and model validation may tell us more than the correlation coefficient.

Author Response

Reviewer's comments appear in bold and authors' responses appear in regular text. R#* denotes row number in revised manuscript.

 

1) Why did you examine the relationships between LI-COR LAI-2000-based LAI and smartphone LAI? Are you sure that drone lidar-based LAI is an accurate (reference) LAI? Response variable? Please provide some more info about the LAI estimation accuracy of the drone-based lidar system.

 

We did not examine the relationships between LI-COR LAI-2000 and smartphone LAIs - instead, smartphone PAIeff as calculated from fisheye images by CAN-EYE is based on the LAI-2000 estimation method.

 

We thank the reviewer for pointing out that the rationale should be further clarified. This comment is similar to that of Reviewer 1. As noted in our response to Reviewer 1, we have added text to the abstract, introduction, and methods to clarify. 

 

2) I would want to see a model-based statistical approach to understand the relationships between drone lidar LAI and smartphone LAI. Yes, the correlation coefficient provides some insight into estimating LAI from smartphone fisheye photos. However, a robust statistical approach together with a validation test could be applied to better understand the predictive capacity of smartphone fisheye photos.

 

In response to this suggestion, we added a more cohesive model comparison approach to evaluate variation in smartphone PAIeff explained by forest structure (lidar PAIeff), and whether or not there is significant residual bias explained by phone model and/or forest type (Section 2.6, Table 3, Section 3.3). We feel that this approach has greatly improved the explanation of our results.
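A minimal R sketch of this kind of comparison is shown below (the data frame and variable names are hypothetical, and the exact model structure is given in Section 2.6 of the revised manuscript):

# dat: one row per trail location with columns smartphone_pai, lidar_pai,
# phone_model (SE / XR) and forest_type (old growth / secondary)
m0 <- lm(smartphone_pai ~ lidar_pai, data = dat)
m1 <- lm(smartphone_pai ~ lidar_pai + phone_model, data = dat)
m2 <- lm(smartphone_pai ~ lidar_pai + forest_type, data = dat)
anova(m0, m1)  # residual bias attributable to phone model?
anova(m0, m2)  # residual bias attributable to forest type?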

 

3) Why did you take 42 plots? Do the 42 plots cover all forest stand types in the study area? I recommend at least 60 plots (40 for model training and 20 for model validation) for such research.

 

While we agree with the reviewer that more plots would allow a more thorough investigation, we were only able to collect 42 images given the time and weather constraints during our initial field campaign (images can only be taken during a short window of diffuse light conditions at dawn and dusk, and it rains often at La Selva). Unfortunately, we are currently unable to collect more data due to global travel restrictions and stay-at-home orders.

 

However, the total number of images included is similar to other published work comparing hemispherical photo and lidar LAI, which had 48 sample locations:

 

Kamoske, Aaron G., et al. "Leaf area density from airborne LiDAR: Comparing sensors and resolutions in a temperate broadleaf forest ecosystem." Forest ecology and management 433 (2019): 364-375.

 

4) The graphics in Figure 3 show that LAI estimation from a smartphone is somewhat difficult. Yes, a weak relation exists between smartphone LAI and lidar LAI; however, we cannot judge from the correlation coefficient alone. A model and model validation may tell us more than the correlation coefficient.

 

We thank the reviewer for this feedback. We feel that this comment is also addressed by the model comparison approach now included.

Round 2

Reviewer 1 Report

This version of manuscript is much improved. Some minor points must be addressed.


R32-R33 "...be necessary to use more samples to characterize LAI variation using this inexpensive method compared to traditional hemispherical lenses."
: It is not about lenses. It is about the data that is processed. If it were possible to get raw sensor data (linear response values depending on incident radiation) out of smartphones, then smartphone fisheye images (SFEs) could be truly comparable to an LAI PCA device. However, the user has access only to images that are the result of a long processing chain designed to produce nice-looking pictures for the human eye. The canopy gap angular distribution (and all predictions based on it) calculated from such images has substantial uncertainty, including random and systematic errors.

: Increasing the sample size does not remove the uncertainty that is caused by cameras when they convert sensor linear data into pictures for human vision.

R90-R93 Figure 1. Overview ..

: Align subfigure (a) to the left. There will be enough space for longer and self-explanatory labels for smartphones. This avoids long text in the caption.
: Colour legends are self-explanatory. "is shown at a 1-m resolution" does not add useful information.
: Change the legend from "Height" to "Canopy height".
: What does the label "200 m" above black line stand for on the figure?


R177-R182
: One image was taken at each sampling point. What was the positioning accuracy of the sampling points? Please give this information briefly or add a reference where it can be found. With time-consuming RTK GPS or land-survey instruments, errors smaller than 1 m can be achieved. With a common handheld GPS (separate or in a smartphone) the positional error may reach up to 40 m in forest. The errors are greater on slopes.


R203 "CAN-EYE includes multiple to estimate PAI eff ,"
: Check the sentence. Multiple of what?

R215-R219 "We determined the projection function (i.e. the function that relates view angle to distance from the optical centre) assuming a perfect lens, but CAN-EYE also allows a more complicated method for manual calibration of the projection function [28]. Assuming a perfect lens, the projection function is assumed to be a first order polynomial where the coefficient is found as the maximum field of view of the lens divided by the length (in pixels) of the diagonal of the hemispherical image."

: What about stating simply: "We assumed perfect optical system and used linear projection model. The impact of this assumption on results is small considering other components of measurement uncertainty."

 

R286-289 Figure 3. Relationship between smartphone and ...
: There is no need to rewrite figure legends in figure caption.
: The subfigures seem to be identical. Please check if this can be true.
: Add a small legend "1:1" (perhaps upper left) for the line.
: The square symbols on figure 3b may be misleading, because there are not any. What about removing them and just using corresponding colour for label "Old Growth" and "Secondary"?


R 312+ ... Table 2
* denotes p < 0.001.

: Usually the level p < 0.05 is used for significant values (it can also be p < 0.001). Italics are used for non-significant values.


R326-R331 Figure 4. Relationship between ...

: Check the journal layout. If there is a single column per page, then there will be sufficient space to construct a legend for the figure and avoid long text in the caption. Even "Data points are distinguished as in Figure 3" is sufficient, as readers can find Figure 3.

R405 : "... taken compared to other, more expensive equipment."
: Expensive equipment itself is no guarantee of reliable measurements. What about first calibrating the SFE-based measurement procedure on points where canopy gap fraction is measured with reliable methods (e.g. LinearRatio or LAI-2200 PCA, TRAC) and the uncertainty is really known?

 

Author Response

Reviewer's comments are in bold and authors' responses are in regular text. R#* denotes the row number in the newly revised manuscript. 

 

This version of manuscript is much improved. Some minor points must be addressed.

 

R32-R33 "...be necessary to use more samples to characterize LAI variation using this inexpensive method compared to traditional hemispherical lenses."

: It is not about lenses. It is about the data that is processed. If it were possible to get raw sensor data (linear response values depending on incident radiation) out of smartphones, then smartphone fisheye images (SFEs) could be truly comparable to an LAI PCA device. However, the user has access only to images that are the result of a long processing chain designed to produce nice-looking pictures for the human eye. The canopy gap angular distribution (and all predictions based on it) calculated from such images has substantial uncertainty, including random and systematic errors.

We thank the reviewer for this comment and agree that a more detailed explanation is necessary. To do this, we have modified the abstract to say “Our results suggest that smartphone images can be used to characterize spatial variation in PAIeff in a complex, heterogeneous tropical forest canopy, with only small reductions in explanatory power compared to true digital hemispherical photography.” (R35-37*). 

To acknowledge this source of error, we have also added the following sentence in the discussion: “PAIeff estimations derived from smartphone images are likely affected by random and systematic errors introduced by the image processing chains raw data undergo before becoming available to the user.” (R531-533*).

: Increasing the sample size does not remove the uncertainty that is caused by cameras when they convert sensor linear data into pictures for human vision.

We thank the reviewer for making this point and have removed this claim from the abstract. We have also edited similar statements about sample size in the conclusion section as suggested by the final comment.

 

R90-R93 Figure 1. Overview ..

: Align subfigure (a) to the left. There will be enough space for longer and self-explanatory labels for smartphones. This avoids long text in the caption. 

: Colour legends are self-explanatory. "is shown at a 1-m resolution" does not add useful information. 

: Change the legend from "Height" to "Canopy height".

: What does the label "200 m" above black line stand for on the figure?

We have made the suggested changes to Figure 1 and shortened its caption. The black line was a 200 m scale bar, which we clarified by changing the text to “Scale: 200 m”.  (R124*, R128-134*)

 

R177-R182 

: One image was taken at each sampling point. What was the positioning accuracy of the sampling points? Please give this information briefly or add a reference where it can be found. With time-consuming RTK GPS or land-survey instruments, errors smaller than 1 m can be achieved. With a common handheld GPS (separate or in a smartphone) the positional error may reach up to 40 m in forest. The errors are greater on slopes.

Sampling location data were collected with a handheld GPS in 2005. We have added the GPS make and model to the methods section. We agree with the reviewer that this could be an additional source of error in the relationship between smartphone and lidar PAIeff, and we added this caveat to our discussion section:

“We acknowledge two additional sources of error that could cause the relationship between smartphone and lidar PAIeff to be weaker in our study compared to previous analyses. … Second, there may be geo-referencing errors between the smartphone and lidar data because trail marker locations were geo-located using a handheld GPS unit.” (R529-534*)

 

R203 "CAN-EYE includes multiple to estimate PAI eff ,"

: Check the sentence. Multiple of what?

We have edited this sentence to say “CAN-EYE includes multiple methods to estimate…” (R241-242*)

 

R215-R219 "We determined the projection function (i.e. the function that relates view angle to distance from the optical centre) assuming a perfect lens, but CAN-EYE also allows a more complicated method for manual calibration of the projection function [28]. Assuming a perfect lens, the projection function is assumed to be a first order polynomial where the coefficient is found as the maximum field of view of the lens divided by the length (in pixels) of the diagonal of the hemispherical image."

: What about stating simply: "We assumed perfect optical system and used linear projection model. The impact of this assumption on results is small considering other components of measurement uncertainty."

We agree with the reviewer that these sentences are more concise and have made these changes in the manuscript. (R254-256*) 

 

R286-289 Figure 3. Relationship between smartphone and ...

: There is no need to rewrite figure legends in figure caption. 

: The subfigures seem to be identical. Please check if this can be true. 

: Add a small legend "1:1" (perhaps upper left) for the line. 

: The square symbols on figure 3b may be misleading, because there are not any. What about removing them and just using corresponding colour for label "Old Growth" and "Secondary"?

We have edited the caption, added a 1:1 line legend, and changed the colors of the forest type legend text in place of the square symbols as suggested. There was a mistake in the subfigures; we have fixed this as well.

 

R 312+ ... Table 2 

* denotes p < 0.001.

: Usually the level p < 0.05 is used for significant values (it can also be p < 0.001). Italics are used for non-significant values.

We have made these changes to Table 2 and the caption accordingly. (R355*, R357-360*)

 

R326-R331 Figure 4. Relationship between ...

: Check the journal layout. If there is a single column per page, then there will be sufficient space to construct a legend for the figure and avoid long text in the caption. Even "Data points are distinguished as in Figure 3" is sufficient, as readers can find Figure 3.

As suggested by the reviewer, we shortened the caption by adding a legend to the figure. (R455-461*)

 

R405 : "... taken compared to other, more expensive equipment."

: Expensive equipment itself is no guarantee of reliable measurements. What about first calibrating the SFE-based measurement procedure on points where canopy gap fraction is measured with reliable methods (e.g. LinearRatio or LAI-2200 PCA, TRAC) and the uncertainty is really known?

We have updated this sentence as follows: “It may be possible to further explain the remaining variation in the relationship between smartphone and lidar estimates by considering reliable measurements of other canopy structural properties (e.g. canopy gap fractions), or by comparing to direct harvest LAI measurements.” (R587-592*)
