Article
Peer-Review Record

Tree Reconstruction Using Topology Optimisation

Remote Sens. 2023, 15(1), 172; https://doi.org/10.3390/rs15010172
by Thomas Lowe *,† and Joshua Pinskier †
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 17 October 2022 / Revised: 15 December 2022 / Accepted: 22 December 2022 / Published: 28 December 2022 / Corrected: 25 May 2023
(This article belongs to the Special Issue 3D Point Clouds in Forest Remote Sensing II)

Round 1

Reviewer 1 Report

This paper discusses a technique for robustly reconstructing plausible 3D models of tree trunks and branches, using topology optimization, from lidar point clouds of trees measured under unfavorable conditions and containing many noisy and occluded areas.

The proposed method for generating tree models from 3D point clouds might appear similar to methods in computer graphics. However, it is unprecedented as a modeling method based on actual 3D point cloud data. Therefore, the originality of the method should be highly appreciated. The paper is well-organized and highly readable. The theory is carefully explained and verified with experimental data.

In the related work section, it would be helpful to add some references on the trends in tree modeling in computer graphics. In addition, a minor revision to the explanation of the experimental part of Section 4 would further improve readability.

 

Minor comments

1. Line 12: "Surface Error" might be better changed to "surface error".

2. Line 33: The authors should indicate some representative references for the "leading methods" they mention.

3. Line 56: Please clarify whether the method is applicable to point clouds obtained by aerial lidar sensors, or limited to ground-based lidar or image sensors.

4. Section 1: Figure 2 is not referenced in the main text; this should be corrected.

5. Line 113: The reference number for "treeQSM" is missing.

6. Line 152: The definition of the variable "N" should be given. Is it the total number of voxels?

7. Line before Eq. (4): The function might be a "smoothly approximated" Heaviside function. Please check the definition.

8. Line 196: The authors should briefly explain why it can be thought of as a "sand" model of the terrain height.

9. Eq. (8): The definition of the parameter "w" should be given.

10. Section 4: Tables 1, 3 and 4 should be referenced at the beginning of Section 4 to help readers understand the examples as early as possible. (In the original manuscript, Table 3 was referenced before Tables 1 and 2.)

11. It might be better to change "Meshed result" to "BSG" in Tables 1, 3 and 4.

12. The authors should discuss some tree-modeling methods developed in computer graphics and clarify their issues. A review paper on tree modeling in the context of 3D urban modeling should also be included. For example:

- Boris Neubert, Thomas Franken, and Oliver Deussen. 2007. Approximate image-based tree-modeling using particle flows. In ACM SIGGRAPH 2007 papers (SIGGRAPH '07). Association for Computing Machinery, New York, NY, USA, 88–es. https://doi.org/10.1145/1275808.1276487

- I. Shlyakhter, M. Rozenoer, J. Dorsey and S. Teller, "Reconstructing 3D tree models from instrumented photographs," in IEEE Computer Graphics and Applications, vol. 21, no. 3, pp. 53-61, May/Jun 2001, doi: 10.1109/38.920627.

- R. Wang, J. Peethambaran and D. Chen, "LiDAR Point Clouds to 3-D Urban Models: A Review," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 2, pp. 606-627, Feb. 2018, doi: 10.1109/JSTARS.2017.2781132.

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

There are two major problems in this manuscript. 

(1) There is no publication indicating that tree branch structure follows topological optimization. So the authors have to prove it theoretically or empirically. 

(2) There is no real validation. The validation should be based on the real trees (such as branch lengths, diameters, and angles). The tables should show the real data of tree structure and the modeled data by topological optimization approach. 

Other minor comments:

The introduction part needs to include some references. 

 

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper presents a new approach to tree reconstruction from lidar point clouds, based on topology optimisation and considering ray clouds to potentially overcome the issues of occlusion. The proposed method shows the capability to reconstruct woody parts of trees from point clouds of a mixed-objects scene rather than a cleaned single tree point cloud. The authors also claim that this method can deal with occlusion problems by estimating tree structure via structural topology optimisation.

 

I think the approach is potentially very interesting. In its current form, however, the paper is lacking in a number of areas and needs significant work before it could be published. I provide some general comments below, and then the annotated manuscript. I am somewhere between the decision of major revision, or reject and encourage resubmission of a much revised manuscript. This is not because there isn't interesting work here; it is just that the evidence presented for how good (or not) the approach is, is really lacking and does not justify the conclusions drawn.

 

General comments

The assumption of wind-loading to optimize - this is a very big assumption, and while TO may be a potentially generic approach eg to work with other constraints, the wind-loading hypothesis is never tested, but effectively just assumed based on one or two papers. In practice, trees face many constraints - and wind is not even the most serious one for many. So optimizing against this, particularly in the case of occlusion, is a very strong assumption that needs more justification (and ideally testing). Much is made of the heuristics used by other approaches - which is totally fair. But then to introduce this assumption and not test it - even in a sensitivity sense - means you have done effectively the same thing. Also, the key part of the TO described in Section 2.2 shows that it requires a specific cost function and many given parameters and boundary conditions. How do you determine these parameters and boundary conditions? Are they applicable for trees in various sizes and shapes? What is the sensitivity to these choices?

 

Test data

The reasons for the choice of particular point clouds are never described - why these trees? What trees are they - species, age, size etc? You mention various problem cases, but what were your criteria here? The cases shown in Table 1 are not really useful in the way they are presented or used. They are too small to view other than superficially, and are only used qualitatively. What are we supposed to infer from these figures, and why? They don't present evidence to show one method is better or worse than another. That would need systematic choices of data, then quantitative and systematic assessment, ideally against properties of the trees that you have measured (ground truth), which could then be estimated from the TO reconstruction and compared. Table 5 is much more useful in this regard - a more systematic exploration. But even then - does removing the sphere replicate occlusion we might see in a lidar point cloud? And how did you go about choosing the sizes etc? But ok, this is still a more convincing way to do it than any of the other examples.

 

Accuracy assessment. 

The aim or aims of the tree reconstruction method are not clear, which makes the accuracy assessment and result comparison less persuasive. For example, if the key feature of this method is the capability to generate topological connections under occlusion, then it is not reasonable to evaluate reconstruction accuracy by the average surface error (SE) between model and point clouds, because SE can neither evaluate the accuracy for the missing-points area nor assess the correctness of topological connection between branches. What the resulting models are to be used for will (or should) affect how you assess the accuracy.

More generally, the metric of SE seems like a poor choice. It is a global metric, but many reconstruction methods might work well locally while being biased at the whole-tree scale. So the accuracy of reconstruction needs to be viewed in different ways for different applications - volume for biomass, branch size for some ecological applications, topology for others etc. A single metric like SE just doesn't represent these things well, and is particularly prone to missing bias, eg where volume may be right, but only because a few larger volumes are slightly overestimated while many small ones are underestimated. The resolution and nature of the point clouds can really affect this.

You mention ground truth but this is never elucidated (or even used as far as I can tell). So without some real measurements of any of these things, you would have to rely on doing sensitivity analysis, eg varying point cloud properties etc. And that is not done. As far as I can tell, the assessment in Table 1 comes from manual assessment of the point clouds - so that will depend on the operator and what they were looking for. Those things can be tested, eg by using multiple people to do the same task.
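The bias-masking concern raised above can be illustrated with a toy numeric sketch (hypothetical numbers, not taken from the paper): a whole-tree metric can report near-zero error while every individual branch estimate is substantially wrong, because signed errors of opposite sign cancel at the global scale.

```python
# Toy illustration (hypothetical numbers): a global aggregate can hide
# large, systematically signed per-branch errors.

# True branch volumes: 2 large trunk segments and 20 small branches (litres).
true_large = [100.0, 80.0]
true_small = [2.0] * 20

# Modelled volumes: large segments overestimated by 10%,
# small branches underestimated by 50%.
model_large = [v * 1.10 for v in true_large]
model_small = [v * 0.50 for v in true_small]

true_total = sum(true_large) + sum(true_small)     # 220 L
model_total = sum(model_large) + sum(model_small)  # 218 L

# Whole-tree volume error looks tiny...
total_err = abs(model_total - true_total) / true_total
print(f"total volume error: {total_err:.1%}")

# ...but the mean per-branch relative error is large,
# because the overestimates and underestimates cancel in the total.
per_branch = [abs(m - t) / t
              for m, t in zip(model_large + model_small,
                              true_large + true_small)]
mean_branch_err = sum(per_branch) / len(per_branch)
print(f"mean per-branch error: {mean_branch_err:.1%}")
```

Here the total-volume error is under 1% while the mean per-branch error is over 45%, which is exactly why a single global metric such as SE can miss application-relevant bias.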




Comments for author File: Comments.docx

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I accept your revision.

Author Response

Thank you for your contribution to the paper, we appreciate your efforts.

Reviewer 3 Report

Author responses and my comments and responses. The authors have done a reasonable job of addressing some of my concerns, but some of them remain. I still think there's an issue with how they are phrasing the work, the generality of the results, qualitative language throughout, and the use of some of the metrics. I think much of this could be dealt with by acknowledging some of the limitations of the work - this doesn't make it weaker, it actually should be a strength. I also still think that the reader is being asked to take too much on trust from some of the results and figures. I strongly suggest that Table 1 is addressed somehow. If I have to rely on zooming in to some unspecified level of detail to judge visually that something is 'good' or not, when I don't have a scale and I don't know what I'm supposed to be looking at to judge that, then it's definitively not good. Tell me what I'm looking for and where, and why. These figures could be great! At the moment, they are not helpful.

Re some of the specific comments the authors make:

"This provides a relatively general solution which naturally prevents undesirable artifacts like multiple trunk solutions and disconnected branches." I agree that the generality is attractive, but it's not established in the paper. Disconnected branches are definitely an issue, but trees with multiple main trunks occur frequently, so are not undesirable or an artifact at all (see natural coppicing, suckering growth etc). This is what I mean by needing to be a lot more careful about strong assumptions that effectively constrain the resulting fit to the way you "expect" it to go, without considering whether that's useful or not. Or testing it and showing it works or doesn't.

The improved discussion of the assumptions is good - and necessary. But it still means there is not a systematic test of the breaking of these assumptions. Yes, some of the test cases may do so, but there's no systematic testing. I'd say this is probably less than the bare minimum given the statements in the paper. If you said something like: "Rather than use these heuristics, we make a set of (at least testable) assumptions x, y, z - we do not fully test them here, but that is something we will do later or invite others to do etc, and you can because they are testable..." then this is a fairer statement of the strengths and weaknesses of your approach versus others.

"It is the scope of papers that develop these optimisation methods (such as refs 37, 38, 39) to analyse their behaviour." If you make claims as to the generality of the application of these methods to your case and/or the lack of heuristics, then it's up to you to show that.

"We do not mention the term ground truth" - you do: p2 line 50 of the original version, p2 line 71 of the revised version, and also in this response ("The quantitative assessment is in Table 2, it is a systematic assessment against ground truth."). The explanation of this is clearer, but the point remains that this is a measure of the fitting of a point cloud to a modelled surface, not a measure of the properties of the tree. That's why ground truth is not a great expression more generally, but in this case definitely not, as it implies (rightly or wrongly) that you are comparing to the thing you're trying to reconstruct, ie measured tree properties. Again, you could fix this by being clearer that you are deliberately not doing that, and why (see also comments below).
 

"The right hand column visually demonstrates the segmentation quality of the established TreeSeparation method." But what does this mean? This is the difficulty of appealing to quantitative at one point and qualitative at another. Quantitative is absolutely better than qualitative for determining some things (eg model fit to data), unless you make absolutely clear what qualitative aspects you are considering. And that's not the case here. Just saying, look, you can see it's better, is not helpful.

"We consider our choice of data to be systematic." But what if I don't - who is right? At least if the criteria you are using are clearly laid out, then we can argue about those. Otherwise...?

Re use of the SE metric:

"The metric is intermediate between length and volume-based weighting, so is the most accurate single option to cover all three use-cases." But is it - where do you show that? It might be, but if you're not showing it, then don't claim to. And if a key application is concerned with volume (as I note below), then a metric that is somewhere between length and volume isn't necessarily the best metric to use for that application. So I think this could be dealt with much more head on - there's no problem in just saying that there are other ways to judge error that may be more or less suitable for a given application, and other people who want to try your method can try that out. But the point is to acknowledge that, and without further testing you're not in a position to say which is better or worse.

"It is true that a reconstruction with leaning trees (for example) would give similar volume estimates, but total volume is not the principle objective of tree reconstruction." This is definitely the objective of eg Fan et al AdQSM 2015, Fan et al 2022, Calders et al 2015, Momo Takoudjou et al 2018, Demol et al 2022, etc. Arguably it is one of the widest uses of tree reconstruction so far from TLS (and now UAV-LS). Others are coming, but this is a key one. So this statement is just not correct. You then go on to give specific use cases where some metrics will be more useful than others, which just reinforces my point.

"We title our paper as 'Tree Reconstruction' as it is a general-case reconstruction of the data, rather than solving any one use-case." - which is the issue. It may or may not be general - you haven't established that either way. But acknowledging that there are different ways to assess accuracy, that are likely to depend on the use case, would actually help other people decide how and when to use your approach. Again, I would see this as a strength of a paper like this to make that point, not a weakness.
 

"We do not mention the term ground truth, however we consider the lidar observations to be the best measurement of the structure of the original tree." - see above, you do mention ground truth, several times. And just saying there's lots of branches so we prefer the SE metric from lidar doesn't make it the best. Sure, it's a pragmatic approach - but best? You're trying to assess the accuracy of a method - would a few measurements of branches, angles, etc be better than none? Maybe, maybe not, but just saying SE is better isn't justified at all. Are you saying it's impossible to truly validate your approach at all? I'm not sure you are, but maybe so. There are clearly things you could do with eg modelled tree structures and point clouds, which are half-way towards this at least, and also some datasets of trees that have been scanned with TLS and destructively harvested with geometric measurements made. I'm not saying you have to do that, but acknowledging that it would be a possibility is important.

 

Still a lot of qualitative or unsupported descriptions - "moderately well"; "robust"; "we consider these both to be good results for the chosen set of data"; "generally less correct"; "appears to realistically"... As I noted before, these don't help - you need to be much tighter in terms of language, and leave the reader to decide. The more of this you leave in, the more it looks like you are trying to get us to look the other way.

 

 

Author Response

Please see the attachment

Author Response File: Author Response.docx
