Opinion
Peer-Review Record

Enhancing Explainable AI Land Valuations Reporting for Consistency, Objectivity, and Transparency

by Chung Yim Yiu * and Ka Shing Cheung
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 20 March 2025 / Revised: 19 April 2025 / Accepted: 22 April 2025 / Published: 24 April 2025
(This article belongs to the Special Issue Land Development and Investment)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript presents a framework for improving the explainability of AI-driven land valuation reports by emphasizing consistency, objectivity, and transparency. The paper critiques current industry guidelines, particularly those from the Royal Institution of Chartered Surveyors (RICS), for lacking specific directives on integrating Automated Valuation Models (AVMs) while preserving core valuation principles. To address this, it provides a checklist for reporting AI/ML valuations, ensuring clarity, completeness, and compliance with legal and professional standards. A practical case study, detailed in Appendix B, illustrates these practices using data from Auckland Central, New Zealand, demonstrating how SHAP and linear regression enhance transparency by detailing how factors such as building age and zoning affect property values.
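For readers unfamiliar with the approach, the sketch below illustrates the kind of SHAP-over-linear-regression workflow the case study describes. It is a minimal, self-contained example, not the authors' code: the feature names (building_age, zoning_score, floor_area) and the synthetic data are purely hypothetical, while the paper's actual data schema and model details appear in its Appendix B.

```python
# Minimal sketch (hypothetical data): SHAP values on a linear hedonic price model.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "building_age": rng.integers(0, 100, 200),   # years
    "zoning_score": rng.uniform(0, 1, 200),      # stylized zoning indicator
    "floor_area": rng.uniform(50, 400, 200),     # square metres
})
# Synthetic prices purely for illustration.
y = (1_000_000 - 3_000 * X["building_age"] + 400_000 * X["zoning_score"]
     + 2_500 * X["floor_area"] + rng.normal(0, 50_000, 200))

model = LinearRegression().fit(X, y)

# For a linear model, each SHAP value is the coefficient times the feature's
# deviation from its background mean, so the per-property contributions plus the
# base value sum to the predicted price -- the transparency property at issue.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
print(pd.DataFrame(shap_values, columns=X.columns).head())
```

The point of the sketch is only that per-property attributions can be reported alongside the predicted value, which is the kind of disclosure the manuscript's checklist asks valuers to document.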

Strengths of the Paper

  • Addresses a critical issue in AI applications in land valuation.
  • Offers a well-structured approach to improving explainability.
  • Provides a case study to illustrate practical application.
  • Proposes structured methods for standardizing AI-generated reports.

Weaknesses and Areas for Improvement

  • The case study, currently placed in Appendix B, provides essential insights that should be incorporated into the main body for better readability and impact. A balanced approach could involve summarizing key findings from the case study in the main text, such as the SHAP results and their implications for transparency, while retaining detailed methodology (e.g., data schema, model performance metrics) in an appendix.
  • The discussion could benefit from more detailed consideration of potential biases in AI models for land valuation. Include a section addressing possible limitations, such as data availability and bias in AI-driven decision-making.
  • The paper lacks an explicit comparison with existing AI-based land valuation models. A brief comparative analysis with other methodologies would enhance the paper’s contextual positioning.
  • Revision required: while the paper presents a valuable contribution to AI-driven land valuation, the following recommendations apply.
  • The recommendation to include the case study in the main text is reasonable, given its illustrative value, but should be implemented through summarization to balance length and coherence.
  • Expand on potential biases and limitations in AI models; address the limitations of the Auckland case study by discussing potential adaptations for other markets, possibly in the conclusions or a new subsection.
  • Include a comparative analysis with existing methodologies, and add a brief comparison with international valuation standards (e.g., IVSC) to enhance the paper's global relevance, particularly in Section 3.

Author Response

Thank you for the encouraging and constructive comments. We have incorporated all of your comments in our revised manuscript; our responses are detailed in the attached file.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The article provides a timely exploration of the issues associated with using AI for property valuation, particularly what this means from a legal and ethical perspective in relation to opaque automated valuation models (AVMs). The authors successfully persuade the reader that the fundamental problem with the current iteration of AI is that these systems cannot be considered to comply with professional accountability standards because they lack transparency. They construct their argument through a combination of legal analysis and technical discussion. The authors draw on appropriate case law and relevant professional standards, showing they have a sound understanding of the requirements of the valuation profession and of the limitations imposed by current AI methods. The legal analysis, in particular, is very effective, as it clearly establishes the fundamental principles of duty of care associated with AI valuation outputs.

The research adopts a methodological strategy that merges theoretical discussion with practical exposition through the application of SHAP (Shapley Additive Explanations) values to increase the interpretability of the model. The Auckland case study offers a real, practical example of how explainable AI technologies might be used in valuation practice. Nonetheless, the empirical aspect of the research seems underexplored: while the case study demonstrates the applicability of XAI, there is little, if anything, that rigorously tests whether those methods will actually hold up to legal and/or professional scrutiny in practice. The checklist for AI valuation reporting represents a potentially meaningful contribution and a useful, practical, implementable step for valuers interested in developing more transparent means of valuation. However, like many of the paper's recommendations, it remains largely theoretical. The research would greatly benefit from either testing the proposed checklist with valuation professionals or considering how it would fit within developing regulatory frameworks for AI accountability.

I see several areas for improvement. In places, the discussion of transparency repeats itself without adding new information or advancing the narrative. The emphasis on the legal context in New Zealand is warranted by the authors' expertise, but it limits generalizability, particularly since other jurisdictions are developing more comprehensive regulation of AI. The explanation of the technical implementation of SHAP is clear; however, it does not fully address whether these methods provide sufficient explainability for legal purposes.

Despite these limitations, the paper serves an important purpose as it explicitly discusses the challenges of incorporating AI into the valuation space while upholding an acceptable level of professional credibility. The discussion of explainability as a precursor to accountability is particularly well reasoned.

The paper achieves its main goal by highlighting the need for more transparency in AI-assisted valuations and suggesting steps to help achieve that transparency. While some of the details of implementation will need to be explored more fully, the basic argument that AI valuation tools must be explainable to gain professional acceptance is sound and necessary. This article is a useful starting point for ongoing dialogue about the responsible and accountable use of AI in the property field.

Author Response

Thank you for the encouraging and constructive comments. We have incorporated all of your comments in our revised manuscript; our responses are detailed in the attached file.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Dear Authors,

Thank you for an interesting manuscript! I enjoyed reading it.

Please see my comments and questions below.

In your article you discuss the relevance of AI in enhancing property/land valuation accuracy and efficiency, and you propose a checklist tailored to AI/ML in valuations.

  1. I suggest adding a section or paragraph describing the methodology for the article itself. I see that you have described the specific method used in your case study in the Appendix and provided an outline in lines 103-119, but the main text of the article lacks information about the scientific methodology (approach) and/or the method(s)/research question(s) that are specifically relevant to the main text.
  2. Please state the main objective/purpose/aim of the article. You discuss it throughout the text in different sections, but the article will benefit if you state it more clearly (directly) in the introduction and in the abstract.
  3. What is the reason that you have not used the latest editions (effective from 2025) of the International Valuation Standards and the RICS Valuation Standards? Your text and reference list include only the 2020-2022 editions.
  4. Since you discuss ethical implications, I suggest that you also add the RICS ethical standards (Rules of Conduct) to your text and reference list.
  5. Line 399, "AL/ML": what does "AL" mean in this abbreviation? I cannot find it in the Abbreviations list.

Best regards,

Reviewer


Author Response

Thank you for the encouraging and constructive comments. We have incorporated all of your comments in our revised manuscript; our responses are detailed in the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Dear Authors,

Thank you for the revised version of the manuscript!

I see that you have taken my comments into account and provided explanations where necessary. The paper has been improved, and I consider it ready for publication.

Best regards,

Reviewer


Author Response

Thank you very much for your positive comments and support for publishing our manuscript.

