Article
Peer-Review Record

A Purely Entity-Based Semantic Search Approach for Document Retrieval

Appl. Sci. 2023, 13(18), 10285; https://doi.org/10.3390/app131810285
by Mohamed Lemine Sidi * and Serkan Gunal
Reviewer 1:
Reviewer 2:
Reviewer 3:
Submission received: 2 August 2023 / Revised: 7 September 2023 / Accepted: 8 September 2023 / Published: 14 September 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

This work, a purely entity-based semantic search approach for document retrieval, proposes a novel solution called PESS4IR that leverages entity-based representation to improve the document retrieval task. The experiments performed on queries annotated by REL and DBpedia-Spotlight showed that PESS4IR achieved the maximum nDCG@5 score of 1.000 for queries in which all terms are annotated and whose average annotation scores are greater than or equal to 0.75. However, this purely entity-based method is appropriate only for completely annotated queries, and the results are partial. Among the completely annotated queries, the ones with higher scores perform better than the rest. Thus, the purely entity-based approach is recommended for highly represented queries whose entities have high annotation scores.

 

My main concern with this work is related to the evaluation, since: (i) the repository or source code is not provided, which makes replicating results difficult, (ii) no ablation tests or error analysis are provided, (iii) the evaluation is limited to fully labeled queries. Could the results not have been reported for all cases, even distinguishing both categories? (iv) Results are not reported with other methods of semantic search that are not based on entity labeling and linking.

 

In summary, I believe that while the article is interesting and well-founded, it requires additional effort in its evaluation. Therefore, I suggest that prior to its publication, efforts are made both at the level of the method itself (ablation tests, error analysis) and in terms of its validation with other methods, such as those pointed out in the state of the art, for example.

Author Response

Thank you very much for your constructive comments and suggestions. 

 

Point 1: (i) the repository or source code is not provided, which makes replicating results difficult,

Response 1: A demo repository of our approach (PESS4IR) is already available on GitHub at https://github.com/El-Emin/PESS4IR-Demo. It provides the experiment that yields the best performance achieved by PESS4IR, the maximum nDCG@5 score of 1.000, on the Robust collection with queries annotated by the REL entity linking tool. The repository contains the source code of the document retrieval and ranking method, the annotated Robust collection as an index file, and the query annotation files.

Regarding the entire source code, we should note that the indexing and entity linking (EL4DT) methods use a large amount of data. For example, just to construct the surface forms of the EL4DT method, as mentioned in the manuscript at line 200, we used the DBpedia (more than 5 GB) and FACC1 [1] (about 100 GB) knowledge bases, and the surface forms extracted from these knowledge bases exceed 4 GB. Due to this significantly large amount of data, we did not put them into the repository. However, we have now provided the source code of our approach (PESS4IR) in the following repository: https://github.com/El-Emin/PESS4IR-source-code.

 

Point 2: (ii) no ablation tests or error analysis are provided,

Response 2: An ablation test investigates the performance of an AI system by removing certain components to understand their contribution to the overall system [2]. However, our approach does not use AI systems, as mentioned in the manuscript at line 142. Instead, we use the trec_eval [3] tool, which is used to evaluate document retrieval systems [4], just like similar works in the literature. Specifically, we considered the evaluation metrics computed by trec_eval, such as nDCG, MAP, and P, for the document retrieval task, since much research in the field uses these metrics. If you recommend additional tests suitable for the document retrieval task, we would be glad to perform them.
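
For readers unfamiliar with the metric, the following is a minimal Python sketch of one common nDCG@k formulation (illustrative only; trec_eval's implementation may differ in details such as how the ideal ranking is built from the relevance judgments):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results.
    relevances[i] is the graded relevance of the document at rank i+1."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG of the actual ranking divided by DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance of the top five documents retrieved for one query.
print(round(ndcg_at_k([3, 2, 3, 0, 1], k=5), 4))
```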

 

Point 3: (iii) the evaluation is limited to fully labeled queries.  Could the results not have been reported for all cases, even distinguishing both categories?

Response 3: In Tables 4, 5, 8, and 10, we actually reported all classes; among them, the AVGs >= Min class represents all fully labeled/annotated queries. We tested PESS4IR with queries annotated by different entity linking methods (REL and DBpedia-Spotlight). The output of these methods encompasses three cases: for certain queries, no entities are identified; for others, only some terms are annotated; and for the rest, all terms are annotated. Hence, we considered only the fully annotated queries.

 

Point 4: (iv) Results are not reported with other methods of semantic search that are not based on entity labeling and linking.

Response 4: In order to assess the performance of PESS4IR, we conducted a comparative study using a subset of queries. To conduct this comparison against state-of-the-art methods, we required the results for corresponding queries, which could be extracted from their respective result or run files. In our pursuit of these result files from various researchers who have developed state-of-the-art methods, we were able to receive the result files from the authors of the LongP (Longformer) model [16]. It is noteworthy that the LongP (Longformer) model is not based on entity labeling and linking.

 

[1] http://lemurproject.org/clueweb12/FACC1/

[2] https://en.wikipedia.org/wiki/Ablation_(artificial_intelligence)

[3] https://trec.nist.gov/trec_eval/

[4] https://trec.nist.gov/results.html

Author Response File: Author Response.docx

Reviewer 2 Report

Review Comments for Authors (manuscript: applsci-2566073)

Paper Title: A purely entity-based semantic search approach for document retrieval

Recommendation: Revision of the paper while incorporating comments as below:

1)    Refer to page 4: At line 178, Figure 3 may be corrected as Figure 1. And in Table 1, Eqt may be corrected as Edt.

2)    Refer to pages 6~8: Please include references for equations 1 to 7.

3)    Refer to page 10: Graphics in Figure 3(a) and 3(b) are not clear. Please improve these Figures.

4)    Refer to page 11, Tables 4 and 5: Why is a smaller number of queries, i.e., 3 and 2, considered at Avg 1.0 and Avg >= 0.75, respectively, given that these are the settings where the proposed model outperforms the Galago model?

5)    Refer to page 12, Table 8: At column 6, TREC DL 2019 may be replaced with TREC DL 2020.

Comments for author File: Comments.docx

Author Response

Thank you very much for your constructive comments and suggestions.  

 

Point 1: Refer to page 4: At line 178, Figure 3 may be corrected as Figure 1. And in Table 1, Eqt may be corrected as Edt.

Response 1: At line 183, Figure 3 was corrected to Figure 1, and in Table 1, Eqt was corrected to Edt.

 

Point 2: Refer to pages 6~8: Please include references for equations 1 to 7.

Response 2: Equations (1) to (8) were designed and developed by us.

 

Point 3: Refer to page 10: Graphics in Figure 3(a) and 3(b) are not clear. Please improve these Figures.

Response 3: Figures 3(a) and 3(b) were improved.

 

Point 4: Refer to page 11, Tables 4 and 5: Why is a smaller number of queries, i.e., 3 and 2, considered at Avg 1.0 and Avg >= 0.75, respectively, given that these are the settings where the proposed model outperforms the Galago model?

Response 4: We tested our approach (PESS4IR) with queries annotated/labeled by different entity linking methods (DBpedia-Spotlight and REL), and only fully annotated queries were considered. As described in the paragraph at line 376, the queries are classified into four classes according to their average annotation scores for each entity linking method. In Tables 4 and 5, for the queries in which all terms are annotated and whose average annotation scores are greater than or equal to 1.0 (DBpedia-Spotlight) and 0.75 (REL), the numbers of queries are 3 and 2, respectively. Appendices A1 and A2 contain the details for each entity linking tool, as shown in the sketch below.
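
As a rough illustration of this grouping, here is a sketch with hypothetical names; only the 1.0 and 0.75 boundaries are taken from the text, and the remaining class simply collects all other fully annotated queries:

```python
def classify_fully_annotated_query(entity_scores):
    """Assign a fully annotated query to a class based on the average
    of its entity annotation scores (boundaries from the response above)."""
    avg = sum(entity_scores) / len(entity_scores)
    if avg >= 1.0:
        return "AVG = 1.0"
    if avg >= 0.75:
        return "AVG >= 0.75"
    return "AVG >= Min"  # all remaining fully annotated queries

# Example: a query whose two entities were annotated with scores 0.9 and 0.8.
print(classify_fully_annotated_query([0.9, 0.8]))  # "AVG >= 0.75"
```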

 

Point 5: Refer to page 12, Table 8: At column 6, TREC DL 2019 may be replaced with TREC DL 2020.

Response 5: In Table 8, column 6, TREC DL 2019 was replaced with TREC DL 2020.

Author Response File: Author Response.docx

Reviewer 3 Report

The effort in this paper is highly appreciated. The paper presents an upper bound (72% and 6.8% of the queries of the used datasets) for an entity-based information retrieval method. It is well known that such approaches have been augmented and integrated with other methods to overcome semantic gaps when the information is incomplete and/or ambiguous. Thus, neither the novelty of the work nor the results are obvious; the results are expected (i.e., if a query is fully annotated with high certainty/confidence, that is, low ambiguity, then precise results can be retrieved; your fourth category); recall that the categories were manually defined. The method is reported to outperform other methods for only a very minor (small) percentage of the queries; again, heuristic weights are considered, e.g., when to consider a weak vs. strong entity. Moreover, an important detail of the candidate selection scoring is missing. Further, the experiment and discussion should be symmetric for both configurations (both datasets), i.e., use and report the same evaluation metrics and annotation tools; otherwise, state why you ignore them. Further comments and suggestions follow.

- Overall language revision is needed; I recommend shortening your sentences.

- ad-hoc vs. ad hoc... throughout the paper.
- The first references used to motivate the work are relatively old.
- "Since our approach is designed to handle only completely annotated queries,..."!!!!! Then you mention a score equal to 0.75!!!
- For the MSMARCO collections: not defined/mentioned in the Intro section as TREC 2004 is.
- Rename 3.1.1. "Overview of our Entity Linking Method"... "Entity Linking for Document Text".
- Define from which the surface forms are constructed.
- Define "document coherence relationships".
- Line 185: "It is important to note that sure entities (entities with a score greater than or equal to 0.5) do not need a disambiguation step"... is this a heuristic rule or based on experimental findings?
- Same for the annotation score higher than or equal to 0.85.
- Define "strong entity" to describe entities annotated by EL4DT with a high score... vs. the highest score.
- Line 72: stopwords are not obligatory. Vs. Table 2; have stopwords impacted the results?
- Line 201: "...is to select the more probable candidate entity"... how do you do that?
- Line 214: use a line break before "thus".
- ??????(??) gives the entity score computed by EL4DT in the candidate selection step. There are 3 factors; how do you compute it?
- You assumed completely annotated queries... what if they include weak entities? Would that be equivalent to partially annotated queries?
- Explain Table 3.
- nDCG@20: define it.
- Line 350: "annotations could be checked by the..." is this included or not?
- PESS4IR when it is used alongside Galago...?? How?
- Define: SOTA method?
- The added value achieved by PESS4IR appears when it is used alongside the LongP model??
- What do you mean with "+", and how do you combine the annotation scores?
- Explain why Table 6 differs from Table 7 in the evaluation metrics (@20).
- Also, which annotation tool is used, REL or Spotlight?
- Experiment on MSMARCO... "we test it only by the DBpedia Spotlight tool"...
- Galago (Dirichlet) not used; only nDCG@10 is used; what is the total number of queries?
- Line 426: "when it is tested with queries annotated by REL entity linking method"... what about the Spotlight annotation?
- The discussion considered REL, which has less than 7% coverage... I expected at least the same for Spotlight.
- In 5.2, the TagMe tool is not included in your experiment.
- Line 463: "the weakness of a purely-based approach"??
- Line 488: "results indicate that as the average annotation score increases, the ranking score gets higher, as well."
- In conclusion: I believe not "as it increases" but rather high scores only.

Needs further revision 

Author Response

Thank you very much for your constructive comments and suggestions.

 

Point 1: The effort in this paper is highly appreciated. The paper presents an upper bound (72% and 6.8% of the queries of the used datasets) for an entity-based information retrieval method. It is well known that such approaches have been augmented and integrated with other methods to overcome semantic gaps when the information is incomplete and/or ambiguous. Thus, neither the novelty of the work nor the results are obvious; the results are expected (i.e., if a query is fully annotated with high certainty/confidence, that is, low ambiguity, then precise results can be retrieved; your fourth category); recall that the categories were manually defined. The method is reported to outperform other methods for only a very minor (small) percentage of the queries; again, heuristic weights are considered, e.g., when to consider a weak vs. strong entity.

Response 1: We investigated the performance of our approach (PESS4IR) by testing it with queries annotated by two different entity linking tools (REL and DBpedia-Spotlight). According to the experimental findings, PESS4IR gives its best performance for queries annotated by REL when they are fully annotated with an average score >= 0.75, and for queries annotated by DBpedia-Spotlight when they are fully annotated with an average score = 1.0.

 

Point 2: Moreover, an important detail of the candidate selection scoring is missing.

Response 2: To compute the entity score in EL4DT, these factors are implemented as functions (hundreds of lines of code) in which many cases and scenarios of the different surface forms are handled, regarding the three types of entities: non-sure, sure, and strong. The source code is provided in the following repository: https://github.com/El-Emin/PESS4IR-source-code.

 

Point 3: Further, the experiment and discussion should be symmetric for both configurations (both datasets), i.e., use and report the same evaluation metrics and annotation tools; otherwise, state why you ignore them.

Response 3: In the last paragraph of Section 4.4, we give the reason why we did not report the PESS4IR+(method) results on the MSMARCO collection. Moreover, we added the results of DBpedia Spotlight to the Discussion section (Section 5.2 in the revised version).

 

Point 4: Overall language revision is needed; I recommend shortening your sentences.

Response 4: We got help from a colleague who is fluent in English to improve the language of the manuscript.

 

Point 5: ad-hoc vs ad hoc... through the paper 

Response 5: Both forms appear in the literature. For consistency, only the 'ad-hoc' form is used throughout the manuscript.

 

Point 6: The first references used to motivate the work are relatively old.

Response 6: If you are referring to the references provided in the literature review, we addressed the recent models for the document retrieval tasks such as [23], [25], and [28] in the Related Works section (Sections 2.1 and 2.2).

If you are referring to the references ([9] and [10]) used for the motivation of our work in the phrases at lines 13 and 42, to the best of our knowledge this motivation represents an issue that is still not resolved. It is also one of the two main motivations of our work. The second one is that, as described in the phrase at line 47, "Although an approach has a decent performance regarding the entire query set, another weak approach might have better performance for some of those queries". For example, in Table 10, for the queries whose AVGs >= 0.75, Galago (Dirichlet) outperforms the LongP (Longformer) model, even though the LongP model has substantially better performance than Galago (Dirichlet) over the entire query set. Contrary to the Galago (Dirichlet) and LongP (Longformer) models, with PESS4IR we have the advantage of knowing when the method is at its best. With highly annotated queries, PESS4IR offers its best performance. Hence, by using PESS4IR for that set of queries and another method (e.g., a SOTA method) for the rest of the queries, we achieve an even better performance than a SOTA method alone (for all queries), as indicated in Table 7.

 

Point 7: Since our approach is designed to handle only completely annotated queries,...!!!!! Then you mentioned a score equal to 0.75, !!!

Response 7: We investigated the performance of our approach (PESS4IR) on two collections (Robust and MSMARCO) by labeling/annotating their queries with two different entity linking tools (REL and DBpedia-Spotlight), and we analyzed the performance for queries with different levels of annotation. In other words, instead of manually or selectively labeling the queries, we chose to test our approach objectively with the queries labeled by these entity linking methods. We explain that limitation and its reason at line 462.

 

Point 8: For the MSMARCO collections: not defined/mentioned in the Intro section as TREC 2004 is.

Response 8: In line (82), in the Introduction, we briefly mentioned that we use the MSMARCO and Robust collections. Also, in section 4.1, we provided their details.

 

Point 9: rename : 3.1.1. Overview of our Entity Linking Method... Entity Linking for Document Text

Response 9: If you find it suitable, we would like to keep the title as "Overview" because Section 3.1.1 is an overview of Section 3.1 (Entity Linking Method), just as Section 3.1.5 is the algorithm of Section 3.1.

 

Point 10: Define from which the surface forms are constructed.

Response 10: Table 2 contains the constructed surface forms corresponding to each component in the table. We added a sentence indicating that "The surface forms of our EL4DT are constructed from the components listed in Table 2, where each Component/Class in the table corresponds to a surface form." We also renamed the title of Table 2 to "Surface forms of EL4DT".

 

Point 11: define: document coherence relationships. 

Response 11: It is defined at line 237. We have completed that definition at lines 188 and 237.

 

Point 12: line 185: It is important to note that sure entities (entities with a score greater than or equal to 0.5) do not need a disambiguation step.... is it a heuristic rule or based  on experimental findings. 

Response 12: The scoring system of the entity linking method (EL4DT) uses a threshold of 0.5 for mentions that match certain criteria: when the annotation score of an entity is >= 0.5, the entity is qualified as a sure entity.

 

Point 13: Same for...annotation score higher than or equal to 0.85.

Response 13: According to the experimental findings, an annotation score of 0.85 is adopted for the "strong entity" concept, which plays a main role in our approach.

 

Point 14: Define "strong entity" to describe entities annotated by EL4DT with high score ... vs highest score

Response 14: In the phrase at line 238, we define "strong entity" as follows: "The strong entity concept refers to the entities annotated by EL4DT with an annotation score higher than or equal to 0.85." In contrast, 'highest score' is not used as a defined concept; it is meant in its literal sense.

 

Point 15: Line 72: stopwords are not obligatory. Vs. Table 2; have stopwords impacted the results?

Response 15: Stopwords are not obligatory for query entity linking. When we annotate queries with the REL and DBpedia-Spotlight tools, a query is considered fully annotated if all of its non-stopword terms are labeled/annotated, even if some stopwords are not labeled. Table 2, on the other hand, contains the surface forms of our entity linking method (EL4DT), which is used to annotate documents.
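
A minimal sketch of this check (illustrative names; the stopword list below is a placeholder, not the one used in the paper):

```python
STOPWORDS = {"the", "of", "a", "an", "in", "on", "and", "to", "is"}  # placeholder list

def is_fully_annotated(query_terms, annotated_terms):
    """A query counts as fully annotated when every non-stopword term is
    covered by the query annotation; stopwords may remain unlabeled."""
    content_terms = [t for t in query_terms if t.lower() not in STOPWORDS]
    return all(t in annotated_terms for t in content_terms)

# "history of the Taj Mahal": the stopwords "of" and "the" are unlabeled,
# yet the query is still considered fully annotated.
print(is_fully_annotated(["history", "of", "the", "Taj", "Mahal"],
                         {"history", "Taj", "Mahal"}))  # True
```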

 

Point 16: Line 201: "...is to select the more probable candidate entity"... how do you do that?

Response 16: "The more probable candidate entity" is selected based on a simple probability: the number of mention terms that match entity terms divided by the total number of mention terms.
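
A minimal sketch of that probability (hypothetical helper name, not the actual EL4DT code):

```python
def candidate_match_probability(mention_terms, entity_terms):
    """Fraction of mention terms that also appear among the candidate
    entity's terms: matched mention terms / total mention terms."""
    entity = {t.lower() for t in entity_terms}
    matched = sum(1 for t in mention_terms if t.lower() in entity)
    return matched / len(mention_terms) if mention_terms else 0.0

# The mention "new york" against the candidate entity "New York City".
print(candidate_match_probability(["new", "york"], ["New", "York", "City"]))  # 1.0
```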

 

Point 17: Line 214: use a line break before "thus".

Response 17: We added a line break.

 

Point 18: ??????(??) gives the entity score computed by EL4DT in the candidate selection step. There are 3 factors; how do you compute it?

Response 18: To compute the entity score in EL4DT, these factors are implemented as functions (hundreds of lines of code) in which many cases and scenarios of the different surface forms are handled, regarding the three types of entities: non-sure, sure, and strong. The source code is provided in the following repository: https://github.com/El-Emin/PESS4IR-source-code.

 

Point 19: You assumed completely annotated queries... what if they include weak entities? Would that be equivalent to partially annotated queries?

Response 19: Unfortunately, we could not fully understand the question, because the 'weak entity' concept is used to describe the ambiguous entities in the disambiguation algorithm of our EL4DT method (document annotation), whereas 'completely annotated queries' refers to query annotation. If you clarify the issue, we would be glad to address it.

 

Point 20: Explain table 3

Response 20: Table 3 is explained in the paragraph at line 340.

 

Point 21: nDCG@20: define it.

Response 21: nDCG@20 is defined in the paragraph at line 344.

 

Point 22: Line 350... annotations could be checked by the ... is this included or not 

Response 22: The phrase was replaced with a new one at line 362. Furthermore, we consider the average of the query entities' annotation scores to be the indicator of the query annotation quality.

 

Point 23: PESS4IR when it is used alongside Galago...?? How

Response 23: In this paper, PESS4IR is recommended to be used alongside Galago (or any other method) for queries in which all terms are annotated and whose average annotation scores are greater than or equal to 0.75 when the queries are annotated by the REL tool, or equal to 1.0 when they are annotated by the DBpedia Spotlight tool.

This is handled by using PESS4IR for the highly annotated queries in the query set, while Galago processes the rest of the queries. The combined performance of the two methods is then obtained by regrouping their two run files into a single run file.
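
A minimal sketch of that regrouping step, assuming standard TREC-format run files (the file names and the query-ID set are hypothetical):

```python
def merge_run_files(pess4ir_run, other_run, highly_annotated_qids, out_path):
    """Write a single TREC run file that keeps PESS4IR's rankings for the
    highly annotated queries and the other method's rankings for the rest."""
    with open(out_path, "w") as out:
        with open(pess4ir_run) as run:
            for line in run:
                if line.split()[0] in highly_annotated_qids:   # qid Q0 docid rank score tag
                    out.write(line)
        with open(other_run) as run:
            for line in run:
                if line.split()[0] not in highly_annotated_qids:
                    out.write(line)

# merge_run_files("pess4ir.run", "galago.run", {"301", "307"}, "combined.run")
```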

 

Point 24: define : SOTA method ?

Response 24: A state-of-the-art (SOTA) method in the document retrieval task is the current best-performing method. As we note at line 105, Gao & Callan (2022) [25] claim that "the approaches based on pre-trained Transformer language models such as BERT [24] are the current state-of-the-art for text re-ranking".

In order to assess the performance of PESS4IR, we conducted a comparative study using the queries annotated/labeled by different entity linking methods (DBpedia-Spotlight and REL) and only fully annotated queries were considered. To conduct this comparison against state-of-the-art methods, we required the results for corresponding queries, which could be extracted from their respective result or run files. In our pursuit of these result files from various researchers who have developed state-of-the-art methods, we were able to receive the result files from the authors of the LongP (Longformer) model [16].

 

Point 25: The added value achieved by PESS4IR appears when it is used alongside the LongP model??

Response 25: Yes, Table 7 elaborates this situation. As mentioned earlier, although an approach has a good performance regarding the entire query set, another approach may have better performance for some of those queries. With highly annotated queries, PESS4IR offers its best performance. So, by using the PESS4IR method for that set of queries and another method (e.g., a SOTA method) for the rest of the queries, we come up with an even better performance than the performance of a SOTA method alone (for all queries), as we indicated in Table 7.

 

Point 26: What do you mean with "+", and how do you combine the annotation scores?

Response 26: If you are referring to the "+" in "LongP (Longformer)+PESS4IR" and "Galago+PESS4IR", please refer to Response 23 for clarification on how the two run files are combined.

 

Point 27: Explain why Table 6 differs from Table 7 in the evaluation metrics (@20).

Response 27: In Table 7, PESS4IR provides added value only for the @5 metric (the top-5 ranked documents); this is why we did not include @20 in Table 7.

To give Tables 6 and 7 a similar presentation, we have now removed the @20 metric column from Table 6. Moreover, as mentioned at line 349, the nDCG@5 evaluation metric is used to evaluate the performance of many ranking models.

 

Point 28: Also, which annotation tool is used, REL or Spotlight?

Response 28: The annotation tool is REL, as indicated in the phrase at line 412.

 

Point 29: experiment on MSMARCO...we test it only by the DBpedia Spotlight tool ...

Response 29: Using the REL tool, no query whose average annotation score is greater than 0.75 was considered (for the TREC-DL-2020 query set, see Appendix B.1). We explain the reason at line 421.

 

Point 30: Galago (Dirichlet) not used; only nDCG@10 is used; what is the total number of queries?

Response 30: Galago (Dirichlet) is a baseline method, and we wanted to use it for MSMARCO as well. However, Galago gives an error related to the .tsv format of the MSMARCO collection.

According to the repository of the MSMARCO collection, nDCG@10 is the official nDCG evaluation metric, which is why we had reported results with nDCG@10; however, to keep the presentation consistent, we use nDCG@20 for the query AVG groups and nDCG@5 for the added-value tables in the Results section (Section 4).

We note that we give the number of queries for each class of query annotation AVG scores, for each query set of each collection.

 

Point 31: Line 426: "when it is tested with queries annotated by REL entity linking method"... what about the Spotlight annotation?

Response 31: We added an explanation in the discussion section for the results of DBpedia Spotlight.

 

Point 32: The discussion considered REL, which has less than 7% coverage... I expected at least the same for Spotlight.

Response 32: We added the results of DBpedia Spotlight in the discussion section (section 5.2 in the revised version).

 

Point 33: In 5.2, the TagMe tool is not included in your experiment.

Response 33: Correct, TagMe is not included in our experiments. It is only used for comparison in the case presented and discussed in Section 5.2 (of the older version).

 

Point 34: Line 463  the weakness of a purely-based approach ??

Response 34: We corrected the phrase.

 

Point 35: line 488: results indicate that as the average annotation score increases, the ranking score gets higher, as well.

Response 35: For the queries annotated by both tools (REL and DBpedia Spotlight), as the average annotation score gets higher, the corresponding ranking score of PESS4IR increases as well.

 

Point 36: In conclusion: I believe not "as it increases" but rather high scores only.

Response 36: For the highest average annotation scores, the increase in PESS4IR's performance is more significant.

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

Overall, the authors have sufficiently considered the recommendations and suggestions. However, it is also important to incorporate the answers and clarifications into the manuscript, including those addressing the other reviewers' comments (recall that such clarifications might help readers understand your position).

Still, it seems that there is no consistency in reporting the results (see the tables). It is recommended to use the same measures, with their variations, for both datasets (i.e., report the results using nDCG@5, @10, @20, MAP, P@5, and P@20), or at least the same measures for all tables (otherwise, justify your position).

overall, fine.

Author Response

Thank you very much for your constructive comments and suggestions.

 

Point 1: Overall, the authors have sufficiently considered the recommendations and suggestions. However, it is also important to incorporate the answers and clarifications into the manuscript, including those addressing the other reviewers' comments (recall that such clarifications might help readers understand your position).

Response 1: We added two further clarifications. The first is about how the entity score is computed by ??????(??), at line 250. The second is about one of our motivations, at line 473.

If you need further clarifications, we would be happy to add them into the manuscript.

 

Point 2: Still, it seems that there is no consistency in reporting the results (see the tables). It is recommended to use the same measures, with their variations, for both datasets (i.e., report the results using nDCG@5, @10, @20, MAP, P@5, and P@20), or at least the same measures for all tables (otherwise, justify your position).

Response 2: We have updated Table 6, replacing P@5 with P@20.

In the Results section (Section 4), Tables 4 and 5 (Robust collection) and Table 8 (MSMARCO collection) report PESS4IR and the other methods performed separately on these collections; in this case we use nDCG@20. Tables 6 and 7 use nDCG@5, since there PESS4IR is used alongside other methods.

In the Discussion section (Section 5), we explain the added value achieved by PESS4IR by detailing the nDCG@5 results of Tables 6 and 7.

 

Author Response File: Author Response.docx
