Article
Peer-Review Record

Perceptual Similarities between Artificial Reverberation Algorithms and Real Reverberation

Appl. Sci. 2023, 13(2), 840; https://doi.org/10.3390/app13020840
by Huan Mi, Gavin Kearney * and Helena Daffern
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 8 November 2022 / Revised: 22 December 2022 / Accepted: 4 January 2023 / Published: 7 January 2023
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality - 2nd Volume)

Round 1

Reviewer 1 Report

The manuscript analyzes the perceptual similarity between artificial and real reverberation, focusing on seven mainstream reverberation algorithms as well as a newly presented Hybrid Moorer-Schroeder algorithm. The criterion used to judge the quality of the reverberation simulation is the average score given by human participants. The results show that different algorithms stand out in long-, medium-, and short-reverberation environments. In addition, the authors perform a statistical analysis and a computational-cost analysis as a trade-off against the quality assessment. Overall, the experimental design is well formed and well motivated. One further comment: given the statistical analysis, I am curious to see the 95% confidence interval of the median displayed in the box plots of the average scores (where there should be another small box).
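The confidence interval of the median that the reviewer asks about corresponds to a notched box plot, which matplotlib can draw directly; a minimal sketch, using placeholder scores rather than the study's data:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder scores: 20 listeners x 3 hypothetical algorithms (not the study's results).
rng = np.random.default_rng(0)
scores = [rng.normal(loc=mu, scale=10.0, size=20) for mu in (55, 62, 70)]

fig, ax = plt.subplots()
# notch=True draws a notch around each median marking its ~95% confidence interval.
ax.boxplot(scores, notch=True, labels=["Alg A", "Alg B", "Alg C"])
ax.set_ylabel("Average similarity score")
plt.show()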

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The paper deals with a comparison of seven reverberation algorithms and possible differences in listeners' perception when different sound samples are played. Although I find the method valid and the procedure very clear and well expressed, I have some comments and suggestions for the authors for the improvement of the paper. Perhaps most troubling about this paper is that, although a great deal of work is done in the study, many sentences are not supported by a careful literature search. In the available literature there are numerous papers, studies and works that deal with the issues related to this work, but in certain cases, which will be discussed in more detail in the comments below, I found a lack of careful research.

1 - From the background point of view, it is good practice not to repeat the same reference many times in the introduction or background (in this case, reference no. 1); otherwise it will appear that there are limited references for that topic, which is certainly not the case for the topics dealt with in the sentences at rows 114-116 and 118-121.

2 - Furthermore, I found the background well written but supported by very few references. When introducing a topic and giving a background, particularly over a total of 14 pages (introduction + background), I expect more than 42 references. Many times the same reference is repeated throughout the text; this can be avoided, and new papers can be cited to improve the quality.

3 - On page 7 of 28, in the sentence "where prime pri_i is 2, 3, 5, 7, 11, etc., [ ] means rounding to the nearest whole number, and m_i is the delay of each delay line and it is a power of a distinct prime", it seems that a reference between the square brackets "[ ]" is missing.
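The quoted passage appears to describe the common rule of rounding each delay length to the nearest integer power of a distinct prime; a minimal sketch of that rule, assuming the desired delays d_i are given in samples (the values below are hypothetical, not taken from the paper):

import math

def prime_power_delays(desired_delays, primes=(2, 3, 5, 7, 11, 13, 17)):
    # m_i = pri_i ** round(ln(d_i) / ln(pri_i)): each delay length becomes the
    # power of a distinct prime nearest (in exponent) to the desired delay d_i.
    delays = []
    for p, d in zip(primes, desired_delays):
        delays.append(p ** round(math.log(d) / math.log(p)))
    return delays

# Hypothetical desired delays (in samples) for seven parallel delay lines:
print(prime_power_delays([1687, 1601, 2053, 2251, 2399, 2551, 2689]))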

4 - To justify the choice of the four voice samples to be reproduced (rows 285-288), it is always good practice to refer to the literature and to compare the choices with those made in other papers, to show that the choice was made not only on the basis of the researchers' experience but is also substantiated by the rich bibliographic research behind it.

5 - I suggest dividing the section "Experimental Design" with a subsection called "Limitation Design" in which all the COVID-related choices are described.

6 - The sample of listeners is quite small; thus, I suggest stating the limitation of finding voluntary participants due to COVID. From what I understand from the article, and also from personal research experience, finding people to participate in studies remotely is difficult, so analysing a sample of only about 20 participants would be justifiable given the hectic times we currently live in. However, I find it necessary to state this in the limitations of the research.

7 - In the conclusion and discussion of future work, I think a further development of the study in person is missing, without the COVID constraints that could affect the listeners' perception of the stimuli. It is quite important to also confirm your findings with an in-person test.

8 - I found the position of Tables 14, 15 and 16 after the Conclusions very confusing (I will also alert the editor in case this decision was made in the editorial phase). The Abbreviations section should be the first thing after the Conflicts of Interest section. If Tables 14-16 are of marginal importance to the paper, they should be placed in Appendix A.

9 - When referring to consecutive references (i.e., [15],[16],[17] at row 106), it is good practice to shorten them to [15]-[17].

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper reports a listening test of different reverberation algorithms on four stimuli with three reverberation times. It finds that a Hybrid Moorer-Schroeder algorithm proposed in the paper works best, though still not for all stimuli. The paper discusses the problem with respect to Augmented Reality (AR).

The paper is of interest, especially in terms of the new algorithm proposed. Still, there are some issues to address.

It is not clear how the listening test is connected to AR. In the first paragraphs, AR is discussed briefly and confusingly. Either a clear connection between the results and AR should be drawn, or AR should be mentioned only as a possible application.

A long section gives details on the algorithms used. This is very interesting as a summary and should be kept in the paper. Still, it is only necessary for the listening test if the results are discussed with respect to algorithmic architecture or parameter sets. This would be of particular interest because the listening test results are complex, e.g., in the difference between ambient and percussive sounds. I would strongly suggest relating the test results to the algorithms in the Discussion.

A discussion of the usefulness of the IACC and D/A parameters for subjective judgments would be interesting. In what way do these parameters speak to the similarity of the judgments?
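For reference, the IACC (interaural cross-correlation coefficient) is conventionally defined as the maximum of the normalized cross-correlation between the left- and right-ear impulse responses over lags of +/- 1 ms; a minimal sketch of that definition, with placeholder BRIRs rather than the study's measurements:

import numpy as np

def iacc(left_ir, right_ir, fs, max_lag_ms=1.0):
    # Maximum absolute normalized cross-correlation of the two ear signals
    # over lags within +/- max_lag_ms (the conventional IACC definition).
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left_ir ** 2) * np.sum(right_ir ** 2))
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left_ir[lag:] * right_ir[:len(right_ir) - lag])
        else:
            c = np.sum(left_ir[:lag] * right_ir[-lag:])
        corrs.append(c / norm)
    return max(np.abs(corrs))

# Placeholder binaural impulse responses (exponentially decaying noise) at 48 kHz:
fs = 48000
rng = np.random.default_rng(1)
decay = np.exp(-np.arange(fs) / (0.3 * fs))
left = rng.normal(size=fs) * decay
right = rng.normal(size=fs) * decay
print(iacc(left, right, fs))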

 

Details

26: The three items are not clearly distinguished. What is the difference between items one and three? I guess item three is meant to be a time-varying feature? I think I know what the authors mean, but this first paragraph should still be more elaborate.

32ff: Why these algorithms?

36: 'line structure' needs to be explained here.

Sec. 2.1: The section tries to convince the reader that a BRIR is capable of simulating a real acoustic space nearly perfectly. The section should more carefully distinguish between audio and speech and discuss parameters such as timbre, source width, etc. Also, these studies were not done in AR, i.e., without head movements. This should also be discussed.

Secs. 2.3-2.4 are very informative but are only necessary for the paper if a relation can be drawn between the methods and parameters of the models and the listening test results. This might be included in the Conclusions.

Figs. 15-22 need quality improvement.

Sec. 5, first paragraph, summarizes the results; it is not a discussion. This paragraph should be in the Results section.

 

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx
