Article
Peer-Review Record

Multi-Scale Feature Fusion for Interior Style Detection

Appl. Sci. 2022, 12(19), 9761; https://doi.org/10.3390/app12199761
by Akitaka Yaguchi 1,*, Keiko Ono 2, Erina Makihara 2, Naoya Ikushima 1 and Tomomi Nakayama 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 2 September 2022 / Revised: 22 September 2022 / Accepted: 24 September 2022 / Published: 28 September 2022

Round 1

Reviewer 1 Report

Image recognition is normally a task that is straightforward for the human brain but extraordinarily difficult for the computer. A quick glance around a room tells us a huge amount about features such as the contents, the decoration and the style. Using software to extract this sort of information from room images is plainly very hard. This paper examines an approach to the analysis of such room images to determine the "style" of decoration of the room. This may not be a worthwhile or useful thing to do simply to classify room images, but it is a problem that demonstrates many of the poorly understood tasks needed for image recognition and so is well worth pursuing. The work described in this paper is therefore important, and it offers new insights into some of the problems of image recognition.

This paper is well-structured and follows the standard pattern of proposing a new approach, describing it, comparing it with existing techniques and evaluating the results. The abstract is fine, providing a good overview of the work. The methodology is sound, though perhaps lacking a more formal statistical analysis of the results. The introduction is simple and clear, and it is followed by a brief description of related work. This section could be expanded; it would certainly improve the paper, though maybe it isn't essential. Again, the description of the proposed method is compact but adequate. The evaluation and discussion sections are generally fine, presenting some interesting results. The paper is well-referenced, but the list of references needs some copy-editing to, for example, make the capitalisation of journal names consistent.

I have some more detailed and specific comments.

line 18 - under when -> when

line 19 - often at image -> often fails at image

line 22 - image retrieval with -> image retrieval based on

line 25 - various -> there are various

line 38 - textual information, is -> textual information is

line 48 - interior style detection method using -> interior style detection using

line 52 - information better -> information can better

line 53 - Lab -> CIELAB (throughout)

line 55 - Our method can recommend -> Our method can select

line 58 - Omit the sentence starting Figure ??

line 98 - for each training images. -> for each of the training images.

line 181 - traditional styles. Whereas, ResNet -> traditional styles whereas, ResNet

line 186 - analyzed in detail in the next chapter in the following section. -> analyzed in detail in the following section.

line 212 - which is Table ??


Author Response

Please see the attachment.

Author Response File: Author Response.doc

Reviewer 2 Report

Comments to Author:

In this manuscript, the authors propose a new interior style detection method using a bag of visual words, which gives this article a certain novelty. However, there are many grammar problems and formatting errors in the manuscript. Additionally, the superiority of the method is not clearly reflected in the evaluation indicators. Despite the detailed analysis by the authors within the manuscript, the limitations of the method cannot be ignored. On the whole, the paper cannot be accepted due to a lack of persuasiveness.

Other comments:

1. The second paragraph of the related work, "Huadong et al.", is not properly cited.

2. It is recommended to bold and highlight the optimal data in the graph.

3. There is a label error at "table??" on line 212.

4. It is recommended to make the lines of Figure 4 thinner and to add a partial enlargement.

5. Table 3 only compares the results of the last round, which is not convincing.

6. In Table 4, only the Japanese style has an obvious lead in F-measure. Besides Modern and Traditional, the Rustic style also shows no obvious advantage.

Author Response

Please see the attachment.

Author Response File: Author Response.doc

Reviewer 3 Report

Abstract: The abstract needs to be improved to cover the problem under study, research methods, results, conclusion and significance.

The related work needs to be expanded with more recent and relevant works.

There should be a methodology section that gives a clear picture of how the study was rolled out. The instrumental setup and experimental setup should come under this.

More discussion and reflection on previous work is required.

Conclusion: The conclusion needs to be rewritten, mapping the objectives, limitations and future work.

How the 3rd contribution is mapped is not clear. How has the data imbalance been catered for?

References should be drawn from recent years, and an extensive literature review is needed so that comparison with other works can be reflected in the article. Only 7 of the referenced articles are from 2020–2021, and none are from 2022.

Author Response

Please see the attachment.

Author Response File: Author Response.doc

Round 2

Reviewer 2 Report

Thanks to the authors for answering my concerns. I have read the revised manuscript. Compared with the previous version, the current version shows a certain improvement.

Author Response

Please see the attachment.

Author Response File: Author Response.doc

Reviewer 3 Report

The authors have made significant changes according to the comments given. It would be better to include the accuracy percentage in the abstract; also, the 5 new references mentioned in the manuscript were not reflected.

Author Response

Please see the attachment.

Author Response File: Author Response.doc
