**4. Conclusions**

In this work, we have proposed an efficient scene and place classification scheme that uses background objects together with a designed weighting matrix. The weighting matrix was designed from an open dataset that is widely used for scene and object classification. We evaluated the proposed semantic-segmentation-based classification scheme against existing image classification methods such as VGG [15], Inception [16,17], ResNet [18], ResNeXt [19], Wide-ResNet [20], DenseNet [21], and MnasNet [22]. The proposed scheme is the first object-based classification approach that can also classify outdoor categories. In addition, we built a custom test dataset of 500 images, which can help researchers working on scene classification. The images were crawled from frames of Korean movies and labeled manually into three major scene categories (i.e., indoor, nature, and city).
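For illustration only, the sketch below shows one way a designed weighting matrix can turn background-object statistics obtained from semantic segmentation into a scene decision. The scene categories follow the paper, but the object labels, weight values, and the helper `classify_scene` are hypothetical placeholders rather than the actual design used in this work.

```python
import numpy as np

# Scene categories from the paper; the object label set is a made-up subset,
# standing in for the labels produced by the semantic segmentation model.
SCENES = ["indoor", "nature", "city"]
OBJECTS = ["wall", "tree", "sky", "building", "floor", "road"]

# Illustrative weighting matrix W (scenes x objects): W[i, j] is the weight of
# object j as evidence for scene i. In the paper these weights are designed
# from object statistics of an open dataset; the numbers here are invented.
W = np.array([
    [0.9, 0.0, 0.0, 0.1, 0.8, 0.0],   # indoor
    [0.0, 0.9, 0.6, 0.0, 0.0, 0.1],   # nature
    [0.1, 0.1, 0.4, 0.9, 0.0, 0.8],   # city
])

def classify_scene(pixel_fractions: np.ndarray) -> str:
    """Score each scene as a weighted sum of background-object evidence
    (per-object pixel fractions from segmentation) and return the best one."""
    scores = W @ pixel_fractions
    return SCENES[int(np.argmax(scores))]

# Example: a frame dominated by sky, buildings, and road maps to "city".
frame_stats = np.array([0.05, 0.05, 0.30, 0.35, 0.0, 0.25])
print(classify_scene(frame_stats))  # -> "city"
```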

Experimental results showed that the proposed classification model outperformed several well-known CNNs commonly used for image classification. In the experiments, our model achieved a verification accuracy of 90.8%, an improvement of more than 2.8% over the existing CNNs.

Future work is to widen the scene classes so that the model can distinguish not only indoor (e.g., library, bedroom, kitchen) and outdoor (city, nature) scenes but also finer subcategories. Such semantic information would also be helpful for searching within videos.

**Author Contributions:** Conceptualization, B.-G.K.; methodology and formal analysis, W.-H.Y.; validation, Y.-J.C. and Y.-J.H.; writing–original draft preparation, W.-H.Y. and Y.-J.C.; writing–review and editing, B.-G.K.; supervision, B.-G.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research project was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Copyright Commission in 2020.

**Acknowledgments:** The authors thank all reviewers who helped improve this manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.
