Peer-Review Record

HTTP Adaptive Streaming Framework with Online Reinforcement Learning

Appl. Sci. 2022, 12(15), 7423; https://doi.org/10.3390/app12157423
by Jeongho Kang and Kwangsue Chung *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 23 June 2022 / Revised: 21 July 2022 / Accepted: 22 July 2022 / Published: 24 July 2022
(This article belongs to the Special Issue Research on Multimedia Systems)

Round 1

Reviewer 1 Report

This work addresses an interesting problem, namely AI-driven adaptive streaming.

The methodology is solid, the quality of the presentation is good, and the scientific soundness is good.

I have only one major concern, regarding the motivation of the work. The paper claims to overcome a limitation of other ML-based approaches, namely their poor adaptivity to environment changes. The term "environment" is used 36 times in the paper; however, the authors do not treat the environment as a mathematical entity. Environment changes are, at the end of the day, network condition fluctuations. Network condition fluctuations are normal in a network and are precisely the motivation for implementing DASH. Thus, I do not see a particular limitation of existing approaches related to environmental changes. In the reviewer's opinion, the authors should simply claim that the proposed approach performs better than existing approaches. Please clarify this aspect.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors approach a very current problem, the optimization of HTTP streaming (mainly the DASH protocol), using machine learning techniques such as deep reinforcement learning. The theory is sound and the experimental results support the hypothesis. The only comment is that a Discussion section should be added to compare the results with the existing state of the art.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper in general is written and organized well. However, the following should be taken into consideration in the revision:

Main Comments:

1. In Section 2.3, it is stated that "RL is an appropriate learning method for achieving optimal QoE"; do you mean a local optimum or a global optimum here?

2. A pseudo-code or a flowchart of how the complete algorithm works is needed. At present, it is left to the reader to infer this from the links given in the paper.

3. In Fig. 6, the curves should include benchmark schemes for comparison.

Minor Comments:

1. Some of the acronyms are not defined at first use, e.g., DASH and ABR in the abstract, and NAT in the manuscript.

2. Uniformity in the style of the manuscript is needed. For example, acronyms are defined in two different ways: "MPD (Media Presentation Description)" as well as "Dynamic adaptive streaming over HTTP (DASH)". It is suggested to follow the latter and make the whole manuscript consistent.

3. The caption of Fig. 2 should be revised.

4. Change “3.1. Basic Assumption” to “3.1. Basic Assumptions”.

5. The way expressions are introduced and their variables defined within equations needs to be further refined. It is suggested to adopt the equation style of well-cited publications in the field.

6. In Equation (5), what does the "." represent? Is it a dot product or a multiplication? If it is the latter, then use the appropriate symbol.

7. What is reference number 21?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

The authors have addressed my comments well. The paper can be accepted for publication now.
