Article

Estimation of Adaptation Parameters for Dynamic Video Adaptation in Wireless Network Using Experimental Method

by Gururaj Bijur 1, Ramakrishna Mundugar 2,*, Vinayak Mantoor 3 and Karunakar A Kotegar 3
1 Department of Computer Science, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
2 Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
3 Department of Computer Applications, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
* Author to whom correspondence should be addressed.
Computers 2021, 10(4), 39; https://doi.org/10.3390/computers10040039
Submission received: 13 February 2021 / Revised: 12 March 2021 / Accepted: 13 March 2021 / Published: 24 March 2021

Abstract

A wireless network gives users the flexibility of mobility, which makes wireless communication increasingly attractive. However, video communication over wireless networks suffers Quality of Service (QoS) and Quality of Experience (QoE) issues due to network dynamics. Parameters such as node mobility, the routing protocol, and the distance between nodes play a major role in the quality of video communication. Scalable Video Coding (SVC), an extension of H.264 Advanced Video Coding (AVC), allows the partial removal of layers while still producing a valid adapted bit-stream. This adaptation feature enables video streaming over a wireless network to match the available resources. Video adaptation is a dynamic process and requires prior knowledge to decide the adaptation parameters used to extract video levels. This work builds the adaptation parameters required by adaptation engines, such as Media Aware Network Elements (MANE), to perform adaptation on-the-fly. This prior knowledge improves the performance of the adaptation engines and the quality of the video communication. The unique feature of this work is the use of an experimental evaluation method to identify the video levels suitable for a given network condition. In this paper, we estimate the adaptation parameters for streaming scalable video over a wireless network using an experimental method. The adaptation parameters are derived using node mobility, link bandwidth, and the motion level of the video sequences as deciding parameters. The experiments are carried out with the OMNeT++ tool, and the Joint Scalable Video Model (JSVM) is used to encode and decode the scalable video data.

1. Introduction

Video communication applications, such as video conferencing, telemedicine, video chat, and video-on-demand, have attracted more and more users during the COVID-19 pandemic. These applications rely on wireless networks to reach a large number of users and enable seamless communication. Wireless networks offer mobility and ease of use, but these benefits also increase the streaming challenges involved in providing good communication quality [1,2].
Recent advances in wireless technology and video coding formats have opened many challenges in real-time video streaming over wireless networks. Because video communication is very sensitive to jitter and throughput, it is difficult to achieve Quality of Service (QoS) and Quality of Experience (QoE) in wireless networks. These challenges can be handled with the help of technologies such as Content-Aware Networking (CAN), Content-Centric Networking (CCN) [3,4], layered video coding, and in-network video adaptation. Scalable Video Coding (SVC) [5,6] is one such layered video coding technique that supports adaptation on-the-fly. Video encoded with SVC can be thinned according to the demands of the network and end-users to provide the best quality with the existing infrastructure. This is realized with the help of an intelligent intermediate device, such as a Media Aware Network Element (MANE) [6], which learns the network conditions and considers the end-users' requirements when extracting the partial bit-stream (thinning) to form an adapted video sequence.
A MANE mainly consists of an Adaptation Decision Module (ADM) and an Extractor. The ADM decides the number of layers to be delivered to the network by considering the network, terminal, and user capabilities, the availability of network resources, and the video meta-data, so as to provide the best QoE. The Extractor uses the resulting adaptation parameters to form an adapted video sequence from the fully scalable video bit-stream. Identifying suitable adaptation parameters is the major issue that needs to be addressed to improve the decision-making at the ADM.
This research work addresses the challenge of estimating extraction points in wireless networking environments [7]. The work considers the influence of the routing protocol and node mobility on video streaming, and the derived knowledge is used to identify the scalable video layer suitable for adaptation. To study the impact of routing-algorithm overhead on video quality, node mobility is included in the experimentation. The video sequences are selected based on the amount of object motion they contain and are categorized as high, medium, and low motion. The influence of the motion level is also used when deciding the adaptation parameters, which serve as the intelligence, or pre-knowledge, that the MANE uses to perform video adaptation. In this work, the video sequences are coded using the Joint Scalable Video Model (JSVM) version 9.18 [8] reference software, and the wireless network environment is simulated using the OMNeT++ tool [9].
The key contributions of this research work are as follows. First, an experimental evaluation is carried out to study the performance of scalable video under dynamic network conditions. Second, network dynamics, such as node mobility and node distance, are simulated to evaluate the network conditions. Third, scalable videos with different motion levels are generated and streamed over the wireless network. Finally, the adaptation parameters are estimated for dynamic network conditions and for the different motion levels in the video sequences. These contributions support adaptation decisions on-the-fly.
The organization of the paper is as follows: Section 2 briefly explains the Scalable Video Coding standard, Section 3 reviews related work, Section 4 discusses the challenges involved in video streaming over wireless environments, Section 5 presents the experimentation and results, and Section 6 concludes the work.

2. Scalable Video Coding (SVC), an Extension to H.264/Advanced Video Coding (AVC)

Layered video coding techniques enable video adaptation. The H.26x series of video compression standards is the most widely used today. Within this series, H.264/AVC [5] is the most common, and the majority of video communication applications use it for encoding and decoding. H.265/HEVC (High-Efficiency Video Coding) [10] is the latest compression standard available, but the number of applications that use it is still limited. Recently, an international committee was formed to develop a new compression method called H.266/VVC (Versatile Video Coding) [11]. In this experiment, we consider Scalable Video Coding (SVC), an extension of H.264/AVC [6], since streaming and adaptation with SVC encoding is still a major research challenge that needs to be addressed. The adaptation method itself can be used with any layered encoder.
SVC supports scalability in the spatial, temporal, and quality dimensions. Spatial scalability represents variations of the spatial resolution with respect to the original picture. Temporal scalability describes subsets of the bit-stream that represent the source content at different frame rates. Quality scalability is also commonly referred to as fidelity or Signal-to-Noise Ratio (SNR) scalability.
In Scalable Video Coding, one base layer and multiple enhancement layers are generated. The base layer can be decoded independently, and each enhancement layer is coded with the previous layers as reference. The result is a single bit-stream from which parts can be removed in such a way that the remainder is still a valid bit-stream, as shown in Figure 1. The base layer consumes more bandwidth than any individual enhancement layer, but the overall bandwidth consumption is much lower than transmitting separate streams. This gain in efficiency comes at the expense of some increase in complexity compared to simulcast coding, in which multiple video sequences are generated to meet the different resolutions and frame rates.
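To make the extraction idea concrete, the following is a minimal sketch, not taken from JSVM or the paper's implementation, of how a bit-stream whose packets are tagged with their (D, T, Q) identifiers can be thinned to a target operating point; the NalUnit type and thin_bitstream helper are illustrative names.

```python
from typing import List, NamedTuple

class NalUnit(NamedTuple):
    """One coded packet of the scalable bit-stream, tagged with its layer identifiers."""
    dependency_id: int   # D: spatial layer
    temporal_id: int     # T: temporal layer
    quality_id: int      # Q: quality (SNR) layer
    payload: bytes

def thin_bitstream(nal_units: List[NalUnit], d_max: int, t_max: int, q_max: int) -> List[NalUnit]:
    """Keep only the NAL units at or below the target operating point (d_max, t_max, q_max).

    Because every enhancement layer references only lower layers, the remaining
    units still form a valid, decodable bit-stream."""
    return [nal for nal in nal_units
            if nal.dependency_id <= d_max
            and nal.temporal_id <= t_max
            and nal.quality_id <= q_max]
```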
The SVC standard enhances the temporal prediction feature of AVC: instead of single-layer coding, a multi-layer method is followed. The major difference between AVC and SVC in terms of temporal scalability is the signaling of the temporal layer information. SVC uses hierarchical prediction, and dyadic hierarchical prediction offers higher coding efficiency than other prediction structures, such as non-dyadic and no-delay prediction.
Each spatial layer represents a spatial resolution and is identified by the dependency identifier D. The base layer corresponds to level 0, and the level is increased for each enhancement layer. Within each spatial layer, motion-compensated prediction and intra-prediction are employed as in single-layer coding. In simulcast, the same video is coded independently at different spatial resolutions, whereas SVC adds inter-layer prediction mechanisms to improve coding efficiency. Inter-layer prediction includes techniques for motion-parameter and residual prediction, and the temporal prediction structures of the spatial layers should be temporally aligned for efficient use of inter-layer prediction.
For quality (SNR) scalability, coarse-grain SNR scalability (CGS) and medium-grain SNR scalability (MGS) are distinguished in scalable video coding. A quality layer is identified by Q, and each spatial layer can have several quality layers. The decoder selects the Q value based on the requirement and decodes the corresponding quality for each spatially enhanced frame.
The SVC encoder combines all of the above scalability dimensions to code a video sequence, as shown in Figure 2. The original video sequence is first down-sampled to the minimum resolution expected in the video communication, and the base layer is coded independently. Enhancement layers are then generated with the base layer and the lower spatial levels as references; the number of enhancement layers is determined by the temporal levels. Finally, the coded video is packetized according to the Network Abstraction Layer standard and can then be stored or streamed over the network.

Media Aware Network Elements (MANE)

MANEs are CAN-enabled intermediate devices that implement intelligent modules, such as routing and adaptation [12,13,14]. Figure 3 depicts the architecture of a MANE, which mainly implements an Adaptation Decision Engine (ADE) and an Extractor.
Supporting modules, namely the Network Analyzer and the SVC Header Analyzer, feed the ADE. The Network Analyzer monitors the network conditions and the availability of network resources and passes this information to the ADE, which decides the number of layers to extract. The module continuously monitors the congestion status and Packet Delivery Ratio (PDR) of the network to track the dynamic nature of the network resources; this monitored data helps improve the efficiency of adaptation. Bandwidth, buffer availability, and terminal capabilities are also monitored to improve the decision-making. In wireless communication, reachability is an important parameter as well, because it determines the performance of the routing protocol and the quality of the received data. Wireless routing protocols incur many overheads, such as hello and echo packets; studying the influence of the routing protocol and of bandwidth availability was therefore an aim of this research.
The SVC Header Analyzer parses the packets and extracts the layer information from the bit-stream. The scalable video levels decided by the ADE are fed into the Extractor, which extracts the corresponding SVC levels. Unwanted scalable layers are removed from the fully scalable video bit-stream, and the adapted bit-stream is delivered to the network. The adapted video bit-stream provides the maximum video quality achievable with the available resources and conditions.
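As an illustration of how these modules could fit together, the sketch below wires a network analyzer, a decision engine, and an extractor in Python. The class names and the bandwidth/PDR thresholds are placeholders chosen for the example, not the rules derived in this paper (those appear in Tables 3, 4 and 5); packets are assumed to carry dependency_id, temporal_id, and quality_id attributes, as in the earlier sketch.

```python
class NetworkAnalyzer:
    """Tracks the delivery statistics the ADE needs (a simplified stand-in)."""
    def __init__(self):
        self.sent = 0
        self.received = 0
        self.available_bandwidth_mbps = 0.0

    def pdr(self) -> float:
        return self.received / self.sent if self.sent else 1.0

class AdaptationDecisionEngine:
    """Chooses a target operating point (D, T, Q) from the monitored network state.

    The thresholds below are illustrative placeholders; the actual mapping is
    derived experimentally in this paper."""
    def decide(self, analyzer: NetworkAnalyzer) -> tuple:
        if analyzer.available_bandwidth_mbps >= 48 and analyzer.pdr() >= 0.8:
            return (2, 2, 0)   # full spatial resolution
        if analyzer.available_bandwidth_mbps >= 24:
            return (1, 0, 0)   # 720p, lowest frame rate and quality
        return (0, 0, 0)       # base layer only

class Extractor:
    """Drops every packet above the operating point chosen by the ADE."""
    def extract(self, nal_units, operating_point):
        d_max, t_max, q_max = operating_point
        return [n for n in nal_units
                if n.dependency_id <= d_max and n.temporal_id <= t_max and n.quality_id <= q_max]
```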

3. Related Works

The majority of research works support receiver-driven and sender-driven adaptation methods, which are carried out at the end devices and the server side, respectively. In the receiver-driven approach [15], the content is adapted by the receiving device just before display. Guo et al. [16] proposed a multi-quality streaming method using SVC: multiple qualities of video data are streamed in a multicast session, and the receivers choose the video quality. An et al. [17] developed a resource allocation and layer selection method to choose the scalable video levels in a mobility scenario. In the sender-driven method [18], receivers signal their device capabilities when creating the session, and the sender adapts the content accordingly before streaming it. A video-optimizer Virtual Network Function [19] has been proposed to implement dynamic video optimization, where a kernel-level video processing module and Network Function Virtualization (NFV) are used to improve video quality in 5G networks. There are also many works on Hypertext Transfer Protocol (HTTP)-based dynamic video adaptation [20,21,22,23,24] that use a server-driven method; in these techniques, the server collects feedback on the video quality and streams the video accordingly to improve the QoE of the communication.
There are many client-side adaptation techniques, which mainly use Dynamic Adaptive Streaming over HTTP (DASH) and HTTP Adaptive Streaming (HAS) to deliver the adapted video data. Pu et al. [25] proposed a DASH mechanism for the wireless domain (WiDASH). Similarly, Kim et al. [26] proposed a client-side adaptation technique to improve the QoE of HTTP adaptive streaming that considers the dynamic variation of both the network bandwidth and the client's buffer capacity. Tian et al. [27] demonstrate video adaptation using a feedback mechanism. There are also a few implementations that combine client-side adaptation with resource allocation techniques [28,29]. These implementations display the adapted content once it is fully received by the receivers. However, these techniques stream the video at full quality from sender to receiver and hence consume more network resources. This motivates us to explore in-network adaptation methods.
Chen et al. [30] presented a dynamic adaptation mechanism to improve the QoE of video communication; the model considers multiple video rates for the communication. In Reference [31], a traffic engineering method has been proposed to support video adaptation, together with a study of the importance of SDN for video streaming. A physical-layer dynamic adaptation based on carrier sensing has been proposed in Reference [32]. Quinlan et al. [33] proposed a streaming-class-based method for scalable video, in which the video is organized into quality levels and each level is streamed independently.
These research works target in-network adaptation techniques, but they fail to handle network dynamics and video motion together. Adaptation while streaming is difficult because the adaptation module needs dynamic network conditions and video metadata to decide the adaptation parameters. The literature does not discuss the role of video metadata and network parameters, such as mobility and bandwidth availability. Additionally, adaptation requires prior knowledge of the adaptation parameters to improve adaptation on-the-fly. Most of the literature concentrates on adaptation techniques and does not discuss the prior knowledge required by the adaptation engine. Hence, in this research work, we carry out an experimental analysis and then derive the adaptation parameters.

4. Scalable Video Streaming over Wireless Network

Video adaptation over a wireless network faces the following major challenges:
  • Node Mobility: The nodes in a wireless network are free to move, which leads to disruptions in the communication. Node mobility affects the bandwidth available between the source and the receiver, and this change in bandwidth degrades the quality of the communication.
  • Routing Overhead: Routing algorithms in wireless networks use a periodic exchange of messages to monitor and manage routes. These management messages consume network resources and thus reduce the resources available for data transmission. In addition, frequent route updates in the wireless environment lead to the dropping of data packets.
  • Motion Levels: The movement of objects in a video sequence influences the encoding and decoding of the video data. An increase in object motion leads to a higher bitrate of the video data. Hence, motion is also an important parameter to consider for video communication over a bandwidth-constrained network.
  • Estimation of Adaptation Parameters: Estimating the adaptation parameters under the above dynamic conditions is one of the research challenges.
This experimental work considers an ad-hoc wireless environment consisting of mobile nodes and forwarders. The routing algorithm considered for the experimentation is Ad-hoc On-Demand Distance Vector (AODV) [34], which is regarded as a stable routing algorithm in the wireless domain.
AODV can route both unicast and multicast packets. It is an on-demand algorithm, which means that the route between source and destination is created only when the source has data to send, and the established routes are preserved as long as the source requires them for communication. Furthermore, AODV forms trees that connect multicast group members while avoiding routing loops. To obtain knowledge of the network topology, nodes exchange HELLO and Reply packets. Once the route is established, the source starts streaming the video packets. In the wireless domain, nodes act as source, forwarder, and destination and can read the packets; hence, it is assumed that every node in the wireless network acts as a MANE.
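For readers unfamiliar with AODV, the following is a highly simplified sketch of on-demand route discovery, modelled as a breadth-first flood of route requests over the current links; sequence numbers, route timeouts, and maintenance messages are deliberately omitted, and the topology is purely illustrative.

```python
from collections import deque

def aodv_route_discovery(adjacency: dict, source: str, destination: str):
    """Very simplified on-demand route discovery: the source floods a route
    request (modelled here as a breadth-first search over the current links),
    and the first path that reaches the destination is returned, mimicking
    the route carried back by the route reply."""
    visited = {source}
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path                      # route reply travels back along this path
        for neighbour in adjacency.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None                              # destination unreachable; no route established

# Example topology: the source streams to the destination through forwarders.
links = {"S": ["F1", "F2"], "F1": ["D"], "F2": ["F1"], "D": []}
print(aodv_route_discovery(links, "S", "D"))  # ['S', 'F1', 'D']
```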
The size of the scalable video bit-stream varies with the motion level of the video sequence. We consider three video sequences, Honeybee, Jockey, and Bosphorus, which represent high, medium, and low motion, respectively. Figure 4 shows the video dataset used in this work. SVC encodes motion parameters along with the video data; therefore, as the motion in the video increases, the number of packets that need to be transmitted over the network also increases. When these packets are transmitted over a bandwidth-limited network, the dropping of a packet that carries motion parameters adversely affects the decoded video at the receiver. This is why the decision taken at the ADE varies with the motion level of the video sequence.
With the above methodology, we address the challenges listed in this section. The planned wireless network setup takes the listed network challenges and dynamic conditions into account. Additionally, we account for background communications and noise by explicitly creating such communications, which makes the simulation environment more realistic.

5. Experimentation and Discussion

To study the performance of scalable video streaming over wireless networks, we chose the OMNeT++ tool [9]; for encoding and decoding the video sequences, the Joint Scalable Video Model (JSVM) tool, version 9.18 [8], was used.
OMNeT++ is a simulator that supports the connection of real devices to a simulation environment, so real network applications, such as VLC streaming, can be used. JSVM is reference software developed by the Joint Video Team (JVT); it supports up- and down-sampling, SVC encoding and decoding, and bit-stream extraction of video data.
In this experimental study, the video sequences are encoded with 3 temporal levels, 3 spatial levels, and 2 quality levels. The frame rates considered are 15 fps, 30 fps, and 60 fps, represented by the value T. For spatial scalability, the standard resolutions 480p, 720p, and 1080p are considered, and the value of D denotes the spatial level. The quality scalability Q denotes 2 levels of video quality, achieved by using 2 different quantization levels. Table 1 lists the video levels and the bitrate of each level considered in this study. The network scenario and parameters considered in this experimentation are shown in Table 2.
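For use in the later examples, the operating points of Table 1 can be written as a small lookup structure; the dictionary below simply restates the layer list (resolution, frame rate, and (D, T, Q)) and is not part of the JSVM configuration.

```python
# Scalable levels used in the experiments (Table 1): level -> (resolution, fps, (D, T, Q)).
SCALABLE_LEVELS = {
    "L0":  ("864x480",   15, (0, 0, 0)),
    "L1":  ("864x480",   30, (0, 1, 0)),
    "L2":  ("864x480",   15, (0, 0, 1)),
    "L3":  ("864x480",   30, (0, 1, 1)),
    "L4":  ("1280x720",  15, (1, 0, 0)),
    "L5":  ("1280x720",  30, (1, 1, 0)),
    "L6":  ("1280x720",  60, (1, 2, 0)),
    "L7":  ("1280x720",  15, (1, 0, 1)),
    "L8":  ("1280x720",  30, (1, 1, 1)),
    "L9":  ("1280x720",  60, (1, 2, 1)),
    "L10": ("1920x1088", 15, (2, 0, 0)),
    "L11": ("1920x1088", 30, (2, 1, 0)),
    "L12": ("1920x1088", 60, (2, 2, 0)),
}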
The wireless networking environment is created using the OMNeT++ tool, as shown in Figure 5. The source and destination nodes of the simulation environment are attached to real computer devices. The node mobility patterns are predefined in OMNeT++ and are used to study the performance of the video streaming. To make the wireless environment realistic, background communications using UDP are generated: each device periodically broadcasts 100 kB of data, and, in addition, the routing protocol consumes resources for route management. The resulting variation in the network resources influences the delivery of the video data packets, which is what this experimental work studies.
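In our setup the background load is configured inside OMNeT++; purely as an illustration of the traffic pattern described above (a periodic broadcast of roughly 100 kB over UDP), a stand-alone equivalent could look like the sketch below, with the port number, chunk size, and period chosen arbitrarily.

```python
import socket
import time

def broadcast_background_traffic(period_s: float = 1.0, payload_size: int = 100 * 1024,
                                 port: int = 50000) -> None:
    """Periodically broadcast roughly 100 kB of data to emulate background load.

    Illustrative only: in the paper the background traffic is configured inside
    the OMNeT++ simulation, not generated by an external script."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    chunk = b"\x00" * 1400                     # stay below a typical UDP/IP MTU per datagram
    datagrams = payload_size // len(chunk)
    while True:
        for _ in range(datagrams):
            sock.sendto(chunk, ("255.255.255.255", port))
        time.sleep(period_s)
```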
The video streaming is carried out with the help of a VLC media player at the source and destination. The VLC player generates the stream-ready video data and streams it over the simulation environment, using the Real Time Streaming Protocol (RTSP) and UDP. VLC is also used to capture the video sequence at the receiver side. Since VLC does not support SVC encoding and decoding, it is used for streaming and capturing only. The Packet Delivery Ratio is calculated using Wireshark as well as the OMNeT++ tool, taking into account the total packets transmitted and received.
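The PDR itself is a simple ratio; the snippet below shows the calculation from the transmitted and received packet counts, with purely hypothetical counts used in the usage example.

```python
def packet_delivery_ratio(packets_sent: int, packets_received: int) -> float:
    """PDR = received / sent, with both counts taken from the simulation logs
    (or from captures at the source and destination)."""
    if packets_sent == 0:
        raise ValueError("no packets were transmitted")
    return packets_received / packets_sent

# Hypothetical counts for one streaming run:
print(f"PDR = {packet_delivery_ratio(12500, 10400):.2%}")  # PDR = 83.20%
```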
Figure 6 depicts the PDR obtained for streaming the Bosphorus video over the wireless network. Here, bandwidth variations of 24 Mbps and 48 Mbps are considered for streaming the video sequences. From the results, it is evident that a wireless network with 48 Mbps can transmit a fully scalable video sequence containing all scalable levels. Since background communications keep the network active and ready for communication, a portion of the network bandwidth is allocated to routing overheads; hence, streaming of the fully scalable bit-stream lacks network resources. At 24 Mbps, scalable video with lower frame rates, i.e., up to 15 fps and 720p, gives a better PDR. Higher qualities and resolutions, however, require a higher video bit-rate and additional network resources for streaming; as a result, high-quality video streaming lacks the network resources needed to provide good communication quality.
A further experiment studies the influence of node mobility and routing overheads on video communication; the results are plotted in Figure 7. In a mobility scenario, the nodes keep changing their positions and, hence, their connectivity. This change in connectivity triggers route re-computation in the network, which affects the communication by dropping video data packets. Figure 7a,b show the PDR for the 24 Mbps and 48 Mbps networks, respectively. From the results, it is observed that the base layer is stable both with and without node mobility. The bitrate of the base layer is low compared to the higher levels; the increased bitrate of the higher levels leads to more packets in the communication and fails to provide stable quality in the mobility scenario. Even without mobility, the expiry of the lifetime of a calculated route leads to re-computation of the streaming path; hence, packet drops are still observed in the experiment.
Another experiment studies the influence of the video bitrate on streaming over a wireless network; the results are shown in Figure 8. In this experiment, all scalable video bit-streams are streamed over the network, the received video is captured at the receiver side, and the PDR is calculated to analyze the streaming performance. A video sequence with more motion produces a higher bit-rate and hence more packets while streaming over the network. The Bosphorus video sequence is more stable than the Jockey and Honeybee sequences: Bosphorus has a slow-moving object in the foreground and very distant background objects, so the bitrate generated by the SVC encoder is lower than for Jockey and Honeybee. In Jockey and Honeybee, objects in the foreground and background move frequently; therefore, the generated bitrate is higher. Figure 9 shows the PDR for the non-mobility wireless scenario.
From these experiments, the influence of node mobility, bandwidth, and motion level is observed and used to derive the pre-knowledge required by the ADE. As the aim of this research work was to generate the extraction points for the ADE, the adaptation parameters were estimated under the assumption that a PDR of 80% is required for decoding the bit-stream and displaying video of acceptable visual quality. The adaptation parameters derived for the ADE are shown in Table 3, Table 4 and Table 5; these tables give the adaptation parameters needed to obtain better QoE, in terms of PDR, in a bandwidth-constrained network.
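The selection rule behind these tables can be sketched as follows: for each network condition, choose the highest scalable level whose observed PDR meets the 80% threshold. The per-level PDR values in the example are hypothetical; only the threshold comes from the text above.

```python
PDR_THRESHOLD = 0.80  # minimum PDR assumed sufficient for acceptable decoded quality

def highest_acceptable_level(pdr_per_level: dict) -> str:
    """Return the highest level (by layer index) whose PDR is at least the threshold."""
    acceptable = [lvl for lvl, pdr in pdr_per_level.items() if pdr >= PDR_THRESHOLD]
    # Layer names are "L0" .. "L12"; sort by their numeric index.
    return max(acceptable, key=lambda lvl: int(lvl[1:])) if acceptable else "L0"

# Hypothetical per-level PDR measurements for one network condition:
measured = {"L0": 0.97, "L1": 0.92, "L2": 0.88, "L3": 0.84, "L4": 0.79, "L5": 0.63}
print(highest_acceptable_level(measured))  # L3
```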
Table 3, Table 4 and Table 5 present the adaptation parameters for video streaming over the wireless network, estimated from the PDR observed during the experimental evaluation. The MANE can use these tables as a reference to decide the extraction points for the available network resources and conditions. This allows the MANE to implement adaptation on-the-fly, since the knowledge built here is available as a reference for deciding which scalable levels to remove. Moreover, because the adaptation parameters are estimated from experiments, the received video is confirmed to meet the quality requirement (PDR) of the video communication. The major advantage of this estimation is a reduction in the delay involved in decision-making; the reduced processing delay makes it feasible to implement the adaptation process in-network and on-the-fly.
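A sketch of how a MANE could consult this pre-knowledge is given below; it encodes Table 5 as a lookup keyed by motion level, mobility, and link bandwidth. The function name and data layout are illustrative, not the paper's implementation.

```python
# Extraction points derived experimentally (Table 5):
# (motion level, mobility) -> {link bandwidth in Mbps: recommended scalable level}.
ADAPTATION_TABLE = {
    ("low",    True):  {24: "L11", 48: "L12"},
    ("low",    False): {24: "L12", 48: "L12"},
    ("medium", True):  {24: "L2",  48: "L4"},
    ("medium", False): {24: "L7",  48: "L10"},
    ("high",   True):  {24: "L0",  48: "L0"},
    ("high",   False): {24: "L3",  48: "L5"},
}

def select_extraction_point(motion: str, mobile: bool, bandwidth_mbps: int) -> str:
    """Return the pre-computed scalable level the MANE should extract down to."""
    return ADAPTATION_TABLE[(motion, mobile)][bandwidth_mbps]

print(select_extraction_point("medium", mobile=True, bandwidth_mbps=48))  # L4 (720p)
```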

6. Conclusions

SVC is a suitable encoding technique for attaining better QoE/QoS in wireless communication. The network topology is highly dynamic, routing protocols incur considerable overhead in the wireless network, and much of the bandwidth is used for maintaining topology knowledge. In this paper, we estimated the adaptation parameters by considering mobility, bandwidth availability, and the motion level of the video sequences as deciding parameters. The experimental method streams different scalable video levels over the wireless network under different network conditions, and high-definition video sequences are used for estimating the adaptation parameters. The knowledge built in this work helps the continuous streaming of video over a CAN-enabled wireless network, so that adaptation can be performed on-the-fly, considering the dynamic network conditions and resource availability.
In the future, the knowledge built here will be used to develop a dynamic video adaptation method. We plan to use machine learning-based approaches to develop dynamic adaptation techniques. In addition, we will simulate various network scenarios and estimate more adaptation parameters for use in dynamic adaptation algorithms.

Author Contributions

Conceptualization, R.M.; formal analysis, G.B., V.M.; project administration, R.M.; software, G.B., R.M.; supervision, K.A.K.; writing—original draft, G.B.; writing—review and editing, R.M., V.M., K.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors confirm that all relevant data are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, D.; Pan, J. Performance Evaluation of Video Streaming over Multi-Hop Wireless Local Area Networks. Trans. Wirel. Commun. 2010, 9, 338–347. [Google Scholar] [CrossRef]
  2. Xing, M.; Xiang, S.; Cai, L. A Real-Time Adaptive Algorithm for Video Streaming over Multiple Wireless Access Networks. IEEE J. Sel. Areas Commun. 2014, 32, 795–805. [Google Scholar] [CrossRef] [Green Version]
  3. Qiao, X.; Wang, H.; Tan, W.; Vasilakos, A.V.; Chen, J.; Blake, M.B. A survey of applications research on content-centric networking. China Commun. 2019, 16, 122–140. [Google Scholar] [CrossRef]
  4. Kuo, W.; Kaliski, R.; Wei, H. A QoE-Based Link Adaptation Scheme for H.264/SVC Video Multicast Over IEEE 802.11. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 812–826. [Google Scholar] [CrossRef]
  5. Wiegand, T.; Sullivan, G.J.; Bjontegaard, G.; Luthra, A. Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 560–576. [Google Scholar] [CrossRef] [Green Version]
  6. Schwarz, H.; Marpe, D.; Wiegand, T. Overview of the Scalable Video Coding Extension of the H.264/AVC Standard. IEEE Trans. Circuits Syst. Video Technol. 2007, 17, 1103–1120. [Google Scholar] [CrossRef] [Green Version]
  7. Ramakrishna, M.; Karunakar, A.K. Estimation of Scalable Video adaptation parameters for Media Aware Network Elements. In Proceedings of the 2014 Sixth International Conference on Communication Systems and Networks (COMSNETS), Bangalore, India, 6–10 January 2014; pp. 1–4. [Google Scholar]
  8. JSVM Software Manual. 2006. Available online: http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle:JSVM+Software+Manual#0 (accessed on 15 March 2021).
  9. OMNeT++ Discrete Event Simulator. Available online: http://web.scalable-networks.com/content/exata (accessed on 15 March 2021).
  10. Sullivan, G.J.; Ohm, J.; Han, W.; Wiegand, T. Overview of the High Efficiency Video Coding (HEVC) Standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668. [Google Scholar] [CrossRef]
  11. Team, J.V.E. Versatile Video Coding. Available online: https://jvet.hhi.fraunhofer.de/ (accessed on 15 March 2021).
  12. Zahariadis, T.B.; Lamy-Bergot, C.; Schierl, T.; Grüneberg, K.; Celetto, L.; Timmerer, C. Content Adaptation Issues in the Future Internet. In Towards the Future Internet—A European Research Perspective; Tselentis, G., Domingue, J., Galis, A., Gavras, A., Hausheer, D., Krco, S., Lotz, V., Zahariadis, T.B., Eds.; IOS Press: Amsterdam, The Netherlands, 2009; pp. 283–292. [Google Scholar] [CrossRef]
  13. Gardikis, G.; Xilouris, G.; Kourtis, A.; Negru, D.; Chen, Y.; Anapliotis, P.; Pallis, E. Media Ecosystem deployment in a content-aware Future Internet architecture. In Proceedings of the 2011 IEEE Symposium on Computers and Communications (ISCC), Kerkyra, Greece, 28 June–1 July 2011; pp. 544–549. [Google Scholar]
  14. Borcoci, E.; Negru, D.; Timmerer, C. A Novel Architecture for Multimedia Distribution Based on Content-Aware Networking. In Proceedings of the 2010 Third International Conference on Communication Theory, Reliability, and Quality of Service, Athens, Greece, 13–19 June 2010; pp. 162–168. [Google Scholar]
  15. Wang, Z.J.; Chan, S.P.; Kok, C.W. Receiver-Buffer-Driven Layered Quality Adaptation for Multimedia Streaming. In Proceedings of the Conference Record of the Thirty-Ninth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 30 October–2 November 2005; pp. 1235–1239. [Google Scholar] [CrossRef]
  16. Guo, C.; Cui, Y.; Ng, D.W.K.; Liu, Z. Multi-Quality Multicast Beamforming With Scalable Video Coding. IEEE Trans. Commun. 2018, 66, 5662–5677. [Google Scholar] [CrossRef]
  17. An, R.; Liu, Z.; Ji, Y. SVC-based video streaming over highway vehicular networks with base layer guarantee. In Proceedings of the 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), Lisbon, Portugal, 8–12 May 2017; pp. 1168–1173. [Google Scholar] [CrossRef]
  18. Khan, A.; Mkwawa, I.; Sun, L.; Ifeachor, E. QoE-Driven Sender Bitrate Adaptation Scheme for Video Applications over IP Multimedia Subsystem. In Proceedings of the 2011 IEEE International Conference on Communications (ICC), Kyoto, Japan, 5–9 June 2011; pp. 1–6. [Google Scholar] [CrossRef]
  19. Salva-Garcia, P.; Alcaraz-Calero, J.M.; Wang, Q.; Arevalillo-Herráez, M.; Bernal Bernabe, J. Scalable Virtual Network Video-Optimizer for Adaptive Real-Time Video Transmission in 5G Networks. IEEE Trans. Netw. Serv. Manag. 2020, 17, 1068–1081. [Google Scholar] [CrossRef]
  20. Zhan, C.; Hu, H.; Sui, X.; Liu, Z.; Wang, J.; Wang, H. Joint Resource Allocation and 3D Aerial Trajectory Design for Video Streaming in UAV Communication Systems. IEEE Trans. Circuits Syst. Video Technol. 2020. [Google Scholar] [CrossRef]
  21. Yuan, H.; Hu, X.; Hou, J.; Wei, X.; Kwong, S. An Ensemble Rate Adaptation Framework for Dynamic Adaptive Streaming Over HTTP. IEEE Trans. Broadcast. 2020, 66, 251–263. [Google Scholar] [CrossRef] [Green Version]
  22. Xie, G.; Jin, X.; Xie, L.; Chen, H. A Quality-driven Bit Rate Adaptation Method for Dynamic Adaptive Streaming over HTTP. In Proceedings of the 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), Hangzhou, China, 18–20 October 2018; pp. 1–6. [Google Scholar] [CrossRef]
  23. Wang, Z.; Jiang, X. A QoE-Driven Rate Adaptation Approach for Dynamic Adaptive Streaming Over HTTP. In Proceedings of the 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 18–21 February 2019; pp. 224–229. [Google Scholar] [CrossRef]
  24. Gohar, A.; Lee, S. Multipath Dynamic Adaptive Streaming over HTTP Using Scalable Video Coding in Software Defined Networking. Appl. Sci. 2020, 10, 7691. [Google Scholar] [CrossRef]
  25. Pu, W.; Zou, Z.; Chen, C.W. Video adaptation proxy for wireless Dynamic Adaptive Streaming over HTTP. In Proceedings of the 2012 19th International Packet Video Workshop (PV), Amherst, MA, USA, 21 June 2012; pp. 65–70. [Google Scholar] [CrossRef]
  26. Kim, S.; Yun, D.; Chung, K. Video quality adaptation scheme for improving QoE in HTTP adaptive streaming. In Proceedings of the 2016 International Conference on Information Networking (ICOIN), Kota Kinabalu, Malaysia, 13–15 January 2016; pp. 201–205. [Google Scholar] [CrossRef]
  27. Tian, G.; Liu, Y. Towards Agile and Smooth Video Adaptation in HTTP Adaptive Streaming. IEEE/ACM Trans. Netw. 2016, 24, 2386–2399. [Google Scholar] [CrossRef]
  28. Xiang, S.; Cai, L.; Pan, J. Adaptive Scalable Video Streaming in Wireless Networks. In Proceedings of the 3rd Multimedia Systems Conference, MMSys’12, New York, NY, USA, 1 January 2012; pp. 167–172. [Google Scholar] [CrossRef]
  29. Zhou, H.; Wang, X.; Liu, Z.; Ji, Y.; Yamada, S. Resource Allocation for SVC Streaming Over Cooperative Vehicular Networks. IEEE Trans. Veh. Technol. 2018, 67, 7924–7936. [Google Scholar] [CrossRef]
  30. Chen, B.W.; Ji, W.; Jiang, F.; Rho, S. QoE-Enabled Big Video Streaming for Large-Scale Heterogeneous Clients and Networks in Smart Cities. IEEE Access 2016, 4, 97–107. [Google Scholar] [CrossRef]
  31. Awobuluyi, O.; Nightingale, J.; Wang, Q.; Calero, J.M.A.; Grecos, C. In-network adaptation of SHVC video in software-defined networks. In Real-Time Image and Video Processing 2016; Kehtarnavaz, N., Carlsohn, M.F., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2016; Volume 9897, pp. 89–96. [Google Scholar] [CrossRef]
  32. Yu, D.; Wang, Y.; Tonoyan, T.; Halldórsson, M.M. Dynamic Adaptation in Wireless Networks Under Comprehensive Interference via Carrier Sense. In Proceedings of the 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Orlando, FL, USA, 29 May–2 June 2017; pp. 337–346. [Google Scholar]
  33. Quinlan, J.J.; Zahran, A.H.; Sreenan, C.J. Efficient Delivery of Scalable Video Using a Streaming Class Model. Information 2018, 9, 59. [Google Scholar] [CrossRef] [Green Version]
  34. Perkins, C.E.; Royer, E.M. Ad-hoc on-demand distance vector routing. In Proceedings of the Second IEEE Workshop on Mobile Computing Systems and Applications, Proceedings WMCSA’99, New Orleans, LA, USA, 25–26 February 1999; pp. 90–100. [Google Scholar]
  35. Ultra Video Group—Tampere University. Available online: http://ultravideo.fi/ (accessed on 15 March 2021).
Figure 1. Scalable video layer extraction and delivery.
Figure 2. Combined scalability.
Figure 3. Architecture of Media Aware Network Elements (MANE).
Figure 4. Video Dataset that are considered in the experimentation [35].
Figure 5. Simulation setup.
Figure 6. Packet Delivery Ratio (PDR) observed in Honeybee video streaming for bandwidth variations (with mobility).
Figure 7. PDR observed in Honeybee video streaming for mobility and non-mobility.
Figure 8. Comparison of PDR observed for Jockey, Honeybee, and Bosphorus (with mobility).
Figure 9. Comparison of PDR observed for Jockey, Honeybee, and Bosphorus (without mobility).
Table 1. Bitstream statistics and scalable video level of the Bosphorus, Jockey, and Honeybee video sequences.
Layer | Resolution (W × H) | Framerate (fps) | Bosphorus Bitrate (kbps) | Jockey Bitrate (kbps) | Honeybee Bitrate (kbps) | (D,T,Q)
L0 | 864 × 480 | 15 | 1229.60 | 4585.00 | 7947.00 | (0,0,0)
L1 | 864 × 480 | 30 | 1404.00 | 7781.00 | 13,055.00 | (0,1,0)
L2 | 864 × 480 | 15 | 3083.00 | 9354.00 | 15,548.00 | (0,0,1)
L3 | 864 × 480 | 30 | 3484.00 | 15,636.00 | 25,340.00 | (0,1,1)
L4 | 1280 × 720 | 15 | 3765.00 | 11,729.00 | 18,140.00 | (1,0,0)
L5 | 1280 × 720 | 30 | 4370.00 | 19,780.00 | 27,980.00 | (1,1,0)
L6 | 1280 × 720 | 60 | 4610.00 | 21,210.00 | 28,460.00 | (1,2,0)
L7 | 1280 × 720 | 15 | 5875.00 | 14,024.00 | 20,840.00 | (1,0,1)
L8 | 1280 × 720 | 30 | 6758.00 | 23,670.00 | 30,750.00 | (1,1,1)
L9 | 1280 × 720 | 60 | 7154.00 | 25,810.00 | 31,340.00 | (1,2,1)
L10 | 1920 × 1088 | 15 | 6095.00 | 17,150.00 | 23,030.00 | (2,0,0)
L11 | 1920 × 1088 | 30 | 7190.00 | 28,600.00 | 32,990.00 | (2,1,0)
L12 | 1920 × 1088 | 60 | 7756.00 | 31,960.00 | 33,720.00 | (2,2,0)
Table 2. Simulation Parameters.
Simulation area (m × m) | 1000 × 1000
Simulation time (s) | 100
Number of nodes | 25
MAC layer protocol | IEEE 802.11
Transmission range (m) | 200
Maximum velocity (m/s) | 25
Physical wireless layer | IEEE 802.11b
Routing protocol | AODV
Bandwidth between links (Mbps) | 24, 48
Table 3. Adaptation parameters for medium motion video sequence at bandwidth variations.
Parameter | Scalable Video Layer
24 Mbps | L1 (480p)
48 Mbps | L4 (720p)
Table 4. Adaptation parameters for medium motion video sequence at node mobility.
Parameter | 24 Mbps | 48 Mbps
Without Mobility | L7 (720p) | L10 (1080p)
With Mobility | L1 (480p) | L4 (720p)
Table 5. Adaptation parameters.
Motion Level | Mobility, 24 Mbps | Mobility, 48 Mbps | Non-Mobility, 24 Mbps | Non-Mobility, 48 Mbps
Low | L11 (1080p) | L12 (1080p) | L12 (1080p) | L12 (1080p)
Medium | L2 (480p) | L4 (720p) | L7 (720p) | L10 (1080p)
High | L0 (480p) | L0 (480p) | L3 (480p) | L5 (720p)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
