In this section, we will summarize and categorize the existing literature on QoS and QoE management in ICN. Depending on the strategy utilized to optimize QoS or QoE metrics, we have divided the research works into the following categories: (i) in-network caching, (ii) name resolution and routing, (iii) software-defined networking (SDN)-based ICN solutions, (iv) transmission and flow control, and (v) media-streaming strategies.
5.1. In-Network Caching
In-network caching is a core feature of ICN architectures that directly enhances both QoS and QoE. By storing data at intermediate network nodes, in-network caching minimizes the need to retrieve content from the original source, which reduces latency, routing delays, server load, and overall network traffic [
46,
47]. In addition, content-aware caching enables the network to identify cached data independently of the application layer. A trace-driven analysis in [
48] demonstrated that in-network caching can shorten delivery paths by three or more hops for 30% of packets, with 11% of requests satisfied within the requester’s domain, improving access speed and reliability for end-users. Moreover, in-network caching is also attractive to content providers, since it can reduce the capital expense of their content distribution network (CDN) servers while still maintaining satisfactory QoE levels.
ICN architectures use ubiquitous in-network caching, although the design objectives and details may vary depending on the architecture. Based on their operational characteristics, the existing caching mechanisms can be divided into the following categories: (1) Homogeneous and Heterogeneous Caching, (2) Cooperative and Non-Cooperative Caching, and (3) On-Path and Off-Path Caching [
10]. In uniform caching systems (homogeneous), all content routers uniformly store passing data, each allocated identical storage capacity. Conversely, selective caching (heterogeneous) restricts data retention to specific routers positioned along the return transmission path. Collaborative caching models enable routers within a network to pool resources, collectively storing a broader array of content segments and sharing cached data to fulfill external requests. In contrast, isolated caching (non-cooperative) operates independently: routers autonomously decide which data to store without disclosing their cached content inventory to peers, eliminating coordination.
Caching in ICN is one of the most popular research areas, and many researchers have focused on utilizing this attribute to improve QoS and QoE. Wu et al. [
49] presented a memory update strategy for the content store (CS) of NDN routers (
Figure 1) considering data lifetime. Different types of data are assigned different lifetimes: instead of using cache replacement algorithms such as first-in, first-out (FIFO) or Least Recently Used (LRU), a data packet is discarded from the CS based on how long it has resided there. Wu et al. [
49] and Khelifi et al. [
50] utilized traffic classification strategies to improve cache efficiency. In [
49], a new column titled “lifetime” was added to the CS, which held the lifetime value for different types of data.
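The lifetime-based replacement idea of [49] can be sketched as a content store whose entries carry a per-class lifetime and expire on lookup. The traffic-class names and lifetime values below are illustrative assumptions, not the authors' table.

```python
import time

# Illustrative per-class lifetimes (seconds); the actual values in [49]
# depend on the traffic class and are not reproduced here.
LIFETIME_BY_CLASS = {"realtime": 2.0, "video": 30.0, "static": 300.0}

class LifetimeContentStore:
    """Content store that discards entries whose lifetime has expired,
    instead of using FIFO or LRU replacement."""
    def __init__(self):
        self._store = {}  # name -> (data, inserted_at, lifetime)

    def insert(self, name, data, traffic_class, now=None):
        now = time.monotonic() if now is None else now
        lifetime = LIFETIME_BY_CLASS.get(traffic_class, 60.0)
        self._store[name] = (data, now, lifetime)

    def lookup(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        data, inserted_at, lifetime = entry
        if now - inserted_at > lifetime:  # expired: drop from the CS
            del self._store[name]
            return None
        return data
```

Passing an explicit `now` makes the expiry behavior easy to test deterministically.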
On the other hand, a QoS-aware cache replacement policy for vehicular NDN was presented in [
50]. The authors categorized traffic into different classes and divided the CS into sub-cache stores; content was retained or removed based on its popularity density value. In [
51], a class-based DiffServ model was proposed for NDN, where traffic classification and conditioning occurred at receiver-side edge networks. The model claimed to offer greater flexibility than traditional IP DiffServ, allowing users to request the same content with different service classes through modified interest aggregation.
Although in-network caching offers many benefits, it also creates some challenges to support Internet-scale ICN cache routers [
52]. The first major challenge is efficiently managing limited memory resources, as high-speed caching demands fast but costly, low-capacity memory chips. The second is integrating and optimizing multiple memory technologies with different read–write characteristics. To tackle these challenges, Li et al. [
53] presented HCaching, a hierarchical caching method that uses SRAM (faster than DRAM but costly) as the secondary structure of DRAM to improve the overall throughput of ICN routers.
Figure 3 shows a simplified architecture of HCaching, where a large Layer 2 (L2) cache (DRAM) is masked behind a small Layer 1 (L1) cache (SRAM). The SRAM contains three main components: (i) the L2 Index, which uses the buffering algorithm [54] to index the L2 cache, (ii) the Reading Cache, managed by the LRU algorithm, and (iii) the Writing Cache, which also utilizes the LRU algorithm.
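The layering principle can be sketched as a small, LRU-managed L1 masking a larger L2. This minimal sketch omits HCaching's L2 Index and its separate reading and writing caches; capacities and promotion policy are illustrative assumptions.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Sketch of a hierarchical cache: a small, fast L1 (SRAM-like,
    LRU-managed) in front of a large L2 (DRAM-like)."""
    def __init__(self, l1_capacity, l2_capacity):
        self.l1 = OrderedDict()
        self.l2 = OrderedDict()
        self.l1_capacity = l1_capacity
        self.l2_capacity = l2_capacity

    def _put(self, level, capacity, name, data):
        level[name] = data
        level.move_to_end(name)
        if len(level) > capacity:
            level.popitem(last=False)  # evict the LRU entry

    def insert(self, name, data):
        # New content is admitted to L2; L1 fills on demand.
        self._put(self.l2, self.l2_capacity, name, data)

    def lookup(self, name):
        if name in self.l1:            # fast path: served from SRAM
            self.l1.move_to_end(name)
            return self.l1[name]
        if name in self.l2:            # slower path: promote to L1
            data = self.l2[name]
            self._put(self.l1, self.l1_capacity, name, data)
            return data
        return None
```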
Simulation results showed that HCaching required much less SRAM than LRU and DRAM_SSD [55] for all DRAM sizes, and less than OPC [56] for DRAM sizes above 1 GB. In addition, HCaching achieved a throughput close to 100 Gb/s, compared with roughly 10 Gb/s for LRU, 15 Gb/s for DRAM_SSD, and less than 10 Gb/s for OPC.
Wang et al. [
57] investigated the financial implications of ICN, specifically evaluating its cost-effectiveness prior to widespread adoption within internet service provider (ISP) infrastructures. The authors introduced a caching strategy informed by cost considerations to analyze the trade-off between enhancing QoS and minimizing the economic overheads.
Content popularity, especially for multimedia objects, was studied in [
58,
59]. Li et al. [
59] proposed DASCache, a framework that finds optimal video content placement in the transmission queue of the router.
Figure 4 illustrates the queueing model for DASCache’s adaptive video streaming. In each round, the most popular video chunk is placed in the front of the queue, which minimizes the average access time per bit to all users and thereby increases their respective throughput.
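The queue-ordering principle behind DASCache can be illustrated with a short sketch: serving chunks in decreasing popularity-per-bit minimizes the popularity-weighted average access time (a weighted shortest-job-first argument). The chunk sizes and popularity counts below are made up, and DASCache's actual placement model is more involved.

```python
def order_queue(chunks):
    """Order video chunks so the most-requested bytes wait least:
    serve chunks in decreasing popularity per bit, which minimizes
    the popularity-weighted average access time (Smith's rule).
    `chunks` is a list of (name, size_bits, popularity)."""
    return sorted(chunks, key=lambda c: c[2] / c[1], reverse=True)

def weighted_access_time(queue):
    """Popularity-weighted mean time until each chunk finishes
    transmitting, assuming unit bandwidth (1 bit per time unit)."""
    t, total, weight = 0.0, 0.0, 0.0
    for _name, size, pop in queue:
        t += size
        total += pop * t
        weight += pop
    return total / weight
```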
Lv et al. [
60] and Sellami et al. [
61] proposed cooperative and distributed caching strategies for CCN. In [
60], a heterogeneous cache allocation method distributed the cache capacity across different CRs. The importance of CRs was evaluated based on network topology and traffic characteristics. The proposed model demonstrated an improved cache hit ratio and utilization and lower routing delay. However, this framework had a high computational complexity, and the study was based on a single topology. On the other hand, a fog computing-based model was proposed in [
61]. The authors integrated CCN with fog computing to improve content availability and energy efficiency by optimizing content placement within the network.
Another downside of in-network caching is that content producers do not know where their content is cached, which introduces a problem when the producer wants to update or change the content. Lal et al. [
62] proposed a content update scheme for CCN that added extra fields to the interest packets to facilitate version records and backtracking. ndnSIM simulations demonstrated reduced network overhead, lower delay, and improved QoE.
Table 2 lists various ICN frameworks that utilize new caching and allocation strategies to enhance QoS and QoE metrics, the experimental platform they used, the type of network (where details are available), and the QoS or QoE metrics they optimized.
Comparative Analysis of Caching-Based Solutions
Based on the references listed in
Table 2, we can categorize the strategies into a few general subgroups: (i) popularity-based, (ii) cooperative, (iii) topology-aware, (iv) allocation-based, (v) energy-efficient, and (vi) hybrid.
Popularity-based caching techniques prioritize cache content based on its access frequency. Caching frequently accessed content increases the likelihood of satisfying future requests from the local cache, increasing the cache hit ratio and reducing delay compared to popularity-agnostic methods [
72]. However, accurately estimating content popularity (local or global) is a challenge. Local estimation might miss broader trends, while global estimation can introduce communication overhead. In addition, pure popularity-based approaches might overlook factors like content freshness, content size, network topology, or the location of the consumer.
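A minimal sketch of popularity-based caching, assuming a simple LFU-style local request counter (one of the local estimators whose limitations are noted above):

```python
from collections import defaultdict

class PopularityCache:
    """Minimal popularity-based (LFU-style) cache: on overflow, evict
    the entry with the fewest locally observed requests. Local counts
    are exactly the kind of estimate that can miss global trends."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.hits = defaultdict(int)

    def request(self, name, fetch):
        """Return content for `name`; `fetch` supplies it on a miss."""
        self.hits[name] += 1
        if name in self.store:
            return self.store[name], True          # cache hit
        data = fetch(name)
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.hits[n])
            del self.store[victim]                 # evict least popular
        self.store[name] = data
        return data, False                         # cache miss
```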
Cooperation can lead to a wider variety of content being cached within a neighborhood, increasing the chance of a cache hit and decreasing redundancy [
60,
61]. However, cooperation requires exchanging additional messages, which can increase network overhead, especially for larger networks. Also, implementing and managing cooperative caching systems can be more complex than non-cooperative ones.
Topology-aware techniques consider the network topology (e.g., node centrality, distance to consumers/sources, router connectivity) when making caching decisions. Caching at central or well-connected nodes can serve a larger number of potential requests, improving the cache hit ratio and reducing network load. The downside is that determining the “central” or important nodes can be computationally intensive [
60], and caching heavily on a few “important” nodes might lead to these nodes becoming congested.
Allocation-based strategies focus on how the total available cache capacity is distributed among the network nodes (homogeneous vs. heterogeneous). Heterogeneous allocation can help distribute the caching load more evenly across the network and can optimize the deployment of larger ICN caches. However, deciding how much cache to allocate to each node based on various factors (topology, traffic, cost) is a complex optimization problem [
60].
Primarily relevant in resource-constrained environments like vehicular networks and IoT, energy-efficient caching strategies aim to minimize energy consumption related to caching operations. A challenge faced by these techniques is preserving performance metrics such as cache hit ratio and retrieval latency while saving energy [
68].
Hybrid strategies combine elements from different caching approaches to leverage their individual strengths [
66]. However, combining different strategies can lead to more complex designs and implementations.
5.2. Name Resolution and Routing
Name resolution in ICN involves matching an object name to a server or source that can supply that object to a requester. This can be achieved through hierarchical namespaces, resembling domain names in DNS, or through flat namespaces, where each content name is globally unique. Hierarchical namespaces enable scalable and efficient name resolution by utilizing hierarchical routing tables, while flat namespaces often rely on distributed lookup mechanisms like distributed hash tables (DHTs). Efficient name resolution ensures that content requests are quickly directed to the correct network locations, contributing to the scalability and decentralization of ICN architectures.
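The two resolution styles can be contrasted in a short sketch: longest-prefix match over hierarchical names versus a DHT-style consistent-hashing lookup for flat names. The table contents, node identifiers, and 16-bit ring are illustrative assumptions.

```python
import hashlib

def resolve_hierarchical(name, table):
    """Longest-prefix match over '/'-separated hierarchical names,
    mirroring how hierarchical routing tables resolve content."""
    parts = name.strip("/").split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/" + "/".join(parts[:i])
        if prefix in table:
            return table[prefix]
    return table.get("/", None)

def resolve_flat(name, node_ids):
    """DHT-style resolution for flat names: hash the name onto a ring
    and pick the first node id at or after it (consistent hashing, a
    common DHT scheme; details vary by system)."""
    h = int(hashlib.sha256(name.encode()).hexdigest(), 16) % 2**16
    for nid in sorted(node_ids):
        if nid >= h:
            return nid
    return min(node_ids)  # wrap around the ring
```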
Routing refers to constructing a path to transfer the object from the server to the client. In ICN, routers route packets based on content names instead of destination IP addresses. Content-centric routing and interest-based forwarding prioritize delivering content efficiently by caching and forwarding content toward the nearest or most suitable locations. In addition, content-based routing decisions optimize content delivery by considering factors such as content popularity, proximity, and network conditions. One aspect that sets apart various ICN approaches is their treatment of name resolution and routing. These functions may either be interlinked or operate separately from each other.
Kerrouche et al. [
73] and McCarthy et al. [
74] presented QoS-aware forwarding strategies for NDN environments. In [
73], an ant colony optimization-based forwarding strategy was proposed, which utilized both forward and backward ants (interest and data packets) to rank interfaces and then selected the best interface to forward incoming Interests. To improve data transmission efficiency, this framework incorporated computed probability metrics from all available interfaces into a roulette wheel selection mechanism. This methodology enabled the inclusion of interfaces exhibiting reduced probability metrics (pheromone levels) in the decision process, ensuring a more equitable dispersion of network traffic. On the other hand, McCarthy et al. [
74] proposed a QoS-aware algorithm that extended the NDN model to embed QoS information into request and corresponding data packets and utilized multi-hop and multi-path forwarding techniques to meet the QoS constraints.
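The roulette-wheel mechanism that [73] uses to keep low-pheromone interfaces in play can be sketched as follows; the interface names and pheromone values are illustrative.

```python
import random

def roulette_select(pheromones, rng=random):
    """Pick a forwarding interface with probability proportional to
    its pheromone level, so weaker interfaces still occasionally win
    and traffic spreads instead of always taking the best face."""
    total = sum(pheromones.values())
    spin = rng.uniform(0, total)
    cumulative = 0.0
    for face, level in pheromones.items():
        cumulative += level
        if spin <= cumulative:
            return face
    return face  # float edge case: fall back to the last face
```

Over many selections, each interface receives a share of interests proportional to its pheromone level, which is the "equitable dispersion" property described above.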
Thomas et al. [
75] proposed a multi-path selection strategy for the PSIRP architecture, considering bandwidth and error rate constraints. This solution facilitates QoS-aware routing decisions within individual network domains. The strategy specifically targets the requirements of on-demand video-streaming applications by prioritizing increased bandwidth allocation and minimizing packet loss while also including configurable limits on permissible path counts. The authors also studied the interaction between their model and a video-on-demand streaming service to deliver high-definition content. Scalable video technology (H.264/SVC) [
76] was used to provide a layered representation of the content (three SNR-scalable layers, 1080p). To evaluate QoE, the Pseudo-Subjective Quality Assessment (PSQA) [
77] approach was used, and the proposed method was shown to improve QoE and path computation.
Hou et al. [
78] presented an optimization technique leveraging particle swarm intelligence, which analyzes historical data from content object transmissions to dynamically adjust routing likelihoods in forwarding tables (FIB). This method enables intelligent path selection by directing user requests toward the most efficient content source (publisher or server) among redundant providers of identical data, balancing QoS parameters during decision-making. Another swarm optimization technique was employed by Cheng et al. [
79]. By mapping user QoE to specific QoS metrics, they calculated satisfaction levels and determined forwarding probabilities for router interfaces. This method allowed for dual-path content discovery, significantly improving routing success.
Rani et al. [
80] introduced a fuzzy-based congestion-aware routing method to address congestion from flooding attacks in ICN. Their approach builds on the OSPF algorithm, evaluating paths using satisfaction ratio and packet loss metrics. These metrics, along with trust values indicating link reliability during high traffic, are shared across the network via link state packet vectors, enabling the selection of secure, congestion-free routes that improve QoS.
Kuang and Yu [
81] proposed a QoS-aware architecture for multimedia applications in wireless ICN. Their architecture unifies QoS management across path selection, transmission policies, resource reservation, source selection, and bandwidth allocation to optimize performance in decentralized settings. Another QoS-aware forwarding strategy was proposed by Abdelaal et al. [
82]. This framework utilizes a cooperative forwarding approach where routers share data names and interfaces to estimate optimal paths toward cached versions of the requested data in NDN.
Mizunaga and Kobayashi [
83] proposed a flat naming scheme for CCN instead of the standard hierarchical naming scheme. By utilizing multiple keywords, the flat naming scheme simplifies content retrieval for users, making it more intuitive and user-friendly. Unlike traditional hierarchical naming schemes that require precise hierarchical addresses, the flat naming scheme reduces the burden on content requesters by offering a more flexible and accessible way to identify and retrieve content, thus enhancing QoE in CCN.
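The keyword-matching idea can be sketched as a set-containment lookup; the catalog structure below is an illustrative assumption, not the scheme's actual index.

```python
def match_by_keywords(query_keywords, catalog):
    """Return content ids whose keyword sets contain every keyword in
    the query; unlike hierarchical names, the user needs no exact
    path, only descriptive keywords. `catalog` maps id -> keywords."""
    wanted = set(query_keywords)
    return sorted(cid for cid, kws in catalog.items() if wanted <= set(kws))
```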
Table 3 lists various ICN frameworks that utilize new name resolution and routing strategies to enhance QoS and QoE metrics.
Comparative Analysis of Name Resolution and Routing-Based Solutions
To compare the strategies listed in
Table 3, we can divide them into the following subgroups: (i) name-based forwarding, (ii) name-resolution architecture, (iii) FIB management, and (iv) cooperative forwarding schemes.
Name-based forwarding techniques select interfaces or paths that promise improved QoS or QoE metrics, e.g., better bandwidth, lower delay, or reduced loss. Their advantage lies in their direct integration with ICN’s fundamental name-based forwarding, allowing for fine-grained control at each hop. However, they often rely on accurate and timely QoS parameter estimation, which can introduce overhead through probing or information exchange. Also, per-hop optimizations might not always guarantee end-to-end QoS, and these strategies need to balance exploration of better paths with exploitation of known good ones.
The second category focuses on name resolution architectures, which are crucial for scalability and efficient content discovery, especially in ICN with flat naming. These solutions improve network performance by providing a structured way to map content names to network locations, reducing the need for extensive flooding and potentially offering deterministic resolution latency. Note that the scalability of the name resolution system depends heavily on its design (e.g., DHT-based vs. hierarchical), and issues such as single points of failure or challenges in maintaining mapping consistency need to be addressed.
Forwarding Information Base (FIB) management solutions proactively discover content availability and maintain accurate FIBs. By making routers more aware of content locations, these strategies can reduce network congestion and improve response times. However, maintaining an accurate and up-to-date FIB can be resource-intensive, especially in highly dynamic networks.
Cooperative forwarding strategies utilize collaboration between ICN routers to find cached content closer to the consumer, and the content may be delivered from either single or multiple sources. By distributing the load across multiple providers, cooperative forwarding can lead to reduced congestion. However, the effectiveness of these strategies relies on the network’s up-to-date knowledge of available providers and accurate bandwidth estimation, and stale or incorrect information can lead to suboptimal decisions.
5.3. SDN-Based ICN Solutions
Software-defined networking (SDN) [
89] is essential for enhancing and managing QoS and QoE in modern networks due to its centralized control and traffic engineering capabilities. SDN separates the control plane from the data plane [
90], allowing efficient resource allocation, load balancing, congestion control, and traffic prioritization, all of which are essential to maintaining high QoS and QoE.
SDN plays a pivotal role in advancing ICN by enabling dynamic control and programmatically efficient management of network resources. SDN enhances ICN’s content distribution, caching, and retrieval capabilities by providing a centralized platform for managing content routing, caching policies, and QoS requirements. SDN controllers leverage a variety of protocols, such as OpenFlow, to communicate with forwarding devices, enabling real-time adjustments to network policies and traffic flows. In addition, SDN allows for the seamless integration of ICN with existing network infrastructures, facilitating incremental deployment and migration strategies.
One strategy used by some research works in this domain is utilizing the traffic engineering ability and independent control layer of SDN. Zhang et al. [
91] proposed an information-centric wireless network virtualization technique that transmits time-sensitive multimedia data with the delay-bounded QoS guaranteed in SDN environments. The authors combined the inherent advantages of SDN with the in-network caching feature of ICN to provide maximum effective capacity and minimum transmission delay. Adnan et al. [
92] utilized the SDN controller to keep track of the routing information for mobile users in an NDN environment and showed improvement in several QoS metrics.
Another strategy adopted by some SDN-based approaches is implementing an intermediate layer to translate names into IP addresses. This layer, often functioning as a proxy and overlay, is prevalent in modern networks where proxy servers are common. Several works [
93,
94,
95] have leveraged proxy servers to manage ICN networks efficiently. Nguyen et al. [
94] utilized a proxy to encapsulate and hash messages between the CCNx daemon and the OpenFlow switch. This proxy’s wrapper module hashed the content name prefix in the interest packet and embedded it within the existing IP packet. The authors considered off-path caching and addressed cache replication issues to enhance hit ratios. However, the wrapper added a slight delay and reduced forwarding efficiency by roughly 5%. Trajano et al. [
95] employed multiple proxies, each including a shared and distributed hash table that served as the content index. Upon receiving interests, the controller directed them to the nearest proxy for data retrieval. If the proxy located the requested data in its hash table, it sent the interest to the cache node holding the data; otherwise, it forwarded the interest to the closest proxy server. However, this method risks overloading proxies as they handle all interests by resolving and forwarding them to the controller.
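The wrapper idea in [94], hashing the content name prefix and embedding the digest in the IP packet so that OpenFlow rules can match on it, can be sketched as follows. The field name, digest length, and two-component prefix are illustrative choices, and a plain dict stands in for the real IP packet.

```python
import hashlib

def wrap_interest(interest_name, ip_packet):
    """Hash the interest's name prefix and embed the digest in the IP
    packet so OpenFlow switches can match on it; 'name_hash' and the
    32-bit digest are illustrative, not the cited implementation."""
    prefix = "/".join(interest_name.strip("/").split("/")[:2])
    digest = hashlib.sha256(prefix.encode()).hexdigest()[:8]
    wrapped = dict(ip_packet)
    wrapped["name_hash"] = digest
    return wrapped

def flow_rule_matches(rule_hash, packet):
    """An OpenFlow-style exact match on the embedded name hash."""
    return packet.get("name_hash") == rule_hash
```

Because interests sharing a name prefix hash to the same digest, a single flow rule can steer all of them toward the same cache.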
Liu et al. [
96] proposed an SDN-based NDN architecture to support IP-compatible ICN packets. The Multi-Protocol Label Switching (MPLS) technique was used to encapsulate and label the NDN payload within IP packets. Flores et al. [
97] presented an OpenFlow-compatible key-based routing solution to enable content-centric networks to be deployed on the current TCP/IP-based architecture. End hosts were distinguished by virtual identifiers utilized for network communication. This SDN-over-ICN model demonstrated improved transmission efficiency and high QoE.
Table 4 lists various ICN frameworks that utilize SDN to enhance QoS and QoE metrics.
Comparative Analysis of SDN-Based Strategies
To compare the strategies listed in
Table 4, we can categorize them into the following subgroups: (i) optimized centralized management and (ii) NFV-based solutions.
The first category leverages the global view and programmability of SDN to enhance core ICN functions. An advantage of this strategy is the ability to make informed decisions regarding caching, routing, and load balancing across the entire network. For instance, Nguyen et al. [
94] uses a central controller to determine popular content and optimal caching locations to reduce retrieval time and bandwidth consumption. This centralized control directly improves QoS and QoE by reducing delays and increasing network efficiency. However, a potential limitation is the scalability of the central controller in very large and dynamic networks, as well as the risk of a single point of failure.
The second category focuses on SDN and NFV for flexible and coexistent ICN deployments, which emphasizes the practical aspects of deploying ICN incrementally. Architectures like ContentSDN [
95] utilize NFV to deploy caching functionalities dynamically, allowing for flexible adjustments based on demand and policies. SDNDN, proposed in [
96], directly addresses the coexistence of ICN (specifically NDN) with existing IP networks by extending OpenFlow switches to handle both types of packets. The primary advantage here is the flexibility and ease of incremental deployment without requiring a complete overhaul of the network infrastructure. This approach improves QoE by bringing content closer to users in a manageable and scalable way and by enabling the use of ICN benefits for specific traffic types.
5.4. Transmission and Flow Control
Transmission and flow control mechanisms play fundamental roles in enabling efficient and reliable content delivery. Transmission in ICNs involves the dissemination of content packets from producers to consumers across the network. To ensure efficient transmission, ICNs often employ caching strategies at intermediary nodes to store frequently accessed content, reducing the need for long-distance transmissions and improving content delivery latency. Additionally, transmission in ICNs may involve multicast or anycast mechanisms to efficiently transmit content to multiple consumers or replicate content across distributed nodes, enhancing scalability and resilience.
Flow control mechanisms in ICNs regulate the rate of content transmission to prevent congestion and ensure fair resource utilization. This is particularly important in scenarios where multiple consumers are competing for the same content or where network resources are limited. Flow control strategies typically employ congestion detection and avoidance algorithms, such as Explicit Congestion Notification (ECN) and window-based flow control. ECN allows routers to notify senders of congestion by setting a specific bit in packet headers, enabling senders to adapt their transmission rates accordingly. Window-based flow control, on the other hand, adjusts the transmission window size based on network conditions to avoid overwhelming the network with excessive traffic. However, the traditional congestion control mechanisms designed for the Transmission Control Protocol (TCP) may not be directly applicable in some ICN architectures (e.g., CCN or NDN), because RTT estimation can be inaccurate given how NDN interests (request packets) and content objects (response packets) behave [
100,
101]. Furthermore, managing data flow in ICN systems is challenging due to unpredictable user requests, dynamic network conditions, and in-network caching [
5,
102].
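Window-based flow control with ECN feedback can be sketched as a generic AIMD interest window at the consumer; this is not a specific cited scheme, and the constants are illustrative.

```python
class InterestWindow:
    """AIMD interest window for an ICN consumer: grow the number of
    outstanding interests additively on clean data packets, halve it
    when a data packet carries an ECN-style congestion mark."""
    def __init__(self, initial=1.0, minimum=1.0):
        self.cwnd = initial
        self.minimum = minimum

    def on_data(self, congestion_marked):
        if congestion_marked:
            self.cwnd = max(self.minimum, self.cwnd / 2)  # multiplicative decrease
        else:
            self.cwnd += 1.0 / self.cwnd                  # additive increase
        return self.cwnd
```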
Dynamic Adaptive Streaming over HTTP (DASH) has become a standard for many streaming services and platforms due to its ability to deliver high-quality media streaming over the Internet despite network congestion or bandwidth fluctuations. DASH dynamically adjusts the quality of the video stream based on the available network bandwidth and the capabilities of the receiving device. However, in ICN, this client-driven bit rate adaptation mechanism may inaccurately gauge available bandwidth, as cached content fragments along transmission paths reduce data retrieval latency. This mismatch between perceived and actual end-to-end bandwidth triggers unstable oscillations in streaming quality, where the system alternates between high and low bit rates [
103].
Figure 5 illustrates the bandwidth overestimation problem in a DASH-over-CCN implementation. Suppose the video is available in several different qualities, and the entire file is divided into four segments. An intermediate router has the first segment cached in its local content store, while the entire video is available at the server (producer). When the consumer requests segment #1, that router quickly serves the content from its stored copy. Since the request is filled fast from a nearby cache, the user might assume the communication channel has a high bandwidth and then request a higher-quality version of segment #2. But the links between the upstream routers and the server are incapable of supporting the higher bandwidth needed for that better-quality video. This will result in client-side buffer depletion and stalling in video playback and, consequently, a poor QoE.
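The overestimation can be reproduced numerically with a standard throughput-rule bitrate picker; the bitrate ladder, safety factor, and link speeds below are illustrative.

```python
def pick_bitrate(measured_throughput_kbps, ladder, safety=0.9):
    """Throughput-rule adaptation: choose the highest rung that does
    not exceed a safety fraction of the measured throughput."""
    feasible = [r for r in ladder if r <= safety * measured_throughput_kbps]
    return max(feasible) if feasible else min(ladder)

LADDER = [500, 1500, 3000, 6000]  # kbps, illustrative

# Segment #1 arrives from a nearby cache: apparent throughput is high.
choice_after_cache_hit = pick_bitrate(8000, LADDER)
# The real end-to-end bottleneck toward the producer is much lower.
bottleneck_kbps = 2000
sustainable = pick_bitrate(bottleneck_kbps, LADDER)
```

The cache-inflated measurement selects a rung the bottleneck cannot carry, which is exactly the quality oscillation and stalling described above.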
Liu and Wei [
104] addressed this throughput estimation problem by proposing a hop-by-hop quality adjustment strategy named HAVS-CCN. The authors introduced a new scheduling window (SDQ) for CCN routers, which helps manage data flow by controlling how fast packets are sent at the router level. To adapt video quality, routers organize content in queues based on how they affect video playback—prioritizing critical data to avoid buffering. When cached content or new data need to be sent out, routers use SDQ rules to decide what goes first. Also, instead of relying on standard DASH with AVC encoding, their system uses scalable video coding (SVC) [
76], which allows smoother quality adjustments.
Zhu and Zhang [
105] introduced a negotiation-based gaming framework to manage the complexities of multiple users simultaneously accessing popular content from distributed network caches while meeting statistically defined QoS criteria. Their multilateral negotiation framework involves bidirectional interactions between players—content repositories and users—who engage in price-based bargaining with all relevant counterparts to maximize their respective utilities. For users, utility is defined as the net benefit of achieved QoS improvements minus incurred expenses, while content stores evaluate utility as revenue from service agreements offset by operational costs. The strategy was evaluated based on the utility outcomes for all participants and the mathematical modeling of consensus likelihood across bargaining scenarios.
Van and Mau [
106] studied a multi-source CCN architecture in which data are segmented and dispersed across multiple servers, deviating from traditional single-server storage models. The authors claimed that this distributed caching could help tackle the limited cache size constraint of CCN routers, especially in the case of multimedia streaming.
Han et al. [
107] and Kuang et al. [
108] designed solutions for wireless ICN architectures. In [
107], an adaptive retransmission algorithm was proposed for wireless CCN that addresses packet loss and round-trip time (RTT) fluctuations caused by in-network caching. This approach dynamically adjusts retransmission timeouts via sequential hypothesis testing while accounting for packet loss causes (e.g., congestion or signal issues). NS-2 simulations showed the scheme outperformed default CCN and RTO methods [
109], achieving better loss recovery and video quality. On the other hand, a real-time content streaming framework for wireless ICN was presented in [
108]. This strategy considers the different QoS requirements of media consumers and selects the optimal content provider. Experimental results from the OMNeT++ simulator for an IEEE 802.11g network demonstrated lower delay and jitter and increased throughput.
In [
110], a DiffServ-based congestion control method was proposed for ICN that adjusts the traffic flow at the router’s forwarding interface to improve QoS. The framework classifies ICN traffic into distinct service categories and applies service-category-specific traffic regulation at each network node along the transmission path. Additionally, it uses a resource-availability signaling system to communicate current bandwidth conditions to content requesters. Simulation results from ndnSIM demonstrated higher throughput, lower latency, and minimal packet loss.
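Per-class regulation at a forwarding interface, in the spirit of the DiffServ model in [110], can be sketched with one token bucket per service class; the class names, rates, and burst sizes are illustrative, not the paper's configuration.

```python
class TokenBucket:
    """Per-class regulator: a packet is forwarded only when the
    class's bucket holds enough tokens; tokens refill at the
    class's configured rate."""
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, size_bits, now):
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return True
        return False

# Illustrative per-class configuration at one interface.
regulators = {
    "realtime": TokenBucket(rate_bps=4_000_000, burst_bits=100_000),
    "besteffort": TokenBucket(rate_bps=1_000_000, burst_bits=50_000),
}

def forward(service_class, size_bits, now):
    return regulators[service_class].allow(size_bits, now)
```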
Table 5 lists various ICN frameworks that enhance QoS and QoE metrics by optimizing transmission, flow, and congestion control.
Comparative Analysis of Transmission and Flow Control-Based Solutions
To compare the strategies listed in
Table 5, we can divide them into the following subgroups: (i) transmission fairness, (ii) hop-by-hop flow control, and (iii) client-driven adaptations.
The fairness-focused flow control strategies primarily aim to improve the end-user experience by ensuring equitable resource sharing and preventing highly popular content from monopolizing bandwidth. A potential drawback is that strict fairness might sometimes lead to suboptimal overall network utilization if resources are not allocated based purely on demand.
Hop-by-hop flow control techniques involve intermediate routers along the forwarding path making local decisions to manage network traffic and congestion. The localized control and class-based interest shaping allow better management of Differentiated Services and improved QoS and QoE support. However, hop-by-hop control adds state-management overhead on routers and requires implementing new scheduler components.
In client-driven adaptation methods, consumers actively participate in optimizing their content retrieval. Clients fetch content fragments from multiple sources over different paths and dynamically adjust retransmission timeouts and window sizes. The negotiation-based gaming approach enables clients to negotiate with content caches based on price and desired QoS levels. Note that these methods rely on accurate client-side measurements (as shown in Figure 5).
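A client-side control loop of this general kind, combining an AIMD Interest window with Jacobson-style retransmission-timeout estimation, can be sketched as follows. The constants follow the standard TCP RTO computation (RFC 6298); this is an illustrative sketch, not the exact algorithm of any cited scheme.

```python
class AdaptiveClient:
    """Client-driven retrieval control: AIMD Interest window + adaptive RTO."""
    def __init__(self):
        self.cwnd = 1.0     # max outstanding Interests
        self.srtt = None    # smoothed RTT estimate (s)
        self.rttvar = 0.0   # RTT variation estimate (s)
        self.rto = 1.0      # retransmission timeout (s)

    def on_data(self, rtt_sample):
        # Additive increase on each successfully retrieved segment.
        self.cwnd += 1.0 / self.cwnd
        # RFC 6298-style smoothed RTT and RTO update.
        if self.srtt is None:
            self.srtt, self.rttvar = rtt_sample, rtt_sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        self.rto = max(0.2, self.srtt + 4 * self.rttvar)

    def on_timeout(self):
        # Multiplicative decrease: treat a timeout as a congestion signal.
        self.cwnd = max(1.0, self.cwnd / 2)
        self.rto = min(self.rto * 2, 60.0)  # exponential backoff
```

Because Data packets in ICN may come from different caches along different paths, the RTT samples fed into such a loop can fluctuate sharply, which is exactly why accurate client-side measurement matters for these schemes.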
5.5. Media-Streaming Strategies
Traditionally, adaptive streaming has been widely used to improve consumer QoE for multimedia streaming. Adaptive streaming schemes such as DASH adjust video quality during playback to match current network conditions, which helps avoid severe playback impairments such as stalling.
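The core of such rate-based adaptation is a simple selection rule: pick the highest representation whose bitrate fits within a safety margin of the measured throughput. The sketch below illustrates this; the bitrate ladder and safety factor are assumed values, not taken from any specific player.

```python
def select_representation(throughput_bps, ladder, safety=0.8):
    """Pick the highest bitrate representation that fits within a safety
    margin of the measured throughput (simplified rate-based DASH rule)."""
    feasible = [r for r in sorted(ladder) if r <= throughput_bps * safety]
    return feasible[-1] if feasible else min(ladder)
```

For example, with a ladder of 0.5, 1, 2.5, and 5 Mbps and a measured throughput of 3 Mbps, the 20% safety margin leaves 2.4 Mbps of usable budget, so the 1 Mbps representation is chosen.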
While DASH has been very effective in IP-based networks, new challenges associated with media streaming in ICN make it less effective to deploy DASH without modification. For instance, DASH relies on end-to-end bandwidth estimation, which is difficult in ICN because content may be retrieved from multiple sources [115,116]. DASH can also suffer from the bandwidth overestimation issue illustrated in Figure 5. Furthermore, DASH cannot exploit content cached at nearby ICN routers' content stores (CSs), since it was designed for a client–server architecture, whereas ICN focuses on one-to-many content dissemination. Studies investigating QoS and QoE in ICN from a multimedia-specific perspective are described in this section and listed in Table 6. These media-focused studies pay particular attention to QoE because of its importance in multimedia consumption.
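The bandwidth-overestimation problem can be illustrated numerically: segments served from a nearby content store arrive far faster than the origin path can sustain, inflating a rate-based throughput estimate. The numbers below (2 Mbps origin path, 20 Mbps apparent cache rate) are assumed purely for illustration.

```python
def harmonic_mean_throughput(samples):
    """Throughput estimate over recent segment downloads (harmonic mean,
    a common choice in rate-based DASH clients)."""
    return len(samples) / sum(1.0 / s for s in samples)

# Assumed figures: origin path sustains 2 Mbps; a nearby CS delivers
# cached segments at an apparent 20 Mbps.
origin_only = [2e6] * 5
with_cache_hits = [20e6, 20e6, 20e6, 2e6, 2e6]  # first segments were cached

est_origin = harmonic_mean_throughput(origin_only)
est_mixed = harmonic_mean_throughput(with_cache_hits)
# est_mixed exceeds what the origin path can sustain, so a rate-based
# client would select an unsustainably high bitrate once the cache runs dry.
```

Under these assumptions the mixed estimate is more than double the sustainable origin rate, which is precisely the overestimation failure mode described above.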
Ramakrishnan et al. [115] proposed a CCN adaptive video-streaming framework based on network coding to enable a CCN client to fetch a video from multiple sources using multiple interfaces of a CCN router. The model was tested on the CCNx 0.8.2 platform in conjunction with the CCN-VLC plugin made available by the ITEC-DASH project [117]. Concurrent streaming from multiple sources (two servers with network coding enabled) produced better QoE and higher throughput than streaming from a single server.
Nunome [118] studied the access timing of content within the content store (CS) of a CCN router and how users could exploit it. The author considered a scenario where multiple consumers access the same video or audio content almost simultaneously: if a terminal deliberately delays its access start time, it can exploit the data already cached in the CS by earlier requests.
Sadat et al. [116] investigated the impact of distributed content caching on video streaming in CCN. The authors quantified the number of source switchings during video streaming and their impact on users' QoE. Similar to [118], simultaneous retrieval of the same multimedia content was considered in [116]. A new adaptive video-streaming framework was developed utilizing MOS from human subject tests to guarantee satisfactory QoE. Experimental results derived from the ccnSim simulator and CCNx emulator demonstrated the efficiency of the proposed streaming framework compared to standard DASH and SVC-based streaming in terms of QoE, delay, and bit rate/request.
Takada et al. [119] and Kobayashi et al. [120] proposed solutions for CCN audio and video transmission and investigated the impact of different caching strategies on QoE and application-layer QoS. In [121], the authors proposed a strategy to improve QoS and QoE for repeatedly accessed multimedia content ("multi-view") through coordinated scheduling of content-retrieval start times across users.
Table 6.
Media-streaming strategies: architectures, experimental platforms, and QoS/QoE metrics.
| # | Proposal and Publication Year | Architecture | Experimental Environment | Optimized QoS/QoE Metrics |
|---|---|---|---|---|
| 1 | Ramakrishnan et al. [115], 2016 | CCN | CCNx 0.8.2, LAN | QoE, throughput |
| 2 | Sadat et al. [116], 2019 | CCN | ccnSim, CCNx [63], LAN and 802.11n WLAN | QoE (MOS from human subject test), delay, bit rate/request |
| 3 | Toshiro Nunome [118], 2022 | CCN | CeforeNet, LAN | MOS, video media unit (MU) loss ratio, video error concealment ratio, audio MU loss ratio |
| 4 | Nunome and Takada [121], 2023 | CCN | Cefore [122], LAN | Audio MU loss ratio, video MU loss ratio, viewpoint change delay, MOS |
| 5 | Takada and Nunome [119], 2024 | CCN | Cefore, LAN | MOS, cache utilization ratio, video MU loss ratio |
| 6 | Kobayashi and Nunome [120], 2024 | CCN | NS-3 and ccns3sim [123] | Cache hit ratio, audio MU loss ratio, video error concealment ratio |