3.2.1. HEA Partitioning Algorithm
We have proposed a Latency-aware Hierarchical Partitioning (LHP) algorithm, a graph-partitioning-based algorithm, to solve the HM placement problem. The LHP algorithm takes the graph G of potential resolver locations, the latency matrix D, the threshold parameter set {θ_1, θ_2, ..., θ_L}, and the hierarchical parameter L as inputs, and returns the HM location sets and HEA node sets as outputs. For a given G and matrix D, the LHP algorithm divides the graph into L hierarchies under the thresholds {θ_i} using a top-down partitioning method, as follows. The symbols used in the LHP algorithm are shown in
Table 1. The pseudocode of the LHP algorithm is shown in Algorithm 1.
Step 1: Initialization: set the current level i = L, the HM index k = 1, the HEA index j = 1, and the input graph G_i = G.
Step 2: Read θ_i, the partitioning latency bound for the i-th level HEAs.
Step 3: Cut the edges in G_i whose weight is larger than θ_i. In this way, G_i may be divided into one or more connected subgraphs G_i^m, m = 1, ..., M. Sort the G_i^m in descending order of their node counts.
Step 4: For each G_i^m: if it contains a single node, set that isolated node as the HM of an i-th level HEA, add it into the HM set and the HEA node set, and set k = k + 1, j = j + 1. If the current subgraph contains more than one node, go to step 5. When all subgraphs have been processed, go to step 9.
Step 5: Select the node in G_i^m whose weighted sum of bandwidth, CPU, and storage capacity is the best, and set it as the anchor node HM_k; add HM_k into the HM set. Next, go to step 6.
Step 6: Set HM_k as the root and use Breadth-First Search (BFS) to find the nodes whose path latency to HM_k is less than θ_i; add them into the j-th HEA node set. Set k = k + 1, j = j + 1.
Step 7: Delete HM_k and all the selected nodes from G_i^m. If i > 1, add HM_k and the selected nodes into the candidate node set from which the (i−1)-th level HEAs inside this HEA will be formed.
Step 8: Check whether G_i^m is empty. If it is, go to step 9. Otherwise, go back to step 5, repeating the process of selecting new anchor nodes and using BFS to find satisfied nodes to form new i-th level HEAs until G_i^m is empty.
Step 9: Check whether all subgraphs G_i^m have been partitioned. If so, go to step 10. Otherwise, return to step 4.
Step 10: If i = 1, all levels have been partitioned and the algorithm terminates. Otherwise, go to step 11.
Step 11: Get the latency information of the nodes collected in step 7 from G and D to generate a new graph G'.
Step 12: Set G' as the input graph of the (i−1)-th level HEA partitioning, i.e., G_{i−1} = G', and set i = i − 1. Then return to step 2.
Algorithm 1. Latency-aware HEA Partitioning
Input: G, D, {θ_1, ..., θ_L}, L
Output: the HM location sets and the HEA node sets
Parameters: i, j, k, m, M
1: Initialization: i = L, j = 1, k = 1, G_i = G
2: while i ≥ 1 do:
3:     Get θ_i;
4:     Delete the edges in G_i whose weight is larger than θ_i to cut off G_i;
5:     Assume that we get M maximum connected components G_i^m, m = 1, ..., M;
6:     Sort all of the G_i^m in descending order of their node counts;
7:     for m from 1 to M do:
8:         Get G_i^m;
9:         if G_i^m contains a single node then:
10:            Set the isolated node as the HM of an i-th level HEA;
11:            Add the node into the HM set and the HEA node set;
12:            k = k + 1; j = j + 1;
13:        else:
14:            while G_i^m is not empty do:
15:                Select the node in G_i^m whose weighted sum of bandwidth, CPU, and storage capacity is the best as the anchor node HM_k;
16:                Set HM_k as the root, then use BFS to find the nodes whose path latency to HM_k is less than θ_i, and add them into the j-th HEA node set;
17:                Delete HM_k and all the selected nodes from G_i^m;
18:                if i > 1 then:
19:                    Add HM_k and the selected nodes into the candidate node set for the (i−1)-th level HEAs;
20:                end if
21:                k = k + 1; j = j + 1;
22:            end while
23:        end if
24:    end for
25:    if i > 1 then:
26:        Get the latency information of the candidate nodes from G and D to generate a new graph G';
27:        Set G' as the input graph of the (i−1)-th level HEA partitioning, i.e., G_{i−1} = G';
28:    end if
29:    i = i − 1;
30: end while
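The per-level partitioning loop (cutting edges above θ_i, then growing HEAs from capacity-best anchors via BFS) can be sketched in Python. The graph representation, the capacity scoring, and the helper names below are our own illustration, not part of the LHP specification:

```python
from collections import deque

def partition_level(nodes, latency, theta, capacity):
    """One level of LHP-style partitioning: cut edges above theta, then
    grow HEAs around the best-capacity anchor of each connected component.

    nodes    - list of node ids
    latency  - dict mapping (u, v) pairs to symmetric link latency
    theta    - latency bound for this level
    capacity - dict mapping node id to a weighted capacity score
               (bandwidth/CPU/storage); higher is better
    """
    # Adjacency after cutting every edge whose latency exceeds theta.
    adj = {u: set() for u in nodes}
    for (u, v), d in latency.items():
        if u in adj and v in adj and d <= theta:
            adj[u].add(v)
            adj[v].add(u)

    # Connected components of the cut graph.
    components, seen = [], set()
    for start in nodes:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(comp)
    components.sort(key=len, reverse=True)

    heas = []  # list of (hm, member_set) pairs for this level
    for comp in components:
        remaining = set(comp)
        while remaining:
            # Anchor: the remaining node with the best capacity score.
            hm = max(remaining, key=lambda u: capacity[u])
            # BFS from the anchor; keep nodes whose path latency fits theta.
            members, dist = {hm}, {hm: 0.0}
            queue = deque([hm])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v in remaining and v not in dist:
                        d = dist[u] + latency.get((u, v), latency.get((v, u)))
                        if d <= theta:
                            dist[v] = d
                            members.add(v)
                            queue.append(v)
            heas.append((hm, members))
            remaining -= members
    return heas
```

Calling this once per level with decreasing thresholds, and feeding each HEA's member set back in as the next level's node list, reproduces the top-down structure of steps 1–12.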
3.2.2. Name Registration and Name Resolution Schemes
The name resolution function in ENRS follows the Publish/Subscribe paradigm, i.e., publishers (content providers) and subscribers (content consumers) interact with ENRS through a set of HMs. Publishers and subscribers interact with ENRS via different local HMs, which are designated through network attachment. Name resolution is built on the following control messages. We define three types of messages in ENRS as follows:
REGISTER (STO, EID, NA, NRDR, TS);
QUERY (EID, STO, NRDR, TS, RTL);
UPDATE (STO, EID, NA, NRDR, TS).
(A) Customized Parameters
These parameters are defined to assist the selection of HMs and to decide the diffusion scopes of registration and resolution messages. The parameters are defined as follows:
(i) Service Type Option (STO) parameter: 8 bits; used to distinguish the service type and thereby decide the scope of the latency requirements. The service type can be factory automation, vehicular networking services, mobile video services, etc.
(ii) Enhanced Name Resolution Threshold (ENRT) parameter: 8 bits, ranging from 0 to 255 ms; the latency requirement threshold provided by ENRS.
(iii) Name Resolution Delay Requirement (NRDR) parameter: 8 bits, ranging from 0 to 255 ms. The NRDR is a service-oriented, predetermined round-trip response time requirement for name registration and resolution, which is utilized to assist strategy selection. If there is no special delay requirement for an EID, the NRDR takes its default value. Usually, the EIDs generated by the same application or service have the same NRDR. The NRDR is set based on the requestor's criteria or contextual information.
(iv) Timestamp (TS) parameter: 32 bits; the timestamp at which the message is sent.
(v) Remaining Tolerable Latency (RTL) optional parameter: 8 bits. The RTL is the remaining available time to query a name. It decides whether to forward the request to the peer neighbors of an HM when the Tolerable Latency-Based Peer HM Forwarding algorithm is enabled, as discussed further below.
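Using the field widths above, a QUERY message could be laid out as follows. This is only a Python sketch; the field names, the length-prefixed EID encoding, and the byte layout are our own illustration, not a wire format defined by ENRS:

```python
from dataclasses import dataclass
import struct

@dataclass
class QueryMessage:
    """QUERY(EID, STO, NRDR, TS, RTL) with the field widths from the text:
    STO, NRDR and RTL are 8-bit fields, TS is a 32-bit timestamp."""
    eid: bytes   # flat entity identifier (length depends on the EID scheme)
    sto: int     # Service Type Option, 0-255
    nrdr: int    # Name Resolution Delay Requirement in ms, 0-255
    ts: int      # 32-bit send timestamp
    rtl: int     # Remaining Tolerable Latency in ms (8-bit field; how the
                 # "disabled" sentinel is encoded is left open here)

    def pack(self) -> bytes:
        # Network byte order: a 16-bit EID length prefix (our assumption),
        # the EID itself, then the fixed-width fields.
        return struct.pack(f"!H{len(self.eid)}sBBIB",
                           len(self.eid), self.eid,
                           self.sto, self.nrdr, self.ts, self.rtl)
```

For example, `QueryMessage(b"abc", sto=1, nrdr=20, ts=1700000000, rtl=20).pack()` yields a 12-byte frame: 2 length bytes, 3 EID bytes, and 7 bytes of fixed fields.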
(B) Name Registration and Name Resolution
Name registration is aimed at publishing the bindings of EIDs and NAs in the ENRS’s distributed resolvers. Name resolution finds the residing place of one copy of a name binding. Hierarchical NRSs have different name resolution approaches. For example, in CURLING, Content Resolution Servers (CRS) propagate registration and resolution requests only to their provider AS, which is the root of the tree structure of hierarchical resolution nodes, as shown in
Figure 3a.
Similar to CURLING, ENRS forwards registration and resolution requests to certain HMs in a single hop, but is not limited to the AS level. ENRS develops area-partitioning-based registration and resolution request schemes according to the response time requirements, not only to realize a constant number of query hops, but also to achieve local resolution and forwarding locality.
In ENRS, the REGISTER message is forwarded to the proper HM in the hierarchy of HEAs that the requestor belongs to, based on a quantitative comparison of the round-trip response time requirement NRDR and the transmission latency constraints θ_i between the user nodes and the i-th level HM. The message forwarding conditions are divided into three cases, as below. Examples of the name registration and resolution processes are shown in
Figure 3b–d.
(a) Name Registration
Case 1: if NRDR ≤ 2θ_1, the REGISTER message is forwarded to the HM of the bottom-level HEA that the requestor belongs to, e.g., HM11.
Case 2: if 2θ_{i−1} < NRDR ≤ 2θ_i, 1 < i < L, the REGISTER message is forwarded to the HM of the i-th level HEA that the requestor belongs to, e.g., HM21.
Case 3: if NRDR > 2θ_{L−1}, the REGISTER message is forwarded to the HM of the top-level HEA, e.g., HM31.
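Assuming the comparison is between the round-trip requirement NRDR and twice the one-way partitioning bound θ_i (our reading of the three cases, not a rule stated verbatim in the text), the level selection can be sketched as:

```python
def select_hm_level(nrdr, thetas):
    """Pick the HEA level whose HM should receive a REGISTER/QUERY.

    nrdr   - round-trip response time requirement in ms
    thetas - one-way partitioning latency bounds [theta_1, ..., theta_L],
             theta_1 for the bottom level, theta_L for the top
    Returns the 1-based level index.
    """
    for i, theta in enumerate(thetas, start=1):
        # Case 1 / Case 2: lowest level whose round-trip bound covers NRDR.
        if nrdr <= 2 * theta:
            return i
    # Case 3: no bound is tight enough; fall back to the top level.
    return len(thetas)
```

With three levels bounded at 5, 20, and 50 ms, a 8 ms requirement stays at the bottom level, a 30 ms requirement goes to level 2, and a 200 ms requirement is served by the top-level HM.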
In ENRS, the name update procedure is similar to name registration. A global NRS is expected to handle about 10 billion mobile devices' connection requests through more than 100 networks a day, equivalent to roughly 10 million resolutions and updates per second due to mobility. ENRS is a local NRS, whose load is much smaller than that of the global NRS, but the resolution and update frequencies are still worth considering. We assume that 100 ENRS sub-systems form a global NRS; thus, each ENRS is expected to process about 10 million mobile devices' requests a day and 10 thousand update requests per second. Fast update and synchronization mechanisms under mobility conditions must therefore be designed to keep EIDs accessible. The issue of name binding synchronization is beyond the scope of this paper.
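The per-ENRS rate follows from the stated figures, assuming each device generates on the order of 100 registrations/updates per day (the "more than 100 networks a day" figure):

```python
# Back-of-the-envelope check of the per-ENRS update load, assuming
# ~100 updates per device per day as suggested in the text.
devices_per_enrs = 10_000_000    # 10 million devices per ENRS sub-system
updates_per_device_per_day = 100
seconds_per_day = 24 * 60 * 60   # 86,400

updates_per_second = (devices_per_enrs * updates_per_device_per_day
                      / seconds_per_day)
# ~11,600 updates/s, i.e., on the order of the 10 thousand per second cited.
```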
(b) Name Resolution
The name resolution procedure is the inverse of name registration. For a given EID with an NRDR, the QUERY message is forwarded to the appropriate HM according to a quantitative comparison of the NRDR and the thresholds θ_i. The situations can also be divided into three cases as follows:
Case 1: if NRDR ≤ 2θ_1, the QUERY message is forwarded to the HM of the bottom-level HEA that the requestor belongs to, e.g., HM11.
Case 2: if 2θ_{i−1} < NRDR ≤ 2θ_i, 1 < i < L, the QUERY message is forwarded to the i-th level HM that the requestor belongs to, e.g., HM21.
Case 3: if NRDR > 2θ_{L−1}, the QUERY message is forwarded to the HM of the top-level HEA that the requestor belongs to, e.g., HM31.
In order to improve the hit ratio of ENRS, we define three message forwarding types between the neighboring HMs in the DNHT:
HEA-Unicast: an HM server unicasts a message to a certain peer HM.
HEA-Broadcast: an HM server broadcasts messages to the entire DNHT.
HEA-Multicast: an HM server multicasts messages to a group of HMs in the DNHT.
As described above, in Case 1 of the name resolution scheme, the QUERY request is forwarded to the bottom-level HM. If that HM does not hold the requested EID, it will not disseminate the request to its neighbors, since for an ultra-low latency service a timeout response is meaningless. In contrast, in Case 2 and Case 3, where the QUERY request is forwarded to a middle- or top-level HM, whether to forward the name resolution message to the neighbors of the HM is decided by the Tolerable Latency-Based Peer HM Forwarding algorithm we propose, which is subject to the NRDR, TS, and RTL parameters. The parameter updating and message forwarding methods are shown in Algorithm 2, and the symbols used in Algorithm 2 are described in
Table 2. If the RTL is set to −1, Algorithm 2 is disabled; if the RTL is a positive integer, Algorithm 2 is enabled. The RTL is initialized when the QUERY message is sent by the requestor and is updated each time the message is forwarded. If the load of ENRS is small, we can simply use the HEA-Broadcast method to flood the query messages; if the load is high, we can use Algorithm 2 to reduce the signalling overhead of ENRS.
Algorithm 2. Tolerable Latency-Based Peer HM Forwarding
Input: the QUERY message (EID, STO, NRDR, TS, RTL), the current HM, its peer HMs in the DNHT, and the latency matrix D.
Output: the forwarding set S.
Parameters: t_now (the current time), t_e (the elapsed time).
1: Initialization: S = ∅;
2: t_now = current time;
3: t_e = t_now − TS;
4: RTL = RTL − t_e;
5: if RTL > 0 then:
6:     for each peer HM_p in the DNHT do:
7:         if the latency d between the current HM and HM_p satisfies 2d ≤ RTL then:
8:             Add HM_p into S;
9:         end if
10:    end for
11: end if
12: if RTL ≤ 0 or S = ∅ then:
13:    Return a query failure response to the requestor;
14: else:
15:    Write the updated RTL and TS into the QUERY message;
16:    Forward the QUERY message to the node(s) in S;
17: end if
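The RTL bookkeeping of Algorithm 2 can be sketched as follows; the function name, the millisecond units, and the round-trip (2× one-way latency) check are our own reading of the algorithm:

```python
import time

def peer_forwarding_set(rtl, ts, peer_latency, now=None):
    """Tolerable Latency-Based Peer HM Forwarding (sketch).

    rtl          - Remaining Tolerable Latency carried in the QUERY, in ms;
                   a negative value disables peer forwarding entirely
    ts           - timestamp (ms) at which the QUERY was sent/forwarded
    peer_latency - dict mapping peer HM id to one-way latency (ms)
                   from the current HM
    Returns (updated_rtl, forwarding_set); an empty set together with a
    non-positive RTL means a query-failure response is due.
    """
    if rtl < 0:                     # algorithm disabled (RTL = -1)
        return rtl, set()
    if now is None:
        now = time.time() * 1000.0
    elapsed = now - ts              # time consumed reaching this HM
    rtl = rtl - elapsed             # budget left for further hops
    if rtl <= 0:
        return rtl, set()           # timed out: report query failure
    # Keep only peers that can be queried and can answer within the
    # budget (twice the one-way latency for the round trip to the peer).
    return rtl, {p for p, d in peer_latency.items() if 2 * d <= rtl}
```

For instance, a QUERY sent at t = 0 with RTL = 100 ms that reaches an HM at t = 20 ms has 80 ms left, so a peer 10 ms away qualifies (round trip 20 ms) while a peer 60 ms away does not.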
As shown in
Figure 3b–d, we demonstrate the different situations in which providers and subscribers are located in the same HEA at different levels of ENRS. Based on our name resolution schemes, if the providers and subscribers are in the same network domain, the resolution path can also be contained in that domain. Thus, local resolution at different granularities and forwarding locality can be achieved; as a consequence, inter-domain traffic is minimized. In Case 2 and Case 3, if the latency to the peer HMs is tolerable, the query request is forwarded over the peer links of the HMs, which effectively manages the spreading scope of the messages. Moreover, ENRS avoids frequent updating of name bindings when UEs move within an HEA, which reduces both the update traffic and the maintenance overhead.
Mobility has become a basic requirement of communication networks because of the explosive growth of wireless and mobile devices. Mobility objectives in ICN can be divided into two types: producer mobility and consumer mobility. In our system, both producer and consumer mobility can be handled primarily by ENRS through its support for late-binding technology [
49]. To support mobility, when an entity (e.g., a UE, an IoT device, or a data entity) moves and attaches to a new Point of Access (PoA), it has to re-register its new NA(s) in ENRS, and the binding must be updated in time to keep the entity accessible to other entities through ENRS.
From another perspective, ENRS is a local NRS; thus, different levels of HMs represent different capabilities for offloading control traffic from the global NRS. If a name query request is satisfied by ENRS, that query load is offloaded from the global NRS by ENRS. The offloading rate equals the hit ratio, which makes the traffic-offloading benefit of ENRS intuitive.