Article

Multi-Phase Adaptive Recoding: An Analogue of Partial Retransmission in Batched Network Coding †

by
Hoover H. F. Yin
1,2,*,
Mehrdad Tahernia
3 and
Hugo Wai Leung Mak
4,5,*
1
Department of Information Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
2
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
3
n-hop technologies Limited, Unit 316, 3/F, Building 8W, Phase Two, Hong Kong Science Park, Pak Shek Kok, New Territories, Hong Kong, China
4
Department of Mathematics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
5
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
*
Authors to whom correspondence should be addressed.
This paper is an extended version of our paper published in Yin, H. H. F.; Tahernia, M. Multi-Phase Recoding for Batched Network Coding. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022; pp. 25–30.
Network 2024, 4(4), 468-497; https://doi.org/10.3390/network4040024
Submission received: 1 September 2024 / Revised: 7 October 2024 / Accepted: 23 October 2024 / Published: 30 October 2024

Abstract:
Batched network coding (BNC) is a practical realization of random linear network coding (RLNC) designed for reliable transmission over multi-hop networks with packet loss. By grouping coded packets into batches and restricting RLNC operations to packets of the same batch, BNC resolves the high computational and storage costs that RLNC imposes on the intermediate nodes. A simple and common way to apply BNC is to fire and forget the recoded packets at the intermediate nodes, as BNC can act as an erasure code for data recovery. Due to the finiteness of the batch size, the recoding strategy is a critical design choice that affects the throughput, the storage requirements, and the computational cost of BNC. The gain of the recoding strategy can be enhanced with the aid of a feedback mechanism; however, the utilization and development of such a mechanism is not yet standardized. In this paper, we investigate a multi-phase recoding mechanism for BNC. In each phase, recoding depends on the amount of innovative information remaining at the current node after the transmission of the previous phases has been completed. The relevant information can be obtained via hop-by-hop feedback, from which a more precise recoding scheme that allocates networking resources can be established. Unlike hop-by-hop retransmission schemes, the reception status of individual packets does not need to be known, and the packets sent in the next phase need not be the packets lost in the previous phase. Further, due to the loss-tolerance feature of BNC, it is unnecessary to pass all innovative information to the next node. This study illustrates that multi-phase recoding can significantly boost the throughput and reduce the decoding time compared with the traditional single-phase recoding approach. This opens a new window for designing better BNC strategies rather than blindly sending more batches.

1. Introduction

Multi-hop wireless networks are an emerging network topology [1]. They can be found in various scenarios including the Internet of Things [2,3], sensor networks [4], vehicular ad hoc networks [5,6], smart lamppost networks [7,8], free-space optical networks [9], field area networks [10], and terahertz communications [11]. Wireless links are easily interfered with by other wireless signals and by undesirable environmental constraints and events. A packet with a mismatched checksum is regarded as corrupted and is dropped by the network node. In other words, in traditional networking that adopts end-to-end retransmission, the destination node can receive a packet only when this packet is correctly transmitted through all network links. This probability diminishes exponentially with the number of hops.
The widely adopted reliable communication protocol, TCP, is not designed for wireless communications. For example, packet loss in wireless communications may not be caused by congestion but can be attributed to interference and temporary shadowing. However, such a packet loss event will trigger the TCP congestion control mechanism and, as a result, reduce the transmission rate. Variations of TCP such as ATCP [12] were proposed to handle these packet loss scenarios. Advanced technologies such as Aspera [13] only send feedback and retransmission requests when a packet is lost. However, these TCP-like protocols are still end-to-end protocols. In other words, the default store-and-forward strategy is applied at the intermediate network nodes and end-to-end retransmission is adopted. A packet can reach the destination node only if it has not been lost at any network link. In a multi-hop wireless network with packet loss, the end-to-end retransmission approach may further degrade the system performance due to the high chance of losing a retransmitted packet at one of the lossy links.

1.1. Network Coding Approaches

To truly enhance the performance of multi-hop wireless communications with packet loss and a large number of hops, the intermediate network nodes have to adopt a strategy other than forwarding. Random linear network coding (RLNC) [14] is a realization of network coding [15,16] that allows the intermediate network nodes to transmit new packets generated by the received packets. Previous works showed that RLNC can achieve the capacity of networks with packet loss for a wide range of scenarios [14,17,18,19,20,21]. Instead of forwarding, recoding (i.e., re-encoding) is performed at the intermediate network nodes, which generates recoded packets by linearly combining the received packets randomly. Various directions on applying RLNC were investigated in the literature, such as machine learning approaches [22], secure communications [23,24], speedup via codebook design [25], hardware implementations [26,27], vehicular networks [28,29], mobile networks [30,31], interactions with existing network protocols [32,33], energy harvesting [34], and Internet of Things [35].
A direct implementation of the above RLNC scheme has a few technical problems and constraints. First, each intermediate network node has to buffer all the received packets, which may consume a huge amount of memory. Second, a coefficient vector attached to each packet to record the recoding operations is essential for decoding; however, this vector can be very long when the number of input packets is large, which induces significant consumption of network resources. Third, the destination node has to solve a big, dense system of linear equations, which consumes significant computational resources for Gaussian elimination [36].
To resolve these problems, generation-based RLNC was proposed in [37]. In this approach, the input packets are first partitioned into multiple disjoint subsets called generations. RLNC is then applied to each generation independently, i.e., the approach can be viewed as applying RLNC to multiple pieces of data (the generations) independently. A generation can be discarded after its transmission. The main idea is that each generation is smaller than the original whole piece of data; therefore, generation-based RLNC has smaller storage and computational requirements. Also, due to the smaller number of input packets in a generation, the coefficient vector is also shorter. However, the advantage of RLNC diminishes when the generation size is smaller; that is, we need a sufficiently large generation size to enjoy the gain of RLNC, but the size cannot be too large; otherwise, we cannot mitigate the issues of RLNC mentioned above. Therefore, further optimizations of generation-based RLNC were proposed in numerous studies, such as parallel decoding [38], speedup via codebook-based approaches [39], smaller decoding delay and complexity [40,41,42,43,44,45], input packet size selection [46,47,48,49,50], and coefficient overhead reduction [51,52,53].
In generation-based RLNC, data in a generation cannot be decoded using packets from another generation. The information carried by a generation has to be completely received in order to recover the data. This feature suggests that the optimal theoretical rate cannot be achieved. Towards the optimal rate, overlapped subsets of input packets were considered [54,55,56,57]. More sophisticated approaches restrict the application of RLNC to small subsets of coded packets generated from the input packets [58,59,60,61,62,63]. The coded packets can be generated by LDPC [60,61], generalized fountain codes [62,63], etc. This application of coding theory in network coding leads to another variant of RLNC called batched network coding (BNC). The small subsets of coded packets are called batches. In early literature, a batch is also known as a class [54], a chunk [55], or a segment [64]. A notable difference of BNC from the generation-based RLNC is that BNC decodes the batches jointly such that the decoder does not need to receive all the packets within every batch. The small memory requirement of BNC enables hardware acceleration to boost the real-time performance [65,66,67].
The ordinary RLNC can be regarded as a “BNC with a single batch”, so that we can perform recoding on the data until the transmission is completed. When there are multiple batches, one has to specify the number of recoded packets to be generated for each batch. The information carried by each batch, known as the “rank” of the batch, jointly affects the throughput of BNC. More specifically, the achievable rate of BNC is upper bounded by the expectation of the ranks of the batches arriving at the destination node [68]. Therefore, our goal is to design a recoding scheme that preserves more “ranks” across the batches.

1.2. Recoding of Batched Network Coding

The simplest recoding scheme for BNC is known as the baseline recoding, which generates the same number of recoded packets for all batches regardless of their ranks. Due to its simple and deterministic structure, it appears in many BNC designs and analyses such as [69,70,71]. However, the throughput cannot be optimized [72] as the scheme assigns too many recoded packets for those low-rank batches. Adaptive recoding [64,73,74] is an optimization framework for deciding the number of recoded packets to be sent in order to enhance the throughput. The framework depends only on the local information at each node; thus, it is capable of being applied distributively and adopted for more advanced formulations [75,76].
On the other hand, systematic recoding [72,77] is a subsidiary recoding scheme that forwards the received linearly independent recoded packets at a node (i.e., those packets that carry innovative rank to the node) and regards them as the recoded packets generated by the node itself. This way, fewer recoded packets have to be generated and transmitted later on, thus potentially reducing the delay of receiving a batch at the next node as well as the time taken for decoding. The number of extra recoded packets to be generated is independent of systematic recoding but is related to the results of baseline or adaptive recoding. Theoretically speaking, the throughput of systematic recoding is better than generating all recoded packets by RLNC, though the benefit is indistinguishable in practice [77]. The actual enhancement in decoding time, however, is not well studied.
Although many works focus on the recoding of BNC, they consider a fire-and-forget approach: the reception status of the recoded packets generated by the current node is not taken into consideration. This fire-and-forget approach can be useful in extreme applications where feedback is expensive or not available, such as deep-space [78,79,80] and underwater communications [81,82,83]. However, the use of feedback is very common in daily networking applications. For fire-and-forget adaptive recoding, feedback can be used to update the channel condition for further transmission [74]. Yet, it was also shown in [74] that although the throughput can be enhanced, the gain is insignificant. Therefore, a better way of making use of feedback for BNC has yet to be found.

1.3. Aims and Objectives

In this paper, we consider the use of the “reception status” as feedback to the previous node. As every recoded packet is a random linear combination of the existing packets, all recoded packets of the same batch have the same importance with respect to the information they carry. In other words, the “reception status” is not the status of each individual packet but a piece of information that represents the rank of the batch.
From the reception status, the node can infer the number of “innovative” ranks that it can still provide to the next node. Thus, the node may start another phase of transmission for sending these innovative ranks by recoding, where the number of recoded packets in this phase depends on the innovative rank. The throughput can be enhanced because this approach can be regarded as “partial” hop-by-hop retransmission. Partial retransmission is emphasized in this context because in BNC, not all ranks (or packets) in a batch need to be received for data recovery. The “retransmitted” packets, which are recoded packets generated by random linear combinations, are likely to be different from what the node previously sent. On the other hand, due to recoding, more packets beyond the innovative rank of a batch can be sent before receiving the feedback. These features of BNC make the concept of “retransmission” different from the conventional one.
In general, one can perform more than two phases of recoding, i.e., multi-phase recoding. To understand the idea of “phase”, we first illustrate in Figure 1 a three-phase recoding. To transmit a batch, the current node begins phase 1 and sends a few recoded packets. The number of recoded packets depends on the recoding scheme, which will be discussed in Section 3. The lost packets are marked by crosses in Figure 1. After the current node sends all phase 1 packets to the next node, the next node sends a feedback packet for phase 1, which describes the reception status of the batch. According to the feedback packet, the current node infers the number of “ranks” of the batch that can still be provided to the next node, i.e., the innovative rank of the batch; then, the transmission of phase 2 will begin. The number of recoded packets depends on the innovative rank inferred. After the current node has sent all phase 2 packets, the next node gives feedback for phase 2. Lastly, the current node starts phase 3 and discards the batch afterwards. Although the reception state after the last phase does not seem useful as it will not trigger a new phase, it can be used for synchronization in the protocol design. Details can be found in Section 3.4.
Although the idea looks simple, the main difficulty arises from the allocation of the expected number of packets to be sent in each phase. This problem does not exist in generation-based RLNC, where every rank must eventually be delivered; in BNC, the next node is not required to receive all the ranks. The optimization problem is a generalization of the adaptive recoding framework with high dependence between phases.

1.4. Paper Organization and Our Contributions

The organization of this paper is as follows: The background of BNC and the formulation of the adaptive recoding framework are described in Section 2. This formulation targets a fire-and-forget strategy, i.e., a one-phase adaptive recoding scheme. Next, the optimization model of multi-phase adaptive recoding is described in Section 3. The general problem in Section 3.1 is complex and hard to solve due to the high dependency between phases. Therefore, a relaxed model is formulated in Section 3.2, and a heuristic to efficiently solve the two-phase problem is proposed. After that, the mixed use of systematic recoding and multi-phase recoding is investigated in Section 3.3. This version is easy to solve and gives a lower bound for the other multi-phase recoding models. Next, the design of a protocol that supports multi-phase recoding is described in Section 3.4, which allows the evaluation of the decoding time. Finally, numerical evaluations of the throughput and the decoding time are presented in Section 4, and the study is concluded in Section 5. The flow of this research is highlighted in the flowchart in Figure 2.
The first contribution of this study lies in the formulation and investigation of the throughput of multi-phase recoding. As is traditional in the BNC literature, the throughput is defined as the expected value of the rank distribution of the batches arriving at the destination node, i.e., the theoretical upper bound of the achievable rate [68]. This is because there exist BNCs (e.g., BATS codes [63]) that can achieve a close-to-optimal rate. The throughput is the objective to be maximized in the framework of adaptive recoding. However, the full multi-phase problem is hard to optimize in general; thus, we formulate a relaxed problem as a baseline for comparison. For the two-phase relaxed problem, we also propose a heuristic so that we can approximate the solution efficiently.
The throughput defined for BNC does not consider the time needed for decoding; it is a measurement of the amount of information carried by the transmissions. Yet, decoding time is a crucial factor in Internet of Things and real-time applications. In traditional BNC, the appropriate time to start recoding a batch is when the previous node has finished sending all recoded packets of that batch, as this arrangement maximizes the throughput (i.e., the expected rank) [84,85]. With systematic recoding, the received linearly independent recoded packets are directly forwarded to the next node; thus, the number of recoded packets to be generated at later stages is reduced. Although this approach can potentially reduce the decoding time, it has to be generalized into a multi-phase variant to align with the multi-phase recoding scheme, which is the second contribution of this study.
An example of a two-phase variant of systematic recoding is illustrated in Figure 3. The flow of the phases between the previous and the current nodes is the same as that shown in Figure 1. When the current node receives a linearly independent recoded packet, it forwards the packet to the next node directly. Starting from phase 2, the nodes (both the previous and the current nodes) generate new recoded packets by RLNC to avoid (with high probability) the same packet arriving at the next node again. Except for the last phase, the number of recoded packets equals the innovative rank in the phase; thus, the idea is similar to the original systematic recoding. In the last phase, fire-and-forget adaptive recoding is applied to complete the batch transmission. There appear to be many idling time intervals between the current and the next nodes in the figure, but this is because only a single batch is shown. During these time intervals, packets of other batches can be sent to utilize the link; a more detailed discussion can be found in Section 3.4. Although the throughput is not optimized, the multi-phase systematic recoding problem can be solved very efficiently; therefore, it can be adopted in systems where the number of recoded packets per phase cannot be precomputed in advance.
The third contribution lies in the description of a baseline protocol that supports multi-phase recoding. The technical shortcoming of existing protocols for BNC, such as those in [72,85,86], is that they assume a certain sequential order of the batches to determine whether more packets of a batch will be received in the future. This way, one can determine when the recoding of a batch can be started. In the multi-phase scenario, instead of waiting for the completion of all phases of a batch before transmitting the packets of another batch, one can interleave the phases of different batches to reduce the number of idling time intervals. However, the sequential order of the batches is then disrupted. When the multi-phase variant of systematic recoding is used, a systematic recoded packet can be transmitted at any time, which disrupts the sequential order even further. In other words, without renovating the existing protocols for BNC, multi-phase recoding cannot be realized.
The purpose of our protocol design is to give a preliminary idea for real-world deployment of the multi-phase recoding strategy. In our design, the problem of deciding the number of recoded packets per batch per phase is separated from the protocol so that one can apply different multi-phase recoding schemes for comparison. With the help of this protocol, we evaluate the decoding time of multi-phase recoding and compare it with that of single-phase recoding. As a baseline workable solution, we derive the protocol from the minimal protocol for BNC [72] so that the simple (if not the simplest) structure is inherited. Further adjustment and improvement would be needed to adopt multi-phase recoding in a more complicated system.

2. Preliminary

Although there are many possible routes connecting a source node and a destination node, only one route is selected for each packet; thus, line networks are the fundamental building blocks for describing the transmission. A line network is a sequence of network nodes where network links only exist between two neighboring nodes. A recoding scheme for line networks can be extended to general unicast networks and certain multicast networks [63,64]. On the other hand, in scenarios such as bridging the communications of remote areas, street segments of smart lamppost networks, and deep-space [79,80] and deep-sea [82,83] communications, the network topologies are naturally multi-hop line networks or segments of them. Therefore, we focus only on line networks in this paper. Although there are works on dependent packet loss models [75,76] and time-variant channels [87,88], we consider independent packet loss at each link to simplify the analysis. This assumption was made in many BNC studies such as [61,63,64,70,89].

2.1. Batched Network Coding

Denote by $\mathbb{Z}$ and $\mathbb{Z}^+$ the sets of integers and positive integers, respectively. Suppose we want to send a file from a source node to a destination node via a multi-hop line network. The data to be transmitted are divided into a set of input packets of the same length. Each input packet is regarded as a column vector over a finite field. An unsuitable choice of input packet size has significant implications, as we need to pad useless symbols to the file [46,90]. A similar issue also occurs in the ordinary RLNC, and there are various schemes to reduce this overhead [46,47,48,49,50]. For BNC, there are other types of padding overheads due to the finite size of batches. To select a good input packet size, one solution is to apply the heuristic in [91]. We consider a sufficiently large field size so that we can make the approximate assumption that two random vectors over this field are linearly independent of each other. This assumption is common in the network coding literature such as [70,75,89,92,93]. This way, any $n$ random vectors in an $r$-dimensional vector space span a $\min\{n, r\}$-dimensional vector space with high probability.
The source node runs a BNC encoder to generate batches. Each generated batch consists of $M \in \mathbb{Z}^+$ coded packets, where $M$ is known as the batch size. Each coded packet is a linear combination of a subset of the input packets. The formation of the subset depends on the design of the BNC. The choice of batch size is a trade-off between the advantages and disadvantages of RLNC. A large batch size can achieve a higher rate, but it reintroduces the drawbacks of the ordinary RLNC. A small batch size leads to a lower rate, but it allows practical deployments of BNC. Common choices of batch size are $M = 4, 8$, and $16$.
Take BATS codes [63,77] as an example, where BATS coding is a matrix generalization of fountain codes such as Luby transform codes [94], Raptor codes [95,96], and online codes [97]. The encoder samples a predefined degree distribution to obtain a degree, where the degree is the number of input packets contributed to the batch, and this set of packets is randomly chosen from the input packets. Depending on the application, there are various ways to formulate the degree distribution based on differential equation analysis [63] or tree analysis [98]. Examples of applications include streaming via sliding windows [99,100] or expanding windows [101,102], finite-length data transmission [103], unequal data protection [104], and multicast communication [105]. The throughput degradation issue due to unmatched degree distribution was investigated and mitigated in [77,105,106], and was resolved with a close-to-optimal throughput via a Wasserstein distributionally robust optimization framework in [107].
The encoding and decoding of batches form the outer code of BNC. As a general description, let $\mathbf{S}$ be a matrix formed by juxtaposing the input packets and $\mathbf{G}$ be a width-$M$ generator matrix of a batch. Each column in the product of $\mathbf{S}$ and $\mathbf{G}$ corresponds to a packet of the batch.
A coefficient vector is attached to each of the coded packets. The juxtaposition of the coefficient vectors in a freshly generated batch is an $M \times M$ full-rank matrix, e.g., an identity matrix [85,86]. The rank of a batch at a node is defined as the dimension of the vector space spanned by the coefficient vectors of the packets of the batch at the node. In other words, by juxtaposing the coefficient vectors of the packets of a batch at the node to form a matrix $\mathbf{H}$, the matrix rank of $\mathbf{H}$ is the rank of the batch at this node. At the source node, every batch has rank $M$.
Recoding is performed at any non-destination node and is known as the inner code of BNC; that is, we can also apply recoding to generate more packets at the source node before transmitting them to the next node. Simply speaking, recoding generates recoded packets of a batch by applying RLNC to the packets of the batch at the node. Let $c_i$ and $v_i$ be the coefficient vector and the payload of the $i$-th packet in the batch, respectively. A new recoded packet generated by RLNC has coefficient vector $\sum_i \beta_i c_i$ and payload $\sum_i \beta_i v_i$, where the $\beta_i$'s are randomly chosen from the field. The number of recoded packets depends on the recoding scheme [64,70,71,75].
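A minimal sketch of this recoding operation is given below. For arithmetic simplicity it uses a prime field (the modulus `Q` and the function name are illustrative only); practical BNC implementations typically use GF(2^8) with table-based arithmetic.

```python
import random

Q = 257  # a prime, so integers modulo Q form a field (illustrative choice only)

def recode(batch, num_recoded):
    """Generate recoded packets of one batch by random linear combinations.

    `batch` is a list of (coeff_vector, payload) pairs, each a list of
    integers in [0, Q).  A recoded packet has coefficient vector
    sum_i beta_i * c_i and payload sum_i beta_i * v_i (component-wise, mod Q).
    """
    recoded = []
    for _ in range(num_recoded):
        betas = [random.randrange(Q) for _ in batch]
        coeff = [sum(b * c[k] for b, (c, _) in zip(betas, batch)) % Q
                 for k in range(len(batch[0][0]))]
        payload = [sum(b * v[k] for b, (_, v) in zip(betas, batch)) % Q
                   for k in range(len(batch[0][1]))]
        recoded.append((coeff, payload))
    return recoded
```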
Before generating new recoded packets, the packets of a batch on hand can be considered as part of the recoded packets. This strategy is called systematic recoding [72,77], which is independent of the decision on the number of recoded packets to be sent. A more refined version is that only the linearly independent packets of a batch are being forwarded. This way, we would not forward too many “useless” packets if the previous node generates excessive recoded packets.
The destination node runs a BNC decoder, where the decoding method depends on the BNC, e.g., Gaussian elimination, belief propagation, and inactivation [96,108]. The ranks of the batches at the destination node form sufficient statistics to the performance of the BNC [68,77]. Besides the standard approach, there are also advanced encoding and decoding schemes such as improved BP decoders [109,110], systematic encoding [111], partial data recovery [112], protograph-based construction [113], encoding for expanding windows with feedback [114], decoding with coupled batch size and degree distribution [115], decoding in cooperative broadcasting scenarios [116,117], joint spatial–temporal encoding [118], and joint BP decoder with LDPC precoding [119,120,121].

2.2. Expected Rank Functions

The distribution of the ranks of the batches is closely related to the throughput of BNC. Hence, the expectation of the rank distribution is a core component in recoding strategy. We first describe the formulation of this expectation at the next network node.
Let $\operatorname{Bin}(n, p)$ be a binomial distribution with failure probability $p$. Its probability mass function, defined for $x \in \mathbb{Z}$, is as follows:
$$\operatorname{Bin}(n, p; x) = \begin{cases} \binom{n}{x} (1-p)^x p^{n-x} & \text{if } x = 0, 1, \ldots, n, \\ 0 & \text{otherwise.} \end{cases}$$
We further define a variant of $\operatorname{Bin}(n, p)$, denoted by $\operatorname{Bin}^{\hookleftarrow}(n, p, c)$, which we call the condensed binomial distribution. The word “condensed” is borrowed from the Bose–Einstein condensate in physics, which exhibits a similar phenomenon in its distribution. Here, we give a brief description in layman's terms. Suppose we have a distribution of finding a boson, which is a type of particle, in different (quantum) states. When we lower the temperature, the distribution shifts towards the lower states. However, there is a lowest accessible state. The probability masses that would shift beyond this state accumulate and “condense” into the lowest accessible state. The probability mass function is defined as
$$\operatorname{Bin}^{\hookleftarrow}(n, p, c; x) = \begin{cases} \operatorname{Bin}(n, p; x) & \text{if } x = 0, 1, \ldots, c-1, \\ \sum_{i=c}^{n} \operatorname{Bin}(n, p; i) & \text{if } x = c, \\ 0 & \text{otherwise} \end{cases}$$
for $x \in \mathbb{Z}$. The hooked arrow symbolizes that we move the probability masses of the tail towards where the arrow points. The parameter $c$ is called the condensed state of the distribution.
We can interpret these distributions as follows. Let $p$ be the packet loss rate of the outgoing channel and assume the packet loss events are independent. When the current node sends $n$ packets of a batch, the number of packets of this batch received at the next node follows $\operatorname{Bin}(n, p)$, while the rank of this batch at the next node follows $\operatorname{Bin}^{\hookleftarrow}(n, p, r)$ when the batch has rank $r$ at the current node. This is because when the field size is sufficiently large, any $i$ random vectors from an $r$-dimensional vector space span a $\min\{i, r\}$-dimensional vector subspace with high probability. The exact formula for the probability mass function of the rank can be found in [63]. Yet, it is common to assume a sufficiently large field size in the context of recoding of BNC [70,74,75].
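The following sketch computes the two probability mass functions directly from the definitions above; the function names are ours.

```python
from math import comb

def binom_pmf(n, p, x):
    """Bin(n, p; x): mass at x received packets out of n sent, loss probability p."""
    if 0 <= x <= n:
        return comb(n, x) * (1 - p) ** x * p ** (n - x)
    return 0.0

def condensed_binom_pmf(n, p, c, x):
    """Condensed binomial: the tail mass from c onwards is condensed into x = c."""
    if 0 <= x < c:
        return binom_pmf(n, p, x)
    if x == c:
        return sum(binom_pmf(n, p, i) for i in range(c, n + 1))
    return 0.0
```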
Let $V$ and $V'$ be the vector spaces spanned by (the coefficient vectors of) the packets of a batch at the current node and the next node, respectively. The “rank” that the current node can still provide to the next node is the “useful” information of the batch at the current node. This concept, known as the innovative rank, was adopted in [122], albeit in the context of overhearing networks. Formally, it is defined as
$$\dim(V + V') - \dim(V').$$
Before the current node sends any packet to the next node, $V'$ is the zero space. In this case, the innovative rank equals the rank of the batch; that is, for a fire-and-forget recoding scheme, the rank equals the innovative rank.
The goal of recoding is to retain as much “rank” as possible. In a distributed view, we want to maximize the expected rank of the batches at the next node, which is the core idea of adaptive recoding. Denote by $E(r, t)$ the expected rank of a batch at the next node when this batch has innovative rank $r$ at the current node and the current node sends $t$ recoded packets, that is,
$$E(r, t) = \sum_{i=0}^{t} \operatorname{Bin}(t, p; i) \min\{i, r\} = \sum_{i=0}^{M} i \operatorname{Bin}^{\hookleftarrow}(t, p, r; i).$$
This is called the expected rank function. As the independent packet loss pattern is a Bernoulli process, which is stationary, $E(r, t)$ has the following properties according to [73]:
  • $E(r, t)$ is concave and monotonically increasing with respect to $t$;
  • $0 = E(r, 0) \le E(r, t) \le r$ for any non-negative integer $t$.
We can understand these properties via the following intuitions:
  • A newly received recoded packet of a batch at the next node is either linearly independent of the already received recoded packets of the same batch or not. The chance of being linearly independent is smaller when there are more received recoded packets at the node, i.e., $E(r, t)$ is concave with respect to $t$.
  • Receiving a new recoded packet will not decrease the rank of the batch, as spanning a vector space with one more vector will not decrease its dimension; that is, $E(r, t)$ is monotonically increasing with respect to $t$.
  • The recoded packets are random vectors in an $r$-dimensional vector space, and the dimension of their span cannot exceed $r$, i.e., $E(r, t)$ is upper-bounded by $r$.
  • When no recoded packet is sent, the rank of the batch at the next node must be $0$, i.e., $E(r, 0) = 0$.
As in [73], we extend the domain of $t$ from the set of non-negative integers to the set of non-negative real numbers by linear interpolation:
$$E(r, t) = (1 - \epsilon) E(r, \lfloor t \rfloor) + \epsilon E(r, \lfloor t \rfloor + 1),$$
where $\epsilon = t - \lfloor t \rfloor$. That is, we first send $\lfloor t \rfloor$ packets; then, the fractional part of $t$ represents the probability of sending one more packet. The concavity of $E(r, t)$ with respect to $t$ is preserved. The reasoning behind this extension is that if we want to maximize the expected rank with a probability distribution over the number of recoded packets to transmit, the optimal distribution is either deterministic (a Dirac distribution) or has a consecutive support of size 2, which follows from applying Jensen's inequality to the concave expected rank function [73].
For simplicity, we also define in a similar manner
$$\operatorname{Bin}^{\hookleftarrow}(t, p, r; i) = (1 - \epsilon) \operatorname{Bin}^{\hookleftarrow}(\lfloor t \rfloor, p, r; i) + \epsilon \operatorname{Bin}^{\hookleftarrow}(\lfloor t \rfloor + 1, p, r; i).$$
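As a sketch, the expected rank function and its linear interpolation can be computed directly from these definitions (naming is ours; a production implementation would memoize the values or use the closed forms in [74]).

```python
from math import comb, floor

def expected_rank_int(r, t, p):
    """E(r, t) for integer t: expected rank at the next node when a batch of
    innovative rank r is sent as t recoded packets over a link with loss rate p."""
    return sum(comb(t, i) * (1 - p) ** i * p ** (t - i) * min(i, r)
               for i in range(t + 1))

def expected_rank(r, t, p):
    """E(r, t) extended to non-negative real t by linear interpolation."""
    lo = floor(t)
    eps = t - lo
    return (1 - eps) * expected_rank_int(r, lo, p) + eps * expected_rank_int(r, lo + 1, p)
```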
With the formulation of expected rank functions, we can then discuss the formulation of adaptive recoding.

2.3. Traditional Adaptive Recoding Problem (TAP)

We now discuss the adaptive recoding framework, which we call the Traditional Adaptive recoding Problem (TAP) in this paper. The formulation of our multi-phase recoding models is a generalization of this TAP framework. After that, we reduce our models into instances of TAPs so that we can solve them efficiently.
Let $R$ be the random variable of the (innovative) rank of a batch. When the node receives a batch of (innovative) rank $r$, it sends $t_r$ recoded packets to the next node. As we cannot use the outgoing link indefinitely for a batch, there is a “resource” limitation on the transmission. This resource, denoted by $t_{\text{avg}}$, is the average number of recoded packets per batch, which is an input to the adaptive recoding framework. Depending on the scenario, this value may be jointly optimized with other objectives [122].
From [73], we know that if the packet loss pattern is a stationary stochastic process, e.g., the Bernoulli process used in this paper for independent packet loss, then the expected rank function $E(r, t)$ is concave with respect to $t$. Further, under this concavity condition, the adaptive recoding problem that maximizes the average expected rank at the next node can be written as
$$\max_{\{t_r\}_{r=0}^{M}} \sum_{r=0}^{M} \Pr(R = r) E(r, t_r) \quad \text{s.t.} \quad \sum_{r=0}^{M} \Pr(R = r)\, t_r = t_{\text{avg}}. \tag{1}$$
Our multi-phase recoding formulations in the remainder of this paper are constructed based on (1). For ease of reference, we reproduce in Algorithm 1 the greedy algorithm for solving TAP [73] with the initial condition embedded. In the algorithm, $\Delta_{r,t}$ is defined to be $E(r, t+1) - E(r, t)$, which can be calculated via dynamic programming or regularized incomplete beta functions [74].
Algorithm 1: Greedy Algorithm for Solving TAP.
     [Pseudocode presented as a figure in the original article.]
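Since the pseudocode itself is rendered as a figure, we include a hedged Python sketch of the greedy idea, reusing expected_rank_int from the sketch in Section 2.2. It follows the description in the text rather than reproducing Algorithm 1 line by line.

```python
def delta(r, t, p):
    """Delta_{r,t} = E(r, t+1) - E(r, t)."""
    return expected_rank_int(r, t + 1, p) - expected_rank_int(r, t, p)

def solve_tap(rank_dist, t_avg, p, M):
    """Greedy sketch for TAP.  rank_dist[r] = Pr(R = r) for r = 0..M.

    Repeatedly grants one more recoded packet to the rank with the largest
    marginal expected-rank gain until the average budget t_avg is spent;
    the final grant may be fractional (cf. the interpolation of E(r, t))."""
    t = [0.0] * (M + 1)
    budget = t_avg
    while budget > 1e-12:
        # rank with the largest marginal gain among ranks that actually occur
        r = max((r for r in range(M + 1) if rank_dist[r] > 0),
                key=lambda r: delta(r, int(t[r]), p))
        cost = rank_dist[r]              # resource used per unit increase of t_r
        step = min(1.0, budget / cost)
        t[r] += step
        budget -= step * cost
    return t
```

For instance, `solve_tap([0.0, 0.05, 0.1, 0.25, 0.6], 4.0, 0.1, 4)` returns the per-rank packet counts for a hypothetical incoming rank distribution with $M = 4$, $t_{\text{avg}} = 4$, and $p = 0.1$.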
As a remark, when there is insufficient resource for forwarding, i.e., $E[R] \ge t_{\text{avg}}$, any feasible solution satisfying $t_r \le r$ for all $r$ is an optimal solution of (1), and the optimal objective value is $(1-p) t_{\text{avg}}$. This is stated in the following lemma, which is a continuous analogue of [123], Lemma 2.
Lemma 1.
When $E[R] \ge t_{\text{avg}}$, any feasible solution satisfying $t_r \le r$ for all $r$ is an optimal solution of (1), and the optimal objective value is $(1-p) t_{\text{avg}}$.
Proof. 
According to [73], Corollary 2, all $\Delta_{r,t}$ have the same value when $t < r$, for all $r$. On the other hand, we know that $\Delta_{r+1,t} > \Delta_{r,t}$ for all $t \ge r > 0$ from [73], Theorem 1. In other words, as Algorithm 1 selects the largest $\Delta_{\cdot,\cdot}$ until the resource depletes, we conclude that it terminates with $t_r \le r$ for all $r$. As $E(r, t_r) = (1-p) t_r$ for $t_r \le r$, the objective value is $\sum_{r=0}^{M} \Pr(R = r)(1-p) t_r = (1-p) t_{\text{avg}}$. □

3. Multi-Phase Adaptive Recoding

The idea of an $S$-phase transmission is simple: In phase $i$, for every batch having innovative rank $r_i$ at the beginning of this phase, we generate and transmit $t_{i,r_i}$ recoded packets. For simplicity, we call this innovative rank the phase $i$ innovative rank of the batch. Afterwards, the phase $i$ transmission is completed. If $i < S$, the next node calculates the total rank of the batch formed by all the received packets of this batch from phase 1 to phase $i$. By sending this information back to the current node, the current node can calculate the phase $(i+1)$ innovative rank of the batch, i.e., the rank of the batch at the current node minus that at the next node. Then, the current node starts the phase $(i+1)$ transmission. We can see that besides $S$, the main component under our control to optimize the throughput is the number of recoded packets to be generated and transmitted in each phase.
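A small simulation sketch of this phase mechanism for a single batch is shown below. The lookup table `packets_per_phase` is hypothetical and would come from the optimization problems in the following subsections.

```python
import random

def simulate_multiphase_batch(rank_at_node, packets_per_phase, p):
    """Sketch: S-phase transmission of one batch over a link with loss rate p.

    packets_per_phase[i][r] plays the role of t_{i+1, r}: the number of recoded
    packets to send in phase i+1 when the phase innovative rank is r.
    Assumes a large field, so every received packet drawn from the innovative
    space contributes one rank.  Returns the rank accumulated at the next node."""
    innovative = rank_at_node            # phase 1 innovative rank = batch rank
    rank_next = 0
    for t_phase in packets_per_phase:
        sent = round(t_phase[innovative])
        received = sum(random.random() > p for _ in range(sent))
        gained = min(received, innovative)   # rank cannot exceed the innovative space
        rank_next += gained
        innovative -= gained                 # the feedback reveals this remainder
        if innovative == 0:
            break                            # no more innovative rank to deliver
    return rank_next
```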
For one-phase transmission, the problem reduces to the fire-and-forget TAP. In this case, the number of recoded packets can be obtained via the traditional adaptive recoding algorithm due to the fact that the phase 1 innovative rank of a batch is the same as the rank of this batch at the current node, because the next node knows nothing about this batch yet. In the following text, we investigate the number of recoded packets in multi-phase transmission.

3.1. Multi-Phase General Adaptive Recoding Problem (GAP)

To formulate the optimization problem for multi-phase transmission, we first derive the probability mass functions of the innovative rank distributions. Let $R_i$ be the random variable of the phase $i$ innovative rank. In the following, we write $\Pr(R_i = r_i)$ as $\Pr(r_i)$ to simplify the notation. Note that $R_{i+1}$ depends on the phase $i$ innovative rank and the number of recoded packets $t_{i,\cdot}$ to be sent. When all $t_{i,\cdot}$ are given, $R_1 \to R_2 \to \cdots \to R_S$ forms a Markov chain. Therefore, we have
$$\Pr(r_i) = \sum_{r_1, r_2, \ldots, r_{i-1}} \Pr(r_1) \prod_{j=1}^{i-1} \Pr(r_{j+1} \mid r_j) = \sum_{r_1, r_2, \ldots, r_{i-1}} \Pr(r_1) \prod_{j=1}^{i-1} \operatorname{Bin}^{\hookleftarrow}(t_{j, r_j}, p, r_j; r_j - r_{j+1}), \tag{2}$$
where each index $r_1, r_2, \ldots, r_{i-1}$ in the summation runs from 0 to $M$.
Recall that the principle of adaptive recoding is to maximize the average expected rank at the next node such that the average number of recoded packets per batch is $t_{\text{avg}}$. The average number of recoded packets per batch can be obtained by
$$\sum_{r_1, r_2, \ldots, r_S} \Pr(r_1, r_2, \ldots, r_S) \sum_{i=1}^{S} t_{i, r_i} = \sum_{i=1}^{S} \sum_{r_i} \Pr(r_i)\, t_{i, r_i}.$$
Note that the $r_i$ in $t_{i, r_i}$ is the index of the outer summation, which runs from 0 to $M$. Let $\bar{R}$ be the random variable of the rank of the batch at the next node after the current node completes the phase $S$ transmission.
Theorem 1.
$$E[\bar{R}] = \sum_{i=1}^{S} \sum_{r_i} \Pr(r_i) E(r_i, t_{i, r_i}).$$
Proof. 
The probability mass function of $\bar{R}$ is
$$\Pr(\bar{R} = k) = \sum_{r_1, \ldots, r_S} \Pr(r_1, \ldots, r_S) \operatorname{Bin}^{\hookleftarrow}(t_{S, r_S}, p, r_S; k - (r_1 - r_S)).$$
Hence, $E[\bar{R}]$ equals
$$\begin{aligned}
&\sum_{k=0}^{M} \sum_{r_1, \ldots, r_S} \Pr(r_1, \ldots, r_S) \operatorname{Bin}^{\hookleftarrow}(t_{S, r_S}, p, r_S; k - (r_1 - r_S))\, k \\
&\quad = \sum_{r_1, \ldots, r_S} \Pr(r_1, \ldots, r_S) \bigl( r_1 - r_S + E(r_S, t_{S, r_S}) \bigr) \\
&\quad = \sum_{r_1, r_2, \ldots, r_S} \Pr(r_1, r_2, \ldots, r_S) \sum_{i=1}^{S-1} (r_i - r_{i+1}) + \sum_{r_S} \Pr(r_S) E(r_S, t_{S, r_S}) \\
&\quad = \sum_{i=1}^{S-1} \sum_{r_i} \Pr(r_i) \sum_{r_{i+1}} \operatorname{Bin}^{\hookleftarrow}(t_{i, r_i}, p, r_i; r_i - r_{i+1}) (r_i - r_{i+1}) + \sum_{r_S} \Pr(r_S) E(r_S, t_{S, r_S}) \\
&\quad = \sum_{i=1}^{S} \sum_{r_i} \Pr(r_i) E(r_i, t_{i, r_i}). \qquad \square
\end{aligned}$$
Let $T_i = (t_{i,r})_{r=0}^{M}$ and $T = \bigcup_{i=1}^{S} T_i$. We first model an $S$-phase General Adaptive recoding Problem (GAP) that jointly optimizes all the phases:
$$\max_{T} \sum_{i=1}^{S} \sum_{r_i} \Pr(r_i) E(r_i, t_{i, r_i}) \quad \text{s.t.} \quad \sum_{i=1}^{S} \sum_{r_i} \Pr(r_i)\, t_{i, r_i} = t_{\text{avg}}. \tag{3}$$
Any $S$-phase optimal solution is a feasible solution of the $S'$-phase problem for $S' > S$; thus, increasing the number of phases would not give a worse throughput. In this model, it is not guaranteed that maximizing the outcome of a phase would benefit the later phases, i.e., it is unknown whether a suboptimal phase would lead to a better overall objective. This dependency makes the problem challenging to solve.

3.2. Multi-Phase Relaxed Adaptive Recoding Problem (RAP)

To maximize the throughput of BNC, we should maximize the expected rank of the batches at the destination node. However, it is unknown whether a suboptimal decision at an intermediate node would lead to a better throughput at the end. To handle this dependency, TAP relaxes the problem into maximizing the expected rank at the next node. We can see that this dependency is very similar to that we have in GAP. By adopting a similar idea, we relax the formulation of GAP by separating the phases into an S-phase Relaxed Adaptive recoding Problem (RAP):
$$\max_{\alpha_i \ge 0} \sum_{i=1}^{S} \max_{T_i} \sum_{r_i} \Pr(r_i) E(r_i, t_{i, r_i}) \quad \text{s.t.} \quad \sum_{i=1}^{S} \alpha_i = t_{\text{avg}}, \quad \sum_{r_i} \Pr(r_i)\, t_{i, r_i} = \alpha_i, \; i = 1, 2, \ldots, S. \tag{4}$$
This relaxation gives a lower bound of (3) for us to understand the throughput gain. Also, once the $\alpha_i$ are known, RAP becomes $S$ individual TAPs with different $t_{\text{avg}}$'s; that is, we can solve the individual phases independently by Algorithm 1 when the $\alpha_i$ are fixed.
The formulation of RAP looks as if it could be solved by considering all $\Delta_{\cdot,\cdot}$ among the phases altogether and then applying Algorithm 1. Unluckily, this is not the case. Consider a two-phase problem: The expected rank achieved by phase 1 increases when we increase some $t_{1,r}$ for some $r \ne 0$; however, at the same time, the distribution of $R_2$ is changed. Then, the expected rank achieved by phase 2 decreases according to the following theorem, which can be proved by heavily using the properties of adaptive recoding and the first-order stochastic dominance of binomial distributions. If the decrement is larger than the increment, then the overall objective decreases, which means that we should increase some $t_{2,r}$ for some $r \ne 0$ instead, although $\Delta_{r, t_{2,r}}$ may be smaller than $\Delta_{r, t_{1,r}}$.
Theorem 2.
If the resource for phase $(i+1)$ remains the same or is decreased when some $t_{i,r}$, $r \ne 0$, are increased, then the expected rank achieved by phase $(i+1)$ in the objective of RAP decreases.
Proof. 
See Appendix A. □
Below is a concrete example. Let $M = 4$, let the source node send four packets per batch, and let the packet loss rate be $10\%$. We consider the node right after the source node. At this node, let $t_{\text{avg}} = 4$ and $p = 10\%$, i.e., the same setting as the source node. This way, both nodes have the same set of values of $\Delta_{r,t}$. We run a brute-force search for $(\alpha_1, \alpha_2)$ with a step size of $0.001$. The solution is $T_1 = (0, 1, 2, 3, 4)$, $T_2 = (0, 1, 2.9263, 4, 5)$, corrected to four decimal places. For our channel settings, we can use some properties of $\Delta_{r,t}$ proved in [74]. First, $\Delta_{r,t} = 1-p$ when $t < r$ and $\Delta_{r,t} < 1-p$ when $t \ge r \ne 0$. Therefore, we can perform a greedy algorithm similar to Algorithm 1, starting from $T_1 = T_2 = (0, 1, 2, 3, 4)$. Next, there is another property that $\Delta_{4,4} > \Delta_{3,3} > \Delta_{2,2} > \Delta_{1,1} > \Delta_{0,0} = 0$. Now, the algorithm finds the largest $\Delta_{r, t_{i,r}}$, which is $\Delta_{4,4}$. There is the freedom to choose the $\Delta_{4,4}$ in phase 1 or in phase 2. We can see that in $T_2$, we have $t_{2,4} = 5$. This means that the $\Delta_{4,4}$ in phase 2 is chosen. According to the greedy algorithm again, we should choose the next largest $\Delta_{r, t_{i,r}}$, which is $\Delta_{4,4}$ in phase 1. However, we can see that $t_{1,4} = 4$ but $t_{2,3} = 4 > 3$. That is, in order to achieve an optimal solution, we need to choose $\Delta_{3,3}$ in phase 2 instead of the larger $\Delta_{4,4}$ in phase 1. In other words, we cannot group all $\Delta_{\cdot,\cdot}$ among the phases altogether and then apply Algorithm 1 to obtain an optimal solution.
Up to this point, we know that the hardness of RAP arises from the selection of the $\alpha_i$. Our simulation shows that two-phase RAP already has a significant throughput gain; therefore, we propose an efficient bisection-like heuristic for two-phase RAP in Algorithm 2 for practical deployment. The heuristic is based on our observation that the objective of RAP appears unimodal with respect to $\alpha_1$. Note that although we solve TAP multiple times in the algorithm, we can reuse the previous output of TAP by applying the tuning scheme in [73] so that we do not need to repeatedly run the full Algorithm 1.
Algorithm 2: Heuristic for two-phase RAP.
     [Pseudocode presented as a figure in the original article.]
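Algorithm 2 itself appears as a figure. The sketch below substitutes a ternary-search narrowing over $\alpha_1$ under the same unimodality assumption, reusing solve_tap, expected_rank, and condensed_binom_pmf from the earlier sketches; it is a stand-in for illustration, not a reproduction of Algorithm 2.

```python
def phase2_rank_dist(rank_dist, t1, p, M):
    """Pr(R_2 = r): innovative rank remaining after phase 1 (interpolated in t)."""
    def cpmf(t, r, x):
        lo, eps = int(t), t - int(t)
        return ((1 - eps) * condensed_binom_pmf(lo, p, r, x)
                + eps * condensed_binom_pmf(lo + 1, p, r, x))
    dist = [0.0] * (M + 1)
    for r1 in range(M + 1):
        for r2 in range(r1 + 1):
            dist[r2] += rank_dist[r1] * cpmf(t1[r1], r1, r1 - r2)
    return dist

def two_phase_rap_heuristic(rank_dist, t_avg, p, M, iters=40):
    """Search for alpha_1 assuming the RAP objective is unimodal in alpha_1."""
    def objective(alpha1):
        t1 = solve_tap(rank_dist, alpha1, p, M)
        obj1 = sum(rank_dist[r] * expected_rank(r, t1[r], p) for r in range(M + 1))
        dist2 = phase2_rank_dist(rank_dist, t1, p, M)
        t2 = solve_tap(dist2, t_avg - alpha1, p, M)
        obj2 = sum(dist2[r] * expected_rank(r, t2[r], p) for r in range(M + 1))
        return obj1 + obj2
    lo, hi = 0.0, t_avg
    for _ in range(iters):               # ternary-search-style narrowing
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if objective(m1) < objective(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2                 # approximate phase-1 budget alpha_1
```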
Before we continue, we discuss a subtle issue in adaptive recoding. Consider $\alpha_i < E[R_i] = \sum_{r_i} \Pr(r_i)\, r_i$; that is, the amount of resource is too little, so it is insufficient to send a basis of the innovative rank space. For simplicity, we call the packets in such a basis the innovative packets. Recall that any feasible solution with $t_{i,r} \le r$ for all $r$ is an optimal solution for this phase. However, different optimal solutions can lead to different innovative rank distributions in the next phase. This issue actually also occurs in the general case when more than one $r$ has the same value of $\Delta_{r, t_r}$, but it was ignored in previous works.
In this paper, we consider the following heuristic. First, we obtain the set of candidates of $r$, i.e., $\mathcal{A} = \arg\max_{r \in \{0, 1, 2, \ldots, M\}} \Delta_{r, t_r}$. In the case discussed above where the amount of resource is too little, this set would be $\{1, 2, 3, \ldots, M\}$. Next, our intuition is that we should let those with smaller $t_r$ send more recoded packets. Although they have the same importance in terms of expected rank, their packets on average, i.e., the expected rank of the batch (at the next node) divided by $t_r$, possess a different importance. This means that it is riskier for us to lose a packet of a batch with smaller $t_r$. Lastly, another intuition is that, among the $r$'s with the same $t_r$, we should let the higher-rank one send more recoded packets so that the shape of the rank distribution at the next node is skewed towards the higher-rank side, i.e., potentially more information (rank) is retained. To conclude, we choose $r = \max \bigl( \arg\min_{r' \in \mathcal{A}} t_{r'} \bigr)$.
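A compact sketch of this tie-breaking rule (names are ours):

```python
def tie_break(deltas, t, tol=1e-12):
    """Among ranks with the (numerically) largest Delta_{r, t_r}, prefer the
    smallest t_r; among those, pick the largest rank r."""
    best = max(deltas)
    candidates = [r for r, d in enumerate(deltas) if best - d <= tol]
    min_t = min(t[r] for r in candidates)
    return max(r for r in candidates if t[r] == min_t)
```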

3.3. Multi-Phase Systematic Adaptive Recoding Problem (SAP)

We now investigate a constrained version of GAP. In systematic recoding, we forward the received, linearly independent packets of a batch to the next node—that is, the number of packets we forwarded is the rank of the batch. As a multi-phase analogue of systematic recoding, in each phase, the number of packets we “forwarded” is the innovative rank of the batch of that phase. However, except the first phase, the so-called “forwarded” packets are generated by RLNC so that we do not need to learn and detect which packets sent by the previous phases are lost during the transmission process.
We now formulate the $S$-phase Systematic Adaptive recoding Problem (SAP), where, for each batch, the node sends the innovative packets to the next node in all phases except the last one. That is, except for the last phase, we have $t_{i,r} = r$ for all $r$; these phases are called the systematic phases. For the last phase, we perform adaptive recoding on the phase $S$ innovative rank with the remaining resource. By imposing the systematic phases as constraints on (3), we obtain
$$\max_{T_S} \sum_{r_S} \Pr(r_S) E(r_S, t_{S, r_S}) + \sum_{i=1}^{S-1} \sum_{r_i} \Pr(r_i) E(r_i, r_i) \quad \text{s.t.} \quad \sum_{r_S} \Pr(r_S)\, t_{S, r_S} + \sum_{i=1}^{S-1} \sum_{r_i} \Pr(r_i)\, r_i = t_{\text{avg}}. \tag{5}$$
When $\sum_{i=1}^{S-1} \sum_{r_i} \Pr(r_i)\, r_i > t_{\text{avg}}$, i.e., there are insufficient resources to perform the first $(S-1)$ phases, (5) has no feasible solution. Thus, we need to choose a suitable $S$ in practice. On the other hand, as the variables are all for the last phase, we can regard SAP as a single-phase problem; thus, it has the same form as TAP. That is, SAP can be solved by Algorithm 1.
When (5) has a feasible solution, the optimal solution of (5) is a feasible solution of (3). As SAP is easier to solve than RAP, by investigating higher-phase SAP, we can obtain a lower bound on the performance of higher-phase GAP. Let $\tilde{R}_i$ be the random variable of the phase $i$ innovative rank when all the previous phases are systematic phases. For the first phase, define $\tilde{R}_1 = R_1$. Then, $\tilde{R}_1 \to \tilde{R}_2 \to \cdots \to \tilde{R}_S$ forms a Markov chain.
For any $k \in \mathbb{Z}^+$, systematic recoding uses $t_{k, r_k} = r_k$. That is, we have $\operatorname{Bin}^{\hookleftarrow}(r_k, p, r_k; \cdot) = \operatorname{Bin}(r_k, p; \cdot)$. The following lemma and theorem give formulae for the performance of SAP.
Lemma 2.
Assume $\sum_{i=1}^{k-1} \sum_{r_i} \Pr(r_i)\, r_i \le t_{\text{avg}}$, i.e., there are enough resources to perform $(k-1)$ systematic phases. Then, $E[\tilde{R}_k] = p^{k-1} E[R_1]$.
Proof. 
By using (2), we have
$$E[\tilde{R}_k] = \sum_{r_k} r_k \Pr(r_k) = \sum_{r_1, r_2, \ldots, r_k} r_k \Pr(r_1) \prod_{j=1}^{k-1} \operatorname{Bin}(r_j, p; r_j - r_{j+1}).$$
We prove this lemma by induction. When $k = 1$, we have $E[\tilde{R}_1] = E[R_1] = p^{1-1} E[R_1]$. Assume $E[\tilde{R}_k] = p^{k-1} E[R_1]$ for some $k \in \mathbb{Z}^+$. For the case of $k+1$, we have
$$\begin{aligned}
E[\tilde{R}_{k+1}] &= \sum_{r_1, r_2, \ldots, r_{k+1}} r_{k+1} \Pr(r_1) \prod_{j=1}^{k} \operatorname{Bin}(r_j, p; r_j - r_{j+1}) \\
&= \sum_{r_1, r_2, \ldots, r_k} \Pr(r_1) \left( \sum_{r_{k+1}} r_{k+1} \operatorname{Bin}(r_k, p; r_k - r_{k+1}) \right) \prod_{j=1}^{k-1} \operatorname{Bin}(r_j, p; r_j - r_{j+1}) \\
&= p \sum_{r_1, r_2, \ldots, r_k} r_k \Pr(r_1) \prod_{j=1}^{k-1} \operatorname{Bin}(r_j, p; r_j - r_{j+1}) = p \, E[\tilde{R}_k] = p^{(k+1)-1} E[R_1]. \qquad \square
\end{aligned}$$
Theorem 3.
Assume $\sum_{i=1}^{k} \sum_{r_i} \Pr(r_i)\, r_i \le t_{\text{avg}}$, i.e., there are enough resources to perform $k$ systematic phases. Then, the resources consumed by these $k$ phases are $\frac{1-p^k}{1-p} E[R_1]$ and the average expected rank at the next node is $(1 - p^k) E[R_1]$.
Proof. 
We first prove the amount of resources consumed by induction. When $k = 1$, the resources consumed are $\sum_{r_1} r_1 \Pr(r_1) = E[R_1] = \frac{1-p^1}{1-p} E[R_1]$. Assume that the resources consumed by $k$ phases are $\sum_{i=1}^{k} \sum_{r_i} r_i \Pr(r_i) = \frac{1-p^k}{1-p} E[R_1]$. When there are $k+1$ phases, the resources consumed are
$$\sum_{i=1}^{k+1} \sum_{r_i} r_i \Pr(r_i) = \frac{1-p^k}{1-p} E[R_1] + E[\tilde{R}_{k+1}] \stackrel{(a)}{=} \frac{1-p^k}{1-p} E[R_1] + p^k E[R_1] = \frac{1-p^{k+1}}{1-p} E[R_1],$$
where (a) follows from Lemma 2. This part is proved by induction.
Next, we prove the expected rank (at the next node) by induction. Note that for any $r \in \{0, 1, 2, \ldots\}$, we have $E(r, r) = (1-p) r$. When $k = 1$, the expected rank is $\sum_{r_1} \Pr(r_1) E(r_1, r_1) = (1 - p^1) E[R_1]$. Assume that the expected rank after $k$ phases is $\sum_{i=1}^{k} \sum_{r_i} \Pr(r_i) E(r_i, r_i) = (1 - p^k) E[R_1]$. When there are $k+1$ phases, the expected rank is
$$\sum_{i=1}^{k+1} \sum_{r_i} \Pr(r_i) E(r_i, r_i) = (1 - p^k) E[R_1] + \sum_{r_{k+1}} \Pr(r_{k+1}) E(r_{k+1}, r_{k+1}) = (1 - p^k) E[R_1] + (1-p) E[\tilde{R}_{k+1}] \stackrel{(b)}{=} (1 - p^k) E[R_1] + (1-p) p^k E[R_1] = (1 - p^{k+1}) E[R_1],$$
where (b) also follows from Lemma 2. The proof is then completed. □
An intuitive interpretation of the above statements is as follows. Every systematic phase leaves a $p$ portion of the innovative rank on average, which is the average rank lost during the transmission. Therefore, after $(k-1)$ systematic phases, the expected innovative rank remaining is the $p^{k-1}$ portion of $E[R_1]$. The resources consumed by $k$ systematic phases are the sum of the expected innovative ranks in all the phases, which is $(1 + p + p^2 + \cdots + p^{k-1}) E[R_1] = \frac{1-p^k}{1-p} E[R_1]$. The average expected rank at the next node is the part that is no longer innovative, i.e., $E[R_1] - p^k E[R_1]$ by Lemma 2.
When $k \to \infty$, the resources consumed approach $\frac{E[R_1]}{1-p}$. This means that if $\frac{E[R_1]}{1-p} \le t_{\text{avg}}$, then there are always enough resources to perform more systematic phases. Otherwise, we can only perform $S$ systematic phases where, according to Theorem 3, $S$ is the largest integer satisfying $\frac{1-p^S}{1-p} E[R_1] \le t_{\text{avg}}$. In other words, we have
$$S = \left\lfloor \log_p \left( 1 - \frac{(1-p)\, t_{\text{avg}}}{E[R_1]} \right) \right\rfloor.$$
Note that if there are remaining resources, we need to apply TAP for one more phase in our setting.
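As a worked sketch under hypothetical numbers, the largest affordable number of systematic phases and the leftover budget for the final TAP phase can be computed as follows.

```python
from math import log, floor, inf

def max_systematic_phases(e_r1, t_avg, p):
    """Largest number of systematic phases affordable with budget t_avg,
    following the closed form above (e_r1 stands for E[R_1])."""
    if e_r1 <= (1 - p) * t_avg:
        return inf                      # always enough resources (first case of Corollary 1)
    return floor(log(1 - (1 - p) * t_avg / e_r1, p))

# Hypothetical example: E[R_1] = 3.8, t_avg = 4, p = 0.1  ->  S = 1 systematic phase
e_r1, t_avg, p = 3.8, 4.0, 0.1
S = max_systematic_phases(e_r1, t_avg, p)
used = e_r1 / (1 - p) if S == inf else (1 - p ** S) / (1 - p) * e_r1   # Theorem 3
leftover = t_avg - used                  # budget for the final fire-and-forget phase
```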
Corollary 1.
Consider SAP with the highest possible number of systematic phases. Then, $E[\bar{R}] = \min\{(1-p)\, t_{\text{avg}}, E[R_1]\}$.
Proof. 
If $E[R_1] \le (1-p)\, t_{\text{avg}}$, then there is no upper limit on the number of systematic phases. By Theorem 3, we know that $E[\bar{R}] = \lim_{k \to \infty} (1 - p^k) E[R_1] = E[R_1]$.
Now, we consider $E[R_1] > (1-p)\, t_{\text{avg}}$. Let $S = \bigl\lfloor \log_p \bigl( 1 - \frac{(1-p)\, t_{\text{avg}}}{E[R_1]} \bigr) \bigr\rfloor$. According to Theorem 3, the expected rank after $S$ systematic phases is $(1 - p^S) E[R_1]$. The remaining resource after $S$ systematic phases is $t'_{\text{avg}} := t_{\text{avg}} - \frac{1-p^S}{1-p} E[R_1]$. In the $(S+1)$-th phase, there are insufficient resources to send a basis of the innovative rank space, so by Lemma 1 we gain $(1-p)\, t'_{\text{avg}}$ expected rank in this phase. Combining this with the expected rank of the systematic phases, we have
$$E[\bar{R}] = (1 - p^S) E[R_1] + (1-p)\, t'_{\text{avg}} = (1 - p^S) E[R_1] + (1-p) \left( t_{\text{avg}} - \frac{1-p^S}{1-p} E[R_1] \right) = (1-p)\, t_{\text{avg}}.$$
The proof is then completed by combining the two cases above. □
This corollary shows that when there are enough resources, $\infty$-phase SAP is “lossless”. This is not surprising, as $\infty$-phase SAP can be regarded as a complete hop-by-hop retransmission scheme. More specifically, if the packet loss rate $p$ is the same at all the links and $t_{\text{avg}} = M$ at all the non-destination nodes, then $(1-p)\, t_{\text{avg}}$ is the (unnormalized) capacity of the network, which can be achieved by SAP.
Although we speak of infinitely many phases, the constraint in SAP states that every batch sends $t_{\text{avg}}$ packets on average; that is, it is unlikely that a very large number of systematic phases occur before a batch has no more innovative rank. The following theorem provides an upper bound on this expected stopping time.
Theorem 4.
Let $U$ be the random variable for the smallest number of systematic phases required such that a batch reaches $0$ innovative rank. For $k \ge 1$,
$$\Pr(U = k) = \sum_{r_1} \Pr(r_1) \left[ (1 - p^k)^{r_1} - (1 - p^{k-1})^{r_1} \right].$$
Also, $\Pr(U = 0) = \Pr(R_1 = 0)$. Further, $E[U] < \frac{E[R_1]}{1-p}$.
Proof. 
See Appendix B. □
We will see from the numerical evaluations in Section 4 that two-phase, or at most three-phase, is already good enough in practice.

3.4. Protocol Design

To evaluate the decoding time, we need a workable protocol that supports multi-phase recoding. The purpose of the following discussion is to give a preliminary idea on a workable protocol and also to construct such a protocol for our numerical evaluation. Note that advanced protocol design may affect the performance and decoding time in practice. We leave such an optimal design as an open problem.
In a fire-and-forget recoding approach, i.e., one-phase recoding, the minimal protocol is very simple. Every BNC packet consists of a batch ID (BID), a coefficient vector, and a payload [72,85,86]. Besides acting as a batch identifier, the BID is also used as a seed for the pseudorandom generator so that the encoder and the decoder can agree on the same random sequence. Depending on the BNC design, this random sequence may be used to describe how a batch is generated. The coefficient vector records the network coding operations; that is, it describes how a recoded packet is mixed by random linear combination. Lastly, the payload is the data carried by the packet. To travel through the existing network infrastructure, the BNC packet is encapsulated by other network protocols for transmission [85,86,106]. In more complicated designs, there are control packets and control mechanisms besides the data packets [124], but they are beyond the scope of this paper.
We can see that no feedback packet is required in the minimal protocol. To maximize the information carried by each recoded packet of a batch, recoding is performed once no more packets of this batch will be received. In order to identify that the previous node has transmitted all recoded packets of a batch without using feedback, the minimal protocol assumes that the batches are transmitted sequentially and that there are no out-of-order packets across batches, so that when a node receives a packet from a new batch (with a new BID), it can start recoding the previous batch.
In case there are out-of-order packets across batches, a simple way to handle them is to assume an ascending order of BIDs so that we can detect and discard the out-of-order packets. This way, the protocol still does not rely on feedback. However, to support multi-phase recoding, we need to know the reception state of the batches at the next node; thus, we must introduce feedback packets into the protocol.
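Before turning to the feedback-based modification, the feedback-free timing rule described above can be sketched as follows; the class and method names are hypothetical.

```python
class FeedbackFreeNode:
    """Timing of recoding under the minimal protocol: batches arrive in
    ascending BID order, and a packet carrying an older BID is treated as
    out of order and discarded.  Names and structure are hypothetical."""

    def __init__(self):
        self.current_bid = None
        self.buffer = []                      # received packets of the current batch

    def on_packet(self, pkt):
        if self.current_bid is None or pkt.bid > self.current_bid:
            if self.current_bid is not None:  # a new batch appears: recode the old one
                self.start_recoding(self.current_bid, self.buffer)
            self.current_bid, self.buffer = pkt.bid, [pkt]
        elif pkt.bid == self.current_bid:
            self.buffer.append(pkt)
        # else: pkt.bid < self.current_bid, an out-of-order packet; discard it

    def start_recoding(self, bid, packets):
        pass                                  # recode and forward the batch (omitted)
```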
We modify the minimal protocol as follows. To the BNC packets (data packets), we also attach a list of BID–phase pairs once the node has finished sending all recoded packets of the corresponding phases of these batches. The next node can then start recoding the corresponding phase and send a feedback packet back to the current node. The number of recoded packets depends on the innovative rank of the batch, which is determined by the multi-phase recoding optimization problems. A BID–phase pair is removed from the list when the next node acknowledges the reception of this information.
The feedback packet consists of a list of BID–phase–rank triples. When a node receives a BID–phase pair, it acknowledges the pair together with the rank of the batch with this BID at this node, i.e., it sends a BID–phase–rank triple. A triple is removed from the list when the corresponding BID–phase pair no longer appears in the incoming BNC packets, i.e., when the previous node has received the acknowledgment. The feedback packet is sent from a node repeatedly, like a heartbeat message, to combat feedback packet loss.
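The bookkeeping described in the last two paragraphs can be sketched as follows; the data structures and names are our own illustrative assumptions, not a normative part of the protocol, and the per-packet check for removing triples is a simplification.

```python
class MultiPhaseNodeState:
    """Sketch of the feedback bookkeeping: the sender advertises finished
    BID-phase pairs until they are acknowledged, and the receiver answers
    with BID-phase-rank triples until the pair stops appearing."""

    def __init__(self):
        self.pending_pairs = set()    # (bid, phase): finished but not yet acknowledged
        self.feedback_triples = {}    # (bid, phase) -> rank to report to the previous node

    # ----- as a sender -----
    def phase_finished(self, bid, phase):
        self.pending_pairs.add((bid, phase))          # start attaching this pair

    def pairs_to_attach(self):
        return sorted(self.pending_pairs)             # attached to every outgoing data packet

    def on_feedback(self, triples):
        for bid, phase, _rank in triples:
            self.pending_pairs.discard((bid, phase))  # acknowledged: stop attaching it

    # ----- as a receiver -----
    def on_data_packet(self, attached_pairs, rank_of):
        # rank_of(bid) returns the current rank of the batch at this node.
        for bid, phase in attached_pairs:
            self.feedback_triples[(bid, phase)] = rank_of(bid)
        # a pair that no longer appears means the previous node got the acknowledgment
        for key in list(self.feedback_triples):
            if key not in attached_pairs:
                del self.feedback_triples[key]

    def triples_to_send(self):
        return [(b, ph, r) for (b, ph), r in self.feedback_triples.items()]
```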
For a bidirectional transmission scenario, this list of triples can be sent together with the BNC packets. Alternatively, we can separate these lists from the data packets and send them as control messages. This way, we have better flexibility in the length of the payload, as the length of a BNC packet is usually limited by the protocols that encapsulate it, e.g., 65,507 bytes for UDP over IPv4. To reduce the computational overhead due to the reassembly of IP fragments at every node for recoding, the length may be further limited to the size of a data-link frame.
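For reference, the 65,507-byte figure follows from the header sizes: the maximum IPv4 datagram length is 65,535 bytes, and subtracting the minimum 20-byte IPv4 header and the 8-byte UDP header gives 65,535 − 20 − 8 = 65,507 bytes of UDP payload.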
With these modifications, the timing for recoding no longer depends on the sequential order of BIDs. To demonstrate how the protocol works, we consider a two-hop network (a source node, an intermediate node, and a destination node) with the following settings. We divide the timeline into synchronized timeslots. Each node can send one BNC packet to the next node in each timeslot. At the same time, each node can send one feedback packet to the previous node. A transmission is received at the next timeslot. Assume a node can react to the received BID–phase pairs and BID–phase–rank triples immediately. Also, assume that a feedback packet is sent at every timeslot whenever an acknowledgment is needed. Each BNC packet and each feedback packet may be lost independently during transmission. The batch size and the average number of recoded packets over all phases at each node are both four, i.e., M = t_avg = 4. We consider the flow of the packets until the destination node receives a total rank of 10, so that the flow is long enough to reach the second phase of some batches while short enough to be illustrated on a single page.
The computational time for recoding is omitted here, as it only involves a small number of operations, which can be further accelerated by multi-threading, intrinsic instructions for single instruction/multiple data (SIMD), or hardware implementations. The solution of the adaptive recoding problem can be reused if the configuration is not changed. In practice, the bottleneck is the transmission rate of the channel.
Figure 4 illustrates an example flow of the protocol without adopting multi-phase systematic recoding; here, we consider two-phase recoding. At the beginning, the BNC packets sent by the source node do not include the list of BID–phase pairs. After finishing phase 1 of the first batch, the BID–phase pair (1, 1) is attached to the next BNC packet. The intermediate node gives feedback about the rank of the batch it received. When the source node receives the feedback for BID 1 phase 1, it no longer attaches this information to the upcoming BNC packets. This way, the intermediate node can use the BID–phase pairs attached to the received BNC packets to identify whether the source node has received the feedback message or not. As the feedback packet can also be lost, the intermediate node keeps sending feedback messages to the source node. A similar process is conducted between the intermediate node and the destination node. We can see in the figure that the intermediate node did not start phase 2 of the second batch (BID 2). This is because the innovative rank of this batch at the intermediate node after phase 1 is 0, so the destination node cannot gain any rank by receiving more recoded packets of this batch. Right after that, the intermediate node has nothing to transmit; thus, an idle timeslot is induced.
We illustrate another example in Figure 5. This time we adopt two-phase SAP: the source node sends four packets per batch in phase 1. As the average number of recoded packets is t_avg = 4, the source node sends nothing in phase 2. Therefore, we can see that once the source node receives the feedback of BID 1 phase 1 from the intermediate node, it sends the BID–phase pair (1, 2) directly, which indicates the end of BID 1 phase 2. Notice that if we configure a larger value of t_avg, the source node will send packets in phase 2 if some of the phase 1 packets are lost. These extra packets will be regarded as phase 1 systematic packets sent from the intermediate node. The intermediate node sends the BID–phase pair (1, 1) after knowing that the source node has finished sending phase 2 of the first batch. In the figure, one phase 1 packet of the first batch forwarded by the intermediate node is lost. After receiving the feedback from the destination node, the intermediate node sends a phase 2 packet of this batch, and the systematic packets for BID 3 phase 1 are deferred by one timeslot.

4. Numerical Evaluations

In this section, we evaluate the throughput (i.e., the expected rank arriving at the node) and the decoding time of multi-phase recoding. We consider a multi-hop line network where all the links share the same packet loss rate p = 10 % , 20 % , or 30 % . The feedback packets in the evaluation of decoding time have the same packet loss rate as the BNC packets. We choose a batch size M = 4 or 8 in the evaluation, with t avg = M at each node.
We focus on comparing multi-phase and single-phase recoding, as it is known that in such lossy networks, BNC outperforms traditional end-to-end transmission schemes. For end-to-end transmission schemes, a packet can reach the destination only if it is not lost at any of the links. Therefore, we can abstract the network as a single-hop one with loss rate 1 − (1 − p)^n, where n is the number of hops in the original topology. Then, we can approximate the (normalized) throughput by (1 − p)^n and the decoding time by F/(1 − p)^n. As a numerical example, for n = 9, p = 10%, and F = 1000, the throughput is about 0.39 and the decoding time is about 2581, which are much worse than those achieved by single-phase recoding shown in the remainder of this section.
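As a quick sanity check of these figures, the approximation can be evaluated directly:

```python
n, p, F = 9, 0.10, 1000
throughput = (1 - p) ** n        # about 0.39
decoding_time = F / throughput   # about 2581 timeslots
print(round(throughput, 2), round(decoding_time))
```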

4.1. Throughput

We first show the normalized throughput of different schemes, defined as the average total rank received at the node divided by M. This scales the throughput to the range [ 0 , 1 ] . Recall that the throughput in BNC literature only measures the amount of information carried by the packets and does not reflect the decoding time.
Figure 6 and Figure 7 show the cases when M = 4 and 8, respectively. The blue plots “RAP3” and “SAP3” correspond to three-phase RAP and three-phase SAP, respectively. The red plots “RAP2”, “SAP2”, and “heur.” correspond to two-phase RAP, two-phase SAP, and the heuristic Algorithm 2, respectively. The brown plot “TAP” is the fire-and-forget adaptive recoding in [64,73]. The cyan plot “base” is the (fire-and-forget) baseline recoding where every batch sends M recoded packets regardless of the rank. Finally, the black dashed plot “cap.” is the capacity of the network, which is achievable by hop-by-hop retransmission until successful reception (which ignores the resource constraint), i.e., it is the theoretical upper bound on the throughput of the network.
From the plots, we can see that when the loss rate p is small, SAP and RAP achieve nearly the same throughput. The difference between RAP and SAP becomes visible when p is larger. The reason is that for SAP, too many resources remain for the last phase; that is, we send too many “useless” packets in the last phase, so it is beneficial to allocate more resources to the earlier phases. Note that it is not common to have a large independent packet loss rate, because we can change the modulation to reduce the loss rate. On the other hand, we observe that the throughput of two-phase RAP and that given by Algorithm 2 are nearly the same, which indicates the accuracy of our heuristic.
To show the throughput gain, we take M = 4 and p = 10% as an example. At the 9th hop, the throughput gains of two-phase RAP (and SAP) over fire-and-forget adaptive recoding and baseline recoding are 21.67% and 32.03%, respectively. The throughput of three-phase RAP (and SAP) is very close to the capacity. This also suggests that, instead of sending more batches, the inner code can help combat the loss efficiently when feedback is available.

4.2. Decoding Time

Next, we evaluate the decoding time. We apply the protocol in Section 3.4 and make the same assumptions used for Figure 4 and Figure 5. Suppose a node can recover the data when the total rank reaches F = 1000. The number of timeslots required for decoding is plotted in Figure 8 and Figure 9 for M = 4 and 8, respectively. We use the same legend as that described in the last subsection, except that there is no curve for the capacity.
We can see that two-phase RAP (RAP2) and the heuristic (heur.) give almost the same decoding time, which again confirms the accuracy of our heuristic. The gain of SAP in decoding time generally becomes smaller when the loss rate p is larger. With a large p, SAP might give a longer decoding time than that without using systematic recoding. The reason is that too many resources have been allocated to the last phase of SAP, which leads to a poorly optimized strategy. When there are more phases, fewer resources are left to the last phase; thus, we can still observe an improvement in decoding time for three-phase SAP. For RAP, the difference in decoding time between two and three phases is smaller when M becomes larger. When M is larger, more idle timeslots are induced by deferring recoding until the previous node notifies that all the packets of the batch for the phase have been transmitted. We cannot conclude whether using more phases is better or not when M becomes larger. However, this can be partially compensated by SAP, and we can see that three-phase SAP still works well when p = 30%. This result suggests that instead of focusing solely on optimizing the throughput, we should also consider the decoding time and the behavior of the protocol in future works on BNC.
We want to point out that there could be many possible solutions for RAP that achieve almost the same throughput. Two different solutions that have almost no difference in throughput may still differ in decoding time. In this case, we suggest choosing the solution that allocates more resources to the early phases so that these phases have more chances to send innovative packets. The optimal strategy is left as a future research direction.

5. Conclusions

The previous literature on BNC did not consider multi-phase transmission. When feedback is available, the fire-and-forget strategy is not the best choice. How to make use of the feedback for BNC is still an open problem, as there are many different components involved in the deployment of BNC in real systems.
In this paper, we investigated one possible use of feedback for adaptive recoding, together with a preliminary protocol design that can support multi-phase recoding. The information carried by the feedback is the rank of the batch at the node, which is simply a small integer. From our numerical evaluations, we can see that multi-phase recoding has a significant throughput gain when compared with the traditional fire-and-forget scheme. By simulating the decoding time, we found that the conventional throughput optimization for BNC may not be able to reflect the delay induced by the protocol and the recoding scheme. This suggests that in future works, the decoding time might be a more important objective to be optimized than the throughput, and it is likely to be a joint optimization problem with the design of the protocol. We also highlight that, instead of making more transmissions, one direction to enhance the efficiency of BNC is to craft a well-designed inner code and feedback mechanism, as proposed in this paper.
As this work is a preliminary study on multi-phase recoding, further investigations are necessary for different network setups and conditions. For example, we considered independent packet loss to simplify the formulation. Although it is common to make this assumption in the literature, a more accurate channel model should help in optimizing the gain of recoding, especially when there is dependency between the recoding phases. The formulation of innovative rank distributions for other channel models is a major component of multi-phase recoding. Potential approaches include distributionally robust optimization for Markov chains [125] and advanced statistical models, e.g., [126]. The evaluation of the performance gap between accurate and inaccurate channel models is another research direction. We leave all these optimal strategies and the model development as open problems.
We remark that the protocol we proposed is not yet standardized. Inconsistencies among implementations in how to use the feedback may lead to unpredictable behaviors. However, BNC deployments usually serve specialized purposes in self-contained networks over which the owners have full control, e.g., smart lamppost networks. This is because network coding requires recoding at the intermediate nodes, and it is not feasible to ask the public to replace all existing network infrastructure. This way, the consistency of the protocol implementation can be maintained. Nevertheless, a standard should be proposed in the future to avoid inconsistency when combining networks with different BNC implementations. On the other hand, optimization of the design is needed. For demonstration purposes, we assumed that a feedback packet is sent at every timeslot. This is not a good solution because every feedback packet consumes network resources; delaying the feedback, however, would delay the trigger of recoding. The investigation of advanced feedback mechanisms and their impact is left as future work.

6. Patents

The multi-phase recoding algorithms can be found in the U.S. patent application 17/941,921 filed on 9 September 2022 [127].

Author Contributions

Conceptualization, H.H.F.Y. and M.T.; methodology, H.H.F.Y. and M.T.; software, H.H.F.Y.; validation, H.H.F.Y., M.T. and H.W.L.M.; formal analysis, H.H.F.Y.; investigation, H.H.F.Y.; writing—original draft preparation, H.H.F.Y. and H.W.L.M.; writing—review and editing, H.H.F.Y. and H.W.L.M.; supervision, H.H.F.Y. and M.T.; project administration, H.H.F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

Part of the work of Hoover H. F. Yin was conducted when he was with n-hop technologies Limited and the Institute of Network Coding, The Chinese University of Hong Kong.

Conflicts of Interest

Author Mehrdad Tahernia is employed by the company n-hop technologies Limited. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BNC: Batched network coding
RLNC: Random linear network coding
LDPC: Low-density parity-check code
BATS code: Batched sparse code
TCP: Transmission control protocol
ATCP: Ad-hoc TCP
TAP: Traditional adaptive recoding problem
GAP: Multi-phase general adaptive recoding problem
RAP: Multi-phase relaxed adaptive recoding problem
SAP: Multi-phase systematic adaptive recoding problem
BID: Batch ID
UDP: User datagram protocol
IPv4: Internet protocol version 4
SIMD: Single instruction/multiple data

Appendix A. Proof of Theorem 2

Let {t_{i,r}}_{r=0}^{M} be the solution of the phase-i RAP and {t'_{i,r}}_{r=0}^{M} be that after we increase the resources for phase i. According to Algorithm 1, we have t'_{i,r} ≥ t_{i,r} for all r.
Let X_r ∼ Bin(t_{i,r}, p) and Y_r ∼ Bin(t'_{i,r}, p). We have that Y_r first-order stochastically dominates X_r ([128], Example 4.24), denoted by Y_r ⪰ X_r; that is, Pr(Y_r ≥ s) ≥ Pr(X_r ≥ s) for all s. For the versions truncated at r, X̄_r ∼ Bin(t_{i,r}, p, r) and Ȳ_r ∼ Bin(t'_{i,r}, p, r), their cumulative distributions are the same as those of X_r and Y_r, respectively, from 0 to r − 1, and at r we have Pr(Ȳ_r ≤ r) = Pr(X̄_r ≤ r) = 1. Therefore, Ȳ_r ⪰ X̄_r.
Let X and Y be the random variables of the phase-(i + 1) innovative rank when the solutions of the phase-i RAP are {t_{i,r}}_{r=0}^{M} and {t'_{i,r}}_{r=0}^{M}, respectively. We have X = Σ_{r_i} Pr(r_i)(r_i − X̄_{r_i}) ⪰ Σ_{r_i} Pr(r_i)(r_i − Ȳ_{r_i}) = Y.
Suppose the resources allocated to phase (i + 1) are fixed. Let {t_{i+1,r}}_{r=0}^{M} and {t'_{i+1,r}}_{r=0}^{M} be the solutions of the phase-(i + 1) RAP when the phase-(i + 1) innovative rank is X and Y, respectively. According to ([73], Theorem 8), we have t_{i+1,a} > t_{i+1,b} and t'_{i+1,a} > t'_{i+1,b} for all a > b. Hence, by ([73], Theorem 2), we have E(a, t_{i+1,a}) ≥ E(b, t_{i+1,a}) ≥ E(b, t_{i+1,b}). Similarly, we have E(a, t'_{i+1,a}) ≥ E(b, t'_{i+1,b}).
Due to optimality, we know that
\[
\sum_{r} \Pr(X = r)\, E(r, t_{i+1,r}) \ge \sum_{r} \Pr(X = r)\, E(r, t'_{i+1,r}).
\]
On the other hand, we have
\[
\sum_{r} \Pr(X = r)\, E(r, t'_{i+1,r}) \ge \sum_{r} \Pr(Y = r)\, E(r, t'_{i+1,r}),
\]
because X ⪰ Y if and only if E[F(X)] ≥ E[F(Y)] for any increasing function F [129]. Therefore, we conclude that, under the same available resources for phase (i + 1), the expected rank achieved at the next node by the phase-(i + 1) RAP when the phase-(i + 1) innovative rank is X is no smaller than that when the phase-(i + 1) innovative rank is Y. Finally, if we reduce the resources for phase (i + 1), then, according to Algorithm 1, the expected rank achieved at the next node decreases, which completes the proof of this theorem.
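The first-order stochastic dominance used above can also be checked numerically; the following sketch (with hypothetical values of t_{i,r}, t'_{i,r}, p, and r) verifies the dominance for both the plain binomials and their truncated versions.

```python
from math import comb

def binom_cdf(n, p, s):
    """Pr(Bin(n, p) <= s)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(min(s, n) + 1))

def truncated_cdf(n, p, s, r):
    """CDF of Bin(n, p) truncated at r: unchanged below r, and equal to 1 from r on."""
    return 1.0 if s >= r else binom_cdf(n, p, s)

t, t_prime, p, r = 3, 6, 0.3, 4   # hypothetical values with t' >= t
# First-order stochastic dominance Y >= X: Pr(Y <= s) <= Pr(X <= s) for all s.
assert all(binom_cdf(t_prime, p, s) <= binom_cdf(t, p, s) for s in range(t_prime + 1))
# The dominance is preserved after truncating both variables at r.
assert all(truncated_cdf(t_prime, p, s, r) <= truncated_cdf(t, p, s, r)
           for s in range(t_prime + 1))
print("dominance holds for this example")
```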

Appendix B. Proof of Theorem 4

Once the batch reaches 0 innovative rank, the innovative rank in all later phases will be 0. Hence, Pr(R̃_k = 0) is the probability that the batch reaches 0 innovative rank after the transmission of some phases before entering the k-th systematic phase. Thus, Pr(U = k) = Pr(R̃_{k+1} = 0) − Pr(R̃_k = 0) for k ≥ 1. A special case is U = 0, which means that the batch has 0 rank at the very beginning, so we have Pr(U = 0) = Pr(R_1 = 0).
For k ≥ 1, by (2), Pr(R̃_{k+1} = 0) equals
\[
\begin{aligned}
&\sum_{r_1, r_2, \ldots, r_k} \Pr(r_1) \left[ \prod_{j=1}^{k-1} \mathrm{Bin}(r_j, p; r_j - r_{j+1}) \right] \mathrm{Bin}(r_k, p; r_k - 0) \\
&= \sum_{r_1, r_2, \ldots, r_{k-1}} \Pr(r_1) \left[ \prod_{j=1}^{k-2} \mathrm{Bin}(r_j, p; r_j - r_{j+1}) \right] \sum_{r_k = 0}^{r_{k-1}} \binom{r_{k-1}}{r_k} (1-p)^{r_{k-1} - r_k} p^{r_k} (1-p)^{r_k} \\
&= \sum_{r_1, r_2, \ldots, r_{k-1}} \Pr(r_1) \left[ \prod_{j=1}^{k-2} \mathrm{Bin}(r_j, p; r_j - r_{j+1}) \right] (1-p)^{r_{k-1}} (1+p)^{r_{k-1}} \\
&= \sum_{r_1, r_2, \ldots, r_{k-2}} \Pr(r_1) \left[ \prod_{j=1}^{k-3} \mathrm{Bin}(r_j, p; r_j - r_{j+1}) \right] \sum_{r_{k-1} = 0}^{r_{k-2}} \binom{r_{k-2}}{r_{k-1}} (1-p)^{r_{k-2}} p^{r_{k-1}} \left( \frac{1-p^2}{1-p} \right)^{r_{k-1}} \\
&= \sum_{r_1, r_2, \ldots, r_{k-2}} \Pr(r_1) \left[ \prod_{j=1}^{k-3} \mathrm{Bin}(r_j, p; r_j - r_{j+1}) \right] (1-p)^{r_{k-2}} \left( \frac{1-p^3}{1-p} \right)^{r_{k-2}} \\
&= \cdots = \sum_{r_1} \Pr(r_1) (1-p)^{r_1} \left( \frac{1-p^k}{1-p} \right)^{r_1} = \sum_{r_1} \Pr(r_1) (1 - p^k)^{r_1}.
\end{aligned}
\]
Hence, for k 1 , we have
Pr ( U = k ) = r 1 Pr ( r 1 ) ( 1 p k ) r 1 ( 1 p k 1 ) r 1 .
To complete the proof, notice that
\[
\begin{aligned}
E[U] &= \sum_{r_1} \Pr(r_1) \sum_{k=1}^{\infty} k \left[ (1 - p^k)^{r_1} - (1 - p^{k-1})^{r_1} \right] \\
&= \sum_{r_1} \Pr(r_1) \sum_{k=1}^{\infty} k \left( (1 - p^k) - (1 - p^{k-1}) \right) \sum_{i=0}^{r_1 - 1} (1 - p^k)^i (1 - p^{k-1})^{r_1 - 1 - i} \\
&< \sum_{r_1} \Pr(r_1) \sum_{k=1}^{\infty} k\, r_1 (p^{k-1} - p^k) (1 - p^k)^{r_1 - 1} \\
&< \sum_{r_1} \Pr(r_1)\, r_1 (1 - p) \sum_{k=1}^{\infty} k\, p^{k-1} = \frac{E[R_1]}{1 - p}.
\end{aligned}
\]
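The telescoping computation above can be verified numerically; the following sketch (with a hypothetical rank distribution and loss rate, not values from the paper) iterates the per-phase binomial transition and compares Pr(R̃_{k+1} = 0) with the closed form Σ_{r_1} Pr(r_1)(1 − p^k)^{r_1}.

```python
from math import comb

p = 0.25
rank_dist = {2: 0.3, 3: 0.3, 4: 0.4}     # hypothetical Pr(R_1 = r)

def one_phase(dist):
    """One systematic phase: a batch of innovative rank r sends r packets and
    the remaining innovative rank is the number of lost ones, i.e., Bin(r, p)."""
    out = {}
    for r, q in dist.items():
        for lost in range(r + 1):
            prob = comb(r, lost) * p**lost * (1 - p)**(r - lost)
            out[lost] = out.get(lost, 0.0) + q * prob
    return out

for k in range(1, 6):
    dist = dict(rank_dist)
    for _ in range(k):
        dist = one_phase(dist)
    closed_form = sum(q * (1 - p**k)**r for r, q in rank_dist.items())
    print(k, round(dist.get(0, 0.0), 6), round(closed_form, 6))   # the two columns agree
```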

References

  1. Yin, H.H.F.; Tahernia, M. Multi-Phase Recoding for Batched Network Coding. In Proceedings of the 2022 IEEE Information Theory Workshop (ITW), Mumbai, India, 1–9 November 2022; pp. 25–30. [Google Scholar]
  2. Kavre, M.; Gadekar, A.; Gadhade, Y. Internet of Things (IoT): A Survey. In Proceedings of the 2019 IEEE Pune Section International Conference (PuneCon), Pune, India, 18–20 December 2019; pp. 1–6. [Google Scholar]
  3. Chettri, L.; Bera, R. A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems. IEEE Internet Things J. 2020, 7, 16–32. [Google Scholar] [CrossRef]
  4. Jino Ramson, S.R.; Moni, D.J. Applications of wireless sensor networks—A survey. In Proceedings of the 2017 International Conference on Innovations in Electrical, Electronics, Instrumentation and Media Technology (ICEEIMT), Coimbatore, India, 3–4 February 2017; pp. 325–329. [Google Scholar]
  5. Bariah, L.; Shehada, D.; Salahat, E.; Yeun, C.Y. Recent Advances in VANET Security: A Survey. In Proceedings of the 2015 IEEE 82nd Vehicular Technology Conference (VTC Fall), Boston, MA, USA, 6–9 September 2015. [Google Scholar]
  6. Mahi, M.J.N.; Chaki, S.; Ahmed, S.; Biswas, M.; Kaiser, M.S.; Islam, M.S.; Sookhak, M.; Barros, A.; Whaiduzzaman, M. A Review on VANET Research: Perspective of Recent Emerging Technologies. IEEE Access 2022, 10, 65760–65783. [Google Scholar] [CrossRef]
  7. Song, Y.S.; Lee, S.K.; Min, K.W. Analysis of Smart Street Lighting Mesh Network Using I2I Communication Technology. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 21–23 October 2020; pp. 981–983. [Google Scholar]
  8. BATS: Network Coding Technology Enabling Multi-Functional Smart Lampposts. Available online: https://inno.emsd.gov.hk/en/it-solutions/index_id_236.html (accessed on 1 October 2024).
  9. Tang, X.; Wang, Z.; Xu, Z.; Ghassemlooy, Z. Multihop Free-Space Optical Communications Over Turbulence Channels with Pointing Errors using Heterodyne Detection. J. Light. Technol. 2014, 32, 2597–2604. [Google Scholar] [CrossRef]
  10. Mochizuki, K.; Obata, K.; Mizutani, K.; Harada, H. Development and field experiment of wide area Wi-SUN system based on IEEE 802.15.4g. In Proceedings of the 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT), Reston, VA, USA, 12–14 December 2016; pp. 76–81. [Google Scholar]
  11. Bhardwaj, P.; Zafaruddin, S.M. On the Performance of Multihop THz Wireless System Over Mixed Channel Fading with Shadowing and Antenna Misalignment. IEEE Trans. Commun. 2022, 70, 7748–7763. [Google Scholar] [CrossRef]
  12. Liu, J.; Singh, S. ATCP: TCP for Mobile Ad Hoc Networks. IEEE J. Sel. Areas Commun. (JSAC) 2001, 19, 1300–1315. [Google Scholar] [CrossRef]
  13. Xu, Y.; Munson, M.C.; Simu, S. Method and System for Aggregate Bandwidth Control. U.S. Patent 9,667,545; filed 4 September 2007, and issued 30 May 2017.
  14. Ho, T.; Koetter, R.; Médard, M.; Karger, D.R.; Effros, M. The Benefits of Coding over Routing in a Randomized Setting. In Proceedings of the 2003 IEEE International Symposium on Information Theory (ISIT), Yokohama, Japan, 29 June–4 July 2003; p. 442. [Google Scholar]
  15. Ahlswede, R.; Cai, N.; Li, S.Y.R.; Yeung, R.W. Network Information Flow. IEEE Trans. Inf. Theory 2000, 46, 1204–1216. [Google Scholar] [CrossRef]
  16. Li, S.Y.R.; Yeung, R.W.; Cai, N. Linear Network Coding. IEEE Trans. Inf. Theory 2003, 49, 371–381. [Google Scholar] [CrossRef]
  17. Jaggi, S.; Chou, P.A.; Jain, K. Low Complexity Optimal Algebraic Multicast Codes. In Proceedings of the 2003 IEEE International Symposium on Information Theory (ISIT), Yokohama, Japan, 29 June–4 July 2003; p. 368. [Google Scholar]
  18. Sanders, P.; Egner, S.; Tolhuizen, L. Polynomial Time Algorithms for Network Information Flow. In Proceedings of the 15th Annual ACM Symposium on Parallel Algorithms and Architectures, San Diego, CA, USA, 7–9 June 2003; pp. 286–294. [Google Scholar]
  19. Lun, D.S.; Médard, M.; Koetter, R.; Effros, M. On coding for reliable communication over packet networks. Phys. Commun. 2008, 1, 3–20. [Google Scholar] [CrossRef]
  20. Wu, Y. A Trellis Connectivity Analysis of Random Linear Network Coding with Buffering. In Proceedings of the 2006 IEEE International Symposium on Information Theory (ISIT), Seattle, WA, USA, 9–14 July 2006; pp. 768–772. [Google Scholar]
  21. Dana, A.F.; Gowaikar, R.; Palanki, R.; Hassibi, B.; Effros, M. Capacity of Wireless Erasure Networks. IEEE Trans. Inf. Theory 2006, 52, 789–804. [Google Scholar] [CrossRef]
  22. Na, L.; Guanghui, S.; Yanbo, Y.; Jiawei, Z.; Teng, L. Review on the Research Progress of Machine Learning in Network Coding. In Proceedings of the 2024 International Conference on Networking and Network Applications (NaNA), Yinchuan City, China, 9–12 August 2024; pp. 510–515. [Google Scholar]
  23. Li, T.; Chen, W.; Tang, Y.; Yan, H. A Homomorphic Network Coding Signature Scheme for Multiple Sources and Its Application in IoT. Secur. Commun. Netw. 2018, 2018, 9641273. [Google Scholar] [CrossRef]
  24. Brahimi, M.A.; Merazka, F. Data Confidentiality-Preserving Schemes for Random Linear Network Coding-Capable Networks. J. Inf. Secur. Appl. 2022, 66, 103136. [Google Scholar] [CrossRef]
  25. Chao, C.C.; Chou, C.C.; Wei, H.Y. Pseudo Random Network Coding Design for IEEE 802.16m Enhanced Multicast and Broadcast Service. In Proceedings of the 2010 IEEE 71st Vehicular Technology Conference (VTC Spring), Taipei, Taiwan, 16–19 May 2010. [Google Scholar]
  26. Shojania, H.; Li, B. Parallelized Progressive Network Coding with Hardware Acceleration. In Proceedings of the 2007 IEEE International Workshop on Quality of Service (IWQoS), Evanston, IL, USA, 21–22 June 2007; pp. 47–55. [Google Scholar]
  27. Goncalves, D.; Signorello, S.; Ramos, F.V.; Medard, M. Random Linear Network Coding on Programmable Switches. In Proceedings of the 2019 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Cambridge, UK, 24–25 September 2019. [Google Scholar]
  28. Xu, X.; Gao, Y.; Guan, Y.L. Applications of Temporal Network Coding in V2X Communications. In Proceedings of the 3rd EAI International Conference on Smart Grid and Innovative Frontiers in Telecommunications (SmartGIFT), Auckland, New Zealand, 23–24 April 2018; pp. 53–63. [Google Scholar]
  29. Gao, Y.; Xu, X.; Zeng, Y.; Guan, Y.L. Optimal Scheduling for Multi-Hop Video Streaming with Network Coding in Vehicular Networks. In Proceedings of the 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), Porto, Portugal, 3–6 June 2018. [Google Scholar]
  30. Papanikos, N.; Papapetrou, E. Deterministic Broadcasting and Random Linear Network Coding in Mobile Ad Hoc Networks. IEEE/ACM Trans. Netw. 2017, 25, 1540–1554. [Google Scholar] [CrossRef]
  31. Vukobratovic, D.; Tassi, A.; Delic, S.; Khirallah, C. Random Linear Network Coding for 5G Mobile Video Delivery. Information 2018, 9, 72. [Google Scholar] [CrossRef]
  32. Sundararajan, J.K.; Shah, D.; Médard, M.; Jakubczak, S.; Mitzenmacher, M.; Barros, J. Network Coding Meets TCP: Theory and Implementation. Proc. IEEE 2011, 99, 490–512. [Google Scholar] [CrossRef]
  33. Enenche, P.; Kim, D.H.; You, D. Network Coding as Enabler for Achieving URLLC Under TCP and UDP Environments: A Survey. IEEE Access 2023, 11, 76647–76674. [Google Scholar] [CrossRef]
  34. Javan, N.T.; Yaghoubi, Z. To Code or Not to Code: When and How to Use Network Coding in Energy Harvesting Wireless Multi-Hop Networks. IEEE Access 2024, 12, 22608–22623. [Google Scholar] [CrossRef]
  35. Dilanchian, R.; Bohlooli, A.; Jamshidi, K. Adjustable Random Linear Network Coding (ARLNC): A Solution for Data Transmission in Dynamic IoT Computational Environments. Digit. Commun. Netw. 2024. [Google Scholar] [CrossRef]
  36. Mak, H.W.L. Improved Remote Sensing Algorithms and Data Assimilation Approaches in Solving Environmental Retrieval Problems; The Hong Kong University of Science and Technology: Hong Kong, China, 2019. [Google Scholar]
  37. Chou, P.A.; Wu, Y.; Jain, K. Practical Network Coding. In Proceedings of the Annual Allerton Conference on Communication Control and Computing (Allerton), Monticello, IL, USA, 1–3 October 2003; Volume 41, pp. 40–49. [Google Scholar]
  38. Wunderlich, S.; Fitzek, F.H.P.; Reisslein, M. Progressive Multicore RLNC Decoding with Online DAG Scheduling. IEEE Access 2019, 7, 161184–161200. [Google Scholar] [CrossRef]
  39. Benamira, E.; Merazka, F. Maximizing Throughput in RLNC-based Multi-Source Multi-Relay with Guaranteed Decoding. Digit. Signal Process. 2021, 117, 103164. [Google Scholar] [CrossRef]
  40. Pandi, S.; Gabriel, F.; Cabrera, J.A.; Wunderlich, S.; Reisslein, M.; Fitzek, F.H.P. PACE: Redundancy Engineering in RLNC for Low-Latency Communication. IEEE Access 2017, 5, 20477–20493. [Google Scholar] [CrossRef]
  41. Wunderlich, S.; Gabriel, F.; Pandi, S.; Fitzek, F.H.P.; Reisslein, M. Caterpillar RLNC (CRLNC): A Practical Finite Sliding Window RLNC Approach. IEEE Access 2017, 5, 20183–20197. [Google Scholar] [CrossRef]
  42. Lucani, D.E.; Pedersen, M.V.; Ruano, D.; Sørensen, C.W.; Fitzek, F.H.P.; Heide, J.; Geil, O.; Nguyen, V.; Reisslein, M. Fulcrum: Flexible Network Coding for Heterogeneous Devices. IEEE Access 2018, 6, 77890–77910. [Google Scholar] [CrossRef]
  43. Nguyen, V.; Tasdemir, E.; Nguyen, G.T.; Lucani, D.E.; Fitzek, F.H.P.; Reisslein, M. DSEP Fulcrum: Dynamic Sparsity and Expansion Packets for Fulcrum Network Coding. IEEE Access 2020, 8, 78293–78314. [Google Scholar] [CrossRef]
  44. Tasdemir, E.; Tömösközi, M.; Cabrera, J.A.; Gabriel, F.; You, D.; Fitzek, F.H.P.; Reisslein, M. SpaRec: Sparse Systematic RLNC Recoding in Multi-Hop Networks. IEEE Access 2021, 9, 168567–168586. [Google Scholar] [CrossRef]
  45. Tasdemir, E.; Nguyen, V.; Nguyen, G.T.; Fitzek, F.H.P.; Reisslein, M. FSW: Fulcrum Sliding Window Coding for Low-Latency Communication. IEEE Access 2022, 10, 54276–54290. [Google Scholar] [CrossRef]
  46. Torres Compta, P.; Fitzek, F.H.P.; Lucani, D.E. Network Coding is the 5G Key Enabling Technology: Effects and Strategies to Manage Heterogeneous Packet Lengths. Trans. Emerg. Telecommun. Technol. 2015, 6, 46–55. [Google Scholar] [CrossRef]
  47. Torres Compta, P.; Fitzek, F.H.P.; Lucani, D.E. On the Effects of Heterogeneous Packet Lengths on Network Coding. In Proceedings of the European Wireless 2014 (EW), Barcelona, Spain, 14–16 May 2014; pp. 385–390. [Google Scholar]
  48. Taghouti, M.; Lucani, D.E.; Cabrera, J.A.; Reisslein, M.; Pedersen, M.V.; Fitzek, F.H.P. Reduction of Padding Overhead for RLNC Media Distribution with Variable Size Packets. IEEE Trans. Broadcast. 2019, 65, 558–576. [Google Scholar] [CrossRef]
  49. Taghouti, M.; Taghouti, M.; Tömösközi, M.; Howeler, M.; Lucani, D.E.; Fitzek, F.H.P.; Bouallegue, A.; Ekler, P. Implementation of Network Coding with Recoding for Unequal-Sized and Header Compressed Traffic. In Proceedings of the 2019 IEEE Wireless Communications and Networking Conference (WCNC), Marrakesh, Morocco, 15–18 April 2019. [Google Scholar]
  50. Schütz, B.; Aschenbruck, N. Packet-Preserving Network Coding Schemes for Padding Overhead Reduction. In Proceedings of the 2019 IEEE 44th Conference on Local Computer Networks (LCN), Osnabrueck, Germany, 14–17 October 2019; pp. 447–454. [Google Scholar]
  51. de Alwis, C.; Kodikara Arachchi, H.; Fernando, A.; Kondoz, A. Towards Minimising the Coefficient Vector Overhead in Random Linear Network Coding. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 5127–5131. [Google Scholar]
  52. Silva, D. Minimum-Overhead Network Coding in the Short Packet Regime. In Proceedings of the 2012 International Symposium on Network Coding (NetCod), Cambridge, MA, USA, 29–30 June 2012; pp. 173–178. [Google Scholar]
  53. Gligoroski, D.; Kralevska, K.; Øverby, H. Minimal Header Overhead for Random Linear Network Coding. In Proceedings of the 2015 IEEE International Conference on Communication Workshop (ICCW), London, UK, 8–12 June 2015; pp. 680–685. [Google Scholar]
  54. Silva, D.; Zeng, W.; Kschischang, F.R. Sparse Network Coding with Overlapping Classes. In Proceedings of the 2009 Workshop on Network Coding, Theory, and Applications (NetCod), Lausanne, Switzerland, 15–16 June 2009; pp. 74–79. [Google Scholar]
  55. Heidarzadeh, A.; Banihashemi, A.H. Overlapped Chunked Network Coding. In Proceedings of the 2010 IEEE Information Theory Workshop (ITW), Cairo, Egypt, 6–8 January 2010; pp. 1–5. [Google Scholar]
  56. Li, Y.; Soljanin, E.; Spasojevic, P. Effects of the Generation Size and Overlap on Throughput and Complexity in Randomized Linear Network Coding. IEEE Trans. Inf. Theory 2011, 57, 1111–1123. [Google Scholar] [CrossRef]
  57. Tang, B.; Yang, S.; Yin, Y.; Ye, B.; Lu, S. Expander Graph based Overlapped Chunked Codes. In Proceedings of the 2012 IEEE International Symposium on Information Theory (ISIT), Cambridge, MA, USA, 1–6 July 2012; pp. 2451–2455. [Google Scholar]
  58. Mahdaviani, K.; Ardakani, M.; Bagheri, H.; Tellambura, C. Gamma Codes: A Low-Overhead Linear-Complexity Network Coding Solution. In Proceedings of the 2012 International Symposium on Network Coding (NetCod), Cambridge, MA, USA, 1–6 July 2012; pp. 125–130. [Google Scholar]
  59. Mahdaviani, K.; Yazdani, R.; Ardakani, M. Linear-Complexity Overhead-Optimized Random Linear Network Codes. arXiv 2013, arXiv:1311.2123. [Google Scholar]
  60. Yang, S.; Tang, B. From LDPC to Chunked Network Codes. In Proceedings of the 2014 IEEE Information Theory Workshop (ITW), Hobart, TAS, Australia, 2–5 November 2014; pp. 406–410. [Google Scholar]
  61. Tang, B.; Yang, S. An LDPC Approach for Chunked Network Codes. IEEE/ACM Trans. Netw. 2018, 26, 605–617. [Google Scholar] [CrossRef]
  62. Yang, S.; Yeung, R.W. Coding for a network coded fountain. In Proceedings of the 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July–5 August 2011; pp. 2647–2651. [Google Scholar]
  63. Yang, S.; Yeung, R.W. Batched Sparse Codes. IEEE Trans. Inf. Theory 2014, 60, 5322–5346. [Google Scholar] [CrossRef]
  64. Tang, B.; Yang, S.; Ye, B.; Guo, S.; Lu, S. Near-Optimal One-Sided Scheduling for Coded Segmented Network Coding. IEEE Trans. Comput. 2016, 65, 929–939. [Google Scholar] [CrossRef]
  65. Qing, J.; Cai, X.; Fan, Y.; Zhu, M.; Yeung, R.W. Dependence Analysis and Structured Construction for Batched Sparse Code. IEEE Trans. Commun. 2024. [Google Scholar] [CrossRef]
  66. Qing, J.; Leong, P.H.W.; Yeung, R.W. Performance Analysis and Optimal Design of BATS Code: A Hardware Perspective. IEEE Trans. Veh. Technol. 2023, 72, 9733–9745. [Google Scholar] [CrossRef]
  67. Yang, S.; Yeung, W.H.; Chao, T.I.; Lee, K.H.; Ho, C.I. Hardware Acceleration for Batched Sparse Codes. U.S. Patent 10,237,782; filed 30 December 2016, and issued 19 March 2019.
  68. Yang, S.; Ho, S.W.; Meng, J.; Yang, E.H. Capacity Analysis of Linear Operator Channels Over Finite Fields. IEEE Trans. Inf. Theory 2014, 60, 4880–4901. [Google Scholar] [CrossRef]
  69. Huang, Q.; Sun, K.; Li, X.; Wu, D.O. Just FUN: A Joint Fountain Coding and Network Coding Approach to Loss-Tolerant Information Spreading. In Proceedings of the 15th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), Philadelphia, PA, USA, 18–21 August 2014; pp. 83–92. [Google Scholar]
  70. Zhou, Z.; Li, C.; Yang, S.; Guang, X. Practical Inner Codes for BATS Codes in Multi-Hop Wireless Networks. IEEE Trans. Veh. Technol. 2019, 68, 2751–2762. [Google Scholar] [CrossRef]
  71. Zhou, Z.; Kang, J.; Zhou, L. Joint BATS Code and Periodic Scheduling in Multihop Wireless Networks. IEEE Access 2020, 8, 29690–29701. [Google Scholar] [CrossRef]
  72. Yang, S.; Yeung, R.W.; Cheung, J.H.F.; Yin, H.H.F. BATS: Network Coding in Action. In Proceedings of the Annual Allerton Conference on Communication Control and Computing (Allerton), Monticello, IL, USA, 30 September–3 October 2014; pp. 1204–1211. [Google Scholar]
  73. Yin, H.H.F.; Tang, B.; Ng, K.H.; Yang, S.; Wang, X.; Zhou, Q. A Unified Adaptive Recoding Framework for Batched Network Coding. IEEE J. Sel. Areas Inf. Theory (JSAIT) 2021, 2, 1150–1164. [Google Scholar] [CrossRef]
  74. Yin, H.H.F.; Yang, S.; Zhou, Q.; Yung, L.M.L.; Ng, K.H. BAR: Blockwise Adaptive Recoding for Batched Network Coding. Entropy 2023, 25, 1054. [Google Scholar] [CrossRef]
  75. Xu, X.; Guan, Y.L.; Zeng, Y. Batched Network Coding with Adaptive Recoding for Multi-Hop Erasure Channels with Memory. IEEE Trans. Commun. 2018, 66, 1042–1052. [Google Scholar] [CrossRef]
  76. Wang, J.; Bozkus, T.; Xie, Y.; Mitra, U. Reliable Adaptive Recoding for Batched Network Coding with Burst-Noise Channels. In Proceedings of the 2023 57th Asilomar Conference on Signals, Systems, and Computers (ACSSC), Pacific Grove, CA, USA, 29 October–1 November 2023; pp. 220–224. [Google Scholar]
  77. Yang, S.; Yeung, R.W. BATS Codes: Theory and Practice; Synthesis Lectures on Communication Networks; Morgan & Claypool Publishers: Denver, CO, USA, 2017. [Google Scholar]
  78. Breidenthal, J.C. The Merits of Multi-Hop Communication in Deep Space. In Proceedings of the 2000 IEEE Aerospace Conference (AeroConf), Big Sky, MT, USA, 25 March 2000; Volume 1, pp. 211–222. [Google Scholar]
  79. Zhao, H.; Dong, G.; Li, H. Simplified BATS Codes for Deep Space Multihop Networks. In Proceedings of the 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 20–22 May 2016; pp. 311–314. [Google Scholar]
  80. Yeung, R.W.; Dong, G.; Zhu, J.; Li, H.; Yang, S.; Chen, C. Space Communication and BATS Codes: A Marriage Made in Heaven. J. Deep Space Explor. 2018, 5, 129–139. [Google Scholar]
  81. Sozer, E.M.; Stojanovic, M.; Proakis, J.G. Underwater Acoustic Networks. IEEE J. Ocean. Eng. 2000, 25, 72–83. [Google Scholar] [CrossRef]
  82. Yang, S.; Ma, J.; Huang, X. Multi-Hop Underwater Acoustic Networks Based on BATS Codes. In Proceedings of the 13th ACM International Conference on Underwater Networks & Systems (WUWNet), Shenzhen, China, 3–5 December 2018; pp. 30:1–30:5. [Google Scholar]
  83. Sprea, N.; Bashir, M.; Truhachev, D.; Srinivas, K.V.; Schlegel, C.; Sacchi, C. BATS Coding for Underwater Acoustic Communication Networks. In Proceedings of the OCEANS 2019—Marseille, Marseille, France, 17–20 June 2019; pp. 1–10. [Google Scholar]
  84. Wang, S.; Zhou, Q.; Yang, S.; Bai, C.; Liu, H. Wireless Communication Strategy with BATS Codes for Butterfly Network. J. Physics Conf. Ser. 2022, 2218, 012003. [Google Scholar] [CrossRef]
  85. Yin, H.H.F.; Yeung, R.W.; Yang, S. A Protocol Design Paradigm for Batched Sparse Codes. Entropy 2020, 22, 790. [Google Scholar] [CrossRef] [PubMed]
  86. Yang, S.; Yeung, R.W. Network Communication Protocol Design from the Perspective of Batched Network Coding. IEEE Commun. Mag. 2022, 60, 89–93. [Google Scholar] [CrossRef]
  87. Tang, L.; Liu, H.; Yang, L.; Ma, Z.; Xiao, M. Analysis for Rank Distribution of BATS Codes under Time-Variant Channels. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC Spring), Antwerp, Belgium, 25–28 May 2020. [Google Scholar]
  88. Wang, S.; Liu, H.; Ma, Z.; Xiao, M. Chunked BATS Codes under Time-invariant and Time-variant Channels. In Proceedings of the 2022 IEEE 95th Vehicular Technology Conference: (VTC Spring), Helsinki, Finland, 19–22 June 2022. [Google Scholar]
  89. Zhang, C.; Tang, B.; Ye, B.; Lu, S. An Efficient Chunked Network Code based Transmission Scheme in Wireless Networks. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar]
  90. Taghouti, M.; Lucani, D.E.; Pedersen, M.V.; Bouallegue, A. On the Impact of Zero-Padding in Network Coding Efficiency with Internet Traffic and Video Traces. In Proceedings of the European Wireless 2016 (EW), Oulu, Finland, 18–20 May 2016; pp. 72–77. [Google Scholar]
  91. Yin, H.H.F.; Wong, H.W.H.; Tahernia, M.; Qing, J. Packet Size Optimization for Batched Network Coding. In Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 1584–1589. [Google Scholar]
  92. Ye, F.; Roy, S.; Wang, H. Efficient Data Dissemination in Vehicular Ad Hoc Networks. IEEE J. Sel. Areas Commun. (JSAC) 2012, 30, 769–779. [Google Scholar] [CrossRef]
  93. Lucani, D.E.; Médard, M.; Stojanovic, M. Random Linear Network Coding for Time-Division Duplexing: Field Size Considerations. In Proceedings of the 2009 IEEE Global Telecommunications Conference (GLOBECOM), Honolulu, HI, USA, 30 November–4 December 2009; pp. 1–6. [Google Scholar]
  94. Luby, M. LT Codes. In Proceedings of the 2002 IEEE Symposium on Foundations of Computer Science (FOCS), Vancouver, BC, Canada, 19 November 2002; pp. 271–282. [Google Scholar]
  95. Shokrollahi, A. Raptor Codes. IEEE Trans. Inf. Theory 2006, 52, 2551–2567. [Google Scholar] [CrossRef]
  96. Shokrollahi, A.; Luby, M. Raptor Codes. Found. Trends Commun. Inf. Theory 2011, 6, 213–322. [Google Scholar] [CrossRef]
  97. Maymounkov, P. Online Codes; Technical report; New York University: New York, NY, USA, 2002. [Google Scholar]
  98. Yang, S.; Zhou, Q. Tree Analysis of BATS Codes. IEEE Commun. Lett. 2016, 20, 37–40. [Google Scholar] [CrossRef]
  99. Yang, J.; Shi, Z.; Wang, C.; Ji, J. Design of Optimized Sliding-Window BATS Codes. IEEE Commun. Lett. 2019, 23, 410–413. [Google Scholar] [CrossRef]
  100. Jayasooriya, S.; Yuan, J.; Xie, Y. An Improved Sliding Window BATS Code. In Proceedings of the 2021 11th International Symposium on Topics in Coding (ISTC), Montreal, QC, Canada, 30 August–3 September 2021. [Google Scholar]
  101. Xu, X.; Zeng, Y.; Guan, Y.L.; Yuan, L. Expanding-Window BATS Code for Scalable Video Multicasting Over Erasure Networks. IEEE Trans. Multimed. 2018, 20, 271–281. [Google Scholar] [CrossRef]
  102. Yang, J.; Shi, Z.; Ji, J. Design of Improved Expanding-Window BATS Codes. IEEE Trans. Veh. Technol. 2022, 71, 2874–2886. [Google Scholar] [CrossRef]
  103. Yang, S.; Ng, T.C.; Yeung, R.W. Finite-Length Analysis of BATS Codes. IEEE Trans. Inf. Theory 2018, 64, 322–348. [Google Scholar] [CrossRef]
  104. Xu, X.; Zeng, Y.; Guan, Y.L.; Yuan, L. BATS Code with Unequal Error Protection. In Proceedings of the 2016 IEEE International Conference on Communication Systems (ICCS), Shenzhen, China, 14–16 December 2016. [Google Scholar]
  105. Xu, X.; Guan, Y.L.; Zeng, Y.; Chui, C.C. Quasi-Universal BATS Code. IEEE Trans. Veh. Technol. 2017, 66, 3497–3501. [Google Scholar] [CrossRef]
  106. Zhang, H.; Sun, K.; Huang, Q.; Wen, Y.; Wu, D. FUN Coding: Design and Analysis. IEEE/ACM Trans. Netw. 2016, 24, 3340–3353. [Google Scholar] [CrossRef]
  107. Yin, H.H.F.; Wang, J.; Chow, S.M. Distributionally Robust Degree Optimization for BATS Codes. In Proceedings of the 2024 IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024; pp. 1315–1320. [Google Scholar]
  108. Shokrollahi, A.; Lassen, S.; Karp, R. Systems and Processes for Decoding Chain Reaction Codes through Inactivation. U.S. Patent 6,856,263, 15 February 2005. [Google Scholar]
  109. Yang, J.; Shi, Z.P.; Xiong, J.; Wang, C.X. An Improved BP Decoding of BATS Codes with Iterated Incremental Gaussian Elimination. IEEE Commun. Lett. 2020, 24, 321–324. [Google Scholar] [CrossRef]
  110. Yang, J.; Shi, Z.; Yang, D.D.; Wang, C.X. An Improved Belief Propagation Decoding of BATS Codes. In Proceedings of the 2019 IEEE 19th International Conference on Communication Technology (ICCT), Xi’an, China, 16–19 October 2019; pp. 11–15. [Google Scholar]
  111. Mao, L.; Yang, S.; Huang, X.; Dong, Y. Design and Analysis of Systematic Batched Network Codes. Entropy 2023, 25, 1055. [Google Scholar] [CrossRef]
  112. Mao, L.; Yang, S. Efficient Binary Batched Network Coding employing Partial Recovery. In Proceedings of the 2024 IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024; pp. 1321–1326. [Google Scholar]
  113. Zhu, M.; Jiang, M.; Zhao, C. Protograph-Based Batched Network Codes. arXiv 2024, arXiv:2408.16365. [Google Scholar]
  114. Xiang, M.; Yi, B.; Qiu, K.; Huang, T. Expanding-Window BATS Code with Intermediate Feedback. IEEE Commun. Lett. 2018, 22, 1750–1753. [Google Scholar] [CrossRef]
  115. Ma, J.; Shang, B.; Khan, Z.; Yu, Y.; Fan, P. Redesign of BATS Code with Improved Decoding Rate Based on Coupled Batch Size and Degree Distribution. IEEE Commun. Lett. 2024, 28, 1196–1200. [Google Scholar] [CrossRef]
  116. Xu, X.; Praveen Kumar, M.S.G.; Guan, Y.L.; Joo Chong, P.H. Two-Phase Cooperative Broadcasting Based on Batched Network Code. IEEE Trans. Commun. 2016, 64, 706–714. [Google Scholar] [CrossRef]
  117. Gao, Y.; Xu, X.; Guan, Y.L.; Chong, P.H.J. V2X Content Distribution Based on Batched Network Coding with Distributed Scheduling. IEEE Access 2018, 6, 59449–59461. [Google Scholar] [CrossRef]
  118. Xu, X.; Guan, Y.L.; Zeng, Y.; Chui, C.C. Spatial-Temporal Network Coding Based on BATS Code. IEEE Commun. Lett. 2017, 21, 620–623. [Google Scholar] [CrossRef]
  119. Zhang, W.; Zhu, M.; Jiang, M.; Hu, N. Design and Optimization of LDPC Precoded Finite-Length BATS Codes Under BP Decoding. IEEE Commun. Lett. 2023, 27, 3151–3155. [Google Scholar] [CrossRef]
  120. Zhang, W.; Zhu, M. Weighted BATS Codes with LDPC Precoding. Entropy 2023, 25, 686. [Google Scholar] [CrossRef]
  121. Wang, S.; Liu, H.; Ma, Z.; Xiao, M. Precoded Batched Sparse Codes Transmission Based on Low-Density Parity-Check Codes. In Proceedings of the 2022 IEEE 95th Vehicular Technology Conference: (VTC Spring), Helsinki, Finland, 19–22 June 2022. [Google Scholar]
  122. Yin, H.H.F.; Xu, X.; Ng, K.H.; Guan, Y.L.; Yeung, R.W. Packet Efficiency of BATS Coding on Wireless Relay Network with Overhearing. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1967–1971. [Google Scholar]
  123. Yin, H.H.F.; Yang, S.; Zhou, Q.; Yung, L.M.L. Adaptive Recoding for BATS Codes. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 2349–2353. [Google Scholar]
  124. Yang, S.; Huang, X.; Yeung, R.; Zao, J. BATched Sparse (BATS) Coding Scheme for Multi-Hop Data Transport; RFC 9426; IETF Trust: Los Angeles, CA, USA, 2023. [Google Scholar]
  125. Li, M.; Sutter, T.; Kuhn, D. Distributionally robust optimization with Markovian data. In Proceedings of the 38th International Conference on Machine Learning (PMLR), Virtual, 18–24 July 2021; pp. 6493–6503. [Google Scholar]
  126. Mak, H.W.L.; Han, R.; Yin, H.H.F. Application of Variational AutoEncoder (VAE) Model and Image Processing Approaches in Game Design. Sensors 2023, 23, 3457. [Google Scholar] [CrossRef]
  127. Yin, H.F.H.; Tahernia, M. Systems and Methods for Multi-Phase Recoding for Batched Network Coding. U.S. Patent Application 17/941,921; filed 9 September 2022.
  128. Roch, S. Modern Discrete Probability: An Essential Toolkit; Department of Mathematics, University of Wisconsin-Madison: Madison, WI, USA, 2020. [Google Scholar]
  129. Levy, H. Stochastic Dominance: Investment Decision Making Under Uncertainty, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
Figure 1. An example of three-phase recoding. Each arrow corresponds to the flow of a packet. The crosses represent the lost packets.
Figure 2. The flowchart highlighting the flow of this research.
Figure 3. An example of a two-phase variation of systematic recoding. Each arrow corresponds to the flow of a packet. The crosses represent the lost packets.
Figure 4. An example flow of the protocol without adopting multi-phase systematic recoding. The hyphen in the feedback BID–phase–rank triple means that the value will not be used by the previous node.
Figure 5. An example flow of the protocol with two-phase systematic recoding. The hyphen in the feedback BID–phase–rank triple means that the value will not be used by the previous node.
Figure 6. The throughput of BNC when M = t_avg = 4 with various p. (a) p = 10%; (b) p = 20%; (c) p = 30%.
Figure 7. The throughput of BNC when M = t_avg = 8 with various p. (a) p = 10%; (b) p = 20%; (c) p = 30%.
Figure 8. The decoding time when M = t_avg = 4 and F = 1000 with various p. (a) p = 10%; (b) p = 20%; (c) p = 30%.
Figure 9. The decoding time when M = t_avg = 8 and F = 1000 with various p. (a) p = 10%; (b) p = 20%; (c) p = 30%.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
