
Multi-Armed Bandit-Based User Network Node Selection

1 National Innovation Institute of Defense Technology, Academy of Military Science, Beijing 100010, China
2 Intelligent Game and Decision Laboratory, Academy of Military Science, Beijing 100091, China
3 Chinese People's Liberation Army 32806 Unit, Academy of Military Science, Beijing 100091, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4104; https://doi.org/10.3390/s24134104
Submission received: 24 April 2024 / Revised: 14 June 2024 / Accepted: 18 June 2024 / Published: 24 June 2024
(This article belongs to the Section Physical Sensors)

Abstract

In the scenario of an integrated space–air–ground emergency communication network, users encounter the challenge of rapidly identifying the optimal network node amidst the uncertainty and stochastic fluctuations of network states. This study introduces a Multi-Armed Bandit (MAB) model and proposes an optimization algorithm leveraging dynamic variance sampling (DVS). The algorithm posits that the prior distribution of each node’s network state conforms to a normal distribution, and by constructing the distribution’s expected value and variance, it maximizes the utilization of sample data, thereby maintaining an equilibrium between data exploitation and the exploration of the unknown. Theoretical substantiation is provided to illustrate that the Bayesian regret associated with the algorithm exhibits sublinear growth. Empirical simulations corroborate that the algorithm in question outperforms traditional ε-greedy, Upper Confidence Bound (UCB), and Thompson sampling algorithms in terms of higher cumulative rewards, diminished total regret, accelerated convergence rates, and enhanced system throughput.

1. Introduction

The Space-Air-Ground Integrated Network (SAGIN) integrates satellite systems, aerial networks, and terrestrial communication infrastructures [1], thereby enabling global, uninterrupted coverage. It holds considerable potential for application in scenarios such as disaster response, intelligent transportation systems, and the evolution towards 6G communication networks [2]. The integration of cutting-edge technologies, including artificial intelligence, machine learning, and Software-Defined Networking (SDN), further augments the performance and adaptability of the SAGIN [3]. The pivotal technological hurdles encompass dynamic node management, interconnection of heterogeneous networks, resource allocation optimization, and the intelligent management of networks [4].
In the context of emergency communication, users are in constant motion, and terrestrial base stations may be insufficient to fulfill their communication demands, thus requiring the collaborative support of space and aerial networks [5]. To ensure the reliability of data transmission, it is imperative for users to swiftly connect to the network node at the highest rate within the signal range [6]. However, given the dynamic shifts in the location and status of network nodes, users must rapidly connect to the optimal network node guided by a specific algorithm, without the benefit of prior knowledge of the network conditions. Addressing how users in emergency communication scenarios can expeditiously access the most suitable network nodes has become an imperative issue.

2. Related Work

Online learning methodologies serve as efficacious algorithms for learning and forecasting within dynamic settings [7]. The Multi-Armed Bandit (MAB) problem, often referred to as the slot machine dilemma [8], is a quintessential issue in the realm of online learning. The MAB framework has garnered widespread application due to its capacity to facilitate access optimization even amidst a dearth of environmental information. In particular, ref. [9] investigates the selection process among multiple channels under the condition that channel information adheres to independent and identically distributed variations within a solitary user context, proposing a decision-making framework predicated on the Restless Multi-Armed Bandit (RMAB) model. Ref. [10] integrates the RMAB model, introducing a greedy algorithm for channel selection to augment the spectrum access efficiency for users. Ref. [11] pioneers the employment of index algorithms to address the archetypal MAB challenge, while ref. [12] refines the confidence parameter of the Upper Confidence Bound (UCB) algorithm to enhance its efficacy.
Existing studies on network selection are typically marred by two principal deficiencies: firstly, the oversight of the influence that immediate gains may exert on future earnings; secondly, the linear growth pattern of cumulative regret values yielded by current algorithms, which results in diminished learning efficiency and protracted convergence. Such outcomes are at odds with practical aspirations for achieving higher efficiency through straightforward methods.
Addressing these concerns, the present research introduces an enhanced index algorithm designed to refine the network selection process. This algorithm takes into account the interplay between immediate and prospective future gains, leveraging the strengths of both the UCB and Thompson sampling algorithms to achieve a harmonious equilibrium between exploration and exploitation. Simulation results indicate that, in contrast to extant methodologies, the dynamic variance sampling algorithm proposed herein not only escalates learning efficiency but also mitigates cumulative regret, thereby augmenting system throughput.

3. System Model

3.1. Network Architecture

The network architecture of this study is illustrated in Figure 1. The research area is conceptualized as a circular zone, equipped with satellites, unmanned aerial vehicles (UAVs), and base stations. The satellite signals ensure comprehensive coverage across the entire area, and UAVs operate on predefined circular trajectories, while the coverage of base stations is confined. Users are outfitted with multi-mode antennas capable of accessing multiple networks, yet they are restricted to connecting to a single network node at any given instant. The user communication is structured in a time-slotted manner, segmenting the user communication timeframe into T discrete time slots, with each slot separated by a relatively minor interval. The crux of this paper’s research lies in the decision-making process for an individual user to access the space–air–ground integrated emergency communication network node during each time slot.

3.2. Channel Model

The system includes three types of links: space-to-ground, aerial-to-ground, and base station-to-ground links. According to the literature [13], the channel state information (CSI) from the satellite to the user is as follows:
$h_{SAT} = \sqrt{C_L b \beta}\, e^{j\theta}$
where $C_L$ denotes the free-space loss, calculated as $C_L = (\lambda/4\pi)^2/(d^2 + l^2)$, in which $\lambda$ denotes the signal wavelength, $l$ denotes the horizontal distance between the center of the satellite beam and the user, and $d$ denotes the vertical height of the satellite relative to the ground; $\beta$ denotes the channel gain caused by rain attenuation, which obeys a lognormal distribution ($\beta_{dB}$, i.e., $\beta$ expressed in dB, is normally distributed); $\theta$ is a phase uniformly distributed within the range $[0, 2\pi]$; and $b$ indicates the satellite beam gain, which is defined as follows:
$b = G \left( \frac{J_1(\mu_0)}{2\mu_0} + 36 \frac{J_3(\mu_0)}{\mu_0^3} \right)^2$
where $G$ represents the maximum gain of the satellite antenna, $\mu_0 = 2.07123 \sin(\alpha)/\sin(\alpha_{3dB})$, $\alpha$ is the elevation angle between the center of the beam and the user, $\alpha_{3dB}$ is the 3 dB angle of the satellite beam, and $J_1(\cdot)$ and $J_3(\cdot)$ are the first-order and third-order Bessel functions of the first kind, respectively.
According to the literature [13], the channel state information (CSI) from the aerial platform to the user is as follows:
$a_{UAV} = \sqrt{G_L} \left( \sqrt{\frac{K}{K+1}}\, a_{LoS} + \sqrt{\frac{1}{K+1}}\, a_{Ray} \right)$
where $G_L = C_0/(U_d^2 + U_h^2)$ denotes the path loss, in which $C_0$ denotes the channel power gain at a reference distance of 1 m, $U_d$ is the horizontal distance from the UAV to the target user, and $U_h$ is the height of the UAV. The small-scale fading follows the Rician channel model, where $K$ is the Rician factor, $a_{LoS}$ is the line-of-sight Rician fading component, and $a_{Ray}$ is the non-line-of-sight Rayleigh fading component.
According to the literature [13], the channel state information (CSI) from the base station to the user is as follows:
$g_{BS} = \sqrt{\alpha}\, g_0$
where $\alpha = C_0 r^{-4}$ represents the large-scale fading, in which $C_0$ denotes the channel power gain at a reference distance of 1 m and $r$ denotes the distance between the base station and the user, and $g_0$ represents the small-scale fading, which follows a Nakagami-m distribution.

3.3. Communication Model

Based on the channel state information (CSI) calculations above, the communication rate obtained by a user when connecting to a satellite, a UAV, or a base station, respectively (without considering inter-channel interference), can be expressed as
$R_{SAT} = \log_2 \left( 1 + \frac{p_{SAT} |h_{SAT}|^2}{\delta_{SAT}^2} \right)$
$R_{UAV} = \log_2 \left( 1 + \frac{p_{UAV} |a_{UAV}|^2}{\delta_{UAV}^2} \right)$
$R_{BS} = \log_2 \left( 1 + \frac{p_{BS} |g_{BS}|^2}{\delta_{BS}^2} \right)$
In Equation (5), $p_{SAT}$ denotes the transmit power of the satellite, $h_{SAT}$ denotes the CSI from the satellite to the user, and $\delta_{SAT}^2$ denotes the noise power at the user for the satellite link. The symbols in Equations (6) and (7) are defined analogously for the UAV and base station links.
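As an illustration of Equations (5)-(7), the sketch below evaluates these Shannon rates; the powers, channel magnitudes, and noise levels are illustrative values of our own choosing, not the paper's simulation parameters:

```python
import math

def shannon_rate(p_tx: float, h_abs: float, noise_pow: float) -> float:
    """Rate in bit/s/Hz per Eqs. (5)-(7): log2(1 + p * |h|^2 / delta^2)."""
    return math.log2(1.0 + p_tx * h_abs ** 2 / noise_pow)

# Hypothetical link budgets for the three node types.
R_SAT = shannon_rate(p_tx=10.0, h_abs=0.05, noise_pow=1e-3)  # satellite link
R_UAV = shannon_rate(p_tx=1.0, h_abs=0.2, noise_pow=1e-3)    # UAV link
R_BS = shannon_rate(p_tx=5.0, h_abs=0.1, noise_pow=1e-3)     # base station link
```

The rate a node offers depends on the full product $p\,|h|^2/\delta^2$, which is why no single node type dominates in all slots.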

3.4. Benefit Model

The benefit obtained after the user selects and accesses a network node is represented as in Equation (8):
$r = R_{node} / R_{upper}$
where $R_{node}$ denotes the communication rate (i.e., $R_{SAT}$, $R_{UAV}$, or $R_{BS}$) that the user obtains from the selected network node, and $R_{upper}$ denotes an upper bound on the communication rate the user can obtain from any network node; e.g., when the maximal rate is 100, $R_{upper} \geq 100$, which ensures that the user's gain lies in the interval $[0, 1]$.
Furthermore, to experimentally compare the proposed algorithm with conventional algorithms, including the ε-greedy algorithm, the Upper Confidence Bound (UCB) algorithm, and the Thompson sampling algorithm, the reward $r$ can be converted into a Bernoulli random variable, as outlined in reference [14]: the instantaneous reward is drawn from a Bernoulli distribution with parameter $r$ (i.e., $r \sim Bernoulli(r)$) with $r \in [0, 1]$. Reference [14] shows that these two forms of reward are algorithmically equivalent.
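A minimal sketch of this Bernoulli conversion (the reward model follows the paper; the helper name and test parameter are ours):

```python
import random

def bernoulli_reward(r: float, rng: random.Random) -> int:
    """Convert a normalized rate r in [0, 1] into a Bernoulli(r) sample."""
    assert 0.0 <= r <= 1.0
    return 1 if rng.random() < r else 0

# Empirically, the Bernoulli form preserves the mean of the original reward.
rng = random.Random(0)
samples = [bernoulli_reward(0.7, rng) for _ in range(10000)]
```

Both reward forms have expectation $r$, which is why the bandit algorithms rank nodes identically under either.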

3.5. Objective Function

The purpose of the algorithm based on the MAB model is to determine the user's strategy for selecting a network node at each moment so as to maximize the expected total gain over $T$ time slots, i.e.,
$\max \; E\left[ \sum_{t=1}^{T} r_{i(t)}(t) \right]$
where $i(t)$ denotes the network node selected by the user according to the algorithm at time $t$, and $r_{i(t)}(t)$ denotes the gain after selecting that node, corresponding to the reward $r$ defined in the benefit model. To compare the effects of different algorithms more intuitively, this paper uses the minimization of the expected total regret as the objective function equivalent to maximizing the expected total gain, where the regret is the expected rate lost at each moment due to failing to select the best network node. Define $\mu_i$ as the expected gain of network node $i$, so that $\mu^* = \max_i \mu_i$ denotes the expected gain of the optimal network node, $\Delta_i = \mu^* - \mu_i$ denotes the suboptimality gap, and $k_i(t)$ denotes the number of times network node $i$ has been selected prior to time $t$. The total regret of the user over $T$ time slots is then
$E[R(T)] = E\left[ \sum_{t=1}^{T} (\mu^* - \mu_{i(t)}) \right] = \sum_i \Delta_i E[k_i(T)]$
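Equation (10) can be checked numerically; the sketch below computes total regret from per-node gaps and selection counts (the means and counts are made-up values):

```python
def total_regret(mu: list[float], pulls: list[int]) -> float:
    """Total regret per Eq. (10): sum_i Delta_i * k_i(T), Delta_i = mu* - mu_i."""
    mu_star = max(mu)
    return sum((mu_i_star_gap := mu_star - mu_i) * k_i
               for mu_i, k_i in zip(mu, pulls))

# Hypothetical expected gains and selection counts over T = 100 slots.
regret = total_regret([0.9, 0.5, 0.3], [80, 15, 5])  # 0.4*15 + 0.6*5 = 9
```

Only the suboptimal nodes contribute, so an algorithm whose suboptimal selection counts grow slowly in $T$ achieves sublinear regret.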

4. Network Selection Mechanism Based on the MAB Model

4.1. Dynamic Variance Sampling Algorithm

The issue of user network node selection in an unknown environment has been modeled as a Multi-Armed Bandit (MAB) model. As an advanced dynamic stochastic control framework, the MAB model has excellent learning capabilities and is mainly used to solve problems of selection and resource allocation under limited resources, including, but not limited to, channel allocation, opportunistic network access, and routing selection. Through the MAB model, users can make optimal decisions in uncertain environments, thereby effectively enhancing overall system performance. Reference [15] proposed an Upper Confidence Bound (UCB)-based index algorithm which, although it reduces algorithmic complexity compared with traditional approaches, still leaves a gap in overall reward relative to the ideal and converges slowly. Reference [16] studied the theoretical performance of the Thompson sampling algorithm, which attains a lower regret bound than the UCB index algorithm but still falls short of the ideal.
The index value of the UCB algorithm consists of two parts, the sample average reward of the current network and a confidence factor, and is expressed as follows:
$\theta_i(t) = \hat{\mu}_i(t) + \sqrt{\frac{2 \ln(t-1)}{k_i(t)}}$
where $k_i(t)$ denotes the number of times network node $i$ has been selected by time $t$, $\hat{\mu}_i(t)$ denotes the mean benefit of network node $i$ before time $t$ (reflecting how well the algorithm exploits the data), and the confidence factor $\sqrt{2\ln(t-1)/k_i(t)}$ is inversely related to the selection count (reflecting how thoroughly the node has been explored).
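A direct transcription of the UCB index above (assuming $t \geq 2$ and $k_i \geq 1$ so both terms are defined):

```python
import math

def ucb_index(mean_reward: float, t: int, k_i: int) -> float:
    """UCB index: empirical mean plus confidence factor sqrt(2 ln(t-1) / k_i)."""
    return mean_reward + math.sqrt(2.0 * math.log(t - 1) / k_i)
```

Note that the confidence term shrinks as $k_i$ grows, so rarely tried nodes receive an exploration bonus.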
The Thompson sampling algorithm is a stochastic algorithm based on Bayesian ideas: assuming a Beta prior for each arm's reward parameter, it selects an arm at each time step according to its posterior probability of being the best. The index values are calculated as follows:
$\theta_i(t) = \mathrm{Sample}\left[ Beta(1 + S_i(t), 1 + F_i(t)) \right]$
where $\mathrm{Sample}[Beta(\cdot)]$ denotes a draw from the Beta distribution, and $S_i(t)$ and $F_i(t)$ denote the number of times network node $i$ has been selected with a gain of 1 and with a gain of 0 before time $t$, respectively.
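The corresponding Thompson sampling index is a single Beta draw per node; the success/failure counts below are hypothetical:

```python
import random

def thompson_index(s_i: int, f_i: int, rng: random.Random) -> float:
    """Thompson sampling index: a draw from Beta(1 + S_i, 1 + F_i)."""
    return rng.betavariate(1 + s_i, 1 + f_i)

rng = random.Random(1)
# A node with 98 gains out of 100 selections: the posterior concentrates
# near its empirical success rate (Beta(99, 3) has mean 99/102).
draws = [thompson_index(98, 2, rng) for _ in range(1000)]
```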
The advantage of the UCB algorithm is its confidence factor tied to the selection count, which strengthens exploration; its disadvantages are low exploration efficiency and slow convergence. The advantages of the Thompson sampling algorithm are its Bayesian sampling, a prior assumption that better matches real scenarios, and improved convergence speed; however, a large gap to the ideal value remains. Moreover, the CSI model shows that changes in network node state are closer to a normal distribution, so this paper adopts the normal distribution as the prior for network state changes.
Drawing on the strengths of these two algorithms, this paper introduces Bayesian sampling and the confidence factor simultaneously: the return of each network node is assumed to follow a normal distribution, the sample mean is used as the expectation (reflecting exploitation of the data), and the selection count is built into the sampling variance (reflecting exploration). The index value is calculated as follows:
$\theta_i(t) = \mathrm{Sample}\left[ N\left( \hat{\mu}_i(t), \frac{1}{k_i(t)+1} \right) \right]$
where $\mathrm{Sample}[N(\cdot)]$ denotes a draw from the normal distribution, $\hat{\mu}_i(t)$ denotes the mean benefit of network node $i$ before time $t$, and $k_i(t)$ denotes the number of times network node $i$ has been selected by time $t$. The update rule is as follows:
$\hat{\mu}_i(t+1) = \begin{cases} \dfrac{\hat{\mu}_i(t)\, k_i(t) + r_i(t)}{k_i(t)+1}, & \text{node } i \text{ selected} \\ \hat{\mu}_i(t), & \text{otherwise} \end{cases}$
$k_i(t+1) = \begin{cases} k_i(t) + 1, & \text{node } i \text{ selected} \\ k_i(t), & \text{otherwise} \end{cases}$
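The index draw and update rules above can be sketched as two small helpers (names are ours):

```python
import random

def dvs_index(mu_hat: float, k_i: int, rng: random.Random) -> float:
    """DVS index: a draw from N(mu_hat_i, 1 / (k_i + 1))."""
    return rng.gauss(mu_hat, (1.0 / (k_i + 1)) ** 0.5)

def dvs_update(mu_hat: float, k_i: int, reward: float) -> tuple[float, int]:
    """Incremental mean and count update for the selected node."""
    return (mu_hat * k_i + reward) / (k_i + 1), k_i + 1

# The sampling variance decays as 1/(k_i + 1), so a well-explored node's
# index concentrates tightly around its empirical mean.
rng = random.Random(0)
tight_draws = [dvs_index(0.6, 999, rng) for _ in range(2000)]
```

The shrinking variance plays the role of the UCB confidence factor, while sampling retains the Bayesian flavor of Thompson sampling.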

4.2. Theoretical Analysis and Proof

Definition 1.
$k_i(t)$ denotes the number of times network node $i$ has been selected prior to time $t$. $S_i(t)$ and $F_i(t)$ denote the number of times network node $i$ has been selected prior to time $t$ with a gain of 1 and with a gain of 0, respectively. $i(t)$ denotes the network node selected at time $t$.
Definition 2.
Assume that network node 1 is the optimal network and that $\mu_i$ denotes the expected return of network node $i$, so that $\mu_i < \mu_1$ for $i \neq 1$. Define $x_i, y_i$ as two real numbers satisfying $\mu_i < x_i < y_i < \mu_1$; such $x_i$ and $y_i$ clearly exist. Define $L_i(T) = \ln T / d(x_i, y_i)$, where $d(x_i, y_i) = x_i \ln(x_i/y_i) + (1-x_i) \ln[(1-x_i)/(1-y_i)]$ denotes the KL divergence between Bernoulli distributions with parameters $x_i$ and $y_i$.
Definition 3.
$\theta_i(t)$ denotes the sample drawn by the algorithm from the posterior distribution of network node $i$ at time $t$, with $\theta_i(t) \sim N(\hat{\mu}_i(t), 1/(k_i(t)+1))$.
Definition 4.
$\hat{\mu}_i(t)$ denotes the empirical mean return of network node $i$ at time $t$, defined as $\hat{\mu}_i(t) = \sum_{\tau=1, i(\tau)=i}^{t-1} r_i(\tau) / (k_i(t)+1)$. For $i \neq 1$, define $E_i^{\mu}(t)$ as the event $\hat{\mu}_i(t) \leq x_i$, and $E_i^{\theta}(t)$ as the event $\theta_i(t) \leq y_i$.
Both $\hat{\mu}_i(t)$ and $\theta_i(t)$ are approximations of the true expected return of network node $i$: the former is an empirical estimate, the latter a sample from the posterior distribution, while $x_i$ and $y_i$ are thresholds above the true expected return $\mu_i$. Intuitively, $E_i^{\mu}(t)$ and $E_i^{\theta}(t)$ are the events that these two estimates are not overestimated too much; specifically, that they do not exceed the thresholds $x_i$ and $y_i$.
Definition 5.
$\mathcal{F}_t$ denotes the history of policy information prior to time $t$, defined as $\mathcal{F}_t = \{ i(\tau), r_{i(\tau)}(\tau), \tau = 1, \dots, t \}$, where $i(\tau)$ denotes the network node selected at time $\tau$ and $r_{i(\tau)}(\tau)$ is its corresponding gain. Defining $\mathcal{F}_0 = \{\}$, we have $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_{T-1}$; the distributions of $k_i(t)$, $\hat{\mu}_i(t)$, and $\theta_i(t)$ in the definitions above, and whether the events $E_i^{\mu}(t)$ and $E_i^{\theta}(t)$ occur, are all determined by $\mathcal{F}_{t-1}$.
Definition 6.
Define $p_{i,t} = P(\theta_1(t) > y_i \mid \mathcal{F}_{t-1})$; then $p_{i,t}$ is a random variable determined by $\mathcal{F}_{t-1}$.
Lemma 1.
According to Lemma 1 of reference [17], for $i \neq 1$, any $t$, and any $\mathcal{F}_{t-1}$:
$P(i(t) = i, E_i^{\mu}(t), E_i^{\theta}(t) \mid \mathcal{F}_{t-1}) \leq \frac{1 - p_{i,t}}{p_{i,t}} P(i(t) = 1, E_i^{\mu}(t), E_i^{\theta}(t) \mid \mathcal{F}_{t-1})$
Lemma 2.
Let $\tau_j$ denote the time at which network node 1 is selected for the $j$th time. By Lemma 2.13 of reference [18]:
$E\left[ \frac{1}{p_{i,\tau_j + 1}} - 1 \right] \leq \begin{cases} e^{11} + 5, & j \leq 4 L_i(T) \\ \dfrac{4}{T \Delta_i^2}, & j > 4 L_i(T) \end{cases}$
where $L_i(T) = 18 \ln(T \Delta_i^2) / \Delta_i^2$.
Fact 1
(Chernoff–Hoeffding bound). Let $X_1, \dots, X_n$ be random variables on the interval $[0, 1]$ with $E[X_t \mid X_1, \dots, X_{t-1}] = \mu$, and let $S_n = X_1 + \cdots + X_n$. Then for any $a \geq 0$:
$P(S_n \geq n\mu + a) \leq e^{-2a^2/n}$
$P(S_n \leq n\mu - a) \leq e^{-2a^2/n}$
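Fact 1 can be spot-checked by Monte Carlo; the sketch below compares the empirical upper tail of a Bernoulli sum against the bound (all parameters are arbitrary test values):

```python
import math
import random

rng = random.Random(42)
n, mu, a, trials = 50, 0.5, 5.0, 20000

# Empirical P(S_n >= n*mu + a) for S_n a sum of n Bernoulli(mu) variables.
hits = 0
for _ in range(trials):
    s_n = sum(1 for _ in range(n) if rng.random() < mu)
    if s_n >= n * mu + a:
        hits += 1
empirical = hits / trials
bound = math.exp(-2.0 * a * a / n)  # Chernoff-Hoeffding upper bound, e^{-1}
```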
Fact 2.
Let $Z$ denote a random variable following a normal distribution with mean $\mu$ and variance $\sigma^2$. For any $z > 0$:
$\frac{1}{4\sqrt{\pi}} e^{-7z^2/2} \leq P(|Z - \mu| > z\sigma) \leq \frac{1}{2} e^{-z^2/2}$
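A quick numeric spot-check of the two-sided Gaussian tail bounds in Fact 2, at the arbitrary test point $z = 2$:

```python
import math

z = 2.0
lower = math.exp(-7.0 * z * z / 2.0) / (4.0 * math.sqrt(math.pi))
upper = 0.5 * math.exp(-z * z / 2.0)
# Exact two-sided tail P(|Z - mu| > z*sigma) for a normal variable, via erfc:
tail = math.erfc(z / math.sqrt(2.0))  # ≈ 0.0455 at z = 2
```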
Theorem 1.
The upper bound on the regret of the dynamic variance sampling algorithm is given by
$E[R(T)] \leq O\left( \sqrt{NT \ln N} \right)$
where $T$ is the total time duration and $N$ is the number of network nodes.
Proof. 
According to the definition of regret in Equation (10),
$E[R(T)] = \sum_i \Delta_i E[k_i(T)]$
Firstly, the regret is decomposed according to the events defined in Definition 4. Rather than decomposing the regret directly, we decompose the expected number of times each suboptimal network node is selected: by the definition of regret, the expected selection count of each node, multiplied by its suboptimality gap and summed, yields the total regret, and the optimal node contributes nothing. For $i \neq 1$,
$E[k_i(T)] = E\left[ \sum_{t=1}^{T} I(i(t)=i) \right] = \sum_{t=1}^{T} E[I(i(t)=i)] = \sum_{t=1}^{T} P(i(t)=i)$
where $I(\cdot)$ denotes the indicator function. Decomposing the above equation using the event $E_i^{\mu}(t)$ and its complement $\overline{E_i^{\mu}(t)}$ gives
$E[k_i(T)] = \sum_{t=1}^{T} P(i(t)=i, E_i^{\mu}(t)) + \sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\mu}(t)})$
Continuing to decompose by the event $E_i^{\theta}(t)$ yields
$E[k_i(T)] = \sum_{t=1}^{T} P(i(t)=i, E_i^{\theta}(t), E_i^{\mu}(t)) + \sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) + \sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\mu}(t)})$
Next, we derive upper bounds for each of these three terms, starting with the first. Combining the relationship stated in Definition 5 with the properties of conditional probability and Lemma 1:
$\sum_{t=1}^{T} P(i(t)=i, E_i^{\theta}(t), E_i^{\mu}(t)) = \sum_{t=1}^{T} E[ P(i(t)=i, E_i^{\theta}(t), E_i^{\mu}(t) \mid \mathcal{F}_{t-1}) ] \leq \sum_{t=1}^{T} E\left[ \frac{1-p_{i,t}}{p_{i,t}} P(i(t)=1, E_i^{\theta}(t), E_i^{\mu}(t) \mid \mathcal{F}_{t-1}) \right] = \sum_{t=1}^{T} E\left[ E\left[ \frac{1-p_{i,t}}{p_{i,t}} I(i(t)=1, E_i^{\theta}(t), E_i^{\mu}(t)) \mid \mathcal{F}_{t-1} \right] \right] = \sum_{t=1}^{T} E\left[ \frac{1-p_{i,t}}{p_{i,t}} I(i(t)=1, E_i^{\theta}(t), E_i^{\mu}(t)) \right]$
The second equality above uses the fact that $p_{i,t}$ is fixed given $\mathcal{F}_{t-1}$. Since $p_{i,t} = P(\theta_1(t) > y_i \mid \mathcal{F}_{t-1})$, the value of $p_{i,t}$ changes only with the distribution of $\theta_1(t)$, i.e., only after network node 1 is selected. Letting $\tau_j$ denote the time of the $j$th selection of network node 1, the values of $p_{i,t}$ are equal at all times $t \in \{\tau_j + 1, \dots, \tau_{j+1}\}$. Therefore,
$\sum_{t=1}^{T} E\left[ \frac{1-p_{i,t}}{p_{i,t}} I(i(t)=1, E_i^{\theta}(t), E_i^{\mu}(t)) \right] = \sum_{j=0}^{T-1} E\left[ \frac{1-p_{i,\tau_j+1}}{p_{i,\tau_j+1}} \sum_{t=\tau_j+1}^{\tau_{j+1}} I(i(t)=1, E_i^{\theta}(t), E_i^{\mu}(t)) \right] \leq \sum_{j=0}^{T-1} E\left[ \frac{1-p_{i,\tau_j+1}}{p_{i,\tau_j+1}} \right]$
Let $L_i(T) = 18 \ln(T \Delta_i^2)/\Delta_i^2$; by Lemma 2, when $j > 4 L_i(T)$,
$E\left[ \frac{1-p_{i,\tau_j+1}}{p_{i,\tau_j+1}} \right] \leq \frac{4}{T \Delta_i^2}$
Substituting Equation (25) into Equation (24) gives the upper bound of the first term:
$\sum_{t=1}^{T} P(i(t)=i, E_i^{\theta}(t), E_i^{\mu}(t)) \leq 4 L_i(T)(e^{64} + 4) + \frac{4}{\Delta_i^2}$
Next, we derive the upper bound of the second term, which is split into two parts according to the relationship between $k_i(t)$ and $L_i(T)$:
$\sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) = \sum_{t=1}^{T} P(i(t)=i, k_i(t) \leq L_i(T), \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) + \sum_{t=1}^{T} P(i(t)=i, k_i(t) > L_i(T), \overline{E_i^{\theta}(t)}, E_i^{\mu}(t))$
Analyzing the first part of Equation (27), we get
$\sum_{t=1}^{T} P(i(t)=i, k_i(t) \leq L_i(T), \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) \leq E\left[ \sum_{t=1}^{T} I(i(t)=i, k_i(t) \leq L_i(T)) \right] \leq L_i(T)$
For the latter part of Equation (27): if $k_i(t)$ is large and the event $E_i^{\mu}(t)$ occurs, then the probability of $\overline{E_i^{\theta}(t)}$ is very small. Combined with the event definitions in Definition 4,
$\sum_{t=1}^{T} P(i(t)=i, k_i(t) > L_i(T), \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) \leq E\left[ \sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\theta}(t)} \mid k_i(t) > L_i(T), E_i^{\mu}(t), \mathcal{F}_{t-1}) \right] \leq E\left[ \sum_{t=1}^{T} P(\theta_i(t) > y_i \mid k_i(t) > L_i(T), \hat{\mu}_i(t) \leq x_i, \mathcal{F}_{t-1}) \right]$
From the definition, $\theta_i(t) \sim N(\hat{\mu}_i(t), 1/(k_i(t)+1))$, and by the properties of the normal distribution, given $\hat{\mu}_i(t) \leq x_i$,
$P(\mathrm{Sample}[N(\hat{\mu}_i(t), \sigma^2)] > y_i) \leq P(\mathrm{Sample}[N(x_i, \sigma^2)] > y_i)$
Therefore, Equation (29) can be further bounded as
$E\left[ \sum_{t=1}^{T} P(\theta_i(t) > y_i \mid k_i(t) > L_i(T), \hat{\mu}_i(t) \leq x_i, \mathcal{F}_{t-1}) \right] \leq E\left[ \sum_{t=1}^{T} P\left( \mathrm{Sample}\left[ N\left( x_i, \frac{1}{k_i(t)+1} \right) \right] > y_i \,\Big|\, k_i(t) > L_i(T), \mathcal{F}_{t-1} \right) \right]$
From the probability density function of the normal distribution, when $k_i(t) > L_i(T)$,
$P\left( \mathrm{Sample}\left[ N\left( x_i, \frac{1}{k_i(t)+1} \right) \right] > y_i \right) \leq \frac{1}{2} e^{-(k_i(t)+1)(y_i - x_i)^2 / 2} \leq \frac{1}{2} e^{-L_i(T)(y_i - x_i)^2 / 2}$
Taking $x_i = \mu_i + \Delta_i/3$ and $y_i = \mu_1 - \Delta_i/3$, so that $y_i - x_i = \Delta_i/3$, and substituting into Equation (32):
$P\left( \mathrm{Sample}\left[ N\left( x_i, \frac{1}{k_i(t)+1} \right) \right] > y_i \right) \leq \frac{1}{2} e^{-L_i(T)(y_i - x_i)^2 / 2} = \frac{1}{2 T \Delta_i^2}$
Therefore,
$\sum_{t=1}^{T} P(i(t)=i, k_i(t) > L_i(T), \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) \leq \frac{1}{2 \Delta_i^2}$
Substituting Equations (28) and (34) into Equation (27) gives the upper bound of the second term as
$\sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\theta}(t)}, E_i^{\mu}(t)) \leq L_i(T) + \frac{1}{2 \Delta_i^2}$
Finally, to bound the third term, let $\tau_k$ denote the time of the $k$th selection of network node $i$, with $\tau_0 = 0$. From the definition of $E_i^{\mu}(t)$, the complement $\overline{E_i^{\mu}(t)}$ denotes the event $\hat{\mu}_i(t) > x_i$, and we have
$\sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\mu}(t)}) \leq \sum_{k=0}^{T-1} P(\overline{E_i^{\mu}(\tau_k + 1)})$
When $k \geq 1$,
$\hat{\mu}_i(\tau_k + 1) = \frac{S_i(\tau_k + 1)}{k+1} \leq \frac{S_i(\tau_k + 1)}{k}$
By the Chernoff–Hoeffding bound (Fact 1):
$P(\overline{E_i^{\mu}(\tau_k + 1)}) = P(\hat{\mu}_i(\tau_k + 1) > x_i) \leq P\left( \frac{S_i(\tau_k + 1)}{k} > x_i \right) \leq e^{-k \, d(x_i, \mu_i)}$
Substituting Equation (38) into Equation (36):
$\sum_{t=1}^{T} P(i(t)=i, \overline{E_i^{\mu}(t)}) \leq 1 + \sum_{k=1}^{T-1} e^{-k \, d(x_i, \mu_i)} \leq 1 + \frac{1}{d(x_i, \mu_i)}$
Taking $x_i = \mu_i + \Delta_i/3$, Pinsker's inequality gives
$d(x_i, \mu_i) \geq 2 (x_i - \mu_i)^2 = \frac{2 \Delta_i^2}{9}$
Substituting Equations (26), (35), and (41) into Equation (22) yields
$E[k_i(T)] \leq 4 L_i(T)(e^{64} + 4) + \frac{4}{\Delta_i^2} + L_i(T) + \frac{1}{2 \Delta_i^2} + 1 + \frac{9}{2 \Delta_i^2} \leq \frac{72 \ln(T \Delta_i^2)}{\Delta_i^2}(e^{64} + 4) + \frac{4}{\Delta_i^2} + \frac{18 \ln(T \Delta_i^2)}{\Delta_i^2} + \frac{1}{2 \Delta_i^2} + 1 + \frac{9}{2 \Delta_i^2}$
Therefore, the expected regret contributed by network node $i$ is bounded by
$\Delta_i E[k_i(T)] \leq \frac{72 \ln(T \Delta_i^2)}{\Delta_i}(e^{64} + 5) + \Delta_i + \frac{9}{\Delta_i}$
The bound above decreases as $\Delta_i$ increases when $\Delta_i \geq e/\sqrt{T}$; hence, when $\Delta_i \geq e \sqrt{N \ln N / T}$ for all network nodes, the upper bound on the expected total regret is
$O\left( \sqrt{NT \ln N} \right)$
When $\Delta_i \leq e \sqrt{N \ln N / T}$, the upper bound on the total regret is
$E[R(T)] \leq T \cdot e \sqrt{N \ln N / T} = e \sqrt{NT \ln N} = O\left( \sqrt{NT \ln N} \right)$
The proof is complete. This theorem shows that the proposed algorithm makes the system's cumulative regret grow sublinearly, so that the average regret eventually converges; this outperforms algorithms with linearly growing regret and improves system throughput. The notation and terminology used in the proof are summarized in Table 1.
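As a numeric sanity check of the Pinsker step in the proof, the sketch below verifies that the Bernoulli KL divergence $d(x, y)$ of Definition 2 dominates $2(x - y)^2$ at a few arbitrary test points:

```python
import math

def kl_bernoulli(x: float, y: float) -> float:
    """d(x, y) from Definition 2: KL divergence of Bernoulli(x) from Bernoulli(y)."""
    return x * math.log(x / y) + (1.0 - x) * math.log((1.0 - x) / (1.0 - y))

# Pinsker's inequality: d(x, y) >= 2 * (x - y)^2 for x, y in (0, 1).
checks = [(0.7, 0.3), (0.6, 0.5), (0.9, 0.1)]
ok = all(kl_bernoulli(x, y) >= 2.0 * (x - y) ** 2 for x, y in checks)
```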

4.3. Algorithm Description and Procedure

Based on the above analysis, the MAB-based algorithm always accesses the network node whose index value is maximal in the current time slot, so that the algorithm proceeds in the direction of minimizing the total regret. Using the proposed algorithm, users can identify the node with the optimal network state in a short time even when the network state is unknown. The steps of the algorithm are as follows (Algorithm 1):
Algorithm 1. Dynamic variance sampling algorithm flow.
for t = 1, 2, …, T do
        for each node i = 1, …, N, sample θ_i(t) independently from the N(μ̂_i(t), 1/(k_i(t)+1)) distribution
        select node: i(t) = argmax_i θ_i(t)
        observe reward: r_{i(t)}(t)
        update selection count: k_{i(t)}(t+1) = k_{i(t)}(t) + 1
        update mean benefit: μ̂_{i(t)}(t+1) = (μ̂_{i(t)}(t) × k_{i(t)}(t) + r_{i(t)}(t)) / (k_{i(t)}(t) + 1)
end for
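Algorithm 1 can be sketched as a self-contained simulation, with Bernoulli rewards as in the benefit model; the reward means below are hypothetical stand-ins for the normalized node rates:

```python
import random

def run_dvs(true_means: list[float], horizon: int, seed: int = 0) -> float:
    """Run Algorithm 1 against Bernoulli arms; return average per-slot reward."""
    rng = random.Random(seed)
    n = len(true_means)
    mu_hat = [0.0] * n  # empirical mean benefit per node
    k = [0] * n         # selection counts per node
    total = 0.0
    for _ in range(horizon):
        # Sample each node's index from N(mu_hat_i, 1/(k_i + 1)); pick the max.
        idx = [rng.gauss(mu_hat[i], (1.0 / (k[i] + 1)) ** 0.5) for i in range(n)]
        i = max(range(n), key=idx.__getitem__)
        r = 1.0 if rng.random() < true_means[i] else 0.0  # Bernoulli reward
        mu_hat[i] = (mu_hat[i] * k[i] + r) / (k[i] + 1)   # mean update
        k[i] += 1                                         # count update
        total += r
    return total / horizon

avg_reward = run_dvs([0.2, 0.5, 0.8], horizon=3000)
```

Over a long horizon the average reward should approach the best node's mean (0.8 here), since the sublinear regret implies suboptimal nodes are selected only a vanishing fraction of the time.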

5. Performance and Evaluation

In this paper, we consider the problem of how users choose the optimal network node access under an unknown network environment, and adopt a dynamic variance sampling algorithm to explore and learn the unknown network environment, and choose the optimal network node access based on historical experience prediction. This subsection simulates and analyzes the algorithm from the perspective of its performance parameters by establishing a simulation scenario, and compares its performance with the traditional ε-greedy algorithm, UCB algorithm, and Thompson sampling algorithm, respectively.

5.1. Simulation Settings

According to the relevant formulas in the channel model, communication model, and revenue model, the meanings of the relevant symbols and the simulation parameter settings are shown in Table 2.
The parameter settings related to the simulation scenario are shown in Table 3.
The scene schematic is shown in Figure 2. The reference frame is a polar coordinate system with the center of the scene as the origin. The satellite is located at the origin (0, 0); the flight centers of the UAVs are set to (250, 0), (250, π/2), (250, π), (250, 3π/2), and (0, 0), with flight altitudes of 200, 225, 250, 275, and 300, respectively, and initial positions on the positive x half-axis relative to each flight center; the base stations are located at (200, 3π/2) and (200, π/2). The user's motion center is (0, 0), with initial position (200, 0). The total number of simulation time slots is set to 1000, and to overcome the randomness of the network environment, each group of experiments is repeated 100 times.

5.2. Results and Analysis

The experiment compares the proposed dynamic variance sampling algorithm with current mainstream algorithms (ε-greedy [18], UCB, and Thompson sampling) from different perspectives, verifying its superiority. As seen in Figure 3, the cumulative regret of the proposed algorithm grows approximately logarithmically with the time slot and ultimately levels off, which confirms Theorem 1.
From Figure 4, it can be observed that as the number of time slots increases, all four algorithms are capable of gradually reducing the average regret value towards stability. However, the dynamic variance sampling algorithm proposed in this paper has a faster convergence rate compared to the other three algorithms, and the average regret value approaches closer to 0.
Figure 5 presents the comparison results of the average throughput for the four algorithms. It can be seen that the average throughput of the algorithm proposed in this paper is significantly higher than that of the other algorithms. When the number of time slots is sufficiently large, the average throughput approaches the ideal value.
Figure 6 illustrates the mean throughput of the four algorithms following 1000 iterations. Analysis of the figure reveals that, within the simulated environment described, the mean throughput of the algorithm introduced in this study has experienced enhancements of 7.63%, 11.96%, and 3.13% relative to the ε-greedy algorithm, Upper Confidence Bound (UCB) algorithm, and Thompson sampling algorithm, respectively.

6. Conclusions

Based on the MAB model, this study investigates how users dynamically select and access network nodes within an integrated space–air–ground emergency communication network when node network states are uncertain. Comparative simulation experiments were conducted across several online learning algorithms: the ε-greedy algorithm, rooted in a greedy heuristic, prioritizes immediate gains; the Upper Confidence Bound (UCB) algorithm relies on static confidence estimates and does not account for the interplay between sequential strategies, which reduces its adaptability; the Thompson sampling algorithm, by contrast, yields more consistent outcomes. The simulation results show that the proposed algorithm effectively balances the exploration and exploitation of network node states. When node state information is hidden, it markedly reduces learning regret, accelerates convergence, and increases system throughput. The current research is limited to the network node access mechanism for a single user; access mechanisms for multiple users within an integrated space–air–ground emergency communication network warrant further investigation.

Author Contributions

Conceptualization, methodology, Z.X.; software, validation, formal analysis, writing—original draft preparation, Q.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Meng, Y.; Qi, P.; Lei, Q.; Zhang, Z.; Ren, J.; Zhou, X. Electromagnetic Spectrum Allocation Method for Multi-Service Irregular Frequency-Using Devices in the Space-Air-Ground Integrated Network. Sensors 2022, 22, 9227. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, J.; Shi, Y.; Fadlullah, Z.M.; Kato, N. Space-Air-Ground Integrated Network: A Survey. IEEE Commun. Surv. Tutor. 2018, 20, 2714–2741. [Google Scholar] [CrossRef]
  3. Geng, Y.; Cao, X.; Cui, H.; Xiao, Z. Network Element Placement for Space-Air-Ground Integrated Network: A Tutorial. Chin. J. Electron. 2022, 31, 1013–1024. [Google Scholar] [CrossRef]
  4. Yin, Z.; Cheng, N.; Luan, T.H.; Wang, P. Physical Layer Security in Cybertwin-Enabled Integrated Satellite-Terrestrial Vehicle Networks. IEEE Trans. Veh. Technol. 2022, 71, 4561–4572. [Google Scholar] [CrossRef]
  5. Anjum, M.J.; Anees, T.; Tariq, F.; Shaheen, M.; Amjad, S.; Iftikhar, F.; Ahmad, F. Space-Air-Ground Integrated Network for Disaster Management: Systematic Literature Review. Appl. Comput. Intell. Soft Comput. 2023, 2023, 6037882. [Google Scholar] [CrossRef]
  6. Cui, H.; Zhang, J.; Geng, Y.; Xiao, Z.; Sun, T.; Zhang, N.; Liu, J.; Wu, Q.; Cao, X. Space-Air-Ground Integrated Network (SAGIN) for 6G: Requirements, Architecture and Challenges. China Commun. 2022, 19, 90–108. [Google Scholar] [CrossRef]
  7. Sridharan, K.; Yoo, S.W.W. Online Learning with Unknown Constraints. arXiv 2024, arXiv:2403.04033. [Google Scholar]
  8. Xu, Y.; Anpalagan, A.; Wu, Q.; Shen, L.; Gao, Z.; Wang, J. Decision-Theoretic Distributed Channel Selection for Opportunistic Spectrum Access: Strategies, Challenges and Solutions. IEEE Commun. Surv. Tutor. 2013, 15, 1689–1713. [Google Scholar] [CrossRef]
  9. Dong, S.; Lee, J. Greedy confidence bound techniques for restless multi-armed bandit based Cognitive Radio. In Proceedings of the 2013 47th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 20–22 March 2013. [Google Scholar]
  10. Smallwood, R.D.; Sondik, E.J. The Optimal Control of Partially Observable Markov Processes Over a Finite Horizon. Oper. Res. 1973, 21, 1071–1088. [Google Scholar] [CrossRef]
  11. Agrawal, R. Sample mean based index policies by O(log n) regret for the multi-armed bandit problem. Adv. Appl. Probab. 1995, 27, 1054–1078. [Google Scholar] [CrossRef]
  12. Jiang, Z.; Hongcui, C.; Jiahao, X. Channel Selection Based on Multi-armed Bandit. Telecommun. Eng. 2015, 55, 1094–1100. [Google Scholar]
  13. Wang, Z.; Yin, Z.; Wang, X.; Cheng, N.; Zhang, Y.; Luan, T.H. Label-Free Deep Learning Driven Secure Access Selection in Space-Air-Ground Integrated Networks. In Proceedings of the GLOBECOM 2023–2023 IEEE Global Communications Conference, Kuala Lumpur, Malaysia, 4–8 December 2023. [Google Scholar]
  14. Agrawal, S.; Goyal, N. Analysis of Thompson Sampling for the multi-armed bandit problem. Statistics 2011, 23, 39.1–39.26. [Google Scholar]
  15. Auer, P.; Cesa-Bianchi, N.; Fischer, P. Finite-time Analysis of the Multiarmed Bandit Problem. Mach. Learn. 2002, 47, 235–256. [Google Scholar] [CrossRef]
  16. Agrawal, S.; Goyal, N. Thompson Sampling for Contextual Bandits with Linear Payoffs. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), Atlanta, GA, USA, 17–19 June 2013; pp. 513–521. [Google Scholar]
  17. Agrawal, S.; Goyal, N. Further Optimal Regret Bounds for Thompson Sampling. arXiv 2012, arXiv:1209.3353. [Google Scholar]
  18. Agrawal, S.; Goyal, N. Near-optimal regret bounds for Thompson sampling. J. ACM 2017, 64, 30. [Google Scholar] [CrossRef]
Figure 1. Network architecture.
Figure 2. Schematic diagram of the simulation scene.
Figure 3. Cumulative regret curve of different algorithms.
Figure 4. Average regret curve of different algorithms.
Figure 5. Average throughput curve of different algorithms.
Figure 6. Average throughput after 1000 iterations.
Table 1. Review of the main symbols.
Symbol | Meaning of Symbol
k_i(t) | Number of times network node i has been selected before moment t
S_i(t) | Number of times network node i has gained benefit 1 after being selected before moment t
F_i(t) | Number of times network node i has gained benefit 0 after being selected before moment t
i(t) | Index of the selected network node at moment t
r_i(t) | Benefit of network node i after being selected at moment t
μ_i | Expected benefit of network node i
Δ_i | μ_1 − μ_i
x_i, y_i | Real numbers satisfying μ_i < x_i < y_i < μ_1
d(x_i, y_i) | KL divergence between the Bernoulli distributions with means x_i and y_i
T | Total number of time slots
τ | Variable in the definition of F_t that depends on the value taken at time t
L_i(T) | ln T / d(x_i, y_i)
θ_i(t) | Sample drawn by the algorithm from the posterior distribution of network node i at moment t
μ̂_i(t) | Empirical mean benefit of network node i at time t
E_i^μ(t) | Event {μ̂_i(t) ≤ x_i}
E_i^θ(t) | Event {θ_i(t) ≤ y_i}
F_t | Sequence of historical strategy information up to moment t
p_{i,t} | P(θ_1(t) > y_i | F_{t−1})
P(·) | Probability of an event
E(·) | Expectation
I(·) | Indicator function
N(μ, σ²) | Normal distribution
Sample(·) | Drawing a sample from a distribution
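Using the notation of Table 1, the regret plotted in Figures 3 and 4 can be written as R(T) = Σ_i Δ_i · k_i(T), with Δ_i = μ_1 − μ_i and μ_1 the best node's expected benefit. A minimal sketch (our own helper, hypothetical name):

```python
def pseudo_regret(mu, pulls):
    """R(T) = sum_i (mu_1 - mu_i) * k_i(T), where mu_1 = max_i mu_i.

    mu    -- expected benefit mu_i of each node (Table 1)
    pulls -- k_i(T): how often each node was selected over the T slots
    """
    mu_best = max(mu)
    return sum((mu_best - m) * k for m, k in zip(mu, pulls))
```

For example, with node benefits (0.9, 0.5, 0.3) and pull counts (80, 15, 5), the pseudo-regret is 0.4 · 15 + 0.6 · 5 = 9.0; sublinear regret growth means the suboptimal pull counts grow only logarithmically in T.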
Table 2. Meanings of symbols and simulation parameter settings.
Symbol | Meaning of Symbol | Value
d | Satellite orbital altitude | 600 km
l | Horizontal distance between satellite beam center and user | 0
β_dB | Satellite channel gain caused by rain attenuation | ln(β_dB) ~ N(3.125, 1.6)
G | Maximum gain of satellite antenna | 52 dB
α_3dB | 3 dB angle of satellite beam | 0.4
p_SAT | Satellite downlink power | 120 dBW
U_d | Horizontal distance from UAV to user | 0–400 m
U_h | Height of the UAV | 200–300 m
K | Rician factor of UAV small-scale fading | 10 dB
p_UAV | Downlink power of UAV | 3 dBW
C_0 | Channel power gain at a reference distance of 1 m | 40 dB
r | Distance between base station and user | 0–400 m
p_BS | Downlink power of base station | 20 dBW
g_0 | Small-scale fading of base station | g_0 ~ Nakagami-m(2, 1)
δ_SAT, δ_UAV, δ_BS | Noise power received by users | 1 dBW
R_upper | Upper bound of communication rate | 100
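The fading entries of Table 2 can be sampled with standard constructions. The sketch below is our own (NumPy assumed; helper names are hypothetical, and treating the 1.6 in the rain-attenuation entry as a variance is our assumption):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def rain_attenuation_db(n):
    """beta_dB with ln(beta_dB) ~ N(3.125, 1.6), per Table 2."""
    return np.exp(rng.normal(3.125, np.sqrt(1.6), size=n))

def nakagami_gain(n, m=2.0, omega=1.0):
    """Nakagami-m amplitude g_0 via the Gamma construction:
    if X ~ Gamma(shape=m, scale=omega/m), then sqrt(X) is Nakagami-m."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))

def rician_gain(n, k_db=10.0):
    """Rician amplitude with K-factor 10 dB (UAV small-scale fading).
    Line-of-sight mean s plus circular scatter of per-axis std sigma,
    normalized so E[|h|^2] = 1."""
    k = 10 ** (k_db / 10)
    s = np.sqrt(k / (k + 1))             # LoS component
    sigma = np.sqrt(1 / (2 * (k + 1)))   # scatter per dimension
    x = rng.normal(s, sigma, size=n)
    y = rng.normal(0.0, sigma, size=n)
    return np.hypot(x, y)
```

Both amplitude models are normalized to unit average power (E[g²] = Ω = 1), so the downlink powers in Table 2 scale them directly.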
Table 3. Scene parameter settings.
Parameter | Value
Scene radius | 400 m
Number of satellites | 1
Number of base stations | 2
Number of UAVs | 5
Angular velocity of UAV | 5°/s
Flight radius of UAV | 150 m
User moving radius | 200 m
User angular velocity | 0.5°/s

Share and Cite

MDPI and ACS Style

Gao, Q.; Xie, Z. Multi-Armed Bandit-Based User Network Node Selection. Sensors 2024, 24, 4104. https://doi.org/10.3390/s24134104
