Article

Decentralized Stochastic Control with Finite-Dimensional Memories: A Memory Limitation Approach

by Takehiro Tottori 1,* and Tetsuya J. Kobayashi 1,2,3,4
1 Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8654, Japan
2 Institute of Industrial Science, The University of Tokyo, Tokyo 153-8505, Japan
3 Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8654, Japan
4 Universal Biology Institute, The University of Tokyo, Tokyo 113-8654, Japan
* Author to whom correspondence should be addressed.
Entropy 2023, 25(5), 791; https://doi.org/10.3390/e25050791
Submission received: 20 February 2023 / Revised: 4 April 2023 / Accepted: 9 May 2023 / Published: 12 May 2023
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract
Decentralized stochastic control (DSC) is a stochastic optimal control problem with multiple controllers. DSC assumes that each controller is unable to accurately observe the target system and the other controllers. This setup causes two difficulties in DSC. One is that each controller has to memorize the infinite-dimensional observation history, which is not practical because the memory of actual controllers is limited. The other is that the reduction of infinite-dimensional sequential Bayesian estimation to a finite-dimensional Kalman filter is impossible in general DSC, even for linear-quadratic-Gaussian (LQG) problems. To address these issues, we propose an alternative theoretical framework to DSC, memory-limited DSC (ML-DSC). ML-DSC explicitly formulates the finite-dimensional memories of the controllers. Each controller is jointly optimized to compress the infinite-dimensional observation history into the prescribed finite-dimensional memory and to determine the control based on it. Therefore, ML-DSC can be a practical formulation for actual memory-limited controllers. We demonstrate how ML-DSC works in the LQG problem. Conventional DSC cannot be solved except in special LQG problems where the information the controllers have is independent or partially nested. We show that ML-DSC can be solved in more general LQG problems where the interaction among the controllers is not restricted.

1. Introduction

Optimal control problems of a stochastic dynamical system by multiple decentralized controllers appear in various practical applications, including real-time communication [1,2], decentralized hypothesis testing [3], and networked control [4]. Such problems have been extensively studied in stochastic optimal control theory as decentralized stochastic control (DSC) [5,6,7,8,9,10]. DSC consists of a target system and multiple controllers (Figure 1a) and assumes that each controller cannot completely observe the state of the system and the controls of the other controllers. Information about the target system and the other controllers is obtained only via noisy observations. Thus, each controller should be optimized to determine its control solely from its own observation history. Even for a pair of finite-dimensional state and observation, the observation history is infinite-dimensional. As a result, the theoretically optimal controller should ideally possess infinite-dimensional memory. In practical applications, however, the available memory size of each controller is finite and often severely limited. Thus, we have to obtain solutions based on finite-dimensional memory by heuristically employing approximation methods, which may impair the optimality of the ideal solution, especially when the available memory size is insufficient.
Moreover, another difficulty arises in DSC due to the decentralized setting. If the number of controllers is one, or if all controllers share their observation histories, DSC reduces to partially observable stochastic control (POSC), in which the observation histories of all controllers can be summarized optimally as the posterior probability of the state by sequential Bayesian estimation [11,12,13,14,15,16]. The posterior probability of the state is also infinite-dimensional, and thus the same problem as in DSC still persists, even for POSC. Nevertheless, in POSC, this difficulty can be circumvented by focusing on the linear-quadratic-Gaussian (LQG) setting, under which the posterior probability of the state can be represented by the finite-dimensional mean vector and covariance matrix of a Gaussian distribution, and the sequential Bayesian estimation can be computed by the Kalman filter [11,12,14]. Therefore, POSC is practically solvable, at least in the LQG problem. The difficulty in DSC is that this nice property of the LQG setting is not retained.
In DSC, each controller cannot access the observation histories of the other controllers or the state of the system, which forces each controller to estimate all of them from its own observation history. This prevents the Bayesian estimation of the posterior from being computed sequentially and prevents the infinite-dimensional observation history from being compressed into finite-dimensional sufficient statistics, even for the LQG problem. Some theoretical studies have addressed this issue by restricting the interaction among the controllers. If the information the controllers have is independent [8,9,10] or partially nested [17,18,19,20,21,22], DSC can enjoy the nice property of the LQG problem and be solved explicitly and optimally with finite-dimensional memory. However, the LQG problem with more general interactions, as well as non-LQG problems, remain open problems in DSC.
In order to address these issues, we propose an alternative theoretical framework to DSC, memory-limited DSC (ML-DSC), which is the decentralized version of memory-limited POSC (ML-POSC) [23,24]. The two major difficulties in DSC originate from ignoring the memory constraints of the controllers when deriving the optimal estimation and control. Unlike conventional DSC, ML-DSC explicitly formulates the finite-dimensional memories of the controllers and their capacities (Figure 1b). In ML-DSC, each controller is optimized to compress the infinite-dimensional observation history into the prescribed finite-dimensional memory and to determine the control based on it. In other words, each controller controls not only the dynamics of the target system but also the dynamics of its own memory. This formulation enables ML-DSC to evade the difficulties mentioned above.
Furthermore, we provide a way to solve the optimization problem associated with the ML-DSC formulation. Specifically, we address the optimization problem by converting ML-DSC on the state space into a deterministic optimal control problem on the probability density function space. This technique has recently been used in mean-field stochastic control [25,26] and ML-POSC [23,24], and it is also effective for ML-DSC. ML-DSC can then be solved in a similar way to a deterministic optimal control problem on the probability density function space; the optimal control function of ML-DSC is obtained by jointly solving the Hamilton–Jacobi–Bellman (HJB) equation and the Fokker–Planck (FP) equation. HJB–FP equations also appear in mean-field stochastic game and control [25,26,27,28,29] and ML-POSC [23,24], and numerous numerical algorithms have been proposed [24,30,31,32]. Using these numerical algorithms, ML-DSC may be solved effectively, even in general problems. It should be noted that an idea similar to ML-DSC has been employed for more than a decade in the decentralized partially observable Markov decision process (DEC-POMDP) with finite-state controllers [33,34,35,36,37,38,39]. However, the finite-state controller algorithms of DEC-POMDP depend strongly on discreteness, and thus they are not applicable to ML-DSC, where continuous time and states are considered.
We apply ML-DSC and our algorithm to the LQG problem. Conventional DSC can only be solved for special LQG problems where the information of the controllers is independent [8,9,10] or partially nested [17,18,19,20,21,22]. In contrast, ML-DSC can be solved even in LQG problems with more general interactions among the controllers. In the LQG problem of POSC, estimation and control are clearly separated and are optimized by the Kalman filter and the Riccati equation, respectively [11,14]. In the LQG problem of ML-DSC, estimation and control are not clearly separated and are jointly optimized by a modified Riccati equation, which we call the decentralized Riccati equation in this paper. We note that this coupling of estimation and control also appears in conventional DSC [17,18,19,20,21,22] and ML-POSC [23,24]; it may therefore be induced by the decentralized structure and the memory limitation. Finally, we conduct two numerical experiments for the LQG problems of ML-DSC. One controls one-dimensional divergent state dynamics, and the other controls two-dimensional oscillatory state dynamics. These numerical experiments demonstrate that the decentralized Riccati equation is superior to the Riccati equation in the LQG problems of ML-DSC.
The rest of this paper is organized as follows. In Section 2, we briefly review conventional DSC. In Section 3, we formulate ML-DSC. In Section 4, we solve ML-DSC. In Section 5, we apply ML-DSC to the LQG problem. In Section 6, we conduct numerical experiments on two LQG problems of ML-DSC. In Section 7, we conclude with a discussion.

2. Review of Decentralized Stochastic Control

In this section, we briefly review conventional DSC (Figure 1a) [8,9,10]. DSC consists of a target system and $N$ controllers. The vector $x_t \in \mathbb{R}^{d_x}$ is the state of the system at time $t \in [0, T]$, which evolves by the following stochastic differential equation (SDE):
\[ dx_t = b(t, x_t, u_t)\,dt + \sigma(t, x_t, u_t)\,d\omega_t, \tag{1} \]
where $x_0$ obeys $p_0(x_0)$, $\omega_t \in \mathbb{R}^{d_\omega}$ is the standard Wiener process, $u_t^i \in \mathbb{R}^{d_{u^i}}$ is the control of controller $i$, and $u_t := (u_t^1, u_t^2, \ldots, u_t^N) \in \mathbb{R}^{d_u}$ is the joint control of all controllers. We note that $d_u := \sum_{i=1}^N d_{u^i}$. DSC often assumes that the system is composed of $N$ agents and that the state of the system is decomposed into $x_t := (x_t^1, x_t^2, \ldots, x_t^N) \in \mathbb{R}^{d_x}$, where $x_t^i \in \mathbb{R}^{d_{x^i}}$ is the state of agent $i$. In this paper, we do not assume such a situation, because our formulation of the state of the system includes it as a special case.
In DSC, controller $i$ cannot completely observe the state of the system $x_t$ and the joint control of all controllers $u_t$. It can only obtain the noisy observation $y_t^i \in \mathbb{R}^{d_{y^i}}$, which evolves by the following SDE:
\[ dy_t^i = h^i(t, x_t, u_t)\,dt + \gamma^i(t, x_t, u_t)\,d\nu_t^i, \tag{2} \]
where $y_0^i$ obeys $p_0^i(y_0^i)$, and $\nu_t^i \in \mathbb{R}^{d_{\nu^i}}$ is the standard Wiener process. Controller $i$'s observation $y_t^i$ is controlled by the other controllers through the joint control $u_t$, which expresses the communication among the controllers. Controller $i$ determines its control $u_t^i$ based on the observation history $y_{0:t}^i := \{ y_\tau^i \mid \tau \in [0, t] \}$ as follows:
\[ u_t^i = u^i(t, y_{0:t}^i). \tag{3} \]
The objective function of DSC is given by the following expected cumulative cost function:
\[ J[u] := \mathbb{E}_{p(x_{0:T}, y_{0:T}; u)} \left[ \int_0^T f(t, x_t, u_t)\,dt + g(x_T) \right], \tag{4} \]
where $f$ is the running cost function, $g$ is the terminal cost function, $p(x_{0:T}, y_{0:T}; u)$ is the joint probability of $x_{0:T}$ and $y_{0:T}$ given $u$ as a parameter, and $\mathbb{E}_p[\cdot]$ is the expectation with respect to $p$. DSC is the problem of finding the optimal joint control function $u^*$ that minimizes the objective function $J[u]$:
\[ u^* := \mathop{\mathrm{arg\,min}}_u J[u]. \tag{5} \]
In DSC, controller $i$ needs to memorize the infinite-dimensional observation history $y_{0:t}^i$ to determine the optimal control $u_t^{*i} = u^{*i}(t, y_{0:t}^i)$. This is one of the major obstacles in DSC for implementing controllers with finite and limited memory.

3. Memory-Limited Decentralized Stochastic Control

In this section, we formulate ML-DSC, which circumvents the difficulty in DSC by explicitly formulating the finite-dimensional memories of the controllers.

3.1. Problem Formulation

In this subsection, we formulate ML-DSC (Figure 1b). ML-DSC explicitly formulates the finite-dimensional memory of controller $i$ by $z_t^i \in \mathbb{R}^{d_{z^i}}$. The memory dimension $d_{z^i}$ is prescribed by the available memory size of controller $i$. Controller $i$ compresses the infinite-dimensional observation history $y_{0:t}^i$ into the finite-dimensional memory $z_t^i$ by the following SDE:
\[ dz_t^i = c^i(t, z_t^i, v_t^i)\,dt + \kappa^i(t, z_t^i, v_t^i)\,dy_t^i + \eta^i(t, z_t^i, v_t^i)\,d\xi_t^i, \tag{6} \]
where $z_0^i$ obeys $p_0^i(z_0^i)$, $\xi_t^i \in \mathbb{R}^{d_{\xi^i}}$ is the standard Wiener process, and $v_t^i \in \mathbb{R}^{d_{v^i}}$ is the control of the memory. Unlike conventional DSC, ML-DSC can take into account the intrinsic stochasticity of the memory, which is modeled by the standard Wiener process $d\xi_t^i$ in the memory dynamics (6). In addition, the compression of the infinite-dimensional observation history $y_{0:t}^i$ into the finite-dimensional memory $z_t^i$ is optimized by the memory control $v_t^i$. In ML-DSC, controller $i$ determines the state control $u_t^i$ and the memory control $v_t^i$ based on the finite-dimensional memory $z_t^i$ as follows:
\[ u_t^i = u^i(t, z_t^i), \qquad v_t^i = v^i(t, z_t^i). \tag{7} \]
The objective function of ML-DSC is given by the following expected cumulative cost function:
\[ J[u, v] := \mathbb{E}_{p(x_{0:T}, y_{0:T}, z_{0:T}; u, v)} \left[ \int_0^T f(t, x_t, u_t, v_t)\,dt + g(x_T) \right], \tag{8} \]
where $f$ is the running cost function, $g$ is the terminal cost function, $p(x_{0:T}, y_{0:T}, z_{0:T}; u, v)$ is the joint probability of $x_{0:T}$, $y_{0:T}$, and $z_{0:T}$ given $u$ and $v$ as parameters, and $\mathbb{E}_p[\cdot]$ is the expectation with respect to $p$. Unlike the cost function $f$ of DSC in Equation (4), the cost function $f$ of ML-DSC in Equation (8) depends on the memory control $v_t$ as well as the state control $u_t$. From a practical viewpoint, it is natural to consider the costs of both control and memory. ML-DSC optimizes the state control function $u$ and the memory control function $v$ based on the objective function $J[u, v]$:
\[ (u^*, v^*) := \mathop{\mathrm{arg\,min}}_{u, v} J[u, v]. \tag{9} \]
The optimal memory control function $v^* := (v^{*1}, v^{*2}, \ldots, v^{*N})$ optimizes the memory dynamics (6), which can be interpreted as optimizing the compression of the observation history into the finite-dimensional memory. In the LQG problem of POSC, the optimal memory control function $v^*$ turns the memory dynamics into the Kalman filter, which is the optimal compression of the observation history in this problem [23]. We expect that the optimal memory control function $v^*$ is also effective for more general problems of ML-DSC.
In ML-DSC, controller $i$ determines the optimal control functions $u^{*i}$ and $v^{*i}$ based only on the finite-dimensional memory $z_t^i$. In addition, ML-DSC can take into account the intrinsic stochasticity and the control cost of the memory. Thus, ML-DSC can explicitly accommodate various realistic constraints of the controllers, such as memory size, noise in the controllers, and the cost of updating memory, none of which can be explicitly addressed in conventional DSC.
It should be noted that here we consider memory size only for storing continuous time-series with finite-dimensional vectors. While memory size also matters when we consider quantization and storing of real valued observations, these topics are out of the scope of this work.
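To make the formulation concrete, the following minimal sketch simulates the coupled state, observation, and memory dynamics of Equations (1), (2), (6), and (7) by the Euler–Maruyama method. The scalar drift functions, the unit noise intensities ($\sigma = \kappa^i = \eta^i = 1$), and the linear memory-feedback policies are illustrative assumptions, not part of the ML-DSC formulation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 10.0, 1e-3
n_steps = int(T / dt)
N = 2                                            # number of controllers

# Illustrative (assumed) model: scalar state, scalar observation and memory per controller.
def b(t, x, u):     return -x + u.sum()          # state drift of Eq. (1)
def h(t, x, i):     return x                     # observation drift of Eq. (2)
def c(t, z_i, v_i): return v_i                   # memory drift of Eq. (6)

# Illustrative memory-feedback policies u^i(t, z^i), v^i(t, z^i) of Eq. (7).
def u_policy(t, z_i): return -0.5 * z_i
def v_policy(t, z_i): return -1.0 * z_i

x = 1.0
z = np.zeros(N)
xs = np.empty(n_steps)
for k in range(n_steps):
    t = k * dt
    u = np.array([u_policy(t, z[i]) for i in range(N)])
    v = np.array([v_policy(t, z[i]) for i in range(N)])
    dw  = rng.normal(0.0, np.sqrt(dt))           # state noise d omega
    dnu = rng.normal(0.0, np.sqrt(dt), N)        # observation noises d nu^i
    dxi = rng.normal(0.0, np.sqrt(dt), N)        # intrinsic memory noises d xi^i
    dy = np.array([h(t, x, i) * dt for i in range(N)]) + dnu                  # Eq. (2), gamma^i = 1
    z  = z + np.array([c(t, z[i], v[i]) for i in range(N)]) * dt + dy + dxi   # Eq. (6), kappa^i = eta^i = 1
    x  = x + b(t, x, u) * dt + dw                                             # Eq. (1), sigma = 1
    xs[k] = x
print("time-averaged squared state:", np.mean(xs**2))
```

Any memory-feedback policies can be plugged in here; ML-DSC is the problem of choosing $u^i$ and $v^i$, and thereby the compression performed by the memory dynamics, optimally.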

3.2. Extended State

In this subsection, we generalize the formulation of ML-DSC based on the extended state. This generalization is useful for mathematical investigations by simplifying the notation of ML-DSC. Furthermore, it clarifies the difference between ML-DSC and the conventional stochastic optimal control problems.
We define the extended state $s_t \in \mathbb{R}^{d_s}$, the extended control $\tilde{u}_t^i \in \mathbb{R}^{d_{\tilde{u}^i}}$, the extended joint control $\tilde{u}_t \in \mathbb{R}^{d_{\tilde{u}}}$, and the extended standard Wiener process $\tilde{\omega}_t \in \mathbb{R}^{d_{\tilde{\omega}}}$ as follows:
\[ s_t := \begin{pmatrix} x_t \\ z_t^1 \\ \vdots \\ z_t^N \end{pmatrix}, \quad \tilde{u}_t^i := \begin{pmatrix} u_t^i \\ v_t^i \end{pmatrix}, \quad \tilde{u}_t := \begin{pmatrix} \tilde{u}_t^1 \\ \vdots \\ \tilde{u}_t^N \end{pmatrix}, \quad \tilde{\omega}_t := \begin{pmatrix} \omega_t \\ \nu_t^1 \\ \vdots \\ \nu_t^N \\ \xi_t^1 \\ \vdots \\ \xi_t^N \end{pmatrix}, \tag{10} \]
where $d_s := d_x + \sum_{i=1}^N d_{z^i}$, $d_{\tilde{u}^i} := d_{u^i} + d_{v^i}$, $d_{\tilde{u}} := \sum_{i=1}^N d_{\tilde{u}^i}$, and $d_{\tilde{\omega}} := d_\omega + \sum_{i=1}^N d_{\nu^i} + \sum_{i=1}^N d_{\xi^i}$.
Based on the extended state $s_t$, the extended joint control $\tilde{u}_t$, and the extended standard Wiener process $\tilde{\omega}_t$, the state, observation, and memory SDEs, i.e., Equations (1), (2), and (6), are summarized as follows:
\[ ds_t = \begin{pmatrix} b \\ c^1 + \kappa^1 h^1 \\ \vdots \\ c^N + \kappa^N h^N \end{pmatrix} dt + \begin{pmatrix} \sigma & O & \cdots & O & O & \cdots & O \\ O & \kappa^1 \gamma^1 & \cdots & O & \eta^1 & \cdots & O \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & \kappa^N \gamma^N & O & \cdots & \eta^N \end{pmatrix} d\tilde{\omega}_t =: \tilde{b}(t, s_t, \tilde{u}_t)\,dt + \tilde{\sigma}(t, s_t, \tilde{u}_t)\,d\tilde{\omega}_t, \tag{11} \]
where $p_0(s_0) = p_0(x_0) \prod_{i=1}^N p_0^i(z_0^i)$. Thus, the SDE of ML-DSC can be generalized as follows:
\[ ds_t = \tilde{b}(t, s_t, \tilde{u}_t)\,dt + \tilde{\sigma}(t, s_t, \tilde{u}_t)\,d\tilde{\omega}_t, \tag{12} \]
where $s_0$ obeys $p_0(s_0)$. We note that the structures of $\tilde{b}(t, s_t, \tilde{u}_t)$ and $\tilde{\sigma}(t, s_t, \tilde{u}_t)$ in Equation (12) are not necessarily restricted to those in Equation (11). Importantly, in ML-DSC, controller $i$ determines the extended control $\tilde{u}_t^i$ based solely on the memory $z_t^i$ as follows:
\[ \tilde{u}_t^i = \tilde{u}^i(t, z_t^i). \tag{13} \]
The objective function of ML-DSC (8) is generalized as follows:
\[ J[\tilde{u}] := \mathbb{E}_{p(s_{0:T}; \tilde{u})} \left[ \int_0^T \tilde{f}(t, s_t, \tilde{u}_t)\,dt + \tilde{g}(s_T) \right], \tag{14} \]
where $\tilde{f}$ is the running cost function and $\tilde{g}$ is the terminal cost function. Therefore, the generalized ML-DSC is the problem to find the optimal extended joint control function $\tilde{u}^*$ that minimizes the objective function $J[\tilde{u}]$:
\[ \tilde{u}^* := \mathop{\mathrm{arg\,min}}_{\tilde{u}} J[\tilde{u}] \tag{15} \]
under the constraint of Equation (13).
This generalization (12)–(15) clarifies the difference between ML-DSC and conventional stochastic optimal control problems. If controller $i$ determines the extended control $\tilde{u}_t^i$ based on the whole extended state $s_t := (x_t, z_t^1, \ldots, z_t^N)$ as $\tilde{u}_t^i = \tilde{u}^i(t, s_t)$, the problem becomes equivalent to completely observable stochastic control (COSC), which is the most basic stochastic optimal control problem (Figure 2a) [13,14,40]. Furthermore, if controller $i$ determines the extended control $\tilde{u}_t^i$ based on the joint memory $z_t := (z_t^1, \ldots, z_t^N)$ as $\tilde{u}_t^i = \tilde{u}^i(t, z_t)$, the problem reduces to ML-POSC, in which all controllers share their information (Figure 2b) [23,24]. ML-DSC determines the extended control $\tilde{u}_t^i$ based solely on its own memory $z_t^i$ as $\tilde{u}_t^i = \tilde{u}^i(t, z_t^i)$ (13), which differs from COSC and ML-POSC (Figure 2c). While ML-DSC cannot be solved in a similar way to COSC [14,40,41], as shown in the next section, it can be solved in a similar way to ML-POSC [23,24] because the method of ML-POSC is more general than that of COSC.
In the following sections, we mainly consider the formulation of this subsection rather than that of Section 3.1 because it is simpler and more general. Moreover, we omit $\tilde{\cdot}$ for notational simplicity.
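The block structure of Equation (11) can be checked with a small numerical sketch. The scalar dimensions and the constant coefficient values below are illustrative assumptions; the point is only how $\tilde{\sigma}$ is assembled from $\sigma$, $\kappa^i \gamma^i$, and $\eta^i$.

```python
import numpy as np

# Illustrative scalar case (N = 2): d_x = d_y^i = d_z^i = 1, constant coefficients.
sigma = np.array([[0.5]])                        # state noise intensity
kappa = [np.array([[1.0]]), np.array([[1.0]])]   # observation gains of the memories
gamma = [np.array([[0.3]]), np.array([[0.3]])]   # observation noise intensities
eta   = [np.array([[0.1]]), np.array([[0.1]])]   # intrinsic memory noise intensities
O = np.zeros((1, 1))

# sigma-tilde of Eq. (11), acting on the extended noise (d omega, d nu^1, d nu^2, d xi^1, d xi^2).
sigma_tilde = np.block([
    [sigma, O,                   O,                   O,      O     ],
    [O,     kappa[0] @ gamma[0], O,                   eta[0], O     ],
    [O,     O,                   kappa[1] @ gamma[1], O,      eta[1]],
])
print(sigma_tilde)   # 3 x 5 matrix: rows (x, z^1, z^2), columns (omega, nu^1, nu^2, xi^1, xi^2)
```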

4. Derivation of Optimal Control Function

In this section, we solve ML-DSC by employing the technique in mean-field stochastic control [25,26] and ML-POSC [23,24].

4.1. Derivation of Optimal Control Function

In this subsection, we derive the optimal control function of ML-DSC. In ML-DSC, each controller cannot directly access the state of the system or the memories of the other controllers. This constraint prevents ML-DSC from being solved by the conventional methods of COSC, such as Bellman's dynamic programming principle on the extended state space [13,14,40]. In order to address this issue, we convert ML-DSC on the extended state space into a deterministic optimal control problem on the probability density function space. A similar technique has also been used in mean-field stochastic control [25,26] and ML-POSC [23,24], and it is effective for a broader class of stochastic optimal control problems than the conventional methods of COSC.
The extended state SDE (12) can be converted into the following Fokker–Planck (FP) equation:
\[ \frac{\partial p_t(s)}{\partial t} = \mathcal{L}_u p_t(s), \tag{16} \]
where the initial condition is given by $p_0(s)$, and $\mathcal{L}_u$ is the forward diffusion operator, which is defined by
\[ \mathcal{L}_u p_t(s) := - \sum_{i=1}^{d_s} \frac{\partial \left( b_i(t, s, u) p_t(s) \right)}{\partial s_i} + \frac{1}{2} \sum_{i,j=1}^{d_s} \frac{\partial^2 \left( D_{ij}(t, s, u) p_t(s) \right)}{\partial s_i \partial s_j}, \tag{17} \]
where $D(t, s, u) := \sigma(t, s, u) \sigma^{\top}(t, s, u)$. The objective function (14) can be calculated as follows:
\[ J[u] = \int_0^T \bar{f}(t, p_t, u_t)\,dt + \bar{g}(p_T), \tag{18} \]
where $\bar{f}(t, p, u) := \mathbb{E}_{p(s)}[f(t, s, u)]$ and $\bar{g}(p) := \mathbb{E}_{p(s)}[g(s)]$. We note that $\tilde{\cdot}$ is omitted for notational simplicity. From Equations (16) and (18), ML-DSC on the extended state space is converted into a deterministic optimal control problem on the probability density function space.
When the problem is represented by the extended state, each controller cannot completely access the extended state in ML-DSC, which hampers the conventional methods of COSC. By lifting the state variable from the extended state to its probability density function, this difficulty can be avoided, because every controller can completely access the probability density function owing to its deterministic nature. As a result, the optimality condition of ML-DSC is obtained in a similar way to a deterministic optimal control problem, i.e., by Pontryagin's minimum principle on the probability density function space, which can be interpreted as a generalization of Bellman's dynamic programming principle on the extended state space [23,24]:
Theorem 1.
The optimal control function of ML-DSC satisfies the following equation:
\[ u^{*i}(t, z^i) = \mathop{\mathrm{arg\,min}}_{u^i} \mathbb{E}_{p_t(s^{-i} \mid z^i)} \left[ \mathcal{H}\!\left(t, s, (u^i, u^{*-i}), w\right) \right], \quad i \in \{1, 2, \ldots, N\}, \tag{19} \]
where $\mathcal{H}$ is the Hamiltonian, which is defined as follows:
\[ \mathcal{H}(t, s, u, w) := f(t, s, u) + \mathcal{L}_u^{\dagger} w(t, s), \tag{20} \]
where $\mathcal{L}_u^{\dagger}$ is the backward diffusion operator, which is defined as follows:
\[ \mathcal{L}_u^{\dagger} w(t, s) := \sum_{i=1}^{d_s} b_i(t, s, u) \frac{\partial w(t, s)}{\partial s_i} + \frac{1}{2} \sum_{i,j=1}^{d_s} D_{ij}(t, s, u) \frac{\partial^2 w(t, s)}{\partial s_i \partial s_j}, \tag{21} \]
which is the conjugate of $\mathcal{L}_u$ as follows:
\[ \int w(t, s) \mathcal{L}_u p(t, s)\,ds = \int p(t, s) \mathcal{L}_u^{\dagger} w(t, s)\,ds. \tag{22} \]
Variables $s^{-i} \in \mathbb{R}^{d_{s^{-i}}}$, $u^{-i} \in \mathbb{R}^{d_{u^{-i}}}$, and $(u^i, u^{-i}) \in \mathbb{R}^{d_u}$ are defined as follows:
\[ s^{-i} := \begin{pmatrix} x \\ z^1 \\ \vdots \\ z^{i-1} \\ z^{i+1} \\ \vdots \\ z^N \end{pmatrix}, \quad u^{-i} := \begin{pmatrix} u^1 \\ \vdots \\ u^{i-1} \\ u^{i+1} \\ \vdots \\ u^N \end{pmatrix}, \quad (u^i, u^{-i}) := \begin{pmatrix} u^1 \\ \vdots \\ u^{i-1} \\ u^i \\ u^{i+1} \\ \vdots \\ u^N \end{pmatrix}, \tag{23} \]
where $d_{s^{-i}} := d_s - d_{z^i}$ and $d_{u^{-i}} := d_u - d_{u^i}$. Function $w(t, s)$ is the solution of the following Hamilton–Jacobi–Bellman (HJB) equation:
\[ - \frac{\partial w(t, s)}{\partial t} = \mathcal{H}\!\left(t, s, u^*, w\right), \tag{24} \]
where $w(T, s) = g(s)$. Function $p_t(s^{-i} \mid z^i) := p_t(s) / \int p_t(s)\,ds^{-i}$ is the conditional probability density function of $s^{-i}$ given $z^i$, and $p_t(s)$ is the solution of FP Equation (16) driven by $u^*$. We note that $\tilde{\cdot}$ is omitted for notational simplicity.
Proof. 
The proof is shown in Appendix A. □
We note that the optimality condition (19) is a necessary condition for the optimal control function of ML-DSC, not a sufficient one. The optimality condition (19) becomes necessary and sufficient when the expected Hamiltonian $\bar{\mathcal{H}}(t, p, u, w) := \mathbb{E}_{p(s)}[\mathcal{H}(t, s, u, w)]$ is convex with respect to $p$ and $u$. The proof is almost the same as that in Reference [24]. In the following, the control function of ML-DSC that satisfies the optimality condition (19) is called the optimal control function of ML-DSC.

4.2. Numerical Algorithm

The optimal control function of ML-DSC (19) is obtained by jointly solving FP Equation (16) and HJB Equation (24). HJB-FP equations also appear in mean-field stochastic game and control [25,26,27,28,29] and ML-POSC [23,24], and numerous numerical algorithms have been developed [24,32]. As a result, ML-DSC may be solved practically by employing these numerical algorithms.
One of the most basic numerical algorithms for solving HJB–FP equations is the forward-backward sweep method (the fixed-point iteration method) [24,32,42,43,44], which computes the forward FP Equation (16) and the backward HJB Equation (24) alternately. While the convergence of the forward-backward sweep method is not guaranteed in mean-field stochastic game and control [32,42,43,44], it is guaranteed in ML-POSC because the coupling of the HJB–FP equations is limited to the optimal control function in ML-POSC [24]. The convergence of the forward-backward sweep method is also guaranteed in ML-DSC for the same reason. The proof is almost the same as that in Reference [24].
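The following minimal sketch illustrates the structure of the forward-backward sweep on a toy one-dimensional problem in which the controller has no memory at all, so the control is an open-loop function $u(t)$ obtained by minimizing the expected Hamiltonian under $p_t$ (a degenerate analogue of Equation (19)). The dynamics $dx_t = u_t\,dt + dW_t$, the quadratic costs, the grid sizes, and the explicit finite-difference scheme are illustrative assumptions, not the paper's ML-DSC setting.

```python
import numpy as np

# Grid for the toy problem dx = u(t) dt + dW, running cost x^2 + u^2, terminal cost x^2.
L, nx = 6.0, 241
x = np.linspace(-L, L, nx); dx = x[1] - x[0]
T, nt = 1.0, 1001
dt = T / (nt - 1)
sigma = 1.0

def d1(f):                        # first derivative (central differences)
    return np.gradient(f, dx)

def d2(f):                        # second derivative (zero at the boundary)
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    return out

def solve_fp(u):                  # forward FP equation, cf. Eq. (16), explicit Euler
    p = np.zeros((nt, nx))
    p[0] = np.exp(-(x - 2.0)**2 / 2); p[0] /= np.trapz(p[0], x)
    for k in range(nt - 1):
        p[k + 1] = p[k] + dt * (-d1(u[k] * p[k]) + 0.5 * sigma**2 * d2(p[k]))
        p[k + 1] = np.clip(p[k + 1], 0.0, None)
        p[k + 1] /= np.trapz(p[k + 1], x)
    return p

def solve_hjb(u):                 # backward HJB equation, cf. Eq. (24), explicit Euler
    w = np.zeros((nt, nx))
    w[-1] = x**2
    for k in range(nt - 2, -1, -1):
        H = x**2 + u[k]**2 + u[k] * d1(w[k + 1]) + 0.5 * sigma**2 * d2(w[k + 1])
        w[k] = w[k + 1] + dt * H
    return w

# Forward-backward sweep: alternate FP (forward) and HJB (backward), then update u(t).
u = np.zeros(nt)
for it in range(50):
    p = solve_fp(u)
    w = solve_hjb(u)
    u_new = np.array([-0.5 * np.trapz(p[k] * d1(w[k]), x) for k in range(nt)])
    if np.max(np.abs(u_new - u)) < 1e-4:
        u = u_new
        break
    u = u_new
print("iterations:", it + 1, " control at t=0:", round(float(u[0]), 3))
```

In ML-DSC, the control update uses the conditional expectation $\mathbb{E}_{p_t(s^{-i}\mid z^i)}$ of Equation (19) instead of the full expectation, but the alternating forward/backward structure is the same.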

4.3. Comparison with Completely Observable Stochastic Control or Memory-Limited Partially Observable Stochastic Control

COSC and ML-POSC can be solved in a similar way to ML-DSC [23,24]. In COSC, controller $i$ can completely observe the state of the target system $x_t$ and the memories of the other controllers $z_t^j$ ($j \neq i$), as well as its own memory $z_t^i$ (Figure 2a) [13,14,40]. As a result, the control $u_t^i$ is determined based on the whole extended state $s_t := (x_t, z_t^1, \ldots, z_t^N)$ as $u_t^i = u^i(t, s_t)$. From Pontryagin's minimum principle on the probability density function space, the optimal control function of COSC is given by the following equation:
\[ u^{*i}(t, s) = \mathop{\mathrm{arg\,min}}_{u^i} \mathcal{H}\!\left(t, s, (u^i, u^{*-i}), w\right), \tag{25} \]
where $w(t, s)$ is the solution of HJB Equation (24). This result is the same as that of Bellman's dynamic programming principle on the state space. Thus, Pontryagin's minimum principle on the probability density function space can be interpreted as a generalization of Bellman's dynamic programming principle on the state space.
In ML-POSC, controller $i$ can observe the memories of the other controllers $z_t^j$ ($j \neq i$) as well as its own memory $z_t^i$ (Figure 2b) [23,24]. As a result, the control $u_t^i$ is determined based on the joint memory $z_t := (z_t^1, \ldots, z_t^N)$ as $u_t^i = u^i(t, z_t)$. From Pontryagin's minimum principle on the probability density function space, the optimal control function of ML-POSC is given by the following equation:
\[ u^{*i}(t, z) = \mathop{\mathrm{arg\,min}}_{u^i} \mathbb{E}_{p_t(x \mid z)} \left[ \mathcal{H}\!\left(t, s, (u^i, u^{*-i}), w\right) \right], \tag{26} \]
where $w(t, s)$ is the solution of HJB Equation (24), $p_t(x \mid z) := p_t(s) / \int p_t(s)\,dx$ is the conditional probability density function of $x$ given $z$, and $p_t(s)$ is the solution of FP Equation (16) driven by $u^*$.
Although HJB Equation (24) is the same for COSC, ML-POSC, and ML-DSC, the optimal control function is different. Notably, the optimal control functions of ML-POSC and ML-DSC depend on FP Equation (16) because they need to estimate unobservables from observables. ML-POSC needs to estimate the state of the system $x_t$ from the joint memory of all controllers $z_t$. ML-DSC needs to estimate the memories of the other controllers $z_t^j$ ($j \neq i$) as well as the state of the system $x_t$ from its own memory $z_t^i$.
In COSC, the optimal control function depends only on HJB Equation (24). As a result, it can be obtained by solving the HJB Equation (24) backward in time from the terminal condition, which is called the value iteration method [45,46,47]. By contrast, in ML-POSC and ML-DSC, the optimal control function cannot be obtained by the value iteration method because it depends on FP Equation (16) as well as HJB Equation (24). Instead, it can be obtained by the forward-backward sweep method, which computes the forward FP Equation (16) and the backward HJB Equation (24) alternately [24].

5. Linear-Quadratic-Gaussian Problem

In this section, we demonstrate how ML-DSC works by applying it to the LQG problem. In the conventional DSC, the LQG problem can be solved only when the information of the controllers is independent [8,9,10] or partially nested [17,18,19,20,21,22]. By contrast, in ML-DSC, the LQG problem can be solved without restricting the interaction among the controllers.

5.1. Problem Formulation

In this subsection, we formulate the LQG problem of ML-DSC. In this problem, the extended state SDE (12) is given as follows:
\[ ds_t = \left( A(t) s_t + B(t) u_t \right) dt + \sigma(t)\,d\omega_t = \left( A(t) s_t + \sum_{i=1}^N B_i(t) u_t^i \right) dt + \sigma(t)\,d\omega_t, \tag{27} \]
where the initial condition is given by the Gaussian distribution $p_0(s) := \mathcal{N}(s \mid \mu_0, \Sigma_0)$. Furthermore, we note that $\tilde{\cdot}$ is omitted for notational simplicity. In ML-DSC, controller $i$ determines the control $u_t^i$ based on the memory $z_t^i$ as follows:
\[ u_t^i = u^i(t, z_t^i). \tag{28} \]
In the extended state SDE (27), the interaction among the controllers is not restricted. For example, each controller is allowed to control the memories of the other controllers from its own memory through the state or the observations. In this case, the memories of the controllers are neither independent nor partially nested. This becomes evident in the numerical experiments in Section 6.
The objective function (14) is given as follows:
\[ J[u] := \mathbb{E}_{p(s_{0:T}; u)} \left[ \int_0^T \left( s_t^{\top} Q(t) s_t + u_t^{\top} R(t) u_t \right) dt + s_T^{\top} P s_T \right] = \mathbb{E}_{p(s_{0:T}; u)} \left[ \int_0^T \left( s_t^{\top} Q(t) s_t + \sum_{i=1}^N \sum_{j=1}^N (u_t^i)^{\top} R_{ij}(t) u_t^j \right) dt + s_T^{\top} P s_T \right], \tag{29} \]
where $Q(t) \succeq O$, $R(t) \succ O$, and $P \succeq O$. The objective of this problem is to find the optimal control function $u^*$ that minimizes the objective function $J[u]$:
\[ u^* := \mathop{\mathrm{arg\,min}}_u J[u]. \tag{30} \]
In this paper, we assume that $R(t)$ is block diagonal, i.e., the costs of different controllers are independent, as follows:
\[ R(t) = \begin{pmatrix} R_{11}(t) & O & \cdots & O \\ O & R_{22}(t) & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & R_{NN}(t) \end{pmatrix}, \tag{31} \]
where $R_{ii}(t) \succ O$. In this case, the objective function (29) can be calculated as follows:
\[ J[u] = \mathbb{E}_{p(s_{0:T}; u)} \left[ \int_0^T \left( s_t^{\top} Q(t) s_t + \sum_{i=1}^N (u_t^i)^{\top} R_{ii}(t) u_t^i \right) dt + s_T^{\top} P s_T \right]. \tag{32} \]
If this assumption does not hold, the optimal control function cannot be derived explicitly. This issue is similar to Witsenhausen's counterexample, which demonstrates the difficulty of DSC and is historically recognized as an important problem [5,48,49]. However, this assumption is not critical in many applications because the control cost matrix is usually diagonal.

5.2. Derivation of Optimal Control Function

In this subsection, we derive the optimal control function of the LQG problem of ML-DSC by applying Theorem 1. In this problem, the probability density function of the extended state $s$ at time $t$ is given by the Gaussian distribution $p_t(s) := \mathcal{N}(s \mid \mu(t), \Sigma(t))$, in which $\mu(t)$ is the mean vector and $\Sigma(t)$ is the covariance matrix. Defining the stochastic extended state $\hat{s} := s - \mu$, the conditional expectation $\mathbb{E}_{p_t(s^{-i} \mid z^i)}[s]$ can be calculated as follows:
\[ \mathbb{E}_{p_t(s^{-i} \mid z^i)}[s] = \mu(t) + K^i(t) \hat{s}, \tag{33} \]
where $K^i(t) \in \mathbb{R}^{d_s \times d_s}$ is defined as follows:
\[ K^i(t) := \begin{pmatrix} O & \cdots & O & \Sigma_{x z^i}(t) \Sigma_{z^i z^i}^{-1}(t) & O & \cdots & O \\ O & \cdots & O & \Sigma_{z^1 z^i}(t) \Sigma_{z^i z^i}^{-1}(t) & O & \cdots & O \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ O & \cdots & O & \Sigma_{z^{i-1} z^i}(t) \Sigma_{z^i z^i}^{-1}(t) & O & \cdots & O \\ O & \cdots & O & I & O & \cdots & O \\ O & \cdots & O & \Sigma_{z^{i+1} z^i}(t) \Sigma_{z^i z^i}^{-1}(t) & O & \cdots & O \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ O & \cdots & O & \Sigma_{z^N z^i}(t) \Sigma_{z^i z^i}^{-1}(t) & O & \cdots & O \end{pmatrix}. \tag{34} \]
$K^i(t)$ is the zero matrix except for the columns corresponding to $z^i$. By applying Theorem 1 to the LQG problem of ML-DSC, we obtain the following theorem:
Theorem 2.
In the LQG problem, the optimal control function of ML-DSC satisfies the following equation:
\[ u^{*i}(t, z^i) = -R_{ii}^{-1} B_i^{\top} \left( \Psi \mu + \Phi K^i \hat{s} \right), \tag{35} \]
where $K^i(t)$ is defined by Equation (34), which depends on $\Sigma(t)$. Functions $\mu(t)$ and $\Sigma(t)$ are the solutions of the following ordinary differential equations (ODEs):
\[ \frac{d\mu}{dt} = \left( A - B R^{-1} B^{\top} \Psi \right) \mu, \tag{36} \]
\[ \frac{d\Sigma}{dt} = \sigma \sigma^{\top} + \left( A - \sum_{i=1}^N B_i R_{ii}^{-1} B_i^{\top} \Phi K^i \right) \Sigma + \Sigma \left( A - \sum_{i=1}^N B_i R_{ii}^{-1} B_i^{\top} \Phi K^i \right)^{\top}, \tag{37} \]
where $\mu(0) = \mu_0$ and $\Sigma(0) = \Sigma_0$. Functions $\Psi(t)$ and $\Phi(t)$ are the solutions of the following ODEs:
\[ -\frac{d\Psi}{dt} = Q + A^{\top} \Psi + \Psi A - \Psi B R^{-1} B^{\top} \Psi, \tag{38} \]
\[ -\frac{d\Phi}{dt} = Q + A^{\top} \Phi + \Phi A - \Phi B R^{-1} B^{\top} \Phi + \sum_{i=1}^N (I - K^i)^{\top} \Phi B_i R_{ii}^{-1} B_i^{\top} \Phi (I - K^i), \tag{39} \]
where $\Psi(T) = \Phi(T) = P$.
Proof. 
The proof is shown in Appendix B. □
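Before discussing the structure of Equation (35), the conditional expectation (33) with the gain matrix (34) can be checked numerically. The particular mean vector, covariance matrix, and observed memory value below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.5, -0.2, 0.3])            # extended state s = (x, z1, z2)
Sig = np.array([[1.0, 0.4, 0.2],
                [0.4, 1.0, 0.1],
                [0.2, 0.1, 1.0]])
i = 1                                      # controller 1's memory z^1 sits at index 1
K1 = np.zeros((3, 3))
K1[:, i] = Sig[:, i] / Sig[i, i]           # Eq. (34): nonzero only in the z^1 column

# Closed form of Eq. (33): E[s | z1] = mu + K1 s_hat, evaluated at the observed z1.
z1_obs = 1.2
s_hat = np.zeros(3); s_hat[i] = z1_obs - mu[i]
print("Gaussian formula:", mu + K1 @ s_hat)

# Monte Carlo check of the same conditional expectation.
S = rng.multivariate_normal(mu, Sig, size=1_000_000)
mask = np.abs(S[:, i] - z1_obs) < 0.02
print("Monte Carlo     :", S[mask].mean(axis=0))
```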
In the LQG problem of ML-DSC, FP Equation (16) reduces to Equations (36) and (37), and HJB Equation (24) reduces to Equations (38) and (39). The optimal control function (35) is decomposed into a deterministic part and a stochastic part, which correspond to the first and second terms, respectively. The first term of the optimal control function (35) also appears in the linear-quadratic (LQ) problem of deterministic control, and Equation (38) is called the Riccati equation [14,40]. In contrast, the second term of the optimal control function (35) is new to the LQG problem of ML-DSC, and Equation (39) is called the decentralized Riccati equation in this paper.

5.3. Comparison with Completely Observable Stochastic Control or Memory-Limited Partially Observable Stochastic Control

In the LQG problem, the optimal control function of COSC is given as follows [14,40]:
\[ u^{*i}(t, s) = -R_{ii}^{-1} B_i^{\top} \left( \Psi \mu + \Psi \hat{s} \right), \tag{40} \]
where $\Psi(t)$ is the solution of the Riccati Equation (38). In COSC, the function $\Psi(t)$ appears in the second term as well as the first term. In addition, the optimal control function of ML-POSC is given as follows [23,24]:
\[ u^{*i}(t, z) = -R_{ii}^{-1} B_i^{\top} \left( \Psi \mu + \Pi K \hat{s} \right), \tag{41} \]
where $\Pi(t)$ is the solution of the following partially observable Riccati equation:
\[ -\frac{d\Pi}{dt} = Q + A^{\top} \Pi + \Pi A - \Pi B R^{-1} B^{\top} \Pi + (I - K)^{\top} \Pi B R^{-1} B^{\top} \Pi (I - K), \tag{42} \]
where $\Pi(T) = P$ and $K(t)$ is defined by
\[ K(t) := \begin{pmatrix} O & \Sigma_{xz}(t) \Sigma_{zz}^{-1}(t) \\ O & I \end{pmatrix}. \tag{43} \]
Thus, the decentralized Riccati Equation (39) is a natural extension of the partially observable Riccati Equation (42) from a centralized problem to a decentralized problem.
While the first deterministic term of the optimal control function is the same for COSC (40), ML-POSC (41), and ML-DSC (35), the second stochastic term is different, reflecting the fact that the observable part of the stochastic extended state is different.

5.4. Decentralized Riccati Equation

In this subsection, we analyze the decentralized Riccati Equation (39) by comparing it with the Riccati Equation (38). The Riccati Equation (38) determines the control gain matrix of deterministic control and COSC. While the Riccati Equation (38) optimizes control, it does not improve estimation, because estimation is not needed in deterministic control and COSC. In contrast, the decentralized Riccati Equation (39) may improve estimation as well as control, because the controllers in ML-DSC need to estimate the state of the system and the memories of the other controllers from their own memories.
In order to support this discussion, we analyze the last term of the decentralized Riccati Equation (39), which is denoted as follows:
\[ Q^i := (I - K^i)^{\top} \Phi B_i R_{ii}^{-1} B_i^{\top} \Phi (I - K^i). \tag{44} \]
This term is the main difference between the Riccati Equation (38) and the decentralized Riccati Equation (39), and thus accounts for the contribution of estimation in ML-DSC. For simplicity, we focus on $Q^N$; a similar discussion is possible for $Q^i$ by permuting the controllers' indices. We also denote $a := s^{-N}$ and $b := z^N$ for notational simplicity; $a$ is unobservable and $b$ is observable for controller $N$. $Q^N$ can be calculated as follows:
\[ Q^N = \begin{pmatrix} P_{aa} & -P_{aa} \Sigma_{ab} \Sigma_{bb}^{-1} \\ -\Sigma_{bb}^{-1} \Sigma_{ba} P_{aa} & \Sigma_{bb}^{-1} \Sigma_{ba} P_{aa} \Sigma_{ab} \Sigma_{bb}^{-1} \end{pmatrix}, \tag{45} \]
where $P_{aa} := (\Phi B_N R_{NN}^{-1} B_N^{\top} \Phi)_{aa}$. Because $P_{aa} \succeq O$ and $\Sigma_{bb}^{-1} \Sigma_{ba} P_{aa} \Sigma_{ab} \Sigma_{bb}^{-1} \succeq O$, $\Phi_{aa}$ and $\Phi_{bb}$ may be larger than $\Psi_{aa}$ and $\Psi_{bb}$, respectively. Because $\Phi_{aa}$ and $\Phi_{bb}$ are the negative feedback gains of $a$ and $b$, respectively, $Q^N$ may contribute to decreasing $\Sigma_{aa}$ and $\Sigma_{bb}$. Moreover, when $\Sigma_{ab}$ is positive/negative, $\Phi_{ab}$ may be smaller/larger than $\Psi_{ab}$, which may increase/decrease $\Sigma_{ab}$. A similar discussion is possible for $\Sigma_{ba}$, $\Phi_{ba}$, and $\Psi_{ba}$, because $\Sigma$, $\Phi$, and $\Psi$ are symmetric matrices. As a result, $Q^N$ may contribute to decreasing the following conditional covariance matrix:
\[ \Sigma_{a \mid b} := \Sigma_{aa} - \Sigma_{ab} \Sigma_{bb}^{-1} \Sigma_{ba}, \tag{46} \]
which corresponds to the estimation error of the unobservable a from the observable b. It indicates that the decentralized Riccati Equation (39) may improve estimation as well as control.
It should be noted that estimation and control are not clearly separated in the LQG problem of ML-DSC. In the LQG problem of POSC, estimation and control are clearly separated, and they are optimized by the Kalman filter and the Riccati equation, respectively [11,14]. By contrast, in the LQG problem of ML-DSC, both estimation and control are optimized by the decentralized Riccati equation. This coupling of estimation and control also appears in the conventional DSC [17,18,19,20,21,22] and ML-POSC [23,24], which indicates that it may be caused by a decentralized structure and memory limitation.

6. Numerical Experiments

In this section, we demonstrate the significance of the decentralized Riccati Equation (39) by conducting numerical experiments on two LQG problems of ML-DSC. One is a one-dimensional state case, and the other is a two-dimensional state case.

6.1. One-Dimensional State Case

In this subsection, we conduct a numerical experiment in the one-dimensional state case (Figure 3a). In this case, we consider the state $x_t \in \mathbb{R}$, the observation $y_t^i \in \mathbb{R}$, and the memory $z_t^i \in \mathbb{R}$ of controller $i \in \{1, 2\}$, which evolve by the following SDEs:
\[ dx_t = \left( x_t + u_{x,t}^1 + u_{x,t}^2 \right) dt + d\omega_t, \tag{47} \]
\[ dy_t^1 = \left( x_t + u_{y,t}^2 \right) dt + d\nu_t^1, \tag{48} \]
\[ dy_t^2 = \left( x_t + u_{y,t}^1 \right) dt + d\nu_t^2, \tag{49} \]
\[ dz_t^1 = v_t^1\,dt + dy_t^1, \tag{50} \]
\[ dz_t^2 = v_t^2\,dt + dy_t^2. \tag{51} \]
The initial conditions are given by the standard Gaussian distributions. $\omega_t \in \mathbb{R}$, $\nu_t^1 \in \mathbb{R}$, and $\nu_t^2 \in \mathbb{R}$ are independent standard Wiener processes. $u_t^i := (u_{x,t}^i, u_{y,t}^i) = u^i(t, z_t^i) \in \mathbb{R}^2$ and $v_t^i := v^i(t, z_t^i) \in \mathbb{R}$ are the controls of controller $i$. Each controller can control the memory of the other controller through $u_{y,t}^i$, which can be interpreted as communication between the controllers. The objective function to be minimized is given as follows:
\[ J[u, v] := \mathbb{E}_{p(x_{0:T}, y_{0:T}, z_{0:T}; u, v)} \left[ \int_0^{10} \left( x_t^2 + \sum_{i=1}^2 \left( (u_{x,t}^i)^2 + (u_{y,t}^i)^2 + (v_t^i)^2 \right) \right) dt \right], \tag{52} \]
where $u := (u^1, u^2)$ and $v := (v^1, v^2)$. Therefore, the objective of this problem is to minimize the state variance with small controls (Figure 3b).
In this problem, the information of the controllers is neither independent nor partially nested. The information of controller 1's memory $z_t^1$ propagates to controller 2's memory $z_t^2$ through the state control $u_{x,t}^1$ and the observation control $u_{y,t}^1$, and vice versa (Figure 3a). While such a problem cannot be solved by conventional DSC, ML-DSC can address it.
Figure 3. Schematic diagram of the LQG problem in Section 6.1. (a) The state of the system $x_t$ is one-dimensional. (b) The two controllers control the state of the system to be close to 0.
In the representation using the extended state $s_t := (x_t, z_t^1, z_t^2) \in \mathbb{R}^3$, the extended control $\tilde{u}_t^i := (u_{x,t}^i, u_{y,t}^i, v_t^i) = \tilde{u}^i(t, z_t^i) \in \mathbb{R}^3$, and the extended standard Wiener process $\tilde{\omega}_t := (\omega_t, \nu_t^1, \nu_t^2) \in \mathbb{R}^3$, as in Equations (27) and (32), the SDEs defined by Equations (47)–(51) can be described as follows:
\[ ds_t = \left( \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} s_t + \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \tilde{u}_t^1 + \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \tilde{u}_t^2 \right) dt + d\tilde{\omega}_t, \tag{53} \]
which corresponds to Equation (27). The objective function (52) can be rewritten as follows:
\[ J[\tilde{u}] := \mathbb{E}_{p(s_{0:T}; \tilde{u})} \left[ \int_0^{10} \left( s_t^{\top} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} s_t + \sum_{i=1}^2 (\tilde{u}_t^i)^{\top} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \tilde{u}_t^i \right) dt \right], \tag{54} \]
which corresponds to Equation (32). In addition, it satisfies the block diagonal assumption on $R(t)$ (31).
The Riccati Equation (38) can be solved backward in time from its terminal condition. In contrast, the decentralized Riccati Equation (39) cannot be solved in the same way because it depends on the covariance matrix $\Sigma(t)$ through the estimation gain matrix $K^i(t)$, and $\Sigma(t)$ is the solution of the time-forward ODE (37). In order to solve the decentralized Riccati Equation (39), which is the time-backward ODE of $\Phi(t)$, we use the forward-backward sweep method, which computes the time-forward ODE of $\Sigma(t)$ (37) and the time-backward ODE of $\Phi(t)$ (39) alternately [24]. We note that the partially observable Riccati Equation (42) can also be solved by the forward-backward sweep method [24].
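A minimal numerical sketch of this sweep for the present example is shown below, assuming explicit Euler integration for both ODEs, a zero initial guess for $\Phi(t)$, and the matrices read off from Equations (53) and (54); the step size and iteration cap are illustrative choices.

```python
import numpy as np

# Extended state s = (x, z1, z2); matrices of Equations (53) and (54).
A = np.array([[1., 0., 0.], [1., 0., 0.], [1., 0., 0.]])
Bs = [np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]]),   # controller 1
      np.eye(3)]                                              # controller 2
Q = np.diag([1., 0., 0.])
Rinv = [np.eye(3), np.eye(3)]          # R_ii = I, hence R_ii^{-1} = I
noise = np.eye(3)                      # sigma sigma^T
P = np.zeros((3, 3))                   # no terminal cost
T, nt = 10.0, 2001
dt = T / (nt - 1)
z_idx = [1, 2]                         # position of z^i in the extended state

def K_gain(Sig, i):
    """Estimation gain K^i of Eq. (34): nonzero only in the z^i column."""
    K = np.zeros((3, 3))
    K[:, z_idx[i]] = Sig[:, z_idx[i]] / Sig[z_idx[i], z_idx[i]]
    return K

BRB = sum(B @ Ri @ B.T for B, Ri in zip(Bs, Rinv))

Phi = np.zeros((nt, 3, 3))             # initial guess Phi(t) = 0
Sigma = np.zeros((nt, 3, 3))
for sweep in range(100):
    # forward ODE (37) for the covariance Sigma(t)
    Sigma[0] = np.eye(3)
    for k in range(nt - 1):
        G = A - sum(Bs[i] @ Rinv[i] @ Bs[i].T @ Phi[k] @ K_gain(Sigma[k], i)
                    for i in range(2))
        Sigma[k + 1] = Sigma[k] + dt * (noise + G @ Sigma[k] + Sigma[k] @ G.T)
    # backward decentralized Riccati ODE (39) for Phi(t)
    Phi_new = np.zeros_like(Phi)
    Phi_new[-1] = P
    for k in range(nt - 2, -1, -1):
        F = Phi_new[k + 1]
        extra = sum((np.eye(3) - K_gain(Sigma[k + 1], i)).T @ F @ Bs[i] @ Rinv[i]
                    @ Bs[i].T @ F @ (np.eye(3) - K_gain(Sigma[k + 1], i))
                    for i in range(2))
        Phi_new[k] = F + dt * (Q + A.T @ F + F @ A - F @ BRB @ F + extra)
    if np.max(np.abs(Phi_new - Phi)) < 1e-6:
        Phi = Phi_new
        break
    Phi = Phi_new

print("sweeps:", sweep + 1)
print("Phi(0):\n", np.round(Phi[0], 3))
```

The trajectories $\Psi(t)$ of the Riccati Equation (38) and $\Pi(t)$ of Equation (42) can be computed analogously for the comparison discussed next.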
Figure 4 shows the trajectories of $\Psi(t)$, $\Pi(t)$, and $\Phi(t)$, which are the optimal control gain matrices of COSC, ML-POSC, and ML-DSC, respectively. $\Psi_{ab}(t)$, $\Pi_{ab}(t)$, and $\Phi_{ab}(t)$ are the negative control gains from $b$ to $a$. We note that $\Psi_{ab}(t)$, $\Pi_{ab}(t)$, and $\Phi_{ab}(t)$ are also the negative control gains from $a$ to $b$ because $\Psi(t)$, $\Pi(t)$, and $\Phi(t)$ are symmetric matrices. While the elements of $\Psi(t)$ related to the memories $z^1$ and $z^2$ are always 0, those of $\Pi(t)$ and $\Phi(t)$ are not (Figure 4b–f). Thus, the controls of the memories do not appear in COSC, but they do appear in ML-POSC and ML-DSC. This result indicates that the controls of the memories play an important role in estimation.
We first compare $\Phi$ with $\Psi$ in more detail to investigate $\Phi$. $\Phi_{xx}$ and $\Phi_{z^i z^i}$ are larger than $\Psi_{xx}$ and $\Psi_{z^i z^i}$ (Figure 4a,d,f), which may decrease $\Sigma_{xx}$ and $\Sigma_{z^i z^i}$. Moreover, $\Phi_{x z^i}$ is smaller than $\Psi_{x z^i}$ (Figure 4b,c), which may strengthen the positive correlation between $x$ and $z^i$. Therefore, $\Phi_{xx}$, $\Phi_{z^i z^i}$, and $\Phi_{x z^i}$ may improve estimation, which is consistent with our discussion in Section 5.4. However, $\Phi_{z^1 z^2}$ is larger than $\Psi_{z^1 z^2}$ (Figure 4e), which may weaken the positive correlation between $z^1$ and $z^2$. This seems contrary to our discussion because it may worsen estimation.
In order to understand $\Phi_{z^1 z^2}$, we also compare $\Phi$ with $\Pi$. Estimation in ML-DSC is more challenging than in ML-POSC because the controllers cannot completely share their information. Thus, except for $\Phi_{z^1 z^2}$, the absolute values of $\Phi$ are larger than those of $\Pi$ (Figure 4a–d,f) for the same reason as in the comparison with $\Psi$. The exception is $\Phi_{z^1 z^2}$ (Figure 4e). In ML-POSC, the estimation between $z^1$ and $z^2$ is not needed because the controllers share their information. As a result, $\Pi_{z^1 z^2}$ is determined only from a control perspective, not an estimation perspective. $\Pi_{z^1 z^2}$ is almost the same as $\Pi_{z^i z^i}$ (Figure 4d–f), presumably because cooperative control by controllers 1 and 2 is more efficient than independent control. By contrast, in ML-DSC, the estimation between $z^1$ and $z^2$ is necessary because each controller cannot directly access the other controller. $\Phi_{z^1 z^2}$ is smaller than $\Pi_{z^1 z^2}$ (Figure 4e), which may strengthen the positive correlation between $z^1$ and $z^2$. Therefore, $\Phi_{z^1 z^2}$ may be determined by a trade-off between control and estimation.
In order to clarify the significance of the decentralized Riccati Equation (39), we compared the performance of the optimal control function of ML-DSC (35) with that of the following control functions:
\[ u^{i, \Psi}(t, z^i) = -R_{ii}^{-1} B_i^{\top} \left( \Psi \mu + \Psi K^i \hat{s} \right), \tag{55} \]
\[ u^{i, \Pi}(t, z^i) = -R_{ii}^{-1} B_i^{\top} \left( \Psi \mu + \Pi K^i \hat{s} \right), \tag{56} \]
in which $\Phi$ is replaced with $\Psi$ and $\Pi$, respectively. We note that the first terms are not important because $\mu(t) = 0$ is satisfied in this setup. The result is shown in Figure 5. The variances of the state and the memories are ordered as $u^{\Psi} > u^{\Pi} > u^*$ (Figure 5a–c). Similarly, the expected cumulative costs are ordered as $u^{\Psi} > u^{\Pi} > u^*$ (Figure 5d). These orders are consistent with the extent to which estimation is optimized. $u^{\Psi}$ does not take estimation into account at all, and its performance is the worst. $u^{\Pi}$ takes into account only the state estimation, and it performs better than $u^{\Psi}$ but not optimally. $u^*$ takes into account the estimation of the other memories as well as the state, and its performance is optimal.

6.2. Two-Dimensional State Case

In this subsection, we conduct a numerical experiment in the two-dimensional state case (Figure 6a). In this case, we formulate the target state $x_t^{\mathrm{tar}} := (x_t^{\mathrm{tar},1}, x_t^{\mathrm{tar},2}) \in \mathbb{R}^2$, the actual state $x_t^{\mathrm{act}} := (x_t^{\mathrm{act},1}, x_t^{\mathrm{act},2}) \in \mathbb{R}^2$, the observation $y_t := (y_t^1, y_t^2) \in \mathbb{R}^2$, and the memory $z_t := (z_t^1, z_t^2) \in \mathbb{R}^2$ as follows (Figure 6b):
\[ dx_t^{\mathrm{tar},1} = -2\pi x_t^{\mathrm{tar},2}\,dt, \tag{57} \]
\[ dx_t^{\mathrm{tar},2} = 2\pi x_t^{\mathrm{tar},1}\,dt, \tag{58} \]
\[ dx_t^{\mathrm{act},1} = \left( -2\pi x_t^{\mathrm{act},2} + u_{x,t}^1 \right) dt + d\omega_t^1, \tag{59} \]
\[ dx_t^{\mathrm{act},2} = \left( 2\pi x_t^{\mathrm{act},1} + u_{x,t}^2 \right) dt + d\omega_t^2, \tag{60} \]
\[ dy_t^1 = \left( x_t^{\mathrm{act},1} - x_t^{\mathrm{tar},1} + u_{y,t}^2 \right) dt + d\nu_t^1, \tag{61} \]
\[ dy_t^2 = \left( x_t^{\mathrm{act},2} - x_t^{\mathrm{tar},2} + u_{y,t}^1 \right) dt + d\nu_t^2, \tag{62} \]
\[ dz_t^1 = v_t^1\,dt + dy_t^1, \tag{63} \]
\[ dz_t^2 = v_t^2\,dt + dy_t^2, \tag{64} \]
where $x_0^{\mathrm{tar}} = (10, 0)$, $x_0^{\mathrm{act}} \sim \mathcal{N}(x_0^{\mathrm{act}} \mid (10, 0), I)$, $y_0 \sim \mathcal{N}(y_0 \mid 0, I)$, and $z_0 \sim \mathcal{N}(z_0 \mid 0, I)$. $\omega_t^i \in \mathbb{R}$ and $\nu_t^i \in \mathbb{R}$ are independent standard Wiener processes. $u_t^i := (u_{x,t}^i, u_{y,t}^i) = u^i(t, z_t^i) \in \mathbb{R}^2$ and $v_t^i := v^i(t, z_t^i) \in \mathbb{R}$ are the controls of controller $i$. The objective function to be minimized is given as follows:
\[ J[u, v] := \mathbb{E}_{u, v} \left[ \int_0^{10} \sum_{i=1}^2 \left( 10 \left( x_t^{\mathrm{act},i} - x_t^{\mathrm{tar},i} \right)^2 + (u_{x,t}^i)^2 + (u_{y,t}^i)^2 + (v_t^i)^2 \right) dt \right]. \tag{65} \]
The objective of this problem is to minimize the distance of the actual state $x_t^{\mathrm{act}}$ from the rotating target state $x_t^{\mathrm{tar}}$ with small controls. The solution of Equations (57) and (58) is $x_t^{\mathrm{tar}} = (x_t^{\mathrm{tar},1}, x_t^{\mathrm{tar},2}) = (10\cos(2\pi t), 10\sin(2\pi t))$. If Equations (59) and (60) did not have the standard Wiener processes $d\omega_t^1$ and $d\omega_t^2$, respectively, $u_{x,t}^1 = 0$ and $u_{x,t}^2 = 0$ would be optimal because $x_t^{\mathrm{tar}} = x_t^{\mathrm{act}}$ would be maintained. In practice, however, the actual state $x_t^{\mathrm{act}}$ does not coincide with the target state $x_t^{\mathrm{tar}}$ without the control $u_{x,t}$ because of the state noise $d\omega_t$, and it therefore needs to be controlled. Controller $i$ observes and controls the actual state $x_t^{\mathrm{act}}$ only in the $x^{\mathrm{act},i}$-axis direction. As a result, communication between the controllers is more important in this problem.
Figure 6. Schematic diagram of the LQG problem in Section 6.2. (a) The state of the system $x_t = (x_t^1, x_t^2)$ is two-dimensional. (b) The two controllers control the actual state $x_t^{\mathrm{act}}$ to be close to the target state $x_t^{\mathrm{tar}} = (10\cos(2\pi t), 10\sin(2\pi t))$. Controller $i$ observes and controls the actual state $x_t^{\mathrm{act}}$ only in the $x^{\mathrm{act},i}$-axis direction.
By defining the state $x_t := x_t^{\mathrm{act}} - x_t^{\mathrm{tar}}$, Equations (57)–(64) are converted as follows:
\[ dx_t^1 = \left( -2\pi x_t^2 + u_{x,t}^1 \right) dt + d\omega_t^1, \tag{66} \]
\[ dx_t^2 = \left( 2\pi x_t^1 + u_{x,t}^2 \right) dt + d\omega_t^2, \tag{67} \]
\[ dy_t^1 = \left( x_t^1 + u_{y,t}^2 \right) dt + d\nu_t^1, \tag{68} \]
\[ dy_t^2 = \left( x_t^2 + u_{y,t}^1 \right) dt + d\nu_t^2, \tag{69} \]
\[ dz_t^1 = v_t^1\,dt + dy_t^1, \tag{70} \]
\[ dz_t^2 = v_t^2\,dt + dy_t^2, \tag{71} \]
where the initial conditions are given by the standard Gaussian distributions. Furthermore, the objective function (65) is converted as follows:
\[ J[u, v] := \mathbb{E}_{u, v} \left[ \int_0^{10} \sum_{i=1}^2 \left( 10 (x_t^i)^2 + (u_{x,t}^i)^2 + (u_{y,t}^i)^2 + (v_t^i)^2 \right) dt \right]. \tag{72} \]
As a result, the problem of controlling the actual state $x_t^{\mathrm{act}}$ to be close to the target state $x_t^{\mathrm{tar}}$ is equivalent to the problem of controlling the state $x_t$ to be close to 0.
In the representation using the extended state $s_t := (x_t^1, x_t^2, z_t^1, z_t^2) \in \mathbb{R}^4$, the extended control $\tilde{u}_t^i := (u_{x,t}^i, u_{y,t}^i, v_t^i) = \tilde{u}^i(t, z_t^i) \in \mathbb{R}^3$, and the extended standard Wiener process $\tilde{\omega}_t := (\omega_t^1, \omega_t^2, \nu_t^1, \nu_t^2) \in \mathbb{R}^4$, as in Equations (27) and (32), the SDEs defined by Equations (66)–(71) can be described as follows:
\[ ds_t = \left( \begin{pmatrix} 0 & -2\pi & 0 & 0 \\ 2\pi & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} s_t + \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \tilde{u}_t^1 + \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \tilde{u}_t^2 \right) dt + d\tilde{\omega}_t, \tag{73} \]
which corresponds to Equation (27). The objective function (72) can be rewritten as follows:
\[ J[\tilde{u}] := \mathbb{E}_{p(s_{0:T}; \tilde{u})} \left[ \int_0^{10} \left( s_t^{\top} \begin{pmatrix} 10 & 0 & 0 & 0 \\ 0 & 10 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} s_t + \sum_{i=1}^2 (\tilde{u}_t^i)^{\top} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \tilde{u}_t^i \right) dt \right], \tag{74} \]
which corresponds to Equation (32). In addition, it satisfies the block diagonal assumption on $R(t)$ (31).
Figure 7 shows the trajectories of $\Psi(t)$, $\Pi(t)$, and $\Phi(t)$. Unlike those of $\Psi(t)$, the elements of $\Pi(t)$ and $\Phi(t)$ related to the memories $z^1$ and $z^2$ are not always 0 (Figure 7e–j), which indicates that the controls of the memories appear in ML-POSC and ML-DSC. The elements of $\Phi(t)$ deviate largely from those of $\Pi(t)$ only in Figure 7g–j. Figure 7g,h show the negative feedback control gain of the memory $z^i$. $\Phi_{z^i z^i}(t)$ is larger than $\Pi_{z^i z^i}(t)$, which indicates that the memories in ML-DSC are controlled more strongly than those in ML-POSC. Figure 7i shows the control gain between the state $x^1$ and the memory $z^2$. While controller 1 can control the state $x^1$ based on the memory $z^2$ in ML-POSC, it cannot in ML-DSC, because the controllers do not share their memories in ML-DSC. Furthermore, while controller 1 does not need to send the information of the state $x^1$ to controller 2's memory $z^2$ in ML-POSC, this is required in ML-DSC. As a result, $\Phi_{x^1 z^2}(t)$ differs greatly from $\Pi_{x^1 z^2}(t)$. A similar discussion is possible for Figure 7j.
In order to clarify the significance of the decentralized Riccati Equation (39), we compared the performance of the optimal control function $u^*$ (35) with that of the control functions $u^{\Psi}$ (55) and $u^{\Pi}$ (56). The result is shown in Figure 8. The actual state $x^{\mathrm{act}}$ faithfully tracks the target state $x^{\mathrm{tar}}$ under the optimal control function $u^*$ (Figure 8a–c (green)). Similarly, the memory $z$ is stably controlled under the optimal control function $u^*$ (Figure 8d,e (green)). As a result, the performance of the optimal control function $u^*$ is the best (Figure 8f (green)).

7. Discussion

In this paper, we proposed ML-DSC, which explicitly formulates the finite-dimensional memories of the controllers. In ML-DSC, each controller is designed to compress the infinite-dimensional observation history appropriately into the finite-dimensional memory and to determine the optimal control based on it. As a result, ML-DSC can handle the difficulty in conventional DSC that arises from the finiteness of the actual memory of controllers. We demonstrated the effectiveness of ML-DSC in the LQG problem. While conventional DSC needs to restrict the interaction among the controllers to solve the LQG problem, ML-DSC is free from such a restriction. We found that estimation and control are jointly optimized by the decentralized Riccati equation in the LQG problem of ML-DSC. Our numerical experiments showed that the decentralized Riccati equation is superior to the Riccati equation and the partially observable Riccati equation in this problem.
ML-DSC can also address non-LQG problems. In DSC, a non-LQG problem cannot be solved numerically even if the number of controllers is one, which corresponds to POSC [12,13]. This is because a functional differential equation needs to be solved in the non-LQG problem of POSC, which is generally intractable, even numerically. ML-POSC and ML-DSC are more tractable than conventional POSC and DSC because only the HJB-FP equations, which are partial differential equations, need to be solved. Previous work showed that ML-POSC is more effective than conventional POSC in a non-LQG problem [23]. Therefore, unlike conventional DSC, ML-DSC may also be effective for non-LQG problems.
In order to solve ML-DSC with a large number of controllers, more efficient numerical algorithms are needed because HJB-FP equations become high-dimensional partial differential equations. In order to solve high-dimensional HJB-FP equations, neural network-based algorithms have been proposed in mean-field stochastic game and control [50,51]. Therefore, by exploiting these neural network-based algorithms, we may efficiently solve ML-DSC with a large number of controllers.

Author Contributions

Conceptualization, Formal analysis, Funding acquisition, Writing— original draft, T.T. and T.J.K.; Software, Visualization, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

The first author received a JSPS Research Fellowship (Grant No. 21J20436). This work was supported by JSPS KAKENHI (Grant No. 19H05799) and JST CREST (Grant No. JPMJCR2011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
COSC: Completely Observable Stochastic Control
POSC: Partially Observable Stochastic Control
DSC: Decentralized Stochastic Control
ML-POSC: Memory-Limited Partially Observable Stochastic Control
ML-DSC: Memory-Limited Decentralized Stochastic Control
DEC-POMDP: Decentralized Partially Observable Markov Decision Process
HJB: Hamilton-Jacobi-Bellman
FP: Fokker-Planck
ODE: Ordinary Differential Equation
SDE: Stochastic Differential Equation
LQ: Linear-Quadratic
LQG: Linear-Quadratic-Gaussian

Appendix A. Proof of Theorem 1

In this section, we prove Theorem 1, which is Pontryagin's minimum principle on the probability density function space. In this paper, we prove Pontryagin's minimum principle via Bellman's dynamic programming principle, which is a similar approach to that of Reference [23]. We note that Pontryagin's minimum principle can also be proved directly, which is almost the same as in Reference [24] and is omitted in this paper.

Appendix A.1. Bellman’s Dynamic Programming Principle

In this subsection, we obtain the optimality condition of ML-DSC from the viewpoint of Bellman’s dynamic programming principle on the probability density function space.
The minimization of the objective function can be calculated as follows:
\[ \min_u J[u] = \min_{u_{0:T}} \left[ \int_0^T \bar{f}(\tau, p_\tau, u_\tau)\,d\tau + \bar{g}(p_T) \right] = \min_{u_{0:t-dt}} \left[ \int_0^{t-dt} \bar{f}(\tau, p_\tau, u_\tau)\,d\tau + \min_{u_t} \left[ \bar{f}(t, p_t, u_t)\,dt + \min_{u_{t+dt:T}} \left[ \int_{t+dt}^T \bar{f}(\tau, p_\tau, u_\tau)\,d\tau + \bar{g}(p_T) \right] \right] \right]. \tag{A1} \]
Therefore, the optimal joint control function at time $t$ is given as follows:
\[ u_t^* = \mathop{\mathrm{arg\,min}}_{u_t} \left[ \bar{f}(t, p_t, u_t)\,dt + \min_{u_{t+dt:T}} \left[ \int_{t+dt}^T \bar{f}(\tau, p_\tau, u_\tau)\,d\tau + \bar{g}(p_T) \right] \right]. \tag{A2} \]
We define the value function
\[ V(t, p) := \min_{u_{t:T}} \left[ \int_t^T \bar{f}(\tau, p_\tau, u_\tau)\,d\tau + \bar{g}(p_T) \right], \tag{A3} \]
where $\{ p_\tau \mid \tau \in [t, T] \}$ is the solution of FP Equation (16) with $p_t = p$. We note that $V(T, p) = \bar{g}(p)$ is satisfied. From the definition of the value function, the optimal joint control function at time $t$ can be calculated as follows:
\[ u_t^* = \mathop{\mathrm{arg\,min}}_{u_t} \left[ \bar{f}(t, p_t, u_t)\,dt + V(t + dt, p_t + \mathcal{L}_{u_t} p_t\,dt) \right] = \mathop{\mathrm{arg\,min}}_{u_t} \left[ \bar{f}(t, p_t, u_t)\,dt + V(t, p_t) + \frac{\partial V(t, p_t)}{\partial t}\,dt + \int \frac{\delta V(t, p_t)}{\delta p(s_t)} \mathcal{L}_{u_t} p_t(s_t)\,ds_t\,dt \right] = \mathop{\mathrm{arg\,min}}_{u_t} \left[ \bar{f}(t, p_t, u_t) + \int \frac{\delta V(t, p_t)}{\delta p(s_t)} \mathcal{L}_{u_t} p_t(s_t)\,ds_t \right]. \tag{A4} \]
Since $\mathcal{L}_{u_t}^{\dagger}$ is the conjugate of $\mathcal{L}_{u_t}$ (22),
\[ u_t^* = \mathop{\mathrm{arg\,min}}_{u_t} \left[ \bar{f}(t, p_t, u_t) + \int p_t(s_t) \mathcal{L}_{u_t}^{\dagger} \frac{\delta V(t, p_t)}{\delta p(s_t)}\,ds_t \right] = \mathop{\mathrm{arg\,min}}_{u_t} \mathbb{E}_{p_t(s_t)} \left[ f(t, s_t, u_t) + \mathcal{L}_{u_t}^{\dagger} \frac{\delta V(t, p_t)}{\delta p(s_t)} \right]. \tag{A5} \]
From the definition of the Hamiltonian (20),
\[ u_t^* = \mathop{\mathrm{arg\,min}}_{u_t} \mathbb{E}_{p_t(s_t)} \left[ \mathcal{H}\!\left(t, s_t, u_t, \frac{\delta V(t, p_t)}{\delta p(s_t)}\right) \right]. \tag{A6} \]
The minimization of the Hamiltonian can be calculated as follows:
\[ \min_{u_t} \mathbb{E}_{p_t(s_t)} \left[ \mathcal{H}\!\left(t, s_t, u_t, \frac{\delta V(t, p_t)}{\delta p(s_t)}\right) \right] = \min_{u_t^i} \mathbb{E}_{p_t(s_t)} \left[ \mathcal{H}\!\left(t, s_t, (u_t^i, u_t^{*-i}), \frac{\delta V(t, p_t)}{\delta p(s_t)}\right) \right]. \tag{A7} \]
Since $u_t^i$ is a function of $z_t^i$ in ML-DSC, the minimization over $u_t^i$ can be exchanged with the expectation over $p_t(z_t^i)$ as follows:
\[ \min_{u_t^i} \mathbb{E}_{p_t(s_t)} \left[ \mathcal{H}\!\left(t, s_t, (u_t^i, u_t^{*-i}), \frac{\delta V(t, p_t)}{\delta p(s_t)}\right) \right] = \mathbb{E}_{p_t(z_t^i)} \left[ \min_{u_t^i} \mathbb{E}_{p_t(s_t^{-i} \mid z_t^i)} \left[ \mathcal{H}\!\left(t, s_t, (u_t^i, u_t^{*-i}), \frac{\delta V(t, p_t)}{\delta p(s_t)}\right) \right] \right]. \tag{A8} \]
Therefore, the optimal control function of controller $i$ at time $t$ is given as follows:
\[ u_t^{*i}(z_t^i) = \mathop{\mathrm{arg\,min}}_{u_t^i} \mathbb{E}_{p_t(s_t^{-i} \mid z_t^i)} \left[ \mathcal{H}\!\left(t, s_t, (u_t^i, u_t^{*-i}), \frac{\delta V(t, p_t)}{\delta p(s_t)}\right) \right]. \tag{A9} \]
In order to obtain the optimal control function (A9), we need to obtain the value function V ( t , p ) . The value function V ( t , p ) can be calculated as follows:
V ( t , p ) = min u t : T t T f ¯ ( τ , p τ , u τ ) d τ + g ¯ ( p T ) = min u f ¯ ( t , p , u ) d t + min u t + d t : T t + d t T f ¯ ( τ , p τ , u τ ) d τ + g ¯ ( p T ) = min u f ¯ ( t , p , u ) d t + V ( t + d t , p + L u p d t ) = min u f ¯ ( t , p , u ) d t + V ( t , p ) + V ( t , p ) t d t + δ V ( t , p ) δ p ( s ) L u p ( s ) d s d t .
Therefore, the following equation is obtained:
$-\frac{\partial V(t, p)}{\partial t} = \min_{u} \left[ \bar{f}(t, p, u) + \int \frac{\delta V(t, p)}{\delta p(s)}\, \mathcal{L}_{u}^{\dagger} p(s)\, ds \right].$ (A11)
Since $\mathcal{L}_{u}$ is the conjugate of $\mathcal{L}_{u}^{\dagger}$ (22),
$-\frac{\partial V(t, p)}{\partial t} = \min_{u} \left[ \bar{f}(t, p, u) + \int p(s)\, \mathcal{L}_{u} \frac{\delta V(t, p)}{\delta p(s)}\, ds \right] = \min_{u} \mathbb{E}_{p(s)} \left[ f(t, s, u) + \mathcal{L}_{u} \frac{\delta V(t, p)}{\delta p(s)} \right].$ (A12)
From the definition of the Hamiltonian H (20),
$-\frac{\partial V(t, p)}{\partial t} = \min_{u} \mathbb{E}_{p(s)} \left[ H\left(t, s, u, \frac{\delta V(t, p)}{\delta p(s)}\right) \right].$ (A13)
From Equation (A6),
$-\frac{\partial V(t, p)}{\partial t} = \mathbb{E}_{p(s)} \left[ H\left(t, s, u^{*}, \frac{\delta V(t, p)}{\delta p(s)}\right) \right],$ (A14)
where $V(T, p) = \bar{g}(p)$. The functional differential Equation (A14) is called the Bellman equation. Therefore, from Bellman’s dynamic programming principle on the probability density function space, the optimal control function of ML-DSC (A9) is obtained by solving FP Equation (16) and Bellman Equation (A14).
However, Bellman Equation (A14) is a functional differential equation, which cannot be solved even numerically. Therefore, Bellman’s dynamic programming principle on the probability density function space is not practical. In the next subsection, we resolve this problem by converting Bellman’s dynamic programming principle to Pontryagin’s minimum principle on the probability density function space. This conversion technique has also been used in mean-field stochastic control [25,26] and ML-POSC [23,24].

Appendix A.2. Conversion from Bellman’s Dynamic Programming Principle to Pontryagin’s Minimum Principle

In this subsection, we prove Theorem 1 by converting Bellman’s dynamic programming principle to Pontryagin’s minimum principle on the probability density function space.
We first define
$W(t, p, s) := \frac{\delta V(t, p)}{\delta p(s)},$ (A15)
which satisfies $W(T, p, s) = g(s)$. Differentiating Bellman Equation (A14) with respect to $p$, the following equation is obtained:
$-\frac{\partial W(t, p, s)}{\partial t} = H\left(t, s, u^{*}, W\right) + \mathbb{E}_{p(s')} \left[ \mathcal{L}_{u^{*}} \frac{\delta W(t, p, s)}{\delta p(s')} \right].$ (A16)
Since $\mathcal{L}_{u}$ is the conjugate of $\mathcal{L}_{u}^{\dagger}$ (22),
$-\frac{\partial W(t, p, s)}{\partial t} = H\left(t, s, u^{*}, W\right) + \int \frac{\delta W(t, p, s)}{\delta p(s')}\, \mathcal{L}_{u^{*}}^{\dagger} p(s')\, ds'.$ (A17)
We then define
$w(t, s) := W(t, p_t, s),$ (A18)
where $p_t$ is the solution of FP Equation (16). The time derivative of $w(t, s)$ can be calculated as follows:
$\frac{\partial w(t, s)}{\partial t} = \frac{\partial W(t, p_t, s)}{\partial t} + \int \frac{\delta W(t, p_t, s)}{\delta p(s')}\, \frac{\partial p_t(s')}{\partial t}\, ds'.$ (A19)
By substituting Equation (A17) into Equation (A19), the following equation is obtained:
$-\frac{\partial w(t, s)}{\partial t} = H\left(t, s, u^{*}, w\right) - \int \frac{\delta W(t, p_t, s)}{\delta p(s')} \underbrace{\left( \frac{\partial p_t(s')}{\partial t} - \mathcal{L}_{u^{*}}^{\dagger} p_t(s') \right)}_{(\ast)}\, ds'.$ (A20)
From FP Equation (16), $(\ast) = 0$ holds. Therefore, HJB Equation (24) is obtained.
From Equations (A15) and (A18), the optimal control function of ML-DSC (A9) can be calculated as follows:
$u^{i*}(t, z^i) = \arg\min_{u^i} \mathbb{E}_{p_t(s^{-i} \mid z^i)} \left[ H\left(t, s, (u^i, u^{-i*}), w\right) \right].$ (A21)
Therefore, the optimal control function of ML-DSC (A9) is obtained by solving FP Equation (16) and HJB Equation (24).
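In summary, under the conventions above, the optimality system of ML-DSC couples a forward equation for the probability density with a backward equation for the adjoint function:
$\frac{\partial p_t(s)}{\partial t} = \mathcal{L}_{u^{*}}^{\dagger} p_t(s), \qquad -\frac{\partial w(t, s)}{\partial t} = H\left(t, s, u^{*}, w\right), \qquad w(T, s) = g(s),$
where $u^{i*}$ is given by Equation (A21). Because the FP equation runs forward in time from the initial density while the HJB equation runs backward in time from the terminal condition, the two equations have to be solved jointly; a forward-backward sweep method for HJB-FP systems of this type is studied in reference [24].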

Appendix B. Proof of Theorem 2

In the LQG problem, the Hamiltonian (20) is given as follows:
$H(t, s, u, w) = s^{\top} Q s + \sum_{i=1}^{N} (u^i)^{\top} R^{ii} u^i + \left( \frac{\partial w(t, s)}{\partial s} \right)^{\!\top} \left( A s + \sum_{i=1}^{N} B^i u^i \right) + \frac{1}{2}\, \mathrm{tr}\!\left( \sigma \sigma^{\top} \frac{\partial^2 w(t, s)}{\partial s\, \partial s^{\top}} \right).$ (A22)
Therefore, in the LQG problem, the optimal control function of ML-DSC (19) can be calculated as follows:
$u^{i*}(t, z^i) = -\frac{1}{2} (R^{ii})^{-1} B^{i\top}\, \mathbb{E}_{p_t(s^{-i} \mid z^i)} \left[ \frac{\partial w(t, s)}{\partial s} \right].$ (A23)
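For completeness, Equation (A23) follows from the first-order condition of the minimization in Equation (19): the Hamiltonian (A22) is quadratic in $u^i$, and since $u^i$ is a function of $z^i$, differentiating the conditional expectation of (A22) with respect to $u^i$ and setting it to zero gives
$2 R^{ii} u^i + B^{i\top}\, \mathbb{E}_{p_t(s^{-i} \mid z^i)} \left[ \frac{\partial w(t, s)}{\partial s} \right] = 0,$
which is equivalent to Equation (A23).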
We assume that $p_t(s)$ is a Gaussian distribution
$p_t(s) := \mathcal{N}\left(s \mid \mu(t), \Sigma(t)\right),$ (A24)
and that $w(t, s)$ is a quadratic function
$w(t, s) = s^{\top} \Phi(t)\, s + \alpha(t)^{\top} s + \beta(t).$ (A25)
In this case, the optimal control function of ML-DSC (A23) can be calculated as follows:
$u^{i*}(t, z^i) = -\frac{1}{2} (R^{ii})^{-1} B^{i\top} \left( 2 \Phi K^i \hat{s} + 2 \Phi \mu + \alpha \right).$ (A26)
Equation (A26) follows from Equation (A23) because $\frac{\partial w(t,s)}{\partial s} = 2\Phi s + \alpha$ under assumption (A25) and the conditional expectation of $s$ given $z^i$ is linear for the Gaussian distribution (A24). Since Equation (A26) is linear with respect to $\hat{s}$, $p_t(s)$ remains a Gaussian distribution, which is consistent with our assumption (A24).
Substituting Equations (A25) and (A26) into HJB Equation (24) and comparing the coefficients, the following ODEs are obtained:
$-\frac{d\Phi}{dt} = Q + A^{\top} \Phi + \Phi A - \Phi B R^{-1} B^{\top} \Phi + \tilde{Q},$ (A27)
$-\frac{d\alpha}{dt} = \left( A - B R^{-1} B^{\top} \Phi \right)^{\top} \alpha - 2 \tilde{Q} \mu,$ (A28)
$-\frac{d\beta}{dt} = \mathrm{tr}\left( \Phi \sigma \sigma^{\top} \right) - \frac{1}{4} \alpha^{\top} B R^{-1} B^{\top} \alpha + \mu^{\top} \tilde{Q} \mu,$ (A29)
where $\tilde{Q} := \sum_{i=1}^{N} (I - K^i)^{\top} \Phi B^i (R^{ii})^{-1} B^{i\top} \Phi (I - K^i)$, $\Phi(T) = P$, $\alpha(T) = 0$, and $\beta(T) = 0$. If $\Phi(t)$, $\alpha(t)$, and $\beta(t)$ satisfy ODEs (A27), (A28), and (A29), respectively, HJB Equation (24) is satisfied, which is consistent with our assumption (A25).
Defining $\Upsilon(t)$ by $\alpha(t) = 2 \Upsilon(t) \mu(t)$ and $\Psi(t)$ by $\Psi(t) := \Phi(t) + \Upsilon(t)$, the optimal control function of ML-DSC (A26) can be calculated as follows:
$u^{i*}(t, z^i) = -(R^{ii})^{-1} B^{i\top} \left( \Psi \mu + \Phi K^i \hat{s} \right).$ (A30)
From Equations (A27) and (A28), $\Psi(t)$ is the solution of the Riccati Equation (38). From Equation (A27), $\Phi(t)$ is the solution of the decentralized Riccati Equation (39). The detailed calculations are almost the same as those in reference [23].
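To illustrate how the backward equations in this appendix can be handled numerically, the following Python sketch integrates the Riccati equation for $\Psi$ (taken here in the standard LQG form, which $\Psi$ satisfies as the solution of Equation (38)) and the decentralized Riccati Equation (A27) for $\Phi$ backward in time with an explicit Euler scheme, and then evaluates the control (A30) for $N = 2$ controllers. This is only a minimal sketch under assumed placeholder values: the matrices $A$, $B^i$, $Q$, $R^{ii}$, $P$ and especially the gains $K^i$ are illustrative assumptions rather than the paper’s settings, and $K^i$ is simply fixed here instead of being computed from its definition in the main text.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): backward Euler integration
# of the Riccati equations and evaluation of the control (A30) for N = 2.
# All matrices and the gains K^i below are illustrative placeholders.

T, dt = 1.0, 1e-3
n_steps = int(T / dt)
d_s = 3  # dimension of the extended state s (placeholder)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = [np.array([[1.0], [0.0], [0.0]]),       # B^1 (placeholder)
     np.array([[0.0], [1.0], [0.0]])]       # B^2 (placeholder)
R = [np.array([[1.0]]), np.array([[1.0]])]  # R^11, R^22 (placeholders)
Q = np.eye(d_s)
P = np.eye(d_s)                             # terminal condition Phi(T) = Psi(T) = P
K = [np.diag([0.0, 1.0, 0.0]),              # placeholder gain K^1 (see main text)
     np.diag([0.0, 0.0, 1.0])]              # placeholder gain K^2 (see main text)
I = np.eye(d_s)

BRB = sum(Bi @ np.linalg.solve(Ri, Bi.T) for Bi, Ri in zip(B, R))  # B R^{-1} B^T

def minus_dPhi_dt(Phi, decentralized):
    """Right-hand side of -dPhi/dt in (A27); without the Q~ term it is the
    standard Riccati equation used here for Psi."""
    rhs = Q + A.T @ Phi + Phi @ A - Phi @ BRB @ Phi
    if decentralized:
        Q_tilde = sum((I - Ki).T @ Phi @ Bi @ np.linalg.solve(Ri, Bi.T) @ Phi @ (I - Ki)
                      for Bi, Ri, Ki in zip(B, R, K))
        rhs = rhs + Q_tilde
    return rhs

# Integrate backward in time from t = T to t = 0.
Psi, Phi = P.copy(), P.copy()
for _ in range(n_steps):
    Psi = Psi + dt * minus_dPhi_dt(Psi, decentralized=False)
    Phi = Phi + dt * minus_dPhi_dt(Phi, decentralized=True)

# Control of controller i at t = 0, cf. (A30):
# u^i = -(R^ii)^{-1} B^i^T (Psi mu + Phi K^i s_hat).
mu = np.zeros(d_s)                  # mean of the extended state (placeholder)
s_hat = np.array([0.5, -0.2, 0.1])  # placeholder deviation; only its z^i part enters via K^i
u = [-np.linalg.solve(Ri, Bi.T @ (Psi @ mu + Phi @ Ki @ s_hat))
     for Bi, Ri, Ki in zip(B, R, K)]
print(u)
```

In the numerical examples of Section 6, the corresponding trajectories of $\Psi(t)$, $\Pi(t)$, and $\Phi(t)$ are the ones visualized in Figures 4 and 7.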

References

1. Mahajan, A.; Teneketzis, D. On the design of globally optimal communication strategies for real-time noisy communication systems with noisy feedback. IEEE J. Sel. Areas Commun. 2008, 26, 580–595.
2. Mahajan, A.; Teneketzis, D. Optimal Design of Sequential Real-Time Communication Systems. IEEE Trans. Inf. Theory 2009, 55, 5317–5338.
3. Nayyar, A.; Teneketzis, D. Sequential Problems in Decentralized Detection with Communication. IEEE Trans. Inf. Theory 2011, 57, 5410–5435.
4. Mahajan, A.; Teneketzis, D. Optimal Performance of Networked Control Systems with Nonclassical Information Structures. SIAM J. Control Optim. 2009, 48, 1377–1404.
5. Witsenhausen, H.S. A Counterexample in Stochastic Optimum Control. SIAM J. Control 1968, 6, 131–147.
6. Nayyar, A.; Mahajan, A.; Teneketzis, D. Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach. IEEE Trans. Autom. Control 2013, 58, 1644–1658.
7. Mahajan, A.; Nayyar, A. Sufficient Statistics for Linear Control Strategies in Decentralized Systems With Partial History Sharing. IEEE Trans. Autom. Control 2015, 60, 2046–2056.
8. Charalambous, C.D.; Ahmed, N.U. Team Optimality Conditions of Distributed Stochastic Differential Decision Systems with Decentralized Noisy Information Structures. IEEE Trans. Autom. Control 2017, 62, 708–723.
9. Charalambous, C.D.; Ahmed, N.U. Centralized Versus Decentralized Optimization of Distributed Stochastic Differential Decision Systems with Different Information Structures—Part I: A General Theory. IEEE Trans. Autom. Control 2017, 62, 1194–1209.
10. Charalambous, C.D.; Ahmed, N.U. Centralized Versus Decentralized Optimization of Distributed Stochastic Differential Decision Systems with Different Information Structures—Part II: Applications. IEEE Trans. Autom. Control 2018, 63, 1913–1928.
11. Wonham, W.M. On the Separation Theorem of Stochastic Control. SIAM J. Control 1968, 6, 312–326.
12. Bensoussan, A. Stochastic Control of Partially Observable Systems; Cambridge University Press: Cambridge, UK, 1992.
13. Nisio, M. Stochastic Control Theory. In Probability Theory and Stochastic Modelling; Springer: Tokyo, Japan, 2015; Volume 72.
14. Bensoussan, A. Estimation and Control of Dynamical Systems. In Interdisciplinary Applied Mathematics; Springer International Publishing: Cham, Switzerland, 2018; Volume 48.
15. Wang, G.; Wu, Z.; Xiong, J. An Introduction to Optimal Control of FBSDE with Incomplete Information; SpringerBriefs in Mathematics; Springer International Publishing: Cham, Switzerland, 2018.
16. Bensoussan, A.; Yam, S.C.P. Mean field approach to stochastic control with partial information. ESAIM Control Optim. Calc. Var. 2021, 27, 89.
17. Lessard, L.; Lall, S. A state-space solution to the two-player decentralized optimal control problem. In Proceedings of the 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 28–30 September 2011; pp. 1559–1564.
18. Lessard, L.; Lall, S. Optimal controller synthesis for the decentralized two-player problem with output feedback. In Proceedings of the 2012 American Control Conference (ACC), Montréal, QC, Canada, 27–29 June 2012; pp. 6314–6321.
19. Lessard, L. Decentralized LQG control of systems with a broadcast architecture. In Proceedings of the 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; pp. 6241–6246.
20. Lessard, L.; Nayyar, A. Structural results and explicit solution for two-player LQG systems on a finite time horizon. In Proceedings of the 52nd IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 6542–6549.
21. Lessard, L.; Lall, S. Optimal Control of Two-Player Systems With Output Feedback. IEEE Trans. Autom. Control 2015, 60, 2129–2144.
22. Nayyar, A.; Lessard, L. Structural results for partially nested LQG systems over graphs. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015; pp. 5457–5464.
23. Tottori, T.; Kobayashi, T.J. Memory-Limited Partially Observable Stochastic Control and Its Mean-Field Control Approach. Entropy 2022, 24, 1599.
24. Tottori, T.; Kobayashi, T.J. Forward-Backward Sweep Method for the System of HJB-FP Equations in Memory-Limited Partially Observable Stochastic Control. Entropy 2023, 25, 208.
25. Bensoussan, A.; Frehse, J.; Yam, S.C.P. The Master equation in mean field theory. J. Math. Pures Appl. 2015, 103, 1441–1474.
26. Bensoussan, A.; Frehse, J.; Yam, S.C.P. On the interpretation of the Master Equation. Stoch. Process. Their Appl. 2017, 127, 2093–2137.
27. Bensoussan, A.; Frehse, J.; Yam, P. Mean Field Games and Mean Field Type Control Theory; Springer Briefs in Mathematics; Springer: New York, NY, USA, 2013.
28. Carmona, R.; Delarue, F. Probabilistic Theory of Mean Field Games with Applications I. In Probability Theory and Stochastic Modelling; Springer Nature: Cham, Switzerland, 2018; Volume 83.
29. Carmona, R.; Delarue, F. Probabilistic Theory of Mean Field Games with Applications II. In Probability Theory and Stochastic Modelling; Springer International Publishing: Cham, Switzerland, 2018; Volume 84.
30. Achdou, Y. Finite Difference Methods for Mean Field Games. In Hamilton-Jacobi Equations: Approximations, Numerical Analysis and Applications: Cetraro, Italy 2011; Loreti, P., Tchou, N.A., Eds.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–47.
31. Achdou, Y.; Laurière, M. Mean Field Games and Applications: Numerical Aspects. In Mean Field Games: Cetraro, Italy 2019; Achdou, Y., Cardaliaguet, P., Delarue, F., Porretta, A., Santambrogio, F., Cardaliaguet, P., Porretta, A., Eds.; Lecture Notes in Mathematics; Springer International Publishing: Cham, Switzerland, 2020; pp. 249–307.
32. Lauriere, M. Numerical Methods for Mean Field Games and Mean Field Type Control. arXiv 2021, arXiv:2106.06231.
33. Bernstein, D.S. Bounded Policy Iteration for Decentralized POMDPs. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, UK, 30 July–5 August 2005; pp. 1287–1292.
34. Bernstein, D.S.; Amato, C.; Hansen, E.A.; Zilberstein, S. Policy Iteration for Decentralized Control of Markov Decision Processes. J. Artif. Intell. Res. 2009, 34, 89–132.
35. Amato, C.; Bernstein, D.S.; Zilberstein, S. Optimizing Memory-Bounded Controllers for Decentralized POMDPs. In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, Vancouver, BC, Canada, 19–22 July 2007.
36. Amato, C.; Bonet, B.; Zilberstein, S. Finite-State Controllers Based on Mealy Machines for Centralized and Decentralized POMDPs. Proc. AAAI Conf. Artif. Intell. 2010, 24, 1052–1058.
37. Kumar, A.; Zilberstein, S. Anytime Planning for Decentralized POMDPs using Expectation Maximization. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, Catalina Island, CA, USA, 8–11 July 2010; p. 9.
38. Oliehoek, F.A.; Amato, C. A Concise Introduction to Decentralized POMDPs; SpringerBriefs in Intelligent Systems; Springer International Publishing: Cham, Switzerland, 2016.
39. Tottori, T.; Kobayashi, T.J. Forward and Backward Bellman Equations Improve the Efficiency of the EM Algorithm for DEC-POMDP. Entropy 2021, 23, 551.
40. Yong, J.; Zhou, X.Y. Stochastic Controls; Springer: New York, NY, USA, 1999.
41. Kushner, H. Optimal stochastic control. IRE Trans. Autom. Control 1962, 7, 120–122.
42. Carlini, E.; Silva, F.J. Semi-Lagrangian schemes for mean field game models. In Proceedings of the 52nd IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 3115–3120.
43. Carlini, E.; Silva, F.J. A Fully Discrete Semi-Lagrangian Scheme for a First Order Mean Field Game Problem. SIAM J. Numer. Anal. 2014, 52, 45–67.
44. Carlini, E.; Silva, F.J. A semi-Lagrangian scheme for a degenerate second order mean field game system. Discret. Contin. Dyn. Syst. 2015, 35, 4269.
45. Kushner, H.J.; Dupuis, P.G. Numerical Methods for Stochastic Control Problems in Continuous Time; Springer: New York, NY, USA, 1992.
46. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions, 2nd ed.; Springer: New York, NY, USA, 2006.
47. Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; Wiley-Interscience: New York, NY, USA, 2014.
48. Charalambous, C.D.; Ahmed, N. Equivalence of decentralized stochastic dynamic decision systems via Girsanov’s measure transformation. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 439–444.
49. Telsang, B.; Djouadi, S.; Charalambous, C. Numerical Evaluation of Exact Person-by-Person Optimal Nonlinear Control Strategies of the Witsenhausen Counterexample. In Proceedings of the 2021 American Control Conference (ACC), New Orleans, LA, USA, 25–28 May 2021; pp. 1250–1255.
50. Ruthotto, L.; Osher, S.J.; Li, W.; Nurbekyan, L.; Fung, S.W. A machine learning framework for solving high-dimensional mean field game and mean field control problems. Proc. Natl. Acad. Sci. USA 2020, 117, 9183–9193.
51. Lin, A.T.; Fung, S.W.; Li, W.; Nurbekyan, L.; Osher, S.J. Alternating the population and control neural networks to solve high-dimensional stochastic mean-field games. Proc. Natl. Acad. Sci. USA 2021, 118, e2024713118.
Figure 1. Schematic diagram of (a) decentralized stochastic control (DSC) and (b) memory-limited DSC (ML-DSC). (a) DSC consists of a system and N controllers. $x_t \in \mathbb{R}^{d_x}$ is the state of the target system at time $t \in [0, T]$. $y_t^i \in \mathbb{R}^{d_{y^i}}$, $y_{0:t}^i := \{ y_\tau^i \mid \tau \in [0, t] \}$, and $u_t^i \in \mathbb{R}^{d_{u^i}}$ are the observation, the observation history, and the control of controller i, respectively. The controller i cannot accurately observe the state of the system $x_t$ and the controls of the other controllers $u_t^j$ ($j \neq i$). It can only obtain their noisy observation $y_t^i$. Then, the controller i determines the control $u_t^i$ based on the noisy observation history $y_{0:t}^i$, which ideally requires infinite-dimensional memory to store the observation history $y_{0:t}^i$. (b) ML-DSC explicitly formulates the finite-dimensional memory $z_t^i \in \mathbb{R}^{d_{z^i}}$. Controller i compresses the infinite-dimensional observation history $y_{0:t}^i$ into the finite-dimensional memory $z_t^i$ by optimally designing the control over the memory $v_t^i \in \mathbb{R}^{d_{v^i}}$ as well as the control over the state $u_t^i \in \mathbb{R}^{d_{u^i}}$.
Figure 2. Schematic diagram of (a) completely observable stochastic control (COSC), (b) memory-limited partially observable stochastic control (ML-POSC), and (c) memory-limited decentralized stochastic control (ML-DSC). Blue and gray regions indicate the observables and the unobservables for the controller 1, respectively.
Figure 4. (a–f) Trajectories of the elements of $\Psi(t) \in \mathbb{R}^{3 \times 3}$ (blue), $\Pi(t) \in \mathbb{R}^{3 \times 3}$ (orange), and $\Phi(t) \in \mathbb{R}^{3 \times 3}$ (green) in Section 6.1. They are the solutions of the Riccati Equation (38), the partially observable Riccati Equation (42), and the decentralized Riccati Equation (39), respectively. Because $\Psi(t)$, $\Pi(t)$, and $\Phi(t)$ are symmetric matrices, $\Psi_{z^1 x}(t)$, $\Psi_{z^2 x}(t)$, $\Psi_{z^2 z^1}(t)$, $\Pi_{z^1 x}(t)$, $\Pi_{z^2 x}(t)$, $\Pi_{z^2 z^1}(t)$, $\Phi_{z^1 x}(t)$, $\Phi_{z^2 x}(t)$, and $\Phi_{z^2 z^1}(t)$ are not visualized.
Figure 5. Stochastic simulations in Section 6.1. (a–c) Stochastic behaviors of (a) the state $x_t$, (b) controller 1’s memory $z_t^1$, and (c) controller 2’s memory $z_t^2$ for 100 samples in Section 6.1. (d) The expected cumulative cost computed from the 100 samples. Blue, orange, and green curves are controlled by $u^{\Psi}$ (55), $u^{\Pi}$ (56), and $u^{*}$ (35), respectively.
Figure 7. (a–j) Trajectories of the elements of $\Psi(t) \in \mathbb{R}^{4 \times 4}$ (blue), $\Pi(t) \in \mathbb{R}^{4 \times 4}$ (orange), and $\Phi(t) \in \mathbb{R}^{4 \times 4}$ (green) in Section 6.2. They are the solutions of the Riccati Equation (38), the partially observable Riccati Equation (42), and the decentralized Riccati Equation (39), respectively. $\Psi(t)$, $\Pi(t)$, and $\Phi(t)$ are symmetric matrices, and duplicate elements are not visualized.
Figure 8. Stochastic simulations in Section 6.2. (a–e) Stochastic behaviors of the actual state $x_t^{\mathrm{act}} = (x_t^{\mathrm{act},1}, x_t^{\mathrm{act},2})$ (a–c) and the memory $z_t = (z_t^1, z_t^2)$ (d,e) for 100 samples. (f) The expected cumulative cost computed from the 100 samples. Black curves indicate the target state $x_t^{\mathrm{tar}}$. Blue, orange, and green curves are controlled by $u^{\Psi}$ (55), $u^{\Pi}$ (56), and $u^{*}$ (35), respectively.
