Article

An Improved Dyna-Q Algorithm Inspired by the Forward Prediction Mechanism in the Rat Brain for Mobile Robot Path Planning

1 Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2 Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(6), 315; https://doi.org/10.3390/biomimetics9060315
Submission received: 1 April 2024 / Revised: 4 May 2024 / Accepted: 8 May 2024 / Published: 23 May 2024
(This article belongs to the Special Issue Bioinspired Algorithms)

Abstract

The traditional Model-Based Reinforcement Learning (MBRL) algorithm has high computational cost, poor convergence, and poor performance in robot spatial cognition and navigation tasks, and it cannot fully explain the ability of animals to quickly adapt to environmental changes and learn a variety of complex tasks. Studies have shown that vicarious trial and error (VTE) and the forward prediction mechanism of the hippocampus in rats and other mammals can serve as key components of action selection in MBRL to support “goal-oriented” behavior. Therefore, we propose an improved Dyna-Q algorithm inspired by the forward prediction mechanism of the hippocampus to solve the above problems and tackle the exploration–exploitation dilemma of Reinforcement Learning (RL). The algorithm alternately simulates potential future paths for the mobile robot and dynamically adjusts the sweep length according to the decision certainty, so as to determine action selection. We test the performance of the algorithm in two-dimensional maze environments with static and dynamic obstacles, respectively. Compared with classic RL algorithms such as State-Action-Reward-State-Action (SARSA) and Dyna-Q, the algorithm speeds up spatial cognition and improves the global search ability of path planning. In addition, our method reflects key features of how the brain organizes MBRL to effectively solve difficult tasks such as navigation, and it provides a new idea for spatial cognitive tasks from a biological perspective.

1. Introduction

Robotics is an active research field that has attracted widespread attention. In recent years, extensive research has been carried out on robots’ environmental cognition, path planning, mobile obstacle avoidance, and other tasks. Among them, spatial cognition and path planning are essential functions of mobile robots [1]. The path planning of mobile robots mainly includes two types:
(1)
Global path planning based on complete environmental prior information. When the external environment is fully known, the robot can use traditional global path planning algorithms such as the A* [2] or Dijkstra algorithm to generate the best path on the existing environmental map. However, known environmental information is not always complete and accurate, and some areas may be inaccurate or unknown.
(2)
Local path planning with uncertain environmental information, such as the dynamic window approach (DWA) [3]. In the absence of environmental knowledge, such methods may fall into local optima. They only simulate and evaluate the next action and are prone to becoming trapped by C-shaped obstacles.
Both kinds of methods rely on an environmental map for path planning, lack the ability to learn autonomously, and cannot perform autonomous spatial cognition. Moreover, users must pre-program every situation the robot may encounter, which incurs high costs.
Therefore, in order for mobile robots to explore and navigate independently in unknown environments, a learning-based approach is needed. Reinforcement Learning (RL) [4] is considered an important method for achieving general artificial intelligence by interacting with the environment in a trial-and-error manner. Since RL can learn actively and adapt to a complex dynamic environment, it provides a new method for solving the problem of path planning. Researchers have introduced RL into path planning [5,6] to address the limitations of traditional methods.
Reinforcement Learning can be divided into Model-Free Reinforcement Learning (MFRL) and Model-Based Reinforcement Learning (MBRL) according to whether the algorithm contains an environment model. In MFRL, the agent uses the experience gained through direct interaction with the environment to improve the strategy or estimate the value function. Classic MFRL methods include the Q-learning and State-Action-Reward-State-Action (SARSA) algorithms. The Q-learning algorithm and its variants are widely used in robot path planning because they can learn independently without an environmental map [7,8,9,10]. However, as the scale and complexity of the environment increase, such model-free methods suffer from low exploration efficiency and slow convergence. Because they require a huge amount of interaction with the real environment, they generate many invalid experiences, making robots prone to hitting walls or causing other accidents.
By contrast, MBRL includes not only direct interaction with the environment but also a virtual environment model. The agent uses the experience gained from exploring the real environment to build a virtual environment model and derive a strategy, and it then uses this strategy to continue collecting experience in the real environment and expanding the model. Using the virtual environment model to make decisions reduces the number of trials and errors needed in the real environment, yields well-performing strategies, effectively reduces the probability of accidents during robot exploration, and enhances navigation efficiency. At the same time, the planning process makes full use of the experience gained from interacting with the real environment and improves sample utilization.
Dyna [11] is a classic MBRL framework that includes strategy learning and internal model building. Dyna-Q is the application of the Dyna framework to Q-learning. Some researchers [12,13] have applied the Dyna-Q algorithm to robot navigation; it works well when obstacles are static but not in dynamic environments. First, the sparsity of environmental rewards makes it difficult for robots to find the target point, and learning efficiency is low. Secondly, it is still a challenge to plan an efficient and collision-free path in an environment with multiple dynamic obstacles [14]. In addition, RL algorithms, including Dyna-Q, generally face the dilemma of exploration and exploitation. Robots need to constantly try new actions to avoid falling into local optima while gradually converging to the optimal strategy. How to balance exploration and exploitation remains a major challenge. In general, RL has the ability for autonomous learning compared to traditional path planning algorithms, but there are still shortcomings in learning efficiency, especially in dynamic environments.
Realizing spatial cognition in an unfamiliar environment is an important survival skill for mammals. Many studies have shown that spatial cognitive function plays an important role in animal navigation [15]. Studying the spatial cognitive mechanisms of animals is of great significance for improving existing navigation methods and for imitating and learning from unique biological mechanisms. Animals have excellent navigation skills; Tolman [16] found that rats could independently explore and learn the structure of a maze. Inspired by the spatial cognition and navigation functions of animals, researchers have drawn on the neurophysiological mechanisms of animals to conduct computational modeling, further deepening the understanding of animal cognitive mechanisms and providing new ideas for the navigation of mobile robots.
Following such a research approach, this paper presents an improved Dyna-Q algorithm inspired by cognitive mechanisms in the hippocampus and striatum to achieve more efficient navigation in unknown environments or without sufficient environmental knowledge.
The main contributions of this paper are as follows:
(1)
We propose an improved Dyna-Q algorithm inspired by the cognitive mechanisms of the hippocampus and striatum. The model incorporates the forward prediction mechanism of the hippocampus to improve Dyna-Q’s action decision-making mechanism, so that the robot virtually simulates multiple future steps when selecting an action. This forward prediction mechanism can balance exploration and exploitation and reduce the probability of falling into local optima in navigation tasks. At the same time, the model simulates the function of the striatum, evaluates the decision certainty of each forward simulation, and dynamically adjusts the sweep depth and action selection mode to improve the convergence and decision-making efficiency of the algorithm.
(2)
The T-maze experiment in this paper verified that the model exhibits characteristics similar to the VTE mechanism of rats, proving its biological rationality, demonstrating the feasibility of introducing biological neural mechanisms into machine learning methods, and providing a new idea for improving existing RL algorithms and robot path planning from a brain-like perspective.
(3)
This paper implements robot navigation in unknown environments with static and dynamic obstacles and compares it with existing RL algorithms. Experimental results show that our algorithm achieves autonomous spatial cognition, can converge faster, and has better performance in path planning compared to the SARSA and Dyna-Q algorithms.
(4)
In summary, we propose a novel Dyna-Q algorithm framework simulating cognitive mechanisms in the hippocampus and striatum to improve the efficiency of navigation tasks in complex environments, providing a promising direction for future research on RL.
The rest of this paper is organized as follows: Section 2 introduces the relevant research background. Section 3 introduces the framework, mathematical model, and working principle of the algorithm. Section 4 presents the experimental design and results. We carry out simulation experiments in a two-dimensional maze to test the model’s spatial cognition ability, and we compare our model with other models in terms of navigation performance. In Section 5, we discuss and analyze the experimental results and possible reasons for our findings. Finally, we summarize this work and draw conclusions in Section 6.

2. Related Works

2.1. Vicarious Trial and Error

Learning to predict long-term rewards is fundamental to the survival of many animals and is, in fact, the goal of RL. There is evidence that brain evolution has taken many ways [17] to achieve this goal. One is to learn an environment model, or cognitive map, and simulate future states to generate predictions of long-term rewards. Tolman and other researchers noticed that when rats explore a maze and encounter choice points such as crossroads, they occasionally stop and wander back and forth, which seems to indicate confusion about which path to take. These researchers speculated that the rats were imagining potential future choices and called this behavior Vicarious Trial and Error (VTE) [18].
When making decisions, biological agents undergo a process of deliberation [19]: they evaluate potential options based on a schema describing how the world works, such as a cognitive map, and use the results of these hypothetical evaluations as a means of decision making. VTE usually occurs in the early stages of rat learning [20], especially when rats do not know which action to take at certain positions after random exploration and a preliminary understanding of the space. The VTE process is shown in Figure 1.
Current research on the mechanism of VTE is mainly focused on its biological function and explanation. Researchers have placed rats in an environment with a fork in the road, such as a T-maze, to test rat behavior characteristics during VTE. VTE is considered to reflect biological imagination and an assessment [21] of the future, so it is a flexible and deliberative decision-making process.
The VTE process is usually divided into three stages: deliberation (the VTE process is significantly enhanced), planning (the VTE process gradually decreases), and automation (VTE is no longer performed, and a fixed sequence of actions is executed), as shown in Figure 2. Studies have shown that in goal-oriented navigation, the VTE mechanism is related to the hippocampus-ventral striatum circuit [22] in the rodent brain. When navigating the maze, rats stop at the decision point, first turning their head in one possible direction and then in another. During this turning back and forth, the place cells corresponding to the selected branch of the maze are activated by forward sweeping, as if the rats were really passing through that path.
The VTE mechanism allows rats and other mammals to simulate the possible trajectory and locations in the brain in the next few steps when they face multiple paths from which to choose, such as a fork. Therefore, animals can evaluate the effect of different paths, thus improving the efficiency of spatial cognition and path planning.
As the VTE mechanism is beneficial for spatial cognition and goal-directed learning of animals, imitating it and presenting a novel algorithm would help increase the efficiency of robot navigation and improve RL algorithms by optimizing decision-making policies, which is the motivation for our work.

2.2. Forward Prediction and Decision Certainty Assessment

Adaptive behavior in the environment requires animals to analyze past experience, which is often both prospective and retrospective, and hippocampal function is crucial for the representation and storage of sequence information. Forward prediction represented by hippocampal theta oscillations [23] is considered an important part of the VTE process: rats carry out “mental travel” to simulate possible outcomes. Place cells [24] in the CA1 region of the rat hippocampus can encode spatial information and activate at specific locations. Their spatial specificity allows rats to position themselves in space. During active navigation, place cells are internally organized, generating forward and reverse sequences within a single theta oscillation cycle.
Theta sequences in the hippocampus may be the basis for human situational memory retrieval [25], which can ensure the accuracy and stability of spatial representation of place cells. Moreover, goal-directed navigation is difficult to support [26] without theta sequences. Therefore, the forward sweep mechanism based on theta sequences is crucial for memory-guided behavior in the hippocampus [27].
Theta oscillation occurs when rats stop at the selection point and exhibit VTE. In a given theta cycle, place cells activate sequentially along a virtual path and move toward a goal [28], then the next cycle begins to forward sweep in another direction. The place cell activation sequence alternates between possible future paths and the rat’s moving direction. In this way, animals virtually attempt possible future actions and observe potential outcomes of these actions in the brain, ultimately forming multiple potential pathways. The hippocampus forward prediction sequence is shown in Figure 3.
Neurophysiological studies show that the forward sweep mechanism of place cells is not a way to collect external environmental information [29]. It merely represents an alternative search process within animals, and the collection of environmental information still depends on exploration at the early stage of training. Therefore, VTE will not occur at initial exploration but only after animals have had experience with tasks and built an environment model.
On the basis of simulating future paths, it is also necessary to evaluate an optimal option for each path to improve the certainty of the decision. The striatum is adjacent to and closely connected with the hippocampus in the brain. Research shows that the striatum is closely related to reward learning and action selection [30]; its role in animal environmental cognition is mainly to select actions and evaluate the reward value obtainable by the action taken, showing a relative preference for action selection and reward expectation. Some striatum-based computational models are mainly applied to RL and action selection.
Strong projections from the CA1 region of the hippocampus to the ventral striatum (vStr) may transmit spatial context information [31,32], forming the connection between position and reward [33]. Reward-related cells in the vStr are activated during VTE to provide reward signals [34]. The vStr also receives dopamine released by dopamine cells in the ventral tegmental area (VTA) to assess the certainty of current predictions. When the hippocampus generates forward prediction sequences at difficult decision points to simulate future spatial trajectories, the ventral striatum evaluates these predictions. The joint action of the two allows the animal to plan its actions over the long term in its mind. This process is similar to MBRL, which can integrate many different values rather than representing the world with a single value.
Existing research shows that mammalian brains can implement model-based mechanisms, that is, establish a virtual environment model inside the brain based on direct interaction with the environment, and then learn based on this model. For example, Khamassi et al. [35] reviewed model-based (preserving the representation of the world) and model-free (responding to immediate stimuli) learning algorithms, using the dorsolateral striatum to represent “model-free” learning and the dorsomedial striatum to represent “model-based” learning, and then proposed that the core role of the ventral striatum is to learn the probability of action selection of each state transition. They proposed a model-based bidirectional search model, which combines forward trajectory sampling from the current position and backward sampling by prioritized sweeping from the state related to the large reward prediction error to explain why hippocampal reactivations drastically diminish when the animal’s performance is stable. Elisa Massi et al. [36] imitated hippocampal activations, implemented an experience replay mechanism, and applied it to mobile robot navigation, giving the navigation robot a neuro-inspired RL architecture.
Stoianov et al. [37] proposed a spatial navigation calculation model based on the hippocampus-ventral striatum (HC-vStr) circuit, proving the possibility of mapping and simulating the MBRL mechanism with the HC-vStr circuit to reproduce behavior and neural data. They used a Bayesian nonparametric model to build a brain-inspired MBRL calculation model, and verified that this forward looking prediction in the rat brain could improve action selection and learning. Chai et al. [38] proposed a striatal behavior learning model consisting of striatum and matrix model to explain the generation of habitual behavior in animal navigation. In the striatum model, directional information is constantly updated based on the mechanism of operant conditioning, which leads to habitual behavior. They tested the Morris square dry maze task, and the results showed that the model was effective in explaining habit-related behavior. It could successfully solve navigation tasks with habits and display key neural characteristics of the striatum, which may be significant for the bionic navigation of robots.
As can be seen from the above studies, the introduction of the above-mentioned forward prediction and decision certainty assessment mechanism into an RL algorithm will help to improve the robot’s environmental cognitive efficiency, reduce exploration randomness, and make full use of the knowledge obtained in the previous exploration, which is also the overall idea of this model.

2.3. Dyna-Q Algorithm

Q-learning is a classical MFRL algorithm for solving Markov Decision Processes (MDPs). Its main advantage is the use of temporal difference (TD) learning to achieve off-policy learning. $Q^{\pi}(s, a)$ is defined as the expected return of the state-action pair $(s, a)$ under policy $\pi$. The calculation formula is as follows:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[ G_t \mid s_t = s, a_t = a \right] = \mathbb{E}_{\pi}\left[ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \,\middle|\, s_t = s, a_t = a \right]$$

The core idea of Q-learning is that the target value used to update the current state-action pair $(s, a)$ is generated by the policy being evaluated (the greedy policy), rather than by following the current behavior policy at the next state-action pair $(s', a')$. The final strategy of Q-learning is obtained by iterating the state-action value function $Q(s, a)$. The Q table is the set of $Q(s, a)$ values, which stores the agent’s preferences for taking different actions in different states of the environment, thus guiding action selection. The state-action value is updated as follows:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \cdot \left[ r_{t+1} + \gamma \cdot \max_{a'} Q(s', a') - Q(s, a) \right]$$
The Dyna architecture is characterized by combining model-free methods with a world model, as shown in Figure 4. The world model can provide a large amount of simulated data for the strategy learning of model-free algorithms. The agent interacts with the environment to obtain real data and learns an internal virtual world model. The world model is then used to generate simulated interaction data between the agent and the environment, based on predictions imagined from each state, to learn the value function. Although there is a certain deviation between the model and the real environment, the simulated data are still reliable enough to serve as training data for RL algorithms. This method can supplement the data needed for strategy training in model-free methods, reduce the amount and cost of interaction with the real environment, and improve sample efficiency.
Dyna-Q is the application of the Dyna architecture to Q-learning. On the basis of Q-learning, a planning step is added: the transition tuples $(s_t, a_t, r_{t+1}, s_{t+1})$ obtained from interaction with the environment are stored in the model, and data are then randomly sampled from the model and used for planning with the Q-learning update to speed up learning.
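To make the Dyna-Q planning loop concrete, the following minimal tabular sketch shows how each real transition updates the Q table, is stored in a learned model, and is then replayed for several planning steps. This is our own illustration under assumed parameter values and an assumed `env` interface (`reset()` and `step()`), not the authors' implementation.

```python
import random
from collections import defaultdict

def dyna_q(env, n_actions, episodes=200, planning_steps=20,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Minimal tabular Dyna-Q loop (illustrative sketch, not the paper's code).

    `env` is assumed to expose reset() -> state and
    step(state, action) -> (next_state, reward, done)."""
    Q = defaultdict(lambda: [0.0] * n_actions)   # Q table: state -> action values
    model = {}                                   # learned world model: (s, a) -> (r, s')
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy behavior policy
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s_next, r, done = env.step(s, a)
            # (a) direct RL: Q-learning update from real experience
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            # (b) model learning: remember the observed transition
            model[(s, a)] = (r, s_next)
            # (c) planning: replay random stored transitions from the model
            for _ in range(planning_steps):
                (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
                Q[ps][pa] += alpha * (pr + gamma * max(Q[ps_next]) - Q[ps][pa])
            s = s_next
    return Q
```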
However, Dyna-Q also has some shortcomings, as mentioned in the introduction:
(1)
In the path planning problem, the reward function of traditional RL rewards the agent only when it reaches the destination or encounters obstacles. When the environment is large, there are many invalid states, and the traditional reward function has the problem of sparse reward. Therefore, it is difficult for the agent to obtain positive rewards and find the goal point. It requires a lot of random searches to gain effective experience, and learning efficiency is low.
(2)
Secondly, it is still a challenge to plan an efficient and collision-free path in the environment with multiple dynamic obstacles. To achieve this goal, RL algorithms also need to increase the time cost of learning.
(3)
In addition, RL algorithms, including Dyna-Q, generally face the dilemma of exploration and exploitation. In the early stages of training, the agent’s exploration is largely blind. The agent needs to constantly try new actions to avoid falling into local optima, reduce exploratory behavior at a later stage, and gradually converge to the optimal strategy. How to balance exploration and exploitation remains a major challenge.
To address these problems, this paper introduces the biological mechanisms described above on the basis of Dyna-Q. The agent’s decisions are no longer limited to the Q value of the current action; instead, it simulates multiple future steps before making a decision to achieve the effect of long-term planning, so that it can try as many different future trajectories as possible in the initial stage of exploration and collect experience more effectively. After the decision certainty reaches a high level, the length and frequency of sweeping are gradually reduced to speed up navigation, better balance exploration and exploitation, and improve the convergence and decision-making efficiency of the agent, ultimately improving the performance of traditional RL algorithms in robot environmental cognition and path planning from a biological point of view.

3. Materials and Methods

3.1. Overall Framework of the Model

Inspired by the VTE behavior of rats and other mammals in environmental cognition, we simulate the functions of brain regions such as the hippocampus and ventral striatum and introduce them into the Dyna-Q algorithm, and we propose a brain-inspired environmental cognitive computing model. The overall framework of the model is shown in Figure 5 below.
The right side of the figure is consistent with Dyna-Q: direct RL is based on real-world experience gained through direct interaction with the environment, and the value function is also updated through planning with the virtual environment model. On the basis of the Dyna-Q algorithm, we improved the action selection method and added a forward sweep and decision certainty assessment mechanism. This mechanism operates inside the robot and depends mainly on the state-action values $Q(s_t, a_t)$ stored in the Q table of the RL algorithm, as shown in the green box on the left of Figure 5.
We define the environmental cognition task as a Markov Decision Process (MDP); the standard RL algorithm is a process of trial-and-error interaction with the environment under the MDP framework. This model uses the quintuple $M = \langle S, A, P, R, \gamma \rangle$, whose elements are defined as follows:
$S$ represents the set of states $s$ in the environment, $s \in S$. $S$ is defined as follows, where $W$ and $H$ are the width and height of the environment map, respectively:

$$S = \{ s = (x, y) \mid 1 \le x \le W,\ 1 \le y \le H \}$$

$A$ is the set of discrete, finite actions $a \in A$ that the robot can take. The action set is $A = \{ a_1, a_2, \ldots, a_n \}$.
The MBRL algorithm can build a world model during learning, which usually includes a state transition function and a reward function. In this model, $P$ is the state transition probability, which represents the probability that the robot, in a certain state $s_t$ while interacting with the environment, transfers to state $s_{t+1}$ after taking action $a_t$.
$R$ is the reward function of this model, and $R(s, a)$ measures the immediate reward obtained by the robot after taking action $a$ in state $s$. Specific definitions of $R$ and $P$ are given in the following sections.
In the Q-value update formula adopted by the Dyna-Q algorithm, $\gamma$ is a discount factor used to discount the maximum expected value in the future.
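As a small illustration of this formulation (a sketch under our own assumptions about the grid size; the eight-action set is defined later in Section 4.1), the state space, Q table, and learned transition statistics could be laid out as follows.

```python
import numpy as np

W, H = 30, 30        # assumed width and height of the environment map (grid cells)
n_actions = 8        # eight movement primitives, as defined in Section 4.1
gamma = 0.95         # discount factor (illustrative value, not the paper's setting)

# State set S = {s = (x, y) | 1 <= x <= W, 1 <= y <= H}
S = [(x, y) for x in range(1, W + 1) for y in range(1, H + 1)]

# Q(s, a): state-action values indexed by 1-based grid coordinates and action
Q = np.zeros((W + 1, H + 1, n_actions))

# Learned world model: visit counts and transition probabilities P(s' | s, a)
transition_count = {}    # (s_next, s, a) -> number of observed visits
transition_prob = {}     # (s, a) -> {s_next: estimated probability}
```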

3.2. Brain-Inspired Environmental Cognitive Computing Model

3.2.1. Design of Action Selection Mechanism Based on Forward Prediction and Decision Certainty Assessment

This model mainly simulates “mental travel” in an animal’s brain when it is at a difficult decision point to help the robot make better decisions from a long-term perspective. The following is the specific process:
1.
Take the current state $s_t$ of the robot as the starting point.
2.
In the first sweep step, the $n$ actions in the action set $A$ are simulated in sequence from $s_t$ to reach $n$ potential subsequent states $\hat{s}_1^1 \sim \hat{s}_1^n$, and the action value $Q(s_t, a_i)$ of the $i$th branch is accumulated into $Q\_sweep_i$, respectively. Here, $i$ denotes the $i$th sweep direction; its range equals the number of actions in the action set $A$, i.e., $i \in [1, n]$. $\hat{s}_j^i$ is the state reached during the sweep, and $Q\_sweep_i$ is the Q value accumulated in the $i$th direction during the sweep. Figure 6 shows the forward sweep mechanism in the environment:
3.
Then, the decision certainty $certainty_j$ at the current depth $j$ is calculated. Here, $j$ is the current sweep depth, which is dynamically adjusted by the decision certainty during the sweep process, $1 \le j \le Max\_Depth$. If the certainty exceeds the threshold, the sweep stops.

$$softmax(Q\_sweep_i) = \frac{e^{\beta \, Q\_sweep_i}}{\sum_{i=1}^{n} e^{\beta \, Q\_sweep_i}}$$

$$certainty_j = \max_i \big( softmax(Q\_sweep_i) \big) - \operatorname{submax}_i \big( softmax(Q\_sweep_i) \big)$$

$$a_i = \arg\max_i \, Q\_sweep_i$$

Here, $\operatorname{submax}$ denotes the second-largest value. If the decision certainty is greater than the threshold $SweepCertThr$, the sweep stops, and the robot selects the initial action $a_i$ of the branch with the highest cumulative state-action value among the $n$ directions.
4.
If the certainty does not exceed the threshold, the next sweep step is carried out from the potential states $\hat{s}_j^i$ reached by the previous step. Since the softmax function is non-negative and normalized, the processed $Q\_sweep_i$ values lie in (0, 1), and the difference between the maximum and the second-largest value also lies in (0, 1). Therefore, the threshold $SweepCertThr$ is likewise a decimal between 0 and 1, and its specific value is fine-tuned experimentally.
Unlike the first sweep step, in order to reduce computational complexity, subsequent steps do not extend $n$ new branches from each state; instead, the action $a_i$ to be simulated in this step and the possible next state $\hat{s}_{j+1}^i$ are determined by the maximum state-action value $Q(\hat{s}_j^i, a_i)$ in the Q table. While sweeping, the robot accumulates the state-action value $Q(\hat{s}_j^i, a_i)$ of each direction at each depth into $Q\_sweep_i$ and uses a discount factor $discount_j$, which decays as the sweep depth increases, to reduce the weight of more distant states.

$$Q\_sweep_i = Q\_sweep_i + discount_j \cdot Q(\hat{s}_j^i, a_i)$$

After the state-action values of the $n$ directions at the current depth have been accumulated, the robot judges the decision certainty from the $n$ accumulated values $Q\_sweep_i$ using the method described in step 3, until the decision certainty exceeds the threshold, at which point the robot selects the initial action $a_i$ with the maximum cumulative Q value (a minimal code sketch of this certainty calculation is given after this list). Figure 7 shows the overall process of the forward sweep and action selection mechanism, and Table 1 explains the model parameters.
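As referenced above, the following short sketch (our illustration; β and the example values are assumed) computes the decision certainty of one sweep depth as the gap between the largest and second-largest softmax outputs over the accumulated branch values.

```python
import numpy as np

def decision_certainty(q_sweep, beta=1.0):
    """certainty = max(softmax(Q_sweep)) - second max(softmax(Q_sweep))."""
    z = np.exp(beta * (q_sweep - np.max(q_sweep)))   # numerically stable softmax
    p = z / z.sum()
    top_two = np.sort(p)[-2:]
    return top_two[1] - top_two[0]

# One clearly better branch yields a much higher certainty than near-equal branches:
print(decision_certainty(np.array([2.0, 0.1, 0.2, 0.1])))    # ~0.57
print(decision_certainty(np.array([0.5, 0.5, 0.49, 0.5])))   # ~0.00
```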

3.2.2. Improved ε-Greedy Method

The exploration–exploitation dilemma is also a major challenge for RL algorithms. In traditional RL algorithms such as Q-learning and SARSA, robots adopt the ε-greedy strategy to address this dilemma: they choose the action with the highest Q value most of the time and behave greedily, but with a small probability ε they select an action at random, which reduces the probability of falling into a local optimum.
In this paper, the traditional ε-greedy method is improved. In the forward sweep mechanism described above, if the decision certainty at every depth fails to exceed the threshold by the time the robot has swept to the maximum depth, the improved ε-greedy method is adopted and actions are selected randomly with a certain probability. The purpose is to reduce the probability of falling into a local optimum and to avoid over-reliance on previously gained experience, similar to the traditional RL algorithm.
The sweep of hippocampal cells does not always reflect the rat’s direction of movement [39], which indicates that the sweep reflects the VTE process in the brain rather than a means of collecting external sensory information. Exploratory behavior occurs when rats have very limited experience, while VTE occurs when animals have extensive experience but need to complete specific tasks.
Therefore, the greedy factor $\varepsilon_{vte}$ we used is not set to a fixed value but decreases gradually. $\varepsilon_{vte}$ is large in the early stages of training, so the robot tends to select actions randomly and explore as many unknown states in the environment as possible. As the number of iterations increases, $\varepsilon_{vte}$ continuously decays to the final greedy factor ε, and the robot gradually tends to exploit the optimal strategy it has obtained, accelerating convergence. In the formula, $episode$ is the current number of training episodes and $Max\_Episode$ is the maximum number of training episodes; for example, $\varepsilon_{vte} = 3\varepsilon$ at the start of training and decays to ε by the final episode.

$$\varepsilon_{vte} = \frac{3\varepsilon}{1 + 2 \cdot \dfrac{episode}{Max\_Episode}}$$
The complete pseudo code of the action selection mechanism is as follows:
Algorithm 1: Forward Sweep Policy & Improved ε-Greedy Policy
INPUT: Q, s_t
OUTPUT: a_t
Initialization: n, Max_Depth, SweepCertThr
discount ← γ^(1 : Max_Depth)                 // discount factor, decays with depth
ŝ_j^i ← 0                                    // states reached at each sweep step
Q_sweep ← 0                                  // cumulative Q values of each sweep branch
certainty ← 0                                // decision certainty at each sweep depth
j ← 1                                        // current sweep depth
for i = 1 : n do
      ŝ_1^i ← move(s_t, a_i)                 // predict next state from current state s_t and action a_i
      Q_sweep_i ← discount_1 · Q(s_t, a_i)
end
certainty_1 ← max(softmax(Q_sweep)) − submax(softmax(Q_sweep))
while j < Max_Depth and certainty_j < SweepCertThr do
      j ← j + 1
      for i = 1 : n do
            a ← argmax_a Q(ŝ_j^i, a)         // best action in this sweep branch
            ŝ_{j+1}^i ← move(ŝ_j^i, a)       // take all actions except reversing
            Q_sweep_i ← Q_sweep_i + discount_j · Q(ŝ_j^i, a)
      end
      calculate certainty_j
end
if j == Max_Depth and certainty_j < SweepCertThr then
      generate random number randN ∈ (0, 1)
      if randN > ε_vte then
            a_t ← argmax_a Q(s_t, a)
      else
            a_t ← random action
      end
else
      a_t ← initial action a_i of the branch with the maximum softmax(Q_sweep_i)
end
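For readers who prefer executable code, the following is our Python rendering of Algorithm 1 under simplifying assumptions: `Q` is a mapping from state to an array of action values, `move(state, action)` is a deterministic transition helper, and the greedy factor follows the ε_vte schedule above. It is an illustrative sketch of the mechanism, not the authors' MATLAB implementation.

```python
import random
import numpy as np

def epsilon_vte(eps, episode, max_episode):
    """Decaying greedy factor: 3*eps at the start, eps at the final episode."""
    return 3 * eps / (1 + 2 * episode / max_episode)

def decision_certainty(q_sweep, beta=1.0):
    """Max minus second-max of softmax(Q_sweep) (repeated from the earlier sketch)."""
    z = np.exp(beta * (q_sweep - np.max(q_sweep)))
    p = z / z.sum()
    top_two = np.sort(p)[-2:]
    return top_two[1] - top_two[0]

def forward_sweep_policy(Q, s_t, move, n_actions, episode, max_episode,
                         max_depth=5, sweep_cert_thr=0.3, gamma=0.95, eps=0.1):
    """Sketch of Algorithm 1: forward sweep with certainty-based early stopping."""
    discount = [gamma ** d for d in range(1, max_depth + 1)]
    # depth 1: open one branch per action
    branch_state = [move(s_t, a) for a in range(n_actions)]
    q_sweep = np.array([discount[0] * Q[s_t][a] for a in range(n_actions)])
    certainty = decision_certainty(q_sweep)
    j = 1
    while j < max_depth and certainty < sweep_cert_thr:
        j += 1
        for i in range(n_actions):
            s_hat = branch_state[i]
            a = int(np.argmax(Q[s_hat]))        # best action within this branch
            branch_state[i] = move(s_hat, a)    # extend the branch one step
            q_sweep[i] += discount[j - 1] * Q[s_hat][a]
        certainty = decision_certainty(q_sweep)
    if j == max_depth and certainty < sweep_cert_thr:
        # improved epsilon-greedy fallback when the sweep remains uncertain
        if random.random() > epsilon_vte(eps, episode, max_episode):
            return int(np.argmax(Q[s_t]))
        return random.randrange(n_actions)
    return int(np.argmax(q_sweep))              # initial action of the winning branch
```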

3.2.3. Reward Function R and State Transition Model P

The reward function of this model is as follows, including a reward when reaching or approaching the goal and a punishment when hitting a wall or moving away from the goal. The distance-based reward follows a Gaussian form: the closer the robot is to the goal point, the greater the reward. This is similar to the GPS sensor of a real robot, which can compute the distance between the current position and the goal. In the formula, $r_{hold}$ is the default (initial) value of the reward function, $r_{near}$ is the reward coefficient for approaching the goal (the closer to the destination, the greater the overall reward), $r_{neg}$ is the reward (punishment) for encountering obstacles, and $r_{goal}$ is the reward for reaching the goal.

$$R = \begin{cases} r_{goal}, & \text{arrive at goal} \\ r_{near} \cdot e^{-\frac{distance^2}{\sigma^2}}, & \text{if getting closer to goal} \\ r_{neg}, & \text{hit a wall} \\ r_{hold}, & \text{else} \end{cases}$$

In order to better abstract the function of the hippocampus, we used a simple statistical method to model the state transition model $P(s' \mid s, a)$ of MBRL. When the environmental states are discrete, each transition $(s_{t+1}, s_t, a_t)$ can be stored as a discrete triple. The robot counts the number of times a specific subsequent state $s_{t+1}$ is reached after taking action $a_t$ in the current state $s_t$, and takes its proportion of the total number of visits to all possible subsequent states as the state transition probability $P(s_{t+1} \mid s_t, a_t)$. The more frequently the robot visits a transition, the greater its weight during value iteration, which allows an internal representation of the environment to form when the environment structure is stable.

$$count(s_{t+1}, s_t, a_t) = count(s_{t+1}, s_t, a_t) + 1$$

$$P(s_{t+1} \mid s_t, a_t) = \frac{count(s_{t+1}, s_t, a_t)}{\sum_{i \in S} count(i, s_t, a_t)}$$

We also used a model-based value iteration method to update the Q value, and the reward is the value observed after taking the current action. Transitions between task states are represented probabilistically: the state transition function $P(s_{t+1} \mid s_t, a_t)$ is a probability distribution over all possible next states, and the influence of the future Q value on the current Q value is weighted by the state transition probability. The more often a state is visited during training, the more significant its effect in the model.

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \cdot \big( R + \gamma \cdot P(s_{t+1} \mid s_t, a_t) \cdot \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \big)$$

While training the robot to find the goal point, consistent with the Dyna architecture, we saved the previous state, next state, action, and reward $(s_t, s_{t+1}, a_t, R)$ at each step. After reaching the goal point in an episode, the robot randomly samples the saved experience from the model and learns the state value function internally, which is consistent with Dyna-Q’s planning process. Simulation training in the virtual environment model can improve the convergence property of the model.
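The following sketch illustrates the reward shaping and the counting-based transition model described above; the constants (r_goal, r_near, r_neg, r_hold, σ) are placeholder values for illustration, and the paper's actual settings are listed in Table 2.

```python
import math
from collections import defaultdict

# Placeholder reward constants (the paper's actual values are in Table 2)
R_GOAL, R_NEAR, R_NEG, R_HOLD, SIGMA = 10.0, 1.0, -1.0, 0.0, 5.0

def reward(s_next, goal, hit_wall, prev_distance):
    """Reward R of Section 3.2.3; returns (reward, new distance to the goal)."""
    distance = math.dist(s_next, goal)
    if s_next == goal:
        return R_GOAL, distance
    if hit_wall:
        return R_NEG, distance
    if distance < prev_distance:                            # getting closer to the goal
        return R_NEAR * math.exp(-distance ** 2 / SIGMA ** 2), distance
    return R_HOLD, distance

# Counting-based state transition model P(s' | s, a)
count = defaultdict(int)                                    # (s_next, s, a) -> visit count

def update_transition(s, a, s_next):
    """Update the visit count and return the estimated P(s_next | s, a)."""
    count[(s_next, s, a)] += 1
    total = sum(c for (nxt, st, ac), c in count.items() if st == s and ac == a)
    return count[(s_next, s, a)] / total
```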
The pseudo-code of the algorithm is as follows:
Algorithm 2: Brain-Inspired Dyna-Q Algorithm with VTE Mechanism
Initialization: start, goal
Q(s, a), count(s′, s, a), P(s′ | s, a) ← 0, ∀ s ∈ S, ∀ a ∈ A
for episode = 1 : Max_Episode do
      Model ← 0, s_t ← start
      while not arrived at goal and step < Max_Step do
            get action a_t by the forward sweep policy (Algorithm 1)
            take action a_t to get s_{t+1} and reward R
            distance ← | s_{t+1} − goal |
            if R ≠ r_neg then
                  count(s_{t+1}, s_t, a_t) ← count(s_{t+1}, s_t, a_t) + 1
                  P(s_{t+1} | s_t, a_t) ← count(s_{t+1}, s_t, a_t) / Σ_{i∈S} count(i, s_t, a_t)
            else
                  count(s_{t+1}, s_t, a_t), P(s_{t+1} | s_t, a_t) ← 0
            end
            Q(s_t, a_t) ← Q(s_t, a_t) + α · (R + γ · P(s_{t+1} | s_t, a_t) · max Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t))
            Model(step) ← (s_t, s_{t+1}, a_t, R)
            step ← step + 1
      end
      if step <= step_old then
            Model_save ← Model                // update model
      end
      for i = 1 : n do                        // model learning inside the robot (planning)
            s_t ← random previously experienced state in Model
            a_t ← random action previously taken at state s_t in Model
            take action a_t to get s_{t+1}
            R ← reward at state s_{t+1} in Model
            Q(s_t, a_t) ← Q(s_t, a_t) + α · (R + γ · P(s_{t+1} | s_t, a_t) · max Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t))
      end
end

4. Results

In this section, we conduct several sets of experiments to test the performance of our model. First, a simulation experiment was carried out in a T-choice maze, comparing navigation performance with and without the forward prediction mechanism; this preliminarily demonstrated the advantages of the mechanism and verified that the model shows characteristics similar to the rat VTE mechanism. Then, we conducted complex-environment navigation experiments and tested the environmental cognition and path planning ability of the model in complex maze environments with static and dynamic obstacles. We compared our algorithm with SARSA, Dyna-Q, and a variant of our algorithm without the decision certainty assessment to verify its advantages.

4.1. Environment and Parameter Configuration

For convenience, the experimental environment in this paper is set up as a grid of square cells of equal size, each representing a specific state $s$ in space. White cells represent free space, and black cells represent obstacles. Our robot was set up as a four-wheeled omnidirectional mobile robot equipped with a laser sensor that can sense the presence of surrounding obstacles.
The robot action set $A = \{ a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8 \}$ includes eight action primitives: moving one unit step in each of eight directions, namely, north, northeast, east, southeast, south, southwest, west, and northwest, and entering the adjacent state. When there are obstacles in the environment, the robot’s movement rules are shown in Figure 8: the green arrows indicate actions the robot can take, while the red arrows indicate actions it cannot choose. In particular, when the robot is adjacent to an obstacle, it cannot take a diagonal action that might collide with that obstacle. The robot moves at a constant speed of one grid cell per unit time step. An illustrative sketch of these movement rules is given below.
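The sketch below is one way to encode these movement rules (our illustration; the environment is assumed to be a 2-D boolean occupancy grid where True marks an obstacle). It lists the eight unit moves and rejects diagonal moves that would cut past an adjacent obstacle.

```python
# Eight action primitives: unit moves N, NE, E, SE, S, SW, W, NW (y assumed to grow northward)
ACTIONS = {
    0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1),
    4: (0, -1), 5: (-1, -1), 6: (-1, 0), 7: (-1, 1),
}

def valid_actions(state, grid):
    """Return the actions allowed at `state` on a boolean obstacle grid.

    A move is rejected if the target cell is an obstacle or off the map; a
    diagonal move is also rejected if either adjacent orthogonal cell is an
    obstacle, so the robot cannot slip past the corner of an obstacle."""
    x, y = state
    width, height = len(grid), len(grid[0])

    def blocked(cx, cy):
        return not (0 <= cx < width and 0 <= cy < height) or grid[cx][cy]

    allowed = []
    for a, (dx, dy) in ACTIONS.items():
        nx, ny = x + dx, y + dy
        if blocked(nx, ny):
            continue
        if dx != 0 and dy != 0 and (blocked(x + dx, y) or blocked(x, y + dy)):
            continue
        allowed.append(a)
    return allowed
```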
Rats use a variety of sensory information when exploring the environment. Some brain-inspired models [40] introduce visual and olfactory perception modules for the robot and use scene and odor information to guide it. Our model simulates the characteristics of rats in mazes and other experimental scenes: it is assumed that the robot has a GPS sensor that senses its position to obtain its state, from which the Euclidean distance to the goal is computed. This simplifies the model’s computation and keeps the model biologically reasonable.
We used MATLAB R2021a to carry out simulation experiments on a computer. The parameter configuration of the model and the simulation experiments is shown in Table 2, including the learning rate $\alpha$, greedy factor $\varepsilon$, $Max\_Step$, reward $R$, etc. To ensure fairness, all methods were compared under the same parameter settings.

4.2. Spatial Cognition Experiment in the T-Maze: Demonstrating the Bionic Characteristics of the Model

Some scholars [41,42] have tested the VTE mechanism of rats in the T-choice maze. They verified that the rat VTE process usually occurs at high-cost selection points and gradually disappears as action selection stabilizes, but when reward delivery changes unexpectedly, VTE reappears. As shown in Figure 9, we simulated a small T-maze to verify whether our model has the above characteristics, and then verified the advantages of the forward sweep mechanism for action selection through comparison.
In the T-maze, the red circle is the starting point of the robot and the two green circles are the destinations. The robot starts from the red point and explores freely to find the destination. The episode ends when the robot reaches the destination or exceeds the maximum number of steps.

4.2.1. Comparison between Using the Forward Sweep Mechanism and Non-Sweep

First, we tested the forward sweep mechanism. The robot navigated to goal point 1, and we compared its navigation performance with the forward sweep and without it (sweep depth set to 0).
Our model successfully simulated the goal-oriented behavior of rats in the maze. Through training and reward, the robot could successfully reach a specific destination, proving that this model had the same spatial cognitive ability as animals. Figure 10 is the path planning result of the T-maze experiment.
Because the maze structure was relatively simple, there was no large gap in the planned path length, but there was a significant gap in its convergence property. After 20 episodes of training, the path length of this model was significantly reduced, while the path length without the forward sweep method was still divergent. As can be seen from Figure 11 below, the forward sweep mechanism could improve the convergence property of robot learning and thus improve navigation efficiency.

4.2.2. Characteristics Similar to Rat VTE Mechanism

In this group of experiments, the robot used the model in this paper, taking goal point 1 as the destination in the first half of the training, and switching the end point to goal point 2 in the second half of the training. Figure 12 shows the length of the forward sweep of the robot and the decision certainty during the training process. It can be seen from Figure 12 that, in the first half of the training process, the certainty was constantly improved, the sweep length was continuously reduced, and the action selection of the robot tended to be stable. When the goal point changed, VTE occurred again, the decision certainty decreased, and the sweep length increased.
The above phenomenon is basically consistent with the characteristics of the rat VTE mechanism reported by researchers. Bett et al. [43] trained rats to seek food rewards in a three-choice maze with a fork in the road and tested them under sham surgery (sham) or hippocampal lesion (lesion) conditions. The results in Figure 13 show that, as training progressed, VTE activity in rats with a normal hippocampus gradually decreased, while VTE activity in rats with a damaged hippocampus remained at a high level. This physiological experiment proved that, in the spatial memory task, rats with a damaged hippocampus showed similar levels of VTE before and after recognizing the reward position. By contrast, sham-operated rats showed more VTE behavior before finding the reward position than after finding it.
The model thus showed phenomena similar to those observed in neurophysiology, and the above experiments preliminarily confirmed its biological rationality. As rats gain more experience in the environment, the frequency of VTE gradually decreases and the hippocampal sweep shifts toward the final direction of movement. After the decision certainty exceeded the threshold, VTE no longer occurred, the rats’ moving path became stable, and place cells in the hippocampus no longer generated a forward firing sequence; correspondingly, the frequency and depth of the forward sweep within the robot were reduced.

4.3. Path Planning in Complex Maze Conditions: Testing the Navigation Capability of Our Model

We tested the model in an environment with static and dynamic obstacles. The main purpose of the experiment was to allow the robot to explore and find the best path to the destination without prior knowledge of the environment, while avoiding all obstacles.

4.3.1. Static Obstacle Environment

First, we conducted experiments in an environment with static obstacles. Figure 14 shows the paths planned by the robot after 1500 episodes of training. Each algorithm was repeated 10 times and the average value was calculated. By comparison, it can be seen that the path planned by our model was straighter, while the paths planned by SARSA and Dyna-Q had many twists and turns and sometimes deviated from the correct direction.
Figure 15 shows the learning curves of the four methods. It can be seen that the convergence property of our model was better and the average path length after convergence was lower, while the other three methods still fluctuated and tended to diverge from time to time.
According to statistics, the average path length of our model was 78.26 units, while SARSA and Dyna-Q both exceeded 88 units, which showed that this model had significantly improved the learning speed and environmental cognitive efficiency of the robot. Table 3 shows the length of the robot’s path.
Figure 16 and Figure 17 show the changes in forward sweep length and decision certainty in the training process of this model. It can be seen that with the continuous improvement of the decision certainty of the robot, the action selection tended to be stable and the sweep depth also gradually decreased.

4.3.2. Dynamic Obstacle Environment

Next, we added three dynamic obstacles to the environment, all of which were 3 × 3 black grids in size. Each dynamic obstacle moved back and forth along the track, as shown by the gray line in Figure 18, and the movement speed was constant at half the robot’s speed, i.e., the obstacles moved half a grid per unit step.
Figure 18 shows the paths planned by the methods after 1500 episodes. It can be seen that the robot could effectively realize environmental cognition and path planning in a dynamic environment and successfully avoided obstacles. The path planned by our model was smoother, while SARSA and Dyna-Q took detours, resulting in redundant paths.
In terms of learning efficiency, Figure 19 shows that the convergence speed of our model still had obvious advantages over the other methods. The average path length of our model was shorter, its convergence property was better, and it remained stable at the later stage of training, while the other methods still had difficulty converging at the later stage and fluctuated considerably. Table 4 shows the length of the robot’s path.
Figure 20 shows the later stage of robot training. It can be seen that with the progress of training, the robot could successfully bypass dynamic obstacles and move toward the destination.
Because the dynamic obstacles moved to different positions in each training episode, the convergence speed of the algorithm was slower than in the static environment. Whenever the robot encountered a dynamic obstacle, it might choose different actions at the same position than when no obstacle was present. Accordingly, due to the increased difficulty of spatial cognition and path planning, the frequency of VTE was higher, the length of the forward sweep was longer, and the decision certainty fluctuated more, as shown in Figure 21 and Figure 22. Nevertheless, the sweep depth still decreased and the decision certainty still improved over training, indicating that the algorithm can adapt to dynamic environments.

5. Discussion

The experimental results showed that this model can improve the efficiency of action selection and learning, improve robot performance in environmental cognitive tasks, and is superior to the traditional model-free and model-based RL algorithms.

5.1. The Significance of the VTE-Inspired Mechanism in Our Algorithm

In the T-maze experiment, this model reproduced key features of the VTE mechanism of rats and other mammals, consistent with experimental results from physiological research. At the beginning of maze training, rat hippocampal activity is strong and the VTE process is more pronounced: rats frequently simulate possible future paths in the brain. As they become familiar with the environment, the role of VTE weakens and the choice of action tends to become fixed. Correspondingly, in our action selection model, the hippocampal forward sweep and decision certainty assessment had a strong effect when decisions were uncertain at the initial stage of the robot’s exploration: the sweep length was longer and the robot performed simulated behavior similar to that of rats, searching possible future paths to improve navigation. With continued training, the decision certainty gradually increased, the role of VTE weakened, the sweep length of the model gradually decreased, and the robot’s action selection became stable. These experimental phenomena demonstrate the biological rationality and the advantages of introducing the forward prediction mechanism and decision certainty assessment into traditional RL algorithms.
Current research to improve the Dyna-Q algorithm is rarely performed from the perspective of brain-like computing. For example, Pei et al. [14] improved Dyna-Q by incorporating heuristic search strategies, a simulated annealing mechanism, and a reactive navigation principle into the algorithm. Some neurophysiological studies, such as Bett’s work [43], have focused on verifying the characteristics of the VTE phenomenon in rats and the activities of brain areas such as the hippocampus through biological experiments. Some brain-inspired models, such as those by Khamassi [35] and Stoianov [37], also focused on mathematical modeling to reproduce and explain the possible causes of rat physiological phenomena. However, they did not apply it further to RL and robot path planning. In this paper, we combine the neural mechanism of rats with the RL method, not only verifying the rationality of current biological research on VTE by reproducing it on agents but also providing a possible application in robot navigation with the VTE mechanism.

5.2. Improved RL Algorithm Achieves Better Performance in Path Planning Tasks

Traditional MFRL methods, such as SARSA, learn through rewards obtained from direct interaction with the external environment, which results in high computational cost and slow convergence. Our experimental results show that SARSA’s performance in navigation tasks was significantly poorer than that of the two model-based algorithms. Navigation with SARSA was inefficient and led to many meaningless explorations of the environment. When SARSA was applied in a large maze, where rewards are sparser, both its training time and its path length increased markedly. In addition, any environmental change in the dynamic maze caused SARSA to diverge again, which was fully illustrated by the experimental results.
Dyna-Q, as a model-based algorithm, stores the experience obtained from direct interaction with the environment in the robot’s environment model based on the model-free learning framework and accelerates the learning speed and improves the accuracy of decision making by learning the model. This can effectively reduce the number of interactions with the environment and improve navigation efficiency. The experimental results showed that Dyna-Q’s convergence speed and frequency of post-convergence fluctuations were better than those of the model-free SARSA algorithm. However, even Dyna-Q still performed poorly in dynamic obstacle environments. The dilemma of balancing exploration and exploitation makes both SARSA and Dyna-Q prone to local optimization, resulting in long planned paths and poor navigation performance, especially in complex environments.
By contrast, the algorithm we present alternately attempted potential paths virtually and dynamically adjusted the sweep length according to the decision certainty, which allowed the robot to fully consider the possible state in the future and the effect of each action when making decisions, helping to comprehensively evaluate all action options. At the same time, the forward sweep mechanism further reduced the number of interactions with the real environment, making full use of the experience gained and reducing the cost of the navigation process.
In addition to the forward sweep mechanism, decision certainty assessment also plays an important role in our algorithm based on the experimental results. In our algorithm, the certainty of forward sweep increased with training, and contextual preference for specific goal positions was formed through learning, which was similar to spatial cognition of animals. Accordingly, after having enough confidence in the decision, the sweep length also decreased. This dynamic mechanism allowed the robot to fully explore the environment and better learn action selection. And the improved ε-greedy method, similar to the animal VTE mechanism, enabled the robot to explore the environment in the early stage of training and to use the existing experience in the later stage, which effectively resolved the exploration and exploitation dilemma.

6. Conclusions

This paper proposes an improved Dyna-Q algorithm inspired by the VTE behavior of rats and applies it to robot navigation. By imitating the forward sweep mechanism in the hippocampus and decision certainty measurement in the striatum, the algorithm can make robots navigate autonomously, effectively speed up convergence and learning, and solve the dilemma of balancing exploration and exploitation compared with the SARSA and Dyna-Q algorithms.
We carried out a series of simulated experiments to validate the proposed algorithm. In the T-maze experiment, the algorithm made the agent behave similarly to the VTE behavior of rats, which supports the biological plausibility of our algorithm. In addition, we tested the navigation performance of our algorithm in static and dynamic environments and compared it with other RL algorithms, including SARSA and Dyna-Q. The experimental results showed that our algorithm outperformed the compared methods in both learning speed and path length. In short, our work not only provides further evidence for neurophysiological research by reproducing biological findings in robots but also improves the Dyna-Q algorithm and applies it to path planning.
However, there is still room for further improvement. Firstly, all experiments in this paper were simulated, which may weaken the practical value of the proposed algorithm and limits research on some topics, such as improving energy efficiency during navigation. In addition, the computation of the forward sweep depth and decision certainty slows down action selection. These shortcomings will be the focus of our future work.

Author Contributions

Conceptualization, J.H., Z.Z. and X.R.; methodology, Z.Z.; software, Z.Z.; validation, J.H., Z.Z. and X.R.; formal analysis, J.H., Z.Z. and X.R.; investigation, J.H. and Z.Z.; writing—original draft preparation, Z.Z.; writing—review and editing, J.H. and Z.Z.; supervision, J.H. and X.R.; funding acquisition, J.H. and X.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program, grant number 2020YFB1005900.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cao, F.; Zhuang, Y.; Yan, F.; Qifeng, Y.; Wei, W. Research progress and prospect of long-term autonomous environment adaptation of mobile robots. J. Autom. 2020, 46, 205–221. [Google Scholar] [CrossRef]
  2. Wang, C.; Wang, L.; Qin, J.; Wu, Z.; Duan, L.; Li, Z.; Cao, M.; Ou, X.; Su, X.; Li, W.; et al. Path planning of automated guided vehicles based on improved A-Star algorithm. In Proceedings of the IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015; pp. 2071–2076. [Google Scholar] [CrossRef]
  3. Liu, L.S.; Lin, J.F.; Yao, J.X. Path Planning for Smart Car Based on Dijkstra Algorithm and Dynamic Window Approach. Wirel. Commun. Mob. Comput. 2021, 2021, 8881684. [Google Scholar] [CrossRef]
  4. Sutton, R.; Barto, A. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  5. Konar, A.; Chakraborty, I.G.; Singh, S.J.A.; Jain, L.C.; Nagar, A.K. Deterministic Improved Q-Learning for Path Planning of a Mobile Robot. Syst. Man Cybern. Syst. 2013, 43, 1141–1153. [Google Scholar] [CrossRef]
  6. Lv, L.; Zhang, S.; Ding, D.; Wang, Y. Path Planning via an Improved DQN-based Learning Policy. IEEE Access 2019, 7, 67319–67330. [Google Scholar] [CrossRef]
  7. Li, S.; Xin, X.; Lei, Z. Dynamic path planning of a mobile robot with improved Q-learning algorithm. In Proceedings of the IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015. [Google Scholar] [CrossRef]
  8. Das, P.K.; Behera, H.S.; Panigrahi, B.K. Intelligent-based multi-robot path planning inspired by improved classical Q-learning and improved particle swarm optimization with perturbed velocity. Eng. Sci. Technol. Int. J. 2015, 19, 651–669. [Google Scholar] [CrossRef]
  9. Soong, L.E.; Pauline, O.; Chun, C.K. Solving the optimal path planning of a mobile robot using improved Q-learning. Robot. Auton. Syst. 2019, 115, 143–161. [Google Scholar] [CrossRef]
  10. Low, E.S.; Ong, P.; Cheng, Y.L.; Omar, R. Modified Q-learning with distance metric and virtual target on path planning of mobile robot. Expert Syst. Appl. 2022, 199, 117191. [Google Scholar] [CrossRef]
  11. Sutton, R.S. Integrated Architecture for Learning, Planning, and Reacting Based on Approximating Dynamic Programing. In Proceedings of the 7th International Conference on Machine Learning, Austin, TX, USA, 21–23 June 1990. [Google Scholar]
  12. Al, D.S.; Wunsch, D. Heuristic dynamic programming for mobile robot path planning based on Dyna approach. In Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 3723–3730. [Google Scholar] [CrossRef]
  13. Vitolo, E.; Miguel, A.S.; Civera, J.; Mahulea, C. Performance Evaluation of the Dyna-Q algorithm for Robot Navigation. In Proceedings of the IEEE 14th International Conference on Automation Science and Engineering, Munich, Germany, 20–24 August 2018; pp. 322–327. [Google Scholar] [CrossRef]
  14. Pei, M.; An, H.; Liu, B.; Wang, C. An Improved Dyna-Q Algorithm for Mobile Robot Path Planning in Unknown Dynamic Environment. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 4415–4425. [Google Scholar] [CrossRef]
  15. Contreras, M.; Pelc, T.; Llofriu, M.; Weitzenfeld, A.; Fellous, J.-M. The ventral hippocampus is involved in multi-goal obstacle-rich spatial navigation. Hippocampus 2018, 28, 853–866. [Google Scholar] [CrossRef]
  16. Tolman, E.C. Cognitive maps in rats and men. Psychol. Rev. 1948, 55, 189–208. [Google Scholar] [CrossRef]
  17. Daw, N.D.; Niv, Y.; Dayan, P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 2005, 8, 1704–1711. [Google Scholar] [CrossRef]
  18. Redish, A.D. Vicarious trial and error. Nat. Rev. Neurosci. 2016, 17, 147–159. [Google Scholar] [CrossRef]
  19. Gilbert, D.; Wilson, T. Prospection: Experiencing the Future. Science 2007, 317, 1351–1354. [Google Scholar] [CrossRef] [PubMed]
  20. Gardner, R.S.; Uttaro, M.R.; Fleming, S.E.; Suarez, D.F.; Ascoli, G.A.; Dumas, T.C. A secondary working memory challenge preserves primary place strategies despite overtraining. Learn. Mem. 2013, 20, 648–656. [Google Scholar] [CrossRef] [PubMed]
  21. Regier, P.S.; Amemiya, S.; Redish, A.D. Hippocampus and subregions of the dorsal striatum respond differently to a behavioral strategy change on a spatial navigation task. J. Neurophysiol. 2015, 124, 1308. [Google Scholar] [CrossRef] [PubMed]
  22. Van, D.M.; Matthijs, A.A.; Redish, A.D. Covert Expectation-of-Reward in Rat Ventral Striatum at Decision Points. Front. Integr. Neurosci. 2009, 3, 1. [Google Scholar] [CrossRef]
  23. Wang, M.; Foster, D.J.; Pfeiffer, B.E. Alternating sequences of future and past behavior encoded within hippocampal theta oscillations. Science 2020, 370, 247. [Google Scholar] [CrossRef] [PubMed]
  24. O’Keefe, J.; Recce, M.L. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 1993, 3, 317–330. [Google Scholar] [CrossRef]
  25. Wimmer, G.E.; Liu, Y.; Vehar, N.; Behrens, T.E.; Dolan, R.J. Episodic memory retrieval success is associated with rapid replay of episode content. Nat. Neurosci. 2020, 23, 1025. [Google Scholar] [CrossRef]
  26. Bolding, K.A.; Ferbinteanu, J.; Fox, S.E.; Muller, R.U. Place cell firing cannot support navigation without intact septal circuits. Cold Spring Harb. Lab. 2018, 30, 175–191. [Google Scholar] [CrossRef]
  27. Drieu, C.; Todorova, R.; Zugaro, M. Nested sequences of hippocampal assemblies during behavior support subsequent sleep replay. Science 2018, 362, 675–679. [Google Scholar] [CrossRef]
  28. Foster, D.J.; Wilson, M.A. Hippocampal theta sequences. Hippocampus 2010, 17, 1093–1099. [Google Scholar] [CrossRef]
  29. Bett, D.; Murdoch, L.H.; Wood, E.R.; Dudchenko, P.A. Hippocampus, delay discounting, and vicarious trial-and-error. Hippocampus 2015, 25, 643–654. [Google Scholar] [CrossRef] [PubMed]
  30. Donahue, C.H.; Liu, M.; Kreitzer, A.C. Distinct value encoding in striatal direct and indirect pathways during adaptive learning. BioRxiv 2018. [Google Scholar] [CrossRef]
  31. Schultz, W.; Apicella, P.; Scarnati, E.; Ljungberg, T. Neuronal activity in monkey ventral striatum related to the expectation of reward. J. Neurosci. 1992, 12, 4595–4610. [Google Scholar] [CrossRef] [PubMed]
  32. Pennartz, C.; Ito, R.; Verschure, P.; Battaglia, F.P.; Robbins, T.W. The hippocampal-striatal axis in learning, prediction and goal-directed behavior. Trends Neurosci. 2011, 34, 548–559. [Google Scholar] [CrossRef] [PubMed]
  33. Meer, M.; Johnson, A.; Schmitzer-Torbert, N.C.; Redish, A.D. Triple dissociation of information processing in dorsal striatum, ventral striatum, and hippocampus on a learned spatial decision task. Neuron 2010, 67, 25–32. [Google Scholar] [CrossRef]
  34. Stott, J.J.; Redish, A.D. A functional difference in information processing between orbitofrontal cortex and ventral striatum during decision-making behaviour. Philos. Trans. R. Soc. Lond. 2014, 369, 315–318. [Google Scholar] [CrossRef]
  35. Khamassi, M.; Girard, B. Modeling awake hippocampal reactivations with model-based bidirectional search. Biol. Cybern. 2020, 114, 231–248. [Google Scholar] [CrossRef]
  36. Massi, E.; Barthélemy, J.; Mailly, J.; Dromnelle, R.; Canitrot, J.; Poniatowski, E.; Girard, B.; Khamassi, M. Model-Based and Model-Free Replay Mechanisms for Reinforcement Learning in Neurorobotics. Front. Neurorobotics 2022, 16, 864380. [Google Scholar] [CrossRef]
  37. Stoianov, I.P.; Pennartz, C.M.A.; Lansink, C.S.; Pezzulo, G. Model-based spatial navigation in the hippocampus-ventral striatum circuit: A computational analysis. PLoS Comput. Biol. 2018, 14, e1006316. [Google Scholar] [CrossRef] [PubMed]
  38. Chai, J.; Ruan, X.; Huang, J. A Possible Explanation for the Generation of Habit in Navigation: A Striatal Behavioral Learning Model. Cogn. Comput. 2022, 14, 1189–1210. [Google Scholar] [CrossRef]
  39. Johnson, A.; Redish, A.D. Neural Ensembles in CA3 Transiently Encode Paths Forward of the Animal at a Decision Point. J. Neurosci. Off. J. Soc. Neurosci. 2007, 27, 12176–12189. [Google Scholar] [CrossRef]
  40. Huang, J.; Yang, H.Y.; Ruan, X.G.; Yu, N.G.; Zuo, G.Y.; Liu, H.M. A Spatial Cognitive Model that Integrates the Effects of Endogenous and Exogenous Information on the Hippocampus and Striatum. Int. J. Autom. Comput. Engl. 2021, 18, 632–644. [Google Scholar] [CrossRef]
  41. Papale, A.E.; Stott, J.J.; Powell, N.J.; Regier, P.S.; Redish, A.D. Interactions between deliberation and delay-discounting in rats. Cogn. Affect. Behav. Neurosci. 2012, 12, 513–526. [Google Scholar] [CrossRef]
  42. Smith, K.S.; Graybiel, A.M. A Dual Operator View of Habitual Behavior Reflecting Cortical and Striatal Dynamics. Neuron 2013, 79, 361–374. [Google Scholar] [CrossRef]
  43. Bett, D.; Allison, E.; Murdoch, L.H.; Kaefer, K.; Wood, E.R.; Dudchenko, P.A. The neural substrates of deliberative decision making: Contrasting effects of hippocampus lesions on performance and vicarious trial-and-error behavior in a spatial memory task and a visual discrimination task. Front. Behav. Neurosci. 2012, 6, 70. [Google Scholar] [CrossRef]
Figure 1. Vicarious Trial and Error [18]. In Tolman’s view, VTE is a prospective imagination of the future and fundamentally a behavioral observation of pause and reorientation. VTE reflects forward imagination and an assessment of the future. The blue line shows that rats pause and deliberate when they find it difficult to decide at a choice point. The red line shows the behavior without the VTE mechanism: the rats select only one track at the choice point and continue along that track.
Figure 2. Three stages of the VTE process [18]. These are deliberation, planning, and automation, respectively. (a) In the first stage, rats have a preliminary understanding of the structure of the environment, but need to imagine different schemes indirectly to make a final decision. (b) In the second stage, rats are familiar with the structure of the environment and have a relatively definite behavior plan, but they are still in a deliberate state. They just keep exploring one option at a time to make sure it is the option they want. (c) In the third stage, automation, rats will no longer virtually search for possible tracks but will confidently execute a certain sequence of actions.
Figure 3. Forward prediction of the hippocampus [18]. Researchers trained rats to find rewards (cheese) in an environment with a fork in the road. (a) In the deliberation phase of VTE, sweeps are conducted in different directions to simulate as many paths as possible. In the early stages of training, due to a lack of confidence in decision making, place cells in the hippocampus of rats are activated in turn, simulating possible future spatial trajectories. The blue and yellow tracks represent hippocampal activation sequences in different directions. (b) In the second stage, the rat’s decision certainty increases, the hippocampal sweep gradually weakens, and the sweep tends toward the goal. (c) In the third stage, the rat’s behavior tends toward automation and advances along a fixed sequence of actions; the hippocampal sweep becomes shorter and is confined mainly to the determined direction.
Figure 4. Dyna architecture. This architecture combines model-free methods with a virtual world model.
Figure 5. The overall framework of the model. Unlike the traditional RL algorithm, which directly selects the action of a specific state based on the Q table, this model adds forward sweep and decision certainty assessment.
Figure 6. Forward sweep schematic. The sweep is performed on the robot’s internal environment map, with a maximum sweep depth of 5; the obstacles shown have already been detected by the robot. The robot sweeps in different directions: directions 1, 2, and 3 move toward the target and are represented by blue, orange, and green circles, respectively; directions 4, 7, and 8 are blocked by obstacles, so those sweeps stop, represented by white circles; directions 5 and 6 lead away from the goal, so their accumulated Q values are mostly negative, represented by black circles.
Figure 7. Forward sweep and action selection mechanism. The sweep is performed inside the robot, starting from the current state s_t, simulating the new states ŝ_j^i reached after taking various actions and accumulating the Q value of each sweep direction during the sweep process. If the certainty at a certain sweep depth exceeds the threshold, the sweep ends and the sweep direction with the maximum cumulative Q value is selected as the final action output. The blue oval dashed box represents the forward sweep process, where the blue, orange, and green circles represent sweeps in different directions, and the blue square dashed box is the process of finding the best action. The orange oval dashed box is the process of calculating decision certainty, where oval boxes of different colors distinguish different variables.
Figure 8. Robot action space. (a) The eight actions of the robot; (b) The green arrows indicate the actions that the robot can take after encountering obstacles, and the red arrows indicate the actions that cannot be taken.
Figure 9. T-maze. The red circle is the starting point and the two green circles are the goal points, which are applied to different experimental conditions, respectively.
Figure 10. Results of the T-maze experiment. (a) The route planned by forward sweep; (b) The route planned by the non-sweep method. The red circles in the figure represent the starting point, while the green circles represent the ending point, and the blue lines represent the navigation path.
Figure 11. Learning curve of forward sweep and non-sweep.
Figure 12. A similar phenomenon to rat VTE was observed in this model. (a) Decision certainty; (b) Forward sweep length.
Figure 13. Phenomenon of rat VTE mechanism observed in neurophysiology [43]. (a) Rats are trained to find food rewards in the three-choice maze, and the black circle is the container for placing food; (b) Average VTE in the three-choice test in the 16 episodes.
Figure 14. The paths planned by each method in the static environment.
Figure 15. Comparison of learning curves of four methods in a static environment.
Figure 16. Changes in the forward sweep length during the learning process in the static environment.
Figure 17. Changes in the forward sweep decision certainty during the learning process in the static environment.
Figure 18. Path planned by each method in a dynamic environment. The red, blue, and green lines represent the paths planned by SARSA, our algorithm, and Dyna-Q, respectively, and the dashed gray lines represent the motion trajectories of the three obstacles.
Figure 19. Comparison of learning curves of four methods in a dynamic environment.
Figure 20. The trajectory of the robot in a test using this model, represented by blue lines. (a) The robot moves 30 steps; (b) The robot moves 74 steps. The black squares in the figure represent obstacles.
Figure 21. Changes in the forward sweep decision certainty in the dynamic environment.
Figure 22. Changes in the forward sweep length in the dynamic environment.
Table 1. Model parameter description.

Parameter | Meaning
s_t | Current robot state
ŝ_j^i | Potential state reached during the sweep
i | The ith sweep direction, i ∈ [1, n]
j | Current sweep depth, 1 ≤ j ≤ Max_depth
Q_sweep^i | Q value accumulated in the ith direction during the sweep
n | The number of actions that can be selected by the robot
discount_j | The discount factor, which decreases as the sweep depth increases
SweepCertThr | Decision certainty threshold
Table 2. Parameter Configuration of Simulations.

Parameter | Meaning | Value
α | Learning rate | 0.1
β | Softmax factor | 0.5
γ | Discount factor | 0.95
ε | Greedy factor | 0.2
σ | Standard deviation | 0.35
SweepCertThr | Decision certainty threshold | 0.5
Max_depth | Maximum sweeping depth | 5
Max_Step | Max steps | 2000
Max_Episode | Max episodes | 1500
N_Dyna | Planning steps | 50
n | Number of actions | 8
r_hold | Initial reward | −0.01
r_g | Reward for reaching the goal | 1
r_near | Reward for approaching the goal | 0.1
r_neg | Punishment for hitting the wall | −0.02
Table 3. Average learning results in a static obstacle environment.

Path Length (unit) | SARSA | Dyna-Q | Improved Dyna-Q (None-Certainty) | Improved Dyna-Q
min | 86.14 | 86.90 | 86.07 | 76.14
max | 90.9 | 91.31 | 90.14 | 80.14
mean | 88.59 | 89.70 | 88.15 | 78.26
Table 4. Average learning results in a dynamic obstacle environment.

Path Length (unit) | SARSA | Dyna-Q | Improved Dyna-Q (None-Certainty) | Improved Dyna-Q
min | 81.90 | 82.73 | 82.97 | 69.11
max | 94.97 | 93.80 | 88.46 | 78.49
mean | 87.66 | 88.62 | 85.06 | 75.59