Article

Brain-Inspired Navigation Model Based on the Distribution of Polarized Sky-Light

Jinshan Li, Jinkui Chu, Ran Zhang and Kun Tong

1 Key Laboratory of Precision and Non-Traditional Machining Technology of Ministry of Education, Dalian University of Technology, Dalian 116024, China
2 Key Laboratory for Micro/Nano Technology and System of Liaoning Province, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(11), 1028; https://doi.org/10.3390/machines10111028
Submission received: 5 October 2022 / Revised: 2 November 2022 / Accepted: 3 November 2022 / Published: 4 November 2022
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

This paper proposes a brain-inspired navigation model based on absolute heading for the autonomous navigation of unmanned platforms. The proposed model combines the desert ant's strategy of acquiring an absolute heading from the sky with a brain-inspired navigation system, which is closer to the navigation mechanism of migratory animals. First, a brain-inspired grid cell network model and an absolute heading-based head-direction cell network model were constructed based on the continuous attractor network (CAN). Then, an absolute heading-based environmental visual template was constructed using the line scan intensity distribution curve, and the path integration error was corrected using this template. Finally, a topological cognitive node was constructed from the grid cells, the head direction cells, the environmental visual template, the absolute heading information, and the position information. Numerous topological nodes form the absolute heading-based topological map. The model is a topological navigation method not limited to a strict geometric space scale, and its position and absolute heading are decoupled. The experimental results show that the proposed model is superior to comparable methods in the accuracy of visual template recognition as well as in the accuracy and topological consistency of the constructed environment topology map.

1. Introduction

Autonomous robots can replace humans in repetitive, high-risk, or heavy-duty tasks, effectively improving living standards and the efficiency of social work, and they play an important role in promoting social development. The navigation system is one of the core components that allows a robot to complete its task smoothly and reliably [1]. At present, the most commonly used navigation system is inertial navigation, which is highly autonomous, strongly resistant to interference, and provides complete navigation parameters; however, inertial navigation is expensive, and its error diverges rapidly over time [2]. Satellite navigation has a low output frequency, usually 10 Hz, whereas a system with a control frequency of 100–200 Hz ideally requires an output frequency of 20–40 Hz; otherwise, the control performance suffers, and if the satellite antenna is blocked, the signal is interrupted [3]. In addition, simultaneous localization and mapping (SLAM) based on visual odometry (VO) [4,5] or visual–inertial odometry (VIO) [6,7] is an important approach to the autonomous navigation of unmanned platforms, but common SLAM methods suffer from a large amount of computation and low robustness in complex and large scenes [8]. Therefore, exploring an autonomous and reliable navigation method suitable for unmanned platforms in large scenes has become an urgent problem.
Researchers have found that many creatures in nature have superior navigation skills. Although organisms possess neither chips with powerful computing power nor high-precision navigation sensors, they can complete migrations of hundreds or even thousands of kilometers through various complex environments [9,10]. For example, monarch butterflies in North America migrate more than 4000 km each year from Canada to Mexico [11], and the hawksbill turtle (Eretmochelys imbricata) migrates more than 2000 km from its feeding grounds to its breeding grounds [12]. These navigational abilities have led researchers to study the mechanisms of biological navigation.
Mann and colleagues found that the navigation method of homing pigeons differs from conventional navigation methods [13] and has two characteristics: it is a node-based method with strong topology, and absolute orientation information is essential to the pigeon. Mouritsen reviewed the orientation and navigation mechanisms of long-distance migratory animals and proposed that their navigation consists of three stages: the long-distance phase, the narrowing-in or homing phase, and the pinpointing-the-goal phase [14]. Throughout these three stages, the absolute heading is always the most important information, and navigation is completed by comprehensively using various perceptual cues. If the migratory-animal navigation mechanism were applied to the autonomous navigation of unmanned platforms, their performance would improve. Neuroscientists have extensively studied the navigation mechanisms of many organisms, from mammals such as rats to non-mammals such as fish [15]. Among them, the navigation mechanism of rats is similar to that of most mammals, including humans; therefore, we can draw on rat brain navigation models.
O'Keefe and the Mosers, winners of the 2014 Nobel Prize in Physiology or Medicine, discovered navigation-related cells in the rodent brain, including place cells [16], head direction cells [17], grid cells [18], speed cells [19], and boundary cells [20]. Grid cells show strong topological properties in their activation and discharge [21], and it has been shown that, similar to pigeons, rodents also use topological maps during navigation.
Based on the characteristics of spatial navigation cells, researchers have proposed different navigation cell models and methods for constructing spatial topological maps. Arleo proposed a navigation model based on head direction cells and place cells, which realizes robot topological map construction, localization, and loop closure detection in small environments [22]. Gaussier imitated the mechanism of place cells in rats by linking environmental visual information with location information to express the spatial environment, and created a topological map of the environment by establishing connectivity between the place cells [23]. Ramirez proposed a place cell navigation model based on the neural mechanism of the hippocampus during rat navigation and applied it to the control of mobile robots, but it is only suitable for environments with fixed structures [24]. Inspired by place cells, head direction cells, and grid cells, Erdem used an oscillatory interference model to create grid cells and proposed a bionic navigation model based on forward linear predictive trajectory detection [25]. Tejera proposed a biomimetic localization model based on grid cells and place cells, which provides long-term environmental localization through place cells [26]. Based on the discharge characteristics of rat navigation cells, Cong and colleagues proposed a mathematical model based on episodic memory to model the environment [27]. In recent years, researchers have proposed navigation models based on various neural network architectures [28,29,30]. Schneider proposed a biologically inspired cognitive architecture that uses a local navigation map as its main data element; each local map is matched with the closest maps to build the navigation map, and biological feedback pathways help exploration and the generation of cause-and-effect behavior [31]. Milford proposed the two-dimensional bionic navigation model RatSLAM [32], which has many advantages: it requires little computation and storage, can build large-scale environmental maps, and the entire system is lightweight and low-cost. Yu proposed NeuroSLAM to build three-dimensional environment topology maps [33]. This model greatly improves on RatSLAM and, to a certain extent, extends the rat brain navigation model toward a migratory-animal navigation model, which is more suitable for the navigation of unmanned platforms.
However, unlike migratory-animal navigation mechanisms, NeuroSLAM does not use the most important cue: absolute heading. The main reason rats do not need an absolute heading is that their foraging trips cover only a few kilometers, whereas the journeys of migratory animals span hundreds or even tens of thousands of kilometers [12]; rats can therefore complete their navigation tasks with relative headings alone. If the absolute heading is applied to brain-inspired SLAM, the robustness of the system in large scenes and the topological consistency of the resulting map can both be improved. However, this requires an absolute heading source that is autonomous, reliable, and strongly resistant to interference.
Other creatures in nature provide a solution for stably obtaining absolute heading information. Researchers found that desert ants have a unique compound eye structure that enables them to perceive the polarization distribution of the sky and thereby obtain absolute heading information [34]. Based on the compound eye structure of desert ants, researchers have developed various types of polarized light sensors [35,36,37,38], and in recent years they have proposed a variety of orientation, attitude, and positioning methods based on polarized sky-light sensors [39,40,41,42]. In terms of practical application, Lambrinos designed a polarization-sensitive unit and applied it to the navigation control of ground robots [35]; the experiments proved the feasibility of bionic polarized sky-light navigation. Following the polarization-sensitive mechanism of insects, Chu analyzed a polarization-sensitive angle measurement model, designed a six-channel photoelectric polarized sky-light sensor, and applied it to the navigation control of ground mobile robots [36]. Zhi proposed an attitude measurement method based on an inertial/GPS/polarized sky-light sensor and conducted a flight experiment [39]. Dupeyroux designed a polarized sky-light sensor in the ultraviolet band and applied it to the autonomous navigation of hexapod robots [40], and in 2022, de Croon and colleagues argued that insect-inspired artificial intelligence is an important direction for the development of small autonomous robots [43].
This paper proposes a brain-inspired navigation model based on absolute heading for the autonomous navigation of unmanned platforms in large scenes. Inspired by the three-stage navigation of migratory animals, the proposed model combines the desert ant's strategy of acquiring an absolute heading with a brain-inspired SLAM system, making it closer to the navigation mechanism of migratory animals. The proposed model follows the three components of NeuroSLAM, but each component introduces absolute heading information. First, a brain-inspired grid cell network model and a head-direction cell network model with absolute heading were constructed based on the continuous attractor network; as a result, the position and heading of this model are decoupled. Then, an absolute heading-based environment vision template is constructed using the line scan intensity distribution curve, and the path integration error is corrected using this template. Finally, a topological cognitive node is constructed from the grid cells, the head direction cells, the environmental visual template, the absolute heading information, and the position information; numerous topological nodes form the absolute heading-based topological map. The experimental results show that, compared with NeuroSLAM, the proposed method has higher visual template recognition accuracy and faster recognition speed, and the constructed environment topology map has higher mapping accuracy and topological consistency.
The rest of this paper is organized as follows. Section 2 describes the principle of bionic polarized sky-light navigation. Section 3 proposes the brain-inspired navigation model based on absolute heading. Section 4 verifies the performance of the proposed model through outdoor experiments. Finally, Section 5 concludes the paper.

2. Principle of Bionic Polarized Sky-light Navigation

Natural sunlight arriving at the Earth is unpolarized, but as it passes through the Earth's atmosphere, it is scattered and absorbed by air molecules and aerosol particles, producing polarization; the entire sky thus exhibits a regular and stable skylight polarization distribution [44]. In clear, cloudless weather, the skylight polarization distribution can be described by Rayleigh scattering theory [44], as shown in Figure 1. Like the gravitational and geomagnetic fields, the skylight polarization distribution is a global field; therefore, it can be used for navigation.
In the skylight distribution, the maximum polarization direction vector corresponding to a point in the sky is called the E-vector, denoted $\mathbf{e}^n$, as shown in Figure 2. According to Rayleigh scattering theory, the E-vector at an observed point is perpendicular to the plane formed by the Sun, the observer, and the observed point; the E-vector can therefore be converted into an absolute heading angle.
A polarized sky-light sensor developed from the insect compound eye structure can measure the E-vector of such points. In this paper, a lens-type single-point polarized sky-light sensor independently developed by our group is adopted, as shown in Figure 3; its structural design and principle can be found in [37]. The sensor achieves a dynamic outdoor accuracy of 0.5° under a clear sky.
In this paper, the attitude angles are defined as roll ($\theta$), pitch ($\phi$), and yaw ($\psi$); the output of the polarized sky-light sensor is defined as $\alpha$; the navigation frame (n-frame) is north-east-down, and the body frame (b-frame) is front-right-down. The yaw angle is the absolute heading angle, which is crucial for carrier navigation. The absolute heading can then be calculated as follows.
As shown in Figure 2, the E-vector in the b-frame can be represented by:

$\mathbf{e}^b = \begin{bmatrix} \cos\alpha & \sin\alpha & 0 \end{bmatrix}^T$ (1)
The E-vector in the b-frame can be converted to the n-frame by the direction cosine matrix:

$\mathbf{e}^n = C_b^n \mathbf{e}^b$ (2)
Expanded, the direction cosine matrix is:

$C_b^n = \begin{bmatrix} \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & \sin\psi\cos\theta & \sin\phi\cos\psi - \cos\phi\sin\theta\sin\psi \\ -\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\psi\cos\theta & -\sin\phi\sin\psi - \cos\phi\sin\theta\cos\psi \\ -\sin\phi\cos\theta & \sin\theta & \cos\phi\cos\theta \end{bmatrix}$ (3)
The solar vector in the n-frame, $\mathbf{OS}^n$, can be expressed as:

$\mathbf{OS}^n = \begin{bmatrix} \cos h_s\cos f_s & \cos h_s\sin f_s & \sin h_s \end{bmatrix}^T$ (4)
where $h_s$ and $f_s$ represent the solar altitude and solar azimuth, respectively. Then, according to Rayleigh scattering theory:

$\mathbf{e}^n = \mathbf{OS}^n \times \mathbf{OP}^n$ (5)

where $\mathbf{OP}^n$ is the vector from the observer to the observed point in the n-frame.
Combining the above formulas, the absolute heading can be deduced as:

$\psi = \arcsin\left(C/\sqrt{A^2+B^2}\right) - \arctan\left(A/B\right) + f_s$ (6)

$\psi = \arcsin\left(C/\sqrt{A^2+B^2}\right) - \arctan\left(A/B\right) + f_s + \pi$ (7)

$A = \cot\alpha\cos\phi - \sin\theta\sin\phi, \quad B = \cos\theta, \quad C = \left(\cot\alpha\sin\phi + \sin\theta\cos\phi\right)\tan h_s$ (8)
During the experiments, the polarized sky-light sensor faces the zenith. If the horizontal attitude angles of the carrier are small, the observed point can be taken to be the zenith, and the absolute heading simplifies to:

$\psi = \alpha + f_s \pm \pi/2$ (9)
An actual example illustrates the process. Suppose the current roll and pitch of the carrier are both 0°, the local solar azimuth is 120°, and the output of the polarized sky-light sensor is 50°. Substituting these values into Equations (6) and (7) yields a current absolute heading of 260° or 80°; substituting them into Equation (9) also yields 260° or 80°. This ambiguity in the absolute heading can be resolved using the value from the previous moment.
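As a check on this example, the following minimal Python sketch computes the simplified heading of Equation (9) and resolves the ±90° ambiguity against the previous moment's heading; the function name and interface are illustrative assumptions, not the authors' implementation.

```python
def heading_from_polarization(alpha_deg, solar_azimuth_deg, prev_heading_deg):
    """Simplified absolute heading, Equation (9): psi = alpha + f_s +/- 90 deg.
    The +/- 90 deg ambiguity of the E-vector is resolved by choosing the
    candidate closest to the heading of the previous moment."""
    candidates = [(alpha_deg + solar_azimuth_deg + 90.0) % 360.0,
                  (alpha_deg + solar_azimuth_deg - 90.0) % 360.0]

    def ang_dist(a, b):
        # shortest angular distance on the circle, in degrees
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    return min(candidates, key=lambda c: ang_dist(c, prev_heading_deg))

# Worked example from the text: alpha = 50 deg, f_s = 120 deg
# -> candidates 260 deg and 80 deg; a previous heading near 255 deg picks 260.
print(heading_from_polarization(50.0, 120.0, prev_heading_deg=255.0))  # 260.0
```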
If the carrier tilts significantly, the polarized sky-light sensor can no longer be assumed to face the zenith, and the absolute heading must be obtained from Equations (6)-(8) using the measured attitude angles.

3. Brain-Inspired Navigation Model Based on Absolute Heading

The overall framework of the brain-inspired navigation model based on absolute heading is shown in Figure 4; it consists of three parts: grid cell and head direction cell model construction, environmental visual template construction, and environmental topological map construction. The system uses two autonomous, lightweight sensors: a polarized sky-light sensor, responsible for obtaining the absolute heading, and a binocular camera, responsible for obtaining environmental visual images. First, the system builds a grid cell network model and a head direction cell network model with absolute heading based on the continuous attractor network and uses them to perform path integration from the estimate of the carrier's self-motion. Then, an absolute heading-based environmental visual template is constructed using the line scan intensity distribution curve, and the path integration error is corrected using this template. Finally, a brain-inspired topological map based on absolute heading is constructed from the grid cell and head direction cell models, the environmental visual template, and the absolute heading and position of the carrier.
The continuous attractor network is a type of neural network that consists of an array of units with fixed-weighted excitatory and inhibitory connections. Different from most neural networks, it operates by updating the activity of each unit instead of changing the value of the weighted connections. As the carrier moves in an environment, the activation value of each unit in the CAN varies between 0 and 1 [32].
The algorithm pseudo code of the model is shown in Table 1. The following describes the three components of the system in detail.

3.1. Grid Cell and Head-Direction Cell Model Based on the Continuous Attractor Network

The grid cells and head direction cells form the path integrator of the navigation model and are an important part of the whole system: they perform path integration by integrating motion information with environmental cues. Referring to NeuroSLAM, we constructed a three-dimensional grid cell network $P^{gc}$ based on a continuous attractor network to represent the three-dimensional position $(x, y, z)$ of the carrier [33]. We used the absolute heading obtained by the polarized sky-light sensor to construct a two-dimensional head direction cell network $P^{hc}$ to represent the current heading of the carrier. The construction of the head direction cells is described below.
The head direction cells were constructed as a two-dimensional continuous attractor network over $(\alpha, h)$, where $\alpha$ is the absolute heading angle and $h$ is the current vertical height of the carrier; the head direction cells thus represent the absolute heading of the carrier at different heights. The update of head direction cell activation consists of two parts: the attractor dynamics and the integration of the heading angle and height. In addition, loop closure detection also updates the activation.
The attractor network dynamics include local excitatory connections from activated cells to surrounding neurons and global inhibitory connections to all cells. Together, these connections make the head direction cell network converge to a steady state, producing a cluster of highly activated neurons called an activity packet, whose center represents the absolute heading estimated by the network. First, the local excitation weight matrix $\varepsilon^{hc}_{u,v}$ is constructed:

$\varepsilon^{hc}_{u,v} = \dfrac{1}{\delta_\alpha\sqrt{2\pi}} e^{-u^2/2\delta_\alpha^2} \cdot \dfrac{1}{\delta_h\sqrt{2\pi}} e^{-v^2/2\delta_h^2}$ (10)
where $\delta_\alpha$ and $\delta_h$ are the variances of the two-dimensional Gaussian distribution, and $u$ and $v$ represent the distribution coefficients, obtained by:

$u = (\alpha - i) \bmod n_\alpha, \quad v = (h - j) \bmod n_h$ (11)
where $n_\alpha$ and $n_h$ are the two dimensions of the head direction cell network. The local excitatory connections then produce an activation change:

$\Delta P_{\alpha,h} = \sum_{i=1}^{n_\alpha}\sum_{j=1}^{n_h} \varepsilon^{hc}_{u,v}\, P_{i,j}$ (12)
To limit the continuous growth of cell activation, global inhibitory connections are established between all cells. With $\kappa$ the global inhibition constant, the final change in cell activation due to the internal attractor dynamics is:

$\Delta P_{\alpha,h} = \sum_{i=1}^{n_\alpha}\sum_{j=1}^{n_h} \varepsilon^{hc}_{u,v}\, P_{i,j} - \kappa$ (13)
To ensure that head direction cell activation is non-negative:

$P^{t+1}_{\alpha,h} = \max\left(P^{t}_{\alpha,h} + \Delta P_{\alpha,h},\ 0\right)$ (14)
Finally, to keep the total cell activation stable, the cells are normalized:

$P^{t+1}_{\alpha,h} = \dfrac{P^{t+1}_{\alpha,h}}{\sum_{i=1}^{n_\alpha}\sum_{j=1}^{n_h} P^{t+1}_{i,j}}$ (15)
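To make the attractor dynamics concrete, the following Python sketch performs one update step of Equations (10)-(15) on the head direction cell sheet. The FFT-based circular convolution and all parameter values are our illustrative choices, not the authors' code.

```python
import numpy as np

def can_update(P, delta_a, delta_h, kappa):
    """One attractor-dynamics step for the head direction cell sheet P of
    shape (n_alpha, n_h): wrapped Gaussian excitation (Equations (10)-(12)),
    global inhibition (13), rectification (14), and normalization (15)."""
    n_a, n_h = P.shape
    # Wrapped index offsets u, v as in Equation (11)
    u = np.minimum(np.arange(n_a), n_a - np.arange(n_a))
    v = np.minimum(np.arange(n_h), n_h - np.arange(n_h))
    wa = np.exp(-u**2 / (2 * delta_a**2)) / (delta_a * np.sqrt(2 * np.pi))
    wh = np.exp(-v**2 / (2 * delta_h**2)) / (delta_h * np.sqrt(2 * np.pi))
    eps = np.outer(wa, wh)                       # excitation kernel, Eq. (10)
    # Circular convolution of the activity with the kernel, Eq. (12)
    delta_P = np.real(np.fft.ifft2(np.fft.fft2(P) * np.fft.fft2(eps)))
    P = np.maximum(P + delta_P - kappa, 0.0)     # inhibit and rectify, (13)-(14)
    total = P.sum()
    return P / total if total > 0 else P         # normalization, Eq. (15)

P = np.zeros((36, 36)); P[10, 5] = 1.0           # 36 x 36 sheet as in Table 3
P = can_update(P, delta_a=2.0, delta_h=2.0, kappa=0.0002)
```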
Changes in the heading angle and height also change the head direction cell activation, moving the activity packet across the two-dimensional continuous attractor network. With the absolute heading $\alpha$ obtained from the polarized sky-light sensor, the change in head direction cell activity is:

$\Delta U^{t+1}_{\alpha,h} = \sum_{a=\delta_{\alpha 0}}^{\delta_{\alpha 0}+1}\ \sum_{b=\delta_{h 0}}^{\delta_{h 0}+1} \eta\, U^{t}_{(\alpha+a),(h+b)}$ (16)
where $\delta_{\alpha 0}$ and $\delta_{h 0}$ represent the integer parts of the changes in the heading angle and the height, and $\eta$ is the residual parameter. They are calculated as follows:

$\begin{bmatrix}\delta_{\alpha 0} \\ \delta_{h 0}\end{bmatrix} = \begin{bmatrix}\lfloor \alpha^{t+1} - \alpha^{t} \rfloor \\ \lfloor k_h v_h \rfloor\end{bmatrix}$ (17)

$\begin{bmatrix}\delta_{\alpha f} \\ \delta_{h f}\end{bmatrix} = \begin{bmatrix}\alpha^{t+1} - \alpha^{t} - \delta_{\alpha 0} \\ k_h v_h - \delta_{h 0}\end{bmatrix}$ (18)
where $v_h$ is the velocity in the height direction and $k_h$ the velocity coefficient in the height direction; $\delta_{\alpha f}$ and $\delta_{h f}$ represent the fractional parts of the changes in the heading angle and the height. The parameter $\eta$ is obtained by:

$\eta = g\left(\delta_{\alpha f},\ a - \delta_{\alpha 0}\right)\, g\left(\delta_{h f},\ b - \delta_{h 0}\right)$ (19)

$g(u, v) = \begin{cases}1 - u, & v = 0 \\ u, & v = 1\end{cases}$ (20)
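The fractional shift of Equations (16)-(20) amounts to a bilinear redistribution of the activity packet over the four neighbouring integer offsets. A sketch, again with illustrative names:

```python
import numpy as np

def shift_activity(P, d_alpha, d_h):
    """Path-integration shift, Equations (16)-(20): move the activity packet
    by a fractional offset (d_alpha, d_h) using integer rolls weighted by the
    residual eta = g(frac, a) * g(frac, b) of Equations (19)-(20)."""
    ia, ih = int(np.floor(d_alpha)), int(np.floor(d_h))  # integer parts, (17)
    fa, fh = d_alpha - ia, d_h - ih                      # fractional parts, (18)
    out = np.zeros_like(P)
    for a, wa in ((ia, 1.0 - fa), (ia + 1, fa)):         # g(u, 0) = 1 - u
        for b, wb in ((ih, 1.0 - fh), (ih + 1, fh)):     # g(u, 1) = u
            out += wa * wb * np.roll(np.roll(P, a, axis=0), b, axis=1)
    return out
```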

3.2. Vision Template Based on Absolute Heading

During the process of spatial exploration, rats process the environmental visual information through the optic nervous system to form a visual template corresponding to the geographic location so as to memorize their exploration path. With the help of the visual template, the rat can perceive the current position and judge whether it has reached the previously experienced scene.
In this paper, a line scan intensity distribution method based on absolute heading is proposed to construct the visual template. A binocular camera outputs the environmental visual image, and the line scan intensity distribution method processes the visual information, with reference to RatSLAM [32]; the absolute heading obtained by the polarized sky-light sensor adds a constraint to each visual template. The specific construction process is as follows.
First, the visual image is processed using patch normalization, which has been shown to improve the robustness of image recognition under illumination changes and is also used by RatSLAM. The intensity of a single pixel after patch normalization is:

$I'_{xy} = \dfrac{I_{xy} - \mu_{xy}}{\delta_{xy}}$ (21)
where $\mu_{xy}$ and $\delta_{xy}$ represent the mean and standard deviation of the $n$ pixels around pixel $(x, y)$, respectively.
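A plain-NumPy sketch of patch normalization (Equation (21)) follows; the window size and the epsilon guard are our illustrative choices.

```python
import numpy as np

def patch_normalize(img, patch=5):
    """Patch normalization, Equation (21): offset each pixel by the local
    mean and scale by the local standard deviation over a patch x patch
    neighbourhood (patch assumed odd)."""
    pad = patch // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    mu = win.mean(axis=(-1, -2))
    sigma = win.std(axis=(-1, -2)) + 1e-6   # guard against flat patches
    return (img - mu) / sigma
```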
The normalized image is then converted into a one-dimensional vector using the scanline intensity profile method, which represents two-dimensional image features by a one-dimensional vector; this significantly improves the calculation speed of the navigation system and reduces its storage requirements. Using the image scanline intensity profile vector $I$ and the absolute heading $\alpha$, a single vision template is defined as:

$V_i = \{ I_i,\ \alpha_i \}$ (22)
The visual template has two functions: comparing a new template against the template library for matching, and calculating the forward speed of the carrier. Their implementations are as follows.
Let $V_j$ and $V_k$ be two visual templates with corresponding line scan intensity profiles $I_j$ and $I_k$; the average intensity difference between the two templates is:

$f\left(s, I_j, I_k\right) = \dfrac{1}{\omega - |s|} \sum_{n=1}^{\omega - |s|} \left| I^{j}_{n+\max(s,0)} - I^{k}_{n-\min(s,0)} \right|$ (23)
where $s$ is the offset of the profiles, that is, the number of pixels by which the two images are shifted, and $\omega$ is the pixel width of the image. The smaller the average intensity difference, the more visually similar the two images are, and the more likely they depict the same scene. The offset $s$ is a variable. The traditional approach is to set $s \in [-\omega, \omega]$ and take the minimum of the $2\omega$ calculated values as the average intensity difference; this is prone to mismatches between similar but different scenes and requires many calculations. Our method instead uses the absolute headings of the visual templates to predict the offset $s$. With absolute headings $\alpha_j$ and $\alpha_k$ for the two templates, the angle deviation $\Delta\alpha$ is:

$\Delta\alpha = \alpha_j - \alpha_k$ (24)
If the camera captures two images from the same location with the azimuth shifted by $\Delta\alpha$, the translation distance $\Delta X$ between the centers of the two images can be approximated as:

$\Delta X = d \tan(\Delta\alpha)$ (25)
where $d$ is the depth at the image center. According to the camera imaging model [45], a point at lateral offset $X$ and depth $d$ projects to the image-plane coordinate $x$:

$x = f\dfrac{X}{d}$ (26)
Combining Equations (25) and (26), the displacement in the image plane is:

$\Delta x = f \tan(\Delta\alpha)$ (27)
The camera image-plane coordinates $(x, y)$ are converted to pixel coordinates $(u, v)$ by one scaling and one translation [45]:

$u = m x + c_x, \quad v = n y + c_y$ (28)
where $m$ and $n$ are scaling coefficients and $c_x$ and $c_y$ are translations, all constants of the camera. According to Equations (27) and (28), the pixel offset is:

$\Delta u = m\,\Delta x = m f \tan(\Delta\alpha)$ (29)
Therefore, from the absolute headings of the visual templates, the theoretical profile offset $s_0$ is:

$s_0 = m f \tan(\Delta\alpha) = \delta_s \tan(\Delta\alpha)$ (30)
where $\delta_s = m f$ is a constant related only to the camera, which can be calculated from the camera parameters and adjusted from experience. Since there are errors in practice, we let $s \in (s_0 - \rho,\ s_0 + \rho)$ and take the minimum as the average intensity difference:

$\Delta I_{j,k} = \min_{s \in (s_0-\rho,\ s_0+\rho)} f\left(s, I_j, I_k\right)$ (31)
When performing visual template matching, a threshold $I_0$ is set. When $\Delta I_{j,k} < I_0$, the two visual templates are considered successfully matched; when $\Delta I_{j,k} \ge I_0$, the two visual templates are considered to show different scenes, and if the current scene has not been experienced before, a new visual template is created.
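The heading-constrained matching of Equations (30)-(31) can be sketched as follows; the representation of templates as (profile, heading) pairs and the helper names are our assumptions.

```python
import numpy as np

def intensity_difference(Ij, Ik, s):
    """Average intensity difference at offset s, Equation (23)."""
    w = len(Ij)
    if s >= 0:
        a, b = Ij[s:w], Ik[0:w - s]
    else:
        a, b = Ij[0:w + s], Ik[-s:w]
    return np.abs(a - b).sum() / (w - abs(s))

def match_templates(Vj, Vk, delta_s, rho, I0):
    """Heading-constrained template matching, Equations (30)-(31): predict
    the offset s0 from the heading difference and search only a window of
    width 2*rho around it. Returns True if the templates match."""
    Ij, alpha_j = Vj
    Ik, alpha_k = Vk
    s0 = int(round(delta_s * np.tan(np.radians(alpha_j - alpha_k))))  # Eq. (30)
    diffs = [intensity_difference(Ij, Ik, s)
             for s in range(s0 - rho, s0 + rho + 1)]
    return min(diffs) < I0                                            # Eq. (31)
```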
The second function of the visual template is to calculate the moving speed of the carrier. First, the profile offset $s_{i,i+1}$ between two consecutive visual templates is obtained according to Equation (31); then the unit average intensity difference is calculated:

$f\left(s_{i,i+1}, I_i, I_{i+1}\right) = \dfrac{1}{\omega - |s_{i,i+1}|} \sum_{n=1}^{\omega - |s_{i,i+1}|} \dfrac{\left| I^{i}_{n+\max(s,0)} - I^{i+1}_{n-\min(s,0)} \right|}{I^{i}_{n+\max(s,0)} + I^{i+1}_{n-\min(s,0)}}$ (32)

where $s$ is shorthand for $s_{i,i+1}$.
Then the forward speed of the carrier is:

$v = \min\left(\mu\, f\left(s_{i,i+1}, I_i, I_{i+1}\right),\ v_{\max}\right)$ (33)
In this formula, $\mu$ is a constant relating the intensity difference to the forward speed, and $v_{\max}$ limits the maximum speed of the carrier. The carrier speed calculated from the visual templates is not highly accurate, but the method ties the estimate to the actual scene, ensuring the reliability of navigation while significantly reducing the amount of calculation. This reflects the advantage of the biological topological navigation mode, which does not pursue high positioning accuracy.
Similarly, the visual templates can be used to calculate the carrier's speed in the height direction. First, $s_{i,i+1}$ columns are cropped from each of the two visual templates to remove the influence of heading rotation; the remaining parts are scanned by row to give the profiles $I'_i$ and $I'_{i+1}$. The unit average intensity difference is then:

$f_h = \min_{s \in [\rho_h - h,\ h - \rho_h]} \dfrac{1}{h - |s|} \sum_{n=1}^{h - |s|} \dfrac{\left| I'^{\,i}_{n+\max(s,0)} - I'^{\,i+1}_{n-\min(s,0)} \right|}{I'^{\,i}_{n+\max(s,0)} + I'^{\,i+1}_{n-\min(s,0)}}$ (34)
where $h$ represents the pixel height of the image and $\rho_h$ ensures that the two images have enough overlap. The speed in the height direction is then:

$v_h = \min\left(\mu_h f_h,\ v_{h\max}\right)$ (35)
Likewise, $\mu_h$ is a constant related to the velocity in the height direction, and $v_{h\max}$ is the maximum velocity in the height direction.
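The speed estimate of Equations (32)-(33) reuses the matched offset; a sketch follows (the epsilon guard in the denominator is ours, added to keep the normalized residual well defined).

```python
import numpy as np

def unit_intensity_difference(Ii, Ii1, s):
    """Unit average intensity difference, Equation (32): the per-pixel
    normalized residual at the matched offset s."""
    w = len(Ii)
    if s >= 0:
        a, b = Ii[s:w], Ii1[0:w - s]
    else:
        a, b = Ii[0:w + s], Ii1[-s:w]
    denom = np.abs(a + b) + 1e-6            # guard against vanishing sums
    return np.sum(np.abs(a - b) / denom) / (w - abs(s))

def forward_speed(Ii, Ii1, s, mu, v_max):
    """Forward speed, Equation (33): proportional to the residual image
    difference, clamped at v_max (mu and v_max as in Table 3)."""
    return min(mu * unit_intensity_difference(Ii, Ii1, s), v_max)
```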

3.3. Environmental Topological Map Based on Absolute Heading

In the process of moving in space, rats combine their motion trajectory with the perception of the surrounding environment and finally form an environmental cognitive map composed of a series of connected cognitive nodes in the brain to guide the rat navigation. The cognitive map is a topological map. This section details the construction process of the topology map, as shown in Figure 5.
The topological map consists of two basic elements: cognitive nodes $e$ and the topological edges $l$ that encode the relationships between nodes. The cognitive nodes constructed in this paper include the visual template $V_i$, grid cells $P^{gc}_i$, head direction cells $P^{hc}_i$, and the absolute pose $T_i$ in the current scene. A single cognitive node is defined as:

$e_i = \left\{ V_i,\ P^{gc}_i,\ P^{hc}_i,\ T_i \right\}$ (36)
where $T_i = [X_i, Y_i, Z_i, \alpha_i]^T$ is a four-dimensional vector composed of the position and the absolute heading. A topological edge based on absolute heading is defined as:

$l_{ij} = \left\{ \Delta T_{ij} = [\Delta X_{ij}, \Delta Y_{ij}, \Delta Z_{ij}, \alpha_{ij}]^T \right\}$ (37)
where $\Delta X_{ij}$, $\Delta Y_{ij}$, and $\Delta Z_{ij}$ represent the position change of the carrier from node $i$ to node $j$, while $\alpha_{ij}$ is the absolute heading from the center of node $i$ to node $j$, not the change in heading angle. Topological connections between cognitive nodes are established through topological edges:

$e_j = \left\{ V_j,\ P^{gc}_j,\ P^{hc}_j,\ T_i \odot q_0 + \Delta T_{ij} \right\}$ (38)

where $q_0 = [1, 1, 1, 0]$ is a constant row vector and $\odot$ denotes the element-wise product; the zero in the heading component means that the absolute heading of node $j$ comes from the edge rather than from accumulation.
When the carrier moves from a cognitive node into a new scene that it has not experienced before, a pending cognitive node is generated, and the system compares it with the existing nodes in the topological map to determine whether a new node must be created. The matching degree $S_i$ designed in this paper is:

$S_i = \mu_v \left| V_i - V \right| + \mu_{gc} \left| P^{gc}_i - P^{gc} \right| + \mu_{hc} \left| P^{hc}_i - P^{hc} \right| + \mu_\alpha\, \alpha_{score}$ (39)
where $\mu_v$, $\mu_{gc}$, $\mu_{hc}$, and $\mu_\alpha$ are the weighting coefficients of the matching terms, and $\alpha_{score}$ is the matching difference computed from the absolute heading. If $\left| \alpha_i - \alpha \right| \le \pi/2$:

$\alpha_{score} = \begin{cases} 0, & \left| \alpha_i - \alpha \right| \le \alpha_0 \\ \left| \alpha_i - \alpha \right|, & \left| \alpha_i - \alpha \right| > \alpha_0 \end{cases}$ (40)

If $\left| \alpha_i - \alpha \right| > \pi/2$ and $\alpha_i > \alpha$:

$\alpha_{score} = \begin{cases} 0, & \left| \alpha_i - \pi - \alpha \right| \le \alpha_0 \\ \left| \alpha_i - \pi - \alpha \right|, & \left| \alpha_i - \pi - \alpha \right| > \alpha_0 \end{cases}$ (41)

And if $\left| \alpha_i - \alpha \right| > \pi/2$ and $\alpha_i \le \alpha$:

$\alpha_{score} = \begin{cases} 0, & \left| \alpha_i + \pi - \alpha \right| \le \alpha_0 \\ \left| \alpha_i + \pi - \alpha \right|, & \left| \alpha_i + \pi - \alpha \right| > \alpha_0 \end{cases}$ (42)
Here, $\alpha_0$ is a threshold. When the absolute heading difference between two cognitive nodes is less than 60°, the two nodes are considered successfully matched in the absolute heading term; otherwise, this term fails to match. Considering that the carrier may pass through the same scene in both the forward and reverse directions, if the two cognitive nodes are reversed, the absolute heading of the current cognitive node is first rotated by 180° before the heading difference is calculated.
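The three heading cases of Equations (40)-(42) collapse into a single comparison modulo 180°, as this sketch shows (angles in radians; converting the degree-valued threshold from Table 3 is left to the caller):

```python
import numpy as np

def alpha_score(alpha_i, alpha, alpha_0):
    """Heading matching term of Equation (39), per Equations (40)-(42):
    differences beyond 90 deg are treated as a reverse traversal, and one
    heading is rotated by 180 deg before comparison."""
    d = abs(alpha_i - alpha)
    if d > np.pi / 2:
        d = abs(d - np.pi)   # covers both the alpha_i > alpha and <= alpha cases
    return 0.0 if d <= alpha_0 else d
```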
A node matching threshold $S_{\max}$ is set. If $\min(S_i) \ge S_{\max}$, the current cognitive node is added to the topological map and linked to it by topological edges according to Equation (38). If there are multiple nodes whose matching degree is below $S_{\max}$, the node $e_i$ with the lowest matching degree is considered successfully matched with the current scene; that is, the carrier has moved into the scene represented by cognitive node $e_i$.
When cognitive nodes are successfully matched, loop closure is triggered, and the accumulated path integration error can be corrected using the node information. In this paper, the map relaxation method of RatSLAM is used to correct the topological map. The method corrects the cognitive nodes by applying a pose offset; since the absolute heading accumulates no error, only the positions of the cognitive nodes need to be modified. The pose offset is calculated as:

$\Delta P_i = \vartheta \left[ \sum_{j=1}^{N_f} \left( T_j - T_i - \Delta T_{ij} \right) \odot q_0 + \sum_{k=1}^{N_t} \left( T_k - T_i + \Delta T_{ki} \right) \odot q_0 \right]$ (43)
where $\vartheta$ is the correction coefficient, $N_f$ is the number of connections from cognitive node $e_i$ to other nodes, and $N_t$ is the number of connections from other cognitive nodes to node $e_i$.
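A sketch of one map relaxation pass per Equation (43); the node and edge containers are illustrative data structures, not the authors'.

```python
import numpy as np

def relax_positions(nodes, edges, theta):
    """One map-relaxation pass, Equation (43). nodes: dict id -> pose
    [X, Y, Z, alpha]; edges: list of (i, j, dT) with dT = [dX, dY, dZ, a_ij].
    Only positions are corrected: q0 zeroes the heading component."""
    q0 = np.array([1.0, 1.0, 1.0, 0.0])
    for i in nodes:
        residual = np.zeros(4)
        for (a, b, dT) in edges:
            if a == i:       # outgoing edge i -> b
                residual += (nodes[b] - nodes[i] - dT) * q0
            elif b == i:     # incoming edge a -> i
                residual += (nodes[a] - nodes[i] + dT) * q0
        nodes[i] = nodes[i] + theta * residual
    return nodes
```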

4. Experiments

To verify the feasibility and effectiveness of the proposed model, an experimental platform was built, and outdoor vehicle experiments were carried out. This section covers the construction of the experimental platform, the evaluation method, the parameter settings, the experimental process, and the results and discussion.

4.1. Experimental Setup

4.1.1. Experimental Equipment

The self-built navigation experiment platform is shown in Figure 6; it mainly includes a polarized sky-light sensor, a ZED binocular camera, a high-precision inertial/satellite integrated navigation system (SPAN-CPT), a mobile power supply, and an aluminum frame. SPAN-CPT records the trajectory and attitude of the experimental vehicle throughout the run, and its measurements serve as the reference for the experiments. The parameters of each sensor are listed in Table 2.
The experimental platform was mounted on a car for the navigation experiments, as shown in Figure 7. Before each experiment, the SPAN-CPT was calibrated, and each sensor was activated after calibration. During the experiment, the data of the three nodes (the polarized sky-light sensor, the ZED, and the SPAN-CPT) were collected simultaneously through the rosbag package of the Robot Operating System (ROS), and the dataset was recorded. The sampling rate of the polarized sky-light sensor and the ZED was set to 20 Hz, and that of the SPAN-CPT to 100 Hz. The recorded dataset can be replayed offline in real time to simulate the actual scene, which makes it convenient to compare different navigation systems and to align timestamps.

4.1.2. Trajectory Evaluation Criteria

To evaluate the accuracy of the constructed topological map and navigation parameters, this paper adopts the two evaluation metrics proposed by Sturm, relative pose error (RPE) and absolute trajectory error (ATE), for quantitative analysis [46]. ATE directly calculates the difference between the reference and the estimated values: the system first aligns the timestamps and then computes the difference between each pair of values. RPE also aligns the timestamps first but then computes the difference between the value changes over the same time interval. We first align the topological nodes with the reference trajectory timestamps, randomly select $m$ key positions to compute the error pairs $\Delta p^{ATE}_{1:m}$ and $\Delta p^{RPE}_{k:m}$, and then calculate their root mean square error (RMSE).
The RMSE of the ATE of the positions is:

$RMSE\left(\Delta p^{ATE}_{1:m}\right) = \left( \dfrac{1}{m} \sum_{i=1}^{m} \left\| \Delta p_i \right\|^2 \right)^{1/2}$ (44)

The RMSE of the RPE of the positions is:

$RMSE\left(\Delta p^{RPE}_{k:m}\right) = \left( \dfrac{1}{m-k+1} \sum_{i=k}^{m} \left\| \Delta p_i \right\|^2 \right)^{1/2}$ (45)
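The two RMSE metrics of Equations (44)-(45) reduce to a few lines; the trajectory arrays below are assumed timestamp-aligned.

```python
import numpy as np

def rmse(errors):
    """Root mean square error, as in Equations (44)-(45)."""
    errors = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(errors**2))

def ate(est, ref):
    """Absolute trajectory error: per-position differences between the
    timestamp-aligned estimated and reference positions (N x 3 arrays)."""
    return np.linalg.norm(est - ref, axis=1)

def rpe(est, ref, k=1):
    """Relative pose error: differences between position *changes* over
    the same interval k in the two trajectories."""
    d_est, d_ref = est[k:] - est[:-k], ref[k:] - ref[:-k]
    return np.linalg.norm(d_est - d_ref, axis=1)
```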

4.1.3. Parameter Selection

There are many constants in the proposed navigation model, and this section provides their values in the experiments. The parameter values were drawn from NeuroSLAM and finally determined through actual experiments [33]. The specific values are listed in Table 3.

4.2. Experimental Results and Discussion

To verify the validity and map topological consistency of the proposed brain-inspired navigation model based on absolute heading (PolSLAM), we collected datasets in two environments with different characteristics. PolSLAM was compared with the traditional visual method ORBSLAM [4] and the brain-inspired method NeuroSLAM [33], and evaluated in terms of the topological consistency of the map trajectories, the geometric accuracy of the trajectories, and the recognition of cognitive nodes.

4.2.1. First Scene: Student Apartment Buildings

The first scene was the student apartment area of the Dalian University of Technology, where two buildings form a figure-eight road; the experimental trajectory ran along the sides of this road. The scene has multiple forks, so there were many cognitive nodes that triggered loop detection. The trajectory is shown in Figure 8: Figure 8a shows the environment, and Figure 8b shows the trajectory recorded by SPAN-CPT during the experiment.
In this scene, the map trajectories obtained by PolSLAM, NeuroSLAM, and ORBSLAM are shown in Figure 9. The trajectory obtained by PolSLAM has better accuracy and topological consistency than that of NeuroSLAM. Because each cognitive node carries an absolute heading, the cumulative rotation error of path integration does not degrade the accuracy and consistency of the map, so the topological map constructed by PolSLAM is more accurate, especially at turns. The overall accuracy of the PolSLAM map trajectory is comparable to that of ORBSLAM, but our method is more accurate at corners thanks to the absolute heading of the cognitive nodes. One advantage of brain-inspired navigation is its high computational efficiency; our method preserves this efficiency while bringing the topological consistency of the map to the level of traditional visual SLAM.
The most important performance metric for bio-inspired topological maps is the overall map topological consistency, not the geometric accuracy of individual location points. However, in order to further quantitatively evaluate the mapping performance, we selected 20 key positions in the map to calculate the position error and calculated its ATE and RPE. First, align the three method trajectories with the SPAN-CPT trajectory timestamp, select 20 position points, and obtain the position errors between each method and the reference value, as shown in Figure 10.
The position accuracy of PolSLAM is generally higher than that of NeuroSLAM. PolSLAM is more accurate than ORBSLAM at some key positions because these positions lie at corners, where the absolute heading constrains the relative rotation angle of the cognitive node and thereby improves the position accuracy. The ATE and RPE computed from these position errors are shown in Figure 11. The box plot at abscissa (i–j) is computed by taking the segments from the i-th point to the (i+1)-th point, to the (i+2)-th point, and so on up to the j-th point, for j − i + 1 segments in total, and calculating the RMSE of each segment. Figure 11 shows that PolSLAM has higher position accuracy and more stable errors than NeuroSLAM; compared with ORBSLAM, the error of PolSLAM is more stable, and the accuracy is comparable.
In addition, the performance of PolSLAM and NeuroSLAM in cognitive node recognition was compared, as shown in Figure 12, where the abscissa is the index of the environmental visual image over the experiment and the ordinate is the number of established topological nodes. A point (x, y) on the curve indicates that the x-th image corresponds to the y-th cognitive node; one cognitive node corresponding to multiple visual images means that the system has detected a loop, that is, multiple visual images were obtained from the same scene at different times. In general, a continuous curve indicates correct loop detection, while isolated dots indicate incorrect loop detection. Figure 12 shows little difference between the two algorithms when building new templates; however, in template matching, PolSLAM shows no apparent mismatches during the experiment, so it has better accuracy in cognitive node matching.

4.2.2. Second Scene: Exterior Road of Zhifang Building

The second scene was the external road surrounding the Zhifang Building of the Dalian University of Technology. The experimental trajectory is shown in Figure 13; it is approximately two loops around a square, and the entire trajectory is a closed loop without extra forks. The map trajectories obtained by the three methods are shown in Figure 14. PolSLAM again shows good topological consistency, and the angular accuracy of the whole map is high. Compared with NeuroSLAM, the map accuracy and topological consistency of PolSLAM are better. Compared with ORBSLAM, although there are fewer multi-fork cognitive nodes, the geometric accuracy of the PolSLAM map trajectory remains high, and its angular accuracy is even better than that of ORBSLAM.
Similarly, to quantitatively evaluate the mapping performance, we selected 20 key positions in the map and calculated the position errors, ATE, and RPE; the position errors are shown in Figure 15. The positional accuracy of PolSLAM is higher than that of NeuroSLAM and comparable to that of ORBSLAM, with some positions even slightly more accurate. The ATE and RPE are shown in Figure 16. Since the topological map of PolSLAM carries an accurate absolute heading, there is no rotational drift, so the overall trajectory accuracy is high and the error is more stable.
Figure 17 shows the comparison of cognitive node recognition. Since the experimental trajectory consists of roughly two loops around the same block, correct loop detection should begin on the second loop, with every cognitive node triggering a loop with the corresponding node from the first loop, and no loop detection elsewhere. Figure 17 shows that, because its topological map carries an absolute heading, PolSLAM achieves a higher node recognition accuracy than NeuroSLAM, and continuous loop closures are triggered at the same positions across the two loops.

5. Conclusions

This paper proposed a brain-inspired navigation model based on absolute heading for the autonomous navigation of unmanned platforms. The model uses a polarized sky-light sensor inspired by the insect compound eye to sense the polarization distribution pattern of the sky and thus obtain absolute heading information. A brain-inspired grid cell model and an absolute heading-based head direction cell model were constructed for path integration and for representing the carrier's pose. An environment vision template based on absolute heading was constructed to calculate self-motion information and correct path integration errors. Finally, topological cognitive nodes were constructed from the grid cells, the head direction cells, the environmental visual template, the absolute heading, and the position; numerous such nodes form the absolute heading-based topological map. The experimental results showed that, compared with existing brain-inspired methods, the proposed method improves the geometric accuracy and topological consistency of the constructed environmental topological map as well as the recognition accuracy of cognitive nodes. The navigation model is a topological navigation method not limited to a strict geometric space scale. The system consists of only one binocular camera and one polarized sky-light sensor, giving it high computational efficiency, low power consumption, and light weight. It can provide stable navigation for small autonomous unmanned platforms.

Author Contributions

Conceptualization, J.C. and J.L.; methodology, J.L., J.C. and R.Z.; writing—original draft preparation, J.L.; writing—review and editing, J.C. and R.Z.; Experiment and data analysis, J.L. and K.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 52175265, 52275281), Fundamental Research Funds for the Central Universities (Nos. DUT21ZD101, DUT21GF308).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Liang, X.; Chen, W.; Cao, Z.; Wu, F.; Wang, L. The Navigation and Terrain Cameras on the Tianwen-1 Mars Rover. Space Sci. Rev. 2021, 217, 37.
2. He, G.; Yuan, X.; Zhuang, Y.; Hu, H. An Integrated GNSS/LiDAR-SLAM Pose Estimation Framework for Large-scale Map Building in Partially GNSS-Denied Environments. IEEE Trans. Instrum. Meas. 2020, 70, 7500709.
3. Larson, K.M. Unanticipated Uses of the Global Positioning System. Annu. Rev. Earth Planet. Sci. 2019, 47, 19–40.
4. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
5. Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the IEEE & ACM International Symposium on Mixed & Augmented Reality, Cambridge, UK, 15–18 September 2008.
6. Tong, Q.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2017, 34, 1004–1020.
7. Somlyai, L.; Vamossy, Z. ISVD-Based Advanced Simultaneous Localization and Mapping (SLAM) Algorithm for Mobile Robots. Machines 2022, 10, 519.
8. Wang, Y.; Zhang, T.; Wang, Y.; Ma, J.; Li, Y.; Han, J. Compass aided visual-inertial odometry. J. Vis. Commun. Image Represent. 2019, 60, 101–115.
9. Jouventin, P. Satellite tracking of wandering albatrosses. Nature 1990, 343, 746–748.
10. Warrant, E.; Frost, B.; Green, K.; Mouritsen, H.; Dreyer, D.; Adden, A.; Brauburger, K.; Heinze, S. The Australian Bogong Moth Agrotis infusa: A Long-Distance Nocturnal Navigator. Front. Behav. Neurosci. 2016, 10, 162.
11. Brower, L.P. Monarch butterfly orientation: Missing pieces of a magnificent puzzle. J. Exp. Biol. 1996, 199, 93–103.
12. Lohmann, K.J.; Cain, S.D.; Dodge, S.A.; Lohmann, C.M. Regional Magnetic Fields as Navigational Markers for Sea Turtles. Science 2001, 294, 364–366.
13. Mann, R.; Freeman, R.; Osborne, M.; Garnett, R.; Armstrong, C.; Meade, J.; Biro, D.; Guilford, T.; Roberts, S. Objectively identifying landmark use and predicting flight trajectories of the homing pigeon using Gaussian processes. J. R. Soc. Interface 2011, 8, 210–219.
14. Mouritsen, H. Long-distance navigation and magnetoreception in migratory animals. Nature 2018, 558, 50–59.
15. Rodriguez, F.; Lopez, J.C.; Vargas, J.P.; Broglio, C.; Gomez, Y.; Salas, C. Spatial memory and hippocampal pallium through vertebrate evolution: Insights from reptiles and teleost fish. Brain Res. Bull. 2002, 57, 499–503.
16. O'Keefe, J.; Dostrovsky, J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 1971, 34, 171–175.
17. Taube, J.S. Head direction cells recorded in the anterior thalamic nuclei of freely moving rats. J. Neurosci. 1995, 15, 70.
18. Hafting, T.; Fyhn, M.; Molden, S.; Moser, M.B.; Moser, E.I. Microstructure of a spatial map in the entorhinal cortex. Nature 2005, 436, 801–806.
19. Kropff, E.; Carmichael, J.E.; Moser, M.B.; Moser, E.I. Speed cells in the medial entorhinal cortex. Nature 2015, 523, 419–424.
20. O'Keefe, J.; Burgess, N. Geometric determinants of the place fields of hippocampal neurons. Nature 1996, 381, 425–428.
21. Park, S.W.; Jang, H.J.; Kim, M.; Kwag, J. Spatiotemporally random and diverse grid cell spike patterns contribute to the transformation of grid cell to place cell in a neural network model. PLoS ONE 2019, 14, e0225100.
22. Arleo, A.; Gerstner, W. Modeling Rodent Head-direction Cells and Place Cells for Spatial Learning in Bio-mimetic Robotics. Anim. Animat. 2000, 6, 236–245.
23. Gaussier, P.; Revel, A.; Banquet, J.P.; Babeau, V. From view cells and place cells to cognitive map learning: The hippocampus as a spatio-temporal memory. Biol. Cybern. 2002, 86, 15–28.
24. Ramirez, A.B.; Ridel, A.W. Bio-inspired Model of Robot Adaptive Learning and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems, San Diego, CA, USA, 29 October–2 November 2007.
25. Erdem, U.M.; Hasselmo, M. A goal-directed spatial navigation model using forward trajectory planning based on grid cells. Eur. J. Neurosci. 2012, 35, 916–931.
26. Tejera, G.; Llofriu, M.; Barrera, A.; Weitzenfeld, A. Bio-Inspired Robotics: A Spatial Cognition Model integrating Place Cells, Grid Cells and Head Direction Cells. J. Intell. Robot. Syst. 2018, 91, 85–99.
27. Zou, Q.; Cong, M.; Liu, D.; Du, Y. A neurobiologically inspired mapping and navigating framework for mobile robots. Neurocomputing 2021, 460, 181–194.
28. Silveira, L.; Guth, F.; Drews, P., Jr.; Ballester, P.; Machado, M.; Codevilla, F.; Duarte-Filho, N.; Botelho, S. An Open-source Bio-inspired Solution to Underwater SLAM. IFAC-PapersOnLine 2015, 48, 212–217.
29. Wu, C.; Yu, S.M.; Chen, L.; Sun, R.C. An Environmental-Adaptability-Improved RatSLAM Method Based on a Biological Vision Model. Machines 2022, 10, 259.
30. Tang, H.J.; Yan, R.; Tan, K.C. Cognitive Navigation by Neuro-Inspired Localization, Mapping, and Episodic Memory. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 751–761.
31. Schneider, H. Navigation Map-Based Artificial Intelligence. AI 2022, 3, 434–464.
32. Milford, M.J.; Wyeth, G.F. Mapping a Suburb With a Single Camera Using a Biologically Inspired SLAM System. IEEE Trans. Robot. 2008, 24, 1038–1053.
33. Yu, F.W.; Shang, J.G.; Hu, Y.J.; Milford, M. NeuroSLAM: A brain-inspired SLAM system for 3D environments. Biol. Cybern. 2019, 113, 515–545.
34. Lebhardt, F.; Ronacher, B. Transfer of directional information between the polarization compass and the sun compass in desert ants. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 2015, 201, 599.
35. Lambrinos, D.; Moller, R.; Labhart, T.; Pfeifer, R.; Wehner, R. A mobile robot employing insect strategies for navigation. Robot. Auton. Syst. 2000, 30, 39–64.
36. Chu, J.K.; Wang, H.Q.; Chen, W.J.; Li, R.H. Application of a Novel Polarization Sensor to Mobile Robot Navigation. In Proceedings of the IEEE International Conference on Mechatronics and Automation, Changchun, China, 9–12 August 2009; pp. 3763–3768.
37. Wang, Y.L.; Chu, J.K.; Zhang, R.; Li, J.S.; Guo, X.Q.; Lin, M.Y. A Bio-Inspired Polarization Sensor with High Outdoor Accuracy and Central-Symmetry Calibration Method with Integrating Sphere. Sensors 2019, 19, 3448.
38. Lu, H.; Zhao, K.C.; You, Z.; Huang, K.L. Angle algorithm based on Hough transform for imaging polarization navigation sensor. Opt. Express 2015, 23, 7248–7262.
39. Zhi, W.; Chu, J.K.; Li, J.S.; Wang, Y.L. A Novel Attitude Determination System Aided by Polarization Sensor. Sensors 2018, 18, 158.
40. Dupeyroux, J.; Serres, J.R.; Viollet, S. AntBot: A six-legged walking robot able to home like desert ants in outdoor environments. Sci. Robot. 2019, 4, eaau0307.
41. Fan, C.; Hu, X.P.; He, X.F.; Zhang, L.L.; Wang, Y.J. Multicamera polarized vision for the orientation with the skylight polarization patterns. Opt. Eng. 2018, 57, 043101.
42. Du, T.; Zeng, Y.H.; Yang, J.; Tian, C.Z.; Bai, P.F. Multi-sensor fusion SLAM approach for the mobile robot with a bio-inspired polarised skylight sensor. IET Radar Sonar Navig. 2020, 14, 1950–1957.
43. de Croon, G.; Dupeyroux, J.J.G.; Fuller, S.B.; Marshall, J.A.R. Insect-inspired AI for autonomous robots. Sci. Robot. 2022, 7, eabl6334.
44. Lindsay, R.B. On the Light from the Sky, its Polarization and Colour. Phil. Mag. 1871, 41, 274.
45. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; pp. 153–156.
46. Zhang, Z.; Scaramuzza, D. A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018.
Figure 1. Polarization patterns in the sky.
Figure 2. Principle of polarized skylight orientation.
Figure 3. Actual photo of polarization sensor.
Figure 4. Schematic of brain-inspired navigation model based on absolute heading.
Figure 5. Schematic diagram of topology map.
Figure 6. Experimental platform.
Figure 7. Photo of the platform and the vehicle.
Figure 8. The experimental trajectory of the first scene. (a) is experimental site, (b) is latitude and longitude.
Figure 9. Comparison of experimental trajectories of three methods based on the first scene.
Figure 10. Comparison of the position errors of the key positions based on the first scene.
Figure 11. Comparison of the ATE and RPE of the key positions based on the first scene. (a) is ATE, (b) is RPE.
Figure 12. Cognitive node recognition and loop detection based on the first scene. (a) is PolSLAM, (b) is NeuroSLAM.
Figure 13. The experimental trajectory of the second scene. (a) is experimental site, (b) is latitude and longitude.
Figure 14. Comparison of experimental trajectories of three methods based on the second scene.
Figure 15. Comparison of the position errors of the key positions based on the second scene.
Figure 16. Comparison of the ATE and RPE of the key positions based on the second scene. (a) is ATE, (b) is RPE.
Figure 17. Cognitive node recognition and loop detection based on the second scene. (a) is PolSLAM, (b) is NeuroSLAM.
Table 1. The algorithm pseudocode of the navigation model.

Input: a series of visual images and absolute heading angles
Output: topology map and pose

1   initialize system;
2   while Input != []
3       process image;
4       generate visual template;
5       compute visual odometry;
6       perform path integration based on visual odometry and absolute heading;
7       update activity in grid cell network and head direction cell network;
8       get current pose;
9       perform topological cognitive node matching;
10      if no match
11          generate topological cognitive node and topological edges;
12      else if matched
13          update topology map;
14          perform map relaxation to correct the accumulated error;
15      end if
16  end while
17  output topology map and pose
Table 2. Sensor parameters.

| Sensor | Type | Specifications | Sampling (Hz) |
|---|---|---|---|
| Polarized sky-light sensor | Self-made point-source sensor | Outdoor accuracy: 0.5°; indoor accuracy: 0.009°; weight: 126 g; dimensions: 6.5 × 6.5 × 7.5 cm | 20 |
| Binocular camera | ZED (Stereolabs) | Resolution: 1024 × 768; pixel size: 2 μm; depth range: 0.3–25 m; weight: 159 g | 20 |
| SINS/GNSS | SPAN-CPT | Roll and pitch accuracy: 0.02°; yaw accuracy: 0.06°; continuous position accuracy: 0.5 m | 100 |
Table 3. Parameter settings.

| Parameter | First Scene | Second Scene | Description |
|---|---|---|---|
| $n_\alpha \times n_h$ | 36 × 36 | 36 × 36 | Dimensions of the head direction cell network |
| $\kappa$ | 0.0002 | 0.0002 | Global inhibition constant |
| $I_0$ | 20 | 25 | Visual template matching threshold |
| $\mu_v$, $\mu_h$ | 0.5, 0.5 | 0.5, 0.5 | Constants related to velocity |
| $v_{h\max}$, $v_{\max}$ | 0.4, 0.5 | 0.4, 0.7 | Maximum velocity limits |
| $\alpha_0$ | 20 | 30 | Absolute heading difference threshold (°) |
| $S_{\max}$ | 30 | 35 | Node matching threshold |
| $\vartheta$ | 0.5 | 0.5 | Correction coefficient of the map |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
