Article

An Invisible Salient Landmark Approach to Locating Pedestrians for Predesigned Business Card Route of Pedestrian Navigation

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
2 School of Urban Construction, Wuhan University of Science and Technology, Wuhan 430065, China
3 Department of Geography, The University of Tennessee, Knoxville, TN 37996-0925, USA
4 Guangzhou Institute of Geography, Guangzhou 510070, China
5 Electronic Information School, Wuhan University, Wuchang District, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(9), 3164; https://doi.org/10.3390/s18093164
Submission received: 18 July 2018 / Revised: 15 September 2018 / Accepted: 16 September 2018 / Published: 19 September 2018
(This article belongs to the Section Sensor Networks)

Abstract: Visual landmarks are important navigational aids for research into, and the design of, applications for last mile pedestrian navigation, e.g., the business card route of pedestrian navigation. A business card route is a route between a fixed origin (e.g., a campus entrance) and a fixed destination (e.g., an office). The changing characteristics and combinations of the data from various sensors in smartphones or navigation devices can be viewed as invisible salient landmarks for the business card route of pedestrian navigation. However, the advantages of these invisible landmarks have not been fully utilized, despite the prevalence of GPS and digital maps. This paper presents an improvement to the Dempster–Shafer theory of evidence to find invisible landmarks along predesigned pedestrian routes, which can guide pedestrians by locating them without using digital maps. This approach is suitable for use as a "business card" route that allows newcomers to find their last mile destinations smoothly by following precollected sensor data along a target route. Experiments in real pedestrian navigation environments show that the proposed approach can sense the location of pedestrians automatically, both indoors and outdoors, and has smaller positioning errors than pure GPS and Wi-Fi positioning approaches in the study area. Consequently, the proposed methodology is appropriate for guiding pedestrians to unfamiliar destinations, such as a room in a building or an exit from a park, with little dependency on geographical information.

1. Introduction

Computer-assisted pedestrian navigation is an area that requires ongoing research because of people's varying abilities [1], the complexity of various environments, the localization problem in indoor and outdoor environments [2,3,4,5,6], and data modeling [7,8] for navigation applications. Researchers and industrial practitioners usually focus on the theory and technology of locating and guiding pedestrians and often provide shortest-path services. Numerous localization techniques supply the coordinates (x, y, z) of navigation devices, which are commonly used to guide pedestrians location by location through digital navigation maps [9], street views or scenes [10], or visual landmarks [11,12]. Current localization techniques and their corresponding pedestrian navigation applications are fully constrained by the availability of signals of opportunity [2] in the environment, for example, the absence of Wi-Fi access points (APs), the shielding of Global Navigation Satellite System (GNSS) signals, and the visibility of landmarks. Typically, however, only a few sensors in smartphones or navigation devices contribute to localization or navigation, such as Wi-Fi-based positioning [13,14] and its Kalman filter-based error smoothing [15,16], Bluetooth inquiries [17], GPS, accelerometers [18], and drift compensation algorithms with invisible landmarks [19]. Other signals from built-in sensors in smartphones or navigation devices, for instance, the gyroscope, gravity, orientation, light, and proximity sensors, are not yet fully utilized in current navigation applications. Each sensor's signals change characteristically during pedestrian navigation as a result of the influence of terrain and pedestrian behavior [1,18,20]. The means of determining the characteristic changes in the signals from built-in sensors in smartphones or navigation devices, and of using them as "invisible salient landmarks" to guide pedestrians, has not been addressed in the literature.
These invisible landmarks could be very helpful for understanding the real-time locations and movements of pedestrians in smart navigation applications.
This paper introduces an invisible landmark-based approach to locate pedestrians in indoor and outdoor environments: an improved Dempster-Shafer (D-S) theory of evidence, which locates pedestrians by considering co-existing phenomena in the form of the sensors' signal change characteristics. All signals from the sensors are sequenced in a linear-referencing manner along a target "business card" route. The target route is requested by its proposer, running from a specific origin (e.g., the entrance of a university campus) to a particular destination, such as his or her office. Then, the changing characteristics of the various sensors and their combinations are analyzed to build up the frame of discernment in the theory of evidence. In this paper, we propose:
(1)
An improved D-S theory of evidence that integrates the co-existing phenomena of the sensors' signal change characteristics. This approach is distinguished from fingerprint-based localization and navigation applications by its focus on the nature of combined, rather than individual, signal changes.
(2)
The similarity of real-time data of co-existing sensors to the predefined evidence framework is defined to refine the basic belief assignment in the theory of evidence.
(3)
A match error-based sensor weight assignment approach is proposed to handle conflicts of evidence processing.
Thus, the invisible landmark-based pedestrian locating approach based on the D-S theory of evidence is a useful guide to allow newcomers to follow predesigned business card routes.
The remainder of this paper is structured as follows: Section 2 outlines previous research, while Section 3 describes the sensors in smartphones and their signals, introduces a framework of the proposed approach, and then describes an improved approach of D-S theory for pedestrian locating. Section 4 describes the experiments and results and Section 5 states some conclusions and the direction of future works.

2. Related Work

2.1. Landmark-Based Navigation

Landmark-based navigation tasks are broadly discussed in the disciplines of cognition, neuroscience, geographical information science (GIS), computing, and communication.
There is rich literature in the disciplines of cognition and neuroscience on the spatial cognitive aspects of landmark-based instruction. Researchers have studied the perceptual, cognitive, and contextual aspects of landmarks in wayfinding; for example, landmark-based instructions enable pedestrians to comprehend efficiently the visual prominence, semantic salience, and structural significance of landmarks [7,21,22,23,24]. These studies focus on spatial behavioral factors of people, for instance, spatial abilities, spatial cognition, and spatial decision-making [25]. Some other research works have been published that investigated the landmark-based spatial cognition of different areas of the brain [26], neural activity [27,28], environmental psychology [29], and cognitive map and decision-making recognition processes [30].
In the discipline of GIS, many studies focus on the collection of landmark information, the use of landmarks in route instructions, and the modeling of landmarks in GIS: for example, discovering landmark preferences from photo postings [31], extracting landmarks from laser scanning [32], modeling landmarks using scene graphs [33], including landmarks in routing instructions [11], landmark location in localization systems for urban areas [34], developing landmark-based pedestrian navigation models [7] and three-dimensional navigable data models [35], and estimating cartographic communication performance [36]. Furthermore, some GIS-based pedestrian navigation services or systems have been developed for personal guidance in indoor and outdoor environments [37,38,39,40].
In the disciplines of computing and communication, many studies have been conducted on scene landmark recognition, understanding, and classifications for applications, including robot navigation. For example, identifying landmarks in urban scenes [41], vision-based indoor scene analysis for detecting natural landmarks [42], assessing the recognition time in real scenes [43], projecting dynamic images for scene understanding [44], classifying landmarks in large-scale image collection [45], visualizing landmarks to support spatial orientation [46], and even developing an image-based indoor navigation system [47].

2.2. Sensor-Based Pedestrian Localization

Much research work has focused on the use of sensors in smartphones or navigation devices for pedestrian localization, for example: GPS, infrared [48], Ultra Wide Band (UWB) or ultrasound [49,50], laser [51], Wireless Local Area Network (WLAN), Wi-Fi [52] or Li-Fi [53], radio-frequency identification (RFID) [54], Bluetooth [55], wireless sensor networks [56], light-emitting diodes (LED) [57], the Global System for Mobile Communications (GSM) [58], and other inertial sensor-based localization [59,60,61]. The general positioning approaches are based on the theory of fingerprinting or triangulation, for instance, to overcome the effects of multipathing and shadowing [62] in the positioning environment. A recent publication [4] proposed a turbo received signal strength (RSS) model-based indoor algorithm for crowded scenarios. The main perspective of these sensor-based pedestrian localization studies was the feature analysis of sensor signals related to digital geographical maps, fingerprinting [4], or activities [18]. In these studies, several sensors, or information fusion-based approaches using several sensor signals, contributed to pedestrian localization. A recent, robust crowdsourcing-based indoor localization system that makes full use of signals collected by smartphones from multiple sensors was presented in [3]; it adopts a novel and promising perspective on the pedestrian localization problem. Our study was inspired by this approach and tries to make full use of signal features that are highly correlated with predefined pedestrian navigation routes to ensure accurate route guidance in both indoor and outdoor environments.

3. Materials and Methods

3.1. Sensors Signals in Smartphones

Nowadays, the functionality of smartphones far exceeds simple communication. Current expectations are such that phones host applications for entertainment, practice, and learning. Thus, smartphones integrate more and more sensors (e.g., GPS, Wi-Fi, light meter, magnetometer, gyroscope, and accelerometer) to meet people's expectations for technological assistance in day-to-day life.
The sensors in smartphones can be classified into three categories: motion, environmental, and position sensors. A gyroscope, as a motion sensor, can effectively detect data changes caused by pedestrian movement. The light sensor is an environmental sensor, which measures the light intensity in the pedestrian's environment. GPS, Wi-Fi, and magnetometer sensors are position sensors, although the magnetometer is influenced by the Sun. GPS sensors can determine a person's position both indoors and outdoors. Wi-Fi can be used to triangulate a pedestrian's location and to provide internet services. This paper uses the gyroscope, light sensor, GPS, Wi-Fi, and magnetometer to match the locations of pedestrians with predetermined routes that connect origins and destinations, and records all the signals of these sensors while traveling along the route.

3.2. Framework of the Proposed Approach

This paper proposes an invisible landmark-based navigation guidance approach based on the D-S theory of evidence, which aims to guide pedestrians to their destinations smoothly by following a business card route. The business card route is represented by a sequence of signals detected by the smartphone's sensors between the origin and the destination of the route. The proposed approach compares the real-time signals in the smartphone with the precollected signals of the business card route so as to sense the current location of the pedestrian.
Figure 1 illustrates the framework of the proposed approach. The approach requires the initial collection of all signals of the smartphone's sensors along the whole business card route. These precollected signals are used to build up the frame of discernment in the theory of evidence through two important steps. The first step is to detect signal change characteristics, including significant changes and unchanged patterns. The second step is to divide the route into route segments according to the changes in the signals. The combination of signal changes forms the frame of discernment, which provides the fundamental evidence for the signal feature matching performed while pedestrians navigate using only their smartphones. When a pedestrian follows the predefined business card route, they check their location with the help of a navigation app on their smartphone. This app compares the real-time signals of the smartphone with the frame of discernment by evidence checking, according to the combination rule of evidence for each route segment. The determination depends closely on the basic (belief) probability assignment (BPA) [63] and the proposed weighted combination of evidence in the theory of evidence.

3.3. Pedestrian Location Matching Algorithm Based on an Improved Dempster-Shafer Theory of Evidence

This section introduces the basic concept and approach of the D-S theory of evidence, and then proposes improvements for determining the location of pedestrians.

3.3.1. The D-S Theory of Evidence

The D-S theory of evidence, also known as the theory of belief functions, is an efficient generalization of the Bayesian framework for approximate reasoning and decision-making in uncertain environments; it was originally proposed by Dempster [63] and extended by Shafer [64]. There are three key components of the theory: defining the frame of discernment, obtaining basic beliefs for one question from subjective probabilities for a related question [64], and Dempster's rule for combining such degrees of belief [63].
Let $\theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ denote the set of all possible situations for a given event, called the frame of discernment [64]. The elements of this set are mutually exclusive, i.e., $\theta_i \cap \theta_j = \varnothing$ for $i, j \in \{1, \ldots, n\}$, $i \neq j$. The frame of discernment in this paper is derived from smartphone sensor data. The D-S theory of evidence assigns a belief mass to each element through a basic probability assignment function $m: 2^{\theta} \to [0, 1]$, which satisfies two conditions: the mass of the empty set is $m(\varnothing) = 0$, and $\sum_{A \subseteq \theta} m(A) = 1$. The function $m$ is a BPA on $\theta$. If $A$ is a subset of $\theta$, $m(A)$ can be interpreted as the degree of belief that the true situation lies in $A$, with $0 \leq m(A) \leq 1$ for $A \subseteq \theta$, $A \neq \varnothing$. In practice, Shafer's framework uses an interval $[Bel(A), Pl(A)]$ to represent the belief in a proposition. The interval is bounded by the belief function ($Bel$) and the plausibility function ($Pl$) [64], where $Bel(A)$ measures the total belief that the object is in $A$, and $Pl(A)$ measures the total belief that could move into $A$. The two functions are defined as follows:
$$Bel(A) = \sum_{B \subseteq A} m(B) \quad (1)$$
and
$$Pl(A) = \sum_{B \cap A \neq \varnothing} m(B) \quad (2)$$
Therefore, the interval represents the level of uncertainty given the evidence in the framework. For example, the interval $[0, 0]$ implies that a proposition is completely unsupported, whereas the interval $[1, 1]$ indicates full support.
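As a concrete illustration, the belief and plausibility functions can be computed directly from a mass function over a small frame. The following Python sketch uses a hypothetical two-segment frame {L1, L2}; the segment names and mass values are illustrative only:

```python
def belief(A, m):
    """Bel(A): total mass of all subsets B contained in A."""
    return sum(mass for B, mass in m.items() if B <= A)

def plausibility(A, m):
    """Pl(A): total mass of all subsets B that intersect A."""
    return sum(mass for B, mass in m.items() if B & A)

# Hypothetical frame of two route segments; the masses sum to 1
m = {frozenset({"L1"}): 0.6,
     frozenset({"L2"}): 0.1,
     frozenset({"L1", "L2"}): 0.3}

A = frozenset({"L1"})
bel, pl = belief(A, m), plausibility(A, m)   # interval [0.6, 0.9]
```

The gap between the two values (0.3 here, the mass on the whole frame) is exactly the mass that is not yet committed to either segment.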
In the case of multiple sources of evidence, the BPAs can be combined to yield a new BPA function $M$, defined as follows according to Dempster's combination rule:
$$M(A) = \begin{cases} \dfrac{1}{K} \sum\limits_{A_1 \cap A_2 \cap \cdots \cap A_n = A} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n), & A \neq \varnothing \\ 0, & A = \varnothing \end{cases} \quad (3)$$
$$K = \sum_{A_1 \cap \cdots \cap A_n \neq \varnothing} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n) = 1 - \sum_{A_1 \cap \cdots \cap A_n = \varnothing} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n) \quad (4)$$
where $K$ is a normalization constant, $m_i$ is the BPA of evidence source $i$, and $A_i$ is a subset for evidence source $i$.
Two main factors can lead to conflicts of evidence in the theory of evidence: sensor unreliability, caused by equipment failure or limitations, and incomplete knowledge of the world [65]. As long as the discernment distributions are not completely consistent, conflicts of evidence will occur during evidence fusion and greatly influence the accuracy of the combination result. Two main methods are used to resolve such conflicts. One improves the combination rule, addressing the original flaws of the evidence synthesis; the most representative method, proposed by Yager [66], assumes that conflicting evidence also carries useful information that can be assigned to unknown items and effectively used. The other modifies the sources of evidence in order to avoid conflicts [67].
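Dempster's combination rule above can be sketched for two sources of evidence as follows. The route-segment names and mass values are hypothetical, and total conflict (normalization constant of zero) is treated as an error, as in the unmodified rule:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two BPAs over the same frame: accumulate the
    products of masses on non-empty intersections and normalize by K,
    the total non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (A1, v1), (A2, v2) in product(m1.items(), m2.items()):
        inter = A1 & A2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2          # mass assigned to the empty set
    K = 1.0 - conflict
    if K == 0.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {A: v / K for A, v in combined.items()}

# Two hypothetical pieces of evidence over route segments L1 and L2
m1 = {frozenset({"L1"}): 0.7, frozenset({"L1", "L2"}): 0.3}
m2 = {frozenset({"L1"}): 0.5, frozenset({"L2"}): 0.2,
      frozenset({"L1", "L2"}): 0.3}
M = combine(m1, m2)   # masses renormalized to sum to 1
```

The conflict handling here is the unmodified rule; Yager's variant, mentioned above, would instead move the conflicting mass to the whole frame rather than renormalize.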

3.3.2. Improved Approaches of D-S Theory for Pedestrian Location

This paper improves the D-S theory by combining co-existing rules of evidence and their handling of conflicts of evidence. In this implementation, evidence frameworks are generated for individual route segments, each with some associated evidence from sensors. The following subsections first introduce the path division and then the consequent improvements to D-S theory.

3.3.2.1. The Evidence Framework Based on Sensor Changes in Target Routes

Before establishing the frame of discernment, the user conducts several complete walks along the target business card route and collects the smartphone's sensor data on this route using any sensor data acquisition application. For each sensor, this study divides the route into segments according to the changes in its data, thereby obtaining a route segment set for each kind of sensor, namely, $L_{Mag_x}$, $L_{Mag_y}$, $L_{Gyro}$, $L_{Light}$, $L_{Wi\text{-}Fi}$, and $L_{Lon\&Lat}$, which are used to establish the evidence framework based on the divided route segments. Each route segment must have at least one sensor change feature, and the combination of change features within a route segment builds up the evidence framework for that segment.
The first kind of sensor is the magnetometer, an instrument that measures the direction and strength of the magnetic field at a particular location. For a fixed point, the geomagnetic field can be decomposed into two horizontal components and a component perpendicular to the ground $(x, y, z)$. The vector sum of the two horizontal components points to magnetic north, so the geomagnetism in the current environment can be identified using only the magnetometer x- and y-component data. In some locations, the magnetometer readings are stable, but the magnetic values change when the environment changes; specifically, they may exhibit a sudden increase or decrease, and the variation can reach 20 μT or more. Both the changes and their absence in the environment can be used as evidence of discernment. For example, Figure 2 illustrates two turns and the associated significant changes in magnetometer data; this route can thus be divided into five segments based on the change features exhibited by the magnetometer, which indicate turns along the route. Based on this division, this study generates the route segment sets $L_{Mag_x}$ and $L_{Mag_y}$ from the x- and y-component data of the magnetometer individually: $L_{Mag_x} = \{L_1^{mag_x}, L_2^{mag_x}, L_3^{mag_x}, L_4^{mag_x}, L_5^{mag_x}\}$ in Figure 2a, and $L_{Mag_y} = \{L_1^{mag_y}, L_2^{mag_y}, L_3^{mag_y}, L_4^{mag_y}, L_5^{mag_y}\}$ in Figure 2b.
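The magnetometer-based division can be sketched as a simple threshold rule: a segment boundary is declared wherever one component changes by more than 20 μT within a short window. The window length and the synthetic trace below are illustrative assumptions, not the authors' exact procedure:

```python
def segment_by_change(values, threshold=20.0, window=5):
    """Split a 1-D magnetometer component series into segments wherever
    the value changes by more than `threshold` (muT) within a sliding
    window of `window` samples. Returns (start, end) index pairs."""
    boundaries = [0]
    i = 0
    while i + window <= len(values):
        chunk = values[i:i + window]
        if max(chunk) - min(chunk) > threshold:
            boundaries.append(i + window)
            i += window          # skip past the detected change
        else:
            i += 1
    boundaries.append(len(values))
    return [(a, b) for a, b in zip(boundaries, boundaries[1:]) if b > a]

# Synthetic x-component trace: stable ~30 muT, then a jump to ~55 muT
mag_x = [30.0] * 20 + [55.0] * 20
segments = segment_by_change(mag_x)
```

On this trace the jump of 25 μT exceeds the threshold, so the series is split into two segments, matching the stable/changed alternation described above.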
The second kind of sensor is the gyroscope, which detects the angular velocity of a moving object and thereby allows turns to be recognized. In the process of turning, the absolute value of the angular velocity obtained by the gyroscope increases significantly, whereas during smooth walking the z-axis value of the gyroscope is nearly zero. The positions of left and right turns correspond to the peaks and troughs of the gyroscope data, respectively, as shown in Figure 3. Any simple moving-window algorithm can detect the peaks and troughs in the data, which can then be used to divide the target route into segments for constructing the evidence framework, such as the route segment set $L_{gyro} = \{L_1^{gyro}, L_2^{gyro}, L_3^{gyro}, L_4^{gyro}, L_5^{gyro}\}$ in Figure 3.
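A minimal moving-window peak/trough detector of the kind mentioned above might look as follows; the window size, the 1.0 rad/s magnitude threshold, and the synthetic trace are illustrative assumptions:

```python
def detect_turns(gyro_z, window=3, threshold=1.0):
    """Find indices where the z-axis angular velocity forms a local
    peak (left turn) or trough (right turn) whose magnitude exceeds
    `threshold`, using a simple moving-window comparison."""
    turns = []
    for i in range(window, len(gyro_z) - window):
        neighborhood = gyro_z[i - window:i + window + 1]
        v = gyro_z[i]
        if v == max(neighborhood) and v > threshold:
            turns.append((i, "left"))      # peak -> left turn
        elif v == min(neighborhood) and v < -threshold:
            turns.append((i, "right"))     # trough -> right turn
    return turns

# Smooth walking (~0 rad/s) with one left turn and one right turn
gyro_z = ([0.0] * 10 + [0.5, 1.8, 0.5] + [0.0] * 10
          + [-0.4, -1.6, -0.4] + [0.0] * 10)
turns = detect_turns(gyro_z)
```

The magnitude threshold keeps the flat walking stretches (where every sample ties for the window maximum at zero) from being reported as turns.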
The third kind of sensor is the light meter. Figure 4 illustrates an example of light intensity data within variable and stable environments, which can be distinguished by the first difference of the light intensities, and indicates the segments divided by light intensity. Five thresholds, $\{-250, 250\}$, $\{-200, 200\}$, $\{-150, 150\}$, $\{-100, 100\}$, and $\{-50, 50\}$, were trialed for the first difference of the light intensities; the experiment showed that a threshold of $\{-100, 100\}$ resulted in segmentation that was closest to reality. Each segment had a unique light characteristic, either fluctuation or stability. The route segment set $L_{light} = \{L_1^{light}, L_2^{light}, L_3^{light}, L_4^{light}\}$ is shown in Figure 4.
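The first-difference rule with the {−100, 100} threshold can be sketched as follows: each step is labeled variable or stable by its first difference, and consecutive equal labels are merged into segments. The lux values below are synthetic and only illustrate the labeling:

```python
from itertools import groupby

def segment_by_light(lux, threshold=100.0):
    """Label each step as 'variable' (|first difference| > threshold lux)
    or 'stable', then merge runs of equal labels into segments given as
    (start, end, label) over the difference indices."""
    diffs = [b - a for a, b in zip(lux, lux[1:])]
    labels = ["variable" if abs(d) > threshold else "stable" for d in diffs]
    segments = []
    for label, run in groupby(enumerate(labels), key=lambda t: t[1]):
        idxs = [i for i, _ in run]
        segments.append((idxs[0], idxs[-1] + 1, label))
    return segments

# Indoor stable light, a doorway transition, then outdoor stable light
lux = [120.0] * 5 + [400.0, 900.0, 1500.0] + [1500.0] * 5
segments = segment_by_light(lux)
```

The doorway transition produces one "variable" segment between two "stable" ones, mirroring the fluctuation/stability characteristics described for Figure 4.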
The remaining sensors are Wi-Fi and GPS, and the processes associated with them are very similar. A Wi-Fi signal has a certain coverage range, and the mobile phone can receive the signal anywhere within that range. Therefore, the Wi-Fi Media Access Control (MAC) addresses along the target route can be used to divide it into segments: each segment has a limited Wi-Fi MAC address set, and neighboring segments have different Wi-Fi MAC address sets. For example, the route segment set $L_{Wi\text{-}Fi} = \{L_1^{Wi\text{-}Fi}, L_2^{Wi\text{-}Fi}, L_3^{Wi\text{-}Fi}, L_4^{Wi\text{-}Fi}, L_5^{Wi\text{-}Fi}, L_6^{Wi\text{-}Fi}, L_7^{Wi\text{-}Fi}\}$ is shown in Figure 5. Similarly, the route can be divided into segments according to the changes in the latitude and longitude of the GPS data along the route; the route segment set $L_{lon\&lat} = \{L_1^{lon\&lat}, L_2^{lon\&lat}, L_3^{lon\&lat}, L_4^{lon\&lat}\}$ is shown in Figure 6.
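The Wi-Fi MAC-based division can be sketched by starting a new segment whenever the set of visible MAC addresses changes between consecutive scans; the MAC strings below are placeholders:

```python
def segment_by_wifi(scans):
    """Divide a sequence of Wi-Fi scans (each a set of visible MAC
    addresses) into segments; a new segment starts whenever the set of
    visible MACs changes. Returns (start, end) scan-index pairs."""
    segments = []
    start = 0
    for i in range(1, len(scans)):
        if scans[i] != scans[i - 1]:
            segments.append((start, i))
            start = i
    segments.append((start, len(scans)))
    return segments

# Hypothetical scans along a corridor: AP coverage changes twice
scans = [{"aa:01"}] * 3 + [{"aa:01", "bb:02"}] * 4 + [{"bb:02"}] * 2
segments = segment_by_wifi(scans)
```

A practical version would tolerate single-scan dropouts (an AP briefly missing from one scan) before declaring a boundary; that smoothing is omitted here for brevity.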
This study divides the target route into many tiny segments by intersecting the route segment sets of all sensors, and the result is viewed as the frame of discernment $\theta$. A schematic diagram of the derived framework is shown in Figure 7. Each tiny route segment has a characteristic combination of sensors, namely, $\theta = \{L_{Mag_x} \cap L_{Mag_y} \cap L_{Wi\text{-}Fi} \cap L_{Light} \cap L_{Gyro} \cap L_{Lon\&Lat}\}$. In this implementation, $\theta$ is represented as:
$$\theta_i = \left\{ \begin{aligned} &Mag_x = (Mag_{xs},\ Mag_{x\,ave|s=1},\ Mag_{x\,\max|s=2},\ Mag_{x\,\min|s=2},\ Mag_{x\,slope|s=2}),\\ &Mag_y = (Mag_{ys},\ Mag_{y\,ave|s=1},\ Mag_{y\,\max|s=2},\ Mag_{y\,\min|s=2},\ Mag_{y\,slope|s=2}),\\ &Gyro = (Gyro_s),\\ &Wi\text{-}Fi = (Wi\text{-}Fi_{MACNum},\ Wi\text{-}Fi_{MACname_1}, \ldots, Wi\text{-}Fi_{MACname_{Num}},\ Wi\text{-}Fi_{RSSI_1}, \ldots, Wi\text{-}Fi_{RSSI_{Num}}),\\ &Light = (Light_{\max},\ Light_{\min},\ Light_{ave}),\\ &Lon\&Lat = (Lon_{start},\ Lat_{start},\ Lon_{end},\ Lat_{end}) \end{aligned} \right\} \quad (5)$$
$$\theta = \{\theta_1, \theta_2, \ldots, \theta_i, \ldots, \theta_n\}$$
where $i$ is the serial number of a route segment in the target route, and $\theta_i$ is the discernment within route segment $i$. The status of the magnetometer data along the x- and y-axes, $Mag_{xs} = \{1, 2\}$ and $Mag_{ys} = \{1, 2\}$, is such that 1 indicates that the data are stable and 2 indicates an obvious change in the data. $Mag_{x\,ave|s=1}$ is the average of the x-component of the magnetometer data when the status is 1. $Mag_{x\,\max|s=2}$, $Mag_{x\,\min|s=2}$, and $Mag_{x\,slope|s=2}$ are the maximum, minimum, and slope of the magnetometer data in the x-orientation; $Mag_{y\,ave|s=1}$, $Mag_{y\,\max|s=2}$, $Mag_{y\,\min|s=2}$, and $Mag_{y\,slope|s=2}$ have the same meanings in the y-orientation. $Gyro_s$ is the walking status reflected by the gyroscope data, $Gyro_s = \{-1, 0, 1\}$, indicating turning left, walking straight ahead, and turning right, respectively. $Wi\text{-}Fi_{MACNum}$ and $Wi\text{-}Fi_{RSSINum}$ indicate the numbers of Wi-Fi MAC addresses and Wi-Fi RSSI values, respectively, along the route segment, and $Wi\text{-}Fi_{MACname_1}, \ldots, Wi\text{-}Fi_{MACname_{Num}}$ and $Wi\text{-}Fi_{RSSI_1}, \ldots, Wi\text{-}Fi_{RSSI_{Num}}$ are the corresponding lists. $Light_{\max}$, $Light_{\min}$, and $Light_{ave}$ represent the maximum, minimum, and average light values in the route segment. $Lon_{start}$, $Lat_{start}$, $Lon_{end}$, and $Lat_{end}$ are the longitude and latitude of the start and end points of the route segment.
The evidence framework determined above is based on a single data source and can, to a certain extent, produce incorrect segmentation results due to external interference. For example, when a temporarily parked vehicle is encountered while walking the target route, the magnetometer record fluctuates and generates redundant segment points. Likewise, when the data collector changes direction to avoid another pedestrian, the gyroscope record shows a peak or trough, which also affects the correctness of the framework. To eliminate such interference, this paper optimizes the framework by using multiple repeated data collections. Each sensor's data were collected $n_{rp}$ times and divided as $L_{Mag_x}^{all} = \{L_{Mag_x}^1, L_{Mag_x}^2, \ldots, L_{Mag_x}^{n_{rp}}\}, \ldots, L_{Lon\&Lat}^{all} = \{L_{Lon\&Lat}^1, L_{Lon\&Lat}^2, \ldots, L_{Lon\&Lat}^{n_{rp}}\}$ according to the above method. The common segmentations across these repeats form the optimized framework for each sensor. Figure 8 illustrates all frameworks of the gyroscope, where $L_{Gyro}^r$ is the segmentation result of the gyroscope in the $r$th collection, and $s$ is the distance between two adjacent median points of a given segment. Since the given GPS accuracy is about 15 m, two points less than 15 m apart are considered the same position; hence, this paper uses 15 m as the threshold to remove segment points whose median points are far from the other median points. Moreover, the characteristic of a section between two deleted segment points is assigned to be the same as that of its previous section; for example, the characteristic of the deleted section $\{c_2, d_2\}$ is a straight walk, identical to that of the section $\{b_2, c_2\}$. After this common-segmentation step, a framework suitable for the possible pedestrian navigation situations is obtained.
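The common-segmentation step can be sketched as follows: boundary positions (expressed as distances along the route, in meters) from repeated walks are grouped within the 15 m GPS tolerance, groups supported by too few walks are discarded as interference, and the median position of each surviving group is kept as the consensus boundary. The majority-support rule is our reading of "common" segment points, not a stated formula:

```python
def consensus_boundaries(runs, radius=15.0, min_support=None):
    """Merge segment-boundary positions (meters along the route) from
    repeated walks. Boundaries from different runs that fall within
    `radius` of each other are grouped; a group is kept only if it is
    supported by at least `min_support` runs (default: a majority)."""
    if min_support is None:
        min_support = len(runs) // 2 + 1
    points = sorted((pos, run_id)
                    for run_id, run in enumerate(runs) for pos in run)
    groups, group = [], [points[0]]
    for p in points[1:]:
        if p[0] - group[-1][0] <= radius:
            group.append(p)
        else:
            groups.append(group)
            group = [p]
    groups.append(group)
    kept = []
    for group in groups:
        support = len({run_id for _, run_id in group})
        if support >= min_support:
            positions = sorted(pos for pos, _ in group)
            kept.append(positions[len(positions) // 2])  # median position
    return kept

# Three repeated walks; the ~210 m boundary appears in only one run (noise)
runs = [[50.0, 120.0, 300.0],
        [48.0, 125.0, 210.0, 305.0],
        [55.0, 118.0, 298.0]]
boundaries = consensus_boundaries(runs)
```

The spurious 210 m boundary, supported by a single run, is dropped, while the three genuine boundaries survive with their median positions.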

3.3.2.2. Basic Belief Assignment

Co-existing relationships of evidence are common phenomena for the smartphone sensors used to locate pedestrians, as a result of similarities in activity behaviors and constraints in the navigation environment. The best evidence is usually a set of sensors with co-existing relationships that reflect the uniqueness of the actual scene in space and time. Therefore, the basic selection rule for co-existing relationships between sensors is that the sensor data feature can be readily detected by a simple filtering algorithm, such as a moving window, and that the combination of sensors, or the sensor features, in each route segment presents an obvious difference from the neighboring route segments. For smartphone sensors in pedestrian navigation environments, the data from the magnetometer, Wi-Fi, and GPS are independent environmental information, and the data obtained in each route segment can become evidence of a unique environment. The gyroscope mainly detects changes (or turns) in the journey through the environment, so the gyroscope can be added to the evidence framework when the real-time gyroscope data exhibit changing features. For example, Figure 9 shows an example of the co-existing relationship of sensor signals in a target route. Areas #1 and #2 are two subset locations of the framework, and the sensor data characteristics of the two subsets are listed in Table 1. In area #1, the x- and y-components of the magnetometer data show a sudden increase and decrease, respectively; the gyroscope shows features consistent with a right turn, and the light data are in keeping with features observed outdoors. Hence, there is a co-existing relationship among the magnetometer x- and y-components, the gyroscope z-component, and the light level, and the values of these four sensors in area #1 can be viewed as an evidence combination for defining the basic belief probability distribution.
Similarly, for area #2, the gyroscope does not have a co-existing relationship of evidence with the other sensors because it exhibits stable behavior. Therefore, the combination of evidence and the probability distribution only include the data of sensors such as the magnetometer x- and y-components, light, and Wi-Fi.
After selecting the co-existing sensor combination for each route segment, our proposed approach calculates the similarity of the real-time data of the co-existing sensors with the predefined evidence framework, according to the following Equations (6)–(15):
$$Sim_{Mag}(Mag_x, Mag_x') = \begin{cases} f(Mag_{x\,ave|s=1}, Mag_{x\,ave|s=1}'), & Mag_{xs} = Mag_{xs}' = 1 \\ \left[ f(Mag_{x\,\max|s=2}, Mag_{x\,\max|s=2}') + f(Mag_{x\,\min|s=2}, Mag_{x\,\min|s=2}') + f(Mag_{x\,slope|s=2}, Mag_{x\,slope|s=2}') \right] / 3, & Mag_{xs} = Mag_{xs}' = 2 \\ 0, & Mag_{xs} \neq Mag_{xs}' \end{cases} \quad (6)$$
$$f(x, x') = 1 - |x - x'| / x \quad (7)$$
$$Sim_{Gyro}(Gyro_s, Gyro_s') = \begin{cases} 1, & Gyro_s = Gyro_s' \\ 0, & Gyro_s \neq Gyro_s' \end{cases} \quad (8)$$
$$Sim_{Wi\text{-}Fi}(Wi\text{-}Fi, Wi\text{-}Fi') = Sim_{Name}(WF, WF') \times Sim_{RSSI}(WF \cap WF') \quad (9)$$
where
$$Sim_{Name}(WF, WF') = \frac{1}{2} f(Wi\text{-}Fi_{MACnum}, Wi\text{-}Fi_{MACnum}') + \frac{|WF \cap WF'|}{2\,|WF \cup WF'|} \quad (10)$$
$$Sim_{RSSI}(WF \cap WF') = \frac{w_{RSSI}}{n} \sum_{i \in WF \cap WF'} \left( 1 - \frac{|RSSI_i - RSSI_i'|}{|RSSI_i|} \right) \quad (11)$$
$$WF = \{Wi\text{-}Fi_{MACname_1}, \ldots, Wi\text{-}Fi_{MACname_{Num}}\}, \quad WF' = \{Wi\text{-}Fi_{MACname_1}', \ldots, Wi\text{-}Fi_{MACname_{Num'}}'\} \quad (12)$$
$$Sim_{Light}(Light, Light') = \left[ f(Light_{\min}, Light_{\min}') + f(Light_{\max}, Light_{\max}') + f(Light_{ave}, Light_{ave}') \right] / 3 \quad (13)$$
Here, $w_{RSSI}$ is the weight of the RSSI term relative to that of the Wi-Fi MAC addresses when matching the sensors with the predefined evidence framework, $n$ is the number of access points in $WF \cap WF'$, and $RSSI_i$ and $RSSI_i'$ are the RSSI measurements of Wi-Fi access point $i$ in the evidence framework and in the real-time data set, respectively.
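The elementwise similarity $f$ and the gyroscope and light similarities translate directly into code. In this sketch, $f$ clamps negative values to 0 and uses the absolute framework value as the denominator; both are our reading of Equation (7), not stated explicitly in the text, and the reference/live values are hypothetical:

```python
def f(x_ref, x_live):
    """Equation (7)-style relative similarity, 1 - |x - x'| / x, using
    the framework value as reference; clamped to [0, 1] (our choice)."""
    return max(0.0, 1.0 - abs(x_ref - x_live) / abs(x_ref))

def sim_gyro(status_ref, status_live):
    """Equation (8): 1 if the turn status (-1/0/1) matches, else 0."""
    return 1.0 if status_ref == status_live else 0.0

def sim_light(ref, live):
    """Equation (13): mean of the min/max/average similarities."""
    return (f(ref["min"], live["min"])
            + f(ref["max"], live["max"])
            + f(ref["ave"], live["ave"])) / 3.0

# Hypothetical light statistics for one route segment (lux)
ref = {"min": 100.0, "max": 300.0, "ave": 200.0}
live = {"min": 110.0, "max": 280.0, "ave": 210.0}
score = sim_light(ref, live)   # close to 1 for a good match
```

The same $f$ is reused by the magnetometer, Wi-Fi, and GPS similarities, so implementing it once keeps all the similarity scores on a common [0, 1] scale.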
The light sensor is more complex than other sensors in that the intensity of light varies greatly across time and climate. The most accurate match result can be obtained when the light condition of L i g h t is similar with the L i g h t . Hence, this paper introduces a light similarity method which considers the continuity of data. In this method, the intensities of light under different light conditions are processed according to Section 3.3.2.1. The light segment result can be obtained as L l i g h t = { { L 1 l i g h t 1 , , L n 1 l i g h t 1 } , , { L 1 l i g h t c , , L n c l i g h t c } } , c denotes the amount of light condition, such as day, night, sunny day, cloudy day, etc. L i l i g h t c i is the i t h section when l i g h t _ c o n d i t i o n = c i . Then, we calculate the S i m L i g h t of real-time data with L l i g h t according to Equation (13) in m times. The number of successful match results in different light conditions is represented as N u m i , i [ 1 , c ] 0 < N u m i < m . If l i g h t _ c o n d i t i o n = c i and N u m c i = M a x ( N u m ) , we regard the c i as the most similar light condition of real-time data. Finally, the segment route whose light condition is c i is selected.
Table 2 gives an example of the selection of a light segment. $Light_\Gamma$ represents the $\Gamma$-th real-time light data set, and $L^{light} \cap Light_\Gamma$ represents the match result of the segment data under different light conditions.
$$L^{light} \cap Light_\Gamma = \begin{cases} 0 & success \\ 1 & failure \end{cases}$$
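The light-condition selection described above can be sketched as follows. The `matches` predicate is a hypothetical stand-in for a thresholded $Sim_{Light}$ comparison; the segment values and threshold are illustrative, not the paper's data.

```python
# Sketch of the light-condition selection: match m real-time light readings
# against the segments recorded under each condition and pick the condition
# with the most successful matches (Num_ci = Max(Num)).
from collections import Counter

def select_light_condition(realtime_batches, segments_by_condition, matches):
    """Return the condition whose segments match the real-time data most often."""
    counts = Counter()
    for batch in realtime_batches:                 # m real-time trials
        for cond, segs in segments_by_condition.items():
            if any(matches(batch, s) for s in segs):
                counts[cond] += 1                  # one more success for cond
    return counts.most_common(1)[0][0]

# Hypothetical average intensities (lux) per condition and a toy predicate.
segments = {"day": [900.0, 1100.0], "night": [30.0, 60.0]}
readings = [950.0, 1050.0, 40.0]                   # m = 3 real-time readings
close = lambda r, s: abs(r - s) < 100              # stand-in for Sim_Light
print(select_light_condition(readings, segments, close))
```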
$$Sim_{Lon\&Lat}(Lon\&Lat, Lon\&Lat') = f(Lon_{start} - Lon_{end}, Lon'_{start} - Lon'_{end}) + f(Lat_{start} - Lat_{end}, Lat'_{start} - Lat'_{end})$$
where $Sim_{Mag}$, $Sim_{Gyro}$, $Sim_{Wi\text{-}Fi}$, $Sim_{Light}$ and $Sim_{Lon\&Lat}$ represent the similarities of the magnetometer, gyroscope, Wi-Fi, light, and latitude-longitude data, respectively. $Mag_x$, $Gyro$, $Wi\text{-}Fi$, $Light$ and $Lon\&Lat$ are the real-time sensor data that are checked to match the pedestrian's location with a route segment of the evidence framework. Therefore, for the real-time sensor data, this approach generates a similarity matrix as follows:
$$\begin{pmatrix} Sim_{Mag_x}^1 & Sim_{Mag_x}^2 & \cdots & Sim_{Mag_x}^n \\ Sim_{Mag_y}^1 & Sim_{Mag_y}^2 & \cdots & Sim_{Mag_y}^n \\ Sim_{Gyro}^1 & Sim_{Gyro}^2 & \cdots & Sim_{Gyro}^n \\ Sim_{Wi\text{-}Fi}^1 & Sim_{Wi\text{-}Fi}^2 & \cdots & Sim_{Wi\text{-}Fi}^n \\ Sim_{Light}^1 & Sim_{Light}^2 & \cdots & Sim_{Light}^n \\ Sim_{Lon\&Lat}^1 & Sim_{Lon\&Lat}^2 & \cdots & Sim_{Lon\&Lat}^n \end{pmatrix}$$
where $n$ is the number of elements in the frame of discernment.
To improve the error tolerance of the matching process, this study finds the five most similar discernments in the frame of evidence using Equations (6)–(15) and ranks them with levels 1–5. Table 3 lists the values 1–5 and their corresponding evaluation characteristics.
$$\begin{matrix} Sim_{Mag_x} \\ Sim_{Mag_y} \\ Sim_{Gyro} \\ Sim_{Wi\text{-}Fi} \\ Sim_{Light} \\ Sim_{Lon\&Lat} \end{matrix} \begin{pmatrix} 1 & 5 & 2 & 3 & 4 \\ 3 & 2 & 1 & 4 & 5 \\ 5 & 4 & 1 & 2 & 3 \\ 2 & 3 & 1 & 4 & 5 \\ 4 & 1 & 3 & 5 & 2 \\ 1 & 4 & 5 & 2 & 3 \end{pmatrix} \rightarrow \begin{pmatrix} 0.44 & 0.08 & 0.22 & 0.14 & 0.11 \\ 0.14 & 0.22 & 0.44 & 0.11 & 0.08 \\ 0.08 & 0.11 & 0.44 & 0.22 & 0.14 \\ 0.22 & 0.14 & 0.44 & 0.11 & 0.08 \\ 0.11 & 0.44 & 0.14 & 0.08 & 0.22 \\ 0.44 & 0.11 & 0.08 & 0.22 & 0.14 \end{pmatrix}$$
Equation (17) gives an example of a similarity matrix with six discernments (left part) and its probability matrix $m(S)$ (right part).
This study uses Equation (18) to calculate the probabilities of the five most similar discernments. These probabilities are used to build the basic belief assignment of each sensor for the real-time sensor data.
$$m(S_i) = \frac{(V_i)^{-1}}{\sum_{i=1}^{5} (V_i)^{-1}}$$
where $m(S_i)$ denotes the probability value of the focal element whose similarity level is $i$.
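Equation (18) can be checked numerically; this sketch simply normalizes the reciprocals of the similarity levels $V_i = 1, \ldots, 5$.

```python
# Sketch of Equation (18): the five most similar discernments, ranked
# V_i = 1..5, receive basic probabilities proportional to 1/V_i.

def rank_probabilities(levels=range(1, 6)):
    inv = [1.0 / v for v in levels]
    total = sum(inv)                 # normalizing denominator of Eq. (18)
    return [x / total for x in inv]

probs = rank_probabilities()
print([round(p, 2) for p in probs])  # [0.44, 0.22, 0.15, 0.11, 0.09]
```

Note that the values 0.14 and 0.08 listed in the text for levels 3 and 5 appear to be truncations of 0.146 and 0.088 rather than rounded values of the same ratios.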
By combining the co-existing rules of evidence, this study proposes an improved evidence combination formula as follows.
$$M(A) = \begin{cases} \dfrac{1}{K} \displaystyle\sum_{\cap A_i = A} \prod_{1 \le i \le n} m_i(A_i) & A \ne \phi, \ F_i \ne 0 \\ 0 & A = \phi \end{cases}$$
where $K$ is a normalization constant, $m_i$ is the BPA of evidence $i$, and $A$ is a subset of the evidence framework. $F_i \ne 0$ means that only the evidence $i$ whose characteristics change is used in the basic belief assignment and evidence combination.
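A minimal sketch of the normalized combination above for two pieces of evidence follows. The frozensets stand for the subsets $A_i$ of the frame; computing $K$ as one minus the conflicting mass is an assumed Dempster-style normalization, consistent with $K$ being a normalization constant but not necessarily the authors' exact rule.

```python
# Sketch of a normalized evidence combination for two mass functions:
# sum products of masses over intersecting focal sets, then divide by K
# (assumed here to be 1 minus the mass falling on the empty set).

def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a1, p1 in m1.items():
        for a2, p2 in m2.items():
            inter = a1 & a2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p1 * p2
            else:
                conflict += p1 * p2      # mass assigned to the empty set
    k = 1.0 - conflict                    # normalization constant K
    return {a: p / k for a, p in combined.items()}

# Hypothetical BPAs over a frame with segments S1, S2.
m1 = {frozenset({"S1"}): 0.6, frozenset({"S1", "S2"}): 0.4}
m2 = {frozenset({"S1"}): 0.7, frozenset({"S2"}): 0.3}
out = combine(m1, m2)
print(out[frozenset({"S1"})])
```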

3.3.2.3. Conflict of Evidence Processing

In the process of evidence combination, two factors affect the accuracy of sensor data: one is data latency, the other is external interference. Conflicts of evidence may therefore arise when matching sensor data against the evidence, so it is necessary to deal with this issue during evidence combination. Several approaches have been proposed to manage conflicts in D-S evidence theory, such as averaging [65], combining conflicting evidence [66], and weighting evidence (evidence pretreatment) [68,69]. Assigning a small probability value to the frame of discernment is a practical operation when determining the basic belief assignment. Therefore, this study combines the Yager combination rules [67] with weighted evidence to handle conflicts of evidence.
Weighting evidence is crucial to improving the accuracy of the evidence combination results under different evidence association rules. Weights are usually assigned subjectively, or can be calculated objectively when historical data sets are available [70]. In this study, the sensor data sets of the target route are precollected to build the frame of discernment; hence, they can be preprocessed with objective calculations to obtain the evidence weights before the evidence is combined. The detailed process of determining a sensor's weight is described as follows:
Figure 10 illustrates an example of calculating the matching errors for assigning the weight of sensor $i$. Let $N_s$ denote the number of sensors, $q$ the number of times the historical data set was collected along the target route, $\theta$ the frame of discernment of the target route, and $u$ the number of labelled points. The matching error can be estimated by comparing the locations of the labelled points with the corresponding matched route segments. $L_{j\omega}^i$ indicates the route segment matched by sensor $i$ at the $\omega$-th collection for labelled point $j$. This study assigns the weight of sensor $i$ according to a basic rule: the smaller the matching error of a sensor, the higher its weight. Therefore, the weight of sensor $i$ is defined as follows:
$$w_i = \frac{\left( \frac{1}{uq} \sum_{j=1}^{u} \sum_{\omega=1}^{q} D_{j\omega}^i \right)^{-1}}{\sum_{i=1}^{N_s} \left( \frac{1}{uq} \sum_{j=1}^{u} \sum_{\omega=1}^{q} D_{j\omega}^i \right)^{-1}}$$
$$D_{j\omega}^i = \left| Loc_j - \left( \theta_{start}^{L_{j\omega}^i} + \theta_{end}^{L_{j\omega}^i} \right) / 2 \right|$$
where $Loc_j$ is the location of labelled point $j$ in $\theta$, and $\theta_{start}^{L_{j\omega}^i}$ and $\theta_{end}^{L_{j\omega}^i}$ represent the start and end locations of $L_{j\omega}^i$. For example, in Figure 10, the calculated distances between the matched points and labelled point 1 are $D_{11}, D_{12}, \ldots, D_{1q-1}, D_{1q}$. Here, with $u = 3$, $\frac{1}{uq}\sum_{j=1}^{u}\sum_{\omega=1}^{q} D_{j\omega}^i = [(D_{11} + D_{12} + \cdots + D_{1q}) + (D_{21} + D_{22} + \cdots + D_{2q}) + (D_{31} + D_{32} + \cdots + D_{3q})]/3q$ for sensor $i$.
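Equations (20) and (21) reduce to a normalized inverse of each sensor's mean matching error; a sketch with hypothetical error values:

```python
# Sketch of Equation (20): each sensor's weight is the normalized inverse
# of its mean matching error over u labelled points and q repeats.
# errors[i] is a u x q grid of distances D_{j,omega}^i (hypothetical values).

def sensor_weights(errors):
    mean_err = [sum(map(sum, e)) / (len(e) * len(e[0])) for e in errors]
    inv = [1.0 / m for m in mean_err]      # smaller error -> larger inverse
    total = sum(inv)
    return [x / total for x in inv]        # weights sum to 1

# Two sensors, u = 2 labelled points, q = 3 repeats.
errs = [
    [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]],   # sensor with mean error 2 m
    [[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]],   # sensor with mean error 4 m
]
print(sensor_weights(errs))                # smaller error gets the larger weight
```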
Then, this study improves the evidence combination formula by employing two strategies. The first is to assign a small probability to the whole frame $\theta$, which makes the inconsistency of evidence negligible [67]. The other is to weight the sensors contributing the evidence, which reflects the influence of each sensor on the estimated results. Therefore, the improved evidence combination formula can be further defined as
$$M(A) = \begin{cases} \dfrac{1}{K} \displaystyle\sum_{\cap A_i = A} \prod_{1 \le i \le n} m_i(A_i)\, w_i = \dfrac{1}{K} \displaystyle\sum_{\cap A_i = A} \prod_{1 \le i \le n} m_i(A_i) \frac{\left( \frac{1}{uq} \sum_{j=1}^{u} \sum_{\omega=1}^{q} D_{j\omega}^i \right)^{-1}}{\sum_{i=1}^{N_s} \left( \frac{1}{uq} \sum_{j=1}^{u} \sum_{\omega=1}^{q} D_{j\omega}^i \right)^{-1}} & A \ne \phi, \ F_i \ne 0 \\ 0 & A = \phi \end{cases}$$
where w i denotes the weight of evidence i .

4. Experiments

This section introduces the experimental environment and the data collected for a target route. Then, it gives the calculated frame of evidence of the improved approach, which is similar to system training. Finally, it demonstrates the advantages of the proposed approach by comparing the matched results with those obtained by the traditional D-S theory, GPS locations outdoors, and Wi-Fi locations indoors.

4.1. Experimental Environment and Data

The experiment was conducted on the campus of Wuhan University, and a business card route for pedestrians from the entrance of Wuhan University to the No. 2 School Building was selected (see Figure 11). This study collected the sensor data of the route on 10 occasions using a data acquisition application written by the authors, running on the Android operating system of a VIVO X6 smartphone. The first collection was used to build the frame of discernment, while the data collected on the other nine occasions were used to calculate the sensor weights in the proposed approach. In addition, this study selected nine labelled points to check the matching error of the proposed approach. The nine labelled points are waypoints along the trajectory; six are located outdoors and the other three indoors. During data acquisition, sensor data were sampled at a frequency of 20 Hz, and the collected data were stored as a separate .txt file for each repetition. The collected files were processed in MATLAB.
The data format collected in this study is listed in Table 4:
Here, RSSI is the received signal strength indicator, a measure of how well a device can receive a signal from an access point or router. Because RSSI values fluctuate considerably over the same distance and attenuate with distance, this study used the radio propagation model [71] to estimate the relationship between signal strength and distance. To reduce the influence of RSSI measurements on the evidence framework, this study set a relatively low weight (e.g., $w_{RSSI}$ = 1/10) for the RSSI values of the Wi-Fi MAC evidence.
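Radio propagation models such as the one cited above [71] are commonly taken in the log-distance path-loss form; the sketch below uses that form, with a reference power at 1 m and a path-loss exponent that are illustrative assumptions, not values from the paper.

```python
# Sketch of a log-distance path-loss model relating RSSI to distance:
# RSSI(d) = RSSI_1m - 10 * n * log10(d). The constants are illustrative.
import math

def rssi_to_distance(rssi, rssi_at_1m=-40.0, n=2.5):
    """Invert the path-loss model to estimate distance d in metres."""
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * n))

print(rssi_to_distance(-40.0))   # 1.0 m at the reference power
print(rssi_to_distance(-65.0))   # 10.0 m: a 25 dB drop with n = 2.5
```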

4.2. Implementation of the Proposed Approach

Figure 12 illustrates the data of the target route from the first collection. The red points in Figure 12a–e represent the segmentation points of sensor data. Figure 12f shows the constructed frame of discernment for the proposed approach.
In addition, five pieces of data collected in the daytime and the same number collected at nighttime were selected to examine the accuracy of the proposed light condition selection method based on the continuity of light. In Table 5, daytime and nighttime are the two light conditions defined in our framework. As shown in Table 5, the accuracy reaches 100%.
Based on the determined framework, this study calculated the basic belief assignment of each sensor in the frame of discernment shown in Table 6. In this table, $A_1, \ldots, A_n$ are subsets of the evidence framework $\theta$. The basic belief assignments of each sensor for the sub-proposition $m\{A_n\}$ are listed, calculated according to the five most similar sensor data. Following the approach of Murphy [66] for combining belief functions when conflicts of evidence occur, setting $m\{\theta\} = 0.01$ helps to avoid failure or error caused by conflicts of evidence. The initial probabilities of the five most similar discernments were defined to be 0.44, 0.22, 0.14, 0.11, and 0.08, as explained in the methodology (see Section 3.3.2.2).
Using the repeated data from collections 2 to 10, this methodology calculates the distribution of weights of co-existing evidence by the rule that a sensor with a smaller average error receives a greater weight. Because of the repeatable characteristics of sensors such as the magnetometer, Wi-Fi, light and GPS along the target route, and the two clear states (changing and unchanging characteristics) of the gyroscope, this study classifies the co-existing situations of sensors into two classes, namely, co-existing situations with and without the gyroscope. Table 7 gives the calculated weights for each sensor in the two co-existing situations using the repeated data. GPS is not used in segments where the GPS signal does not change over a period of time. In this study, the default time threshold is 4 s, which is flexible and can be customized by users.

4.3. Experimental Results and Comparative Analysis

To estimate the performance of the proposed approach in terms of match success rate and positioning error, this study selected nine (black) labelled points along the target route shown in Figure 11 and recorded their actual locations manually when collecting the sensor data.

4.3.1. Comparison of Match Success Rate

The match success rate is the ratio of successful match cases among all experimental cases. Here, the match success rate is calculated as:
$$MSR = \frac{\sum_{i=2}^{10} MS_i}{n}, \quad MS_i = \begin{cases} 1 & success \\ 0 & failure \end{cases}$$
where $MSR$ is the match success rate, $n$ ($= 9$) is the number of repeated collections, and $MS_i$ indicates the match status of collection $i$.
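Equation (22) can be sketched directly; the match statuses below are hypothetical, not values from Table 8.

```python
# Sketch of Equation (22): the match success rate for one labelled point
# over collections 2-10 (n = 9 trials); 1 = successful match, 0 = failure.

def match_success_rate(ms):
    return sum(ms) / len(ms)

ms = [1, 1, 0, 1, 1, 1, 0, 1, 1]   # hypothetical statuses for collections 2-10
print(match_success_rate(ms))
```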
Table 8 gives the match success rates of the nine labelled points along the target route. In this table, #1–#9 indicate the labelled points, and ①–⑩ are the serial numbers of the data collections, each of which is a complete collection of sensor data from the origin to the end of the target route. The 0 or 1 in the table denotes the match status of the corresponding approach, where 0 represents a match failure and 1 a successful match.
From Table 8, the proposed approach achieves a 100% match success rate for all nine labelled points, while the average match success rate of the traditional D-S theory is about 40%. This finding demonstrates the approach's performance under conflicts of evidence and verifies the feasibility of the proposed method. The proposed method uses only the characteristics of the invisible landmarks to locate pedestrians. Therefore, it is robust against changes in speed and in the time elapsed between invisible landmarks: no matter whether the distance between two gyroscope events is short or long, the framework section in the intermediate time is always regarded as the straight section, and the method considers the pedestrian to still be between the two invisible landmarks.

4.3.2. Positioning Error of Results

To compare positioning errors, this study uses multiple sensors plus GPS outdoors and multiple sensors plus Wi-Fi indoors. The positioning errors of these approaches, i.e., the differences between the matched route segments and the actual locations, were evaluated at the nine labelled points. Table 9 and Table 10 present the positioning errors in the outdoor (see Figure 11) and indoor environments, respectively. The average positioning error of the proposed approach is less than 5 m outdoors and less than 3 m indoors, which meets the positioning needs of pedestrian navigation. Furthermore, the mean positioning errors are smaller than those obtained using GPS or Wi-Fi alone. To estimate the extent of this reduction, this study defines the reduced percentage ($RP$) of positioning errors of the proposed approach compared with GPS as follows:
$$RP = \frac{E_{gps} - E_{pDS}}{E_{gps}} \times 100\%$$
where $E_{gps}$ and $E_{pDS}$ denote the positioning errors of GPS and of the proposed D-S approach, respectively. The reduced percentages for the six labelled points outdoors are between 15.9% and 54.4% (see Table 9); the equivalents for the three labelled points indoors are 10.1% to 62.6%. Clearly, the results for both the indoor and outdoor environments demonstrate improved positioning accuracy.
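Equation (23) in code, with illustrative error values rather than the paper's measurements:

```python
# Sketch of Equation (23): reduced percentage of positioning error of the
# proposed approach relative to a baseline (GPS outdoors, Wi-Fi indoors).

def reduced_percentage(e_baseline, e_proposed):
    return (e_baseline - e_proposed) / e_baseline * 100.0

print(reduced_percentage(8.0, 4.0))   # halving the error gives RP = 50%
```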
Figure 13 illustrates the average positioning errors at all nine labelled points obtained by the proposed approach, GPS and Wi-Fi. Among them, the positioning accuracy at turning positions is better than at other positions, which may be because sensors such as gyroscopes and magnetometers have obvious characteristics when turning. Furthermore, the synthesized results of the proposed approach are clearly better than those derived from GPS or Wi-Fi alone. However, there is a large positioning error at point #6, probably because the path around point #6 is severely blocked by trees, and the weight of GPS is higher than that of the other sensors in the outdoor environment. Furthermore, the intersection area around point #6 is large, and the gyroscope and magnetometer do not show obvious changing characteristics there. This indicates the need to reduce positioning error through the use of more sensitive evidence.

5. Conclusions

In this study, the changing characteristics and combinations of various sensors' data in smartphones or navigation devices are viewed as invisible salient landmarks for a predefined business card route of pedestrian navigation. This study introduces an improved Dempster–Shafer theory of evidence to find invisible landmarks along predesigned pedestrian routes without using digital maps, by integrating the co-existing phenomenon of sensors' signal change characteristics. This approach differs from fingerprint-based localization and navigation applications by focusing on combinational features of signal changes rather than on individual signal changes. Moreover, it integrates a proposed similarity measure between the real-time data of co-existing sensors and a predefined evidence framework for refining the basic belief assignment in the theory of evidence, and a match-error-based sensor weight assignment approach for handling conflicts of evidence. Furthermore, this paper optimizes the framework for possible pedestrian navigation situations by using multiple repeated data sources to eliminate temporally incorrect segment results, e.g., the external interference experienced when encountering a parked vehicle or avoiding pedestrians. In this approach, the frame of discernment is built from the path division given by the sensors' characteristics, and the features of a pedestrian's real-time sensor data are extracted and matched against the framework. As a result, this study improves the evidence theory to fuse the matching results of each sensor and infer a pedestrian's location. The proposed approach was tested in a real pedestrian navigation environment. The experimental results show that the proposed approach achieves a 100% match success rate for all labelled points, while the average match success rate of the traditional D-S theory is about 40%.
Compared with GPS or Wi-Fi alone in the study area, this approach also improved positioning accuracy by 15.9% to 54.4% for the labelled points outdoors and by 10.1% to 62.6% for those indoors. These experiments demonstrate that the proposed approach outperforms approaches based on GPS or Wi-Fi alone in the study area. Also, this approach can be applied seamlessly both indoors and outdoors by newcomers following predesigned business card routes.
There is scope for further research following this proposed approach. The first avenue is to combine mapless pedestrian navigation with digital maps, aiming to make full use of both visual and invisible landmarks within pedestrian navigation environments. The second avenue is to improve the positioning sensors or create an adaptive, smartphone-dependent sensor weight assignment method. The third avenue is to produce invisible-landmark-based navigation maps and integrate these invisible landmarks into current landmark- or point-of-interest-based pedestrian navigation data models. The fourth avenue is to develop comparative navigation in future work.

Author Contributions

Conceptualization, Z.F.; Formal analysis, H.X. and S.-L.S.; Methodology, Z.F., Y.J., L.L. and X.G.; Investigation, H.X.; Validation, H.X. and L.L.; Visualization, Y.J., L.L. and X.G.; Writing—original draft, Z.F. and H.X.; Writing—review & editing, S.-L.S.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 41771473, 41371420 and 41231171, and the innovative research funding of Wuhan University, grant number 2042015KF0167.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fang, Z.X.; Li, Q.Q.; Shaw, S.L. What about people in pedestrian navigation? Geo-Spat. Inf. Sci. 2015, 18, 135–150. [Google Scholar] [CrossRef]
  2. Torres-Sospedra, J.; Jiménez, A.R.; Knauth, S.; Moreira, A.; Beer, Y.; Fetzer, T.; Ta, V.C.; Montoliu, R.; Seco, F.; Mendoza-Silva, G.M.; et al. The Smartphone-Based Offline Indoor Location Competition at IPIN 2016: Analysis and Future Work. Sensors 2017, 17, 557. [Google Scholar] [CrossRef] [PubMed]
  3. Zhou, B.; Li, Q.; Mao, Q.; Tu, W. A Robust Crowdsourcing-based Indoor Localization System. Sensors 2017, 17, 864. [Google Scholar] [CrossRef] [PubMed]
  4. Jiao, J.; Li, F.; Deng, Z.; Ma, W. A Smartphone Camera-Based Indoor Positioning Algorithm of Crowded Scenarios with the Assistance of Deep CNN. Sensors 2017, 17, 704. [Google Scholar] [CrossRef] [PubMed]
  5. Guo, S.; Xiong, H.; Zheng, X.; Zhou, Y. Activity Recognition and Semantic Description for Indoor Mobile Localizaiton. Sensors 2017, 17, 649. [Google Scholar] [CrossRef] [PubMed]
  6. Hernández, N.; Ocaña, M.; Alonso, J.M.; Kim, E. Continuous Space Estimation: Increasing WiFi-Based Indoor Localization Resolution without Increasing the Site-Survey Effort. Sensors 2017, 17, 147. [Google Scholar] [CrossRef] [PubMed]
  7. Fang, Z.; Li, Q.; Zhang, X.; Shaw, S.-L. A GIS data model for landmark-based pedestrian navigation. Int. J. Geogr. Inf. Sci. 2012, 26, 817–838. [Google Scholar] [CrossRef]
  8. Fang, Z.; Li, Q.; Zhang, X. A multiobjective model for generating optimal landmark sequences in pedestrian navigation applications. Int. J. Geogr. Inf. Sci. 2011, 25, 785–805. [Google Scholar] [CrossRef]
  9. Ishikawa, T.; Fujiwara, H.; Imai, O.; Okabe, A. Wayfinding with a GPS-based mobile navigation system: A comparison with maps and direct experience. J. Environ. Psychol. 2008, 28, 74–82. [Google Scholar] [CrossRef]
  10. Brédif, M. Image-based rendering of LOD1 3D city models for traffic-augmented immersive street-view navigation. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey, 12–13 November 2013; pp. 7–11. [Google Scholar]
  11. Duckham, M.; Winter, S.; Robinson, M. Including Landmarks in Routing Instructions. J. Locat. Based Serv. 2010, 4, 28–52. [Google Scholar] [CrossRef]
  12. Hile, H.; Vedantham, R.; Cuellar, G.; Liu, A.; Gelfand, N.; Grzeszczuk, R.; Borriello, G. Landmark-Based Pedestrian Navigation from Collections of Geotagged Photos. In Proceedings of the 7th International Conference on Mobile and Ubiquitous Multimedia, Umea, Sweden, 3–5 December 2008; pp. 145–152. [Google Scholar]
  13. Bahl, P.; Padmanabhan, V.N. RADAR: An in-building RF-based user location and tracking system. In Proceedings of the Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2000), Tel Aviv, Israel, 26–30 March 2000; Volume 2, pp. 775–784. [Google Scholar]
  14. Olivera, V.M.; Plaza, J.M.C.; Serrano, O.S. WiFi localization methods for autonomous robots. Robotica 2006, 24, 455–461. [Google Scholar] [CrossRef]
  15. Chen, Z.; Zou, H.; Jiang, H.; Zhu, Q.; Soh, Y.; Xie, L. Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization. Sensors 2015, 15, 715–732. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Evennou, F.; Marx, F. Advanced integration of WiFi and inertial navigation systems for indoor mobile positioning. Eurasip J. Appl. Signal Process. 2006, 2006, 164. [Google Scholar] [CrossRef]
  17. Bargh, M.S.; de Groote, R. Indoor localization based on response rate of bluetooth inquiries. In Proceedings of the First ACM International Workshop on Mobile Entity Localization and Tracking in GPS-Less Environments, San Francisco, CA, USA, 19 September 2008; pp. 49–54. [Google Scholar]
  18. Hur, T.; Bang, J.; Kim, D.; Banos, O.; Lee, S. Smartphone location independent physical activity recognition based on transportation natural vibration analysis. Sensors 2017, 17, 931. [Google Scholar] [CrossRef] [PubMed]
  19. Diaz, E.M.; Caamano, M.; Sánchez, F.J.F. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation. Sensors 2017, 17, 1555. [Google Scholar] [CrossRef] [PubMed]
  20. Fang, Z.; Luo, H.; Li, L. A Finite State Machine Aided Pedestrian Navigation State Matching Algorithm. Acta Geod. Cartogr. Sin. 2017, 46, 371–380. [Google Scholar] [CrossRef]
  21. Roger, M.; Bonnardel, N.; Le Bigot, L. Landmarks’ use in Speech Map Navigation Tasks. J. Environ. Psychol. 2011, 31, 192–199. [Google Scholar] [CrossRef]
  22. Caduff, D.; Timpf, S. On the Assessment of Landmark Salience for Human Navigation. Cognit. Process. 2008, 9, 249–267. [Google Scholar] [CrossRef] [PubMed]
  23. Raubal, M.; Winter, S. Enriching Wayfinding Instructions with Local Landmarks. In Proceedings of the Second International Conference on Geographic Information Science, Boulder, CO, USA, 25–28 September 2002. [Google Scholar]
  24. Pazzaglia, F.; De Beni, R. Strategies of Processing Spatial Information in Survey and Landmark-Centred Individuals. Eur. J. Cognit. Psychol. 2001, 13, 493–508. [Google Scholar] [CrossRef]
  25. Spiers, H.; Maguire, E. The Dynamic Nature of Cognition during Wayfinding. J. Environ. Psychol. 2008, 28, 232–249. [Google Scholar] [CrossRef] [PubMed]
  26. Commiteri, G.; Galati, G.; Paradis, A.L.; Pizzamiglio, L.; Berthoz, A.; LeBihan, D. Reference frames for spatial cognition: Different brain areas are involved in viewer-, object-, and landmark-centered judgments about object location. J. Cognit. Neurosci. 2004, 16, 1517–1535. [Google Scholar] [CrossRef] [PubMed]
  27. Gothard, K.M.; Skaggs, W.E.; Moore, K.M.; McNaughton, B.L. Binding of Hippocampal CA1 to multiple reference frames in a landmark-based navigation task. J. Neurosci. 1996, 16, 823–835. [Google Scholar] [CrossRef] [PubMed]
  28. Gillner, S.; Weiß, A.M.; Mallot, H.A. Visual homing in the absence of feature-based landmark information. Cognition 2008, 109, 105–122. [Google Scholar] [CrossRef] [PubMed]
  29. Foo, P.; Warren, W.H.; Duchon, A.; Tarr, M. Do Humans Integrate Routes Into a Cognitive Map? Map-Versus Landmark-Based Navigation of Novel Shortcuts. J. Exp. Psychol. 2005, 31, 195–215. [Google Scholar] [CrossRef] [PubMed]
  30. Chersi, F.; Pezzulo, G. Using Hippocampal-Striatal Loops for Spatial Navigation and Goal-Directed Decision-Making. Cognit. Process. 2012, 13, 125–129. [Google Scholar] [CrossRef] [PubMed]
  31. Jankowski, P.; Andrienko, N.; Andrienko, G.; Kisilevich, S. Discovering Landmark Preferences and Movement Patterns from Photo Postings. Trans. GIS 2010, 14, 833–852. [Google Scholar] [CrossRef] [Green Version]
  32. Brenner, C.; Elias, B. Extracting landmarks for car navigation systems using existing GIS database and laser scanning. Transp. Policy 2004, XXXIV, 78–87. [Google Scholar]
  33. Raguram, R.; Wu, C.; Frahm, J.M.; Lazebnik, S. Modeling and Recognition of landmark image collection using iconic scene graphs. Int. J. Comput. Vis. 2011, 95, 213–239. [Google Scholar] [CrossRef]
  34. Bonnifait, P.; Jabbour, M.; Cherfaoui, V. Autonomous navigation in urban areas using GIS-managed information. Int. J. Veh. Auton. Syst. 2008, 6, 84–103. [Google Scholar] [CrossRef]
  35. Lee, J. A Three-Dimensional Navigable Data Model to Support Emergency Response in Microspatial Built-Environments. Ann. Assoc. Am. Geogr. 2007, 97, 512–529. [Google Scholar] [CrossRef]
  36. Pugliesi, E.A.; Decanini, M.M.S.; Tachibana, V.M. Evaluation of the Cartographic Communication Performance of a Route Guidance and Navigation System. Cartogr. Geogr. Inform. Sci. 2009, 36, 193–207. [Google Scholar] [CrossRef]
  37. Golledge, R.G.; Klatzky, R.L.; Loomis, J.M.; Speigle, J.; Tietz, J. A Geographical Information System for a GPS Based Personal Guidance System. Int. J. Geogr. Inf. Sci. 1998, 12, 727–749. [Google Scholar] [CrossRef]
  38. Kaiser, S.; Khider, M.; Robertson, P. A Pedestrian Navigation System Using a Map-Based Angular Motion Model for Indoor and Outdoor Environments. J. Locat. Based Serv. 2013, 7, 44–63. [Google Scholar] [CrossRef]
  39. Ruotsalainen, L.; Kuusniemi, H.; Bhuiyan, M.Z.H. A Two-Dimensional Pedestrian Navigation Solution Aided with a Visual Gyroscope and a Visual Odometer. GPS Solut. 2013, 17, 575–586. [Google Scholar] [CrossRef]
  40. Rehrl, K.; Häusler, E.; Leitinger, S.; Bell, D. Pedestrian Navigation with Augmented Reality, Voice and Digital Map: Final Results from an In Situ Field Study Assessing Performance and User Experience. J. Locat. Based Serv. 2014, 8, 75–96. [Google Scholar] [CrossRef]
  41. Bartie, P.; Mackaness, W.; Petrenz, P.; Dickinson, A. Identifying related landmark tags in urban scenes using spatial and semantic clustering. Comput. Environ. Urban. 2015, 52, 48–57. [Google Scholar] [CrossRef] [Green Version]
  42. Rous, M.; Lupschen, H.; Kraiss, K.F. Vision-based indoor scene analysis for natural landmark detection. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005. [Google Scholar] [CrossRef]
  43. Zhang, X.; Li, Q.Q.; Fang, Z.; Lu, S.; Shaw, S.L. An assessment method for landmark recognition time in real scenes. J. Environ. Psychol. 2014, 40, 206–217. [Google Scholar] [CrossRef]
  44. Zheng, J.Y.; Tsuji, S. Generating dynamic projection images for scene representation and understanding. Comput. Vis. Image Underst. 1998, 72, 237–256. [Google Scholar] [CrossRef]
  45. Li, Y.; Crandall, D.J.; Huttenlocher, D.P. Landmark classification in large-scale image collections. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision (ICCV), Kyoto, Japan, 29 September–2 October 2009; pp. 1957–1964. [Google Scholar] [CrossRef]
  46. Li, R.; Korda, A.; Radtke, M.; Schwering, A. Visualising Distant off-Screen Landmarks on Mobile Devices to Support Spatial Orientation. J. Locat. Based Serv. 2014, 8, 166–178. [Google Scholar] [CrossRef]
  47. Wang, E.; Yan, W. iNavigation: An Image Based Indoor Navigation System. Multimed. Tools Appl. 2014, 73, 1597–1615. [Google Scholar] [CrossRef]
  48. Want, R.; Hopper, A.; Falcão, V.; Gibbons, J. The Active Badge Location System. ACM Trans. Inf. Syst. 1992, 10, 91–102. [Google Scholar] [CrossRef]
  49. Priyantha, N.B.; Chakraborty, A.; Balakrishnan, H. The Cricket Location-support System. In Proceedings of the Annual International Conference on Mobile Computing and Networking, Boston, MA, USA, 6–11 August 2000; pp. 32–43. [Google Scholar]
  50. Fontana, R.J. Recent system applications of short-pulse ultra-wideband (UWB) technology. IEEE T. Microw. Theory 2004, 52, 2087–2104. [Google Scholar] [CrossRef]
  51. Barber, R.; Mata, M.; Boada, M.J.L.; Armingol, J.M.; Salichs, M.A. A perception system based on laser information for mobile robot topologic navigation. In Proceedings of the IEEE 2002 28th Annual Conference of the IEEE Industrial Electronics Society, Sevilla, Spain, 5–8 November 2002; Volume 4, pp. 2779–2784. [Google Scholar]
  52. Mok, E.; Yuen, K.Y. A Study on the Use of Wi-Fi Positioning Technology for Wayfinding in Large Shopping Centers. Asian Geogr. 2013, 30, 55–64. [Google Scholar] [CrossRef]
  53. Huang, Q.; Zhang, Y.; Ge, Z.; Lu, C. Refining Wi-Fi based indoor localization with Li-Fi assisted model calibration in smart buildings. In Proceedings of the 2016 International Conference on Computing in Civil and Building Engineering, Osaka, Japan, 6–8 July 2016; pp. 1–8. [Google Scholar]
  54. Want, R. An introduction to RFID technology. IEEE Pervas Comput. 2010, 2, 183–186. [Google Scholar] [CrossRef]
  55. Yang, C.Y.; Shih, K.P.; Hsu, C.H.; Chen, H.C. A location-aware multicasting protocol for Bluetooth location network. Inform. Sci. 2007, 177, 3161–3177. [Google Scholar]
  56. Chang, Y.J.; Wang, T.Y. Indoor Wayfinding Based on Wireless Sensor Networks for Individuals with Multiple Special Needs. Cybern. Syst. 2010, 41, 317–333. [Google Scholar] [CrossRef]
  57. Park, S.; Choi, I.M.; Kim, S.S.; Kim, S.M. A Portable mid-Range Localization System Using Infrared LEDs for Visually Impaired People. Infrared Phys. Technol. 2014, 67, 583–589. [Google Scholar] [CrossRef]
  58. Varshavskya, A.; de Laraa, E.; Hightowerc, J.; LaMarcac, A.; Otsason, V. GSM indoor localization. Pervasive Mob. Comput. 2007, 3, 698–720. [Google Scholar] [CrossRef] [Green Version]
  59. Harle, R. A Survey of Indoor Inertial Positioning Systems for Pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293. [Google Scholar] [CrossRef]
  60. Tomažič, S.; Škrjanc, I. Fusion of Visual Odometry and Inertial Navigation System on a Smartphone. Comput. Ind. 2015, 74, 119–134. [Google Scholar] [CrossRef]
  61. Bancroft, J.B. Multiple Inertial Measurement Unit Fusion for Pedestrian Navigation. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, December 2010. [Google Scholar]
  62. Liu, H.H. The Quick Radio Fingerprint Collection Method for a WiFi-Based Indoor Positioning System. Mob. Netw. Appl. 2015, 22, 61–71. [Google Scholar] [CrossRef]
  63. Dempster, A. Upper and lower probabilities induced by a multivalued mapping. Ann Math Stat. 1967, 38, 325–339. [Google Scholar] [CrossRef]
  64. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  65. Deng, Y. Generalized evidence theory. Appl. Intell. 2015, 43, 530–543. [Google Scholar] [CrossRef] [Green Version]
  66. Murphy, C.K. Combining belief functions when evidence conflicts. Decis. Support Syst. 2000, 29, 1–9. [Google Scholar] [CrossRef]
  67. Yager, R.R. On the dempster-shafer framework and new combination rules. Inf. Sci. 1987, 41, 93–137. [Google Scholar] [CrossRef]
  68. Jousselme, A.L.; Grenier, D.; Bossé, E. A new distance between two bodies of evidence. Inf. Fusion 2001, 2, 91–101. [Google Scholar] [CrossRef]
  69. Haenni, R. Belief function combination and conflict management. Inf. Fusion 2002, 3, 111–114. [Google Scholar]
  70. Ke, X. A research on Combination of Belief Functions with Applications in Evidence Theory. Ph.D. Thesis, University of Science and Technology of China, Hefei, China, 2016. [Google Scholar]
  71. Chiputa, M.; Li, X. Real time Wi-Fi indoor positioning system based on RSSI measurement: A distributed load approach with fusion of three positioning algorithms. Wirel. Pers. Commun. 2018, 99, 67–83. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed approach.
Figure 2. Magnetometer data resolved along the x-axis (a) and y-axis (b).
Figure 3. Gyroscope data.
Figure 4. Light data.
Figure 5. Wi-Fi Media Access Control (MAC) data.
Figure 6. Global Positioning System (GPS) data.
Figure 7. Example of an evidence framework.
Figure 8. Example of the optimized segmentation method for gyroscope frameworks.
Figure 9. Co-existing relationship of sensor data.
Figure 10. Calculating the matching errors for assigning the weight of sensor i.
Figure 11. Experimental route and markers.
Figure 12. Sensor segmentation results and frame of discernment: (a) gyroscope; (b) magnetometer; (c) Wi-Fi; (d) light; (e) longitude and latitude; (f) frame of discernment.
Figure 13. Positioning errors of labelled points.
Table 1. An example of the co-existing relationships of sensors in a subset of the framework.

| Point | Mag x    | Mag y    | Gyr z      | Light         | Wi-Fi |
|-------|----------|----------|------------|---------------|-------|
| #1    | Increase | Decrease | Right turn | Stabilization | 47    |
| #2    | Stable   | Stable   | Stable     | Fluctuation   | 20    |
Table 2. Example of selecting the light segmentation result.

| Light Condition | L_light ∈ Light_1 | L_light ∈ Light_2 | ⋯ | L_light ∈ Light_m | Num | Max(Num) |
|-----------------|-------------------|-------------------|---|-------------------|-----|----------|
| 1               | 0                 | 0                 | ⋯ | 0                 | 0   | /        |
| 2               | 0                 | 1                 | ⋯ | 0                 | m−2 | /        |
| ⋯               |                   |                   |   |                   |     |          |
| C               | 1                 | 0                 | ⋯ | 1                 | m−1 |          |
Table 3. Levels of similarity and their evaluating characteristics.

| Value | Evaluating Characteristic |
|-------|---------------------------|
| 1     | Extremely high            |
| 2     | High                      |
| 3     | Medium                    |
| 4     | Low                       |
| 5     | Relatively low            |
Table 4. Example of experimental data format.

| Timestamp         | Longitude   | Latitude   | Gyroscope z-axis | Light  |
|-------------------|-------------|------------|------------------|--------|
| 20171203135331900 | 114.XXXXXXX | 30.XXXXXXX | 0.013504496      | 24,475 |
| 20171203135332100 | 114.XXXXXXX | 30.XXXXXXX | 0.013215541      | 24,182 |

| Magnetometer x-axis | Magnetometer y-axis | Wi-Fi MAC         | Wi-Fi Name    | Wi-Fi RSSI |
|---------------------|---------------------|-------------------|---------------|------------|
| −34.5               | −22.859             | e0:4f:bd:80:09:69 | ChinaNet-3upP | −73        |
| −34.5               | −23.1               | e0:4f:bd:80:09:69 | ChinaNet-3upP | −73        |
Table 5. Selection of light segmentation based on continuity of data matching.

| Real-Time Light Condition | Day Time | Night Time | Accuracy |
|---------------------------|----------|------------|----------|
| Day time                  | 5        | 0          | 100%     |
| Night time                | 0        | 5          | 100%     |
Table 6. Basic belief assignment of each sensor in the frame of discernment.

|        | Mag x | Mag y | Gyro z | Wi-Fi | Light | GPS  |
|--------|-------|-------|--------|-------|-------|------|
| m(A1)  | 0.44  | 0.08  | 0      | 0.08  | 0.12  | 0.44 |
| m(A2)  | 0.22  | 0.44  | 0.08   | 0.44  | 0     | 0.11 |
| m(A3)  | 0.14  | 0.22  | 0.44   | 0     | 0.44  | 0.08 |
| m(A4)  | 0.11  | 0.14  | 0.22   | 0.14  | 0.22  | 0.14 |
| m(A5)  | 0.08  | 0.11  | 0.11   | 0.11  | 0.14  | 0    |
| m(An)  | 0     | 0     | 0.14   | 0.22  | 0.08  | 0.22 |
| m(θ)   | 0.01  | 0.01  | 0.01   | 0.01  | 0.01  | 0.01 |
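Each column of Table 6 is a basic belief assignment (BBA) over the singleton hypotheses A1, …, An plus the whole frame θ. As a minimal sketch of how two such columns fuse under the classical Dempster rule (the baseline that the proposed approach improves on), the snippet below combines the Mag x and GPS columns; the function name and dictionary encoding are illustrative assumptions, not the paper's implementation.

```python
def combine_dempster(m1, m2):
    """Dempster's rule for BBAs over singleton hypotheses plus the frame Theta."""
    hyps = [h for h in m1 if h != "Theta"]
    m = {}
    for h in hyps:
        # A singleton survives intersection with itself or with Theta.
        m[h] = m1[h] * m2[h] + m1[h] * m2["Theta"] + m1["Theta"] * m2[h]
    m["Theta"] = m1["Theta"] * m2["Theta"]
    conflict = 1.0 - sum(m.values())  # product mass falling on the empty set
    return {h: v / (1.0 - conflict) for h, v in m.items()}

# "Mag x" and "GPS" columns of Table 6:
mag_x = {"A1": 0.44, "A2": 0.22, "A3": 0.14, "A4": 0.11,
         "A5": 0.08, "An": 0.00, "Theta": 0.01}
gps = {"A1": 0.44, "A2": 0.11, "A3": 0.08, "A4": 0.14,
       "A5": 0.00, "An": 0.22, "Theta": 0.01}

combined = combine_dempster(mag_x, gps)
# Both sensors support A1 most strongly, so the combined belief
# concentrates on A1.
```

Since both pieces of evidence agree here, the classical rule behaves well; the improved rule is needed when the evidence conflicts.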
Table 7. Weight of evidence.

| Situation                  | Mag x | Mag y | Gyr z | Wi-Fi | Light | GPS   |
|----------------------------|-------|-------|-------|-------|-------|-------|
| Outdoor, with gyroscope    | 0.003 | 0.004 | 0.205 | 0.143 | 0.005 | 0.64  |
| Outdoor, without gyroscope | 0.007 | 0.012 | 0     | 0.397 | 0.016 | 0.568 |
| Indoor, with gyroscope     | 0.01  | 0.01  | 0.75  | 0.223 | 0.007 | 0     |
| Indoor, without gyroscope  | 0.02  | 0.034 | 0     | 0.92  | 0.026 | 0     |
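The weights in Table 7 bias the fusion toward the more reliable sensors. One common way to apply such weights (in the spirit of Murphy's weighted-average approach, ref. 66) is to average the BBAs with them before running the combination rule. The sketch below assumes that scheme; the BBA values are illustrative, and the two weights are the gyroscope and GPS entries of the outdoor-with-gyroscope row, renormalized over the two sensors used here.

```python
def weighted_average_bba(bbas, weights):
    """Weighted average of several BBAs over the same frame of discernment."""
    keys = bbas[0].keys()
    return {k: sum(w * m[k] for m, w in zip(bbas, weights)) for k in keys}

# Two toy BBAs over a three-element frame plus Theta (illustrative values).
m_gyro = {"A1": 0.1, "A2": 0.7, "A3": 0.1, "Theta": 0.1}
m_gps = {"A1": 0.2, "A2": 0.6, "A3": 0.1, "Theta": 0.1}

# Gyroscope (0.205) and GPS (0.64) weights from Table 7,
# renormalized so the two weights used sum to 1.
w_gyro, w_gps = 0.205 / 0.845, 0.64 / 0.845
avg = weighted_average_bba([m_gyro, m_gps], [w_gyro, w_gps])
# Because the weights sum to 1, the averaged BBA still sums to 1
# and can be fed to the combination rule.
```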
Table 8. Match success rate of the proposed approach and traditional D-S theory. In the "Match Result per Run" column, each digit records one of the ten data-collection runs (1 = successful match, 0 = failed match).

| Labelled Point | Approach               | Match Result per Run | Match Success Rate |
|----------------|------------------------|----------------------|--------------------|
| #1             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1010010101           | 50%                |
| #2             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1000000000           | 10%                |
| #3             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1000000000           | 10%                |
| #4             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1000000000           | 10%                |
| #5             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1000000000           | 10%                |
| #6             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1111111110           | 90%                |
| #7             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1011010110           | 60%                |
| #8             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1001110101           | 60%                |
| #9             | Proposed approach      | 1111111111           | 100%               |
|                | Traditional D-S theory | 1000110111           | 60%                |
Table 9. Positioning errors (in meters) of the proposed approach (abbreviated as "Our" in the table) and GPS in outdoor environments. Columns 1–10 give the positioning error of each data-collection run.

| Labelled Point | Method | 1    | 2     | 3     | 4     | 5     | 6     | 7     | 8     | 9     | 10    | Mean Error (m) | R_P (%) |
|----------------|--------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------------|---------|
| #1             | Our    | 0.21 | 5.94  | 2.22  | 8.34  | 10.71 | 1.68  | 1.68  | 2.43  | 10.71 | 3.39  | 4.73           | 42.9    |
|                | GPS    | 0.57 | 24.15 | 2.21  | 14.55 | 14.55 | 2.21  | 4.24  | 3.94  | 14.55 | 1.91  | 8.29           |         |
| #2             | Our    | 0.18 | 2.79  | 1.83  | 4.98  | 6.21  | 8.7   | 3.15  | 1.83  | 1.83  | 1.83  | 3.33           | 51.2    |
|                | GPS    | 0.51 | 8.66  | 4.58  | 10.73 | 10.73 | 10.73 | 8.66  | 4.58  | 4.58  | 4.58  | 6.83           |         |
| #3             | Our    | 0.51 | 2.34  | 6.6   | 3.51  | 1.11  | 3.51  | 1.77  | 1.26  | 0.57  | 1.35  | 2.25           | 19.1    |
|                | GPS    | 0.51 | 0.64  | 8.55  | 3.79  | 1.01  | 3.79  | 3.79  | 3.04  | 1.01  | 1.69  | 2.78           |         |
| #4             | Our    | 0.48 | 2.73  | 4.92  | 3.30  | 1.65  | 1.77  | 3.3   | 4.23  | 2.91  | 1.86  | 2.71           | 28.7    |
|                | GPS    | 0.51 | 0.64  | 6.38  | 5.25  | 5.25  | 0.64  | 8.03  | 4.99  | 3.64  | 2.25  | 3.80           |         |
| #5             | Our    | 0.51 | 1.83  | 2.28  | 0.93  | 2.19  | 2.28  | 1.23  | 2.28  | 2.19  | 2.28  | 1.80           | 54.4    |
|                | GPS    | 0.51 | 4.31  | 2.93  | 1.16  | 2.93  | 4.31  | 14.59 | 2.93  | 2.93  | 2.93  | 3.95           |         |
| #6             | Our    | 0.54 | 5.37  | 15.27 | 10.5  | 14.07 | 10.5  | 12.39 | 14.73 | 7.14  | 9.45  | 9.99           | 15.9    |
|                | GPS    | 0.54 | 6.30  | 17.21 | 11.81 | 13.13 | 13.13 | 15.86 | 18.56 | 10.43 | 11.81 | 11.88          |         |
Table 10. Positioning errors (in meters) of the proposed approach (abbreviated as "Our" in the table) and Wi-Fi in indoor environments. Columns 1–10 give the positioning error of each data-collection run.

| Labelled Point | Method | 1    | 2     | 3      | 4     | 5     | 6     | 7     | 8     | 9      | 10     | Mean Error (m) | R_P (%) |
|----------------|--------|------|-------|--------|-------|-------|-------|-------|-------|--------|--------|----------------|---------|
| #7             | Our    | 0.15 | 3.00  | 2.40   | 3.00  | 3.00  | 3.00  | 3.00  | 0.78  | 16.74  | 14.91  | 4.99           | 62.6    |
|                | Wi-Fi  | 0.99 | 2.75  | 12.475 | 5.875 | 2.75  | 2.75  | 2.75  | 0.675 | 17.70  | 12.45  | 6.12           |         |
| #8             | Our    | 0.39 | 2.19  | 5.91   | 1.5   | 7.98  | 1.35  | 2.94  | 1.5   | 1.5    | 0.63   | 2.59           | 10.1    |
|                | Wi-Fi  | 0.63 | 1.375 | 1.925  | 3.075 | 3.075 | 1.375 | 3.625 | 3.625 | 6.475  | 3.625  | 2.88           |         |
| #9             | Our    | 0.21 | 1.77  | 3.57   | 3.03  | 3.03  | 1.62  | 3.03  | 0.6   | 3.03   | 0.6    | 2.05           | 53.2    |
|                | Wi-Fi  | 0.6  | 1.225 | 1.225  | 4.40  | 5.375 | 1.5   | 9.875 | 3.3   | 1.225  | 14.975 | 4.38           |         |
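As a quick consistency check on the summary columns of Table 9 (a sketch, not part of the paper's method): the ten per-trial errors at labelled point #1 average to the reported mean, and R_P is consistent with the relative reduction of the mean error versus GPS (an interpretation inferred from the table values, not a definition quoted from the text).

```python
# Per-trial errors of the proposed approach at labelled point #1 (Table 9).
trials_our = [0.21, 5.94, 2.22, 8.34, 10.71, 1.68, 1.68, 2.43, 10.71, 3.39]
mean_our = sum(trials_our) / len(trials_our)  # 4.731, reported as 4.73 m
mean_gps = 8.29                               # reported GPS mean at #1

# Relative reduction of the mean error versus GPS.
r_p = 100 * (mean_gps - mean_our) / mean_gps  # about 42.9, matching R_P
```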

Share and Cite

Fang, Z.; Jiang, Y.; Xu, H.; Shaw, S.-L.; Li, L.; Geng, X. An Invisible Salient Landmark Approach to Locating Pedestrians for Predesigned Business Card Route of Pedestrian Navigation. Sensors 2018, 18, 3164. https://doi.org/10.3390/s18093164