*3.2. Pseudo-Peak Elimination Method with a Variable Sliding Window Cooperative Time Threshold*

Because the sliding window determined in this paper is larger than a single-step sliding window, it may contain several true peaks and pseudo-peaks. By position, these peaks fall into three categories: the initial peak at the front of the window, adjacent peaks inside the window, and the peak at the end of the window.

As shown in Figure 8, the initial peak of the second window and the end peak of the ninth window are in fact pseudo-peaks. A fixed sliding window severs the connection between adjacent windows: if a fixed 1 s sliding window is adopted, the two pseudo-peaks indicated by the arrows will be misjudged as true peaks. To solve this problem, this paper proposes a pseudo-peak elimination method with a variable sliding window cooperative time threshold, which preserves the connection between adjacent windows and can correctly judge the initial peak and the end peak. Figure 9 is a flowchart of this method.

**Figure 8.** Pseudo-peaks in front of the window and at the end of the window.

**Figure 9.** Flow chart of pseudo-peak elimination method with variable sliding window cooperative time threshold.

According to the flowchart, the specific implementation steps of this method are as follows:

Step 1: judging adjacent peaks. If there are two or more peaks in the *i* (*i* ≥ 1) window, the window contains adjacent peaks, which are judged with the time threshold. If the time difference between peak *a* and peak *a* − 1 is less than the time threshold, peak *a* is a pseudo-peak. Otherwise, peak *a* is a suspected true peak, which must be judged cyclically against the subsequent peaks *a* + *n* (*n* = 1, 2, 3, ...). If the time difference between peak *a* and peak *a* + *n* is less than the time threshold, their peak values are compared: if peak *a* is greater than peak *a* + *n*, peak *a* remains a suspected true peak and the program executes *n*++ to continue the loop; if peak *a* is less than peak *a* + *n*, peak *a* is a pseudo-peak. If the time difference between peak *a* and peak *a* + *n* is greater than the time threshold, peak *a* is determined to be a true peak. After peak *a* has been judged, the program ends this cycle and moves on to judge the next peak (*a*++).
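As a concrete illustration, Step 1 can be sketched as follows. This is a minimal reading of the rule, not the authors' published code: the function name and the list-based peak representation are assumptions, and the comparison with the preceding peak is taken against the most recently confirmed true peak, which is one plausible interpretation of the text.

```python
def judge_adjacent_peaks(peak_times, peak_values, time_threshold):
    """Label each peak in a window as true (True) or pseudo (False)
    using the time-threshold rule of Step 1. Peaks are in time order.
    A sketch under stated assumptions, not the authors' implementation."""
    labels = []
    last_true_time = None  # time of the most recent confirmed true peak
    for a in range(len(peak_times)):
        # A peak that follows the last true peak within the time
        # threshold is a pseudo-peak.
        if (last_true_time is not None
                and peak_times[a] - last_true_time < time_threshold):
            labels.append(False)
            continue
        # Otherwise peak a is a suspected true peak: compare it with each
        # later peak a+n that lies within the time threshold.
        verdict = True
        n = 1
        while (a + n < len(peak_times)
               and peak_times[a + n] - peak_times[a] < time_threshold):
            if peak_values[a + n] > peak_values[a]:
                verdict = False  # a larger nearby peak defeats peak a
                break
            n += 1  # peak a survives this candidate; check the next (n++)
        labels.append(verdict)
        if verdict:
            last_true_time = peak_times[a]
    return labels
```

For example, with a 0.3 s time threshold, peaks at 0.0 s and 0.1 s with values 1.0 and 2.0 resolve in favor of the larger second peak, while two peaks 0.4 s apart are both kept.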

Step 2: judging the end peak. If the time difference between the end peak and the previous peak is greater than the time threshold, the end peak is a suspected true peak, which must be judged together with the initial peak of the next window. Otherwise, the end peak is a pseudo-peak.
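Step 2 reduces to a single comparison; a sketch (the function name is an assumption):

```python
def end_peak_suspected(end_peak_time, prev_peak_time, time_threshold):
    # Step 2: the end peak is only a *suspected* true peak when it is more
    # than the time threshold away from the previous peak; the final
    # verdict is deferred to the initial peak of the next window.
    # Otherwise it is a pseudo-peak.
    return end_peak_time - prev_peak_time > time_threshold
```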

Step 3: variable sliding window. If the *i* (*i* ≥ 1) window contains a peak, the starting point of the *i* + 1 window is set to the data point immediately after the end peak of the *i* window (Figure 10a). If the *i* window does not contain a true peak, the starting point of the *i* + 1 window is moved forward by two data points (Figure 10b).
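The window-advance rule of Step 3 can be sketched in a few lines. The sample-index convention and the helper name are assumptions; the two-data-point forward step follows the text.

```python
def next_window_start(window_start, end_peak_index, has_true_peak,
                      forward_step=2):
    """Step 3 (sketch): starting sample index of window i+1, given window i."""
    if has_true_peak:
        # Window i contains a peak: window i+1 starts at the data point
        # immediately after the end peak of window i (Figure 10a).
        return end_peak_index + 1
    # Window i contains no true peak: slide the start forward by two
    # data points (Figure 10b).
    return window_start + forward_step
```

Advancing to the sample after the end peak is what keeps the suspected end peak and the next window's initial peak adjacent, so the deferred judgment of Step 2 remains possible.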

**Figure 10.** Schematic diagram of the variable sliding window: (**a**) The *i* (*i* ≥ 1) window contains a peak (**b**) The *i* window does not contain a true peak. Note: the green arrows indicate the starting point of new windows.

Step 4: judging the initial peak. If the previous window does not contain a peak and the initial peak is greater than the peak threshold, the initial peak is a suspected true peak, which must be judged together with the next peak; the judgment method is the same as that for peak *a* and peak *a* + *n* in Step 1. If the initial peak is less than the peak threshold, the initial peak is a pseudo-peak. If the previous window contains a peak, the initial peak is treated as an adjacent peak and judged by the method in Step 1.

Step 5: execute Steps 1–4 cyclically until the sliding window reaches the end of the data.

#### **4. Experiment and Analysis**

To evaluate the accuracy of the smartphone-based unconstrained step detection fusing a variable sliding window and an adaptive threshold, 25 volunteers (9 females and 16 males) were recruited. The volunteers' heights ranged from 155 cm to 185 cm and their weights from 41 kg to 93 kg. The experimental smartphones were the volunteers' own phones, comprising 21 models from Huawei, Apple, Samsung, Honor, Xiaomi, OnePlus, Oppo and Vivo. Figure 11 shows the brands and models of the 25 volunteers' smartphones.

**Figure 11.** Brands and models of 25 smartphones.

#### *4.1. Experimental Setup*

The experiment was carried out on the campus of Shandong University of Science and Technology. The volunteers' motion states, motion environments, smartphone carrying modes and interference factors are shown in Table 2. In total, the 25 volunteers were divided into 5 groups to carry out step detection experiments in a constrained state (smartphone held flat) and an unconstrained state. The experimental process in the constrained state is shown in Table 3, and that in the unconstrained state is shown in Table 4. The 5 volunteers in each group collected data according to the experimental processes of Tables 3 and 4 and accurately counted their actual walking steps. Figure 12 is a schematic diagram of the data collection in the constrained state, and Figure 13 shows the unconstrained state. The data collected by the volunteers, together with attribute information such as height, weight and smartphone model, have been uploaded to GitHub (https://github.com/jackleenotjackma/StepCountingData.git (accessed on 25 December 2021)).

**Table 2.** Experimental motion environment, motion state, smartphone carrying mode and interference factor.



**Table 3.** Experiment process of 5 constrained state groups.

**Table 4.** Five groups of unconstrained state experiment process.


**Figure 12.** Constrained experiment process.

**Figure 13.** Unconstrained experiment process: (**a**) Schematic diagram of the second group of experimental process (**b**) Schematic diagram of the fourth group of experimental process.

#### *4.2. Experimental Results and Analysis*

A total of 50 sets of data were obtained in the experiment, with step counts between 72 and 343. The proposed unconstrained step detection algorithm for smartphones is compared with Stepcount [28], a well-known open-source program on GitHub. Stepcount is a peak detection algorithm based on an adaptive threshold. The actual step count was taken as the reference value.

The step counting results of the 25 volunteers in the constrained and unconstrained states are shown in Table 5, and Figures 14 and 15 show the corresponding step counting accuracy. As Table 5 and Figures 14 and 15 show, in the constrained state the lowest step counting accuracy of the proposed smartphone-based unconstrained step detection fusing a variable sliding window and an adaptive threshold is 96.3%, whereas the lowest accuracy of Stepcount is only 45.4%. In the unconstrained state, the lowest accuracy of the proposed algorithm is 95.3%, whereas it is only 57.5% for Stepcount. In both states, the step counting accuracy of the proposed algorithm is stable, whereas that of Stepcount fluctuates greatly. In the constrained state, the average step counting accuracies of the proposed algorithm and Stepcount are about 99.0% and 90.1%, respectively; in the unconstrained state, they are about 98.4% and 87.4%, respectively. Thus, in the constrained and unconstrained states, the step counting accuracy of the proposed algorithm is about 8.9% and 11% higher than that of Stepcount, respectively.
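For reference, the accuracy figures above are consistent with the usual relative-error definition of step counting accuracy; the formula below is our assumption, as the paper does not state it explicitly.

```python
def step_accuracy(detected_steps, actual_steps):
    """Step counting accuracy (%) against the manually counted reference:
    100 * (1 - |detected - actual| / actual). Assumed definition."""
    return 100.0 * (1.0 - abs(detected_steps - actual_steps) / actual_steps)
```

Under this definition, a volunteer who actually walked 343 steps but was detected at 330 steps would score about 96.2%.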

**Figure 14.** Step counting accuracy in constrained state.

**Figure 15.** Step counting accuracy in unconstrained state.


**Table 5.** Constrained state and unconstrained state step counting result.

To explore the adaptability of the proposed smartphone-based unconstrained step detection fusing a variable sliding window and an adaptive threshold, the performance of the algorithm is analyzed from the perspectives of smartphone price and volunteer gender.

The smartphones of the 25 volunteers were divided into three price classes: mid-low-level (1200–2599 RMB), mid-level (2600–3599 RMB) and high-level (over 3600 RMB). Figure 16 shows the step counting accuracy of the proposed smartphone-based unconstrained step detection fusing a variable sliding window and an adaptive threshold for smartphones at different prices in the unconstrained state. As the figure shows, the algorithm achieves the best step counting accuracy on high-level smartphones, with no obvious difference between mid-level and mid-low-level smartphones. In the unconstrained state, the average step counting accuracy is 99.1%, 98.0% and 98.0% for high-level, mid-level and mid-low-level smartphones, respectively.

**Figure 16.** Step counting accuracy rate of smartphones with different prices in unconstrained state.

From a gender perspective, in the unconstrained state the average step counting accuracy of the smartphone-based unconstrained step detection fusing a variable sliding window and an adaptive threshold is 98.2% for male volunteers and 98.7% for female volunteers, a difference of only 0.5 percentage points. The algorithm therefore adapts well to both genders.

#### **5. Conclusions**

Aiming at the problem of the low step detection accuracy of PDR in an unconstrained state, this paper proposes a step detection algorithm for smartphones. In this algorithm, the acceleration is preprocessed with an FIR low-pass filter, the gait is recognized by FFT, the peak threshold is updated dynamically, and pseudo-peaks are eliminated with a variable sliding window cooperative time threshold; finally, step counting is realized.

The FIR low-pass filter is used to denoise the overall acceleration signal. The FFT is used to identify strolling, normal walking, running and interference states. The minimum peak value after sliding-window filtering is used to dynamically update the peak threshold, which solves the problem that a fixed peak threshold adapts poorly to the unconstrained state. A method of a variable sliding window cooperative time threshold is proposed, which preserves the connection between adjacent windows and makes up for the inability of a fixed window to judge the initial peak and the end peak. To evaluate the step counting performance of the proposed algorithm, 50 experiments in constrained and unconstrained states were conducted by 25 volunteers holding 21 different types of smartphones. The experimental results show that the average step counting accuracy of the proposed algorithm is 98.4% in the unconstrained state, which is 10.0% higher than that of the open-source program Stepcount. The proposed algorithm adapts well to complex unconstrained states and performs consistently across genders and smartphones at different price levels. In the future, we plan to carry out indoor positioning research with this step detection method and apply it to the construction of smart cities and the tracking of pandemics.

**Author Contributions:** Conceptualization, Y.X., G.L. and Z.L.; methodology, Y.X. and G.L.; software, G.L. and Z.L.; validation, Y.X., Z.L., H.Y. and J.C.; formal analysis, G.L., H.Y. and J.C.; investigation, G.L., H.Y., J.C. and J.W.; data curation, G.L.; writing—original draft preparation, G.L.; writing—review and editing, Y.X., H.Y., J.C., J.W. and Y.C.; supervision, Y.X. and Z.L.; project administration, Y.X.; funding acquisition, Y.X. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Natural Science Foundation of China (42174035), Talent introduction plan for Youth Innovation Team in universities of Shandong Province (innovation team of satellite position and navigation) and Shandong University of Science and Technology school-level scientific research team (2019TDJH103).

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

