1. Introduction
In recent years, the boom in the mobile internet has promoted the wide adoption of location-based social networks (LBSNs) such as Gowalla, Brightkite, and Yelp, in which users share their location and daily life through check-in activities [1]. The next point-of-interest (POI) recommendation, one of the most significant services in LBSNs, recommends the next POI to a user based on their movement pattern. Unlike general POI recommendation, it takes into account the order of user check-ins and is therefore time-dependent and sequential. The next POI recommendation is of great value to both users and merchants and can be applied to route planning, business advertising, and traffic prediction [2,3].
Early studies exploited the ability of the Markov chain to predict process states in order to recommend the next POI [4]. Recurrent neural networks (RNNs) were then used in some studies to model check-in sequences for the next POI recommendation. As an improvement, long short-term memory (LSTM) based on spatio-temporal information was proposed to model long trajectories and achieves good performance [5]. However, a user's check-ins are not only constrained by time and location; various other pieces of contextual information are worth considering. As shown in Figure 1, user u has a check-in sequence with a list of activities on Sunday. The list contains many types of contextual information, such as time, geographical location, POI category, time difference, and distance. To address data sparseness, spatial context information has been exploited to capture the correlation between POIs for POI recommendation [6]. Liu et al. proposed the ST-RNN model, which takes spatio-temporal information into consideration on the basis of an RNN [7]. It should be noted, however, that a user follows a different pattern on each day of the week, so it is necessary to analyze the temporal information of check-ins and study users' daily check-in patterns across the week. In other words, the temporal factor should be further used to study the periodicity of user check-ins, so as to mine their regularity and improve prediction accuracy. In addition, the check-in trajectory in Figure 1 shows that the user has a transition preference at the category level: the user tends to shift from the category "entertainment" to the category "residence". Studying the user's preference for POI categories can therefore assist in predicting the specific POI, yet there is still room for improvement in how the POI category is utilized to enhance recommendation performance.
Generally, a user's preferences are complicated and change over time. The long-term preference expresses a user's general interests, while the short-term preference reflects sudden interests. To analyze users' long- and short-term preferences, the LSTPM model based on spatio-temporal information was proposed by combining LSTM and RNN [8]. Moreover, the attention mechanism has been introduced into the next POI recommendation to study how check-ins at different time steps influence the next check-in [9]. However, the attention mechanism should be exploited more fully to study the degree of influence of the different factors within each check-in. On the one hand, it is important to mine non-linear dependence between non-adjacent check-ins and find the user's main intention in the check-in sequence. On the other hand, the factor driving a user's decision in each check-in activity is dynamic; it may be time, distance, or category for different users. Finding the decisive factors of each check-in and obtaining a multi-factor dynamic representation of user check-ins therefore remains an urgent challenge.
To take full advantage of the check-in information and account for users' long- and short-term preferences, we construct a long- and short-term preference learning model based on a multi-level attention mechanism (LSMA) for the next POI recommendation. Firstly, a multi-factor dynamic representation of each user check-in is built by learning the weights of the different attributes within each check-in. Secondly, the non-linear dependence between user check-ins is modeled on top of this accurate check-in representation, yielding the influence of the check-in at each time step on the next check-in. This greatly improves the precision of the check-in representation and the accuracy of the recommendation. In addition to directly using the contextual information mentioned above at the fine-grained POI level, we also study the user's coarse-grained category transition preference at the semantic level, which sharpens the user's preference for specific POIs. Experimental results on two large real-world Foursquare datasets show that the LSMA performs significantly better than seven baselines in terms of recall and MAP. The main contributions of this study are as follows:
We analyze the user's long- and short-term preferences, respectively, and combine them to form the final user preference. A top-k POI recommendation list is generated from the next-POI access probabilities computed between the user preference and all candidate POIs passed by a POI filter we designed.
We utilize a multi-level attention mechanism to study both the multi-factor dynamic representation of a user's check-in behavior and the non-linear dependence between check-ins in the check-in trajectory. This learns the weights of the different attributes within each check-in and the influence of check-ins at different time steps on the next check-in.
We study the user's category transition preference at the semantic level to build the user's check-in representation using a category module we constructed. Furthermore, we consider the periodicity of user check-ins and mine the user's sequential pattern based on spatio-temporal context information. Both greatly promote the formation of user preferences and enhance the recommendation performance.
The remainder of the paper is organized as follows. We review POI recommendation methods in Section 2. In Section 3, some preliminaries are described. Section 4 details the proposed POI recommendation approach. Section 5 provides the experimental results and the corresponding parameter analysis. Finally, Section 6 presents the conclusions.
3. Preliminaries
3.1. Observations of User’s Trajectory
Two interesting observations emerged from the analysis of users' check-in behavior in LBSNs.
Obs. 1 (Category transition preference). At the semantic level, the user's check-in behavior exhibits category correlation. Figure 2a shows the category transition probability of users' check-ins in the Charlotte dataset. The correlation between a user's check-ins changes over time and is sequential. Unlike studies that use category merely as one attribute of a check-in, we construct a category module that considers the user's category trajectory and studies the category transition preference at a coarse-grained category level; the module then informs the user's preference for specific POIs. Moreover, the datasets contain hundreds of POI category labels, which makes the prediction space very large and hinders the prediction of the specific check-in POI. Therefore, inspired by [27], we summarize twelve coarse-grained categories on the basis of the existing ones. Note that each user's dependence on the category transition preference is different. For instance, the user in Figure 2b moves from "residence" to "office", and vice versa.
Obs. 2 (Periodic preference). A user may have one fixed mobility pattern on weekdays and another on weekends (Saturday and Sunday). However, beyond this rough weekday/weekend distinction, a user's check-in behavior in real life follows a relatively distinct pattern on each day of the week. As shown in Figure 3, for example, a user may like to visit the gym after the restaurant on Wednesdays and the company before the bar on Thursdays. It is therefore not sufficient to study the periodicity of check-in movement only at the weekday/weekend granularity, so we analyze the spatio-temporal information of check-ins and study users' periodic check-in patterns for every day of the week.
3.2. Problem Statement
Let U = {u_1, u_2, ..., u_M} denote a set of users and V = {v_1, v_2, ..., v_P} denote a set of POIs, where M is the total number of users and P is the total number of POIs. Each POI is a location associated with latitude, longitude, and category information in LBSNs, such as a restaurant or bar.
A check-in activity of user u is a six-tuple a = (u, v, c, l, t, w), which represents user u accessing POI v at time t. Here, c is the category of v, l = (lat, lon) is the geographical coordinate, t represents the time, and w is the day of the week, such as Monday.
All the check-in activities of user u form their trajectory sequence T_u = {a_1, a_2, ..., a_N}, where N is the total number of check-ins of user u. From the historical trajectory sequence T_u, we obtain the category sequence C_u = {c_1, c_2, ..., c_N} of u. The short-term check-in sequence of user u is extracted from T_u and denoted as S_u = {a_1, a_2, ..., a_S}, where a_1 represents user u's first check-in in the short term and S is the total number of short-term check-ins.
Given T_u, C_u, and S_u, the goal of the next POI recommendation is to predict a top-k list of POIs that user u is likely to visit at the next time step, based on the two observations above.
4. Proposed Method
The proposed model consists of four parts, as shown in Figure 4: (1) the category module, which captures the user's category transition preference at the coarse-grained semantic level to assist the long- and short-term preference modules; (2) the long-term preference module, which obtains the user's long-term POI preference based on the LSTM and integrates the multi-level attention mechanism and the user's category transition preference; (3) the short-term preference module, which obtains the user's short-term POI preference based on the RNN and integrates the temporal attention mechanism and the user's category transition preference; and (4) the output layer, where the long- and short-term preferences are combined into the user's preference representation and the final POI probability ranking list is computed from the user preference and the candidate POIs produced by a filter that we designed.
4.1. Category Module
We design the category module to infer the user's category transition preference; it captures the pattern of category transitions as users visit POIs and serves as an auxiliary signal in the POI recommendation. Because the category sequence is long, an LSTM network is adopted to preserve recommendation accuracy.
We learn the user's category transition preference from the category sequence C_u, each element of which is denoted as c_t, indicating that user u visits a POI of category c_t at time t. The latent vector of the category module is defined as follows:

x_t^c = W_c e_{c_t} + b_c,

where W_c is the weight matrix, d is the dimension of the hidden vector, b_c is the bias, and e_{c_t} is the embedding vector of the POI category c_t. Then, x_t^c is input into the LSTM network to infer the hidden state h_t^c of user u at time t:

h_t^c = LSTM(x_t^c, h_{t-1}^c),

where h_t^c captures the sequential correlation of categories and summarizes the check-in categories up to time t. Note that we treat the last hidden vector h_N^c as the representation of user u's category transition preference.
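As a concrete illustration, the category module described above can be sketched as follows. The LSTM cell below is a generic textbook formulation, and the weight shapes, initialization, and the helper names (`lstm_cell`, `category_preference`) are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """One standard LSTM step; gates stacked as [input, forget, output, candidate]."""
    d = h.shape[0]
    z = W @ x + U @ h + b                 # pre-activations, shape (4d,)
    i = 1 / (1 + np.exp(-z[:d]))          # input gate
    f = 1 / (1 + np.exp(-z[d:2*d]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*d:3*d]))     # output gate
    g = np.tanh(z[3*d:])                  # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def category_preference(cat_seq, E, Wc, bc, W, U, b, d):
    """Run the category sequence through the LSTM; the last hidden state
    serves as the user's category transition preference."""
    h, c = np.zeros(d), np.zeros(d)
    for cat in cat_seq:
        x = Wc @ E[cat] + bc              # latent vector of the category module
        h, c = lstm_cell(x, h, c, W, U, b)
    return h

rng = np.random.default_rng(0)
d, n_cats = 8, 12                         # twelve coarse-grained categories
E = rng.normal(size=(n_cats, d))          # category embedding table
Wc, bc = rng.normal(size=(d, d)) * 0.1, np.zeros(d)
W = rng.normal(size=(4 * d, d)) * 0.1
U = rng.normal(size=(4 * d, d)) * 0.1
b = np.zeros(4 * d)
pref = category_preference([3, 0, 7, 0], E, Wc, bc, W, U, b, d)
print(pref.shape)  # (8,)
```

In practice this recurrence would be an `LSTM` layer in the paper's TensorFlow setup rather than a hand-rolled cell; the sketch only makes the data flow explicit.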
4.2. Long-Term Preference Module
The long-term preference module obtains the user’s long-term POI preference according to the contextual information of check-in activities and the multi-level attention mechanism.
4.2.1. Network Input
The historical check-in sequence T_u of user u consists of all their check-ins. Each check-in is a six-tuple a_k = (u, v_k, c_k, l_k, t_k, w_k), which reflects the user's long-term preference for POIs, so we use it to learn the user's preference at the POI level. Because a check-in is usually affected by the distance between the current and previous locations, as well as by the time elapsed since the last check-in, the embedding layer of the long-term preference module considers the impact of this spatio-temporal context in addition to the check-in location and time. Modeling consecutive check-in activities together with the day of the week is also more conducive to studying the regularity of the user's check-ins. Accordingly, the latent vector of the embedding layer of the long-term preference module is defined as follows:

x_k = W_l [e_{v_k}; e_{c_k}; e_{l_k}; e_{t_k}; e_{w_k}; e_{Δd_k}; e_{Δt_k}] + b_l,

where W_l is the weight matrix, b_l is the bias term, e_{v_k} is the embedding of the POI number, e_{c_k} is the embedding of the POI category, e_{l_k} is the embedding of the POI location, e_{t_k} is the embedding of the access timestamp, e_{w_k} is the embedding of the day of the week w_k, e_{Δd_k} is the embedding based on the distance Δd_k between the locations l_{k-1} and l_k of check-ins a_{k-1} and a_k, and e_{Δt_k} is the embedding based on the time difference Δt_k between t_{k-1} and t_k of a_{k-1} and a_k.
The embedding layer of the long-term preference module thus has seven input features in total, each marking one attribute of the current check-in. These attributes influence the current check-in to different degrees; for example, at a given time a user may be more likely to visit a POI close to their last check-in, or more likely to go to the "catering" category. The proportion of each attribute in the current check-in is learned by the contextual attention mechanism.
We use e_k^i to represent the i-th feature of the k-th historical check-in; for example, e_2^1 represents the POI-number information of user u's second check-in. The weight α_k^i of the i-th attribute in the k-th check-in is normalized with the softmax function:

α_k^i = exp(score(e_k^i, s_{k-1})) / Σ_{j=1}^{I} exp(score(e_k^j, s_{k-1})),

where I is the number of attributes, the parameters of the scoring function are learned, and s_{k-1} is the cell state of the LSTM network at time k-1. Then, α_k^i is multiplied by e_k^i to obtain the multi-factor dynamic representation of the check-in at time k under the contextual attention mechanism, and the updated attribute embedding vectors are concatenated to obtain the aggregation x_k of the embedding layer based on the contextual attention mechanism:

x_k = [W_1 (α_k^1 e_k^1); W_2 (α_k^2 e_k^2); ...; W_I (α_k^I e_k^I)] + b,

where W_i is the weight parameter to be learned corresponding to the i-th attribute and b is the bias vector to learn.
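The contextual attention step above can be sketched as follows. The additive scoring form `w·tanh(Wf·f + Ws·s)` is one common parameterization chosen here for illustration; the paper's exact scoring function and parameter names are not specified in this excerpt, so they are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def contextual_attention(feats, cell_state, w, Wf, Ws):
    """Weight the I attribute embeddings of one check-in by relevance,
    conditioned on the previous LSTM cell state, then concatenate."""
    scores = np.array([w @ np.tanh(Wf @ f + Ws @ cell_state) for f in feats])
    alpha = softmax(scores)                           # attribute weights
    weighted = [a * f for a, f in zip(alpha, feats)]  # reweighted embeddings
    return np.concatenate(weighted), alpha            # aggregation x_k

rng = np.random.default_rng(1)
I, d = 7, 8                                # seven attributes per check-in
feats = rng.normal(size=(I, d))            # e_k^1 ... e_k^I
cell = rng.normal(size=d)                  # LSTM cell state s_{k-1}
w = rng.normal(size=d)
Wf = rng.normal(size=(d, d))
Ws = rng.normal(size=(d, d))
x_k, alpha = contextual_attention(feats, cell, w, Wf, Ws)
print(x_k.shape, round(alpha.sum(), 6))    # (56,) 1.0
```

The per-attribute linear maps W_i and the bias of the aggregation are omitted here to keep the attention logic itself visible.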
To take the user's transition preference at the semantic level into account, we add the user's category preference to the contextual-attention-based embedding and obtain the final representation of the user's long-term check-in behavior:

x̃_k = x_k + β_l h_N^c,

where β_l is the weight of user u's category transition preference in the current check-in representation of the long-term preference module, and x̃_k is the final latent vector fed into the LSTM network to infer the hidden state at time k.
4.2.2. Temporal Attention
Check-in activities in a user's trajectory sequence are not all linearly correlated, and the LSTM alone cannot capture the non-linear dependencies between check-ins. To compensate, we study the different effects of different check-ins on user preference; that is, we learn weights over the time steps of the check-in sequence to distinguish how important each historical check-in is. We therefore utilize a temporal attention mechanism to adaptively select relevant historical check-in activities and achieve a better recommendation of the next POI.
Let H = [h_1, h_2, ..., h_N] be the matrix composed of all hidden vectors h_k of the long-term preference module, where N is the length of the historical check-in sequence. A weight vector γ over the historical check-ins is generated by the temporal attention mechanism, and the influence of the k-th historical check-in on the next check-in is measured by the weight γ_k corresponding to each h_k:

γ_k = exp(f(q, h_k)) / Σ_{j=1}^{N} exp(f(q, h_j)),

where the attention function f is the dot product, f(q, h_k) = q · h_k, and q is the query information of the long-term check-in sequence in the temporal attention mechanism, i.e., the embedded representation of the next POI check-in queried against all historical check-ins. Dot-product attention is used because d is small, in which case it is superior to additive attention. The resulting weight vector γ is then multiplied by H to obtain user u's long-term preference representation:

p_u^long = Σ_{k=1}^{N} γ_k h_k.
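The dot-product temporal attention described above reduces to a few lines. The shapes and the random query below are placeholders for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def temporal_attention(H, q):
    """Dot-product attention over the N hidden states in H (N x d),
    queried by q, the embedding of the next check-in to predict."""
    gamma = softmax(H @ q)        # weight of each historical check-in
    return gamma @ H, gamma       # long-term preference: weighted sum

rng = np.random.default_rng(2)
N, d = 20, 8
H = rng.normal(size=(N, d))       # hidden vectors of the long-term module
q = rng.normal(size=d)            # query: "next POI check-in"
p_long, gamma = temporal_attention(H, q)
print(p_long.shape, round(gamma.sum(), 6))   # (8,) 1.0
```

The short-term module in Section 4.3 applies the same operation with S hidden states from the RNN instead of N from the LSTM.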
4.3. Short-Term Preference Module
The user's next check-in is influenced not only by the long-term preference represented by the historical check-ins but also by the short-term preference represented by the user's recent check-in behavior. We take the last S check-ins as the user's short-term check-in sequence, represented as S_u = {a_1, a_2, ..., a_S}, where a_1 denotes the first check-in in the short term.
As in the long-term module, seven check-in features are extracted from each short-term check-in tuple a_m as the attributes to be learned in the short-term preference module. The latent vector of the embedding layer of the short-term preference module is defined as follows:

y_m = W_s [e_{v_m}; e_{c_m}; e_{l_m}; e_{t_m}; e_{w_m}; e_{Δd_m}; e_{Δt_m}] + b_s,

where W_s is the weight matrix and b_s is the bias term. Similarly, the user's category transition preference is also considered in the short-term preference module, so the short-term aggregation is defined as follows:

ỹ_m = y_m + β_s h_N^c,

where β_s is the weight of user u's category transition preference in the current check-in representation of the short-term preference module. ỹ_m enters the RNN as the latent vector of the user's short-term check-in.
Note that the RNN suffers from vanishing gradients. To avoid inaccurate recommendation results, we introduce the temporal attention mechanism to aggregate the hidden states generated by the RNN.
Let H' = [h'_1, h'_2, ..., h'_S] be the matrix composed of all hidden vectors h'_m of the short-term preference module, where S is the length of the short-term check-in sequence. A weight vector γ' over the short-term check-ins is generated by the temporal attention mechanism of the short-term preference module, and the influence of the m-th short-term check-in on the next check-in is measured by the weight γ'_m corresponding to each h'_m:

γ'_m = exp(q' · h'_m) / Σ_{j=1}^{S} exp(q' · h'_j),

where q' is the query information of the temporal attention mechanism, i.e., the embedded representation of the next POI check-in queried against all short-term check-ins. We then multiply the weight vector γ' by H' to obtain the short-term preference representation of user u:

p_u^short = Σ_{m=1}^{S} γ'_m h'_m.
4.4. Output Layer
4.4.1. POI Filter
Traditional recommendation systems usually score every POI as a candidate, which increases computation time and memory and reduces accuracy. Unlike other interest recommendations, such as music and movies, a user's check-ins are restricted by geographical location, so the next check-in will not be too far from the current location. In addition, considering the time and transportation cost of each check-in, users actually tend to revisit POIs they have checked in to before. At the same time, users are also influenced by other users to visit popular POIs. These three factors must therefore be considered together when making recommendations.
Considering the above, we designed a filter to sift candidate POIs from the full POI set. The POI filter has three rules: (1) the POIs that user u has visited; (2) the ten POIs nearest to the user's current location; and (3) the five POIs most popular among all users. The specific parameter settings are discussed in Section 5.
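The three filtering rules can be sketched as the union of three candidate sets. The haversine distance, the data layout (dicts keyed by POI id), and the toy coordinates are assumptions made for the sketch.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def poi_filter(visited, current_loc, poi_locs, checkin_counts,
               n_near=10, n_popular=5):
    """Candidate set = visited POIs ∪ n_near nearest POIs ∪ n_popular
    most popular POIs (rules 1-3 of the filter)."""
    nearest = sorted(poi_locs,
                     key=lambda p: haversine_km(current_loc, poi_locs[p]))[:n_near]
    popular = sorted(checkin_counts,
                     key=checkin_counts.get, reverse=True)[:n_popular]
    return set(visited) | set(nearest) | set(popular)

# toy data: 50 POIs on a line of coordinates near Charlotte
poi_locs = {p: (35.2 + 0.01 * p, -80.8 - 0.01 * p) for p in range(50)}
counts = {p: (p * 7) % 50 for p in range(50)}
cands = poi_filter(visited={1, 2, 3}, current_loc=(35.2, -80.8),
                   poi_locs=poi_locs, checkin_counts=counts)
print(len(cands) <= 3 + 10 + 5)   # True
```

Because the three sets overlap, the candidate list is usually much smaller than the sum of the three rule sizes.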
4.4.2. Recommend Top-k POIs
To study user preference comprehensively and dynamically, we fuse the long-term preference obtained by the long-term preference module and the short-term preference obtained by the short-term preference module with learned weights to compute the final user preference:

p_u = w_1 p_u^long + w_2 p_u^short,

where w_1 and w_2 are the weights to learn. The next-POI access probability, normalized by the softmax function, is defined as follows:

P(v_p | u) = exp(p_u · e_{v_p}) / Σ_{p'=1}^{P} exp(p_u · e_{v_{p'}}),

where e_{v_p} is the embedded representation of candidate POI v_p and P is the total number of candidate POIs passed by the filter. The output layer thus yields the next-visit probability of every candidate POI and the ranked POI list, from which the top-k POIs are recommended to user u (Algorithm 1).
Algorithm 1 Training of LSMA
Input: user set U, historical check-in sequence sets, parameter set
Output: LSMA model
1:  // construct training instances
2:  initialize the training instance sets of the three modules
3:  for each user u in U do
4:      extract the category sequence C_u from T_u
5:      extract the short-term sequence S_u from T_u
6:  end for
7:  for each user u in U do
8:      for each check-in in T_u, C_u, and S_u do
9:          draw the negative samples
10:     end for
11: end for
12: // parameter updating
13: for each training epoch do
14:     for each module do
15:         select a random batch of instances
16:         for each instance in the batch do
17:             update the parameters by descending the gradient of the loss
18:         end for
19:     end for
20: end for
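The scoring step of the output layer (weighted fusion, softmax over the filtered candidates, top-k selection) can be sketched as follows; the fusion weights, embeddings, and candidate ids are random placeholders.

```python
import numpy as np

def top_k_pois(p_long, p_short, w1, w2, cand_emb, cand_ids, k=5):
    """Fuse long- and short-term preferences, score the filtered
    candidates with a dot product, and return the top-k POI ids."""
    p_u = w1 * p_long + w2 * p_short          # final user preference
    scores = cand_emb @ p_u                   # dot product per candidate
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax access probability
    order = np.argsort(-probs)[:k]
    return [cand_ids[i] for i in order]

rng = np.random.default_rng(3)
d, P = 8, 15                                  # P candidates from the filter
cand_emb = rng.normal(size=(P, d))
cand_ids = list(range(100, 100 + P))
top5 = top_k_pois(rng.normal(size=d), rng.normal(size=d), 0.6, 0.4,
                  cand_emb, cand_ids, k=5)
print(len(top5))   # 5
```

Since softmax is monotonic, ranking by probability is equivalent to ranking by the raw dot-product score; the normalization matters only for the probability interpretation.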
4.5. Network Training
To improve recommendation performance, we employ Bayesian personalized ranking (BPR) to define the loss functions for training the category, long-, and short-term preference modules [28]. The training data for each module consist of triplets sampled from the original data, each containing the user u and a pair of positive and negative samples. In the category module, the positive sample is the category that user u is currently accessing, and the negative samples are all other categories. In the long- and short-term preference modules, the positive sample is the POI that user u is currently accessing, and the negative samples are POIs close to the current check-in location, reflecting the influence of geographical coordinates on the user's check-ins.
The loss function of the category module is:

L_c = - Σ_{(u,c,c^-) ∈ D_c} ln σ(ŷ_{u,c,t} - ŷ_{u,c^-,t}),

where σ is the sigmoid function, c^- is a negative category for c, (u, c, c^-) ∈ D_c is a training example of the category module, ŷ_{u,c,t} is the predicted probability of user u visiting a POI of category c at time t, and ŷ_{u,c^-,t} is the predicted probability of user u visiting a POI of category c^- at time t.
The loss function of the long-term preference module is:

L_l = - Σ_{(u,v,v^-) ∈ D_l} ln σ(ŷ_{u,v,t} - ŷ_{u,v^-,t}),

where v^- is a negative sample for v in the long-term preference module and (u, v, v^-) ∈ D_l is a training example.
The loss function of the short-term preference module is:

L_s = - Σ_{(u,v,v^-) ∈ D_s} ln σ(ŷ_{u,v,t} - ŷ_{u,v^-,t}),

where v^- is a negative sample for v in the short-term preference module and (u, v, v^-) ∈ D_s is a training example.
In summary, we design the total loss function by integrating the loss functions of the three modules with a regularization term:

L = L_c + L_l + L_s + λ ||Θ||^2,

where λ is the regularization coefficient and Θ is the set of model parameters to learn. AdaGrad has proven effective in large-scale learning tasks, so it was employed to optimize the network parameters.
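The BPR objective shared by the three modules can be sketched as follows; the score arrays and parameter list are toy placeholders.

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores, params, lam=1e-4):
    """BPR pairwise loss: prefer each positive sample over its paired
    negative, plus L2 regularization over the parameter set."""
    sigma = 1 / (1 + np.exp(-(pos_scores - neg_scores)))
    rank_loss = -np.sum(np.log(sigma))            # -sum ln sigma(pos - neg)
    reg = lam * sum(np.sum(p ** 2) for p in params)
    return rank_loss + reg

pos = np.array([2.0, 1.5, 0.3])   # scores of observed POIs / categories
neg = np.array([0.5, 1.0, 0.8])   # scores of sampled negatives
params = [np.ones(4), np.ones((2, 2))]
loss = bpr_loss(pos, neg, params)
print(loss > 0)   # True
```

The loss shrinks as the margin between positive and negative scores grows, which is exactly the ranking behavior the training procedure optimizes with AdaGrad.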
4.6. Complexity Analysis
In the LSMA training process, the computational complexity of each of the category, long-, and short-term preference modules is O(d^2) per step, where d is the embedding size. Given training instances with an average category sequence length of n_c, an average history sequence length of n_l, and an average short-term sequence length of n_s, each training iteration has an overall complexity of O((n_c + n_l + n_s) d^2). That is, the complexity of LSMA is quadratic in the size of the embedding vector d.
5. Experiments
To verify the proposed method, we compared it with seven baselines on two public real-world check-in datasets from Foursquare, Charlotte (CHA) [17] and New York (NYC) [29]. All algorithms were coded in Python 3.8 with TensorFlow 2.3.1. The experiments were conducted on a computer with an AMD Ryzen 5 3500U CPU with Radeon Vega Mobile Gfx at 2.10 GHz and 16 GB of RAM.
5.1. Datasets
The check-in data of CHA were collected from January 2012 to December 2013, and those of NYC from April 2012 to February 2013. The CHA dataset includes 1580 users, 1791 POIs, and 20,939 check-in records; the NYC dataset includes 1083 users, 38,336 POIs, and 227,428 check-in records. In this study, each check-in record consists of the user, the POI, the geographical coordinates of the POI, the timestamp of the check-in, the category of the POI, and the day of the week of the check-in. Similar to the work of Zhang et al. [17], we removed consecutive check-ins at the same POI on the same day, keeping only the first, and deleted inactive users with fewer than eight check-ins. The first 90% of each user's check-ins were used as the training set and the last 10% as the test set.
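The two preprocessing rules (dropping consecutive same-day repeats of a POI and removing users with fewer than eight remaining check-ins) can be sketched as follows; the data layout of (POI id, day) pairs keyed by user is an assumption for illustration.

```python
def preprocess(trajectories, min_checkins=8):
    """Drop consecutive repeats of the same POI on the same day, then
    drop users with fewer than min_checkins remaining check-ins."""
    cleaned = {}
    for user, seq in trajectories.items():
        out = []
        for poi, day in seq:                  # (poi_id, day) pairs
            if out and out[-1] == (poi, day):
                continue                      # consecutive duplicate
            out.append((poi, day))
        if len(out) >= min_checkins:
            cleaned[user] = out
    return cleaned

traj = {"u1": [(5, "Sun"), (5, "Sun"), (2, "Sun")] * 4,
        "u2": [(1, "Mon"), (1, "Mon")]}
clean = preprocess(traj)
print(list(clean), len(clean["u1"]))   # ['u1'] 8
```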
5.2. Methods for Comparison
We demonstrated the effectiveness of the LSMA method compared to the following seven baseline methods:
PMF [12]: a recommendation algorithm based on conventional probabilistic matrix factorization of the user-POI matrix.
ST-RNN [7]: a next POI recommendation algorithm based on an RNN, which integrates spatio-temporal information into the latent vector.
Time-LSTM [16]: equips the LSTM with a time gate to model continuous user actions in order to predict the next check-in POI.
ATST-LSTM [19]: adds an attention mechanism on top of the LSTM network and comprehensively considers spatio-temporal contextual information to improve the effectiveness of the next POI prediction.
LSPL [25]: learns users' long- and short-term preferences by considering sequential information and the geographical location and category of the POI.
iMTL [17]: an interactive multi-task learning framework composed of a time-aware activity encoder, a spatially aware position preference encoder, and task-specific decoders, mainly addressing the next POI recommendation under uncertain check-in conditions.
RTPM [26]: combines long- and short-term preferences and introduces public interest into the short-term preference to study the user's real-time interest.
5.3. Evaluation Metrics
All the methods in this study compute the dot product between the user representation and the POI representation to obtain the probability of the user accessing the POI next; in effect, the methods differ in how they model the user representation. To evaluate effectiveness, the recall rate (Rec@k) and mean average precision (MAP@k) were defined as follows:

Rec@k = (1/|U|) Σ_u |R_u ∩ V_u| / |V_u|,
MAP@k = (1/|U|) Σ_u (1/|V_u|) Σ_{v ∈ V_u} 1/rank_u(v),

where R_u is the set of top-k POIs recommended to user u, V_u is the POI set actually accessed by the user at the next time in the test set, and rank_u(v) is the ranking of v in R_u. Note that, to avoid division errors, we specify 1/rank_u(v) = 0 when v ∉ R_u.
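The per-user computation of these two metrics can be sketched as follows, under the stated convention that 1/rank is 0 for ground-truth POIs missing from the top-k list.

```python
def recall_and_ap(recommended, actual):
    """Recall@k and average precision for one user. `recommended` is the
    ordered top-k list; `actual` is the set of POIs visited next."""
    hits = [v for v in actual if v in recommended]
    recall = len(hits) / len(actual)
    # 1/rank for each hit (ranks are 1-based); misses contribute 0
    ap = sum(1 / (recommended.index(v) + 1) for v in hits) / len(actual)
    return recall, ap

rec = [7, 3, 9, 1, 4]          # top-5 list for one user
actual = {3, 8}                # POIs actually visited next
r, ap = recall_and_ap(rec, actual)
print(r, ap)   # 0.5 0.25
```

Averaging these per-user values over all users yields Rec@k and MAP@k.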
5.4. Parameter Setting
In the short-term preference module, we use the latest S check-ins as the user's short-term check-in sequence. The value of S should be as small as possible in order to capture the user's recent interests and reduce computing time and space. Taking one indicator as an example, Figure 5a shows the performance of different sequence lengths S. Considering both performance and computational complexity, S is set to 6 as the length of the short-term trajectory sequence.
The LSMA model uses embedding vectors to represent all the user and POI information entering the model, and the embedding dimensions of the category, long-, and short-term preference modules should be unified. We set the embedding dimension equal to d, the number of hidden units. Figure 5b shows the performance of different embedding dimensions d. Similarly, we chose d = 128 as the embedding dimension considering performance and computational complexity.
The setting of the number of negative samples is also very important for model training. In the category module, the total number of POI categories is 12, so to ensure recommendation accuracy we directly use all categories other than the current POI category as negative samples; i.e., the number of negative samples for each category sequence in the category module is 11. However, the total number of POIs is large, so we cannot use all POIs other than the current check-in POI as negative samples. We therefore conducted experiments and found that the optimal number of negative samples in the long- and short-term preference modules was 5, as shown in Figure 5c.
In the output layer of the LSMA, in order to reduce computation and improve recommendation accuracy, we designed a filter mechanism with two hyperparameters: the number of most popular POIs and the number of POIs nearest to the user's current location. We conducted comparative experiments to explore the best settings of these two hyperparameters, as shown in Figure 6.
The remaining hyperparameters were set as follows: (1) the LSTM networks of the category module and the long-term preference module each had one layer, and the RNN of the short-term preference module also had one layer; (2) the learning rates of the three modules were 0.00001, 0.0001, and 0.0001, respectively; (3) the category module was trained for 40 iterations, while the long- and short-term preference modules were each trained for 20.
5.5. Results and Analysis
Table 1 and Table 2 show the performance of the different methods, listing the two evaluation indicators with k set to 5 and 10, respectively. The non-neural PMF clearly performed worst, below all the RNN-based baselines (ST-RNN, Time-LSTM, ATST-LSTM, iMTL, LSPL, RTPM), indicating that neural networks are very effective for modeling sequences. The recall and MAP values of Time-LSTM were higher than those of ST-RNN, which indicates that the LSTM outperforms the plain RNN in long-sequence modeling. Among the LSTM-based recommendation models, the RTPM performed best on both the CHA and NYC datasets, demonstrating the importance of considering both long- and short-term preferences and the effectiveness of filtering qualified POIs before recommending. However, the proposed LSMA outperformed the RTPM; for example, on the CHA dataset the RTPM scored 0.1569 where the LSMA scored 0.2838, an increase of 80.87%. This is mainly because the LSMA considers the users' long- and short-term preferences simultaneously; because it mines as much of the information contained in the check-in sequences, as well as the users' category-level movement patterns, as possible, modeling user behavior in more detail and from more aspects; and because it designs a multi-level attention mechanism to weigh each check-in attribute and the influence of each check-in comprehensively.
To quantify the contributions of the category module, the short-term preference module, the contextual attention mechanism, the temporal attention mechanism, and the POI filter, we designed five variants of the LSMA: (1) LSMA-C removes the category module, so users' preferences at the semantic level are no longer considered; (2) LSMA-S removes the short-term preference module, so the user's short-term preference is no longer considered; (3) LSMA-CA removes the contextual attention mechanism from the long-term preference module; (4) LSMA-TA removes the temporal attention mechanism from the long-term preference module; (5) LSMA-Filter removes the filter from the output layer.
Figure 7 illustrates the performance of the LSMA compared to the five variants. The LSMA outperformed all of its variants in recall and MAP. Across the evaluation indicators, LSMA-C performed better than LSMA-S, LSMA-CA, LSMA-Filter, and LSMA-TA overall, which indicates that the short-term preference module and the attention mechanisms play an important role in the LSMA, with the category module assisting the long- and short-term modules. In addition, LSMA-S was the worst variant across the indicators, confirming the strong influence of short-term preference on the user's check-in behavior. On the CHA dataset, the scores of LSMA-C, LSMA-S, LSMA-CA, LSMA-TA, and LSMA-Filter were 0.3225, 0.2439, 0.3030, 0.3236, and 0.3508, respectively, while the LSMA achieved 0.4135, corresponding to improvements of 28.22%, 69.54%, 36.47%, 27.78%, and 17.87%. The necessity of the temporal and contextual attention mechanisms can be inferred from this. In summary, all five components are indispensable, and together they enable the LSMA to achieve a significant performance improvement.
6. Conclusions
In this paper, we proposed the LSMA, a next POI recommendation algorithm that models the user's long- and short-term preferences based on multi-level attention. Specifically, the LSMA uses a category module to capture users' category transition preferences, which participate as an auxiliary signal in the check-in representations of the long- and short-term preference modules. The long-term preference module derives users' long-term POI preferences with an LSTM network and the multi-level attention mechanism, while the short-term preference module derives users' short-term POI preferences with an RNN and the temporal attention mechanism. Moreover, by focusing on the key attributes of each check-in and the key time steps of the check-in sequence, the multi-level attention mechanism fully mines users' movement behavior patterns. The experimental results showed that the LSMA outperformed the seven comparative methods for the next POI recommendation.
In the future, we will continue to optimize the LSMA model by considering users' comment information and will further study privacy protection for the next POI recommendation.