Author Contributions
Conceptualization, Y.Z.; formal analysis, Z.Z.; funding acquisition, Z.G.; investigation, X.Z. and Z.G.; methodology, Y.Z.; resources, S.W.; software, E.Z., S.W. and Z.Z.; validation, L.Z., E.Z. and S.W.; visualization, X.Z.; writing—original draft, Z.H.; writing—review and editing, L.W. All authors have read and agreed to the published version of the manuscript.
Figure 1.
An example of vessel trajectory prediction. The black ships denote the historical AIS trajectories. The grey ships denote the prediction results, while the blue ships denote the actual trajectories.
Figure 2.
PESO is composed of a Semantic Location Vector (SLV), Parallel Encoders, and a Ship-Oriented Decoder. The SLV of a particular ship represents the spatial correlation of its trajectories. The Parallel Encoders are designed to capture more features by using two different encoders, which include the Location Encoder and the Sailing Status Encoder. The Ship-Oriented Decoder is designed to utilize the SLV to guide the decoding process.
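The data flow described in this caption can be sketched as follows. This is an illustrative pure-Python stub, not the authors' implementation: the encoder and decoder internals are replaced by simple averaging, the fusion step is stood in for by concatenation, and all function names and feature choices are hypothetical (only the 8-dimensional SLV size comes from the paper).

```python
# Hypothetical sketch of PESO's data flow (not the authors' code):
# two parallel encoders process location and sailing-status features
# separately, and the decoder is guided by the Semantic Location Vector.

def location_encoder(locations):
    """Encode a (lat, lon) sequence into a summary vector (stub)."""
    n = len(locations)
    return [sum(p[i] for p in locations) / n for i in range(2)]

def sailing_status_encoder(statuses):
    """Encode a (SOG, COG, distance) sequence into a summary vector (stub)."""
    n = len(statuses)
    return [sum(s[i] for s in statuses) / n for i in range(3)]

def ship_oriented_decoder(context, slv, steps):
    """Decode `steps` future points from a context guided by the SLV (stub)."""
    guided = context + slv  # concatenation stands in for the fusion step
    return [guided for _ in range(steps)]

# Toy inputs: 3 historical AIS points.
locs = [(30.1, 122.0), (30.2, 122.1), (30.3, 122.2)]
stats = [(12.0, 45.0, 0.5), (12.5, 46.0, 0.6), (13.0, 47.0, 0.7)]
slv = [0.1] * 8  # 8-dimensional SLV, as in the paper

context = location_encoder(locs) + sailing_status_encoder(stats)
preds = ship_oriented_decoder(context, slv, steps=5)
print(len(preds), len(preds[0]))  # 5 decoded steps, each carrying the guided context
```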
Figure 3.
Workflow for obtaining the Semantic Location Vector. First, we divide the sea area covered by our dataset into grid cells of 0.1° latitude by 0.1° longitude. Then, we use a sliding window to construct a training set for a CBOW model. After training, this model maps every grid cell to an 8-dimensional semantic vector. Finally, after collecting all the grid cells visited by a vessel, we average their semantic vectors to obtain the Semantic Location Vector (SLV).
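The grid-division and averaging steps of this workflow can be sketched as follows. This is a hedged illustration, not the authors' code: the trained CBOW model is replaced by a toy lookup table from grid cell to 8-dimensional vector, since only the grid division and the final averaging are fully specified here.

```python
# Sketch of the SLV workflow (assumption: the CBOW embedding is
# replaced by a toy lookup table for illustration).

def grid_id(lat, lon, cell=0.1):
    """Map a position to its 0.1-degree x 0.1-degree grid cell."""
    return (int(lat // cell), int(lon // cell))

def semantic_location_vector(track, embed):
    """Average the semantic vectors of all grid cells a vessel visited."""
    cells = [grid_id(lat, lon) for lat, lon in track]
    vecs = [embed[c] for c in cells]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Toy vessel track and a stand-in embedding table (hypothetical values).
track = [(30.04, 122.11), (30.07, 122.18), (30.12, 122.21)]
embed = {grid_id(la, lo): [float(k)] * 8 for k, (la, lo) in enumerate(track)}

slv = semantic_location_vector(track, embed)
print(len(slv))  # one 8-dimensional SLV per vessel
```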
Figure 4.
The details of training loss and testing loss during the training process. The X-axis represents the number of training epochs, and the Y-axis represents the RMSE loss value.
Figure 5.
The visual comparison of different baselines and PESO. The difficulty of prediction increases gradually from (a–c). PESO outperforms other models in all scenarios.
Figure 6.
The visual comparison of different structures of PESO. The difficulty of prediction increases gradually from (a–c). As we can see from the figures, PESO (with LSTM–LSTM) can obtain the best prediction results.
Figure 7.
The influence of the sailing status information. The yellow and green lines are prediction results of PESO with and without sailing status information, respectively.
Figure 8.
The influence of the COG information. The yellow and green lines are prediction results of PESO with and without COG information, respectively.
Figure 9.
The influence of the SOG information. The yellow and green lines are prediction results of PESO with and without SOG, respectively.
Figure 10.
The influence of the distance information. The yellow and green lines are prediction results of PESO with and without distance information, respectively.
Figure 11.
The influence of the SLV. The yellow and green lines are the prediction results of PESO with and without the SLV, respectively.
Table 1.
Explorations on the number of LSTM layers under the metrics of RMSE, MAE, ADE, and FDE. Note that "1 layer" means that both the encoder and the decoder in PESO use one LSTM layer.
| Model | RMSE | MAE | ADE | FDE |
|---|---|---|---|---|
| 1 layer | 0.000525 | 0.000369 | 0.000581 | 0.000865 |
| 2 layers | 0.000614 | 0.000452 | 0.000699 | 0.001008 |
| 3 layers | 0.000476 | 0.000325 | 0.000565 | 0.000774 |
| 4 layers | 0.000469 | 0.000316 | 0.000542 | 0.000695 |
| 5 layers | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
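The four metrics reported throughout these tables can be computed as follows, using their common trajectory-prediction definitions (a sketch; the paper's exact normalization may differ): RMSE and MAE are taken over all predicted coordinates, ADE is the mean Euclidean distance between predicted and true track points, and FDE is the Euclidean distance at the final point.

```python
import math

# Common definitions of the four metrics for trajectory prediction
# (assumed here; the paper's exact formulas may differ slightly).

def rmse(pred, true):
    """Root mean squared error over all coordinates."""
    errs = [(p - t) ** 2 for pp, tt in zip(pred, true) for p, t in zip(pp, tt)]
    return math.sqrt(sum(errs) / len(errs))

def mae(pred, true):
    """Mean absolute error over all coordinates."""
    errs = [abs(p - t) for pp, tt in zip(pred, true) for p, t in zip(pp, tt)]
    return sum(errs) / len(errs)

def ade(pred, true):
    """Average displacement error: mean Euclidean distance per track point."""
    dists = [math.dist(pp, tt) for pp, tt in zip(pred, true)]
    return sum(dists) / len(dists)

def fde(pred, true):
    """Final displacement error: Euclidean distance at the last track point."""
    return math.dist(pred[-1], true[-1])

# Toy predicted and ground-truth (lat, lon) trajectories.
pred = [(30.10, 122.00), (30.20, 122.10)]
true = [(30.10, 122.01), (30.21, 122.10)]
print(rmse(pred, true), mae(pred, true), ade(pred, true), fde(pred, true))
```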
Table 2.
Comparison results with baselines under the metric of RMSE. Here, 10→5 denotes the RMSE when predicting 5 future track points from 10 historical track points.
| Model Name | 10→1 | 10→2 | 10→3 | 10→4 | 10→5 |
|---|---|---|---|---|---|
| LSTM | 0.000389 | 0.000539 | 0.000728 | 0.000929 | 0.001130 |
| BiLSTM | 0.000499 | 0.000643 | 0.000823 | 0.001015 | 0.001210 |
| GRU | 0.000395 | 0.000542 | 0.000730 | 0.000929 | 0.001130 |
| BiGRU | 0.000434 | 0.000570 | 0.000750 | 0.000944 | 0.001141 |
| LSTM-LSTM | 0.000380 | 0.000499 | 0.000658 | 0.000844 | 0.001054 |
| GRU-GRU | 0.000326 | 0.000559 | 0.000848 | 0.001163 | 0.001494 |
| BiGRU-GRU | 0.000730 | 0.000819 | 0.000938 | 0.001136 | 0.001447 |
| BiLSTM-LSTM | 0.000571 | 0.000596 | 0.000662 | 0.000747 | 0.000864 |
| PESO | 0.000333 | 0.000351 | 0.000378 | 0.000417 | 0.000466 |
Table 3.
Comparison results with baselines under the metric of MAE. Here, 10→5 denotes the MAE when predicting 5 future track points from 10 historical track points.
| Model Name | 10→1 | 10→2 | 10→3 | 10→4 | 10→5 |
|---|---|---|---|---|---|
| LSTM | 0.000319 | 0.000415 | 0.000532 | 0.000656 | 0.000780 |
| BiLSTM | 0.000380 | 0.000478 | 0.000593 | 0.000714 | 0.000836 |
| GRU | 0.000324 | 0.000419 | 0.000532 | 0.000653 | 0.000774 |
| BiGRU | 0.000341 | 0.000430 | 0.000541 | 0.000660 | 0.000781 |
| LSTM-LSTM | 0.000299 | 0.000379 | 0.000483 | 0.000605 | 0.000740 |
| GRU-GRU | 0.000257 | 0.000403 | 0.000581 | 0.000774 | 0.000978 |
| BiGRU-GRU | 0.000579 | 0.000640 | 0.000718 | 0.000841 | 0.001021 |
| BiLSTM-LSTM | 0.000458 | 0.000470 | 0.000510 | 0.000559 | 0.000623 |
| PESO | 0.000259 | 0.000267 | 0.000283 | 0.000303 | 0.000327 |
Table 4.
Comparison results with baselines under the metric of ADE. Here, 10→5 denotes the ADE when predicting 5 future track points from 10 historical track points.
| Model Name | 10→1 | 10→2 | 10→3 | 10→4 | 10→5 |
|---|---|---|---|---|---|
| LSTM | 0.000493 | 0.000651 | 0.000842 | 0.001043 | 0.001245 |
| BiLSTM | 0.000611 | 0.000769 | 0.000956 | 0.001152 | 0.001350 |
| GRU | 0.000495 | 0.000650 | 0.000839 | 0.001037 | 0.001238 |
| BiGRU | 0.000528 | 0.000677 | 0.000860 | 0.001056 | 0.001254 |
| LSTM-LSTM | 0.000462 | 0.000587 | 0.000748 | 0.000935 | 0.001139 |
| GRU-GRU | 0.000399 | 0.000649 | 0.000953 | 0.001290 | 0.001649 |
| BiGRU-GRU | 0.000914 | 0.000995 | 0.001096 | 0.001256 | 0.001499 |
| BiLSTM-LSTM | 0.000740 | 0.000753 | 0.000818 | 0.000900 | 0.001007 |
| PESO | 0.000412 | 0.000429 | 0.000453 | 0.000484 | 0.000523 |
Table 5.
Comparison results with time-series forecasting models under the four metrics of RMSE, MAE, ADE, and FDE. The experiments were conducted in the most typical scenario in this paper: predicting the following 5 track points from the previous 10.
| Model Name | RMSE | MAE | ADE | FDE |
|---|---|---|---|---|
| PESO | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
| ARIMA | 0.001977 | 0.001675 | 0.002708 | 0.003318 |
| Kalman Filter | 0.000783 | 0.000643 | 0.000989 | 0.001664 |
| VAR | 0.004251 | 0.002924 | 0.004701 | 0.010152 |
| ST-Norm | 0.000992 | 0.000720 | 0.001133 | 0.001498 |
Table 6.
Exploration results on different Seq2Seq structures.
| Model Name | Enc | Dec | RMSE | MAE | ADE | FDE |
|---|---|---|---|---|---|---|
| PESO | LSTM | LSTM | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
| PESO-GRU-GRU | GRU | GRU | 0.000552 | 0.000399 | 0.000646 | 0.000817 |
| PESO-BiGRU-GRU | BiGRU | GRU | 0.000531 | 0.000387 | 0.000617 | 0.000766 |
| PESO-BiLSTM-LSTM | BiLSTM | LSTM | 0.000511 | 0.000380 | 0.000562 | 0.000704 |
Table 7.
Quantitative results on each track point under the metric of RMSE.
| Model Name | First | Second | Third | Fourth | Fifth |
|---|---|---|---|---|---|
| LSTM | 0.000389 | 0.000649 | 0.001000 | 0.001359 | 0.001710 |
| BiLSTM | 0.000499 | 0.000757 | 0.001092 | 0.001441 | 0.001785 |
| GRU | 0.000395 | 0.000651 | 0.001000 | 0.001358 | 0.001708 |
| BiGRU | 0.000434 | 0.000672 | 0.001011 | 0.001366 | 0.001712 |
| LSTM-LSTM | 0.000380 | 0.000592 | 0.000893 | 0.001245 | 0.001639 |
| GRU-GRU | 0.000326 | 0.000719 | 0.001235 | 0.001801 | 0.002396 |
| BiGRU-GRU | 0.000730 | 0.000897 | 0.001133 | 0.001574 | 0.002280 |
| BiLSTM-LSTM | 0.000571 | 0.000618 | 0.000772 | 0.000952 | 0.001210 |
| PESO | 0.000333 | 0.000367 | 0.000425 | 0.000511 | 0.000620 |
Table 8.
Quantitative results on each track point under the metric of MAE.
| Model Name | First | Second | Third | Fourth | Fifth |
|---|---|---|---|---|---|
| LSTM | 0.000319 | 0.000511 | 0.000767 | 0.001026 | 0.001276 |
| BiLSTM | 0.000380 | 0.000577 | 0.000823 | 0.001075 | 0.001322 |
| GRU | 0.000324 | 0.000513 | 0.000760 | 0.001013 | 0.001261 |
| BiGRU | 0.000341 | 0.000519 | 0.000763 | 0.001017 | 0.001266 |
| LSTM-LSTM | 0.000299 | 0.000458 | 0.000693 | 0.000971 | 0.001279 |
| GRU-GRU | 0.000257 | 0.000549 | 0.000937 | 0.001354 | 0.001795 |
| BiGRU-GRU | 0.000579 | 0.000701 | 0.000874 | 0.001211 | 0.001738 |
| BiLSTM-LSTM | 0.000458 | 0.000482 | 0.000589 | 0.000709 | 0.000877 |
| PESO | 0.000259 | 0.000278 | 0.000314 | 0.000362 | 0.000426 |
Table 9.
Quantitative results on each track point under the metric of FDE.
| Model Name | First | Second | Third | Fourth | Fifth |
|---|---|---|---|---|---|
| LSTM | 0.000493 | 0.000809 | 0.001225 | 0.001646 | 0.002052 |
| BiLSTM | 0.000611 | 0.000927 | 0.001330 | 0.001740 | 0.002140 |
| GRU | 0.000495 | 0.000805 | 0.001216 | 0.001634 | 0.002040 |
| BiGRU | 0.000528 | 0.000825 | 0.001228 | 0.001643 | 0.002046 |
| LSTM-LSTM | 0.000462 | 0.000711 | 0.001071 | 0.001494 | 0.001955 |
| GRU-GRU | 0.000399 | 0.000899 | 0.001561 | 0.002299 | 0.003087 |
| BiGRU-GRU | 0.000914 | 0.001077 | 0.001296 | 0.001737 | 0.002469 |
| BiLSTM-LSTM | 0.000740 | 0.000766 | 0.000947 | 0.001147 | 0.001436 |
| PESO | 0.000412 | 0.000446 | 0.000501 | 0.000578 | 0.000681 |
Table 10.
Ablation studies under the metrics of RMSE, MAE, ADE, and FDE.
| Ablation | RMSE | MAE | ADE | FDE |
|---|---|---|---|---|
| PESO | 0.000466 | 0.000327 | 0.000523 | 0.000681 |
| | 0.000801 | 0.000576 | 0.000921 | 0.001344 |
| | 0.000554 | 0.000385 | 0.000621 | 0.000841 |
| | 0.000486 | 0.000342 | 0.000540 | 0.000728 |
| | 0.000474 | 0.000343 | 0.000542 | 0.000711 |
| | 0.000492 | 0.000355 | 0.000563 | 0.000719 |