In this section, we first describe the benchmark instances used for the problem. We then report the parameter settings adopted in our experimental study and analyze the effectiveness of the developed algorithm through comprehensive computational experiments. Finally, we conduct a sensitivity analysis and derive management insights.
The algorithm was implemented in MATLAB R2018b. All experiments were performed on a PC with an Intel Core i5 CPU clocked at 2.40 GHz and 8 GB of memory, running 64-bit Windows 10 Professional.
5.1. Dataset Generation
The proposed SMMAMRRPTW model with stochastic demands and stochastic travel times has not been addressed in previous research, so no comparative algorithms or benchmarks are available. We therefore generate our problem instances from the well-known Solomon dataset for the VRPTW with appropriate modifications. The Solomon instances are grouped into three types: in type C, customers are clustered; in type R, customers are randomly distributed; and in type RC, the customer distribution is mixed. Four instances are chosen from each type, giving a total of 12 test instances: C101, C102, C201, C202, R101, R102, R201, R202, RC101, RC102, RC201, and RC202.
We used the same vehicle capacity and customer and depot locations as in Solomon. The travel time of each edge is changed to a random variable whose mean is the product of the Euclidean distance between the two endpoints and the travel speed, with a corresponding variance. The customer demands in Solomon serve as the means of the stochastic demands in our problem, each with a corresponding variance. Furthermore, we assume a service time of 2 s per demand unit plus a fixed bias of 10 s, so that serving a single demand unit takes approximately 12 s. The confidence levels for meeting the vehicle capacity constraint and the time window constraint in SMMAMRRPTW are both set to 0.95.
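The instance modification described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact code: the speed value and the variance rule (`var_ratio`) are assumptions introduced only for the sketch.

```python
import math

SPEED = 1.0  # assumed travel speed (not specified here)

def stochastic_edge(p1, p2, var_ratio=0.1):
    """Return (mean, variance) of an edge's stochastic travel time.

    The mean follows the text: Euclidean distance between the two
    endpoints times the travel speed. The variance rule (a fraction
    of the mean) is an illustrative assumption.
    """
    dist = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    mean = dist * SPEED
    return mean, var_ratio * mean

def stochastic_demand(solomon_demand, var_ratio=0.1):
    """Use the Solomon demand as the mean; the variance rule is assumed."""
    return solomon_demand, var_ratio * solomon_demand

def service_time(demand_units):
    """Service time: 2 s per demand unit plus a fixed 10 s bias."""
    return 2 * demand_units + 10
```

With these settings, a customer with a single demand unit is served in 12 s, consistent with the figure quoted above.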
5.2. Parameter Setting
To determine appropriate parameter settings, we used the Taguchi method to tune the parameters of the PTS algorithm. The Taguchi method employs orthogonal arrays to study a large number of decision variables with a small number of experiments [35]. It transforms the repetition data into a signal-to-noise (S/N) ratio, where the 'signal' denotes the response variable and the 'noise' denotes the standard deviation; the S/N ratio thus indicates the amount of variation present in the response variable. The aim of the Taguchi method is to maximize the S/N ratio [35].
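As an illustration, for a minimization objective the standard smaller-is-better variant of the S/N ratio is typically used; which variant the study applied is not stated here, so this sketch assumes that one.

```python
import math

def sn_ratio_smaller_is_better(responses):
    """Smaller-is-better S/N ratio: -10 * log10(mean(y_i^2)).

    'responses' are the repeated objective values for one parameter
    combination; a larger S/N ratio indicates less variation and
    smaller objective values.
    """
    mean_sq = sum(y * y for y in responses) / len(responses)
    return -10.0 * math.log10(mean_sq)
```

For each parameter, the level with the highest mean S/N ratio across the experiments is selected, which is what the plot in Figure 6 summarizes.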
Table 2 shows the parameters of the PTS algorithm. For each parameter, three levels were considered. Ten runs were performed on the twelve instances, and Minitab 22 was used to produce the mean S/N ratio plot for the PTS algorithm shown in Figure 6. Based on this plot, the optimum level of each parameter was obtained (Table 2).
Finally, to determine the portion of the population to be filled, a set of runs with different percentages (20%, 50%, 80%, and 100%) was performed on the six instances C101, C201, R101, R201, RC101, and RC201. Here, the percentage denotes the proportion of the population filled with the current best solution of the TS algorithm within the PTS algorithm, while the remaining portion is filled randomly to maintain diversity. Figure 7 shows the convergence curves of the six instances at the four proportions. It can be seen that using the current best solution generated at each TS iteration as a chromosome to fill the entire population (100%) allows the PTS algorithm to converge to a better solution.
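The population-seeding step can be sketched as below. The function name and the permutation encoding of chromosomes are assumptions for illustration; the paper's actual encoding may differ.

```python
import random

def fill_population(best_solution, pop_size, p, num_customers, rng=random):
    """Seed a fraction p of the population with the current best TS solution.

    The remaining chromosomes are random customer permutations, which
    preserves diversity when p < 1.
    """
    n_best = round(p * pop_size)
    population = [list(best_solution) for _ in range(n_best)]
    while len(population) < pop_size:
        chrom = list(range(1, num_customers + 1))
        rng.shuffle(chrom)
        population.append(chrom)
    return population
```

With p = 1.0 every chromosome is a copy of the TS incumbent, the setting that Figure 7 shows converging to the best solutions.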
5.3. Performance of the PTS Algorithm
In this section, we conduct a series of experiments on the 12 instances to assess the effectiveness and efficiency of the PTS algorithm and the model. Specifically, the performance of the Gk algorithm for generating initial solutions is validated by comparing it against the k-means and Greedy insertion algorithms. We then compare PTS with the TS and GA algorithms and finally analyze the quality of service delivered to patients.
We first compared Gk with the k-means and Greedy insertion algorithms for generating initial solutions. Each instance was run ten times, and the comparison results are reported in Table 3, where Obj_avg and Obj_best denote, for each algorithm, the average and best objectives over the ten runs, and Obj denotes the objective of the Greedy insertion algorithm. It is worth noting that the Greedy insertion algorithm is deterministic and generates the same result on every run. The percentage improvements Gap_k and Gap_g of the average objective of the Gk algorithm relative to the average objectives of the k-means and Greedy insertion algorithms are defined as Gap = (Obj_avg(baseline) − Obj_avg(Gk)) / Obj_avg(baseline) × 100%.
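This percentage improvement (baseline average objective minus Gk average objective, divided by the baseline average) is a one-liner; the sample figures below are the C101 and RC101 rows of Table 3.

```python
def improvement_over(baseline_avg, gk_avg):
    """Percentage improvement of Gk over a baseline average objective.

    Positive values mean Gk produced a better (smaller) average objective;
    negative values mean the baseline was better.
    """
    return (baseline_avg - gk_avg) / baseline_avg * 100.0

c101_vs_kmeans = improvement_over(2_610_782, 2_045_910.08)   # about 21.6
rc101_vs_kmeans = improvement_over(785_494.6, 830_637.94)    # negative: k-means wins
```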
We see that in most cases, Gk outperforms the k-means and Greedy insertion algorithms. We also observe that the k-means method yields favorable clustering outcomes on the type RC1 instances. This can be explained by the customer distribution characteristics and the width of the time windows of the RC1 instances: the k-means method can effectively cluster instances with mixed spatial distributions, and because the time windows of the RC1 instances are relatively narrow, the advantage of the Greedy insertion algorithm, which exploits the width of time windows, becomes less pronounced. Conversely, for the type C2 instances with clustered customers and wide time windows, the Greedy insertion algorithm obtains better initial solutions.
Table 3.
Comparison results of Gk, k-means, and Greedy insertion algorithms for solving initial solutions.
Instance | Gk Obj_avg | Gk Obj_best | k-Means Obj_avg | k-Means Obj_best | Gap_k (%) | Greedy Insertion Obj | Gap_g (%) |
---|
C101 | 2,045,910.08 | 1,863,415 | 2,610,782 | 2,417,649 | 21.63 | 2,617,170 | 21.82 |
C102 | 1,254,631.38 | 1,107,483 | 1,655,803 | 1,434,495 | 24.22 | 1,870,543 | 32.92 |
C201 | 12,085,514.7 | 8,324,882 | 14,809,985 | 11,851,861 | 18.39 | 7,814,317 | −54.65 |
C202 | 9,612,167.96 | 8,365,007 | 12,044,963 | 9,850,724 | 20.19 | 9,311,346 | −3.23 |
R101 | 938,552.54 | 884,402.9 | 959,881.3 | 830,621 | 2.22 | 1,166,671 | 19.55 |
R102 | 742,706.32 | 686,893 | 797,058 | 739,009.9 | 6.81 | 953,384.5 | 22.09 |
R201 | 5,093,775.34 | 3,129,344 | 7,378,546 | 6,652,661 | 30.96 | 5,823,723 | 12.53 |
R202 | 4,614,830.48 | 2,115,929 | 6,124,538 | 5,632,099 | 24.65 | 6,014,803 | 23.27 |
RC101 | 830,637.94 | 718,604 | 785,494.6 | 734,688.3 | −5.74 | 1,015,725 | 18.22 |
RC102 | 693,004.58 | 628,846.4 | 666,737.6 | 576,412 | −3.93 | 929,352.6 | 25.43 |
RC201 | 4,923,085.68 | 3,848,542 | 5,916,827 | 5,416,624 | 16.79 | 5,317,144 | 7.41 |
RC202 | 3,916,137.13 | 2,559,224 | 5,231,515 | 4,386,069 | 25.14 | 5,463,542 | 28.32 |
Average | 3,895,912.85 | 2,852,714.40 | 4,915,177.43 | 4,210,242.97 | 20.73 | 4,024,810.2 | 3.20 |
We then compared PTS with the TS and GA algorithms on the SMMAMRRPTW problem. The comparative results are provided in Table 4, where we report, for each of the three algorithms, the best objective Obj_best, the average objective Obj_avg, and the computation time (CPU) over ten runs, together with the Gap values. The results clearly show that our proposed PTS algorithm is markedly efficient. The relative percentage deviations Gap_TS and Gap_GA between the average objective values obtained by the TS and GA algorithms and that obtained by the PTS algorithm are calculated as Gap_TS = (Obj_avg(TS) − Obj_avg(PTS)) / Obj_avg(PTS) × 100% and Gap_GA = (Obj_avg(GA) − Obj_avg(PTS)) / Obj_avg(PTS) × 100%, respectively.
A larger Gap value indicates better performance of the PTS algorithm. Indeed, in almost all instances, both the best and average objectives are achieved by the PTS algorithm. The average values of Obj_best and Obj_avg under the TS algorithm are 1,604,411 and 1,855,859, respectively, while under PTS they are 1,481,096 and 1,707,478, showing that PTS improves the average objective by 8.69%. This highlights the benefit of incorporating the population into the TS framework. Similarly, comparing PTS with the GA algorithm, the addition of the tabu list and aspiration rule improves the average objective by 35.6%, demonstrating the benefit of incorporating these mechanisms into GA. In terms of runtime, the PTS algorithm requires the longest CPU time of the three, and GA the shortest, since each PTS iteration not only applies the tabu mechanism to the current neighborhood solutions but also performs crossover and mutation on the population, which requires additional computation.
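The relative percentage deviation of the TS and GA average objectives from the PTS average objective is computed as shown below; the sample figures are the C101 row and the overall-average row of Table 4.

```python
def gap_vs_pts(other_avg, pts_avg):
    """Relative deviation (%) of a competitor's average objective from PTS.

    Positive values mean PTS found better (smaller) average objectives.
    """
    return (other_avg - pts_avg) / pts_avg * 100.0

c101_gap_ts = gap_vs_pts(1_122_640, 934_713.9)    # about 20.1 (C101 row)
avg_gap_ts = gap_vs_pts(1_855_859, 1_707_478)     # about 8.69 (average row)
```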
Table 4.
Comparison results of TS, GA, and PTS algorithms for solving SMMAMRRPTW.
Instance | TS Obj_best | TS Obj_avg | TS CPU (s) | GA Obj_best | GA Obj_avg | GA CPU (s) | PTS Obj_best | PTS Obj_avg | PTS CPU (s) | Gap_TS (%) | Gap_GA (%) |
---|
C101 | 1,018,859 | 1,122,640 | 41.68 | 1,339,227 | 1,465,171 | 20.41 | 794,186.5 | 934,713.9 | 70.44 | 20.1 | 56.7 |
C102 | 544,306.6 | 660,273.5 | 40.60 | 850,956.7 | 887,624.5 | 20.36 | 383,453.1 | 510,149.9 | 89.07 | 29.4 | 74.0 |
C201 | 5,875,695 | 6,200,411 | 36.04 | 6,794,412 | 7,638,171 | 14.80 | 5,300,789 | 5,857,005 | 80.19 | 5.9 | 30.4 |
C202 | 2,707,980 | 4,023,706 | 43.43 | 3,946,166 | 4,838,110 | 16.86 | 3,032,538 | 3,568,176 | 77.69 | 12.8 | 35.6 |
R101 | 601,024.8 | 630,914.4 | 44.75 | 777,221.5 | 805,799.9 | 18.74 | 564,001.8 | 615,274.9 | 81.56 | 2.54 | 31.0 |
R102 | 462,079 | 496,479.4 | 41.56 | 605,469.9 | 639,811.3 | 20.42 | 428,810.9 | 459,041.6 | 70.80 | 8.16 | 39.4 |
R201 | 2,041,798 | 2,366,187 | 42.13 | 2,624,790 | 2,975,805 | 16.96 | 1,886,346 | 2,267,428 | 68.81 | 4.36 | 31.2 |
R202 | 1,626,803 | 1,862,737 | 38.31 | 1,915,175 | 2,140,760 | 16.53 | 1,518,544 | 1,679,639 | 71.34 | 10.9 | 27.5 |
RC101 | 535,131.9 | 572,251 | 48.61 | 652,803.2 | 709,288.6 | 20.72 | 484,415.7 | 548,133.4 | 72.90 | 4.4 | 29.4 |
RC102 | 445,098.8 | 480,440.4 | 47.07 | 567,694.6 | 587,258.1 | 19.31 | 415,750.2 | 443,555.7 | 72.34 | 8.32 | 32.4 |
RC201 | 1,888,989 | 2,147,014 | 43.4 | 2,682,160 | 2,991,024 | 17.10 | 1,689,098 | 2,106,756 | 72.36 | 1.91 | 42.0 |
RC202 | 1,505,167 | 1,707,251 | 44.33 | 1,775,964 | 2,111,596 | 17.25 | 1,275,220 | 1,499,864 | 70.56 | 13.8 | 40.8 |
Average | 1,604,411 | 1,855,859 | 42.66 | 2,044,337 | 2,315,868 | 18.29 | 1,481,096 | 1,707,478 | 74.84 | 8.69 | 35.6 |
Overall, the detailed analysis reveals that the PTS algorithm finds high-quality objective values. Moreover, incorporating population crossover and mutation operations into the TS algorithm significantly improves solution quality, which markedly boosts the scheduling performance of AMRs, particularly in complex routing scenarios.
5.4. Sensitivity Analysis and Management Insights
The CCP incorporates management decisions by controlling confidence levels when solving the SMMAMRRPTW problem. Our next experiment analyzes the impact of the confidence level of the capacity constraint on the total delay service time (TDS) of the AMRs and on the number of AMRs used. We compare five confidence levels, as shown in Figure 8.
From Figure 8a,b, we can see that increasing the confidence level tends to reduce the delayed service of the AMRs, thereby improving the quality of service for patients. The effect is especially evident for the type C1, R1, and RC1 instances (Figure 8a). Similarly, increasing the confidence level leads to greater AMR usage (Figure 8c,d), which is again more pronounced for the type C1, R1, and RC1 instances (Figure 8c). This can be explained by the smaller loading capacity of the AMRs in the C1, R1, and RC1 instances compared with the larger capacity in the C2, R2, and RC2 instances. For instances with smaller capacity, raising the confidence level tightens the capacity constraint; to satisfy it, more AMRs are deployed, which in turn reduces the cost of delayed services.
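The tightening effect of the confidence level can be illustrated with a common deterministic-equivalent form of the capacity chance constraint. This sketch assumes normally distributed demands, which may differ from the paper's exact formulation; the demand figures are invented for illustration.

```python
import math
from statistics import NormalDist

def capacity_chance_ok(means, variances, capacity, alpha=0.95):
    """Check a route's capacity chance constraint at confidence alpha.

    Under a normal approximation, the route is feasible iff
    sum(mu) + z_alpha * sqrt(sum(sigma^2)) <= capacity,
    where z_alpha is the standard-normal quantile.
    """
    z = NormalDist().inv_cdf(alpha)            # grows with alpha
    safety_buffer = z * math.sqrt(sum(variances))
    return sum(means) + safety_buffer <= capacity
```

Because the safety buffer grows with the confidence level, a route that is feasible at 0.95 can become infeasible at 0.99, so fewer customers fit per route and more AMRs must be deployed, matching the behavior in Figure 8c,d.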
Based on the analysis above, we offer several management insights for hospitals regarding the scheduling and deployment of AMRs. Firstly, hospital managers should establish higher confidence levels to ensure prompt deliveries, particularly in critical areas such as medication and document transport. Secondly, adjusting confidence levels based on real demand and capacity constraints can significantly enhance operational efficiency. This approach ensures optimal utilization of AMRs and improves overall performance. Lastly, for hospitals with frequent high service demands, investing in AMRs with larger capacities can be advantageous. This strategy reduces the frequency of trips and addresses capacity constraints with fewer AMRs, leading to long-term cost savings.