*4.5. Algorithm Effectiveness Comparison*

In this section, we propose path-point generation rules with an exploration strategy adapted to multi-drone logistics operations in urban environments and verify the effectiveness of our algorithm. The A\* algorithm [55] is a standard path-planning algorithm that limits the actions a drone may select at each move. Both the A\* algorithm and the algorithm proposed in this paper are node-based optimization algorithms, but the A\* algorithm performs a local search: it restricts the candidate nodes to the neighborhood of the current location when selecting the next one. In contrast, our algorithm performs a global search, probabilistically selecting any point within the map through its heuristic factors. The A\* algorithm therefore serves as a natural benchmark for comparison. In addition, we choose the genetic algorithm as a second comparison algorithm because it treats complete paths as individuals and selects those with higher fitness through an individual fitness function; this matches the global search strategy used in our node selection, so the comparison demonstrates the effectiveness of our method more clearly.

Under the A\* algorithm, at each step of path planning the drone chooses one of a fixed number of equally distributed directions and moves one unit step. In our experiments we set eight directions, so the drone has eight candidate nodes $p\_i$ to choose from at each step. To apply the A\* algorithm in the constructed environment, we must specify the cost-benefit function of performing an optional action at the current location, as shown in Equation (24)

$$A\_{i,p\_i} = d\_{i, p\_i} + M\_{risk} R\_{p\_i} + \frac{M\_{benefit}}{1 + \sum\_{p\_j \in s(p\_i, R)} C\_b(C\_{demand-j})} \tag{24}$$

where $i$ is the current position, $p\_i$ is the candidate point for the next step, $d\_{i,p\_i}$ is the Euclidean distance between $i$ and $p\_i$, and $R\_{p\_i}$ and $C\_b(C\_{demand-j})$ are the risk cost and service benefit, consistent with the previous definitions.
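To make the per-step evaluation concrete, the sketch below computes the Equation (24) cost-benefit for each of the eight candidate moves. This is an illustrative assumption-laden sketch, not the paper's implementation: the weight values, the service radius, the risk function, and the customer representation are all placeholders.

```python
import math

# Assumed weights and service radius; the paper varies M_risk and M_benefit
# across experiments, so these specific values are illustrative only.
M_RISK = 0.1
M_BENEFIT = 0.2
SERVICE_RANGE = 3.0  # service radius R (assumed)

# Eight unit-step moves, matching the paper's eight candidate directions.
DIRS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)]


def step_cost(current, candidate, risk_at, customers):
    """Equation (24) cost-benefit of moving from `current` to `candidate`.

    risk_at:   callable returning the risk cost R_{p_i} at a point (assumed)
    customers: list of (position, demand) pairs (assumed representation)
    """
    dist = math.dist(current, candidate)          # d_{i, p_i}
    risk = M_RISK * risk_at(candidate)            # M_risk * R_{p_i}
    # Total demand of customers within service range of the candidate node;
    # a larger served demand shrinks the benefit term, making the node cheaper.
    served = sum(demand for pos, demand in customers
                 if math.dist(candidate, pos) <= SERVICE_RANGE)
    benefit = M_BENEFIT / (1.0 + served)
    return dist + risk + benefit


def candidates(current):
    """The eight candidate nodes reachable in one unit step."""
    x, y = current
    return [(x + dx, y + dy) for dx, dy in DIRS]
```

At each step, the A\* search would evaluate `step_cost` for all eight nodes returned by `candidates` and expand the cheapest.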

For genetic algorithms, we need to specify the calculation of individual fitness, as shown in Equation (25)

$$A^i = \frac{1}{d^i} + \frac{1}{M\_{risk}R^i} + \frac{C^i\_b \left(C^0\_{demand} - C^{end}\_{demand}\right)}{M\_{benefit}} \tag{25}$$

where $i$ denotes the $i$-th path individual, $d^i$ is the path length of individual $i$, $R^i$ is the total path risk cost of individual $i$, $C^{end}\_{demand}$ is the remaining customer demand after the drone provides service along the path, $C^0\_{demand}$ is the initial total customer demand, and $C^i\_b(C^0\_{demand} - C^{end}\_{demand})$ is the benefit of the service completed by path $i$.
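A minimal sketch of this fitness calculation is shown below. The scalar treatment of $C\_b$ and the specific weight values are assumptions for illustration; in the paper the benefit and demand terms follow the earlier model definitions.

```python
# Assumed weight values; the experiments sweep M_risk and M_benefit.
M_RISK = 0.1
M_BENEFIT = 0.2


def fitness(path_length, path_risk, initial_demand, remaining_demand, c_b=1.0):
    """Equation (25) fitness A^i of one path individual.

    path_length:      d^i, total length of the path
    path_risk:        R^i, accumulated risk cost along the path
    initial_demand:   C^0_demand, total customer demand before service
    remaining_demand: C^end_demand, demand left after the drone's service
    c_b:              per-unit service benefit C_b (assumed scalar here)
    """
    return (1.0 / path_length                    # shorter paths score higher
            + 1.0 / (M_RISK * path_risk)         # lower-risk paths score higher
            + c_b * (initial_demand - remaining_demand) / M_BENEFIT)
```

A genetic algorithm would rank a population of candidate paths by this value and preferentially select the fittest individuals for crossover and mutation.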

Compared with the genetic algorithm, the quality of the results obtained by the algorithm in this paper is essentially consistent with that of directly generating the overall path: path lengths differ by about 2% and risk costs by about 5%. The similarity in both values and trends demonstrates that applying the global search strategy during node selection improves result quality. Service completion is the most important index for measuring logistics service path planning in a complex urban environment. As shown in Tables 7 and 8, the service completion of our algorithm is significantly higher than that of the A\* algorithm, with an improvement of 20–40%. Our algorithm's paths carry a slightly higher average risk than the A\* algorithm's, while the difference in path length between the two algorithms remains between 1% and 2%; this indicates that the additional risk comes mainly from the overlap between customer locations and risk areas.

**Table 7.** Comparison of the planning results of the two algorithms by varying *Mbenefit*.


These indexes show that the proposed search rule promotes the drone's exploration of the environment compared to the A\* algorithm, so the shortest path can be found while guaranteeing service completion. In addition, the algorithm responds flexibly to changes in the risk coefficient, which ensures the risk tolerance of the drones and avoids the situation in which the original algorithm cannot complete path planning in a complex environment.

As $M\_{risk}$ increases significantly, the drone becomes more sensitive to risks in the environment. This is equivalent to a more complex risk area that requires more detours to avoid, and the A\* algorithm fails to find a valid path and deadlocks in this situation. The path search rule proposed in this study still guarantees 100% service completion, while the average path length and average risk change smoothly, indicating that the solving ability of our algorithm remains acceptable.

**Table 8.** Comparing the planning results of the two algorithms by varying *Mrisk*.


As for the benefit coefficient $M\_{benefit}$, the search rule proposed in this study better reflects changes in the preference for customer demand. With our algorithm, service completion reached 100% at $M\_{benefit} = 0.2$, an increase of 34% compared with $M\_{benefit} = 0.01$, and remained at 100% thereafter. For the A\* algorithm, service completion at $M\_{benefit} = 0.2$ was 61%, an increase of only 16%, indicating that the A\* algorithm responds worse to the demand coefficient $M\_{benefit}$. The main reason for this difference is that the path search rule proposed in this paper guarantees comprehensive exploration of the environment.
