#### *2.1. Knapsack Problem*

The 0-1 knapsack problem (KP01) is a well-known combinatorial optimization problem that has been studied in areas such as project selection, resource distribution, and network interdiction. KP01 has been shown to be NP-complete [55,56]. It can be described as follows:

Given a set *W* of *m* items, *W* = {*x1*, *x2*, *x3*, ... , *xm*}, where *wi* is the weight of item *xi* and *pi* is the profit of *xi*. *C* is the weight capacity of the knapsack. The objective is to find the subset *X*optimal of the *m* items that maximizes the total profit while keeping the total weight of the selected items from exceeding *C*.

The 0-1 knapsack problem can be defined as:

$$\begin{aligned} \text{Maximize } f(\mathbf{x}) &= \sum_{i=1}^{m} p_i x_i \\ \text{s.t. } \sum_{i=1}^{m} w_i x_i &\le C,\quad x_i \in \{0, 1\},\ \forall i \in \{1, 2, \dots, m\} \end{aligned} \tag{1}$$

where *xi* can take either the value 1 (as selected) or the value 0 (as not selected, also called rejected).
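The objective and constraint in Equation (1) can be evaluated directly. The following sketch shows one common way to score a candidate selection vector; the choice to assign infeasible (overweight) solutions a fitness of zero is an illustrative assumption, since penalty and repair schemes are also widely used:

```python
import numpy as np

def knapsack_fitness(x, profits, weights, capacity):
    """Evaluate a binary selection vector for the 0-1 knapsack problem.

    Infeasible solutions (total weight > capacity) receive fitness 0
    here; penalty or repair strategies are common alternatives.
    """
    x = np.asarray(x)
    if np.dot(weights, x) > capacity:
        return 0.0                      # reject overweight selections
    return float(np.dot(profits, x))    # total profit of selected items

# Small illustrative instance: 4 items, capacity 10
profits = np.array([10.0, 5.0, 15.0, 7.0])
weights = np.array([2.0, 3.0, 5.0, 7.0])
print(knapsack_fitness([1, 0, 1, 0], profits, weights, 10.0))  # 25.0
```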

#### *2.2. Grey Wolf Optimizer (GWO)*

The GWO algorithm [52] is inspired by the leadership hierarchy and hunting mechanism of grey wolves. To model the social order of grey wolves in the GWO, the best solution is considered the alpha (*α*) wolf, and the second and third best solutions are the beta (*β*) and delta (*δ*) wolves, respectively. The remaining feasible solutions are considered omega (*ω*) wolves. In the GWO, the *α*, *β*, and *δ* wolves lead the hunt, and the *ω* wolves follow these leading wolves when searching for the global optimal solution (target) as the prey, as shown in Figure 1.

**Figure 1.** Position updating mechanism of GWO.

Grey wolves have the ability to recognize the location of prey and encircle it during the hunt. To simulate this hunting behavior mathematically, it is assumed that the *α*, *β*, and *δ* wolves have the best knowledge of the potential location of prey. Therefore, the GWO saves the three best solutions obtained so far and forces the other *ω* wolves to update their positions according to the positions of these best search agents.

During optimization, each search agent in the GWO updates its position based on the locations of the alpha, beta, and delta wolves and the distance vectors between itself and these three best wolves when attacking the prey. When a termination criterion is satisfied, the position and fitness of the alpha wolf are taken as the global optimal solution.
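The leader-guided position update described above can be sketched as follows. This follows the standard GWO update rules from Mirjalili et al. (*D* = |*C*·*X*_leader − *X*|, a new candidate per leader, averaged); the function and variable names are illustrative, not the paper's exact implementation:

```python
import numpy as np

def gwo_update(position, leaders, a, rng):
    """One GWO position update for a single omega wolf.

    `leaders` holds the alpha, beta, and delta positions; the
    coefficient `a` decreases linearly from 2 to 0 over iterations.
    The new position is the mean of three moves, each guided by one
    of the three best wolves.
    """
    moves = []
    for leader in leaders:                     # alpha, beta, delta in turn
        r1 = rng.random(position.shape)
        r2 = rng.random(position.shape)
        A = 2.0 * a * r1 - a                   # balances exploration/exploitation
        C = 2.0 * r2
        D = np.abs(C * leader - position)      # distance to this leader
        moves.append(leader - A * D)
    return np.mean(moves, axis=0)

rng = np.random.default_rng(0)
leaders = [np.ones(3), np.full(3, 0.5), np.full(3, 0.25)]
new_pos = gwo_update(np.zeros(3), leaders, a=1.0, rng=rng)
print(new_pos.shape)  # (3,)
```

Note that when `a` reaches 0, `A` vanishes and the update collapses to the mean of the three leader positions, which is how the GWO shifts from exploration to exploitation.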

#### **3. Quantum-Inspired Differential Evolution with Adaptive Grey Wolf Optimizer**

The proposed QDGWO algorithm is presented for the 0-1 knapsack problem. First, the algorithm adopts quantum computing principles such as quantum representation and the quantum measurement operation. Quantum representation allows the superposition of all potential states to be encoded in a single quantum individual. Second, adaptive mutation operations (from the DE), crossover operations (from the DE), and quantum observation are combined to generate new solutions as trial individuals in the solution space. Finally, the selection operator chooses the better solutions between the stored individuals and the trial individuals generated by the DE mutation and crossover operations. When the trial individuals are worse than the current individuals, the QDGWO integrates the adaptive GWO and the quantum rotation gate to preserve the diversity of the population of solutions and accelerate the search for the global optimum. The framework of the QDGWO algorithm is shown in Figure 2.
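The quantum representation and observation steps above can be illustrated with a minimal sketch. Here each gene is a qubit parameterized by a single rotation angle, and measurement collapses the individual to a binary solution; the names and the angle-based encoding are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def observe(angles, rng):
    """Collapse a quantum individual into a binary solution.

    Each gene is a qubit parameterized by an angle theta, with
    amplitude sin(theta) for state |1>, so gene i becomes 1 with
    probability sin(theta_i)^2 (illustrative encoding).
    """
    prob_one = np.sin(angles) ** 2
    return (rng.random(angles.shape) < prob_one).astype(int)

rng = np.random.default_rng(1)
# All qubits initialized to pi/4: equal superposition, P(1) = 0.5 per gene
angles = np.full(8, np.pi / 4)
print(observe(angles, rng))  # a length-8 vector of 0s and 1s
```

Repeated observations of the same quantum individual yield different binary solutions, which is what lets one quantum individual represent a superposition of many candidate knapsack selections.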

**Figure 2.** Framework of QDGWO.
