Article

An Efficient Tour Construction Heuristic for Generating the Candidate Set of the Traveling Salesman Problem with Large Sizes

by Boldizsár Tüű-Szabó 1,*, Péter Földesi 2 and László T. Kóczy 1
1 Department of Information Technology, Szechenyi Istvan University, 9026 Gyor, Hungary
2 Department of Logistics, Szechenyi Istvan University, 9026 Gyor, Hungary
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(19), 2960; https://doi.org/10.3390/math12192960
Submission received: 16 August 2024 / Revised: 18 September 2024 / Accepted: 22 September 2024 / Published: 24 September 2024
(This article belongs to the Special Issue Fuzzy Logic Applications in Traffic and Transportation Engineering)

Abstract
In this paper, we address the challenge of creating candidate sets for large-scale Traveling Salesman Problem (TSP) instances, where choosing a subset of edges is crucial for efficiency. Traditional methods for improving tours, such as local searches and heuristics, depend greatly on the quality of these candidate sets but often struggle in large-scale situations due to insufficient edge coverage or high time complexity. We present a new heuristic based on fuzzy clustering, designed to produce high-quality candidate sets with nearly linear time complexity. Thoroughly tested on benchmark instances, including VLSI and Euclidean types with up to 316,000 nodes, our method consistently outperforms traditional and current leading techniques for large TSPs. Our heuristic’s tours encompass nearly all edges of optimal or best-known solutions, and its candidate sets are significantly smaller than those produced with the POPMUSIC heuristic. This results in faster execution of subsequent improvement methods, such as Helsgaun’s Lin–Kernighan heuristic and evolutionary algorithms. This substantial enhancement in computation time and solution quality establishes our method as a promising approach for effectively solving large-scale TSP instances.
MSC:
90C27

1. Introduction

The Traveling Salesman Problem (TSP) is one of the most extensively studied problems in combinatorial optimization, with applications ranging from logistics and manufacturing to telecommunications and DNA sequencing. The TSP involves finding the shortest possible route that visits each node exactly once and returns to the starting node. Despite its apparent simplicity, the TSP is NP-hard, making it computationally challenging to solve, especially as the number of nodes increases.
In this study, we focus specifically on the symmetric TSP (sTSP), a variant where the distance between any two nodes is identical regardless of the direction of travel. The sTSP is a common representation of real-world problems where the travel cost between locations is the same in both directions, such as in road networks or communication systems. This symmetry allows for certain algorithmic simplifications, but it also presents unique challenges in terms of efficiently finding optimal or near-optimal solutions for large problem instances.
In the last few decades, many efficient methods have been presented in the literature for solving the TSP. The most efficient exact method is the Concorde solver, which has been used to obtain the optimal solutions to TSPLIB instances with up to 85,900 nodes [1]. One possible way of tackling larger problems with hundreds of thousands of vertices is to use heuristics and metaheuristics, which can provide near-optimal solutions within a reasonable amount of time.
Among these methods, Helsgaun’s implementation of the Lin–Kernighan (LKH) heuristic is one of the most effective and widely recognized techniques for solving the TSP [2]. The LKH algorithm, an advanced variant of the original Lin–Kernighan heuristic developed by Helsgaun, incorporates several improvements and new strategies that significantly enhance its performance. The LKH heuristic extends the Lin–Kernighan approach by adapting the number of edges involved in the optimization process, allowing it to explore more complex edge rearrangements and achieve high-quality solutions. Notably, LKH is renowned for achieving the best-known solutions for many large TSP instances, demonstrating its exceptional capability in solving complex and large-scale problems.
A class of metaheuristics called “memetic algorithms” [3] that combine evolutionary-based algorithms with local search are effective in solving the TSP and closely related optimization problems [4]. Dinh Nguyen et al. introduced an effective memetic algorithm for large-scale TSP problems in 2007 [5]. This approach integrates a parallel multipopulation steady-state genetic algorithm with Lin–Kernighan local search, including variants like maximal preservative crossover and double-bridge move mutation. By balancing exploration through the genetic algorithm and exploitation via local search, the method proves highly efficient and has achieved new best solutions for numerous large-scale problems, outperforming the LKH heuristic in several cases. Another effective metaheuristic is the Fast Ant Colony Optimization (FACO) algorithm, which leverages a hybrid approach combining constructive and perturbation-based strategies [6]. FACO efficiently finds solutions within 1% of the best-known results for TSP Art instances with up to 200,000 nodes on an 8-core CPU, demonstrating notable speed and performance improvements over recent ACO-based methods. However, while FACO offers competitive performance, it generally does not surpass the state-of-the-art LKH heuristic, which remains superior for most TSP instances. Nonetheless, FACO has occasionally outperformed LKH on particularly challenging instances, showcasing its potential for specific problem scenarios. In addition to solving TSP and other optimization problems, metaheuristic algorithms have found significant application in enhancing the security of AI-based systems. For example, combining AI, machine learning (ML), and metaheuristic techniques has proven effective in detecting and mitigating dynamic security threats like intrusion detection and real-time malware detection [7]. Another notable application of metaheuristics is in optimizing supply chain management, where techniques like genetic algorithms [8] and simulated annealing [9] are used to improve logistics efficiency and reduce operational costs.
To implement a fast and efficient heuristic or metaheuristic approach for the TSP, the key point is to use a limited-size neighborhood (a candidate set) during the search, especially for large TSP instances. Candidate sets limit the search space by reducing the number of moves evaluated during the search, which speeds up the computation.
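To make the role of the candidate set concrete, the sketch below (our illustration, not taken from any of the cited implementations) contrasts a 2-opt improvement scan restricted to candidate neighbors with the quadratic full scan it replaces; dist is assumed to be a symmetric distance lookup and candidates a mapping from each vertex to its short neighbor list.

```python
def two_opt_gain(dist, tour, pos, a, b):
    """Gain of removing edges (a, succ(a)) and (b, succ(b)) and
    reconnecting as (a, b) and (succ(a), succ(b))."""
    n = len(tour)
    a_next = tour[(pos[a] + 1) % n]
    b_next = tour[(pos[b] + 1) % n]
    return (dist[a][a_next] + dist[b][b_next]) - (dist[a][b] + dist[a_next][b_next])

def best_two_opt_move(dist, tour, candidates):
    """Scan only candidate neighbors of each vertex: roughly O(n * d) gain
    evaluations for average vertex degree d, instead of O(n^2) vertex pairs."""
    pos = {v: i for i, v in enumerate(tour)}
    best_gain, best_pair = 0.0, None
    for a in tour:
        a_next = tour[(pos[a] + 1) % len(tour)]
        for b in candidates[a]:
            if b == a or b == a_next:
                continue  # skip degenerate moves
            gain = two_opt_gain(dist, tour, pos, a, b)
            if gain > best_gain:
                best_gain, best_pair = gain, (a, b)
    return best_gain, best_pair
```

With five or six candidates per vertex, one improvement pass costs a small constant multiple of n gain evaluations instead of n(n − 1)/2.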
When solving geometric problems, there are several methods for creating the candidate set. For 2D Euclidean instances, a Delaunay triangulation can be computed in O(n log n) time, where n is the number of nodes in the problem. If the nodes are defined by their K-dimensional coordinates, an alternative is to construct a KD-tree [10] in O(Kn log n) time; this method involves selecting only a few of the closest nodes in each quadrant. The Nearest Neighbor heuristic is a well-known and frequently used, although less efficient, approach for generating candidate sets. Limiting the candidate set to only the closest vertices may prevent finding the optimal solution in the tour improvement phase [11]. To illustrate this, in a problem with 532 nodes (att532), one of the edges in the optimal solution connects a node to its 22nd-closest neighbor. However, including more nodes in the pool of potential connections significantly increases the runtime.
To limit the moves, one can use a fast randomized heuristic to generate a few dozen TSP solutions of moderate quality. The edges of a specified number of these tours can then be merged to create a candidate set, and only the edges contained in these tours are used for building moves. This technique, known as tour merging, was introduced in 2007 [12]. Blazinskas and Misevicius utilized a multi-random-start technique incorporating fast 3-opt and the simplified LKH heuristic to create high-quality candidate sets in 2012 [13]; however, its more-than-quadratic time complexity limits its usability for large-scale instances. Ali et al. presented an efficient tour construction heuristic called the Circle Group Heuristic in 2022, which creates better tours than the well-known tour construction methods [14].
One way to create high-quality solutions with low time complexity is to use the POPMUSIC template, formalized by Taillard and Voss in 2002 [15]. It has been shown to be very efficient for solving hard combinatorial problems such as p-median [16], sum of squares clustering [16], vehicle routing [17], and map labeling [18]. The concept of POPMUSIC is to optimize sub-parts of a solution locally after obtaining a solution for the problem; these local optimizations are repeated until no further improvements are found. Taillard and Helsgaun first applied this idea to the TSP and proposed a new efficient tour construction method [19]. This method consists of two main steps: creating a feasible tour based on clustering and 2-opt local search, and optimizing every subpath of the initial solution with a Lin–Kernighan local search. It has been shown that this method has low empirical complexity, typically O(n^1.6). Taillard proposed an improved version of the POPMUSIC heuristic: instead of the initial tour construction phase, a recursive randomized procedure is used to build the tour, which takes O(n log n) time [20]. Additionally, the subpath optimization (fast POPMUSIC) has been made faster by reducing the number of examined subpaths. Although the time required to improve the initial tour still depends on the problem size, it has been reduced by a factor of 10 to 20 compared to the previous implementation. Since the overall time complexity of the method is near-linear, it can be used to solve extremely large instances on a standard personal computer.
This improved POPMUSIC heuristic was compared with two other candidate set generating techniques, the alpha and Delaunay methods. While the POPMUSIC and alpha methods are generally applicable, Delaunay triangulation works only on Euclidean instances. The Delaunay triangulation method is the fastest, but POPMUSIC always produced candidate sets of better quality (the number of edges missing from the best-known tours is lower). Helsgaun proposed the alpha candidate set generating method as an alternative to the Nearest Neighbor approach in his effective implementation of the Lin–Kernighan heuristic [2]. This method, which is both widely used and efficient, relies on the minimum 1-tree. It generates high-quality candidate sets that include nearly all the edges of optimal solutions, typically with average vertex degrees of about 5–6. However, the preprocessing step involving 1-trees, when integrated into the LKH algorithm, exhibits empirical complexity that appears to be quadratic; this makes it impractical for instances with more than 100,000 nodes.
As highlighted by Queiroga et al. in 2021, in their study on the capacitated vehicle routing problem, POPMUSIC remains a robust and adaptable heuristic for a range of optimization problems, showcasing its effectiveness even for very large instances [17].
The methods developed for solving the Traveling Salesman Problem (TSP) have achieved notable progress, yet significant challenges remain, particularly with large-scale instances. While exact methods like the Concorde solver provide optimal solutions, and heuristics such as Nearest Neighbor and Delaunay triangulation offer practical approaches, they often fall short in terms of efficiency or scalability for larger problems. The quadratic time complexity of some candidate set generation methods and the limitations of existing heuristics underscore the need for more effective solutions. This research addresses these challenges by proposing a novel heuristic aimed at generating high-quality candidate sets specifically for large-scale TSP problems.
The primary goal of this research is to develop an efficient heuristic with low time complexity that can generate high-quality candidate sets for large-scale TSP instances. By leveraging fuzzy clustering techniques, the proposed approach seeks to partition the problem into manageable subproblems and optimize connections between subpaths. This method is designed to address the limitations of current techniques, offering a more effective alternative for large-scale TSP instances and contributing significantly to the field.
This paper is organized as follows: Section 2 introduces the novel tour construction heuristic, detailing its methodology and the integration of fuzzy clustering techniques. Section 3 presents the results of extensive experimentation, including parameter tuning and comparative analysis with existing methods like POPMUSIC. This paper concludes with a discussion of the strengths and limitations of the proposed heuristic, as well as suggestions for future research directions.

2. From Clustering to Tour Construction: An Innovative Approach to TSP

The first part of this section offers a comprehensive overview of clustering techniques, with a detailed focus on the Fuzzy C-means (FCM) algorithm, which is crucial for partitioning large-scale TSP instances in our heuristic. We explore the mechanics of FCM, emphasizing its advantages over traditional clustering methods, particularly in handling overlapping clusters and providing nuanced membership information.
In the second part, we present the step-by-step process of our novel tour construction heuristic, designed to effectively handle large TSP instances. This heuristic uses clustering to break down complex problems into manageable subproblems and employs optimization techniques to generate high-quality candidate sets.

2.1. Clustering Techniques: Theory and Principles

The Traveling Salesman Problem (TSP) is a well-known NP-hard problem, which means it is computationally challenging to solve, particularly for large instances. The complexity of TSP grows exponentially with the size of the problem, making it increasingly difficult to find optimal solutions as the number of cities increases. One effective strategy for managing such large-scale problems is to break them down into smaller, more manageable subproblems. This approach, known as “divide and conquer”, simplifies the overall problem by partitioning it into smaller parts that can be solved independently before combining their solutions.
In this context, clustering techniques offer a practical method for partitioning large TSP instances into smaller subproblems. Clustering involves grouping data points into clusters such that points within the same cluster are more similar to each other than to those in other clusters. Clustering can be classified as either hard or fuzzy. In hard clustering, patterns are separated by well-defined cluster boundaries; because real clusters often overlap, a pattern near a boundary may be forced into a single, possibly dissimilar, group. This limitation makes hard clustering less suitable for real-life applications. To address this issue, fuzzy clustering was introduced [21], which provides more information about pattern memberships and thereby reduces the limitations of hard clustering. We investigated two clustering methods for partitioning the problem: k-means clustering and Fuzzy C-means (FCM) clustering [22]. The k-means method is an iterative data-partitioning algorithm that assigns data points to clusters defined by centroids. Fuzzy clustering, on the other hand, allows data points to belong to more than one cluster with different degrees of membership. Fuzzy C-means is a popular fuzzy clustering method, introduced in 1973 [21] and improved in 1981 [22], but it has some disadvantages, such as sensitivity to the initialization of the cluster centers and to the given number of expected clusters, and convergence only to local optima [23]. It has been widely applied in fields such as color clustering [24], land cover classification [25], and image segmentation [26].
The pseudo-code of the FCM method is shown in Figure 1, and Figure 2 shows the flowchart of the algorithm. The method starts by determining the initial cluster centers. Afterwards, in an iterative process, the membership matrix and the cluster centers are updated, and the value of the objective function is calculated. The membership grades are calculated with Equation (1); in our implementation, the Euclidean distance metric was used. In this equation, each data point x_i is assigned a membership grade w_ij to cluster c_j based on its distances to all cluster centers. The fuzzy membership value w_ij is inversely related to the distances: points closer to a cluster center receive higher membership grades for that cluster. The sum in the denominator normalizes the membership values across all clusters for the given data point:
$$ w_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{2/(m-1)}}, \tag{1} $$
where w_ij ∈ [0, 1] is the fuzzy membership value quantifying the degree to which data point x_i belongs to the fuzzy cluster c_j; m is the weighting exponent that determines the degree of fuzziness of the clustering; and c is the number of clusters.
The cluster centers are updated with the following equation:
$$ c_j = \frac{\sum_{i=1}^{n} w_{ij} \, x_i}{\sum_{i=1}^{n} w_{ij}}. \tag{2} $$
This formula calculates the new cluster center c_j as the weighted average of all data points, where the weights are given by the membership grades: the more a data point belongs to a cluster (i.e., the higher its membership grade w_ij), the more influence it has on the position of the cluster center.
Finally, the value of the objective function is calculated in each iteration:
$$ J_m = \sum_{j=1}^{c} \sum_{i=1}^{n} w_{ij} \, \lVert x_i - c_j \rVert^2. \tag{3} $$
The objective function J_m measures the overall quality of the clustering by calculating the weighted sum of squared distances between data points and their cluster centers. This value is minimized during the iterative process to achieve better clustering. The weights w_ij affect how much each point contributes to the total distance, reflecting the degree of its membership in each cluster.
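As a minimal illustration of how Equations (1)–(3) interact, the NumPy sketch below implements one possible FCM loop; it is not the authors' implementation, it uses naive random initialization instead of the seeding procedure described in Section 2.2, and it applies Equation (2) exactly as printed (some FCM formulations additionally raise the weights to the power m).

```python
import numpy as np

def fuzzy_c_means(X, c, m=1.05, max_iter=100, tol=1e-6, seed=None):
    """Minimal FCM sketch: X is an (n, d) array of points, c the cluster count."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]  # naive initialization
    for _ in range(max_iter):
        # Euclidean distances to every center, shape (n, c); floor avoids 0-division
        d = np.maximum(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), 1e-12)
        # Equation (1): membership grades; each row of W sums to 1
        W = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        # Equation (2): centers as membership-weighted averages of the points
        new_centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Equation (3): objective value, decreasing over the iterations
        J = np.sum(W * d ** 2)
        if np.linalg.norm(new_centers - centers) < tol:
            return W, new_centers, J
        centers = new_centers
    return W, centers, J
```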

2.2. Our Novel Tour Construction Heuristic

In this section, our new heuristic will be presented in detail. This method consists of five main steps:
  • Splitting the problem into subproblems with fuzzy clustering (k subproblems);
  • Solving each subproblem with Helsgaun’s Lin–Kernighan heuristic;
  • Splitting the tours found at step 2 into two paths (2k paths in total);
  • Connecting the paths into a solution for the entire problem with Helsgaun’s Lin–Kernighan heuristic;
  • Post-optimization of randomly selected segments of the solution found at step 4 with Helsgaun’s Lin–Kernighan heuristic.
In our research, we explored both k-means and Fuzzy C-means (FCM) clustering techniques to partition large TSP instances into smaller, more manageable subproblems. K-means clustering, a widely used approach, resulted in a slightly smaller candidate set but was limited by its rigid cluster boundaries and inability to handle overlapping clusters effectively.
In contrast, Fuzzy C-means clustering proved to be more advantageous. FCM allows data points to belong to multiple clusters with varying degrees of membership, addressing the limitations of traditional k-means and better accommodating the overlapping nature of real-world problems. Our experiments demonstrated that while k-means produced a smaller candidate set, Fuzzy C-means yielded faster convergence in reducing the number of missing edges in the candidate set. Consequently, we chose Fuzzy C-means for partitioning the problem, as it offered superior performance and efficiency. In the FCM algorithm, each node is assigned a fuzzy membership grade between 0 and 1 for each cluster, indicating its degree of belonging to that cluster. The subsequent step of our method assigns each node to a single cluster at random, with probabilities proportional to these membership grades, so a node is more likely to be assigned to a cluster where its membership grade is higher. When a small degree of fuzziness is applied, the membership grades become more extreme, and nodes are therefore typically allocated to the nearest or a nearby cluster. Figure 3 exemplifies this process using a small fuzziness parameter (m = 1.05) in the FCM algorithm, showing the clusters for the dkc3938 instance with 10 clusters after this allocation step. The dkc3938 instance is a Very-Large-Scale Integration (VLSI) instance consisting of 3938 points with Euclidean distances; the x-axis and y-axis represent the spatial coordinates of the points in the 2D plane.
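Our reading of this proportional assignment step is sketched below, under the assumption that the membership matrix W comes from the FCM run (this is an illustration, not the authors' code):

```python
import numpy as np

def assign_proportionally(W, seed=None):
    """W is the (n, c) membership matrix from FCM; returns one cluster per node.

    Node i goes to cluster j with probability w_ij, so with a small fuzziness
    parameter m the grades are extreme and nodes almost always land in a
    nearby cluster."""
    rng = np.random.default_rng(seed)
    W = W / W.sum(axis=1, keepdims=True)  # guard: rows should already sum to 1
    return np.array([rng.choice(W.shape[1], p=row) for row in W])
```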
To increase the convergence speed of our implementation, the initial cluster centers were selected with the following process (a sketch follows the list):
  • The first cluster center is chosen uniformly at random from the vertices;
  • All other cluster centers are chosen as follows: select 100 vertices uniformly at random and choose the one that is the furthest away from the cluster centers that have already been determined. This method aims to ensure that the new cluster centers are as far apart as possible from the existing ones, which can help in achieving a more diverse and well-distributed set of cluster centers.
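A sketch of this seeding procedure is given below; since the text does not specify whether "furthest away" means the distance to the nearest already-chosen center or the sum of distances to all of them, the sketch assumes the former, in the spirit of k-means++ seeding.

```python
import numpy as np

def seed_centers(X, c, n_candidates=100, seed=None):
    """First center uniform at random; every further center is the candidate
    (out of 100 randomly sampled vertices) farthest from the centers so far."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    while len(centers) < c:
        cand = X[rng.choice(len(X), size=min(n_candidates, len(X)), replace=False)]
        # distance of each candidate to its nearest already-chosen center
        d = np.linalg.norm(cand[:, None, :] - np.asarray(centers)[None, :, :],
                           axis=2).min(axis=1)
        centers.append(cand[np.argmax(d)])
    return np.asarray(centers)
```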
As part of the new tour construction heuristic, the second main step is to solve the subproblems created by fuzzy clustering separately. The aim of this step is not to find the optimal solution of each subproblem, as this would take a significant amount of time and could leave out of the candidate set numerous edges that belong to the optimal solution of the entire problem. Instead, the goal is to generate high-quality solutions in the shortest time possible. To achieve this, the Lin–Kernighan heuristic implemented by Helsgaun was chosen (version LKH-2.0.10), which is currently the most efficient heuristic in the literature for solving the TSP and holds the best-known solutions for many instances with thousands of nodes. The parameters of LKH were set with this goal in mind. The initial tours of the subproblems were generated with the Nearest Neighbor heuristic, a simple algorithm that always visits the closest unvisited point; the candidate sets of the subproblems (five candidates for each vertex) were also generated with the Nearest Neighbor heuristic. During the search, based on our experiments, 3-opt sequential moves were applied as sub-moves instead of the 5-opt suggested by Helsgaun. The number of trials was set to 1, so the search consists of generating the initial subtour and improving it with LKH.
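For orientation, these settings correspond roughly to an LKH-2 parameter file like the one written by the sketch below; the keywords are standard LKH-2 parameter names, while the file names and the solver path are placeholders.

```python
import subprocess

def solve_subproblem(problem_file, tour_file, par_file="subproblem.par"):
    """Run LKH-2 on one subproblem with an NN initial tour, NN candidate sets
    (five candidates per vertex), 3-opt basic moves, and a single trial."""
    with open(par_file, "w") as f:
        f.write(
            f"PROBLEM_FILE = {problem_file}\n"
            f"OUTPUT_TOUR_FILE = {tour_file}\n"
            "INITIAL_TOUR_ALGORITHM = NEAREST-NEIGHBOR\n"
            "CANDIDATE_SET_TYPE = NEAREST-NEIGHBOR\n"
            "MAX_CANDIDATES = 5\n"
            "MOVE_TYPE = 3\n"
            "MAX_TRIALS = 1\n"
            "RUNS = 1\n"
        )
    subprocess.run(["./LKH", par_file], check=True)  # assumes an LKH-2 binary
```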
After creating the subtours, each of them was divided into two subpaths. The splitting points were determined with the following three-step process (see the sketch after the list):
  • Determining the closest and second-closest clusters to each cluster based on the distances of the clusters’ centers;
  • Determining the two splitting points by finding the closest vertices to the closest and second-closest clusters;
  • For each splitting point, deleting one of the two tour edges incident to it.
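The sketch below illustrates the splitting step for one cluster; it is our reconstruction from the description above, and degenerate cases (for example, the two splitting vertices coinciding) are ignored.

```python
import numpy as np

def split_subtour(tour, X, own_center, other_centers):
    """tour: array of vertex ids in visiting order; X: (n, 2) coordinates.

    Cut the subtour at the vertex closest to each of the two nearest foreign
    cluster centers, removing one incident tour edge per splitting point."""
    nearest_two = np.argsort(np.linalg.norm(other_centers - own_center, axis=1))[:2]
    cuts = sorted(int(np.argmin(np.linalg.norm(X[tour] - other_centers[j], axis=1)))
                  for j in nearest_two)
    a, b = cuts
    # deleting edges (tour[a], tour[a+1]) and (tour[b], tour[b+1]) leaves two paths
    path1 = tour[a + 1 : b + 1]
    path2 = np.concatenate([tour[b + 1 :], tour[: a + 1]])
    return path1, path2
```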
Figure 4 shows the 20 subpaths obtained on the dkc3938 instance after solving the 10 subproblems from Figure 3 with LKH and then splitting each into two subpaths.
The next step connects the subpaths into a complete tour, which is a solution for the entire problem. LKH (version 2.0.10) with default parameters was applied to find the optimal connection of the subpaths. LKH adds 2k edges (where k is the number of clusters) to form a valid tour for the entire problem. All edges in the subpaths were fixed during the search, and 5-opt sequential moves were applied to find the optimal connection of the subpaths. For such small problems, specifically finding the optimal order of the 2k subpath connections, LKH consistently finds the optimal solution.
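Fixing all subpath edges can be expressed with the TSPLIB FIXED_EDGES_SECTION, which LKH honors; the sketch below writes such a connection problem, assuming 2D Euclidean coordinates and subpaths given as lists of 1-based node ids.

```python
def write_connection_problem(filename, coords, paths):
    """Write a TSPLIB instance in which every edge inside each subpath is
    fixed, so the solver is only free to choose the 2k connecting edges."""
    with open(filename, "w") as f:
        f.write(f"NAME : connect\nTYPE : TSP\nDIMENSION : {len(coords)}\n")
        f.write("EDGE_WEIGHT_TYPE : EUC_2D\nNODE_COORD_SECTION\n")
        for i, (x, y) in enumerate(coords, start=1):
            f.write(f"{i} {x} {y}\n")
        f.write("FIXED_EDGES_SECTION\n")
        for path in paths:
            for u, v in zip(path, path[1:]):  # consecutive vertices of a subpath
                f.write(f"{u} {v}\n")
        f.write("-1\nEOF\n")
```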
Figure 5 shows the complete tour on the dkc3938 instance after connecting the subpaths. There are some long edges in this tour, but their number can be reduced by post-optimization with LKH (version 2.0.10), which improves subpaths starting at randomly selected vertices. The first and last vertices of each subpath are fixed, ensuring that improving the subpath leads to an improvement in the entire tour. As basic steps, 3-opt sequential moves were applied in LKH.
Figure 6 shows the tour after post-optimization on the dkc3938 instance. A comparison of Figure 5 and Figure 6 demonstrates that post-optimization has eliminated several long edges from the solution. This is beneficial because long edges are typically unlikely to be included in the optimal solution, and their presence can unnecessarily inflate the size of the candidate set. By eliminating these long edges, the candidate set becomes more refined, thereby improving the overall performance of the tour construction heuristic.
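The post-optimization loop can be sketched as follows; here improve_path is a placeholder for an LKH run with 3-opt moves on the segment with both endpoints fixed, and the defaults mirror the parameter values discussed in Section 3.

```python
import random

def post_optimize(tour, improve_path, n_postopt=10, seg_frac=0.1):
    """Repeatedly re-optimize a random segment of length ~nodes/10 in place.

    Because the segment's endpoints stay fixed, any shortening of the segment
    shortens the whole tour."""
    n = len(tour)
    seg_len = max(3, int(n * seg_frac))
    for _ in range(n_postopt):
        start = random.randrange(n)
        idx = [(start + i) % n for i in range(seg_len)]
        segment = [tour[i] for i in idx]
        better = improve_path(segment)  # must return a path with the same endpoints
        for i, v in zip(idx, better):
            tour[i] = v
    return tour
```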

3. Results

In this section, we present the results of our extensive experimentation aimed at optimizing the parameters of our tour construction heuristic. We explore the effects of various parameter settings on the performance of the heuristic. Our analysis is based on multiple benchmark instances, both Euclidean and VLSI, to illustrate the impact of these parameters on solution quality and runtime. We also compare our heuristic’s performance with the POPMUSIC method in Section 3.2.

3.1. Parameter Tuning and Its Impact on Heuristic Performance

We conducted extensive investigations to set the parameters of the heuristic appropriately: fuzzy or traditional clustering; the parameter m of the Fuzzy C-means clustering; the number of clusters n_cluster; a binary variable post_opt determining whether post-optimization is applied; the segment size of the post-optimization postopt; and the number of times the post-optimization is performed n_postopt. The experiments were carried out on the following computer configuration: Core i7-7500U 2.7 GHz with 8 GB of RAM, running Linux Mint 18.2. Seven instances were selected for tuning the parameters of our tour construction heuristic: ei8246, vm22775, sw24978, bm33708, ics39603, E10k0, and dan59296. These instances encompass a range of problem types (Euclidean, VLSI, and national instances), ensuring that the parameter tuning considers various types of problems and allows a more comprehensive evaluation of the heuristic's performance across different scenarios. The number in each instance name refers to the number of nodes. Similar trends were observed in the parameter tuning across these seven benchmarks; consequently, the results for one Euclidean benchmark (E10k0) and one Very-Large-Scale Integration (VLSI) benchmark (dan59296) are presented in detail to illustrate these trends. While the optimal solution is known for E10k0, dan59296 has not yet been solved optimally, so in this case the candidate set was compared with the best-known solution.
Some of our results with different parameter settings can be seen in Table 1 and Table 2. Two indicators were used to evaluate the quality of the candidate set: the number of missing edges and the average vertex degree of the candidate set. "Missing edges" refers to the set of edges that are present in the best-known solution (or the optimal solution, if available) but are not included in the candidate set. Finding the optimal or a near-optimal solution during the tour improvement phase can be hindered by a significant number of missing edges. On the other hand, generating a larger candidate set can lead to longer run times, reducing the efficiency of the tour improvement algorithm. Therefore, a candidate set generating method should produce a relatively small set with a low number of missing edges. In Table 1 and Table 2, ME stands for missing edges, and AVD stands for the average vertex degree of the candidate set. The tables highlight results for a subset of parameter settings; the constant parameters across the experiments are the use of post-optimization (post_opt = yes) with n_postopt = 10 and postopt = nodes/10. For the hard clusters, the number of clusters is fixed at 10, while for the fuzzy clusters the number of clusters varies between 5, 10, 15, and 20, with a constant m = 1.05. In the case of fuzzy clusters with n_cluster = 20, two scenarios are considered, with and without post-optimization, highlighting how this specific setting impacts the results.
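For reference, both indicators can be computed from a candidate edge set and a best-known tour as in the sketch below (our illustration; tours are vertex sequences, edges undirected):

```python
def candidate_metrics(candidate_edges, best_tour):
    """Return (ME, AVD): the number of best-tour edges missing from the
    candidate set and the candidate set's average vertex degree."""
    und = lambda u, v: (u, v) if u < v else (v, u)
    cand = {und(u, v) for (u, v) in candidate_edges}
    n = len(best_tour)
    tour_edges = {und(best_tour[i], best_tour[(i + 1) % n]) for i in range(n)}
    me = len(tour_edges - cand)
    avd = 2.0 * len(cand) / n  # every undirected edge adds 1 to two degrees
    return me, avd
```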
For the E10k0 instance, our heuristic was able to find all the edges present in the optimal solution across all parameter settings (see Table 1). However, the dan59296 VLSI instance proved to be a much more challenging task (see Table 2). In this case, the impact of different parameter settings on the number of missing edges becomes evident.
The following subsections detail the effects of various parameters on the performance of the heuristic. Each subsection focuses on a specific aspect of the parameter adjustments and their impact on solution quality and runtime.

3.1.1. Effect of Cluster Numbers on Solution Quality and Runtime

Smaller numbers of clusters (5) lead to longer computation times without significant improvements in solution quality. Fewer clusters result in subproblem routes that are "too good": near-optimal subtours commit to edges that differ from those of the optimal solution of the whole problem, leading to a higher number of missing edges compared to using a larger number of clusters. Although solving smaller subproblems takes less time, creating clusters and connecting subpaths into a complete solution requires additional time; these opposing effects roughly balance each other out for cluster numbers between 10 and 20, resulting in nearly identical runtimes. Increasing the number of clusters also raises the average vertex degree. Considering this, choosing 10 clusters appears to be a good compromise between the number of missing edges and the average vertex degree.

3.1.2. Impact of Basic Moves in the Lin–Kernighan Heuristic

Helsgaun’s Lin–Kernighan heuristic was employed in three phases of our novel method: solving the subproblems, connecting the tours, and post-optimization. The impact of the basic move on the method’s effectiveness was thoroughly analyzed; the choice of basic move significantly influences both the run time and the quality of the solution. Table 3 shows the average gap percentages relative to the optimal solution and the average computation time per tour for the various basic moves used in the LKH heuristic. As indicated, increasing the complexity of the move affects both tour quality and computational efficiency. Specifically, the 2-opt move results in a higher average gap of 10.79% but has the shortest average computation time of 0.229 s per tour. Conversely, the 3-opt move improves tour quality to an average gap of 6.14%, with a slightly increased computation time of 0.354 s. The 4-opt and 5-opt moves further enhance tour quality, reducing the average gap to 4.83% and 4.38%, respectively, but they also lead to longer computation times of 0.853 s and 2.562 s per tour. Thus, while more complex moves generally yield better solutions, they require more computational resources. It can be concluded that the 3-opt move is the optimal choice. Although increasing the complexity of the basic move slightly reduces the size of the candidate set (as shown in Figure 7), it does not significantly improve the rate of decrease in the missing edges (Figure 8). Furthermore, using 4-opt and 5-opt requires longer runtimes and more iterations than the 3-opt move to ensure that the candidate set includes all edges of the optimal solution: while more complex moves generate better routes, they may also exclude some edges of the optimal solution from the candidate set. For the E10k0 instance, identifying all the edges included in the optimal solution requires 73 iterations with the 4-opt move, while only 64 iterations are needed with the 3-opt move (Figure 7). When accounting for the average time required to generate a single tour with the 3-opt and 4-opt moves (Table 3), finding all the optimal edges with the 4-opt move takes approximately 2.75 times longer than with the 3-opt move (73 × 0.853 s ≈ 62.3 s versus 64 × 0.354 s ≈ 22.7 s).

3.1.3. Fuzzy Clustering vs. Traditional Clustering

Based on the simulations, it can be concluded that fuzzy clustering offers significant advantages over traditional clustering methods (see Figure 9 and Figure 10). While standard clustering can also produce a high-quality candidate set, it typically requires merging more tours and thus demands more computational time. For example, with the E10k0 instance, the fuzzy clustering approach identified all the edges of the optimal solution after merging 64 tours, whereas the traditional clustering method needed 97 iterations to achieve similar results. Allowing a small degree of fuzziness in the clustering process not only facilitates the rapid reduction in missing edges relative to the number of iterations but also provides an efficient balance between solution quality and computational effort. This is evident from the faster decrease in the number of missing edges observed with fuzzy clustering (see Figure 9).

3.1.4. Effects of the Fuzziness Parameter

Increasing the fuzziness parameter m resulted in a somewhat faster reduction in missing edges as the number of iterations increased (Figure 11). However, it is crucial to consider that a higher m value significantly enlarges the candidate set, as depicted in Figure 12. This increase in candidate set size occurs because, with higher fuzziness, points that are far from each other can be assigned to the same cluster. Consequently, this broader clustering leads to a larger candidate set. Therefore, it is advisable to choose a small degree of fuzziness, close to 1 (e.g., m = 1.05, which is used as the default value), to strike a balance between the efficiency of edge inclusion and the size of the candidate set.

3.1.5. Post-Optimization Impact

With post-optimization, the number of missing edges and the size of the candidate set can be reduced. Although the post-optimization of longer segments usually yields a greater improvement in the length of the route, this does not necessarily improve the quality of the candidate set (in some cases it can even make it worse, missing more edges of the optimal or best-known solution); moreover, it takes longer to perform. Although the longest segment length initially shows the fastest reduction in missing edges for the E10k0 instance, it still requires more iterations than the other two parameter settings to find all edges of the optimal solution (see Figure 13). On the other hand, increasing the segment length slightly increases the size of the candidate set (see Figure 14). For these reasons, the default parameter values are set to segment length postopt = nodes/10 and number of post-optimization iterations n_postopt = 10, balancing convergence speed and candidate set size.

3.2. Comparison with the POPMUSIC Heuristic

Based on the parameter tuning experience, our comparative analysis uses and recommends the following parameter values for our new heuristic:
  • Fuzzy clusters with m = 1.05 and n_cluster = 10;
  • LKH with 3-opt basic moves;
  • post_opt = yes;
  • n_postopt = 10;
  • postopt = nodes/10.
Using these parameters, our tour construction heuristic produced high-quality candidate sets, achieving a good compromise between the number of missing edges and the size of the candidate set (see Table 1 and Table 2).

3.2.1. Evaluation on Benchmark Instances

We compared our heuristic's performance with the improved POPMUSIC method using VLSI and national TSP instances (see Table 4 and Table 5). The simulations were conducted on the same computer.
The VLSI instances represent real-world challenges in integrated circuit design. VLSI design requires the efficient placement and routing of a vast number of components on a chip, often involving tens or even hundreds of thousands of connections. The complexity of these instances is comparable to large-scale TSP problems, where finding near-optimal routing solutions is critical for minimizing wire length, reducing signal delay, and improving overall chip performance. Given the scale of modern VLSI problems, where the number of components can easily exceed 100,000, a highly efficient heuristic is essential. Our method demonstrated strong performance on these VLSI instances, making it a promising tool for optimizing large-scale integrated circuit layouts in a feasible time frame, which is crucial for the efficiency and cost-effectiveness of semiconductor manufacturing.
The national TSP instances simulate real-world geographic routing problems. They reflect the challenges faced in logistics, transportation, and supply chain management, where the goal is to optimize routes across vast geographical regions and minimizing travel distance is critical for reducing operational costs and improving delivery efficiency. National TSP instances often involve thousands or even tens of thousands of locations, making the ability to efficiently handle large-scale problems essential. Our method showed strong performance on these instances as well, demonstrating its practical applicability to complex logistics and transportation networks.
The candidate set for our method was generated by merging the edges of 100 tours created with our fuzzy clustering-based approach. We evaluated the fast POPMUSIC method using the recommended default parameters from the literature. Our comparison revealed that although POPMUSIC is faster, our method produces smaller candidate sets with a comparable number of missing edges. The results also show that VLSI problems are significantly more challenging than national TSP instances: the constructed tours for VLSI problems are further from the optimal solution, and the number of missing edges is higher (e.g., bm33708 versus dan59296). The average gap [%] represents the deviation from the optimal or best-known solution. The results indicate that our heuristic consistently produces better-quality tours, as evidenced by the lower average gap percentages compared to those obtained with the fast POPMUSIC method.
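The merging step itself is straightforward; a sketch of building the candidate set as the union of the edge sets of the constructed tours:

```python
def merge_tours(tours):
    """Union of the undirected edge sets of all constructed tours; with 100
    tours this yields candidate sets of the kind evaluated in Tables 4 and 5."""
    edges = set()
    for tour in tours:
        n = len(tour)
        for i in range(n):
            u, v = tour[i], tour[(i + 1) % n]
            edges.add((u, v) if u < v else (v, u))
    return edges
```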

3.2.2. Time Complexity Analysis

To determine the empirical time complexity of our novel tour construction heuristic, a polynomial curve was fitted to the run times. The parameters of the polynomial model were determined by minimizing the RMSE, with 95% confidence bounds (see Table 6). The time complexity is near-linear, similar to the improved version of the POPMUSIC heuristic, so the method can be used for creating the candidate sets of even large instances with several hundred thousand nodes. Figure 15 shows the fitted curve.
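As a rough stand-in for the RMSE-minimizing polynomial fit used here, the empirical exponent can be estimated with an ordinary least-squares fit on the log-log scale; the sketch below uses made-up run times for illustration, not our measurements.

```python
import numpy as np

def fit_power_law(sizes, times):
    """Fit t ~= a * n^b in log-log space; b near 1 indicates near-linear
    empirical time complexity."""
    b, log_a = np.polyfit(np.log(sizes), np.log(times), 1)
    return np.exp(log_a), b

# illustrative call with hypothetical run times, not the measured ones:
# a, b = fit_power_law([10_000, 50_000, 100_000, 316_000], [50, 290, 610, 2100])
```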

3.2.3. Tour Improvement Performance

Our proposed method is designed to generate high-quality candidate edges to enhance the performance of heuristic or metaheuristic approaches during the tour improvement phase. To evaluate its efficiency, we used the state-of-the-art TSP heuristic, the LKH algorithm (version LKH-2.0.10), with candidate sets generated by our heuristic. The LKH algorithm was configured with 5-opt steps as basic moves and was run with a termination condition set to 1000 trials, with each instance tested 10 times. Additionally, simulations were conducted using candidate sets from the improved version of the POPMUSIC heuristic, with a time limit set for fair comparison.
The results, summarized in Table 7, reveal that LKH using candidate sets from our fuzzy clustering-based method consistently produced high-quality solutions, with tour gaps of less than 0.1% from the best-known solutions. In contrast, while the fast POPMUSIC method generates candidate sets more quickly, our method provides higher quality candidate sets, leading to superior tour improvement. This suggests that our method offers a more effective solution for enhancing tour optimization.
Figure 16 illustrates the convergence speed to the best-known solution using candidate sets from both our approach and the POPMUSIC method for the E100k0 instance. The graph displays the gap between the average tour length of 10 runs and the best-known solution over time. Initially, the LKH algorithm with candidate sets from the POPMUSIC heuristic shows superior performance, achieving better results up to approximately 1000 s. However, beyond this point, the LKH algorithm utilizing candidate sets from our approach begins to outperform the POPMUSIC-based sets. This shift occurs because the LKH algorithm with the larger candidate set initially finds better tours more quickly. Over time, though, the candidate sets generated by our approach enable the LKH algorithm to overcome this initial advantage and achieve better results overall.

3.3. Discussion

In this section, we evaluate the strengths and weaknesses of our proposed heuristic in comparison with the POPMUSIC method, highlighting how each method’s characteristics impact their performance in practical scenarios.
One of the main advantages of our heuristic over POPMUSIC is the quality of the candidate sets it generates. While POPMUSIC is efficient in producing candidate sets quickly, this speed comes at the cost of generating larger candidate sets. The downside of these larger sets becomes apparent during the optimization phase, as more unnecessary options need to be considered and processed. This increases the computational load and time required to handle the excess candidate edges, ultimately affecting the efficiency of finding high-quality solutions.
In contrast, our heuristic is designed to create smaller, more refined candidate sets that are more manageable during the optimization process. By focusing on quality rather than quantity, our method tends to produce candidate sets that lead to better performance in the subsequent optimization phases. This is particularly evident when using the LKH algorithm with candidate sets generated by our approach: LKH demonstrated superior results in terms of tour quality and convergence speed when given the candidate sets from our heuristic, compared to those generated via POPMUSIC. Reinelt, in his studies on 2-opt and 3-opt local searches, similarly observed the impact of candidate set size and quality on optimization results, finding that larger candidate sets for 2-opt and 3-opt are not worthwhile, as they significantly increase computational time [27]. This is consistent with our approach, where smaller candidate sets lead to faster convergence and better optimization performance when using the Lin–Kernighan heuristic.
For example, the LKH algorithm demonstrates the impact of high-quality candidate sets on optimization performance [2]. In the earlier versions of the Lin–Kernighan heuristic, the Nearest Neighbor (NN) algorithm was used to generate candidate sets, which often resulted in larger and less-refined sets [28]. However, Helsgaun introduced a shift to using alpha candidate sets in his implementation. The alpha candidate sets are defined by an α-nearness measure computed from minimum 1-trees, which limits the candidate edges to those with small α-values, leading to smaller and more targeted sets than the NN approach. This transition to alpha candidate sets significantly improved the LKH algorithm's performance, as evidenced by superior tour quality and faster convergence rates [2]. Similarly, our heuristic's focus on generating smaller, high-quality candidate sets mirrors this approach and achieves comparable improvements in performance during the optimization phase. Our results align with Helsgaun's findings, illustrating that refining candidate sets can lead to more effective optimization, particularly in terms of tour quality and convergence speed.
However, it is important to note that the trade-off for this improved performance is an increase in computational time for generating the candidate sets. Our heuristic requires more time to produce candidate sets compared to POPMUSIC, which can be a significant consideration in time-sensitive applications or scenarios where rapid results are needed. Despite this, the benefit of higher-quality candidate sets often outweighs the additional time investment, especially in cases where achieving near-optimal solutions is crucial.
In summary, while POPMUSIC is faster and may be preferable in situations where time is of the essence, our proposed heuristic offers notable advantages in terms of the quality of the candidate sets and the effectiveness of the tour improvement phase. The trade-off between the speed of candidate set generation and the quality of the resulting tours highlights the importance of selecting the appropriate heuristic based on the specific requirements and constraints of the problem at hand.

4. Conclusions

The research presented in this paper highlights the critical role of candidate set quality in the success of tour improvement methods for solving the Traveling Salesman Problem (TSP), particularly as problem instances scale up. We introduced a novel tour construction heuristic grounded in fuzzy clustering, which proves to be highly effective for generating candidate sets in a nearly linear time complexity framework. Our extensive experimentation on large-scale TSP instances, including Euclidean and VLSI datasets, confirms that the proposed method covers almost all edges of the best-known solutions. Moreover, when integrated with Helsgaun’s Lin–Kernighan heuristic, the method not only accelerates convergence but also achieves superior results compared to the improved POPMUSIC algorithm. These findings suggest that our approach offers a robust and scalable solution for addressing large-scale TSP challenges, paving the way for more efficient optimization in combinatorial problems. Future work may explore further refinements of the heuristic and its application to other complex optimization problems.

Author Contributions

Conceptualization, B.T.-S., P.F. and L.T.K.; methodology, B.T.-S.; software, B.T.-S.; validation, B.T.-S.; writing—original draft preparation, B.T.-S.; writing—review and editing, L.T.K.; visualization, B.T.-S.; supervision, L.T.K. and P.F.; funding acquisition, B.T.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the ÚNKP-22-4 New National Excellence Program of the Ministry of Human Capacities.

Data Availability Statement

The source code of our algorithm and the instances used in the experiments will be made available upon request. A subset of the tested instances is publicly accessible at https://www.math.uwaterloo.ca/tsp/data/index.html (accessed on 18 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Applegate, D.L.; Bixby, R.E.; Chvátal, V.; Cook, W.J.; Espinoza, D.; Goycoolea, M.; Helsgaun, K. Certification of an optimal tour through 85,900 cities. Oper. Res. Lett. 2009, 37, 11–15.
  2. Helsgaun, K. An effective implementation of the Lin-Kernighan traveling salesman heuristic. Eur. J. Oper. Res. 2000, 126, 106–130.
  3. Moscato, P. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts—Towards Memetic Algorithms. In Technical Report Caltech Concurrent Computation Program, Report 826; California Institute of Technology: Pasadena, CA, USA, 1989.
  4. Kóczy, L.T.; Földesi, P.; Tüű-Szabó, B. Enhanced discrete bacterial memetic evolutionary algorithm—An efficacious metaheuristic for the traveling salesman optimization. Inf. Sci. 2018, 460–461, 389–400.
  5. Nguyen, H.D.; Yoshihara, I.; Yamamori, K.; Yasunaga, M. Implementation of an effective hybrid GA for large-scale traveling salesman problems. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2007, 37, 92–99.
  6. Skinderowicz, R. Improving Ant Colony Optimization efficiency for solving large TSP instances. Appl. Soft Comput. 2022, 120, 108653.
  7. Singh, S.P.; Kumar, N.; Dhiman, G.; Vimal, S.; Viriyasitavat, W. AI-Powered Metaheuristic Algorithms: Enhancing Detection and Defense for Consumer Technology. IEEE Consum. Electron. Mag. 2024, 1–8.
  8. Jauhar, S.K.; Pant, M. Genetic algorithms in supply chain management: A critical analysis of the literature. Sādhanā 2016, 41, 993–1017.
  9. Balaji, A.N.; Jawahar, N. A simulated annealing algorithm for a two-stage fixed charge distribution problem of a supply chain. Int. J. Oper. Res. 2010, 7, 192–215.
  10. Bentley, J.L. Multidimensional binary search trees used for associative searching. Commun. ACM 1975, 18, 509–517.
  11. Padberg, M.W.; Rinaldi, G. Optimization of a 532-city symmetric traveling salesman problem by branch and cut. Oper. Res. Lett. 1987, 6, 1–7.
  12. Applegate, D.L.; Bixby, R.E.; Chvátal, V.; Cook, W.J. The Traveling Salesman Problem: A Computational Study, 1st ed.; Princeton University Press: Princeton, NJ, USA, 2007; pp. 469–489.
  13. Blazinskas, A.; Misevicius, A. Generating High Quality Candidate Sets by Tour Merging for the Traveling Salesman Problem. In Information and Software Technologies. ICIST 2012 Communications in Computer and Information Science; Skersys, T., Butleris, R., Butkiene, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 319.
  14. Ali, I.J.; Tüű-Szabó, B.; Kóczy, L.T. Effect of the initial population construction on the DBMEA algorithm searching for the optimal solution of the traveling salesman problem. Infocommun. J. 2022, 14, 72–78.
  15. Taillard, E.D.; Voss, S. Popmusic—Partial Optimization Metaheuristic under Special Intensification Conditions. In Essays and Surveys in Metaheuristics; Operations Research/Computer Science Interfaces Series; Springer: Boston, MA, USA, 2002; Volume 15.
  16. Taillard, E.D. Heuristic methods for large centroid clustering problems. J. Heuristics 2003, 9, 51–73.
  17. Queiroga, E.; Sadykov, R.; Uchoa, E. A POPMUSIC matheuristic for the capacitated vehicle routing problem. Comput. Oper. Res. 2021, 136, 105475.
  18. Alvim, A.C.F.; Taillard, E.D. POPMUSIC for the point feature label placement problem. Eur. J. Oper. Res. 2009, 192, 396–413.
  19. Taillard, E.D.; Helsgaun, K. POPMUSIC for the travelling salesman problem. Eur. J. Oper. Res. 2019, 272, 420–429.
  20. Taillard, E.D. A linearithmic heuristic for the travelling salesman problem. Eur. J. Oper. Res. 2022, 297, 442–450.
  21. Dunn, J.C. A fuzzy relative ISODATA process and its use in detecting compact well-separated clusters. J. Cybern. 1974, 3, 32–57.
  22. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Springer: New York, NY, USA, 1981.
  23. Suganya, R.; Shanthi, R. Fuzzy c-means algorithm—A review. Int. J. Sci. Res. Publ. 2012, 2, 1.
  24. Kim, W.D.; Lee, K.H.; Lee, D. A novel initialization scheme for the fuzzy c-means algorithm for color clustering. Pattern Recognit. Lett. 2004, 25, 227–237.
  25. Yu, X.C.; He, H.; Hu, D.; Zhou, W. Land cover classification of remote sensing imagery based on interval-valued data fuzzy c-means algorithm. Sci. China Earth Sci. 2014, 57, 1306–1313.
  26. Chuang, K.-S.; Tzeng, H.-L.; Chen, S.; Wu, J.; Chen, T.J. Fuzzy c-means clustering with spatial information for image segmentation. Comput. Med. Imaging Graph. 2006, 30, 9–15.
  27. Reinelt, G. Improving Solutions. The Traveling Salesman: Computational Solutions for TSP Applications; Springer: Berlin/Heidelberg, Germany, 1994; pp. 100–132.
  28. Lin, S.; Kernighan, B.W. An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 1973, 21, 498–516.
Figure 1. The pseudo-code of the FCM method.
Figure 1. The pseudo-code of the FCM method.
Mathematics 12 02960 g001
Figure 2. The flowchart of the FCM method.
Figure 2. The flowchart of the FCM method.
Mathematics 12 02960 g002
Figure 3. The 10 clusters on the dkc3938 instance.
Figure 3. The 10 clusters on the dkc3938 instance.
Mathematics 12 02960 g003
Figure 4. The subpaths of the dkc3938 instance.
Figure 4. The subpaths of the dkc3938 instance.
Mathematics 12 02960 g004
Figure 5. A complete tour of the dkc3938 instance after connecting the subpaths.
Figure 5. A complete tour of the dkc3938 instance after connecting the subpaths.
Mathematics 12 02960 g005
Figure 6. A final tour of the dkc3938 instance after post-optimization.
Figure 6. A final tour of the dkc3938 instance after post-optimization.
Mathematics 12 02960 g006
Figure 7. The candidate set size with different basic steps on the E10k0 instance (fuzzy clusters, m = 1.05) vs. hard clusters ( n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Figure 7. The candidate set size with different basic steps on the E10k0 instance (fuzzy clusters, m = 1.05) vs. hard clusters ( n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Mathematics 12 02960 g007
Figure 8. The reduction in the number of missing edges with different basic steps on the E10k0 instance (fuzzy clusters, m = 1.05, n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Figure 8. The reduction in the number of missing edges with different basic steps on the E10k0 instance (fuzzy clusters, m = 1.05, n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Mathematics 12 02960 g008
Figure 9. The reduction in the number of missing edges on the E10k0 instance fuzzy ( m = 1.05) vs. hard clusters ( n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Figure 9. The reduction in the number of missing edges on the E10k0 instance fuzzy ( m = 1.05) vs. hard clusters ( n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Mathematics 12 02960 g009
Figure 10. The candidate set size on the E10k0 instance fuzzy ( m = 1.05 vs. normal clusters ( n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Figure 10. The candidate set size on the E10k0 instance fuzzy ( m = 1.05 vs. normal clusters ( n c l u s t e r  = 10, post_opt = yes, n p o s t o p t  = 10, and p o s t o p t  = nodes/10).
Mathematics 12 02960 g010
Figure 11. The reduction in the number of missing edges with different parameter m values on the E10k0 instance (fuzzy clusters, n_cluster = 10, post_opt = yes, n_postopt = 10, and postopt = nodes/10).
Figure 12. The candidate set size with different parameter m values on the E10k0 instance (fuzzy clusters, n_cluster = 10, post_opt = yes, n_postopt = 10, and postopt = nodes/10).
Figure 13. The reduction in the number of missing edges with different parameter values of the post-optimization on the E10k0 instance (fuzzy clusters, m = 1.05, n_cluster = 10, and post_opt = yes).
Figure 14. The candidate set size with different parameter values of the post-optimization on the E10k0 instance (fuzzy clusters, m = 1.05, n_cluster = 10, and post_opt = yes).
Figure 15. The polynomial curve fitted to our tour construction heuristic.
Figure 16. The comparison of the reduction in gap over time using our fuzzy cluster-based and the POPMUSIC candidate set generation methods.
Table 1. Results with different parameter settings on the E10k0 instance (constant values: m = 1.05, n_postopt = 10, postopt = nodes/10).

Parameters | ME (50 tours) | AVD (50 tours) | ME (100 tours) | AVD (100 tours) | ME (150 tours) | AVD (150 tours) | Best Tour | Avg. Tour | Time [s]
hard clusters, n_cluster = 10, post_opt = yes | 1051 | 5.8 | 0 | 6.5 | – | – | 74,497,318 | 76,030,799.3 | 49.573
fuzzy clusters, n_cluster = 5, post_opt = yes | 2 | 5.1 | 1 | 6.1 | 0 | 7 | 74,311,481 | 75,780,634.5 | 53.286
fuzzy clusters, n_cluster = 10, post_opt = yes | 1 | 5.3 | 0 | 6.4 | 0 | 7.3 | 74,886,400 | 76,277,249.4 | 53.162
fuzzy clusters, n_cluster = 15, post_opt = yes | 3 | 5.5 | 0 | 6.7 | 0 | 7.5 | 75,375,746 | 76,883,980 | 52.013
fuzzy clusters, n_cluster = 20, post_opt = no | 4 | 5.6 | 1 | 6.8 | 0 | 7.7 | 75,496,760 | 77,081,900.7 | 33.835
fuzzy clusters, n_cluster = 20, post_opt = yes | 3 | 5.5 | 0 | 6.5 | 0 | 7.5 | 74,936,874 | 76,749,658.4 | 54.801
Table 2. Results with different parameter settings on the dan59296 instance (constant values: m = 1.05, n_postopt = 10, postopt = nodes/10).

Parameters | ME (50 tours) | AVD (50 tours) | ME (100 tours) | AVD (100 tours) | ME (150 tours) | AVD (150 tours) | Best Tour | Avg. Tour | Time [s]
hard clusters, n_cluster = 10, post_opt = yes | 236 | 4.78 | 102 | 5.88 | 68 | 6.75 | 183,385 | 186,523 | 224.42
fuzzy clusters, n_cluster = 5, post_opt = yes | 227 | 5.22 | 128 | 6.70 | 59 | 7.60 | 184,360 | 187,635 | 331.22
fuzzy clusters, n_cluster = 10, post_opt = yes | 182 | 5.61 | 72 | 7.32 | 44 | 8.63 | 185,815 | 188,579 | 231.07
fuzzy clusters, n_cluster = 15, post_opt = yes | 173 | 5.64 | 70 | 7.37 | 36 | 8.78 | 185,531 | 189,331 | 229.51
fuzzy clusters, n_cluster = 20, post_opt = no | 198 | 6.02 | 81 | 7.90 | 59 | 9.38 | 186,064 | 189,873 | 174.03
fuzzy clusters, n_cluster = 20, post_opt = yes | 159 | 5.79 | 70 | 7.66 | 30 | 9.04 | 185,997 | 189,153 | 238.56
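The ME columns in Tables 1 and 2 can be reproduced with simple bookkeeping: take the candidate set as the union of the edges of all tours constructed so far, and count how many edges of the best-known tour are still uncovered at each checkpoint. The sketch below is our reading of the tables, not the authors' exact code; the function names and the tour representation (a sequence of node ids) are assumptions.

```python
# Sketch of the ME bookkeeping: candidate set = union of tour edges,
# ME = edges of the best-known tour not yet in the candidate set.
def edges_of(tour):
    """Undirected edge set of a tour given as a sequence of node ids."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def track_missing(tours, best_tour, checkpoints=(50, 100, 150)):
    """Return {checkpoint: ME} after merging the first k tours' edges."""
    candidate, report = set(), {}
    best_edges = edges_of(best_tour)
    for k, tour in enumerate(tours, start=1):
        candidate |= edges_of(tour)
        if k in checkpoints:
            report[k] = len(best_edges - candidate)
    return report
```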
Table 3. Results with using different basic moves on the E10k0 instance (fuzzy clusters, m = 1.05, n_cluster = 10, post_opt = yes, n_postopt = 10, and postopt = nodes/10).

Basic Move | Avg. Gap [%] | Avg. Time [s]/Tour
2-opt | 10.79 | 0.229
3-opt | 6.14 | 0.354
4-opt | 4.83 | 0.853
5-opt | 4.38 | 2.562
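As a reference point for the rows above, the smallest basic move, 2-opt, removes two edges and reconnects the tour by reversing the segment between them. The sketch below is a minimal first-improvement 2-opt pass under Euclidean distance; it is our illustration of the generic move, not the paper's implementation, and dist, coords, and the list-based tour are assumptions.

```python
# One first-improvement 2-opt pass over a tour (list of node ids).
# coords maps node id -> (x, y). Illustrative only, O(n^2) per pass.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def two_opt_pass(tour, coords):
    """Reverse tour[i:j] whenever replacing edges (i-1, i) and (j-1, j)
    with (i-1, j-1) and (i, j) shortens the tour. Returns True if improved."""
    n = len(tour)
    improved = False
    for i in range(1, n - 1):
        for j in range(i + 2, n + 1):
            a, b = coords[tour[i - 1]], coords[tour[i]]
            c, d = coords[tour[j - 1]], coords[tour[j % n]]
            if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-9:
                tour[i:j] = reversed(tour[i:j])
                improved = True
    return improved
```

In the paper's setting such moves would be restricted to candidate edges, which is where the small candidate sets compared in Tables 4 and 5 pay off.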
Table 4. Comparison of candidate sets on VLSI instances.

Instance | Fuzzy: Size | Fuzzy: Missing Edges | Fuzzy: Time [s] | Fuzzy: Avg. Gap [%] | Fast POPMUSIC: Size | Fast POPMUSIC: Missing Edges | Fast POPMUSIC: Time [s] | Fast POPMUSIC: Avg. Gap [%]
dkc3938 | 7.1 | 3 | 12.99 | 13.44 | 7.8 | 5 | 7.38 | 17.04
xmc10150 | 7.7 | 14 | 34.82 | 16.06 | 8.7 | 18 | 16.10 | 20.06
pba38478 | 7.9 | 81 | 115.37 | 13.79 | 8.4 | 63 | 60.99 | 19.41
ics39603 | 6.9 | 48 | 125.78 | 14.09 | 8.4 | 34 | 63.69 | 20.44
dan59296 | 7.3 | 72 | 150.01 | 14.03 | 8.6 | 62 | 88.64 | 19.45
sra104815 | 7.6 | 140 | 511.78 | 15.98 | 8.6 | 98 | 170.86 | 19.29
ara238025 | 7.7 | 320 | 1298.22 | 13.29 | 8.6 | 221 | 376.76 | 19.94
Table 5. Comparison of candidate sets on national instances.

Instance | Fuzzy: Size | Fuzzy: Missing Edges | Fuzzy: Time [s] | Fuzzy: Avg. Gap [%] | Fast POPMUSIC: Size | Fast POPMUSIC: Missing Edges | Fast POPMUSIC: Time [s] | Fast POPMUSIC: Avg. Gap [%]
ei8246 | 4.9 | 0 | 25.76 | 6.76 | 6.3 | 0 | 13.59 | 12.26
fi10639 | 6.3 | 3 | 36.58 | 7.69 | 7.1 | 2 | 18.56 | 13.15
mo14185 | 6.3 | 0 | 42.61 | 8.19 | 7.1 | 3 | 28.69 | 13.57
it16862 | 5.8 | 3 | 60.31 | 6.86 | 7.0 | 0 | 27.14 | 12.86
vm22775 | 5.7 | 6 | 72.30 | 4.83 | 7.0 | 3 | 24.55 | 12.93
sw24978 | 6.0 | 3 | 73.91 | 6.87 | 7.1 | 5 | 24.91 | 13.31
bm33708 | 5.9 | 13 | 97.72 | 5.80 | 7.1 | 6 | 50.28 | 13.28
ch71009 | 6.2 | 18 | 281.18 | 5.55 | 7.1 | 9 | 114.14 | 11.72
Table 6. The parameters of the polynomial curve fitting.

a | b | R²
0.0001741 (2.26 × 10⁻⁶, 0.0003459) | 1.279 (1.198, 1.36) | 0.994
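Reading Figure 15 and Table 6 as a power-law fit t(n) = a·n^b on the construction time (b ≈ 1.28, i.e., only slightly superlinear, consistent with the near-linear complexity claimed for the heuristic), the fitted parameters convert directly into a quick running-time estimate. The sketch and its names are ours, under that assumed functional form.

```python
# Running-time estimate from Table 6, assuming the fitted curve is the
# power law t(n) = a * n**b (our reading of Figure 15; names are ours).
A, B = 0.0001741, 1.279  # fitted parameters, R^2 = 0.994

def estimated_seconds(n_nodes: int) -> float:
    return A * n_nodes ** B

for n in (10_000, 100_000, 316_000):
    print(f"{n:>7,} nodes: ~{estimated_seconds(n):,.0f} s")
```

For the largest instance, E316k0, this predicts on the order of 1.9 × 10³ s, the same order of magnitude as the measured construction time reported in Table 7.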
Table 7. Comparison of results on Euclidean instances.

Instance | Fuzzy (100 tours): Candidate Set Size | Candidate Set Time [s] | LKH (1000 Trials): Avg. Time [s] | Best Gap | Avg. Gap | POPMUSIC: Candidate Set Size | Candidate Set Time [s] | LKH: Best Gap | Avg. Gap | Time Limit [s]
E10k0 | 6.4 | 36.483 | 255.49 | 0.01% | 0.02% | 7.1 | 17.82 | 0.00% | 0.01% | 300
E31k0 | 6.2 | 124.652 | 1880.12 | 0.01% | 0.02% | 7.2 | 55.67 | 0.01% | 0.03% | 2000
E100k0 | 6.1 | 419.819 | 14,462.18 | 0.03% | 0.04% | 7.2 | 136.53 | 0.05% | 0.06% | 15,000
E316k0 | 6 | 1642.85 | 102,750.02 | 0.09% | 0.09% | 7.3 | 569.87 | 0.10% | 0.10% | 110,000