Article

Probabilistic Cellular Automata Monte Carlo for the Maximum Clique Problem

Dipartimento di Matematica e Informatica, Università di Perugia, Via Vanvitelli, 06123 Perugia, Italy
Mathematics 2024, 12(18), 2850; https://doi.org/10.3390/math12182850
Submission received: 27 August 2024 / Revised: 10 September 2024 / Accepted: 12 September 2024 / Published: 13 September 2024
(This article belongs to the Section Probability and Statistics)

Abstract
We consider the problem of finding the largest clique of a graph. This is an NP-hard problem and no algorithm that solves it exactly in polynomial time is known. Several heuristic approaches have been proposed to find approximate solutions; Markov Chain Monte Carlo is one of these. In the context of Markov Chain Monte Carlo, we present a class of “parallel dynamics”, known as Probabilistic Cellular Automata, which can be used in place of the more standard choice of sequential “single spin flip” dynamics to sample from a probability distribution concentrated on the largest cliques of the graph. We perform a numerical comparison between the two classes of chains, both in terms of the quality of the solution and in terms of computational time. We show that the parallel dynamics are considerably faster than the sequential ones while providing solutions of comparable quality.

1. Introduction

A graph is a combinatorial structure consisting of a set of objects called vertices or nodes together with a collection of links, called edges, between pairs of vertices. Graphs may be used to model the relationships between pairs of entities of a population, such as those between individuals in a social network or between data points in a dataset. A typical question in decision theory is as follows: “Given a graph G with set of vertices V, is there a subset A ⊆ V with cardinality k such that all pairs of vertices in A are connected by an edge?”. This problem is known as the clique problem and, in the context of social network analysis, was introduced in [1] to model groups of mutually linked people. The clique problem is known to be NP-complete (it is, actually, one of the 21 problems identified by Karp in [2]) and its formulation as an optimization problem, “what is the largest clique of a given graph G?”, is known as the maximum clique problem and is NP-hard. Though it is easy to verify that a collection of k nodes is indeed a clique of a graph G (where “easy” means that this verification can be carried out in polynomial time; this is, in fact, the definition of an NP problem), no polynomial time algorithm is known to exist to determine the size of the largest clique of a graph. For this reason, in many situations where computation time is limited, finding an optimal clique using an exact algorithm is usually not a viable option. In these cases, having an algorithm capable of finding a good heuristic solution in a short time may be extremely valuable.
Beyond the exact algorithms (see, for instance, [3,4,5,6]), which, in the general case, are not suitable for large instances, several approaches have been used to tackle the clique problem, such as greedy procedures, Monte Carlo methods, machine learning and artificial intelligence tools, and quantum computing algorithms. For recent overviews, see [7,8].
Here, our focus is on Markov Chain Monte Carlo methods. The idea behind Markov Chain Monte Carlo methods is to sample collections of nodes from a probability distribution which puts most of its weight on the largest cliques of a given graph. This probability distribution is usually approximated by the long-term distribution of a suitable Markov chain (see [9] for an elementary introduction). This fact gives Markov Chain Monte Carlo methods a solid mathematical foundation which makes this class of heuristic algorithms “explainable”. This is in contrast with other heuristic methods such as greedy and local search algorithms (see, e.g., [10,11,12,13]) and methods based on machine learning and artificial intelligence (see, for instance, [14,15,16]).
In the context of the maximum clique problem, Markov Chain Monte Carlo (MCMC) methods have been analyzed extensively from both the theoretical and numerical points of view. Even though some of the limitations of MCMC have been highlighted in [17], more recent results such as those in [18,19,20] showed that this approach may be worthwhile in practical instances.
This work aims to evaluate, numerically, a class of Markov Chain Monte Carlo algorithms, which we refer to here as Probabilistic Cellular Automata Monte Carlo (PCAMC), where the considered Markov chain is a Probabilistic Cellular Automaton (PCA), that is, a Markov chain living on some multidimensional discrete space for which, at each step, all the components of the “current state” are updated independently of one another. This is in contrast with the usual choice in MCMC, which is to update only one component at each step of the dynamics. The inherent parallel nature of PCAMC allows for a straightforward implementation that takes full advantage of parallel computing architectures, giving PCAMC a significant advantage over standard MCMC methods from the computational point of view. The parallel update of the components of the current state represents the main novelty of the PCAMC approach with respect to other parallel update rules considered in the literature (such as the one introduced in [18]). Indeed, thanks to this independence, it is possible to distribute the computation necessary for each update to a computing unit without the need for these units to exchange messages.
These algorithms have already been considered (by the author of this paper and several coauthors) to tackle certain instances of QUBO and proved to be rather effective. Here, we use them in the context of the maximum clique problem and show that their performance is quite promising.
The rest of the paper is organized as follows. Section 2 provides a formal description of the maximum clique problem and gives a mathematical definition of the algorithms together with their theoretical foundations. Section 3 presents a comparison between the results obtained with the standard MCMC approach and the PCAMC approach both in terms of computation time and largest clique found. Finally, Section 4 contains some final remarks and suggests future lines of research concerning the theoretical aspects of the algorithms considered in the paper.

2. Materials and Methods

In this section, we give a mathematical definition of the maximum clique problem and see how it can be formulated in terms of the minimization of a suitable objective function. We then describe the Monte Carlo algorithms we want to evaluate, namely the one we refer to as the PCA (which throughout the paper we denote by A) and the Shaken dynamics (denoted by S), together with the algorithm we take as a reference, that is, the Metropolis “single spin flip” algorithm (denoted by M).

2.1. Preliminary Definitions and Statement of the Problem

Let G = (V, E) be a non-oriented graph where V is the set of vertices and E ⊆ V × V is the set of edges. A graph G is said to be complete if E = {{i, j} : i, j ∈ V, i ≠ j}, that is, if there is an edge between each pair of vertices of G. A graph G′ = (V′, E′) is called a subgraph of G if V′ ⊆ V and E′ ⊆ E. Let A ⊆ V be a subset of the vertices of G. Then, the graph G[A] with set of vertices A and set of edges E|_A = {{i, j} ∈ E : i, j ∈ A} (that is, the set of edges of G with both endpoints in A) is called the subgraph of G induced by A. The subset C ⊆ V is called a clique for G if the induced subgraph G[C] is complete. We denote by K(G) the set of all cliques of G. A clique C is said to be maximal for G if, for all C′ ⊆ V such that C ⊊ C′, C′ is not a clique. In words, this means that C is maximal if it cannot be extended by adding an adjacent vertex to C. Further, a clique C is called a maximum clique of G if |C′| ≤ |C| for all C′ ∈ K(G), where |S| denotes the cardinality of the set S. The cardinality of a maximum clique of G is called the clique number of G and is denoted by ω(G).
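As a side remark, the polynomial-time verification that a given vertex subset is a clique is straightforward to implement. The sketch below is our own illustration, not part of the paper; the function name `is_clique` and the frozenset edge encoding are our choices:

```python
# Illustrative helper (not from the paper): verify in O(|C|^2) time that a
# vertex subset C is a clique of a graph given as a set of frozenset edges.
def is_clique(C, edges):
    C = list(C)
    return all(frozenset((C[a], C[b])) in edges
               for a in range(len(C)) for b in range(a + 1, len(C)))

# A square 1-2-3-4 with the diagonal {1, 3}: {1, 2, 3} is a clique of it,
# while {1, 2, 4} is not (the edge {2, 4} is missing).
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
```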
Given a graph G, the maximum clique problem consists of determining the clique number ω ( G ) , that is, the size of the largest clique in the graph. As mentioned in the introduction, this problem is NP-hard.
A graph G = (V, E) can be encoded in several ways. One possibility is to consider its adjacency matrix A_G = (a_ij)_{i,j = 1, …, N}, defined by a_ij = 1 if {i, j} ∈ E and a_ij = 0 otherwise. Note that A_G is a symmetric matrix since G is non-oriented.
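For example, assuming 0-based vertex labels and a NumPy encoding (both our own conventions, not the paper's), the adjacency matrix can be built from an edge list as follows:

```python
import numpy as np

# Build the adjacency matrix A_G of a non-oriented graph from its edge list.
def adjacency_matrix(N, edges):
    A = np.zeros((N, N), dtype=int)
    for i, j in edges:
        A[i, j] = A[j, i] = 1   # symmetric: the graph is non-oriented
    return A

A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (0, 2)])
assert (A == A.T).all()         # symmetry check
```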

2.2. A Hamiltonian for the Clique Problem: The Theoretical Foundation of MCMC Algorithms

A discrete optimization problem consists of finding the minimum of an objective function H ( η ) where η is an element of a suitable multidimensional space X .
Consider a graph G = (V, E), with vertices V = {1, 2, …, N}, and the space of Boolean configurations with N components X = {0, 1}^N. Then, for each subgraph G[A], with A ⊆ V, there is one, and only one, η^A ∈ X defined by η_i^A = 1 ⟺ i ∈ A. We call the set of indices where η is positive the support of η and denote it by supp(η) (in words, supp(η) is the set of vertices selected by the configuration η). We can, therefore, identify the subgraphs of a graph with N vertices with the configurations of {0, 1}^N. With an abuse of notation we write “the subgraph η” in place of “the subgraph G[A] induced by the set A = {i ∈ 1, …, N : η_i = 1}”. Similarly, if the vertices of subgraph η are a clique of G we refer to η as a clique of G. Moreover, we write |η| for the cardinality of a configuration η, defined as |η| = |{i ∈ 1, …, N : η_i = 1}|.
For any graph G = (V, E) with N vertices, it is possible to define the N × N matrix J = J(G) as J_ij = 1/2 if {i, j} ∉ E and i ≠ j, and J_ij = 0 otherwise. We call the matrix J(G) defined in this way the matrix of missing edges of G. Note that the matrix J is symmetric and is closely related to the adjacency matrix of the graph. In particular, if A = A(G) is the adjacency matrix of the graph, then J_ij = (1/2)(1 − A_ij) if i ≠ j and 0 otherwise. Therefore, any symmetric matrix J with zeroes on the diagonal and entries in {0, 1/2} uniquely determines a graph G(J). Moreover, when it does not give rise to confusion, we drop the explicit dependence of J on G.
Given a graph G = ( V , E ) consider the associated matrix of missing edges J and define the function
H_{J,λ}(η) = Σ_{i,j=1}^N J_ij η_i η_j − λ Σ_{i=1}^N η_i.
In the remainder of the paper, to lighten the notation, we will write H instead of H J , λ .
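To make these definitions concrete, here is a minimal sketch (our own NumPy illustration, not the authors' implementation; the small example graph is hypothetical) of the matrix of missing edges and of the evaluation of H_{J,λ}:

```python
import numpy as np

def missing_edges_matrix(A):
    # J_ij = 1/2 if i != j and {i, j} is not an edge, 0 otherwise
    J = 0.5 * (1 - A)
    np.fill_diagonal(J, 0.0)
    return J

def hamiltonian(J, eta, lam):
    # H_{J, lambda}(eta) = sum_{i,j} J_ij eta_i eta_j - lambda * sum_i eta_i
    return eta @ J @ eta - lam * eta.sum()

# Triangle on {0, 1, 2} plus a vertex 3 connected only to vertex 2:
A = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1
J = missing_edges_matrix(A)

clique = np.array([1, 1, 1, 0])      # {0, 1, 2} is a clique: H = -3 * lam
non_clique = np.array([1, 1, 1, 1])  # two missing edges: {0, 3} and {1, 3}
```

Each missing edge contributes 1 to the double sum (it is counted twice with weight 1/2), so a non-clique pays a positive penalty in addition to the −λ|η| term.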
Lemma 1. 
Let G = (V, E) be a non-oriented graph. If 0 < λ < 1, then H(η) is minimal if and only if η is a maximum clique of G.
Proof. 
Let us first show that the condition 0 < λ < 1 is sufficient.
Let η be a configuration with cardinality k. Then, H(η) = −λk if η is a clique and H(η) > −λk otherwise, since the term Σ_{i,j=1}^N J_ij η_i η_j is strictly positive for all η’s that are not cliques (there is at least one “missing edge”).
Let ω(G) be the clique number of G, that is, the cardinality of the largest clique of G. Then, there exists a clique η̄ with cardinality ω(G) such that H(η̄) = −λω(G). From the previous observation, it follows that H(η) ≥ −λω(G) for all η’s with cardinality at most ω(G), with equality only if η is a maximum clique for G.
It is left to show that, if the cardinality of η is larger than ω(G), then H(η) > −λω(G). Let η have cardinality k > ω(G). Write S = supp(η) and denote by A the maximum clique of G[S]. Further, let B = S ∖ A and call η^A and η^B the configurations whose supports are, respectively, A and B. Note that A and B are disjoint by construction and |B| ≥ 1 since, by assumption, |A| + |B| > ω(G) and |A| ≤ ω(G). We have
H(η) = H(η^A) + H(η^B) + 2 Σ_{i∈A, j∈B} J_ij η_i η_j.
Now, observe that, by the assumed maximality of the clique A, 2 Σ_{i∈A, j∈B} J_ij η_i η_j ≥ |B|, since there must be at least one missing edge between each vertex in B and the vertices of A (each missing edge contributes twice, with weight 1/2, to the double sum). Moreover, it must be that H(η^A) ≥ −λω(G) and H(η^B) ≥ −λ|B|. Hence,
H(η) ≥ −λω(G) + (1 − λ)|B| > −λω(G)
as soon as λ < 1.
To see that the condition λ < 1 is necessary, consider the case where η is a clique of size ω(G). Then, H(η) = −λω(G). Let A be the set of indices such that η_i = 1 (that is, the set of vertices of the clique). Suppose there is a configuration τ obtained from η by additionally setting τ_j = 1 for some j ∉ A such that there is an edge between vertex j and exactly ω(G) − 1 vertices in A. Then, H(τ) = −λ(ω(G) + 1) + 1 = H(η) − (λ − 1) ≤ H(η) as soon as λ ≥ 1.
Finally, observe that if λ = 0, then H(η) = 0 for all η’s that are a clique, and if λ < 0 only the empty configuration 0 has energy H(0) = 0, whereas H(η) > 0 for all η ≠ 0.    □
Thanks to the previous lemma, the maximum clique problem can be formulated in a standard optimization problem form as
min_{η ∈ {0,1}^N} H(η) = min_{η ∈ {0,1}^N} [ Σ_{i,j=1}^N J_ij η_i η_j − λ Σ_{i=1}^N η_i ], 0 < λ < 1.
Note that, exploiting the fact that η_i = η_i², it is possible to rewrite the previous formula in terms of a matrix J̃ = J̃(λ) as
min_{η ∈ {0,1}^N} H(η) = min_{η ∈ {0,1}^N} Σ_{i,j=1}^N J̃_ij η_i η_j = min_{η ∈ {0,1}^N} η^T J̃ η
with J̃_ij = J_ij if i ≠ j and J̃_ii = −λ. This formulation is known in the literature as the Quadratic Unconstrained Binary Optimization (QUBO) problem: an NP-hard problem with a vast range of applications (see the text [21] for an overview) that allows for a straightforward embedding of several combinatorial optimization problems (beyond the maximum clique problem), such as max-cut and graph coloring (see [22]). Further, QUBO is a model of interest in the field of quantum computing (see, e.g., [23,24,25]).
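The folding of the linear term into the diagonal can be sketched as follows (an illustrative check, under our own NumPy conventions, that the two formulations of H agree):

```python
import numpy as np

# Fold the linear term -lambda * sum_i eta_i into the diagonal of J,
# using eta_i = eta_i^2 for binary variables (illustrative sketch).
def qubo_matrix(J, lam):
    Jt = J.copy()
    np.fill_diagonal(Jt, -lam)
    return Jt

rng = np.random.default_rng(0)
N, lam = 6, 0.25
J = np.triu(rng.integers(0, 2, (N, N)) * 0.5, k=1)
J = J + J.T                                # symmetric, zero diagonal
Jt = qubo_matrix(J, lam)

eta = rng.integers(0, 2, N)
H = eta @ J @ eta - lam * eta.sum()
assert np.isclose(eta @ Jt @ eta, H)       # the two formulations agree
```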
It is worth noting that the objective function of an optimization problem can be interpreted as the Hamiltonian energy function of a statistical mechanics model with state space X . The statistical mechanics model’s ground states (minima of the Hamiltonian) correspond to configurations minimizing the objective function. The equilibrium distribution of a statistical mechanics model with Hamiltonian H is described by the Gibbs measure
μ_G(η) = (1/Z) e^{−β H(η)}, η ∈ X
where β is a positive real parameter called the inverse temperature and Z is a normalizing constant called the partition function. As β → ∞ (low-temperature regime), this probability measure concentrates on the ground states of the system. This fact suggests that a solution to the optimization problem can be found by sampling from the Gibbs measure for the corresponding statistical mechanics model at low temperature.
To this aim, it is possible to define a Markov chain on X with transition probability P : X × X → [0, 1] whose stationary distribution is the Gibbs measure μ_G. If the chain is irreducible and aperiodic then, irrespective of the initial starting configuration, the probability distribution of the chain after n steps tends to the stationary distribution as n → ∞. The sampling from the Gibbs measure can, then, be performed by letting the chain evolve for a sufficiently long time. More precisely, let μ_n(η) be the probability distribution after n steps of the chain starting from configuration η and let π be the stationary measure for the chain with transition probability P. Then, denoting by d_TV(·,·) the total variation distance,
d_TV(μ_n(η), π) → 0 as n → ∞.
The time it takes for the chain to have a distribution that is within distance ε from the equilibrium distribution is referred to as the mixing time of the chain. In formulae, the mixing time of a Markov chain with state space X and stationary distribution π is defined, for ε < 1/2, as
t_mix(ε) = min{n : d_TV(μ_n(η), π) ≤ ε}.
In general, determining an a priori bound on the mixing time of the chain that guarantees that the algorithm is useful in practice (that is, a mixing time that is at most polynomial in the size of the problem) may be non-trivial. Actually, in the case of Erdős–Rényi random graphs, Jerrum ([17]) proved that the Metropolis process he considered for the maximum clique problem has super-polynomial mixing time. However, for instances that are not too big (number of nodes up to a few thousand), Monte Carlo algorithms may still provide good heuristic solutions (see, e.g., [18]).
In the case of the PCAMC algorithms that we will describe below, the stationary measure is not exactly the Gibbs measure (6). However, we will see that their stationary measure is close to the Gibbs measure and, hence, gives most of its probability weight to the largest cliques in the low-temperature regime.

2.3. The Metropolis Single Spin Flip Algorithm

A reference choice among Markov Chain Monte Carlo algorithms is the Metropolis update rule with a single spin flip whose transition probability matrix is as follows.
For a given graph G = ( V , E ) with missing edges matrix J, consider the Hamiltonian function H defined in (1). Let η ( i ) be the configuration obtained from η by flipping the occupation number of the i-th component, that is η i ( i ) = 1 η i and η j ( i ) = η j for all j i .
P_M(η, τ) =
  (1/N) e^{−β [H(τ) − H(η)]_+}   if τ = η^(i), i = 1, …, N
  1 − Σ_{i=1}^N (1/N) e^{−β [H(η^(i)) − H(η)]_+}   if τ = η
  0   otherwise
where [·]_+ denotes the positive part. In words, at each step, an index i ∈ 1, …, N is chosen at random and, if the configuration obtained by flipping the value of the i-th component of η has a Hamiltonian energy lower than that of η, the occupation number of η_i is changed with probability one. If this is not the case, the occupation number of η_i is changed with a probability that is exponentially small in the difference between the energy of the target configuration and the energy of the current one.
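A minimal sketch of this update rule (our own NumPy illustration, not the authors' Julia implementation) reads:

```python
import numpy as np

# Hedged sketch of the Metropolis single spin flip for H(eta) = eta^T Jt eta,
# with Jt symmetric (and Jt_ii = -lambda in the clique formulation).
def delta_H(eta, Jt, i):
    # energy difference H(eta^(i)) - H(eta) of flipping eta_i,
    # computed locally in O(N) instead of re-evaluating H from scratch
    return (1 - 2 * eta[i]) * (2 * (Jt[i] @ eta) - 2 * Jt[i, i] * eta[i] + Jt[i, i])

def metropolis_sweep(eta, Jt, beta, rng):
    N = len(eta)
    for _ in range(N):
        i = rng.integers(N)                        # pick a component at random
        dH = delta_H(eta, Jt, i)
        if dH <= 0 or rng.random() < np.exp(-beta * dH):
            eta[i] = 1 - eta[i]                    # accept the flip
    return eta

# sanity check of the local energy difference against a direct evaluation
rng = np.random.default_rng(0)
Jt = rng.normal(size=(6, 6))
Jt = (Jt + Jt.T) / 2
eta = rng.integers(0, 2, 6)
for i in range(6):
    tau = eta.copy()
    tau[i] = 1 - tau[i]
    assert np.isclose(delta_H(eta, Jt, i), tau @ Jt @ tau - eta @ Jt @ eta)
```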
Let π_M(η) = (1/Z) e^{−β H(η)}. Then, it is immediate to verify that P_M satisfies the detailed balance condition
π M ( η ) P M ( η , τ ) = π M ( τ ) P M ( τ , η )
ensuring that π M is the stationary distribution of P M .
We remark that this algorithm is allowed to visit configurations that are not cliques of the graph G. However, in the limit of large β the algorithm is equivalent (see [18]) to the Metropolis algorithm considered by Jerrum in [17], which allows transitions only between cliques of G.

2.4. The “Pair Hamiltonian” Probabilistic Cellular Automaton

A probabilistic cellular automaton on X is a Markov Chain whose transition probability matrix can be written, for any η , τ X as
P(η, τ) = Π_{i=1}^N P(τ_i | η).
In words, this means that, at each step, the value of each component τ_i of the target configuration τ can be sampled independently of the values of all other τ_j, j ≠ i. From a computational perspective, this fact is particularly appealing because, in theory, each component of τ could be updated by a dedicated computing core (see [26]). With the advent of massively parallel processors such as GPUs, this is an actual possibility even on consumer hardware for configurations with several hundred components.
In [27], the PCA Monte Carlo algorithm was introduced to study the minima of QUBO. When used to approach the maximum clique problem, the algorithm is as follows:
Consider the pair Hamiltonian function defined on pairs of configurations
H(η, τ) = β Σ_{i,j} J̃_ij η_i τ_j + q Σ_i [η_i(1 − τ_i) + τ_i(1 − η_i)],
where J ˜ is the matrix used in (5), and set the transition probability to be
P_A(η, τ) = e^{−H(η, τ)} / Σ_{τ′} e^{−H(η, τ′)}
Let h_i(η) = Σ_j J̃_ij η_j be the local energy field felt by the i-th component of η. Then, H(η, τ) can be rewritten as H(η, τ) = Σ_i (β h_i(η) τ_i + q [η_i(1 − τ_i) + τ_i(1 − η_i)]). Consequently, the transition probabilities can be rewritten as
P_A(η, τ) = Π_i e^{−β h_i(η) τ_i − q [η_i(1 − τ_i) + τ_i(1 − η_i)]} / Z_i
yielding
P(τ_i = 1 | η) = e^{−β h_i(η) − q(1 − η_i)} / Z_i  and  P(τ_i = 0 | η) = e^{−q η_i} / Z_i
with Z_i = e^{−β h_i(η) − q(1 − η_i)} + e^{−q η_i}. Therefore, P_A defines, indeed, a PCA in the sense of (10).
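A one-step sketch of this update (our own NumPy illustration, not the authors' implementation) makes the parallel nature explicit: all components of τ are drawn at once from the single-site probabilities above:

```python
import numpy as np

# Hedged sketch of one PCA step: every component of tau is sampled
# independently given eta, so the whole update vectorizes.
def pca_step(eta, Jt, beta, q, rng):
    h = Jt @ eta                                 # local fields h_i(eta)
    w1 = np.exp(-beta * h - q * (1 - eta))       # weight of tau_i = 1
    w0 = np.exp(-q * eta)                        # weight of tau_i = 0
    return (rng.random(len(eta)) < w1 / (w0 + w1)).astype(int)

# with a large q, tau almost surely coincides with eta (the chain barely moves)
rng = np.random.default_rng(0)
Jt = np.full((5, 5), 0.5)
np.fill_diagonal(Jt, -0.25)                      # hypothetical J~ with lambda = 0.25
eta = np.array([1, 0, 1, 0, 0])
tau = pca_step(eta, Jt, 1.0, 50.0, rng)
```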
By the symmetry of J ˜ , a straightforward computation shows that the detailed balance condition
P_A(η, τ) (Σ_{τ′} e^{−H(η, τ′)}) / (Σ_{η′, τ′} e^{−H(η′, τ′)}) = P_A(τ, η) (Σ_{η′} e^{−H(τ, η′)}) / (Σ_{η′, τ′} e^{−H(η′, τ′)})
holds and, hence,
π_A(η) = Σ_τ e^{−H(η, τ)} / Σ_{η′, τ} e^{−H(η′, τ)}
is the stationary measure of P_A. Note that this algorithm, too, may visit all configurations and not only those that are cliques.
A closer look at Equation (13) shows that, at each step, the dynamics tend to select, for the target configuration τ, the vertices connected to all vertices selected by configuration η. Indeed, for these components, the local energy field is negative (equal to −λ). On the other hand, the probability of selecting, in τ, components for which there is at least one missing edge with the components selected by η is exponentially small (since λ < 1). Further, note that the term e^{−q [η_i(1 − τ_i) + τ_i(1 − η_i)]} reduces the probability that the occupation number of the i-th component of τ differs from that of η, so that the probability that η and τ differ at too many sites stays small.
To obtain an idea of why this algorithm is expected to work, one can argue as follows. First, observe that H(η, η) = β H(η) with H defined as in (1). Further, note that as q gets large, the weight of pairs (η, τ) with η ≠ τ in π_A(η) is exponentially depressed and π_A(η) becomes closer to the Gibbs measure e^{−β H(η)}/Z with Hamiltonian H and inverse temperature β.
In [27] and in [28], the authors considered QUBO instances where the coefficients of the matrix J̃ are realizations of independent identically distributed random variables with expected value zero and showed that the PCA Monte Carlo performed quite well when compared to other heuristic techniques. Here, however, the elements of the matrix J̃ have a particular structure: they are negative on the diagonal and positive or null outside of it, and it is not, a priori, obvious how this structure could impact the practical performance of the algorithm.

2.5. The Shaken Dynamics

Shaken dynamics have been introduced in [29,30] in the process of finding efficient algorithms to sample from the Gibbs measure on spin systems (see [31,32]); they extend the class of PCAs, described above, defined in terms of a pair Hamiltonian with transition probabilities of the type P_A(η, τ) = e^{−H(η, τ)} / Σ_{τ′} e^{−H(η, τ′)}.
The transition probabilities of the Shaken dynamics come as the combination of two half-steps and are of the following type:
P_S(η, τ) = Σ_σ P↑(η, σ) P↓(σ, τ)
with
P↑(η, σ) = e^{−H↑(η, σ)} / Σ_{σ′} e^{−H↑(η, σ′)}  and  P↓(η, σ) = e^{−H↓(η, σ)} / Σ_{σ′} e^{−H↓(η, σ′)}.
The two functions H↑ and H↓ appearing in the definition of the transition probabilities must be such that H↓(η, σ) = H↑(σ, η). If this is the case, then it follows that
π_S(η) = Σ_σ e^{−H↑(η, σ)} / Σ_{η′, σ} e^{−H↑(η′, σ)}
is the stationary (reversible) measure for P_S. To see this, observe that the detailed balance condition holds. Indeed, calling Z = Σ_{η′, σ} e^{−H↑(η′, σ)}, we have
π_S(η) P_S(η, τ) = (Σ_{σ′} e^{−H↑(η, σ′)} / Z) Σ_σ (e^{−H↑(η, σ)} / Σ_{σ′} e^{−H↑(η, σ′)}) (e^{−H↓(σ, τ)} / Σ_{τ′} e^{−H↓(σ, τ′)})
= (1/Z) Σ_σ e^{−H↑(η, σ)} e^{−H↑(τ, σ)} / Σ_{τ′} e^{−H↑(τ′, σ)}
where we used H↓(σ, τ) = H↑(τ, σ) both in the numerator and in the denominator. The last expression is symmetric under the exchange of η and τ and, hence, equals π_S(τ) P_S(τ, η).
Having defined the Shaken dynamics, we can describe how they can be used as a heuristic algorithm for the maximum clique problem.
Consider a graph G = (V, E), with |V| = N, its associated matrix of missing edges J, and the corresponding matrix J̃ = J̃(λ) used to formulate the maximum clique problem as a QUBO (as in (5)).
Let J′ be any square matrix satisfying J′ + J′^T = J̃ and define
H↑(η, σ) = β Σ_{i,j=1}^N J′_ij η_i σ_j + q Σ_i [η_i(1 − σ_i) + σ_i(1 − η_i)]
H↓(η, σ) = β Σ_{i,j=1}^N (J′^T)_ij η_i σ_j + q Σ_i [η_i(1 − σ_i) + σ_i(1 − η_i)].
Then, the condition H↓(η, σ) = H↑(σ, η) is verified. Further, observe that
H↑(η, η) = β η^T J′ η = (β/2) η^T (J′ + J′^T) η = (β/2) η^T J̃ η = (1/2) β H(η)
where H(η) is the same function defined in (1).
Strictly speaking, the Shaken dynamics are not a PCA. However, the transition probabilities of each half-step are of the form (13) where, instead of the vector of fields h, the two vectors of fields h↑ = J′ · η and h↓ = J′^T · σ appear.
Also, in this case, if the parameter q is large, the stationary measure π_S(η) is close to the Gibbs measure e^{−(β/2) H(η)}/Z (the factor 1/2 multiplies β both in the numerator and in the normalizing constant and can simply be reabsorbed in the inverse temperature) and, hence, the Shaken dynamics provide a heuristic algorithm for the maximum clique problem. Also, this Markov chain lives on the whole set X and not only on the subsets corresponding to cliques.
In [30], the Shaken dynamics have been used, in the context of spin systems, to find the minima of a QUBO-like problem defined on a lattice, that is, a problem where the interaction J_ij is different from zero only for pairs i, j satisfying a certain relation. In that scenario, the Shaken dynamics appeared to be rather effective. However, in the case of the maximum clique problem, besides the requirement for J̃ to be symmetric, there is no restriction on which (and how many) entries J̃_ij can be different from zero as long as i ≠ j. Therefore, the effectiveness of this approach to the maximum clique problem has to be assessed.

3. Results

We tested our algorithms on a set of benchmark graphs commonly considered in the literature (the DIMACS benchmarks). The graphs considered have numbers of nodes N ranging from 125 to 4000 and are of several types, which can be retrieved from the graph id:
Erdős–Rényi graphs: 
graphs with a fixed number of vertices and edges selected independently at random with uniform probability. These are graphs with an id of type Cn.d (where n is the number of vertices and d/10 is the density (probability) of edges) and DSJCn_d (where n is the number of nodes and d/10 is the density of edges)
Steiner triple graphs: 
graphs, with id MANN_aXX, arising from a clique formulation of the Steiner Triple Problem
Brockington graphs: 
with id brockN_d where N is the number of vertices and d is a parameter
Sanchis graphs: 
with id genn_p0.x_y where n is the number of vertices, p0.x is the density of edges and y is the size of the planted clique
Hamming graphs: 
with id hammingx-y with parameters x and y
Keller graphs: 
with id kellern and parameters n = 4, 5, 6
P-hat graphs: 
with id p-hatn-x where n is the number of nodes and x is an identifier; graphs generated with p-hat have wider node degree spread and larger cliques than uniform graphs.
More details can be found in [33].
We formulated the problem as in (5) and fixed, for all graphs and all algorithms, λ = 0.25 .
As for the parameters β and q, since we do not have an a priori argument for an optimal choice, we tested several pairs of values. However, to determine the values of q for our tests, we took into account the size N (number of nodes) of the graph and chose values of q such that e^{−q} N = c_q √N for several values of c_q, as described below. The rationale for this choice is the fact that, for A and S, the term e^{−q} is proportional to the probability of “flipping” a component, neglecting the contribution depending on the “energy field” for that component. In this way we intended to change, at each iteration, a number of components of the order of √N.
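Solving e^{−q} N = c_q √N for q gives q = log(√N / c_q); a small helper (our own illustration, not part of the paper) makes the prescription explicit:

```python
import math

# The prescription exp(-q) * N = c_q * sqrt(N) gives q = log(sqrt(N) / c_q):
# roughly c_q * sqrt(N) components are proposed for flipping per iteration
# (the field-dependent contribution is neglected). Illustrative helper.
def q_from_cq(N, c_q):
    return math.log(math.sqrt(N) / c_q)

# e.g. a hypothetical graph with N = 1024 nodes and c_q = 1/2
q = q_from_cq(1024, 0.5)
```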
The values we used are the following:
Metropolis ( M ):
β in the range 1, 1.5, 2, …, 4
PCA ( A ):
β in the range 1, 1.5, 2, …, 4 and c_q in the set {1/8, 1/4, 1/2, 1, 2, 3, 4}
Shaken dynamics ( S ):
β in the range 1, 2, …, 7 and c_q in the set {1/64, 1/32, 1/16, 1/8, 1/4, 1/2, 1}
These ranges for the parameters have been determined after some experimentation, in which we saw that certain ranges of parameters were not suitable for a given algorithm. In particular, we needed to find values of q big enough (that is, c_q sufficiently small) for the dynamics to flip only a few components at each step. Note that not all pairs of parameters (β, q) that have been taken into account are effective for a given algorithm.
For the Shaken dynamics S, we also had to make a choice for the matrix J′. We proceeded as follows. For each i ∈ 1, …, N we set
J′_ij′ = J̃_ij′, with j′ = ((j − 1) mod N) + 1, for j = i + 1, i + 2, …, i + ⌈N/2⌉ − 1
J′_ij′ = J̃_ij′ / 2, with j′ = ((j − 1) mod N) + 1, for j = i + N/2, if N is even
J′_ij = 0 otherwise
Intuitively, for each row, the matrix J′ is obtained by retaining the roughly N/2 elements of J̃ “to the right” of the diagonal element (wrapping around cyclically). It is straightforward to verify that J′ + J′^T = J̃.
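A possible reconstruction of this splitting is sketched below (our own illustration; in particular, halving the diagonal entries, which the displayed formula does not cover, is our assumption, made so that J′ + J′^T = J̃ holds entry-wise):

```python
import numpy as np

# Hedged reconstruction of the splitting J' of a symmetric matrix Jt:
# each row keeps the entries cyclically "to the right" of the diagonal.
# Halving the diagonal (and the antipodal entry for even N) is our
# assumption, chosen so that J' + J'^T = Jt holds entry-wise.
def half_split(Jt):
    N = Jt.shape[0]
    Jp = np.zeros_like(Jt, dtype=float)
    for i in range(N):
        Jp[i, i] = Jt[i, i] / 2              # shared between J' and J'^T
        for k in range(1, (N + 1) // 2):     # the ceil(N/2) - 1 right neighbours
            Jp[i, (i + k) % N] = Jt[i, (i + k) % N]
        if N % 2 == 0:                       # antipodal entry counted twice
            Jp[i, (i + N // 2) % N] = Jt[i, (i + N // 2) % N] / 2
    return Jp

# sanity check on random symmetric matrices of both parities of N
rng = np.random.default_rng(0)
for n in (5, 6):
    M = rng.normal(size=(n, n))
    M = (M + M.T) / 2
    assert np.allclose(half_split(M) + half_split(M).T, M)
```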
This choice also provides a reason why the values of β used for the algorithm S are in a range that is, roughly, twice as wide as that used for M and A: the fields used to update the configuration at each “half-step” are proportional to J′ · η, whose components can be expected to be approximately one-half of the components of J̃ · η (which are the fields determining the transitions of A).
Simulations were run for a fixed number of iterations and the largest clique found in each run of any of the algorithms was recorded. For each set of parameters (β for M and (β, q) for A and S) we ran 10 simulations. The number of iterations was set in the following way:
Metropolis ( M ):
100,000 sweeps where a sweep corresponds to N steps of the Markov Chain (that is a total of 100,000 N “attempted” flips);
PCA ( A ):
200,000 steps of the Markov Chain;
Shaken dynamics ( S ):
100,000 complete steps of the Markov Chain (200,000 half-steps).
Our intent was to perform a fair comparison of the three algorithms. However, it is not obvious what the right criterion to establish fairness should be. For instance, one could run the chains for the same number of “nominal” steps. However, this approach would be rather penalizing for the Metropolis algorithm since it would take, on average, O(N log N) steps to give all vertices of the graph at least one opportunity to be selected (this is the “coupon collector’s problem”). Another option would be to count the number of “attempted flips”. Since in both A and S all components are potentially updated, following this approach would require setting the number of steps for both A and S to be the same as the number of sweeps for M. However, if run on a single core, the computing time for a step of A or S is significantly shorter than the time required for a sweep of M. Finally, one could simply consider computation time. However, we feel that this criterion would be too dependent on the implementation details and the features of the computing machine. Because of these considerations, we decided to use a “hybrid” approach, using as a primary criterion the total number of attempted flips but giving an extra quota of iterations to the parallel dynamics (note that, in each full step, the Shaken dynamics tries to update all components twice) to take into account, in a conservative way, their faster execution times.
The three algorithms were implemented in Julia (version 1.10, see [34]) and executed on an Nvidia DGX-1 system equipped with an Intel Xeon CPU E5-2698 v4 @ 2.20 GHz (80 cores). We considered only a CPU implementation of the algorithms. For linear algebra operations (matrix-vector products) we used the BLAS libraries ([35,36]) with the default implementation shipped with the LinearAlgebra package of Julia. Random numbers were generated using the default random number generator available in Julia (which in version 1.10 implements the Xoshiro256++ algorithm [37]).
A comparison concerning the computational times of the three algorithms is provided in Table 1.
This first table shows the speedups of the A and S algorithms with respect to the Metropolis algorithm. The speedup is computed from the execution time of 1000 iterations of each algorithm (sweeps for M and steps for A and S ). Computation times are averages over five runs. For each of the two parallel algorithms, the columns with headings 1, 2, 4, and 8 give the speedup when the number of threads used by the BLAS library is set to 1, 2, 4, and 8, respectively. Column N gives the number of nodes in the graph.
The values of the largest cliques found with the three algorithms are provided in Table 2.
This second table shows the largest clique found with each algorithm for the examined benchmark graphs. Bold numbers highlight those cases where the cardinality of the largest clique found by any of the algorithms considered in this paper coincides with the clique number of the graph or its best lower bound available in the literature (to the best of our knowledge). Underlined numbers signal the largest clique obtained with any of the algorithms considered in the paper when this number is smaller than the clique number of the graph (or its best lower bound). Column A gives the results for the Probabilistic Cellular Automaton, column S the results for the Shaken dynamics, and column M the results for the Metropolis single flip dynamics. Column ω ( G ) gives the value of the clique number of the graph (or its best lower bound). The value of ω ( G ) is marked with an asterisk when it is known to be exact (see [8]). Column N gives the number of nodes in the graph.
The Metropolis algorithm finds the best-known clique 30 out of 37 times, proving to be, in practice, a rather effective tool for the maximum clique problem on instances whose size is not too large. The PCA algorithm A found the best-known clique 27 out of 37 times, whereas the Shaken dynamics found it 23 out of 37 times. In those cases where none of the algorithms found the best-known clique, the Metropolis dynamics is the one that found the largest clique most often. It should be noted, though, that in the case of the graph MANN_a81, the PCA found a better solution than the Metropolis, and in two more cases it found an equivalent one.
In general, the performances of the two parallel algorithms, especially those of the PCA A , are not much worse in terms of the size of the largest clique found. Moreover, the slightly worse solutions provided by A and S are at least partly compensated by their computational effectiveness. Even using a single thread for the computation of the energy fields, algorithm A was between 5 and 40 times faster than the Metropolis. With eight threads, the speedup exceeded a factor of 100 on certain instances and was above 50 in the majority of cases. The situation is similar for the Shaken dynamics S , where one has to take into account that, at each step, two configurations are visited: one for each half-step. A GPU implementation of the two algorithms is expected to yield an even greater speedup, especially on the larger instances, and the increased speed could be advantageous in applications.
Comparing the two parallel algorithms, A appears to be better than S both in terms of the quality of the solution provided and in terms of speed (even though the Shaken dynamics explores two configurations at each step, the time needed by the PCA to obtain a new configuration appears to be less than half the time required by the Shaken dynamics). This can likely be explained by the fact that the energy fields of the PCA can be computed as the matrix-vector product J̃ · η , where the matrix J̃ is symmetric, whereas the matrix J used to compute J · η is not.
From the computational point of view, the most expensive task of the PCA and the Shaken dynamics is the computation of the vector of fields, which is of the type h ( η ) = J · η . This matrix-vector product can be performed using highly optimized linear algebra libraries, which can take great advantage of multicore processors and GPUs.
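As a minimal illustration of this structure, the following NumPy sketch performs one fully parallel update in which all components are refreshed simultaneously from the vector of fields computed with a single matrix-vector product. The coupling matrix, the logistic (heat-bath-style) flip probability, and the parameter values are illustrative assumptions and do not reproduce the exact transition probabilities of the PCA or of the Shaken dynamics:

```python
import numpy as np

def parallel_update(J, eta, beta, rng):
    """One fully parallel update: compute all local fields with a single
    matrix-vector product (one BLAS gemv call), then resample every
    component independently with a heat-bath (logistic) probability.
    Illustrative sketch only, not the paper's exact dynamics."""
    fields = J @ eta                               # h(eta) = J . eta, all N at once
    p_one = 1.0 / (1.0 + np.exp(-beta * fields))   # P(eta_i = 1 | fields)
    return (rng.random(eta.shape) < p_one).astype(eta.dtype)

rng = np.random.default_rng(0)
N = 200
J = rng.standard_normal((N, N))
J = (J + J.T) / 2                                  # symmetric couplings (placeholder)
eta = rng.integers(0, 2, size=N).astype(float)     # a random 0/1 configuration
eta = parallel_update(J, eta, beta=1.0, rng=rng)   # all N components updated together
```

The point of the sketch is that the whole transition reduces to one gemv call plus elementwise operations, which is exactly the workload that optimized BLAS implementations accelerate.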
As a consequence, the algorithms will benefit "for free" from improvements in the performance of linear algebra libraries. Since these libraries are ubiquitous in computing applications, improvements in this direction will be driven by the efforts of communities coming from the most diverse domains where high-performance computing is used. This is actually one of the main strengths of the PCAMC approach: its computational efficiency is not due to some implementation trick but, rather, to its inherently parallel nature.
Note that this type of advantage can be exploited even further. Indeed, it is possible to let k PCA chains evolve in parallel. Denoting by E the matrix whose columns are the configurations of the k chains, the collection of the k vectors of fields { h i ( η ( m ) ) } i = 1 , … , N ; m = 1 , … , k can be computed as the matrix-matrix product J̃ · E . Performing this operation is, in general, significantly faster than computing k separate matrix-vector products. Note that a different pair of parameters β and q could be used for each chain without significantly affecting the computation time.
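The batched computation above can be sketched in a few lines of NumPy (the couplings and configurations here are randomly generated placeholders): stacking the k configurations as columns of E turns k matrix-vector products into a single matrix-matrix product, and per-chain parameters reduce to a column-wise broadcast:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 500, 16
J = rng.standard_normal((N, N))
J = (J + J.T) / 2                                     # symmetric couplings (placeholder)
E = rng.integers(0, 2, size=(N, k)).astype(float)     # k configurations as columns

# One GEMM call yields the field vectors of all k chains at once...
H = J @ E
# ...and agrees with the k separate GEMV calls it replaces.
H_loop = np.column_stack([J @ E[:, m] for m in range(k)])
assert np.allclose(H, H_loop)

# A different inverse temperature per chain is a single broadcast multiply:
betas = rng.uniform(0.5, 2.0, size=k)
scaled_fields = H * betas                             # column m scaled by beta_m
```

Optimized BLAS implementations typically reach much higher throughput on one large matrix-matrix product than on many matrix-vector products of the same total size, which is why the batched form pays off.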
The possibility of running multiple chains at the same time with different inverse temperatures β makes both dynamics A and S suitable for algorithms like parallel tempering [38], which has been applied to the clique problem using standard single-spin-flip Metropolis dynamics (see, e.g., [20]).
Note that Markov Chain Monte Carlo algorithms defined in terms of a "pair Hamiltonian" were already considered in [18,39,40]. The algorithm considered in those works (referred to as "Cavity") also had the feature of updating more than one component of the configuration at each step. However, in those works, the cardinality of the configuration stayed fixed throughout the whole evolution of the chain and, therefore, its components could not be updated independently, limiting, in this way, the possibility of exploiting parallel computing architectures to their fullest.

4. Concluding Remarks

We believe that the results presented above motivate further investigation into the behavior of Probabilistic Cellular Automata. Indeed, they appear to provide good results while taking great advantage of the features of currently available computing architectures. This investigation should start, in our opinion, from a better understanding of the relation between the stationary measure of the PCA and the Gibbs measure. In particular, it would be interesting to determine, for any β ¯ , bounds on the parameters ( β , q ) such that the total variation distance between π A (or π S ) and the Gibbs measure at inverse temperature β ¯ is within a prescribed value ε . With such bounds available, it would be possible, for any β ¯ , to choose the parameters ( β , q ) from a well-defined set, greatly simplifying the tuning of the algorithms, which would behave as if they were "one-parameter" algorithms. This would, in turn, improve their performance from a practical point of view.
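For reference, the total variation distance invoked here is the standard one on the finite state space X (the notation μ for the Gibbs measure at inverse temperature β ¯ is ours):

```latex
d_{\mathrm{TV}}\bigl(\pi_{A},\,\mu_{\bar\beta}\bigr)
  \;=\; \frac{1}{2} \sum_{\eta \in X}
        \bigl|\pi_{A}(\eta) - \mu_{\bar\beta}(\eta)\bigr|
  \;\le\; \varepsilon .
```

The sought bounds would identify, for each β ¯ , the region of the ( β , q ) plane in which this inequality holds.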
In the previous section, we observed that the values of the largest clique obtained with the Metropolis algorithm M are slightly better than those obtained with A and S . It should be mentioned that the dependence of the latter algorithms on two parameters makes their tuning more challenging than tuning the single parameter of the Metropolis. For this reason, it would be interesting to study, from a theoretical point of view, the behavior of both dynamics in the ( β , q ) plane. In particular, it would be interesting to determine a "phase transition" curve identifying a "high temperature" region, where the stationary measure is "close" to the uniform measure on X , and a "low temperature" region, where the stationary measure is "concentrated" on the minima of the Hamiltonian (1). It is to be expected that this phase transition curve will depend on the law with which the graph has been generated. An analogous study would be interesting for the Metropolis case, where one would like to determine a critical value of β separating the high- and low-temperature regions.
As already mentioned, the Metropolis algorithm discussed above is equivalent, at large inverse temperature, to the algorithm described by Jerrum in [17]. In that paper, it was shown that the mixing time of the chain, in the case of Erdős–Rényi graphs, is not polynomial in the number of vertices. Further, the typical limiting size of the clique found by the Metropolis process was determined. It would be interesting to perform a similar analysis for the two parallel algorithms introduced here. In general, it is to be expected that the mixing time grows more than polynomially with the number of nodes of the graph also for the PCAMC algorithms. This (likely) non-polynomial mixing time represents the main limitation of these algorithms.
As a final comment, we would like to mention that both the parallel algorithms discussed in this paper may have applications outside combinatorial optimization. As far as the PCA is concerned, it has been suggested in [41] that this Markov Chain could be used to sample Exponential Random Graphs in the generation of Synthetic Power Grids. As for the Shaken dynamics, it has been hinted in [30,42] that this Markov Process can be used as a microscopic model for tidal dissipation and as a way to connect the Ising model on the square lattice to the Ising model on the hexagonal lattice [43,44].

Funding

This work has been partially supported by PRIN 2022 PNRR: “RETINA: REmote sensing daTa INversion with multivariate functional modeling for essential climAte variables characterization” (Project Number: P20229SH29, CUP: J53D23015950001) funded by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, under the Italian Ministry of University and Research (MUR).

Data Availability Statement

We downloaded the instances of the graphs analyzed in this paper from https://iridia.ulb.ac.be/~fmascia/maximum_clique/ (accessed on 15 July 2024). Information on the generators used to create the instances and their source code can be found at DIMACS ftp reachable at http://archive.dimacs.rutgers.edu/pub/challenge/graph/ (accessed on 15 July 2024).

Acknowledgments

The author thanks the Department of Mathematics and Physics of the University of Rome “Tre” which provided computing resources.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MCMC: Monte Carlo Markov Chain
PCA: Probabilistic cellular automaton (or Probabilistic cellular automata)
PCAMC: PCA Markov Chain
QUBO: Quadratic Unconstrained Binary Optimization

References

  1. Luce, R.D.; Perry, A.D. A method of matrix analysis of group structure. Psychometrika 1949, 14, 95–116. [Google Scholar] [CrossRef] [PubMed]
  2. Karp, R.M. Reducibility among Combinatorial Problems. In Complexity of Computer Computations: Proceedings of a symposium on the Complexity of Computer Computations, held 20–22 March 1972, at the IBM Thomas J. Watson Research Center, Yorktown Heights, New York, and sponsored by the Office of Naval Research, Mathematics Program, IBM World Trade Corporation, and the IBM Research Mathematical Sciences Department; Miller, R.E., Thatcher, J.W., Bohlinger, J.D., Eds.; Springer: Boston, MA, USA, 1972; pp. 85–103. [Google Scholar]
  3. Carraghan, R.; Pardalos, P.M. An exact algorithm for the maximum clique problem. Oper. Res. Lett. 1990, 9, 375–382. [Google Scholar] [CrossRef]
  4. Babel, L.; Tinhofer, G. A branch and bound algorithm for the maximum clique problem. Z. Für Oper. Res. 1990, 34, 207–217. [Google Scholar] [CrossRef]
  5. Östergård, P.R. A fast algorithm for the maximum clique problem. Discret. Appl. Math. 2002, 120, 197–207. [Google Scholar] [CrossRef]
  6. San Segundo, P.; Rodríguez-Losada, D.; Jiménez, A. An exact bit-parallel algorithm for the maximum clique problem. Comput. Oper. Res. 2011, 38, 571–581. [Google Scholar] [CrossRef]
  7. Wu, Q.; Hao, J.K. A review on algorithms for maximum clique problems. Eur. J. Oper. Res. 2015, 242, 693–709. [Google Scholar] [CrossRef]
  8. Marino, R.; Buffoni, L.; Zavalnij, B. A Short Review on Novel Approaches for Maximum Clique Problem: From Classical algorithms to Graph Neural Networks and Quantum algorithms. arXiv 2024, arXiv:2403.09742. [Google Scholar]
  9. Häggström, O. Finite Markov Chains and Algorithmic Applications; Cambridge University Press: Cambridge, UK, 2002; Volume 52. [Google Scholar]
  10. Gendreau, M.; Soriano, P.; Salvail, L. Solving the maximum clique problem using a tabu search approach. Ann. Oper. Res. 1993, 41, 385–403. [Google Scholar] [CrossRef]
  11. Battiti, R.; Protasi, M. Reactive local search for the maximum clique problem 1. Algorithmica 2001, 29, 610–637. [Google Scholar] [CrossRef]
  12. Wu, Q.; Hao, J.K.; Glover, F. Multi-neighborhood tabu search for the maximum weight clique problem. Ann. Oper. Res. 2012, 196, 611–634. [Google Scholar] [CrossRef]
  13. Jin, Y.; Hao, J.K. General swap-based multiple neighborhood tabu search for the maximum independent set problem. Eng. Appl. Artif. Intell. 2015, 37, 20–33. [Google Scholar] [CrossRef]
  14. Schuetz, M.J.; Brubaker, J.K.; Katzgraber, H.G. Combinatorial optimization with physics-inspired graph neural networks. Nat. Mach. Intell. 2022, 4, 367–377. [Google Scholar] [CrossRef]
  15. Lauri, J.; Dutta, S.; Grassia, M.; Ajwani, D. Learning fine-grained search space pruning and heuristics for combinatorial optimization. J. Heuristics 2023, 29, 313–347. [Google Scholar] [CrossRef]
  16. Cappart, Q.; Chételat, D.; Khalil, E.B.; Lodi, A.; Morris, C.; Veličković, P. Combinatorial optimization and reasoning with graph neural networks. J. Mach. Learn. Res. 2023, 24, 1–61. [Google Scholar]
  17. Jerrum, M. Large cliques elude the Metropolis process. Random Struct. Algorithms 1992, 3, 347–359. [Google Scholar] [CrossRef]
  18. Iovanella, A.; Scoppola, B.; Scoppola, E. Some spin glass ideas applied to the clique problem. J. Stat. Phys. 2007, 126, 895–915. [Google Scholar] [CrossRef]
  19. Montanari, A. Finding one community in a sparse graph. J. Stat. Phys. 2015, 161, 273–299. [Google Scholar] [CrossRef]
  20. Angelini, M.C. Parallel tempering for the planted clique problem. J. Stat. Mech. Theory Exp. 2018, 2018, 073404. [Google Scholar]
  21. Punnen, A.P. (Ed.) The Quadratic Unconstrained Binary Optimization Problem: Theory, Algorithms, and Applications; Springer: Cham, Switzerland, 2022. [Google Scholar]
  22. Glover, F.; Kochenberger, G.; Du, Y. Quantum Bridge Analytics I: A tutorial on formulating and using QUBO models. Ann. Oper. Res. 2019, 314, 141–183. [Google Scholar] [CrossRef]
  23. Baioletti, M.; Santini, F. Abstract Argumentation Goes Quantum: An Encoding to QUBO Problems. In Proceedings of the PRICAI 2022: Trends in Artificial Intelligence, Shanghai, China, 10–13 November 2022; Khanna, S., Cao, J., Bai, Q., Xu, G., Eds.; Springer: Cham, Switzerland, 2022; pp. 46–60. [Google Scholar]
  24. Glover, F.; Kochenberger, G.; Ma, M.; Du, Y. Quantum Bridge Analytics II: QUBO-Plus, network optimization and combinatorial chaining for asset exchange. Ann. Oper. Res. 2022, 314, 185–212. [Google Scholar] [CrossRef]
  25. Tasseff, B.; Albash, T.; Morrell, Z.; Vuffray, M.; Lokhov, A.Y.; Misra, S.; Coffrin, C. On the emerging potential of quantum annealing hardware for combinatorial optimization. J. Heuristics 2024, 1–34. [Google Scholar] [CrossRef]
  26. Fukushima-Kimura, B.H.; Handa, S.; Kamakura, K.; Kamijima, Y.; Kawamura, K.; Sakai, A. Mixing time and simulated annealing for the stochastic cellular automata. J. Stat. Phys. 2023, 190, 79. [Google Scholar] [CrossRef]
  27. Scoppola, B.; Troiani, A. Gaussian Mean Field Lattice Gas. J. Stat. Phys. 2018, 170, 1161–1176. [Google Scholar] [CrossRef]
  28. Isopi, M.; Scoppola, B.; Troiani, A. On some features of quadratic unconstrained binary optimization with random coefficients. Boll. Dell’Unione Mat. Ital. 2024, 1–21. [Google Scholar] [CrossRef]
  29. Apollonio, V.; D’autilia, R.; Scoppola, B.; Scoppola, E.; Troiani, A. Criticality of Measures on 2-d Ising Configurations: From Square to Hexagonal Graphs. J. Stat. Phys. 2019, 177, 1009–1021. [Google Scholar] [CrossRef]
  30. Apollonio, V.; D’Autilia, R.; Scoppola, B.; Scoppola, E.; Troiani, A. Shaken dynamics: An easy way to parallel Markov Chain Monte Carlo. J. Stat. Phys. 2022, 189, 39. [Google Scholar] [CrossRef]
  31. D’Autilia, R.; Andrianaivo, L.N.; Troiani, A. Parallel simulation of two-dimensional Ising models using probabilistic cellular automata. J. Stat. Phys. 2021, 184, 1–22. [Google Scholar] [CrossRef]
  32. Scoppola, B.; Troiani, A.; Veglianti, M. Shaken dynamics on the 3d cubic lattice. Electron. J. Probab. 2022, 27, 1–26. [Google Scholar] [CrossRef]
  33. Johnson, D.S.; Trick, M.A. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, 11–13 October 1993; American Mathematical Society: Providence, RI, USA, 1996; Volume 26. [Google Scholar]
  34. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A fresh approach to numerical computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef]
  35. Lawson, C.L.; Hanson, R.J.; Kincaid, D.R.; Krogh, F.T. Basic linear algebra subprograms for Fortran usage. ACM Trans. Math. Softw. 1979, 5, 308–323. [Google Scholar] [CrossRef]
  36. Dongarra, J.J.; Croz, J.D.; Hammarling, S.; Hanson, R.J. An extended set of FORTRAN basic linear algebra subprograms. ACM Trans. Math. Softw. 1988, 14, 1–17. [Google Scholar] [CrossRef]
  37. Blackman, D.; Vigna, S. Scrambled linear pseudorandom number generators. ACM Trans. Math. Softw. 2021, 47, 1–32. [Google Scholar] [CrossRef]
  38. Hukushima, K.; Nemoto, K. Exchange Monte Carlo method and application to spin glass simulations. J. Phys. Soc. Jpn. 1996, 65, 1604–1608. [Google Scholar] [CrossRef]
  39. Viale, M. Il Problema Della Massima Clique: Teoria & Pratica. Ph.D. Thesis, Università Roma Tre, Roma, Italy, 2009. [Google Scholar]
  40. Gaudilliere, A.; Scoppola, B.; Scoppola, E.; Viale, M. Phase transitions for the cavity approach to the clique problem on random graphs. J. Stat. Phys. 2011, 145, 1127–1155. [Google Scholar] [CrossRef]
  41. Giacomarra, F.; Bet, G.; Zocca, A. Generating Synthetic Power Grids Using Exponential Random Graph Models. PRX Energy 2024, 3, 023005. [Google Scholar] [CrossRef]
  42. Pinzari, G.; Scoppola, B.; Veglianti, M. Spin orbit resonance cascade via core shell model: Application to Mercury and Ganymede. Celest. Mech. Dyn. Astron. 2024, 1–20. [Google Scholar] [CrossRef]
  43. Apollonio, V.; Jacquier, V.; Nardi, F.R.; Troiani, A. Metastability for the Ising model on the hexagonal lattice. Electron. J. Probab. 2022, 27, 1–48. [Google Scholar] [CrossRef]
  44. Baldassarri, S.; Jacquier, V. Metastability for Kawasaki dynamics on the hexagonal lattice. J. Stat. Phys. 2023, 190, 46. [Google Scholar] [CrossRef]
Table 1. Speedups of PCA A and Shaken dynamics S with respect to Metropolis M .
| Graph ID | N | A (1) | A (2) | A (4) | A (8) | S (1) | S (2) | S (4) | S (8) |
|---|---|---|---|---|---|---|---|---|---|
| C1000.9 | 1000 | 10.2 | 18.3 | 127.0 | 177.2 | 3.1 | 5.6 | 46.7 | 50.5 |
| C125.9 | 125 | 8.2 | 8.0 | 8.0 | 8.0 | 3.3 | 3.7 | 4.0 | 3.4 |
| C2000.5 | 2000 | 7.7 | 14.4 | 25.8 | 48.6 | 2.2 | 4.1 | 7.6 | 12.2 |
| C2000.9 | 2000 | 5.7 | 7.0 | 12.0 | 19.6 | 1.6 | 2.0 | 3.6 | 6.1 |
| C250.9 | 250 | 22.3 | 24.5 | 16.5 | 18.3 | 9.0 | 8.6 | 6.9 | 7.9 |
| C4000.5 | 4000 | 5.5 | 6.8 | 12.6 | 22.3 | 1.5 | 1.9 | 3.6 | 6.0 |
| C500.9 | 500 | 37.9 | 60.3 | 84.8 | 107.3 | 14.3 | 21.7 | 31.0 | 42.3 |
| DSJC1000_5 | 1000 | 6.0 | 10.7 | 85.0 | 128.9 | 1.8 | 3.3 | 6.3 | 44.8 |
| DSJC500_5 | 500 | 30.9 | 47.1 | 66.4 | 82.9 | 11.2 | 16.7 | 24.3 | 32.6 |
| MANN_a27 | 378 | 28.7 | 41.1 | 55.6 | 66.4 | 10.6 | 15.6 | 21.5 | 26.4 |
| MANN_a45 | 1035 | 6.1 | 10.8 | 87.4 | 126.6 | 1.8 | 3.3 | 6.2 | 23.7 |
| MANN_a81 | 3321 | 6.3 | 12.0 | 22.3 | 40.3 | 2.5 | 3.4 | 6.6 | 12.4 |
| brock200_2 | 200 | 29.0 | 27.0 | 33.3 | 36.1 | 11.7 | 13.8 | 13.5 | 14.5 |
| brock200_4 | 200 | 29.9 | 27.4 | 33.3 | 36.3 | 12.4 | 10.5 | 13.0 | 13.9 |
| brock400_2 | 400 | 33.7 | 31.8 | 43.1 | 56.2 | 12.5 | 17.1 | 16.6 | 21.0 |
| brock400_4 | 400 | 33.0 | 43.2 | 42.8 | 54.4 | 12.4 | 17.0 | 15.8 | 20.9 |
| brock800_2 | 800 | 38.1 | 59.7 | 57.0 | 79.8 | 12.1 | 20.0 | 20.3 | 29.4 |
| brock800_4 | 800 | 37.5 | 59.4 | 54.8 | 79.6 | 1.8 | 19.6 | 20.3 | 29.4 |
| gen200_p0.9_44 | 200 | 27.6 | 25.8 | 31.9 | 34.4 | 11.5 | 13.8 | 13.0 | 14.5 |
| gen200_p0.9_55 | 200 | 29.5 | 35.6 | 32.0 | 34.2 | 11.8 | 14.1 | 13.3 | 14.4 |
| gen400_p0.9_55 | 400 | 33.6 | 31.5 | 38.0 | 48.8 | 13.5 | 12.4 | 16.1 | 22.4 |
| gen400_p0.9_65 | 400 | 35.7 | 30.2 | 49.0 | 58.3 | 12.9 | 14.6 | 14.9 | 21.7 |
| gen400_p0.9_75 | 400 | 34.0 | 46.0 | 45.6 | 58.1 | 13.3 | 16.0 | 17.3 | 22.4 |
| hamming10-4 | 1024 | 5.8 | 66.1 | 66.6 | 94.5 | 1.8 | 3.5 | 21.4 | 32.7 |
| hamming8-4 | 256 | 25.0 | 28.8 | 32.1 | 35.4 | 7.6 | 10.8 | 12.5 | 15.0 |
| keller4 | 171 | 22.6 | 23.3 | 23.2 | 23.2 | 9.0 | 10.6 | 10.3 | 11.3 |
| keller5 | 776 | 39.1 | 59.4 | 58.9 | 84.3 | 13.1 | 15.8 | 19.5 | 28.5 |
| keller6 | 3361 | 5.5 | 6.8 | 12.6 | 23.0 | 1.4 | 2.6 | 3.5 | 6.6 |
| p_hat1500-1 | 1500 | 5.7 | 10.3 | 102.4 | 94.4 | 1.7 | 3.1 | 5.6 | 6.2 |
| p_hat1500-2 | 1500 | 5.8 | 10.7 | 98.0 | 108.4 | 2.7 | 18.5 | 35.4 | 41.9 |
| p_hat1500-3 | 1500 | 15.3 | 26.7 | 38.1 | 49.7 | 4.6 | 7.5 | 11.4 | 14.5 |
| p_hat300-1 | 300 | 13.4 | 12.7 | 17.5 | 8.3 | 5.4 | 6.2 | 7.8 | 8.7 |
| p_hat300-2 | 300 | 14.2 | 20.6 | 25.8 | 17.6 | 4.6 | 7.6 | 7.2 | 8.8 |
| p_hat300-3 | 300 | 12.1 | 12.9 | 16.3 | 18.2 | 5.0 | 5.2 | 6.7 | 7.9 |
| p_hat700-1 | 700 | 15.6 | 22.8 | 28.4 | 31.1 | 5.1 | 7.2 | 9.4 | 11.1 |
| p_hat700-2 | 700 | 15.1 | 23.3 | 30.6 | 28.0 | 5.2 | 6.0 | 9.7 | 11.4 |
| p_hat700-3 | 700 | 16.9 | 22.6 | 28.8 | 32.9 | 5.7 | 7.9 | 8.3 | 11.8 |
Table 2. Largest cliques found with the A , S and M algorithms.
| Graph ID | N | A | S | M | ω(G) |
|---|---|---|---|---|---|
| C1000.9 | 1000 | 67 | 65 | 68 | 68 |
| C125.9 | 125 | 34 | 34 | 34 | 34 |
| C2000.5 | 2000 | 16 | 15 | 16 | 16 * |
| C2000.9 | 2000 | 73 | 72 | 76 | 80 |
| C250.9 | 250 | 44 | 44 | 44 | 44 |
| C4000.5 | 4000 | 17 | 16 | 17 | 18 * |
| C500.9 | 500 | 57 | 57 | 57 | 57 |
| DSJC1000_5 | 1000 | 15 | 14 | 15 | 15 * |
| DSJC500_5 | 500 | 13 | 13 | 13 | 13 * |
| MANN_a27 | 378 | 124 | 123 | 124 | 126 * |
| MANN_a45 | 1035 | 335 | 334 | 336 | 345 * |
| MANN_a81 | 3321 | 1084 | 1080 | 1083 | 1100 * |
| brock200_2 | 200 | 12 | 12 | 12 | 12 * |
| brock200_4 | 200 | 17 | 17 | 17 | 17 * |
| brock400_2 | 400 | 29 | 25 | 29 | 29 * |
| brock400_4 | 400 | 33 | 32 | 33 | 33 * |
| brock800_2 | 800 | 20 | 20 | 21 | 24 * |
| brock800_4 | 800 | 20 | 20 | 21 | 26 * |
| gen200_p0.9_44 | 200 | 44 | 44 | 44 | 44 |
| gen200_p0.9_55 | 200 | 55 | 55 | 55 | 55 |
| gen400_p0.9_55 | 400 | 55 | 55 | 55 | 55 |
| gen400_p0.9_65 | 400 | 65 | 65 | 65 | 65 |
| gen400_p0.9_75 | 400 | 75 | 75 | 75 | 75 |
| hamming10-4 | 1024 | 40 | 40 | 40 | 40 |
| hamming8-4 | 256 | 16 | 16 | 16 | 16 |
| keller4 | 171 | 11 | 11 | 11 | 11 * |
| keller5 | 776 | 27 | 27 | 27 | 27 * |
| keller6 | 3361 | 55 | 55 | 59 | 59 * |
| p_hat1500-1 | 1500 | 12 | 11 | 12 | 12 * |
| p_hat1500-2 | 1500 | 65 | 65 | 65 | 65 * |
| p_hat1500-3 | 1500 | 94 | 94 | 94 | 94 * |
| p_hat300-1 | 300 | 8 | 8 | 8 | 8 * |
| p_hat300-2 | 300 | 25 | 25 | 25 | 25 * |
| p_hat300-3 | 300 | 36 | 36 | 36 | 36 * |
| p_hat700-1 | 700 | 11 | 11 | 11 | 11 * |
| p_hat700-2 | 700 | 44 | 44 | 44 | 44 * |
| p_hat700-3 | 700 | 62 | 62 | 62 | 62 * |
* The asterisk marks exact values of ω(G); bold denotes values equal to ω(G); underlined denotes largest cliques found with any of the three algorithms if this value is smaller than ω(G).
Share and Cite

MDPI and ACS Style

Troiani, A. Probabilistic Cellular Automata Monte Carlo for the Maximum Clique Problem. Mathematics 2024, 12, 2850. https://doi.org/10.3390/math12182850
