Article

On the Descriptive Complexity of Color Coding †

Institute for Theoretical Computer Science, Universität zu Lübeck, 23562 Lübeck, Germany
* Authors to whom correspondence should be addressed.
† This paper is an extended version of our paper published in STACS 2019.
Algorithms 2021, 14(3), 96; https://doi.org/10.3390/a14030096
Submission received: 29 January 2021 / Revised: 13 March 2021 / Accepted: 16 March 2021 / Published: 19 March 2021
(This article belongs to the Special Issue Parameterized Complexity and Algorithms for Nonclassical Logics)

Abstract

Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.

1. Introduction

Descriptive complexity provides a powerful link between logic and complexity theory: We use a logical formula to describe a problem and can then infer the computational complexity of the problem just from the syntactic structure of the formula. As a striking example, Fagin’s Theorem [1] tells us that 3-colorability lies in NP just because its describing formula (“there exist three colors such that all adjacent vertex pairs have different colors”) is an existential second-order formula. In the context of fixed-parameter tractability theory, methods from descriptive complexity are also used a lot—but commonly to show that problems are difficult. For instance, the A- and W-hierarchies are defined in logical terms [2], but their hard problems are presumably “beyond” the class FPT of fixed-parameter tractable problems.
The methods of descriptive complexity are only rarely used to show that problems are in FPT. More precisely, the syntactic structures of the natural logical descriptions of standard parameterized problems found in textbooks are not known to imply that the problems lie in FPT—even though this is known to be the case for many of them. To appreciate the underlying difficulties, consider the following three parameterized problems (the prefix “p” stands for “parameterized” and its index names the parameter): p_k-MATCHING, p_k-TRIANGLE-PACKING, and p_k-CLIQUE. In each case, we are given an undirected graph as input and a number k and we are then asked whether the graph contains k vertex-disjoint edges (a size-k matching), k vertex-disjoint triangles, or a clique of size k, respectively. The problems are known to have widely different complexities (maximum matchings can actually be found in polynomial time, triangle packing lies at least in FPT, while finding cliques is W[1]-complete) but very similar logical descriptions:
$$\alpha_k = \exists x_1 \cdots \exists x_{2k} \Big( \bigwedge_{i \neq j} x_i \neq x_j \wedge \bigwedge_{i=1}^{k} E x_{2i-1} x_{2i} \Big), \tag{1}$$
$$\beta_k = \exists x_1 \cdots \exists x_{3k} \Big( \bigwedge_{i \neq j} x_i \neq x_j \wedge \bigwedge_{i=1}^{k} \big( E x_{3i-2} x_{3i-1} \wedge E x_{3i-2} x_{3i} \wedge E x_{3i-1} x_{3i} \big) \Big), \tag{2}$$
$$\gamma_k = \exists x_1 \cdots \exists x_{k} \Big( \bigwedge_{i \neq j} x_i \neq x_j \wedge \bigwedge_{i \neq j} E x_i x_j \Big). \tag{3}$$
The family (α_k)_{k∈ℕ} of formulas is clearly a natural “slicewise” description of the matching problem: A graph G has a size-k matching if and only if G ⊨ α_k. The families (β_k)_{k∈ℕ} and (γ_k)_{k∈ℕ} are natural parameterized descriptions of the triangle packing and the clique problems, respectively. Well-known results on the descriptive complexity of parameterized problems allow us to infer [2] from the above descriptions that all three problems lie in W[1], but offer no hint why the first two problems actually lie in the class FPT—syntactically the clique problem arguably “looks like the easiest one” when in fact it is semantically the most difficult one. The results of this paper will remedy this mismatch between syntactic and semantic complexity: We will show that the syntactic structures of the formulas α_k and β_k imply membership of p_k-MATCHING and p_k-TRIANGLE-PACKING in FPT. In particular, we will identify the non-obvious syntactic properties that make α_k and β_k “easier” than γ_k.
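To make the slicewise reading concrete, the following minimal Python sketch (our own illustration, not part of the paper) decides G ⊨ α_k by brute force. The naive evaluation ranges over all assignments to the 2k quantified variables and therefore takes time n^{O(k)}; the point of the syntactic results discussed below is to certify, from the shape of the formulas alone, that much better bounds are possible.

```python
from itertools import permutations

def models_alpha_k(vertices, edges, k):
    """Brute-force check of G |= alpha_k: are there 2k pairwise distinct
    vertices x_1, ..., x_2k such that E x_{2i-1} x_{2i} holds for every i,
    i.e., does the graph contain a matching of size k?"""
    E = {frozenset(e) for e in edges}            # undirected edge relation
    for xs in permutations(vertices, 2 * k):     # all injective assignments
        if all(frozenset((xs[2 * i], xs[2 * i + 1])) in E for i in range(k)):
            return True
    return False

# A path on four vertices has a matching of size 2, but not of size 3.
V, E = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]
print(models_alpha_k(V, E, 2), models_alpha_k(V, E, 3))   # True False
```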
The road to deriving the computational complexity of parameterized problems just from the syntactic properties of slicewise first-order descriptions involves three major steps: First, a characterization of when the color coding technique is applicable in terms of syntactic properties of second-order quantifiers. Second, an exploration of how these results on second-order formulas apply to first-order formulas, leading to the notion of strong and weak quantifiers and to an elimination theorem for weak quantifiers. Third, a new addition to the body of known results on how classes like FPT can be characterized in a slicewise fashion by logical formulas.
  • Our Contributions I: A Syntactic Characterization of Color Coding.
Consider once more the triangle packing problem, where we are asked whether an undirected graph G contains k vertex-disjoint triangles. While this problem is known to be hard, it becomes almost trivial if we change it slightly: Suppose someone colored the vertices of the graph and our job is just to determine whether there are a red triangle, a green triangle, a blue triangle, a yellow triangle, and so on for k different colors. Clearly, this is now a very easy problem and if we do, indeed, find k triangles having k different colors, we have found k vertex-disjoint triangles.
The ingenious idea behind the color coding technique of Alon, Yuster, and Zwick [3] is to reduce the hard triangle packing problem to the much simpler colored version by simply randomly coloring the graph. Of course, even if there are k disjoint triangles, we will most likely not color them monochromatically and differently, but the probability of “getting lucky” is nonzero (at least k^{−3k}) and depends only on the parameter k. Even better, Alon et al. point out that one can derandomize the coloring easily by using universal hash functions to color each vertex with its hash value.
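The following Python sketch (ours, purely for illustration) spells out the randomized step for triangle packing: color the vertices uniformly at random with k colors and then answer only the easy colored question of whether every color class contains a triangle.

```python
import random
from itertools import combinations

def triangle_packing_color_coding(vertices, edges, k, rounds=1000):
    """Randomized color coding for k vertex-disjoint triangles: color the
    vertices uniformly at random with k colors and answer only the easy
    colored question 'does every color class contain a triangle?'.  A 'yes'
    exhibits k triangles with k different colors, hence k disjoint triangles.
    One-sided error: a 'no' answer may be wrong, but repeating the experiment
    often enough makes the error probability small."""
    E = {frozenset(e) for e in edges}

    def has_triangle(vertex_class):
        return any(frozenset((a, b)) in E and frozenset((b, c)) in E
                   and frozenset((a, c)) in E
                   for a, b, c in combinations(vertex_class, 3))

    for _ in range(rounds):
        color = {v: random.randrange(k) for v in vertices}
        classes = [[v for v in vertices if color[v] == i] for i in range(k)]
        if all(has_triangle(cls) for cls in classes):
            return True
    return False
```

Derandomization replaces the call to random.randrange by the members of a small universal family of hash functions; this is exactly the step that the formula-level construction discussed next makes explicit.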
Applying this idea in the setting of descriptive complexity was recently pioneered by Chen et al. [4]. Transferred to the triangle packing problem, their argument would roughly be: “Testing for each color i, whether there is a monochromatic triangle of color i, can be done in first-order logic using something like ⋀_{i=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z). Next, instead of testing whether x has color i using the formula C_i x, we can test whether x gets hashed to i by a hash function. Finally, since computing appropriate universal hash functions only involves addition and multiplication, we can express the derandomized algorithm using an arithmetic first-order formula of low quantifier rank.” Phrased differently, Chen et al. would argue that ⋀_{i=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z) together with the requirement that the C_i are pairwise disjoint is (ignoring some details) equivalent to δ_k = ∃p∃q ⋀_{i=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ HASH_k(x,p,q) = i ∧ HASH_k(y,p,q) = i ∧ HASH_k(z,p,q) = i), where HASH_k(x,p,q) = i is a formula that is true when “x is hashed to i by a member of a universal family of hash functions indexed by q and p.”
The family (δ_k)_{k∈ℕ} may seem rather technical and, indeed, its importance becomes visible only in conjunction with another result by Chen et al. [4]: They show that a parameterized problem lies in para-AC^0, one of the smallest “sensible” subclasses of FPT, if it can be described by a family (ϕ_k)_{k∈ℕ} of FO[+,×]-formulas of bounded quantifier rank such that the finite models of ϕ_k are exactly the elements of the kth slice of the problem. Since the triangle packing problem can be described in this way via the family (δ_k)_{k∈ℕ} of formulas, all of which have quantifier rank 5 plus the constant number of quantifiers used to express the arithmetics in the formulas HASH_k(x,p,q) = i, we get p_k-TRIANGLE-PACKING ∈ FPT.
Clearly, this beautiful idea cannot work in all situations: If it also worked for the formula mentioned earlier expressing 3-colorability, 3-colorability would be first-order expressible, which is known to be impossible. Our first main contribution is a syntactic characterization of when the color coding technique is applicable, that is, of why color coding works for triangle packing but not for 3-colorability: For triangle packing, the colors C_i are applied to variables only inside existential scopes (“∃x∃y∃z”) while for 3-colorability the colors R, G, and B are also applied to variables inside universal scopes (“for all adjacent vertices”). In general, see Theorem 2 for the details, we show that a second-order quantification over an arbitrary number of disjoint colors C_i can be replaced by a fixed number of first-order quantifiers whenever none of the C_i is used in a universal scope.
  • Our Contributions II: New First-Order Quantifier Elimination Rules.
The “purpose” of the colors C_i in the formulas ⋀_{i=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z) is not that the three vertices of a triangle get a particular color, but just that they get a color different from the color of all other triangles. Indeed, our “real” objective in these formulas is to ensure that the vertices of a triangle are distinct from the vertices in the other triangles—and giving vertices different colors is “just a means” of ensuring this.
In our second main contribution we explore this idea further: If the main (indeed, the only) use of colors in the context of color coding is to ensure that certain vertices are different, let us do away with colors and instead focus on the notion of distinctness. To better explain this idea, consider the following family, also describing triangle packing, where the only change is that we now require (a bit superfluously) that even the vertices inside a triangle get different colors: ⋀_{j=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_{3j−2} x ∧ C_{3j−1} y ∧ C_{3j} z). Observe that each C_i is now applied to exactly one variable (x, y, or z in one of the many literals) and the only “effect” that all these applications have is to ensure that the vertices to which these variables get bound are different. In particular, the formula is equivalent to
$$\exists x_1 \cdots \exists x_{3k} \Big( \bigwedge_{i \neq j} x_i \neq x_j \wedge \bigwedge_{j=1}^{k} \exists x \exists y \exists z \big( Exy \wedge Eyz \wedge Exz \wedge x_{3j-2} = x \wedge x_{3j-1} = y \wedge x_{3j} = z \big) \Big) \tag{4}$$
and these formulas are clearly equivalent to the almost identical formulas from (2).
In a sense, in (4) the many existential quantifiers ∃x_i and the many x_i ≠ x_j literals come “for free” from the color coding technique, while ∃x, ∃y, and ∃z have nothing to do with color coding. Our key observation is a syntactic property that tells us whether a quantifier comes “for free” in this way (we will call it weak) or not (we will call it strong): Definition 3 states (essentially) that weak quantifiers have the form ∃x(ϕ) such that x is not free in any universal scope of ϕ and x is used in at most one literal that is not of the form x ≠ y. To make weak quantifiers easier to spot, we mark the variables they bind with a dot (note that this is just a “syntactic hint to the reader” without any semantic meaning). Formulas (4) now read ∃ẋ_1 ⋯ ∃ẋ_{3k} (⋀_{i≠j} ẋ_i ≠ ẋ_j ∧ ⋀_{j=1}^{k} ∃x∃y∃z(Exy ∧ Exz ∧ Eyz ∧ ẋ_{3j−2} = x ∧ ẋ_{3j−1} = y ∧ ẋ_{3j} = z)). Observe that x, y, and z are not weak since each is used in three literals that are not inequalities.
We show in Theorem 4 that each ϕ is equivalent to a formula ϕ′ whose quantifier rank depends only on the strong quantifier rank of ϕ (meaning that we ignore the weak quantifiers) and whose number of variables depends only on the number of strong variables in ϕ. For instance, the formulas from (4) all have strong quantifier rank 3 and, thus, the triangle packing problem can be described by a family of constant (normal) quantifier rank. Applying Chen et al.’s characterization yields membership in para-AC^0.
As a more complex example, let us sketch a “purely syntactic” proof of the result [5,6] that the embedding problem for graphs H of tree depth at most d lies in para-AC^0 for each d. Once more, we construct a family (ϕ_H) of formulas, one for each to-be-embedded graph H, of constant strong quantifier rank that describes the problem. For a graph H = (V(H), E(H)) to have tree depth d means that there is a rooted tree T of depth d such that E(H) is contained in the edges of T’s transitive closure. Let c_1 be the root of T and let children(c) be the (possibly empty) set of children of c in T. Then the following formula of strong quantifier rank d describes that H can be embedded into a graph structure G = (V, E^G) (note that in the formula the E in “E n_i n_j” refers to the edge relation E^G of G while in “(c_i, c_j) ∈ E(H)” it refers to the edge set E(H) of the fixed parameter H):
$$\exists \dot{x}_1 \cdots \exists \dot{x}_{|V(H)|} \Big( \bigwedge_{i \neq j} \dot{x}_i \neq \dot{x}_j \;\wedge\; \exists n_1 \big( n_1 = \dot{x}_{c_1} \wedge \bigwedge_{c_2 \in \mathrm{children}(c_1)} \exists n_2 \big( n_2 = \dot{x}_{c_2} \wedge \bigwedge_{c_3 \in \mathrm{children}(c_2)} \exists n_3 \big( n_3 = \dot{x}_{c_3} \wedge \bigwedge_{c_4 \in \mathrm{children}(c_3)} \exists n_4 \big( n_4 = \dot{x}_{c_4} \wedge \cdots \bigwedge_{c_d \in \mathrm{children}(c_{d-1})} \exists n_d \big( n_d = \dot{x}_{c_d} \wedge \bigwedge_{i,j \in \{1,\dots,d\}:\, (c_i,c_j) \in E(H)} E n_i n_j \big) \cdots \big)\big)\big)\big)\Big).$$
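To see how the strong nesting of this formula mirrors the tree T, the following Python sketch (purely our illustration; the representation of T as a children dictionary is an assumption, not notation from the paper) prints the formula for a given decomposition tree and graph H. Each edge literal E n_j n_i is emitted at the deeper of its two endpoints on the current root-to-leaf path, which is equivalent to collecting all edge literals innermost as in the displayed formula.

```python
def embedding_formula(children, root, H_edges, n_vertices):
    """Generate (as a string) the tree-depth embedding formula whose strong
    quantifiers n_1, ..., n_d mirror the rooted tree T and whose weak
    variables enumerate the vertices of H."""
    def nest(c, path):
        path = path + [c]                        # root-to-c path c_1, ..., c_i
        i = len(path)
        parts = [f"n{i} = ẋ{c}"]
        for j, ancestor in enumerate(path[:-1], start=1):
            if (ancestor, c) in H_edges or (c, ancestor) in H_edges:
                parts.append(f"E n{j} n{i}")     # edge of H along the path
        for child in children.get(c, []):
            parts.append(nest(child, path))      # recurse: one block per child
        return f"∃n{i}(" + " ∧ ".join(parts) + ")"

    prefix = " ".join(f"∃ẋ{v}" for v in range(1, n_vertices + 1))
    return f"{prefix} (⋀_{{i≠j}} ẋi ≠ ẋj ∧ {nest(root, [])})"

# H is the path 1-2-3; T is the same path rooted at 1 (tree depth 3).
print(embedding_formula({1: [2], 2: [3], 3: []}, 1,
                        {(1, 2), (2, 3)}, n_vertices=3))
```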
  • Our Contributions III: Slicewise Descriptions and Variable Set Sizes.
Our third contribution is a new result in the same vein as the already repeatedly mentioned result of Chen et al. [4], stated as Fact 1 in our paper: Our new Theorem 1 states that a parameterized problem can be described slicewise by a family (ϕ_k)_{k∈ℕ} of arithmetic first-order formulas that all use only a bounded number of variables if and only if the problem lies in para-AC^{0↑}—a class that has been encountered repeatedly in the literature [5,7,8,9], but for which no characterization was known. It contains all parameterized problems that can be decided by AC-circuits whose depth depends only on the parameter and whose size is of the form f(k) · n^c.
As an example, consider the problem of deciding whether a graph contains a path of length k (no vertex may be visited twice). It can be described by the family ( δ k ) of formulas with (for odd k) δ k equal to
$$\exists s\, \exists t\, \exists x \Big( Esx \wedge \exists \dot{x}_1 \big( \dot{x}_1 = x \wedge \exists y \big( Exy \wedge \exists \dot{x}_2 \big( \dot{x}_2 = y \wedge \exists x \big( Eyx \wedge \exists \dot{x}_3 \big( \dot{x}_3 = x \wedge \exists y \big( Exy \wedge \exists \dot{x}_4 \big( \dot{x}_4 = y \wedge \cdots \exists x \big( Eyx \wedge x = t \wedge \exists \dot{x}_k \big( \dot{x}_k = x \wedge \bigwedge_{i \neq j} \dot{x}_i \neq \dot{x}_j \big)\big) \cdots \big)\big)\big)\big)\big)\big)\big)\Big).$$
Note that the strong quantifier rank of δ_k is k + 2 and, thus, depends on k. However, there are only four strong variables, namely s, t, x, and y. By Theorem 3 we see that the above formulas are equivalent to a family of formulas with a bounded number of variables and by Theorem 1 we see that p_k-LONG-PATH ∈ para-AC^{0↑} ⊆ FPT. These ideas also generalize easily and we give a purely syntactic proof of the seminal result from the original color coding paper [3] that the embedding problem for graphs of bounded tree width lies in FPT. The core observation—which unifies the results for tree width and depth—is that for each graph with a given tree decomposition, the embedding problem can be described by a formula whose strong nesting structure mirrors the tree structure and whose strong variables mirror the bag contents.
Table 1 summarizes, for the problems discussed in this paper, how they can be described using formulas in such a way that membership in para-AC^0 and para-AC^{0↑} follows. All memberships were previously known; the contribution of this paper is that we prove the memberships by presenting families of first-order formulas that describe the problems, such that the strong quantifier rank (strong-qr) or the number of strong variables (strong-vars) is parameter-independent.

1.1. Related Work

Flum and Grohe [10] were the first to give characterizations of FPT and of many subclasses in terms of the syntactic properties of formulas describing their members. Unfortunately, these syntactic properties do not hold for the descriptions of parameterized problems found in the literature. For instance, they show that FPT contains exactly the problems that can be described by families of FO[LFP]-formulas of bounded quantifier rank—but actually describing problems like p_k-VERTEX-COVER in this way is more or less hopeless and yields little insight into the structure or complexity of the problem. We believe that it is no coincidence that no applications of these beautiful characterizations to concrete problems could be found in the literature—at least prior to very recent work by Chen and Flum [11], who study slicewise descriptions of problems on structures of bounded tree depth, and the already cited article of Chen et al. [4], who do present a family of formulas that describe the vertex cover problem. This family internally uses the color coding technique and is thus closely related to our results. The crucial difference is, however, that we identify syntactic properties of logical formulas that imply that the color coding technique can be applied. It then suffices to find a family describing a given problem that meets the syntactic properties to establish the complexity of the problem: there is no need to actually construct the color-coding-based formulas—indeed, there is not even a need to understand how color coding works in order to decide whether a quantifier is weak or strong.
A different approach towards proving that a problem lies in FPT purely because of the syntactic nature of logic-based problem descriptions comes from the context of optimization problems. Cai and Chen [12] have shown that all problems in the syntactically-defined classes MAXSNP, due to [13], and MIN F+Π_1, due to [14], are fixed-parameter tractable (but not necessarily in the small classes like para-AC^0 and para-AC^{0↑} studied in the present paper).

1.2. Organization of this Paper

In Section 2 we first review some of the existing work on the descriptive complexity of parameterized problems. We add to this work in the form of the mentioned characterization of the class para-AC^{0↑} in terms of a bounded number of variables. Our main technical results are then proved in Section 3, where we establish and prove the syntactic properties that formulas must have in order for the color coding method to be applicable. In Section 4 we then apply the findings and show how membership of different natural problems in para-AC^0 and para-AC^{0↑} (and, thus, in FPT) can be derived entirely from the syntactic structure of the formulas describing them.

2. Describing Parameterized Problems

A happy marriage of parameterized complexity and descriptive complexity was first presented in [10] by Flum and Grohe. We first review their most important definitions and then prove a new characterization, namely of the class para-AC^{0↑} that contains all problems decidable by AC-circuits of parameter-dependent depth and “FPT-like” size.

2.1. Logical Terminology

We only consider first-order logic and use standard notations following for instance [10], with perhaps the only deviations being that we write relational atoms briefly as Exy instead of E(x, y) and that the literal x ≠ y is an abbreviation for ¬x = y (recall that a literal is an atom or a negated atom). Signatures, typically denoted τ, are always finite and may only contain relation symbols and constant symbols—with one exception: The special unary function symbol SUCC may also be present in a signature. Let us write SUCC^k for the k-fold application of SUCC, so SUCC^3(x) is short for SUCC(SUCC(SUCC(x))). It allows us to specify any fixed non-negative integer without having to use additional variables. An alternative is to dynamically add constant symbols for numbers to signatures as done in [4], but we believe that following [10] and adding the successor function gives a leaner formal framework. Let arity(τ) be the maximum arity of relation symbols in τ.
We denote by STRUC[τ] the class of all τ-structures and by |A| the universe of a structure A. As is often the case in descriptive complexity theory, we only consider ordered structures in which the ternary predicates ADD and MULT are available and have their natural meaning. Formally, we say τ is arithmetic if it contains all of the predicates <, ADD, MULT, the function symbol SUCC, and the constant symbol 0 (it is included for convenience only). In this case, STRUC[τ] contains only those A for which <^A is a linear ordering of |A| and the other operations have their natural meaning relative to <^A (with the successor of the maximum element of the universe being itself and with 0 being the minimum with respect to <^A). We write ϕ ∈ FO[+,×] when ϕ is a τ-formula for an arithmetic τ.
A τ-problem is a set Q ⊆ STRUC[τ] closed under isomorphisms. A τ-formula ϕ describes a τ-problem Q if Q = { A ∈ STRUC[τ] | A ⊨ ϕ } and it describes Q eventually if ϕ describes a set Q′ that differs from Q only on structures of a certain maximum size.
Lemma 1.
For each ϕ ∈ FO[+,×] that describes a τ-problem Q eventually, there are quantifier-free formulas α and β such that (α ∧ ϕ) ∨ β describes Q.
Proof. 
The statement of the lemma would be quite simple if we did not require α and β to be quantifier-free: Without this requirement, all we need to do is to use α and β to fix ϕ on the finitely many (up to isomorphisms) structures on which ϕ errs by “hard-wiring” which of these structures are elements of Q and which are not. However, the natural way to do this “hard-wiring” of size-m structures is to use m quantifiers to bind all elements of the universe. This is exactly what we do not wish to do. Rather, we use the successor function to refer to the elements of the universe without using any quantifiers.
In detail, let m be a number such that for all A ∈ STRUC[τ] with ‖A‖ ≥ m (that is, the size ‖A‖ of the universe |A| is at least m) we have A ⊨ ϕ if and only if A ∈ Q. We set α to UNIVERSE_{≥m}, a shorthand for SUCC^{m−1}(0) ≠ SUCC^{m}(0), which is true only for universes of size at least m. We define β so that it is true exactly for all τ-structures A ∈ Q of size at most m (for simplicity we assume that E^2 is the only relation symbol in τ):
$$\beta = \bigvee_{s=1}^{m} \Big( \big(\mathrm{UNIVERSE}_{\geq s} \wedge \neg\, \mathrm{UNIVERSE}_{\geq s+1}\big) \wedge \bigvee_{\mathcal{A} \in Q,\ |\mathcal{A}| = \{0,\dots,s-1\}} \Big( \bigwedge_{u,v \in |\mathcal{A}|:\ (u,v) \in E^{\mathcal{A}}} E\big(\mathrm{SUCC}^{u}(0), \mathrm{SUCC}^{v}(0)\big) \wedge \bigwedge_{u,v \in |\mathcal{A}|:\ (u,v) \notin E^{\mathcal{A}}} \neg E\big(\mathrm{SUCC}^{u}(0), \mathrm{SUCC}^{v}(0)\big) \Big) \Big).$$
The first line of β checks whether the universe of the current structure (the structure for which we would like to know whether it is a model of (α ∧ ϕ) ∨ β) has size s for some s ≤ m and, if so, checks that there is some A ∈ Q of size exactly s so that the edges of A are present in the current structure (second line) and that the edges not in A are also not present in the current structure (last line). In particular, if the current structure has size at most m, it will be a model of β if and only if it is isomorphic to some A ∈ Q and, thus, if it is an element of Q.  □
We write qr(ϕ) for the quantifier rank of a formula and bound(ϕ) for the set of its bound variables. For instance, for ϕ = ∃x ∃y (Exz) ∧ ∀y (Px) we have qr(ϕ) = 2, since the maximum nesting is caused by the two nested existential quantifiers, and bound(ϕ) = {x, y}.
Let us say that ϕ is in negation normal form if negations are applied only to atomic formulas.

2.2. Describing Parameterized Problems

When switching from classical complexity theory to descriptive complexity theory, the basic change is that “words” get replaced by “finite structures.” The same idea works for parameterized complexity theory and, following Flum and Grohe [10], let us define parameterized problems as subsets Q ⊆ STRUC[τ] × ℕ where Q is closed under isomorphisms. In a pair (A, k) ∈ STRUC[τ] × ℕ the number k is, of course, the parameter value of the pair. Flum and Grohe now propose to describe such problems slicewise using formulas. Since this will be the only way in which we describe problems, we will drop the “slicewise” in the phrasings and just say that a family (ϕ_k)_{k∈ℕ} of formulas describes a problem Q ⊆ STRUC[τ] × ℕ if for all (A, k) ∈ STRUC[τ] × ℕ we have (A, k) ∈ Q if and only if A ⊨ ϕ_k. (In this paper, we ignore the question of whether the mappings k ↦ ϕ_k should be required to be computable. While this will be the case for all of our examples and constructions, it is not important for our formalism and results.)
For a class Φ of families ( ϕ k ) k N , let us write X Φ for the class of all parameterized problems that are described by the members of Φ (we chose “X” to represent a “slicewise” description, which seems to be in good keeping with the usual use of X in other classes such as XP or XL). For instance, the mentioned characterization of FPT in logical terms by Flum and Grohe can be written as
$$\mathrm{FPT} = \mathrm{X}\big\{ (\phi_k)_{k \in \mathbb{N}} \;\big|\; \phi_k \in \mathrm{FO}[\mathrm{LFP}],\ \max\nolimits_k \mathrm{qr}(\phi_k) < \infty \big\}.$$
We remark that instead of describing parameterized problems using families, a more standard and at the same time more flexible way is to use reductions to model checking problems. Clearly, if a family (ϕ_k)_{k∈ℕ} of ℒ-formulas describes Q ⊆ STRUC[τ] × ℕ, then there is a very simple parameterized reduction from Q to the model checking problem p_ϕ-MC(ℒ), where the input is a pair (A, num(ϕ)) and the question is whether both A ⊨ ϕ and ϕ ∈ ℒ hold. (The function num encodes mathematical objects like ϕ or later tuples like (ϕ, δ) as unique natural numbers.) The reduction simply maps a pair (A, k) to (A, num(ϕ_k)). Even more interestingly, without going into any of the technical details, it is also not hard to see that as long as a reduction is sufficiently simple, the reverse implication holds, that is, we can replace a reduction to the model checking problem by a family of formulas that describe the problem. We can, thus, use whatever formalism seems more appropriate for the task at hand and—as we hope that this paper shows—it is sometimes quite natural to write down a family that describes a problem.

2.3. Parameterized Circuits

For our descriptive setting, we need to slightly adapt the definition of the circuit classes para-AC^0 and para-AC^{0↑} from [5,7]: Let us say that a problem Q ⊆ STRUC[τ] × ℕ is in para-AC^0 if there is a family (C_{n,k})_{n,k∈ℕ} of AC-circuits (Boolean circuits with unbounded fan-in) such that for all (A, k) ∈ STRUC[τ] × ℕ we have
  • (A, k) ∈ Q if and only if C_{|x|,k}(x) = 1 where x is a binary encoding of A and |x| is the length of the encoding,
  • the size of each C_{n,k} is at most f(k) · n^c for some function f and some constant c that is independent of both n and k,
  • the depth of each C n , k is bounded by some constant that is again independent of both n and k, and
  • the circuit family satisfies a dlogtime-uniformity condition. (Since the complex questions around circuit uniformity are not of special importance for the present paper, we keep its discussion short and just remark that dlogtime-uniformity roughly means that arithmetic formulas suffice to describe the circuits.)
The class para-AC^{0↑} is defined the same way, but the depth may be g(k) for some g instead of only O(1). The following fact and theorem show how these two circuit classes are closely related to descriptions of parameterized problems using formulas:
Fact 1
([4]). $$\text{para-}\mathrm{AC}^0 = \mathrm{X}\big\{ (\phi_k)_{k \in \mathbb{N}} \;\big|\; \phi_k \in \mathrm{FO}[+,\times],\ \max\nolimits_k \mathrm{qr}(\phi_k) < \infty \big\}.$$
Theorem 1.
$$\text{para-}\mathrm{AC}^{0\uparrow} = \mathrm{X}\big\{ (\phi_k)_{k \in \mathbb{N}} \;\big|\; \phi_k \in \mathrm{FO}[+,\times],\ \max\nolimits_k |\mathrm{bound}(\phi_k)| < \infty \big\}.$$
Proof. 
The basic idea behind the proof is quite “old.” We wish to establish links between circuit depth and size and the number of variables used in a formula—and such links are well-known, see for instance [15]: The quantifier rank of a first-order formula naturally corresponds to the depth of a circuit that solves the model checking problem for the formula. The number of variables corresponds to the exponent of the polynomial that bounds the size of the circuit (the paper [16] is actually entitled “ DSPACE [ n k ] = VAR [ k + 1 ] ”). One thing that is usually not of interest (because only one formula is usually considered) is the fact that the length of the formula is linked multiplicatively to the size of the circuit.
In detail, suppose we are given a problem Q ⊆ STRUC[τ] × ℕ with Q ∈ para-AC^{0↑} via a circuit family (C_{n,k})_{n,k∈ℕ} of depth g(k) and size f(k) · n^c. For a fixed k, we now need to construct a formula ϕ_k that correctly decides the k-th slice. In other words, we need an FO[+,×]-formula ϕ_k whose finite models are exactly those on which the family (C_{n,k})_{n∈ℕ} (note that k no longer indexes the family) evaluates to 1 (when the models are encoded as bitstrings). It is well-known how such a formula can be constructed, see for instance [15], we just need a closer look at how the quantifier rank and number of variables relate to the circuit depth and size.
The basic idea behind the formula ϕ_k is the following: The circuit has f(k) · n^c gates and we can “address” these gates using c variables (which gives us n^c possibilities) plus a number i ∈ {1, …, f(k)} (which gives us f(k) · n^c possibilities). Since for fixed k the number f(k) is also fixed, it is permissible that the formula ϕ_k contains f(k) copies of some subformula, where each subformula handles another value of i. The basic idea is then to start with formulas ψ_i^0 for i ∈ {1, …, f(k)}, each of which has c free variables, so that ψ_i^0(x_1, …, x_c) is true exactly if the tuple (x_1, …, x_c, i) represents an input gate set to 1. At this point, the uniformity condition basically tells us that arithmetic formulas suffice to express this and that they all have a fixed quantifier rank. Next, we construct formulas ψ_i^1(x_1, …, x_c) that are true if (x_1, …, x_c, i) addresses a gate for which the input values are all already computed by the ψ_j^0 and that evaluates to 1. Next, formulas ψ_i^2 are constructed, but now, we can reuse the variables used in the ψ_j^0. In this way, we finally build formulas ψ_i^{g(k)} and apply them to the “address” of the output gate. All told, we get a formula whose quantifier rank is c · g(k) + O(1) and in which at most 2c + O(1) variables are used (note that the size of the formula depends on f(k)). Clearly, this means that the family (ϕ_k)_{k∈ℕ} created in this way does, indeed, only use a bounded number of variables (namely O(c) many) and decides Q.
For the other direction, suppose (ϕ_k)_{k∈ℕ} describes Q and that all ϕ_k contain at most v variables (since they contain no free variables, this is the same as the number of bound variables). Clearly, we may assume that the formulas ϕ_k are in negation normal form. We may also assume that they are “flat,” by which we mean that they contain no subformulas of the form (α ∨ β) ∧ γ or α ∧ (β ∨ γ): Using the distributive laws of propositional logic, any first-order formula can be turned into an equivalent flat formula with the same number of variables and the same quantifier rank (one can loosely think of this as locally transforming the formula into disjunctive normal form, but “not past quantifiers”). Lastly, we may assume that the SUCC function symbol is only used in atoms of the form x = SUCC^s(0) for some variable x and some number s: We can replace for instance E SUCC^6(x) SUCC^3(y) by the equivalent formula ∃x′∃x″∃y′∃y″(x″ = SUCC^6(0) ∧ ADD x x″ x′ ∧ y″ = SUCC^3(0) ∧ ADD y y″ y′ ∧ E x′ y′) without raising the number of variables and the quantifier rank by more than 4 (or, in general, by more than the constant 2 · arity(τ)).
As before, it is now known that for each ϕ_k there is a family (C_{n,k})_{n∈ℕ} that evaluates to 1 exactly on the (encoded) models of ϕ_k. These circuits are constructed as follows: While ϕ_k has no free variables, a subformula ψ of ϕ_k can have up to v free variables. For each such subformula, the circuits use n^v gates to keep track of all assignments to these v variables that make the subformula true. Clearly, this is relatively easy to achieve for literals in a constant number of layers, including literals of the form x = SUCC^s(0) since s is a fixed number depending only on k. Next, if a formula is of the form ⋀_i α_i and for some assignment we have one gate for each α_i that tells us whether it is true, we can feed all these wires into one ∧-gate. We can take care of a formula of the form ⋁_i α_i in the same way—and note that in a flat formula there will be at most one alternation from ⋀ to ⋁ before we encounter a quantifier. Now, for subformulas of the form ∃x ϕ, the correct values for the n^{v−1} gates can be obtained by a big ∨-gate attached to the outputs from the gates for ϕ. Similarly, ∀x ϕ can be handled using a big ∧-gate.
Based on these observations, it is now possible to build a circuit of size |ϕ_k| · n^v and depth O(qr(ϕ_k)). In particular, the resulting overall circuit family has a depth that depends only on the parameter (since the quantifier rank can be at most |ϕ_k|, which depends only on k) and has a size of the required form f(k) · n^c (namely with f(k) = |ϕ_k| and c = v). It can also be shown that the necessary uniformity conditions are satisfied.
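To make this correspondence concrete, here is a small evaluator (our sketch, not code from the paper): for every subformula it computes the set of assignments to the fixed variable set that satisfy it. Each assignment plays the role of one of the n^v gates per subformula described above, and the quantifier cases are exactly the big ∨- and ∧-gates.

```python
from itertools import product

def satisfying_assignments(phi, universe, E, variables):
    """Return the set of assignments (tuples indexed like 'variables') that
    satisfy the subformula phi -- at most n^v per subformula, mirroring the
    circuit construction.  Formulas are nested tuples over a binary relation:
    ('E', x, y), ('not', atom), ('and', a, b), ('or', a, b),
    ('exists', x, a), ('forall', x, a)."""
    idx = {v: i for i, v in enumerate(variables)}
    all_asg = set(product(universe, repeat=len(variables)))
    op = phi[0]
    if op == 'E':
        return {a for a in all_asg if (a[idx[phi[1]]], a[idx[phi[2]]]) in E}
    if op == 'not':                      # negation of an atom (negation normal form)
        return all_asg - satisfying_assignments(phi[1], universe, E, variables)
    if op in ('and', 'or'):
        left = satisfying_assignments(phi[1], universe, E, variables)
        right = satisfying_assignments(phi[2], universe, E, variables)
        return left & right if op == 'and' else left | right
    if op in ('exists', 'forall'):       # big OR-gate / big AND-gate per assignment
        inner = satisfying_assignments(phi[2], universe, E, variables)
        i = idx[phi[1]]
        test = any if op == 'exists' else all
        return {a for a in all_asg
                if test(a[:i] + (u,) + a[i + 1:] in inner for u in universe)}
    raise ValueError(op)

# "Every vertex has an outgoing edge", written with the two variables x and y.
V, E = range(3), {(0, 1), (1, 2), (2, 0)}
phi = ('forall', 'x', ('exists', 'y', ('E', 'x', 'y')))
print(len(satisfying_assignments(phi, V, E, ['x', 'y'])) == len(V) ** 2)  # True
```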
We remark that the above proof also implies Fact 1, namely for g ( k ) = O ( 1 ) for the first direction and for qr ( ϕ k ) = O ( 1 ) for the second direction.  □

2.4. Bounded Rank Reductions

To complete the formal framework for describing parameterized problems, we need a notion of parameterized reductions that is weak enough to ensure that the smallest class we study, para-AC^0, is closed under them. Such a reduction is used in the literature [5], boringly named para-AC^0-reduction, but its definition, as well as the definitions of other kinds of parameterized reductions found in the literature, does not fit well with our logical framework: The reductions are defined in terms of machines or circuits that get as input a string that explicitly or implicitly contains the parameter k and output a new problem instance that once more explicitly or implicitly contains a new parameter value k′.
In contrast, in our setting the inputs and outputs must be logical structures that we wish to define in terms of formulas. Furthermore, “outputting a parameter value” is difficult in our formal framework since parameter values are not elements of the universe, but indices of the formulas. All of these problems can be circumvented, see for instance ([11], Definition 5.3), but we believe that a new “purely logical” definition of reductions between parameterized problems gives a cleaner formalism. We will not prove this, but remark that the power of this reduction is the same as that of para-AC^0-reductions.
Definition 1.
Let τ and τ′ be signatures and let Q ⊆ STRUC[τ] × ℕ and Q′ ⊆ STRUC[τ′] × ℕ be two problems. A bounded rank reduction from Q to Q′, written Q ≤^{br} Q′, is a pair of families (f_k)_{k∈ℕ} and (ι_{k,k′})_{k,k′∈ℕ} where
  • each f_k is a first-order query from τ-structures to τ′-structures and
  • each ι_{k,k′} is a τ-formula
such that
1. 
for each (A, k) ∈ STRUC[τ] × ℕ there is exactly one k′ ∈ ℕ, denoted by ι_k(A) in the following, such that A ⊨ ι_{k,k′},
2. 
there is a mapping ι*: ℕ → ℕ such that for all A ∈ STRUC[τ] we have ι_k(A) ≤ ι*(k),
3. 
(A, k) ∈ Q if and only if (f_k(A), ι_k(A)) ∈ Q′, and
4. 
the quantifier ranks of all ι_{k,k′} and of all formulas inside the f_k, as well as the widths of the f_k, are bounded by a constant c.
Let us briefly explain the ingredients of this definition: Each f_k maps all τ-structures A to τ′-structures A′. The fact that we have one function for each parameter value allows us to make our mapping depend on the parameter. The job of the formulas ι_{k,k′} is solely to “compute” the new parameter value k′, based not only on the original value k, but also on A. If, as is the case in many reductions, the new parameter value k′ just depends on k (typically, it even equals k), we can just set ι_{k,k′} to a trivial tautology ⊤ and all other ι_{k,k″} to the contradiction ⊥.
In the definition, we referred to first-order queries, which are a standard way of defining a logical τ′-structure in terms of a τ-structure in database theory and finite model theory (we remark that in general model theory “interpretation” is used instead of “query” and that “transductions” from fpt-theory are queries with special semantic and syntactic properties). A detailed account of first-order queries can be found in [17], but here is the basic idea: Suppose we wish to map graphs ((E^2)-structures) to their underlying undirected graphs ((U^2)-structures, where U represents the underlying symmetric edge set). In this case, there is a simple formula ϕ_U(x, y) that tells us when Uxy holds in the new structure: Exy ∨ Eyx. More importantly, if we have a formula ψ that internally uses Uxy to check whether there is an undirected edge in the mapped graph, we can easily turn this into a formula ψ[f], where we replace all occurrences of Uxy by ϕ_U(x, y), that gives the same answer as ψ when fed the original graph. In other words, if a first-order query maps A to A′ and we wish to check whether A′ ⊨ ψ holds, we can just as well check whether A ⊨ ψ[f] holds. The just-described example of a first-order query did not change the universe, which is something we sometimes wish to do. This is achieved by allowing the width w of the query to be larger than 1. The effect is that the universe U gets replaced by U^w and, now, elements of this new universe can be described by tuples of variables of length w. We can also reduce the size of the universe using a formula ϕ_universe(x_1, …, x_w) that is true only for the tuples we wish to keep in the new structure’s universe.
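A minimal sketch (ours) of the width-1 query from this example: the defining formula ϕ_U(x, y) = Exy ∨ Eyx is substituted for every use of U, so evaluating ψ on the derived undirected graph and evaluating ψ[f] directly on the original directed graph give the same answer.

```python
# The query f maps an (E)-structure to a (U)-structure via
# phi_U(x, y) = Exy ∨ Eyx.
def U_of(E):
    return lambda x, y: (x, y) in E or (y, x) in E

# A formula psi over the derived structure: "every vertex has a U-neighbour".
def psi(universe, U):
    return all(any(U(x, y) for y in universe) for x in universe)

# psi[f]: the same formula with phi_U substituted for every occurrence of U,
# evaluated directly on the original directed graph.
def psi_f(universe, E):
    return all(any((x, y) in E or (y, x) in E for y in universe)
               for x in universe)

V, E = range(3), {(0, 1), (2, 1)}
assert psi(V, U_of(E)) == psi_f(V, E)   # A' |= psi  iff  A |= psi[f]
print(psi(V, U_of(E)))                  # True: every vertex touches some edge
```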
Lemma 2.
Let Q ≤^{br} Q′ via a bounded rank reduction given by (f_k)_{k∈ℕ} and (ι_{k,k′})_{k,k′∈ℕ}. Let (ϕ′_{k′})_{k′∈ℕ} describe Q′. Then there is a family (ϕ_k)_{k∈ℕ} that describes Q with
1. 
max_k qr(ϕ_k) = max_{k′} qr(ϕ′_{k′}) + O(1) and
2. 
max_k |bound(ϕ_k)| = max_{k′} |bound(ϕ′_{k′})| + O(1).
In particular, para-AC^0 and para-AC^{0↑} are closed under bounded rank reductions.
Proof. 
Set ϕ_k to ⋁_{k′=1}^{ι*(k)} (ι_{k,k′} ∧ ϕ′_{k′}[f_k]). By definition, we have A ⊨ ϕ_k if and only if f_k(A) ⊨ ϕ′_{k′} for the unique k′ with A ⊨ ι_{k,k′}. If we can argue that the substitutions do not increase the quantifier rank or number of variables by more than a constant, we get the claim. Unfortunately, there is a case where simple substitutions fail to preserve the quantifier rank, namely when a formula ϕ′_{k′} contains a large number of nested applications of the successor function. Suppose, for instance, ϕ′_{k′} is something like ∃x∃y(SUCC^{1000}(x) = y). While this formula has quantifier rank 2 and uses only two variables, a simple substitution of each occurrence of the one thousand SUCC operators in ϕ′_{k′} by any nontrivial formula in f_k that describes the successor function will yield a quantifier rank of at least 1000.
The trick is to use the results from the next section: We can easily modify any formula so that all occurrences of the successor function are of the form x = SUCC^i(0) for some number i. This means that we “only” need a way of identifying the ith element of the new universe using a bounded quantifier rank. However, assuming for simplicity a width of 1 and assuming that ϕ_universe(x) and ϕ_<(x, y) describe how f_k restricts and reorders the universe, respectively, the formula ϕ_universe(x) ∧ ∃^{=i−1} y (ϕ_universe(y) ∧ ϕ_<(y, x)) is true exactly for the ith element of the universe. We will see in the next section that we can express the ∃^{=i−1} y quantifier using a constant quantifier rank that is independent of i.  □

3. Syntactic Properties Allowing Color Coding

The color coding technique [3] is a powerful method from parameterized complexity theory for “discovering small objects” in larger structures. Recall the example from the introduction: While finding k disjoint triangles in a graph is difficult in general, it is easy when the graph is colored with k colors and the objective is to find for each color one triangle having this color. The idea behind color coding is to reduce the (hard) uncolored version to the (easy) colored version by randomly coloring the graph and then “hoping” that the coloring assigns a different color to each triangle. Since the triangles are “small objects,” the probability that they do, indeed, get different colors depends only on k. Even more importantly, Alon et al. noticed that we can derandomize the coloring procedure simply by coloring each vertex by its hash value with respect to a simple family of universal hash functions that only use addition and multiplication [3]. This idea is beautiful and works surprisingly well in practice [18], but using the method inside proofs can be tricky: One needs to “keep the set sizes under control” (they must stay roughly logarithmic in size) and one needs to algorithmically “identify the small set based just on its random coloring,” which especially for more complex proofs can lead to rather subtle arguments.
In the present section, we identify syntactic properties of formulas that guarantee that the color coding technique can be applied. The property is that the colors (the predicates C i in the formulas) are not in the scope of a universal quantifier (this restriction is necessary, as the example of the formula describing 3-colorability shows).
As mentioned already in the introduction, the main “job” of the colors in proofs based on color coding is typically to ensure that vertices of a graph are different from other vertices. This leads us to the idea of focusing entirely on the notion of distinctness in the second half of this section. This time, there will be syntactic properties of existentially bounded first-order variables that will allow us to apply color coding to them.

3.1. Formulas with Color Predicates

In graph theory, a coloring of a graph can either refer to an arbitrary assignment that maps each vertex to a color or to such an assignment in which vertices connected by an edge must get different colors (sometimes called proper colorings). For our purposes, colorings need not be proper and are thus partitions of the vertex set into color classes. From the logical point of view, each color class can be represented by a unary predicate. A k-coloring of a τ-structure A is a structure B over the signature τ_{k-colors} = τ ∪ {C_1^1, …, C_k^1}, where the C_i are fresh unary relation symbols, such that A is the τ-restriction of B and such that the sets C_1^B to C_k^B form a partition of the universe |A| of A.
Let us now formulate and prove the first syntactic version of color coding. An example of a possible formula ϕ in the below theorem is ⋀_{i=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_i x ∧ C_i y ∧ C_i z), for which the theorem tells us that there is a formula ϕ′ of constant quantifier rank that is true exactly when there are pairwise disjoint sets C_i that make ϕ true.
Theorem 2.
Let τ be an arithmetic signature and let k be a number. For each first-order τ_{k-colors}-sentence ϕ in negation normal form in which no C_i is inside a universal scope, there is a τ-sentence ϕ′ such that:
1. 
For all A ∈ STRUC[τ] we have A ⊨ ϕ′ if and only if there is a k-coloring B of A with B ⊨ ϕ.
2. 
qr(ϕ′) = qr(ϕ) + O(1).
3. 
|bound(ϕ′)| = |bound(ϕ)| + O(1).
(Let us clarify that O ( 1 ) represents a global constant that is independent of τ and k.)
Proof. 
Let τ, k, and ϕ be given as stated in the theorem. If necessary, we modify ϕ to ensure that there is no literal of the form ¬C_i x_j, by replacing each such literal by the equivalent ⋁_{l≠i} C_l x_j. After this transformation, the C_i in ϕ are neither in the scope of universal quantifiers nor of negations—and this is also true for all subformulas α of ϕ. We will now show by structural induction that all these subformulas (and, hence, also ϕ) have two semantic properties, which we call the monotonicity property and the small witness property (with respect to the C_i). Afterwards, we will show that these two properties allow us to apply the color coding technique.
Establishing the monotonicity and small witness properties. Some notations will be useful: Given a τ-structure A with universe A and given sets A_i ⊆ A for i ∈ {1, …, k}, let us write A ⊨ ϕ(A_1, …, A_k) to indicate that B is a model of ϕ where B is the τ_{k-colors}-structure with universe A in which all symbols from τ are interpreted as in A and in which the symbol C_i is interpreted as A_i, that is, C_i^B = A_i. Subformulas γ of ϕ may have free variables and suppose that x_1 to x_m are the free variables in γ and let a_i ∈ A for i ∈ {1, …, m}. We write A ⊨ γ(A_1, …, A_k, a_1, …, a_m) to indicate that γ holds in the just-described structure B when each x_i is interpreted as a_i.
Definition 2.
Let γ be a τ_{k-colors}-formula with free variables x_1 to x_m. We say that γ has the monotonicity and the small witness properties with respect to the C_i if for all τ-structures A with universe A = |A| and all values a_1, …, a_m ∈ A the following holds:
1. 
Monotonicity property: Let A_1, …, A_k ⊆ A and B_1, …, B_k ⊆ A be sets with A_i ⊆ B_i for all i ∈ {1, …, k}. Then A ⊨ γ(A_1, …, A_k, a_1, …, a_m) implies A ⊨ γ(B_1, …, B_k, a_1, …, a_m).
2. 
Small witness property: If there are sets B_1, …, B_k ⊆ A such that we have A ⊨ γ(B_1, …, B_k, a_1, …, a_m), then there are sets A_i ⊆ B_i whose sizes |A_i| depend only on γ for i ∈ {1, …, k}, such that A ⊨ γ(A_1, …, A_k, a_1, …, a_m).
We now show that ϕ has these two properties. For monotonicity, just note that the C i are not in the scope of any negation and, thus, if some A i make ϕ true, so will all supersets B i of the A i .
To see that the small witness property holds, we argue by structural induction: If ϕ is any formula that does not involve any C_i, then ϕ is true or false independently of the B_i and, in particular, if it is true at all, it is also true for A_i = ∅ for i ∈ {1, …, k}. If ϕ is the atomic formula C_i x_j, then setting A_i = {a_j} and A_{i′} = ∅ for i′ ≠ i makes the formula true.
If ϕ = α ∧ β, then α and β have the small witness property by the induction hypothesis. Let B_1, …, B_k ⊆ A make ϕ hold in A. Then they also make both α and β hold in A. Let A_1^α, …, A_k^α ⊆ A with A_i^α ⊆ B_i be the witnesses for α and let A_1^β, …, A_k^β ⊆ A be the witnesses for β. Then by the monotonicity property, A_1^α ∪ A_1^β, …, A_k^α ∪ A_k^β makes both α and β true, that is
A ⊨ α(A_1^α ∪ A_1^β, …, A_k^α ∪ A_k^β, a_1, …, a_m)
and the same holds for β. Note that A_i^α ∪ A_i^β ⊆ B_i still holds and that these sets have sizes depending only on α and β and thereby on ϕ.
For ϕ = α ∨ β we can argue in exactly the same way as for the logical and.
The last case for the structural induction is ϕ = ∃x_m(α). Consider B_1, …, B_k ⊆ A that make ϕ true. Then there is a value a_m ∈ A such that A ⊨ α(B_1, …, B_k, a_1, …, a_m). Now, since α has the small witness property by the induction hypothesis, we get A_i ⊆ B_i of size depending on α for which we also have A ⊨ α(A_1, …, A_k, a_1, …, a_m). Then, by the definition of existential quantifiers, these A_i also witness A ⊨ ϕ(A_1, …, A_k, a_1, …, a_{m−1}). (Observe that this is the point where the argument would not work for a universal quantifier: Here, for each possible value of a_m we might have a different set of A_i’s as witnesses and their union would then no longer have small size.)
Applying color coding: our next step in the proof is to use color coding to produce the partition. First, let us recall the basic lemma on universal hash functions formulated below in a way equivalent to ([2], p. 347):
Lemma 3.
There is an n_0 ∈ ℕ such that for all n ≥ n_0 and all subsets X ⊆ {0, …, n−1} there exist a prime p < |X|^2 log_2 n and a number q < p such that the function h_{p,q}(m) = (q · m mod p) mod |X|^2 is injective on X.
As has already been observed by Chen et al. [4], if we set k = |X| we can easily express the computation underlying h_{p,q}: {0, …, n−1} → {0, …, k^2−1} using a fixed FO[+,×]-formula ρ(k, p, q, x, y). That is, if we encode the numbers k, p, q, x, y ∈ {0, …, n−1} as corresponding elements of the universe with respect to the ordering of the universe, then ρ(k, p, q, x, y) holds if and only if h_{p,q}(x) = y. Note that the p and q from the lemma could exceed n for very large X (they can reach up to n^2 log_2 n ≤ n^3), but, first, this situation will not arise in the following and, second, this could be fixed by using three variables to encode p and three variables to encode q. Trivially, ρ(k, p, q, x, y) has some constant quantifier rank (the formula explicitly constructed by Chen et al. has qr(ρ) = 9, assuming k^2 < n).
Next, we will need the basic idea or “trick” of Alon et al.’s [3] color coding technique: While for appropriate p and q the function h_{p,q} will “just” map X injectively into {0, …, k^2−1}, we actually want a function that maps each element x ∈ X to a specific element (“the color of x”) of {1, …, k}. Fortunately, this is easy to achieve by composing h_{p,q} with an appropriate function g: {0, …, k^2−1} → {1, …, k}.
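The following sketch (ours, for illustration only) makes both steps concrete for small inputs: it searches for p and q as in Lemma 3 so that h_{p,q} is injective on X, and then composes h_{p,q} with a function g that maps hash values to colors. Note that the lemma guarantees success only for sufficiently large n, so the search may fail on very small examples.

```python
import math

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, math.isqrt(p) + 1))

def find_hash_parameters(n, X):
    """Search for a prime p < |X|^2 * log2(n) and a q < p such that
    h_{p,q}(m) = ((q * m) mod p) mod |X|^2 is injective on X (Lemma 3)."""
    k2 = len(X) ** 2
    for p in filter(is_prime, range(2, int(k2 * math.log2(max(n, 2))) + 1)):
        for q in range(1, p):
            if len({((q * x) % p) % k2 for x in X}) == len(X):
                return p, q
    return None                          # cannot happen once n is large enough

def derandomized_coloring(n, X, k):
    """Color every element of {0, ..., n-1}: hash it with h_{p,q} and map the
    hash value to a color via g.  If |X| <= k, the elements of X receive
    pairwise different colors because h_{p,q} is injective on X."""
    p, q = find_hash_parameters(n, X)
    h = lambda m: ((q * m) % p) % (len(X) ** 2)
    g = {h(x): (i % k) + 1 for i, x in enumerate(sorted(X))}   # colors 1..k
    return {m: g.get(h(m), 1) for m in range(n)}

X = {3, 17, 29, 41}
coloring = derandomized_coloring(n=64, X=X, k=4)
print(sorted(coloring[x] for x in X))    # [1, 2, 3, 4]: all of X colored differently
```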
In detail, to construct ϕ′ from the claim of the theorem, we construct a family of formulas ϕ^g(p, q) where p and q are new free variables and the formulas are indexed by all possible functions g: {0, …, k^2−1} → {1, …, k}: In ϕ, replace every occurrence of C_i x_j by the following formula π_i^g(p, q, x_j):
$$\bigvee_{y \in \{0,\dots,k^2-1\},\ g(y) = i} \exists \hat{k}\, \exists \hat{y}\, \big( \mathrm{SUCC}^{k}(0) = \hat{k} \wedge \mathrm{SUCC}^{y}(0) = \hat{y} \wedge \rho(\hat{k}, p, q, x_j, \hat{y}) \big),$$
where k̂ and ŷ are fresh variables that we bind to the numbers k and y (if the universe is large enough). Note that the formula C_i x_j has x_j as a free variable, while π_i^g(p, q, x_j) additionally has p and q as free variables. As an example, for the formula ϕ = ∃x(C_2 x ∧ ∃y C_5 y) we would have ϕ^g = ∃x(π_2^g(p, q, x) ∧ ∃y π_5^g(p, q, y)). Clearly, each ϕ^g has the property qr(ϕ^g) = qr(ϕ) + O(1).
The desired formula ϕ′ is (almost) simply ⋁_{g: {0,…,k^2−1} → {1,…,k}} ∃p∃q(ϕ^g(p, q)). The “almost” is due to the fact that this formula works only for structures with a sufficiently large universe—but by Lemma 1 it suffices to consider only this case. Let us prove that for every τ-structure A with universe A = {0, …, n−1} and n ≥ c for some to-be-specified constant c, the following two statements are equivalent:
  • There is a k-coloring B of A with B ⊨ ϕ.
  • A ⊨ ⋁_{g: {0,…,k^2−1} → {1,…,k}} ∃p∃q(ϕ^g(p, q)).
Let us start with the implication of item 2 to 1. Suppose there is a function g: {0, …, k^2−1} → {1, …, k} and elements p, q ∈ {0, …, n−1} such that A ⊨ ϕ^g(p, q). We define a partition A_1 ∪̇ ⋯ ∪̇ A_k = A by A_i = {x ∈ A | g(h_{p,q}(x)) = i}. In other words, A_i contains all elements of A that are first hashed to an element of {0, …, k^2−1} that is then mapped to i by the function g. Trivially, the A_i form a partition of the universe A.
Assuming that the universe size is sufficiently large, namely for k^2 log_2 n < n, inside ϕ^g all uses of ρ(k̂, p, q, x, ŷ) will have the property that A ⊨ ρ(k̂, p, q, x, ŷ) if and only if h_{p,q}(x) = ŷ. Clearly, there is a constant c depending only on k such that for all n > c we have k^2 log_2 n < n.
With the property established, we now see that π_i^g(p, q, x_j) holds inside the formula ϕ^g if and only if the interpretation of x_j is an element of A_i. This means that if we interpret each C_i by A_i, then we get A ⊨ ϕ(A_1, …, A_k) and the A_i form a partition of the universe. In other words, we get item 1.
Now assume that item 1 holds, that is, there is a partition B_1 ∪̇ ⋯ ∪̇ B_k = A with A ⊨ ϕ(B_1, …, B_k). We must show that there are a g: {0, …, k^2−1} → {1, …, k} and p, q ∈ A such that A ⊨ ϕ^g(p, q).
At this point, we use the small witness property that we established earlier for the partition. By this property there are pairwise disjoint sets A_i ⊆ A such that, first, |A_i| depends only on ϕ and, second, A ⊨ ϕ(A_1, …, A_k). Let X = ⋃_{i=1}^{k} A_i. Then |X| depends only on ϕ and let s_ϕ be a ϕ-dependent upper bound on this size. By the universal hashing lemma (Lemma 3), there are now p and q such that h_{p,q}: {0, …, n−1} → {0, …, s_ϕ^2−1} is injective on X. Then, we can set g: {0, …, s_ϕ^2−1} → {1, …, k} to g(v) = i if there is an x ∈ A_i with h_{p,q}(x) = v, setting g(v) arbitrarily otherwise. Note that this is, indeed, a valid definition of g since h_{p,q} is injective on X.
With these definitions, we now define the following sets D_1 to D_k: Let D_i = {x ∈ A | g(h_{p,q}(x̂)) = i} where x̂ is the index of x in A with respect to the ordering (that is, x̂ = |{y ∈ A | y <^A x}| and for the special case that A = {0, …, n−1} and that <^A is the natural ordering, x̂ = x). Observe that D_i ⊇ A_i holds for all D_i and that the D_i form a partition of the universe A. By the monotonicity property, A ⊨ ϕ(A_1, …, A_k) implies A ⊨ ϕ(D_1, …, D_k). However, by definition of the D_i and of the formulas π_i^g, for a sufficiently large universe size n (namely s_ϕ^2 log_2 n < n), we now also have A ⊨ ϕ^g(p, q), which in turn implies A ⊨ ⋁_g ∃p∃q ϕ^g.
This concludes the proof of Theorem 2.  □
In the theorem we assumed that ϕ is a sentence (a formula without free variables) to keep the notation simple, but both the theorem and later theorems still hold when ϕ(x_1, …, x_n) has free variables x_1 to x_n. Then there is a corresponding ϕ′(x_1, …, x_n) such that the first item becomes: for all A ∈ STRUC[τ] and all a_1, …, a_n ∈ |A| we have A ⊨ ϕ′(a_1, …, a_n) if and only if there is a k-coloring B of A with B ⊨ ϕ(a_1, …, a_n). Note that the syntactic transformations in the theorem do not add dependencies of universal quantifiers on the free variables.
Mostly for simplicity, we have opted to only use first-order logic throughout this paper and this will be exactly the kind of logic we need in the rest of this paper. However, it is arguably more natural to formulate the above Theorem 2 in terms of second-order logic since “guessing a coloring” is clearly a case of a special existential second-order quantification. The below corollary is a rephrasing of Theorem 2 in terms of second-order logic. In the corollary, we use the notation ∃(C_1 ∪̇ ⋯ ∪̇ C_k)(ϕ) as a shorthand for ∃C_1 ⋯ ∃C_k (ϕ ∧ ∀x ⋀_{i≠j} (¬C_i x ∨ ¬C_j x)), that is, for the “existential guessing of a coloring where each element of the universe gets at most one color”.
Corollary 1.
Let τ be an arithmetic signature and let k be a number. For each first-order τ_{k-colors}-sentence ϕ in negation normal form in which no C_i is inside a universal scope, there is a first-order τ-sentence ϕ′ such that:
1. 
∃(C_1 ∪̇ ⋯ ∪̇ C_k)(ϕ) ≡ ϕ′ (on finite structures).
2. 
qr(ϕ′) = qr(ϕ) + O(1).
3. 
|bound(ϕ′)| = |bound(ϕ)| + O(1).

3.2. Formulas with Weak Quantifiers

If one has a closer look at proofs based on color coding, one cannot help but notice that the colors are almost exclusively used to ensure that certain vertices in a structure are distinct from certain other vertices: Recall the introductory example ⋀_{j=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ C_{3j−2} x ∧ C_{3j−1} y ∧ C_{3j} z), which describes the triangle packing problem when we require that the C_i form a partition of the universe. Since the C_i are only used to ensure that the many different x, y, and z are different, we already rewrote the formula in (4) as ∃x_1 ⋯ ∃x_{3k} (⋀_{i≠j} x_i ≠ x_j ∧ ⋀_{j=1}^{k} ∃x∃y∃z(Exy ∧ Eyz ∧ Exz ∧ x_{3j−2} = x ∧ x_{3j−1} = y ∧ x_{3j} = z)). While this rewriting gets rid of the colors and moves us back into the familiar territory of simple first-order formulas, the quantifier rank and the number of variables in the formula have now “exploded” (from the constant 3 to the parameter-dependent value 3k + 3)—which is exactly what we need to avoid in order to apply Fact 1 or Theorem 1.
We now define a syntactic property that the ∃x_i have that allows us to remove them from the formula and, thereby, to arrive at a family of formulas of constant quantifier rank. For a (sub)formula α of the form ∃d(ϕ) or ∀d(ϕ), we say that d depends on all free variables in ϕ (at the position of α in a larger formula). For instance, in Exy ∧ ∃b(Exb ∧ ∃z(Eyz)) ∧ ∃b(Exx), the variable b depends on x and y at its first binding (∃b) and on x at the second binding (∃b).
Definition 3.
Let ϕ be in negation normal form. We call the leading existential quantifier of x ( ϕ ) strong if
1. 
some universal binding inside ϕ depends on x or
2. 
there is a subformula α ∧ β of ϕ such that both α and β contain x in literals that are not of the form x ≠ y for some variable y.
Dually, we call the leading universal quantifier of x ( ϕ ) strong if
1. 
some existential binding inside ϕ depends on x or
2. 
there is a subformula α ∨ β such that both α and β contain x in literals that are not of the form x = y for some variable y.
A quantifier leading a (sub)formula that is not strong is weak. The strong quantifier rank strong-qr ( ψ ) is the quantifier rank of ψ, where weak quantifiers are ignored; strong-bound ( ψ ) contains all variables of ψ that are bound by non-weak quantifiers.
We place a dot on the variables bound by weak quantifiers to make them easier to spot. For example, in ϕ = x y z ˙ ( R x x z ˙ z ˙ x y y z ˙ P x w ˙ E w ˙ y y ) the quantifier z ˙ is weak, but neither x (since x is used in two literals joined by a conjunction, namely in R x x z ˙ z ˙ and P x ) nor y (since w ˙ depends on y in w ˙ E w ˙ y y ) is weak. We have qr ( ϕ ) = 4 , but strong-qr ( ϕ ) = 2 , and bound ( ϕ ) = { x , y , z ˙ , w ˙ } , but strong-bound ( ϕ ) = { x , y } .
Admittedly, the definition of weakness is a bit technical, but note that there is a rather simple sufficient condition for an existentially-bound variable x to be weak: If it is not used in universal bindings and is used in only one literal that is not an inequality, then x is weak. This condition almost always suffices for identifying the weak variables, although there are of course exceptions like x ˙ ( P x ˙ ∨ Q x ˙ ) .
Our objective is now to simultaneously remove all weak quantifiers from a formula without increasing the strong quantifier rank by more than a constant factor or the number of strong variables by more than a constant. We first prove this only for existential weak quantifiers in the below theorem (by considering only formulas that do not have weak universal quantifiers). Once we have achieved this, a comparatively simple syntactic duality argument allows us to extend the claim to all formulas in Theorem 4.
Theorem 3.
Let τ be an arithmetic signature. Then for every τ-formula ϕ in negation normal form without weak universal quantifiers there is a τ-formula ϕ such that
1. 
ϕ ϕ (on finite structures),
2. 
qr ( ϕ ) = 3 · strong-qr ( ϕ ) + arity ( τ ) + O ( 1 ) , and
3. 
bound ( ϕ ) = strong-bound ( ϕ ) + arity ( τ ) + O ( 1 ) .
Before giving the detailed proof, we briefly sketch the overall idea: Using simple syntactic transformations, we can ensure that all weak quantifiers follow in blocks after universal quantifiers (and, by assumption, all universal quantifiers are strong). We can also ensure that inequality literals directly follow the blocks of weak quantifiers and are joined by conjunctions. If the inequality literals following a block happen to require that all weak variables from the block are different (that is, if for all pairs x ˙ i and x ˙ j of different weak variables there is an inequality x ˙ i x ˙ j ), then we can remove the weak quantifiers x ˙ i and at the (typically single) place where x ˙ i is used, we use a color C i instead. For instance, if x ˙ i is used in the literal x ˙ i = y , we replace the literal by C i y . If x ˙ i is used for instance in ¬ E x ˙ i y , we replace this by x ( C i x ¬ E x y ) . In this way, for each block we get an equivalent formula to which we can apply Theorem 2. A more complicated situation arises when the inequality literals in a block “do not require complete distinctness,” but this case can also be handled by considering all possible ways in which the inequalities can be satisfied in parallel. As a result, all weak quantifiers are removed and for each block a constant number of new quantifiers are introduced. Since each block follows a different universal quantifier, the new total quantifier rank is at most the strong quantifier rank times a constant factor; and the new number of variables is only a constant above the number of original strong variables.
Proof of Theorem 3.
Let ϕ be given. We first apply a number of simple syntactic transformations to move the weak quantifiers directly behind universal quantifiers and to move inequality literals directly behind blocks of weak quantifiers. Then we show how sets of inequalities can be “completed” if necessary. Finally, we inductively transform the formula in such a way that Theorem 2 can be applied repeatedly. As a running example of how the different syntactic transformations work, we use the (semantically not very sensible) formula
ϕ = a x ˙ ( E a x ˙ y ˙ ( y ˙ x ˙ ) ) c x ˙ y ˙ ( E x ˙ y ˙ z ( x ˙ y ˙ P z Q c a z ) ) .
In this formula, the (only) universally quantified variable, c, is strong since the existential binding z depends on it via Q c a z . The variable a is strong since c depends on it, once more via Q c a z . Finally, z is strong since it is used in two parts of a conjunction: P z Q c a z .
Preliminaries: It will be useful to require that all weak variables are different. Thus, as long as necessary, when a variable is bound by a weak quantifier and once more by another quantifier, replace the variable used by the weak quantifier by a fresh variable. Note that this may increase the number of distinct (weak) variables in the formula, but we will get rid of all of them later on anyway. From now on, we may assume that the weak variables are all distinct from one another and also from all other variables.
It will also be useful to assume that ϕ starts with a universal quantifier. If this is not the case, replace ϕ by the equivalent formula v ( ϕ ) where v is a fresh variable. This increases the quantifier rank by at most 1.
Finally, it will also be useful to assume that the formula has been “flattened” as in the proof of Theorem 1 (one can loosely think of this as bringing the formula “locally into disjunctive normal form”): We use the distributive laws of propositional logic to repeatedly replace subformulas of the form ( α ∨ β ) ∧ γ by ( α ∧ γ ) ∨ ( β ∧ γ ) and α ∧ ( β ∨ γ ) by ( α ∧ β ) ∨ ( α ∧ γ ) . Note that this transformation does not change which variables are weak.
For our running example, applying the described preprocessing yields:
ϕ v a x ˙ 1 ( E a x ˙ 1 x ˙ 2 ( x ˙ 2 x ˙ 1 ) ) c x ˙ 3 x ˙ 4 ( E x ˙ 3 x ˙ 4 z ( x ˙ 3 x ˙ 4 P z Q c a z ) ) .
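As an aside, the flattening step itself is easy to mechanize. The following Python sketch (our own, using a hypothetical tuple-based representation of formulas that is not taken from the paper) applies the two distributive rewrites exhaustively:

```python
def flatten(phi):
    """Exhaustively apply the two distributive rewrites used in the proof:
    (a | b) & c  ~>  (a & c) | (b & c)       and
    a & (b | c)  ~>  (a & b) | (a & c).
    Formulas are tuples: ('and', l, r), ('or', l, r), ('lit', s),
    ('exists', x, body), ('forall', x, body)."""
    op = phi[0]
    if op == 'lit':
        return phi
    if op in ('exists', 'forall'):
        return (op, phi[1], flatten(phi[2]))
    left, right = flatten(phi[1]), flatten(phi[2])
    if op == 'and':
        if left[0] == 'or':                       # (a | b) & c
            return flatten(('or', ('and', left[1], right),
                                  ('and', left[2], right)))
        if right[0] == 'or':                      # a & (b | c)
            return flatten(('or', ('and', left, right[1]),
                                  ('and', left, right[2])))
    return (op, left, right)

# ((P | Q) & R)  becomes  (P & R) | (Q & R)
print(flatten(('and', ('or', ('lit', 'Px'), ('lit', 'Qx')), ('lit', 'Rx'))))
```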
Syntactic transformations I: blocks of weak quantifiers. The first interesting transformation is the following: We wish to move weak quantifiers “as far up the syntax tree as possible.” To achieve this, we apply the following equivalences as long as possible by always replacing the left-hand side (and also commutatively equivalent formulas) by the right-hand side:
x ˙ ( α ) β x ˙ ( α β ) , x ˙ ( α ) β x ˙ ( α β ) , y x ˙ ( α ) x ˙ y ( α ) .
Note that β does not contain x ˙ since we made all weak variables distinct and, of course, by y we mean a strong quantifier.
Once the transformations have been applied exhaustively, all weak quantifiers will be directly preceded in ϕ by either a universal quantifier or another weak quantifier. This means that all weak quantifiers are now arranged in blocks inside ϕ , each block being preceded by a universal quantifier.
ϕ v x ˙ 1 x ˙ 2 a E a x ˙ 1 x ˙ 2 x ˙ 1 c x ˙ 3 x ˙ 4 ( E x ˙ 3 x ˙ 4 z ( x ˙ 3 x ˙ 4 P z Q c a z ) )
Syntactic transformations II: weak and strong literals. In order to apply color coding later on, it will be useful to have only three kinds of literals in ϕ :
  • Strong literals are literals that do not contain any weak variables.
  • Weak equalities are literals of the form x ˙ = y involving exactly one strong variable that is existentially bound inside the weak variable’s scope:
    x ˙ ( y ( x ˙ = y ) ) .
  • Weak inequalities are literals of the form x ˙ y ˙ for two weak variables.
Let us call all other kinds of literals bad. This includes literals like E x ˙ x ˙ or E z y ˙ that contain a relation symbol and some weak variables, but also inequalities x ˙ y involving a weak and a strong variable, equalities x ˙ = y ˙ involving two weak variables, or an equality literal like the one in y x ˙ ( x ˙ = y ) . Finally, literals involving the successor function and weak variables are also bad.
In order to get rid of the bad literals, we will replace them by equivalent formulas that do not contain any bad literals. The idea is that we bind the variable or term that we wish to get rid of using a new existential quantifier. In order to avoid introducing too many new variables, for all of the following transformations we use the set of fresh variables v 1 , v 2 , and so on, where we may need more than one of these variables per literal, but will need no more than O ( arity ( τ ) ) (recall that arity ( τ ) is the maximum arity of relation symbols in τ ).
Let λ be a bad literal, that is, let it contain a weak variable x ˙ , but neither be a weak equality nor a weak inequality. Replace λ by v i ( v i = x ˙ λ [ x ˙ v i ] ) . Here, λ [ t 1 t 2 ] is our notation for the substitution of the term t 1 by t 2 in λ . The number i is chosen minimally so that λ does not already contain v i . Since this transformation reduces the number of weak variables in λ and does not introduce a bad literal ( v i = x ˙ is a weak equality and hence “good”), sooner or later we will have gotten rid of all bad literals. For each literal we use at most arity ( τ ) new variables from the v i (more precisely, at most max { arity ( τ ) , 2 } in case τ contains only unary or no relation symbols and λ is something like SUCC ( x ˙ ) = SUCC ( y ˙ ) , causing two replacements). Overall, we get that ϕ is equivalent to a formula without any bad literals in which we use at most arity ( τ ) + 2 = arity ( τ ) + O ( 1 ) additional variables and whose quantifier rank is larger than that of ϕ by at most arity ( τ ) + O ( 1 ) . Note that the transformation ensures that weak variables stay weak. Applied to our example formula, we get:
v x ˙ 1 x ˙ 2 a ( v 1 ( v 1 = x ˙ 1 E a v 1 ) x ˙ 2 x ˙ 1 c x ˙ 3 x ˙ 4 ( v 1 ( v 1 = x ˙ 3 v 2 ( v 2 = x ˙ 4 E v 1 v 2 ) ) z ( x ˙ 3 x ˙ 4 P z Q c a z ) ) )
Syntactic transformations III: accumulating weak inequalities. We now wish to move all weak inequalities to the “vicinity” of the corresponding block of weak quantifiers. More precisely, just as we did earlier, we apply the following equivalences (interpreted once more as rules that are applied from left to right):
( x ˙ y ˙ α ) β ( x ˙ y ˙ β ) ( α β ) ,
x ( α β ) x ( α ) x ( β ) ,
z ( x ˙ y ˙ α ) x ˙ y ˙ z ( α ) .
Note that these rules do not change which variables are weak. When these rules can no longer be applied, the weak inequalities are “next” to their quantifier block, that is, each subformula starting with weak quantifiers has the form
x ˙ i 1 x ˙ i k i j λ i j α i
where the α i contain no weak inequalities while all λ i j are weak inequalities.
For our example formula, we get:
ϕ v x ˙ 1 x ˙ 2 ( x ˙ 2 x ˙ 1 a v 1 ( v 1 = x ˙ 1 E a v 1 ) c ( x ˙ 3 x ˙ 4 ( v 1 ( v 1 = x ˙ 3 v 2 ( v 2 = x ˙ 4 E v 1 v 2 ) ) ) c ( x ˙ 3 x ˙ 4 ( ( x ˙ 3 x ˙ 4 z ( P z Q c a z ) ) ) ) .
Finally, we now swap each block of weak quantifiers with the following disjunction, that is, we apply the following equivalence from left to right:
x ˙ i 1 x ˙ i k i ψ i i x ˙ i 1 x ˙ i k ψ i .
If necessary, we rename weak variables to ensure once more that they are unique. For our example, the different transformations yield:
ϕ v x ˙ 1 x ˙ 2 ( x ˙ 2 x ˙ 1 a v 1 ( v 1 = x ˙ 1 E a v 1 ) c ( x ˙ 3 x ˙ 4 ( v 1 ( v 1 = x ˙ 3 v 2 ( v 2 = x ˙ 4 E v 1 v 2 ) ) ) c ( x ˙ 5 x ˙ 6 ( x ˙ 5 x ˙ 6 z ( P z Q c a z ) ) ) ) .
Let us spell out the different ψ i , λ i j , and α i contained in the above formula: First, there is one block of weak variables ( x ˙ 1 x ˙ 2 ) following v at the beginning. There is only a single formula ψ 1 for this block, which equals ( j = 1 1 λ 1 j ) α 1 with λ 1 1 = ( x ˙ 2 x ˙ 1 ) and α 1 = a v 1 ( v 1 = x ˙ 1 E a v 1 ) c ( ) . Second, there are two blocks of weak variables ( x ˙ 3 x ˙ 4 and x ˙ 5 x ˙ 6 ) following c , which are followed by (new) formulas ψ 1 and ψ 2 . The first is of the form ψ 1 = ( j = 1 0 λ 1 j ) α 1 and the second of the form ψ 2 = ( j = 1 1 λ 2 j ) α 2 . There are no λ 1 j and we have α 1 = v 1 ( v 1 = x ˙ 3 v 2 ( v 2 = x ˙ 4 E v 1 v 2 ) ) . We have λ 2 1 = ( x ˙ 5 x ˙ 6 ) and we have α 2 = z ( P z Q c a z ) .
We make the following observation at this point: Inside each ψ i , each of the variables x ˙ i 1 to x ˙ i k is used at most once outside of weak inequalities. The reason for this is that rules (7) and (8) ensure that there are no disjunctions inside the ψ i that involve a weak variable x ˙ . Thus, the requirement “in any subformula of ψ i of the form α β only α or β —but not both—may use x ˙ in a literal that is not a weak inequality” from the definition of weak variables just boils down to “ x ˙ may only be used once in ψ i in a literal that is not a weak inequality.” Since weak variables cannot be present in strong literals, this means, in particular, that x ˙ is now only used in (at most) a single weak equality x ˙ = y and otherwise only in weak inequalities.
Syntactic transformations IV: completing weak inequalities. The last step before we can apply the color coding method is to “complete” the conjunctions of weak inequalities. After all the previous transformations have been applied, each block of weak quantifiers now has the form x ˙ 1 x ˙ k i λ i α where the λ i are all weak inequalities (between some or all pairs of x ˙ 1 to x ˙ k ) and α contains no weak inequalities involving the x ˙ i (but may contain one weak equality for each x ˙ i ). Of course, the weak variables need not be x ˙ 1 to x ˙ k , but let us assume this to keep the notation simple.
The formula i λ i expresses that some of the variables x ˙ i must be different. If the formula encompasses all possible weak inequalities between distinct x ˙ i and x ˙ j , then the formula would require that all x ˙ i must be distinct—exactly the situation in which color coding can be applied. However, some weak inequalities may be “missing” such as in the formula x ˙ 1 x ˙ 2 x ˙ 2 x ˙ 3 x ˙ 1 x ˙ 3 x ˙ 3 x ˙ 4 : This formula requires that x ˙ 1 to x ˙ 3 must be distinct and that x ˙ 4 must be different from x ˙ 3 —but it would be allowed that x ˙ 4 equals x ˙ 1 or x ˙ 2 . Indeed, it might be the case that the only way to make α true is to make x ˙ 1 equal to x ˙ 4 . This leads to a problem in the context of color coding: We want to color x ˙ 1 , x ˙ 2 , and x ˙ 3 differently, using, say, red, green, and blue. In order to ensure x ˙ 3 x ˙ 4 , we must give x ˙ 4 a color different from blue. However, it would be wrong to color it red or green or using a new color like yellow since each would rule out x ˙ 4 being equal or different from either x ˙ 1 or x ˙ 2 —and each possibility must be considered to ensure that we miss no assignment that makes α true.
The trick at this point is to reduce the problem of missing weak inequalities to the situation where all weak inequalities are present by using a large disjunction over all possible ways to unify weak variables without violating the weak inequalities.
In detail, let us call a partition P 1 ˙ ˙ P l of the set { x ˙ 1 , , x ˙ k } allowed by the λ i if the following holds: For each P j and any two different x ˙ p , x ˙ q P j none of the λ i is the inequality x ˙ p x ˙ q . In other words, the λ i do not forbid that the elements of any P j are identical. Clearly, the partition with P j = { x ˙ j } is always allowed by any λ i , but in the earlier example, the partition P 1 = { x ˙ 1 , x ˙ 4 } , P 2 = { x ˙ 2 } , P 3 = { x ˙ 3 } would be allowed, while P 1 = { x ˙ 1 } , P 2 = { x ˙ 2 } , P 3 = { x ˙ 3 , x ˙ 4 } would not be.
We introduce the following notation: For a partition P 1 ˙ ˙ P l = { x ˙ 1 , , x ˙ k } we will write distinct ( P 1 , , P l ) for 1 i < j l , x ˙ p P i , x ˙ q P j x ˙ p x ˙ q . We claim the following:
Claim 1.
For any weak inequalities λ i we have
i λ i P 1 ˙ ˙ P l is allowed by the λ i distinct ( P 1 , , P l ) .
Proof. 
For the implication from left to right, assume that A i λ i ( a 1 , , a k ) for some (not necessarily distinct) a 1 , , a k | A | . The elements induce a natural partition P 1 ˙ ˙ P l = { x ˙ 1 , , x ˙ k } where two variables x ˙ p and x ˙ q are in the same set P j if and only if a p = a q . Then, clearly, for all i and j with 1 i < j l and any x ˙ p P i and x ˙ q P j we have a p ≠ a q . Thus, all inequalities in distinct ( P 1 , , P l ) are satisfied and, hence, the right-hand side holds.
For the other direction, suppose that A is a model of the right hand side for some a 1 to a k . Then there must be a partition P 1 ˙ ˙ P l that is allowed by the λ i such that A is also a model of distinct ( P 1 , , P l ) . Furthermore, each λ i is actually present in this last formula: If x ˙ p x ˙ q is one of the λ i , then by the very definition of “ P 1 ˙ ˙ P l is allowed for the λ i ” we must have that x ˙ p and x ˙ q lie in different P i and P j —which, in turn, implies that x ˙ p x ˙ q is present in distinct ( P 1 , , P l ) .  □
Applied to the example x ˙ 1 x ˙ 2 x ˙ 2 x ˙ 3 x ˙ 1 x ˙ 3 x ˙ 3 x ˙ 4 from above, the claim states the following: Since there are three partitions that are allowed by these literals (namely the one in which each variable gets its own equivalence class, the one where x ˙ 1 and x ˙ 4 are put into one class, and the one where x ˙ 2 and x ˙ 4 are put into one class) we have:
x ˙ 1 x ˙ 2 x ˙ 2 x ˙ 3 x ˙ 1 x ˙ 3 x ˙ 3 x ˙ 4 distinct ( { x ˙ 1 } , { x ˙ 2 } , { x ˙ 3 } , { x ˙ 4 } ) distinct ( { x ˙ 1 , x ˙ 4 } , { x ˙ 2 } , { x ˙ 3 } ) distinct ( { x ˙ 1 } , { x ˙ 2 , x ˙ 4 } , { x ˙ 3 } ) .
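The notion of an allowed partition is also easy to test mechanically. The following Python sketch (ours, purely illustrative; the variable names mirror the example) enumerates all partitions of the four variables and keeps exactly those allowed by the given inequalities:

```python
def partitions(items):
    """All partitions of a list, as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        # put `first` into an existing block ...
        for i in range(len(p)):
            yield p[:i] + [p[i] + [first]] + p[i + 1:]
        # ... or into a new block of its own
        yield p + [[first]]

def allowed(partition, inequalities):
    """A partition is allowed if no inequality relates two variables
    that were put into the same block."""
    return all(not any(a in block and b in block for block in partition)
               for (a, b) in inequalities)

variables = ['x1', 'x2', 'x3', 'x4']
inequalities = [('x1', 'x2'), ('x2', 'x3'), ('x1', 'x3'), ('x3', 'x4')]

for p in partitions(variables):
    if allowed(p, inequalities):
        print(sorted(map(sorted, p)))
# Exactly the three partitions from the example are printed (in some order):
# {x1}{x2}{x3}{x4},  {x1,x4}{x2}{x3},  {x2,x4}{x1}{x3}
```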
The claim has the following corollary:
Corollary 2.
For any weak inequalities λ i involving only variables from { x ˙ 1 , , x ˙ k } we have
x ˙ 1 x ˙ k ( i λ i α ) P 1 ˙ ˙ P l is allowed by the λ i x ˙ 1 x ˙ k ( distinct ( P 1 , , P l ) α ) .
As in the previous transformations we now apply the equivalence from the corollary from left to right. If we create copies of α during this process, we rename the weak variables in these copies to ensure, once more, that each weak variable is unique. In our example formula ϕ , there is only one place where the transformation changes anything: The middle weak quantifier block (the x ˙ 3 x ˙ 4 block). For the first and the last block, the literals x ˙ 1 x ˙ 2 and x ˙ 5 x ˙ 6 , respectively, already rule out all partitions except for the trivial one. For the middle block, however, there are no weak inequalities at all and, hence, there are now two allowed partitions: First, P 1 = { x ˙ 3 } , P 2 = { x ˙ 4 } , but also P 1 = { x ˙ 3 , x ˙ 4 } . This means that we get a copy of the middle block where x ˙ 3 and x ˙ 4 are required to be different—and we renumber them to x ˙ 7 and x ˙ 8 :
ϕ v x ˙ 1 x ˙ 2 ( x ˙ 2 x ˙ 1 a v 1 ( v 1 = x ˙ 1 E a v 1 ) c ( x ˙ 3 x ˙ 4 ( x ˙ 3 x ˙ 4 v 1 ( v 1 = x ˙ 3 v 2 ( v 2 = x ˙ 4 E v 1 v 2 ) ) ) c ( x ˙ 7 x ˙ 8 ( x ˙ 7 x ˙ 8 v 1 ( v 1 = x ˙ 7 v 2 ( v 2 = x ˙ 8 E v 1 v 2 ) ) ) c ( x ˙ 5 x ˙ 6 ( x ˙ 5 x ˙ 6 z ( P z Q c a z ) ) ) ) .
Of course, for our particular example, the introduction of x ˙ 7 and x ˙ 8 is superfluous insofar as it is not necessary to introduce the special case “force x ˙ 3 and x ˙ 4 to be different” in addition to the already present case “do not care whether x ˙ 3 and x ˙ 4 are different or not.” It is only with more complex weak inequalities like x ˙ 1 x ˙ 2 x ˙ 2 x ˙ 3 x ˙ 1 x ˙ 3 x ˙ 3 x ˙ 4 that the syntactic transformation becomes really necessary.
Applying color coding: we are now ready to apply the color coding technique; more precisely, to repeatedly apply Theorem 2 to the formula ϕ . Before we do so, let us summarize the structure of ϕ :
  • All weak quantifiers come in blocks, and each such block either directly follows a universal quantifier or follows a disjunction after a universal quantifier. In particular, on any root-to-leaf path in the syntax tree of ϕ between any two blocks of weak quantifiers there is at least one universal quantifier.
  • All blocks of weak quantifiers have the form
    x ˙ i 1 x ˙ i k distinct ( P 1 , , P l ) α
    for some partition P 1 ˙ ˙ P l = { x i 1 , , x i k } and for some α in which the only literals that contain any x ˙ i j are of the form x ˙ i j = y for a strong variable y that is bound by an existential quantifier inside α . Furthermore, none of these weak equality literals is in the scope of a universal quantifier inside α . (Of course, all variables in ϕ are in the scope of a universal quantifier since we added one at the start, but the point is that none of the x ˙ i is in the scope of a universal quantifier that is inside α .)
In ϕ there may be several blocks of weak quantifiers, but at least one of them (let us call it β ) must have the form (10) where α contains no weak variables other than x ˙ i 1 to x ˙ i k . (For instance, in our example formula, this is the case for the blocks starting with x ˙ 3 x ˙ 4 , for x ˙ 7 x ˙ 8 , and for x ˙ 5 x ˙ 6 , but not for x ˙ 1 x ˙ 2 since, here, the corresponding α contains all of the rest of the formula.) In our example, we could choose
β = x ˙ 7 x ˙ 8 ( x ˙ 7 x ˙ 8 v 1 ( v 1 = x ˙ 7 v 2 ( v 2 = x ˙ 8 E v 1 v 2 ) ) )
and would then have
α = v 1 ( v 1 = x ˙ 7 v 2 ( v 2 = x ˙ 8 E v 1 v 2 ) ) .
We build a new formula α from α as follows: We replace each occurrence of a weak equality x ˙ i = y in α for some weak variable x ˙ i P j and some strong variable y by the formula C j y . In our example, where P 1 = { x ˙ 7 } and P 2 = { x ˙ 8 } we would get
α = v 1 ( C 1 v 1 v 2 ( C 2 v 2 E v 1 v 2 ) ) .
An important observation at this point is that α contains no weak variables any longer, while no additional variables have been added. In particular, the quantifier rank of α equals the strong quantifier rank of α and the number of variables in α equals the number of strong variables in α .
Note that the literals C j y and also x ˙ i = y are positive since the formulas are in negation normal form. Hence, they have the following monotonicity property: If some structure together with some assignment to the free variables is a model of α or α , but a literal x ˙ i = y or C j y is false, the structure will still be a model if we replace the literal by a tautology.
For simplicity, in the following, we assume that x ˙ i 1 to x ˙ i k are just x ˙ 1 to x ˙ k . Additionally, for simplicity we assume that β contains no free variables when, in fact, it can. However, these variables cannot be any of the variables y for which we make changes and, thus, it keeps the notation simpler to ignore the additional free variables here. The following statement simply holds for all assignments to them:
Claim 2.
Let P 1 ˙ ˙ P l = { x 1 , , x k } . Then for each structure A , the following are equivalent:
1. 
A x ˙ 1 x ˙ k distinct ( P 1 , , P l ) α .
2. 
There are elements a 1 , , a k | A | with A α ( a 1 , , a k ) and such that a p a q whenever x ˙ p P i , x ˙ q P j , and i j .
3. 
There is an l-coloring B of A such that B α .
Proof. 
For the proof of the claim, it will be useful to apply some syntactic transformations to α and α . Just like the many transformations we encountered earlier, these transformations yield equivalent formulas and, thus, it suffices to prove the claim for them (since the claim is about the models of α and α ). However, these transformations are needed only to prove the claim, they are not part of the “chain of transformations” that is applied to the original formula (they increase the number of strong variables far too much).
In α there will be some occurrences of literals of the form x ˙ i = y . For each such occurrence, there will be exactly one subformula in α of the form y ( γ ) where γ contains x ˙ i = y . We now apply two syntactic transformations: First, we replace y in y ( γ ) by a fresh new variable y i (that is, we replace all free occurrences of y inside γ by y i and we replace the leading y by y i ). Second, we “move all y i to the front” by simply deleting all occurrences of y i from α , resulting in a formula δ , and then adding the block y 1 y k before δ . As an example, if we apply these transformations to α = v 1 ( x ˙ 7 = v 1 v 2 ( x ˙ 8 = v 2 E v 1 v 2 ) ) , the first transformation yields
y 7 ( x ˙ 7 = y 7 y 8 ( x ˙ 8 = y 8 E y 7 y 8 ) )
and the second one yields the new
α = y 7 y 8 ( x ˙ 7 = y 7 x ˙ 8 = y 8 E y 7 y 8 ) .
In α , we apply exactly the same transformations, only now the literals we look for are not x ˙ i = y , but C j y . We still apply the same renaming of y (namely to y i and not to y j ) as in α and apply the same movement of the quantifiers. This results in a new formula α of the form y 1 y k ( δ ) . For α = v 1 ( C 1 v 1 v 2 ( C 2 v 2 E v 1 v 2 ) ) we get the new
α = y 7 y 8 ( C 1 y 7 C 2 y 8 E y 7 y 8 )
and δ is now the inner part without the quantifiers.
Let us now prove the claim. The first two items are trivially equivalent by the definition of distinct ( P 1 , , P l ) .
The second statement implies the third: To show this, for j { 1 , , l } we first set C j B = { a i x ˙ i P j } and then add | A | \ { a 1 , , a k } to, say, C 1 B in order to create a correct partition. This setting clearly ensures that whenever x ˙ i = y holds in α , we also have C j y holding in α . Since α differs from α only on the literals of the form x ˙ i = y (which got replaced by C j y ), since we just saw that when x ˙ i = y holds in α , the replacement C j y holds in α , and since α has the monotonicity property (by which it does not matter if more literals of the form C j y hold in α than did in α ), we get the third statement.
The third statement implies the second: Let an l-coloring B of A be given with B α . Since α = y 1 y k ( δ ) , there must now be elements b 1 , , b k | A | such that B δ ( b 1 , , b k ) . We define new elements a i | A | as follows: If b i C i B , let a i = b i . Otherwise, let a i be an arbitrary element of C i B . We show in the following that the a i constructed in this way can be used in the second statement, that is, we claim that A α ( a 1 , , a k ) and the a i have the distinctness property from the claim.
First, recall that α is of the form y 1 y k ( δ ) (because of the syntactic transformations we applied for the purposes of the proof of this claim) and δ contains literals of the form x ˙ i = y i , where the x ˙ i are the free variables for which the values a i are now plugged in. We claim that A δ ( a 1 , , a k , b 1 , , b k ) , that is, we claim that if we plug in a 1 to a k for the free variables x ˙ 1 to x ˙ k in δ and we plug in b 1 to b k for the (additional) free variables y 1 to y k in δ , then δ holds in A . To see this, recall that B δ ( b 1 , , b k ) holds and δ is identical to δ except that x ˙ i = y i got replaced by C j y i . In particular, by construction of the a i , whenever C j y i holds in B with y i being set to b i (that is, whenever b i C j B ), we clearly also have that x ˙ i = y i holds in A with x ˙ i being set to a i and y i being set to b i (since we let a i = b i whenever b i C j B ). Then, by the monotonicity property, we know that A δ ( a 1 , , a k , b 1 , , b k ) will hold.
Second, we argue that the distinctness property holds, that is, a p a q whenever x ˙ p P i , x ˙ q P j , and i j . However, our construction ensured that we always have a r C s B for the s with x ˙ r P s . In particular, x ˙ p P i and x ˙ q P j for i j implies that a p and a q lie in two different color classes and are, hence, distinct.  □
By the claim, A β is equivalent to there being an l-coloring B of A such that B α ( β was a block in our main formula of the form x ˙ 1 x ˙ k distinct ( P 1 , , P l ) α where there are no weak variables other than the x i ). We now apply Theorem 2 to α (as ϕ ), which yields a new formula α (called ϕ in the theorem) with the property A α A β . The interesting thing about α is, of course, that it has the same quantifier rank and the same number of variables as α plus some constant. Most importantly, we already pointed out earlier that α does not contain any weak variables and, hence, the quantifier rank of α is the same as the strong quantifier rank of β and the number of variables in α is the same as the number of strong variables in β —plus some constant.
Applying this transformation to our running example ϕ and choosing as β once more the subformula starting with x ˙ 7 x ˙ 8 , we would get the following formula (ignoring the technical issues of how, exactly, the hashing is implemented, see the proof of Theorem 2 for the details):
v x ˙ 1 x ˙ 2 ( x ˙ 2 x ˙ 1 a v 1 ( v 1 = x ˙ 1 E a v 1 ) c ( x ˙ 3 x ˙ 4 ( x ˙ 3 x ˙ 4 v 1 ( v 1 = x ˙ 3 v 2 ( v 2 = x ˙ 4 E v 1 v 2 ) ) ) c ( g p q v 1 ( HASH g ( v 1 , p , q ) = 1 v 2 ( HASH g ( v 2 , p , q ) = 2 E v 1 v 2 ) ) c ( x ˙ 5 x ˙ 6 ( x ˙ 5 x ˙ 6 z ( P z Q c a z ) ) ) ) .
We can now repeat the transformation to replace each block β in this way. Observe that in each transformation we can reuse the variables (in particular, p and q) introduced by the color coding:
v g p q ( a v 1 ( HASH g ( v 1 , p , q ) = 1 E a v 1 ) c ( g p q v 1 ( HASH g ( v 1 , p , q ) = 1 v 2 ( HASH g ( v 2 , p , q ) = 1 E v 1 v 2 ) ) c ( g p q v 1 ( HASH g ( v 1 , p , q ) = 1 v 2 ( HASH g ( v 2 , p , q ) = 2 E v 1 v 2 ) ) c ( g p q z ( P z Q c a z ) ) ) .
In conclusion, we see that we can transform the original formula ϕ to a new formula ϕ with the following properties:
  • We added new variables and quantifiers to ϕ compared to ϕ during the first transformation steps, but the number we added depended only on the signature τ (it was the maximum arity of relations in τ plus possibly 2).
  • We then removed all weak variables from ϕ in ϕ .
  • We added some variables to ϕ each time we applied Theorem 2 to a block β . The number of variables we added is constant since Theorem 2 adds only a constant number of variables and since we can always reuse the same set of variables each time the theorem is applied.
  • We also added some quantifiers to ϕ each time we applied Theorem 2, which increases the quantifier rank of ϕ compared to ϕ by more than a constant. However, the essential quantifiers we add are p q and these are always added directly after a universal quantifier or directly after a disjunction after a universal quantifier. Since the strong quantifier rank of ϕ is at least the quantifier rank of ϕ where we only consider the universal quantifiers (the “universal quantifier rank”), the two added nested quantifiers per universal quantifiers can add to the quantifier rank of ϕ at most twice the universal quantifier rank.
Putting it all together, we see that ϕ ′ is equivalent to ϕ , that ϕ ′ has a quantifier rank that is at most 3 · strong-qr ( ϕ ) + arity ( τ ) + O ( 1 ) , and that ϕ ′ contains at most strong-bound ( ϕ ) + arity ( τ ) + O ( 1 ) variables.
This concludes the proof of Theorem 3. □
Since qr ( ϕ ) = qr ( ¬ ϕ ) , a trivial duality argument shows that the above theorem also holds for formulas ϕ without existential weak quantifiers (just apply the theorem to the negation normal form of ¬ ϕ ). The more interesting observation is that we can still remove all existential and universal weak quantifiers from a formula when both are present in an arbitrarily complex intertwined form:
Theorem 4.
Let τ be an arithmetic signature. Then for every τ-formula ϕ in negation normal form there is a τ-formula ϕ such that
1. 
ϕ ϕ (on finite structures),
2. 
qr ( ϕ ) = 3 · strong-qr ( ϕ ) + arity ( τ ) + O ( 1 ) , and
3. 
bound ( ϕ ) = strong-bound ( ϕ ) + arity ( τ ) + O ( 1 ) .
Proof. 
Given a formula ϕ that contains both existential and universal weak quantifiers, we apply a syntactic preprocessing that “separates these quantifiers and moves them before their dual strong quantifiers.” The key observation that makes these transformations possible in the mixed case is that weak existential and weak universal quantifiers commute: For instance, x ˙ ( α y ˙ ( β ) ) y ˙ ( β x ˙ ( α ) ) since x ˙ and y ˙ cannot depend on one another by the core property of weak quantifiers ( α cannot contain y ˙ and β cannot contain x ˙ ). Once we have sufficiently separated the quantifiers, we can repeatedly apply Theorem 3 or its dual to each block individually.
As a running example, let us use the following formula ϕ :
x ˙ a ( E x ˙ a b ( E b a E a b ) y ˙ ( E a y ˙ z ˙ ( E a z ˙ ) ) w ˙ ( E w ˙ a ) ) ,
which mixes existential and universal weak variables rather freely.
Similar to the proof of Theorem 3, for technical reasons we first add the superfluous quantifiers v v for a fresh strong variable v at the beginning of the formula.
Our main objective is to get rid of alternations of weak universal and weak existential quantifiers without a strong quantifier in between. In the example, this is the case, for instance, for x ˙ ( y ˙ ( z ˙ ) ) . We get rid of these situations by pushing all quantifiers (weak or strong) down as far as possible (later on, when we apply Theorem 3, we will push them up once more). Let us write x ˚ to indicate that x may be a weak or a strong variable.
If β does not contain x ˚ as a free variable, we can apply the below four equivalences from left to right (and, of course, commutatively equivalent ones). Note that since the definition of weak variables forbids that a universally bound variable depends on an existential weak variable (and vice versa), the condition “ β does not contain x ˚ ” is true, in particular, whenever x ˚ is weak and β starts with a universal quantifier in the first two lines or with an existential quantifier in the last two lines.
x ˚ ( α β ) x ˚ ( α ) β ,
x ˚ ( α β ) x ˚ ( α ) β ,
x ˚ ( α β ) x ˚ ( α ) β ,
x ˚ ( α β ) x ˚ ( α ) β .
Furthermore, we also apply the following general equivalences as long as possible:
x ˚ ( α ( β γ ) ) x ˚ ( ( α β ) ( α γ ) ) ,
x ˚ ( α β ) x ˚ ( α ) x ˚ ( β ) ,
x ˚ ( α ( β γ ) ) x ˚ ( ( α β ) ( α γ ) ) ,
x ˚ ( α β ) x ˚ ( α ) x ˚ ( β ) .
Applied to our example, we would get:
v v x ˙ a ( E x ˙ a b ( E b a E a b ) y ˙ ( E a y ˙ ) z ˙ ( E a z ˙ ) w ˙ ( E w ˙ a ) ) .
As a final transformation, we “sort” the operands of disjunctions and conjunctions: We replace a subformula α β in ϕ by β α and we replace α β by β α , whenever β contains no weak universal variables, but α does, and also whenever α contains no weak existential variables, but β does. For our example, this means that we get the following:
v v x ˙ a ( z ˙ ( E a z ˙ ) w ˙ ( E w ˙ a ) E x ˙ a b ( E b a E a b ) y ˙ ( E a y ˙ ) ) .
The purpose of the transformations was to achieve the situation described in the next claim:
Claim 3.
Assume that the above transformations have been applied exhaustively to ϕ and assume ϕ contains both existential and universal weak variables. Consider the maximal subformulas α i of ϕ that contain no weak universal variables and the maximal subformulas β i of ϕ that contain no weak existential variables. Then for some i and some γ one of the following formulas is a subformula of ϕ: x ( α i γ ) or x ( γ β i ) .
In our example, there is only a single maximal α 1 , namely z ˙ ( E a z ˙ ) w ˙ ( E w ˙ a ) E x ˙ a b ( E b a E a b ) , and a single maximal β 1 , namely b ( E b a E a b ) y ˙ ( E a y ˙ ) . The claim holds since a ( γ β 1 ) is a subformula for γ = z ˙ ( E a z ˙ ) w ˙ ( E w ˙ a ) E x ˙ a .
Proof. 
Consider any α among the α i . Since α is maximal but α is not all of ϕ , there must be a β among the β i such that either α β or α β is also a subformula of ϕ . Let us call it δ and consider the minimal subformula η of ϕ that contains δ and starts with a quantifier.
This quantifier cannot be a weak quantifier: Suppose it is ∃ x ˙ (the case ∀ x ˙ is perfectly symmetric). Since we can no longer apply one of the equivalences (11)–(18), the formula η must have the form x ˙ i ψ i (where the ψ i are not of the form ρ ∧ σ ) such that all ψ i contain x ˙ (otherwise (11) would be applicable) and such that none of the ψ i is of the form ρ ∨ σ (otherwise (15) would be applicable). This implies that all ψ i start with a quantifier. Since η was minimal to contain δ , we conclude that one ψ i must be α and another one must be β . Then, β contains a weak existential variable, namely x ˙ , which we ruled out.
Since η does not start with a weak quantifier, it must start with a strong quantifier. If it is x , by the same argument as before we get that η must have the form x i ψ i with some ψ i equal to α and some other ψ j equal to β . Then, we have found the desired subformula of ϕ if we set γ to i j ψ i . If the strong quantifier is x , a perfectly symmetric argument shows that η must have the form x i ψ i with some ψ j = α , which implies the claim for γ = i j ψ i .  □
The importance of the claim for our argument is the following: As long as ϕ still contains both existential and universal weak variables, we still find a subformula α or β that contains only existential or universal weak variables such that if we go up from this subformula in the syntax tree of ϕ , the next quantifier we meet is a strong quantifier. This means that we can now apply Theorem 3 or its dual to this subformula, getting an equivalent new formula α or β whose quantifier rank equals the strong quantifier rank of α or β , respectively, times a constant factor. Furthermore, similar to the argument at the end of the proof of Theorem 3 where we processed one β after another, each time a replacement takes place, there is a strong quantifier that contributes to the strong quantifier rank of ϕ .
This concludes the proof of Theorem 4. □

4. Syntactic Proofs and Natural Problems

The special allure of descriptive complexity theory lies in the possibility of proving that a problem has a certain complexity just by describing the problem in the right way. The “right way” is, of course, a logical description that has a certain syntax (such as having a bounded strong quantifier rank). In the following we present such descriptions for several natural problems and thereby bound their complexity “in a purely syntactic way.” First, however, we present “syntactic tools” for describing problems more easily. These tools are built on top of the notion of strong and weak quantifiers.

4.1. Syntactic Tools: New Operators

It is common in mathematical logic to distinguish between the core syntax and additional “shorthands” built on top of the core syntax. For instance, while ¬ and ∨ are typically considered to be part of the core syntax of propositional logic, the notation a b is often seen as a shorthand for ¬ a b . In a similar way, we now consider the notions of weak variables and quantifiers introduced in the previous section as our “core syntax” and build a number of useful shorthands on top of them. Of course, just as a b has an intended semantic meaning that the expansion ¬ a b of the shorthand must reflect, the shorthands we introduce also have an intended semantic meaning, which we specify.
As a first example, consider the common notation k x ( ϕ ( x ) ) , whose intended semantics is “there are at least k different elements in the universe that make ϕ ( x ) true.” This notation is often considered as a shorthand for x 1 x k i j x i x j i = 1 k ϕ ( x i ) , but we will consider it a shorthand for the equivalent, but slightly more complicated formula x ˙ 1 x ˙ k i j x ˙ i x ˙ j i = 1 k x ( x = x ˙ i ϕ ( x ) ) . The difference is, of course, that the strong quantifier rank is now much lower and, hence, by Theorem 3 we can replace any occurrence of k x ( ϕ ( x ) ) by a formula of quantifier rank qr ( ϕ ) + O ( 1 ) . In all of the following notations, k and s are arbitrary values. The indicated strong quantifier rank for the notation is that of its expansion. The semantics describe which structures A are models of the formula.
Notation 
( k x ( ϕ ( x ) ) )                    Strong-qr: 1 + strong-qr ( ϕ )
Semantics 
There are k distinct a 1 , , a k | A | with A ϕ ( a i ) for all i.
Expansion 
x ˙ 1 x ˙ k i j x ˙ i x ˙ j i = 1 k x ( x = x ˙ i ϕ ( x ) )
Notation 
( k x ( ϕ ( x ) ) )                    Strong-qr: 1 + strong-qr ( ϕ )
Semantics 
There are at most k distinct a 1 , , a k | A | with A ϕ ( a i ) for all i.
Expansion 
x ˙ 1 x ˙ k + 1 i j x ˙ i = x ˙ j i = 1 k + 1 x ( x x ˙ i ¬ ϕ ( x ) ) ( ¬ k + 1 x ( ϕ ( x ) ) )
Notation 
( = k x ( ϕ ( x ) ) )                    Strong-qr: 1 + strong-qr ( ϕ )
Semantics 
There are exactly k distinct a 1 , , a k | A | with A ϕ ( a i ) for all i.
Expansion 
k x ( ϕ ( x ) ) k x ( ϕ ( x ) )
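The intended semantics of these counting quantifiers, and the fact that the expansion via weak variables expresses the same thing, can be checked by brute force on small structures. The following Python sketch (our own illustration; all names are ours) does this for the “at least k” and “exactly k” notations:

```python
from itertools import combinations

def at_least(k, universe, phi):
    """Intended semantics of the counting quantifier: at least k distinct
    elements a with phi(a)."""
    return sum(1 for a in universe if phi(a)) >= k

def at_least_expansion(k, universe, phi):
    """Brute-force reading of the expansion: k pairwise distinct
    witnesses, each equal to some x with phi(x)."""
    return any(all(phi(a) for a in tup) for tup in combinations(universe, k))

def exactly(k, universe, phi):
    """'Exactly k' as the conjunction of 'at least k' and 'at most k'
    (the latter being the negation of 'at least k+1', as in the
    expansion above)."""
    return at_least(k, universe, phi) and not at_least(k + 1, universe, phi)

universe = range(10)
is_even = lambda a: a % 2 == 0
assert at_least(3, universe, is_even) == at_least_expansion(3, universe, is_even)
print(exactly(5, universe, is_even))   # True: exactly five even numbers below 10
```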
The next notation is useful for “binding” a set of vertices to weak or strong variables. The binding contains the allowed “single use” of the weak variables in the sense of Definition 3, but they can still be used in inequality literals. Let x ˚ indicate that x may be weak or strong.
Notation 
( { x ˚ 1 , , x ˚ k } = { x ϕ ( x ) } )              Strong-qr: 1 + strong-qr ( ϕ )
Semantics 
Let a 1 , , a k | A | be the assignments to the x ˚ i (note that they need not be distinct). Then { a 1 , , a k } = a A | A ϕ ( a ) must hold.
Expansion 
i = 1 k x x = x ˚ i ϕ ( x )         // ensure { x ˚ 1 , , x ˚ k } { x ϕ ( x ) }
s = 1 k ( = s x ( ϕ ( x ) )               // bind s to | { x ϕ ( x ) } |
I { 1 , . . . , k } , | I | = s i , j I , i j x ˚ i x ˚ j ) .       // ensure | { x ˚ 1 , , x ˚ k } | s
(We remark that in the above notation, the last line does, indeed, by itself only ensure that | { x ˚ 1 , , x ˚ k } | ≥ s holds. However, in conjunction with the two lines before it, we get that | { x ˚ 1 , , x ˚ k } | = s must hold for the formula to be true.)
The final notation can be thought of as a “generalization of = k ” where we not only ask whether there are exactly k distinct a i with a property ϕ , but whether these a i then also have an arbitrary special additional property. Formally, let Q STRUC [ τ ] be an arbitrary τ -problem. We write A [ I ] for the substructure of A induced on a subset I | A | .
Notation 
( INDUCED size = k { x ϕ ( x ) } Q )              Strong-qr: 1 + strong-qr ( ϕ ) + arity ( τ )
Semantics 
The set I = { a | A | A ϕ ( a ) } has size exactly k and A [ I ] Q .
Expansion 
Assuming for simplicity that τ contains only E 2 as non-arithmetic predicate:
= k x ( ϕ ( x ) ) A Q , | A | = { 1 , , k } ( i , j ) E A x y ( π i ( x ) π j ( y ) E x y ) ( i , j ) E A x y ( π i ( x ) π j ( y ) ¬ E x y ) ,
where π i ( x ) is a shorthand for ϕ ( x ) = i 1 z ( z < x ϕ ( z ) ) , which binds x to the ith element of the universe with property ϕ .
Notation 
( INDUCED size k { x ϕ ( x ) } Q )           Strong-qr: 1 + strong-qr ( ϕ ) + arity ( τ )
Semantics 
The set I = { a | A | A ϕ ( a ) } has size at most k and A [ I ] Q .
Expansion 
s = 0 k INDUCED size = s { x ϕ ( x ) } Q
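The semantics of the INDUCED notation can likewise be illustrated by a small brute-force evaluator. In the following Python sketch (ours; the problem Q is modeled as an arbitrary Python predicate on the induced subgraph, mirroring the fact that the notation allows arbitrary, not necessarily first-order-definable, problems), we check whether the vertices satisfying ϕ induce a triangle:

```python
def induced_size_eq(k, universe, edges, phi, Q):
    """INDUCED_{size=k} {x | phi(x)} in Q, for graphs: the set
    I = {a | phi(a)} has size exactly k and the subgraph induced on I
    has property Q."""
    I = {a for a in universe if phi(a)}
    if len(I) != k:
        return False
    induced_edges = {(u, v) for (u, v) in edges if u in I and v in I}
    return Q(I, induced_edges)

def induced_size_le(k, universe, edges, phi, Q):
    """INDUCED_{size<=k}: same, but for |I| <= k."""
    return any(induced_size_eq(s, universe, edges, phi, Q)
               for s in range(k + 1))

# Example: do the vertices of degree >= 2 induce a triangle?
edges = {(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1), (3, 4), (4, 3)}
deg = lambda v: sum(1 for (u, w) in edges if u == v)
phi = lambda v: deg(v) >= 2
is_triangle = lambda V, E: len(V) == 3 and all(
    (u, v) in E for u in V for v in V if u != v)
print(induced_size_eq(3, {1, 2, 3, 4}, edges, phi, is_triangle))   # True
```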

4.2. Bounded Strong-Rank Description of Vertex Cover

The first advanced problem for which we now use our framework to “prove syntactically” that it lies in para- AC 0 is the vertex cover problem. A vertex cover of an undirected graph G = ( V , E ) is a subset X ⊆ V with e ∩ X ≠ ∅ for all e ∈ E . The problem p k -VERTEX-COVER asks whether a graph has a cover X with | X | ≤ k . The most natural way to describe the problem is in terms of the following family ( ψ k ) k N :
ψ k = x 1 x k u v E u v i ( x i = u x i = v ) .
However, the strong quantifier rank of this family is clearly not bounded since all quantifiers are strong (the universally bound variables u and v depend on all the x i ). Thus, we need a different way of describing the vertex cover problem:
Theorem 5
([4,7]). p k -VERTEX-COVER para- AC 0 .
Proof. 
We describe the problem using a family ( ϕ k ) k N of constant strong quantifier rank that expresses the well-known Buss kernelization “using logic”: Let HIGH ( x ) = k + 1 y ( E x y ) express that x is a high-degree vertex. Buss observed that all high-degree vertices must be part of a vertex cover of size at most k. Thus, h ≤ k must hold for the unique h with = h x ( HIGH ( x ) ) . A remaining vertex is interesting if it is connected to at least one non-high-degree vertex: INTERESTING ( x ) = ¬ HIGH ( x ) y ( E x y ¬ HIGH ( y ) ) . If there are more than ( k − h ) ( k + 1 ) ≤ k 2 + k interesting vertices, there cannot be a size-k vertex cover—and if there are fewer, the original graph can only have a size-k vertex cover if the graph induced on the interesting vertices has a vertex cover of size k − h . In symbols: ϕ k = h = 0 k = h x ( HIGH ( x ) ) INDUCED size k 2 + k { x INTERESTING ( x ) } Q k − h for the problem Q s = { G G has a vertex cover of size s } (recall that for the INDUCED notation we allow arbitrary problems).  □
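The kernelization used in this proof is easy to state algorithmically. The following Python sketch (our own illustration of the Buss kernel, not of the para- AC 0 circuit or formula family itself) removes the high-degree vertices, rejects if too many interesting vertices remain, and otherwise solves the small remaining instance by brute force:

```python
from itertools import combinations

def has_vc(vertices, edges, k):
    """Brute-force: is there a vertex cover of size <= k?"""
    return any(all(u in set(c) or v in set(c) for (u, v) in edges)
               for s in range(k + 1) for c in combinations(vertices, s))

def buss_vertex_cover(vertices, edges, k):
    """Sketch of the Buss kernelization described in the proof: all
    high-degree vertices must be in the cover; afterwards only a
    parameter-bounded number of 'interesting' vertices can remain."""
    deg = {v: sum(1 for e in edges if v in e) for v in vertices}
    high = {v for v in vertices if deg[v] >= k + 1}
    h = len(high)
    if h > k:
        return False
    # edges not already covered by the high-degree vertices
    rest_edges = [e for e in edges if not (e[0] in high or e[1] in high)]
    interesting = {v for e in rest_edges for v in e}
    if len(interesting) > (k - h) * (k + 1):
        return False                      # kernel too large: no size-k cover
    return has_vc(sorted(interesting), rest_edges, k - h)

# A star with five leaves plus one extra edge: a cover of size 2 exists.
edges = [(0, i) for i in range(1, 6)] + [(6, 7)]
print(buss_vertex_cover(range(8), edges, 2))   # True
```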
While the family ( ϕ k ) constructed in the above proof is not extremely complex and much simpler than formulas or circuits found in the literature [4,7] for proving p k -VERTEX-COVER para- AC 0 without using our framework, the family certainly lacks the elegance of the earlier “natural” family ( ψ k ) . It seems like an interesting goal to find new syntactic properties that broaden the notion of weak quantifiers so that the existential quantifiers in ψ k become weak. We believe that this is possible, but also point out that while any such properties must apply to the quantifiers in the ψ k , they may not apply to the almost identical formulas
ψ k = x 1 x k u v i j ( x i = u x j = v ) E u v ,
which describe the parameterized clique problem—a problem provably not in para- AC 0 .

4.3. Bounded Strong-Rank Description of Hitting Set

Hitting sets generalize vertex covers to hypergraphs, which are pairs ( V , E ) where the members of E are called hyperedges and we have e ⊆ V for all e ∈ E . Hitting sets are still sets X ⊆ V with e ∩ X ≠ ∅ for all e ∈ E . The problem p k , d -HITTING-SET asks whether a hypergraph with max e ∈ E | e | ≤ d has a hitting set X with | X | ≤ k . Formally, let d ( H ) be the maximum size of any hyperedge in H and p k , d -HITTING-SET be the set of all pairs ( H , max ( k , d ) ) such that H encodes (see below for details) a hypergraph H with d ( H ) ≤ d and for which there is a hitting set of size at most k. The problem p k -VERTEX-COVER is basically this problem restricted to the parameter value d = 2 (if one models size-1 hyperedges e as self-loops and ignores the case of the empty hyperedge, which can never be hit by any set X).
While by no means trivial, it is not too hard to generalize the arguments used in Theorem 5 to prove that the parameterized hitting sets problem lies in para- AC 0 when d is fixed and k is the parameter. It is much harder to prove the following result, where both d and k are parameters:
Theorem 6
([19]). p k , d -HITTING-SET para- AC 0 .
Before we prove the theorem using our framework, we need to fix how we model hypergraphs ( V , E ) as logical structures. We use a τ hyper -structure H for τ hyper = ( VERTEX 1 , HYPEREDGE 1 , IN 2 ) . Let H = V E , VERTEX H = V , set HYPEREDGE H = E , and IN H = { ( v , e ) v e E } . The other way round, given a τ hyper -structure H , we consider it as the following hypergraph H ( H ) = ( V , E ) with V = VERTEX H and E = { set ( e ) e HYPEREDGE H } , where we write set ( e ) for { v ( v , e ) IN H } .
Note that we allow the universe of H to contain elements e that are neither vertices nor hyperedge-representing elements, but their set ( e ) does not contribute to E. We also allow that two different elements e , e ′ ∈ | H | represent the same set set ( e ) = set ( e ′ ) . This can be problematic in a kernelization: When we identify a kernel set E ′ of hyperedges, there could still be a large (non-parameter-dependent) number of elements in the universe that represent these hyperedges—meaning that these elements do not form a kernel themselves. Fortunately, this can be fixed: We can easily check whether two elements represent the same set using x ( IN x e IN x e ′ ) and then always consider only the first representing element with respect to the ordering < of the universe. For this reason, we will assume in the following that for any subset s ⊆ V there is at most one e ∈ | H | with s = set ( e ) .
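Before turning to the proof, the following Python sketch (ours; the concrete representation of universe elements is an arbitrary choice and not prescribed by the paper) shows this encoding and the decoding via set ( e ) on a small example:

```python
def encode(V, E):
    """Encode a hypergraph (V, E) as a tau_hyper-structure: the universe
    contains the vertices and one fresh element per hyperedge; IN relates
    vertices to the hyperedge elements containing them."""
    edge_elems = [('e', i) for i in range(len(E))]
    universe = list(V) + edge_elems
    VERTEX = set(V)
    HYPEREDGE = set(edge_elems)
    IN = {(v, e) for e, s in zip(edge_elems, E) for v in s}
    return universe, VERTEX, HYPEREDGE, IN

def decode(universe, VERTEX, HYPEREDGE, IN):
    """The hypergraph H(H) described in the text: take set(e) for each
    hyperedge element e (duplicate representations collapse)."""
    def members(e):
        return frozenset(v for (v, f) in IN if f == e)
    return set(VERTEX), {members(e) for e in HYPEREDGE}

V = {1, 2, 3, 4}
E = [{1, 2, 3}, {3, 4}]
print(decode(*encode(V, E)) ==
      (V, {frozenset({1, 2, 3}), frozenset({3, 4})}))   # True
```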
Proof of Theorem 6.
The idea behind the proof is a (very strong) generalization of the Buss kernel argument from the proof of Theorem 5. As in that proof, we will present a family ( ϕ k , d ) k , d N of bounded strong quantifier rank that describes p k , d -HITTING-SET . First, there are two simple preliminaries: Testing whether d ( H ) d holds is easy to achieve using e ( HYPEREDGE e d v ( IN v e ) ) , so let us assume that this is the case and let us write H = ( V , E ) for H ( H ) . Furthermore, let us write SUBSET e f for x ( IN x e IN x f ) , which indicates that set ( e ) set ( f ) .
Representing subsets of hyperedges: recall that the core idea of the kernelization of the vertex cover problem is that a “high-degree vertex” must be part of a vertex cover. Rephrased in the language of hypergraphs, a graph is a hypergraph H with d ( H ) = 2 , a vertex cover is a hitting set, and making a high-degree vertex v part of a hitting set is (in essence) the same as removing all edges containing v and then adding the singleton hyperedge { v } , which can clearly only be hit by making v part of the hitting set.
In the general case, we will also remove hyperedges from the hypergraph and replace them by smaller hyperedges (though, no longer, by singletons) and we will do so repeatedly. The problem is that adding hyperedges is difficult in our encoding since this means that we would have to add elements to the universe of the logical structure that represent the new hyperedges. Although these problems can be circumvented by complex syntactic trickery, we feel it is cleaner to do the following: We reduce the original hitting set problem to a new version, where the universe already contains all the necessary elements for representing the hyperedges we might wish to add later on.
In detail, we define a subset p k , d -HITTING-SET ′ ⊆ p k , d -HITTING-SET as follows: It contains only those ( H , max ( k , d ) ) such that for every e ∈ HYPEREDGE H and every subset s ⊆ set ( e ) there is an e ′ ∈ | H | with s = set ( e ′ ) . In other words, for every subset s of any hyperedge there must already be an element e ′ “in store” in the universe that represents it.
We can reduce p k , d -HITTING-SET to p k , d -HITTING-SET ′ by adding for an input H , if necessary, elements to the universe that represent all these subsets (note that the set HYPEREDGE H is not changed in the reduction, we just add elements to the universe for “potential, future” hyperedges). We are helped by the fact that we have an upper bound d on the size of the hyperedges, which means that the maximum blowup of the universe in this reduction is by the parameter-dependent factor of 2 d . This reduction can be realized via a bounded rank reduction (recall Definition 1):
Claim 4.
p k , d -HITTING-SET ≤ br p k , d -HITTING-SET ′ .
Proof. 
In the reduction, we do not change the parameter, so we set ι max ( k , d ) , max ( k , d ) = and ι x , x = otherwise. For the first-order queries, we wish to map a hypergraph H to a new version H in which for every subset of a hyperedge there is already an element in the universe representing this subset. This means that the size of the universe can increase from H to at most 2 d H . (If there is a hyperedge of size larger than d in the input, we can yield a trivial “no” instance as output.) We use a first-order query of width 2, meaning that the universe size gets enlarged from H to H 2 . This will be larger than 2 d H for all sufficiently large universes. Since, with respect to f max ( k , d ) the number 2 d is a constant, we can apply Lemma 1 to take care of those inputs whose universes are smaller than 2 d and directly map them to the correct instances. For the large instances, we now have a universe that is “large enough” to contain an element for each subset of a hyperedge and it is not difficult (but technical) to use the bit predicate to define the correct predicates HYPEREDGE, VERTEX, and IN in terms of the original structure.  □
Thus, in the following, we may assume that for every hyperedge in the input structure for all subsets of this hyperedge we already have an element in the universe representing this subset.
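The effect of this padding on a concrete hypergraph can be sketched as follows (Python, our own illustration; the logical structure is abstracted to plain sets, so only the counting argument is visible):

```python
from itertools import combinations

def all_subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def pad_with_subset_representatives(V, E):
    """Sketch of the reduction: add, for every subset s of every
    hyperedge, a representative 'in store' (the actual hyperedges are
    unchanged).  For hyperedges of size at most d this adds at most
    2^d representatives per hyperedge."""
    stored = {frozenset(e) for e in E}                  # actual hyperedges
    potential = {s for e in E for s in all_subsets(e)}  # representatives
    return V, stored, potential

V = {1, 2, 3}
E = [{1, 2}, {2, 3}]
_, stored, potential = pad_with_subset_representatives(V, E)
print(len(stored), len(potential))   # 2 6  (the empty set and all subsets
                                     #       of the two hyperedges)
```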
Finding sunflowers: we first show a way of kernelizing the hitting set problem, due to Chen et al. [4], that “almost works.” The core idea is to detect and collapse sunflowers in the input hypergraph [20]. A sunflower of size k + 1 with core c is a set { p 1 , , p k + 1 } E of distinct hyperedges, called petals, such that for all i j we have p i p j = c . In other words, all petals contain the core but are otherwise pairwise distinct. For convenience, we also assume that all petals are proper supersets of the core. The important observation is that if a sunflower of size k + 1 has a hitting set of size k, then the core must also be hit—and when the core is hit, all petals are hit. This means that we can just replace a sunflower by its core when we are looking for size-k hitting sets.
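The following Python sketch (ours; exponential brute force, intended only to illustrate the definition and the collapsing step, not the Sunflower Lemma itself) searches for a sunflower of size k + 1 and replaces its petals by the core:

```python
from itertools import combinations

def find_sunflower(E, k):
    """Brute-force search for a sunflower of size k+1: k+1 distinct
    hyperedges whose pairwise intersections all equal one common core,
    each petal being a proper superset of that core."""
    E = [frozenset(e) for e in E]
    for petals in combinations(E, k + 1):
        core = frozenset.intersection(*petals)
        if all(p > core for p in petals) and \
           all(p & q == core for p, q in combinations(petals, 2)):
            return core, petals
    return None

def collapse_one_sunflower(E, k):
    """Replace the petals of one sunflower (if any) by their core; this
    preserves the existence of a size-k hitting set."""
    found = find_sunflower(E, k)
    if found is None:
        return [frozenset(e) for e in E]
    core, petals = found
    rest = [frozenset(e) for e in E if frozenset(e) not in petals]
    return rest + [core]

E = [{1, 2}, {1, 3}, {1, 4}]          # sunflower of size 3 with core {1}
print(collapse_one_sunflower(E, 2))   # [frozenset({1})]
```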
The following formula tests whether set ( c ) is the core of a sunflower of size k + 1 :
CORE c = p ˙ 1 1 p ˙ k + 1 d i j r , s { 1 , , d } p ˙ i r p ˙ j s i = 1 k + 1 e ( HYPEREDGE e SUBSET c e
i = 1 k + 1 e ( { p ˙ i 1 , , p ˙ i d } = { v IN v e ¬ IN v c } ) .
Here, (20) guarantees that the petals are pairwise disjoint outside the core and (21) checks that the petals are supersets of c and when we add p i 1 to p i d (which are not necessarily distinct) to c, we get a present hyperedge.
The “collapsing” of sunflowers to their cores can now be done as follows: We define a formula with e as a free variable that is true when set ( e ) is a core or when set ( e ) is not a superset of any core (otherwise, we need not include set ( e ) since we include the core of a sunflower that contains it, instead):
CORE e ∨ ( HYPEREDGE e ∧ ¬ ∃ c ( CORE c ∧ SUBSET c e ) ) .
The importance of the above formula lies in the following fact: The number of hyperedges for which the second part of the formula is true (that is, which are not supersets of a core of a sunflower of size k + 1 ), is bounded by a function in k and d. This is due to the famous Sunflower Lemma [20] which states that if a hypergraph has more than k d d ! hyperedges, it contains a sunflower of size k + 1 (which has a core).
This means that if CORE e were to hold for just a few hyperedges, (22) would describe a kernel for the hitting set problem, meaning that the number of hyperedges remaining would be bounded by a purely parameter-dependent value. We could then proceed as in the proof of Theorem 5: In order to determine whether the original hypergraph has a hitting set of size k, we only need to check whether the hypergraph induced on the remaining hyperedges has such a hitting set—and since a kernel has a purely parameter-dependent size, we could use the INDUCED notation to solve the hitting set problem on the vertices and hyperedges for which (22) holds. Unfortunately, it is possible to construct hypergraphs such that CORE e still holds for a very large (not only parameter-dependent) number of hyperedges.
However, we know that a core always has a smaller size than any petal in its sunflower. In particular, all cores have maximum size d 1 . Thus, if we “view CORE as our new HYPEREDGE predicate,” we get “cores of cores”:
CORE 2 c = p ˙ 1 1 p ˙ k + 1 d i j r , s { 1 , , d } p ˙ i r p ˙ j s i = 1 k + 1 e ( CORE e SUBSET c e i = 1 k + 1 e ( { p ˙ i 1 , , p ˙ i d } = { v IN v e ¬ IN v c } ) .
Note that strong-qr ( CORE 2 c ) = strong-qr ( CORE c ) + 1 = 2 since we had to add a new strong quantifier ( e ) whose scope contains CORE e , which adds its own strong quantifier ( e ).
By the same argument as earlier, we get that the number of e for which the following formula holds equals the number of cores of cores plus something that only depends on the parameters k and d:
CORE 2 e ( CORE e ¬ c ( CORE 2 c SUBSET c e ) ) ( HYPEREDGE e ¬ c ( CORE c SUBSET c e ) ) .
Still, the number of cores of cores can be large, but they all have size at most d 2 . Repeating the argument a further d 2 times, we finally get the predicate KERNEL e :
CORE d e i = 1 d ( CORE i 1 e ¬ c ( CORE i c SUBSET c e ) ) ,
where CORE 0 is of course HYPEREDGE and CORE d e can only be true for the (sole) e representing the empty set (in which case, there is no hitting set).
Unfortunately, the strong quantifier rank of CORE d is d since the definition of CORE i in terms of CORE i 1 always adds one strong quantifier nesting (through a new e ). Thus, (23) also has a strong quantifier rank of d while we need O ( 1 ) .
Finding pseudo-sunflowers: at this point, we need a way of describing cores of cores of cores and so on using a bounded strong quantifier rank. The idea how this can be done was presented in [19], where the notions of pseudo-cores and pseudo-sunflowers are introduced. The definitions are somewhat technical, see below, but the interesting fact about these definitions is that they can be expressed very nicely in a way similar to (20) and (21).
For a level L and a number k, let $T_L^k$ denote the rooted tree in which all leaves are at the same depth L and all inner nodes have exactly $k+1$ children. The root of $T_L^k$ will always be called r in the following. Thus, $T_1^k$ is just a star consisting of r and its $k+1$ children, while in $T_2^k$ each of the $k+1$ children of r has $k+1$ new children, leading to $(k+1)^2$ leaves in total. For each $l \in \mathrm{leaves}(T_L^k) = \{l \mid l \text{ is a leaf of } T_L^k\}$ there is a unique path $(l_0, l_1, \dots, l_L)$ from $l_0 = r$ to $l_L = l$.
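For concreteness, the leaves of $T_L^k$ can be identified with sequences of child indices, and the depth at which two root-to-leaf paths diverge (the number z used in item 4 of Definition 4 below) can then be read off directly. A small Python sketch (purely illustrative; the encoding is our own choice):

```python
from itertools import product

def leaves(L, k):
    """Leaves of T_L^k, encoded as length-L tuples over {0,...,k}:
    the i-th entry says which of the k+1 children is taken at depth i+1.
    There are (k+1)^L leaves in total."""
    return list(product(range(k + 1), repeat=L))

def divergence_depth(l, m):
    """Smallest depth z (1-based) at which the root-to-leaf paths to the
    distinct leaves l and m split for the first time."""
    for z, (a, b) in enumerate(zip(l, m), start=1):
        if a != b:
            return z
    raise ValueError("l and m must be distinct leaves")
```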
Definition 4.
(Pseudo-Sunflowers and Pseudo-Cores, [19]) Let $H = (V, E)$ be a hypergraph and let L and k be fixed. A set $c \subseteq V$ is called a k-pseudo-core of level L in H if there exists a mapping $S \colon \mathrm{leaves}(T_L^k) \times \{0, 1, \dots, L\} \to \{e \mid e \subseteq V\}$, called a $T_L^k$-pseudo-sunflower for H with pseudo-core c, such that for all $l, m \in \mathrm{leaves}(T_L^k)$ with $l \neq m$ we have:
1. 
S ( l , 0 ) = c .
2. 
$S(l,0) \cup S(l,1) \cup \cdots \cup S(l,L) \in E$.
3. 
$S(l,i) \cap S(l,j) = \emptyset$ for $0 \le i < j \le L$, but $S(l,i) \neq \emptyset$ for $i \in \{1, \dots, L\}$.
4. 
Let $z \in \{1, \dots, L\}$ be the smallest number such that $l_z \neq m_z$, that is, z is the depth where the path from r to l and the path from r to m diverge for the first time. Then $S(l,z) \cap S(m,z) = \emptyset$ must hold.
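Definition 4 is easy to test mechanically for a concrete candidate mapping S. The following Python sketch (purely illustrative, reusing leaves() and divergence_depth() from the sketch above) checks the four items one by one:

```python
def is_pseudo_sunflower(S, c, E, L, k):
    """Check whether S (a dict mapping (leaf, i) to a set of vertices, for
    every leaf of T_L^k and i in {0,...,L}) is a T_L^k-pseudo-sunflower
    with pseudo-core c for the hypergraph with hyperedge family E."""
    core = set(c)
    hyperedges = {frozenset(e) for e in E}
    ls = leaves(L, k)
    for l in ls:
        if set(S[(l, 0)]) != core:                           # item 1
            return False
        union = set().union(*(S[(l, i)] for i in range(L + 1)))
        if frozenset(union) not in hyperedges:               # item 2
            return False
        for i in range(L + 1):                               # item 3
            if i >= 1 and not S[(l, i)]:
                return False
            for j in range(i + 1, L + 1):
                if set(S[(l, i)]) & set(S[(l, j)]):
                    return False
    for a in range(len(ls)):                                 # item 4
        for b in range(a + 1, len(ls)):
            z = divergence_depth(ls[a], ls[b])
            if set(S[(ls[a], z)]) & set(S[(ls[b], z)]):
                return False
    return True
```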
This definition translates almost directly into a formula $\mathrm{PSEUDOCORE}_L\,c$, which starts with a block of weak existential quantifiers, one for each element of $\mathrm{leaves}(T_L^k) \times \{1, \dots, L\} \times \{1, \dots, d\}$:
$\exists\bigl(\dot x_{l,i}^{\,j}\bigr)_{l \in \mathrm{leaves}(T_L^k),\, i \in \{1,\dots,L\},\, j \in \{1,\dots,d\}}\ \ \bigwedge_{\substack{l, m \in \mathrm{leaves}(T_L^k),\ l \neq m,\\ z \text{ as in the definition}}} \Bigl(\ \exists e\,\bigl(\mathrm{HYPEREDGE}\;e \land \mathrm{SUBSET}\;c\,e \land \{\dot x_{l,1}^{\,1}, \dots, \dot x_{l,L}^{\,d}\} = \{v \mid \mathrm{IN}\;v\,e \land \neg\mathrm{IN}\;v\,c\}\bigr)$   (24)
$\land\ \bigwedge_{i \neq j}\ \bigwedge_{p,q \in \{1,\dots,d\}} \dot x_{l,i}^{\,p} \neq \dot x_{l,j}^{\,q}$   (25)
$\land\ \bigwedge_{p,q \in \{1,\dots,d\}} \dot x_{l,z}^{\,p} \neq \dot x_{m,z}^{\,q}\ \Bigr).$   (26)
Here, (24) ensures, similarly to (21) for normal sunflowers, that $S(l,0) \cup S(l,1) \cup \cdots \cup S(l,L)$ is a hyperedge (item 2 of the definition). The inequalities (25) ensure that item 3 of the definition holds, while (26) ensures item 4.
The important observation is that $\mathrm{PSEUDOCORE}_L$ has a strong quantifier rank that is independent of L. Since, as shown in [19], we can use $\mathrm{PSEUDOCORE}_L$ as a replacement for $\mathrm{CORE}^L$ in (23), we get that the hitting set problem can be described by a family of formulas of constant strong quantifier rank.
This concludes the proof of Theorem 6, which compared to the original proof in [19] is considerably simpler and shorter because of the use of the framework from the present paper. □

4.4. Bounded Strong-Rank Description of Model Checking for First-Order Logic

An important result by Flum and Grohe [10] states that the model checking problem for first-order logic lies in FPT on structures whose Gaifman graph has bounded degree (the Gaifman graph of a logical structure has the structure’s universe as its vertex set and has an undirected edge between two vertices u and v if u and v are simultaneously part of some tuple of some relation of the structure; in particular, the Gaifman graph of a directed graph is its underlying undirected graph). Once more, this result can now be obtained “syntactically.” For simplicity, we only consider graphs and let
$p_{\psi,\delta}\text{-}\mathrm{MC}(\mathrm{FO}) = \bigl\{(G, \mathrm{num}(\psi,\delta)) \;\big|\; G \in \mathrm{STRUC}[(E^2)],\ \psi \in \mathrm{FO},\ G \models \psi,\ \text{max-degree}(G) \le \delta\bigr\}.$
Theorem 7
([7,10]). $p_{\psi,\delta}$-MC(FO) ∈ para-AC⁰↑.
Proof. 
We present a family $(\phi_{\psi,\delta})_{\psi \in \mathrm{FO},\, \delta \in \mathbb{N}}$ with a bound on the number of strong variables that describes $p_{\psi,\delta}$-MC(FO). Fix ψ and δ. Recall that we fixed the signature for ψ to just $\tau = (E^2)$ for simplicity and, thus, τ-structures are just graphs G. In particular, there are no arithmetic predicates available to ψ (one could, of course, also consider them, but then the Gaifman graph would always be a clique and the claim of the theorem would be boring). In contrast, the $\phi_{\psi,\delta}$ are normal FO[+,×] formulas and they have access to arithmetic.
For a graph G, let us write $\bar G$ for the underlying undirected graph and let us write $\bar E x y$ as a shorthand for $E x y \lor E y x$. The first thing we check is that the maximum degree of the input graph $\bar G$ is, indeed, at most δ. This is rather easy: $\forall x\, \exists^{\le \delta} y\, (\bar E x y)$.
For the hard part of determining whether $G \models \psi$, let $\bar d(a,b)$ denote the distance of two vertices in $\bar G$ and let $N_r(a) = \{\,b \in |G| \mid \bar d(a,b) \le r\,\}$ be the ball around a of radius r in $\bar G$. Let $G[N_r(a)]$ denote the subgraph of G induced on $N_r(a)$. By Gaifman's Theorem [21] we can rewrite ψ as a Boolean combination of formulas of the following form:
$\exists x_1 \cdots \exists x_k \Bigl(\ \bigwedge_{i \neq j} \gamma_{\bar d(x_i,x_j) > 2r}$   (27)
$\land\ \bigwedge_{i} \rho(x_i)\ \Bigr)$   (28)
where $\gamma_{\bar d(x_i,x_j) > 2r}$ expresses that $\bar d(x_i,x_j) > 2r$ should hold and ρ is r-local, meaning that for all $a \in |G|$ we have $G \models \rho(a)$ if and only if $G[N_r(a)] \models \rho(a)$ (the minimum number r for which this is the case is called the locality rank of ρ).
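The balls $N_r(a)$ are just the usual BFS balls of the underlying undirected graph. For reference, here is a minimal Python sketch (our own illustration; the adjacency structure is passed as a hypothetical dict):

```python
from collections import deque

def ball(adj, a, r):
    """N_r(a): all vertices at distance at most r from a in the undirected
    graph given by adj (a dict mapping each vertex to its set of neighbours)."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if dist[u] == r:      # do not expand beyond radius r
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)
```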
We now wish to express the above formula using only a constant number of strong variables. The problem is, of course, that the $x_i$ are not (yet) weak since they are used many times. We fix this in two steps. First, let us tackle (27): Clearly, the $x_i$ will have pairwise distance greater than 2r if (and only if) the balls of radius r that surround them are pairwise disjoint. Now, because of the bounded degree of the graph, a ball of radius r can have maximum size $\delta^r$. This allows us to bind all members of each ball to weak variables, and testing disjointness is exactly what weak variables are all about.
In detail, let $\gamma_{\bar d(x,y) \le r}$ be the standard formula with two bound variables expressing that there is a path from x to y of length at most r in $\bar G$. Then we can express (27) as follows:
$\exists \dot x_1 \cdots \exists \dot x_k\ \exists \dot y_1^{\,1} \cdots \exists \dot y_k^{\,\delta^r} \Bigl(\ \bigwedge_{i \neq j}\ \bigwedge_{p,q \in \{1,\dots,\delta^r\}} \dot y_i^{\,p} \neq \dot y_j^{\,q} \ \land\ \bigwedge_{i=1}^{k} \exists x\,\bigl(x = \dot x_i \land \{\dot y_i^{\,1}, \dots, \dot y_i^{\,\delta^r}\} = \{\,y \mid \gamma_{\bar d(x,y) \le r}\,\}\bigr)\Bigr).$
In the formula, at the end we bind the variables $\dot y_i^{\,1}, \dots, \dot y_i^{\,\delta^r}$ exactly to the elements of the ball around $\dot x_i$ of radius r; and in the first part we require that all these balls are pairwise disjoint. Note that we do not require all $\dot y_i^{\,p}$ to be different: If the size of a ball is less than $\delta^r$, we must allow some $\dot y_i^{\,p}$ and $\dot y_i^{\,q}$ to be identical.
In order to express (28), we just have to check for each $\dot x_i$ that the ball of radius r around it is a model of $\rho(\dot x_i)$. Since the size of this ball is at most $\delta^r$, we can use the INDUCED notation. There is, however, a technical problem: We basically wish to check whether $G[N_r(a)] \in \{H \mid H \models \rho(a)\}$ holds for a given a, but $\{H \mid H \models \rho(a)\}$ obviously depends on a, which is not compatible with the INDUCED notation. Fortunately, this problem can be fixed: For $i \in \mathbb{N}$ let $Q_i = \{H \mid H \models \rho(a), \text{ where } a \text{ is the } i\text{th element of } |H| \text{ with respect to } <^H\}$ (recall that for the INDUCED notation the set Q can be arbitrarily complex). If we know for some element $a \in |G|$ that it is the ith element in $N_r(a)$, then our problematic test can be replaced by $G[N_r(a)] \in Q_i$. Since testing whether a is the ith element in $N_r(a)$ is possible using a formula like $\iota_i(a) = \exists^{=i-1} b\,(b < a \land \gamma_{\bar d(a,b) \le r})$, we get the following complete formula $\phi_{\psi,\delta}$:
$\exists \dot x_1 \cdots \exists \dot x_k\ \exists \dot y_1^{\,1} \cdots \exists \dot y_k^{\,\delta^r} \Bigl(\ \bigwedge_{i \neq j}\ \bigwedge_{p,q \in \{1,\dots,\delta^r\}} \dot y_i^{\,p} \neq \dot y_j^{\,q} \ \land\ \bigwedge_{i=1}^{k} \exists x\,\bigl(x = \dot x_i \land \{\dot y_i^{\,1}, \dots, \dot y_i^{\,\delta^r}\} = \{\,y \mid \gamma_{\bar d(x,y) \le r}\,\}\bigr) \ \land\ \bigwedge_{i=1}^{k} \exists x\,\bigl(x = \dot x_i \land \bigvee_{j=1}^{\delta^r}\bigl(\iota_j(x) \land \mathrm{INDUCED}_{\mathrm{size} \le \delta^r}\ \{\,y \mid \gamma_{\bar d(x,y) \le r}\,\} \in Q_j\bigr)\bigr)\Bigr).$
This formula uses only a constant number of strong variables. Its strong quantifier rank would also be constant except that the formula $\gamma_{\bar d(x,y) \le r}$ uses r nested (strong) quantifiers (but only 2 variables). This means that the strong quantifier rank of $\phi_{\psi,\delta}$ will be $O(\text{locality-rank}(\psi))$.  □
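As a small computational aside, the ordinal test $\iota_i$ used in the proof has a direct procedural reading. A Python sketch (illustrative only, reusing the hypothetical ball() helper from the earlier sketch; the linear order is given as a list of all vertices):

```python
def iota(adj, order, a, r, i):
    """Is a the i-th element (with respect to the linear order 'order')
    of its own radius-r ball N_r(a)?  Mirrors the formula iota_i(a)."""
    pos = {v: idx for idx, v in enumerate(order)}
    smaller = [b for b in ball(adj, a, r) if pos[b] < pos[a]]
    return len(smaller) == i - 1
```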

4.5. Bounded Strong-Rank Description of Embedding Graphs of Constant Tree Width or Constant Tree Depth

For our final example, a graph $H = (V(H), E(H))$ embeds into another graph $G = (V(G), E(G))$ if there is an injective mapping $\iota \colon V(H) \to V(G)$ such that for all $(u,v) \in E(H)$ we have $(\iota(u), \iota(v)) \in E(G)$. We wish to show that the embedding problems for graphs of bounded tree depth or bounded tree width, parameterized by the to-be-embedded graphs, lie in para-AC⁰ and para-AC⁰↑, respectively, see Theorem 8 below. Let us first briefly review the underlying definitions. A tree decomposition of H is a tree $T = (V(T), E(T))$ (a connected, acyclic, undirected graph) together with a mapping B that assigns a subset of $V(H)$ to each node in $V(T)$. These subsets are called bags and must have two properties: First, for every edge $\{u,v\} \in E(H)$ there must be a node $n \in V(T)$ with $u, v \in B(n)$. Second, for each vertex $v \in V(H)$ the set $\{n \in V(T) \mid v \in B(n)\}$ must be nonempty and connected in T. Let $\mathrm{width}(B) = \max_{n \in V(T)} |B(n)| - 1$. The tree width $\mathrm{tw}(H)$ of H is the minimum width of any tree decomposition for it. We call $(T,B)$ a tree-depth decomposition if T can be rooted in such a way that if u lies on the path from some vertex v to the root, then $B(u) \subseteq B(v)$. The tree depth $\mathrm{td}(H)$ is the minimum width of a tree-depth decomposition $(T,B)$ of H plus 1. Note that this width is an upper bound on $\mathrm{depth}(T)$, the depth of T, and that this definition is slightly different from the one briefly mentioned in the introduction (but gives the same class).
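The two bag properties are straightforward to verify for a concrete decomposition. The following Python sketch is our own illustration (it assumes that the given T really is a tree and that all names are hypothetical); it checks both properties and computes the width:

```python
def check_tree_decomposition(H_edges, H_vertices, T_edges, bags):
    """Verify the two bag properties of a tree decomposition (T, B) of H
    and return width(B), or None if some property is violated.
    bags maps each node of T to its bag (a set of vertices of H)."""
    # Property 1: every edge of H lies completely inside some bag.
    for u, v in H_edges:
        if not any(u in b and v in b for b in bags.values()):
            return None
    # Adjacency structure of T.
    adj = {n: set() for n in bags}
    for a, b in T_edges:
        adj[a].add(b)
        adj[b].add(a)
    # Property 2: for every vertex, the nodes whose bags contain it form a
    # nonempty, connected subtree of T (checked by a DFS restricted to them).
    for v in H_vertices:
        nodes = {n for n, b in bags.items() if v in b}
        if not nodes:
            return None
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            n = stack.pop()
            for m in adj[n] & nodes:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        if seen != nodes:
            return None
    return max(len(b) for b in bags.values()) - 1
```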
The problems $p_H$-EMB$_{\mathrm{td}(H) \le c}$ and $p_H$-EMB$_{\mathrm{tw}(H) \le c}$, for a fixed number c (not a parameter), contain all pairs $(G, \mathrm{num}(H)) \in \mathrm{STRUC}[(E^2)] \times \mathbb{N}$ such that there exists an embedding of H into G and $\mathrm{td}(H) \le c$ or $\mathrm{tw}(H) \le c$, respectively.
Theorem 8
([6,7]). $p_H$-EMB$_{\mathrm{td}(H) \le c}$ ∈ para-AC⁰ and $p_H$-EMB$_{\mathrm{tw}(H) \le c}$ ∈ para-AC⁰↑ for each c.
Proof. 
We present a family $(\phi_{H,T,B})$ of τ-formulas (where $\tau = (E^2, <^2, \mathrm{SUCC}^1, \mathrm{ADD}^3, \mathrm{MULT}^3, 0^0)$ is the arithmetic signature of graphs) indexed by graphs H together with any tree decomposition $(T,B)$ of H (without bounds on the depth or width) that describes the embedding problem. More precisely, we show the following:
Claim 5.
There is a family $(\phi_{H,T,B})_{H \in \mathrm{STRUC}[\tau],\ (T,B) \text{ is a tree decomposition of } H}$ such that:
1. 
$G \models \phi_{H,T,B}$ if and only if H embeds into G (more precisely, into $(|G|, E^G)$).
2. 
strong-qr ( ϕ H , T , B ) = depth ( T ) .
3. 
strong-bound ( ϕ H , T , B ) = width ( B ) + 1 .
Proof. 
Before we present the formula, we define what we will call a consistent numbering of the vertices of H. It is a mapping $p \colon V(H) \to \{1, \dots, m\}$, where m is the maximum bag size of the decomposition (so $m = \mathrm{width}(B) + 1$). The number $p(v)$ for $v \in V(H)$ can be thought of as the "position" or "index" of v in all bags that contain it; that is, we require that for any bag $B(n) = \{b_1, \dots, b_{|B(n)|}\}$ the values $p(b_1), \dots, p(b_{|B(n)|})$ are all different. (Phrased differently, p restricted to any bag is injective.) Such a consistent numbering can be obtained as follows: First, assign the numbers 1 to $|B(r)|$ to the elements of the root bag $B(r)$. Now, consider a child c of the root r in T. The bag $B(c)$ may miss some of the elements of $B(r)$ and there may be some new elements. For each new element e, let $p(e)$ be a different number from the set $\{1, \dots, m\} \setminus \{p(v) \mid v \in B(r) \cap B(c)\}$ and note that we will not run out of numbers. We assign numbers to all elements in the bags of the children of the root in this way and, then, we recursively use the same method for the children's children and so on. Note that not only do we not run out of numbers, but the consistency condition is also met: Once an element drops out of a bag, we will never see it again in a later bag and, hence, we cannot inadvertently assign a different number to it later on.
As a running example, we will use the graph H and the tree decomposition ( T , B ) of it from Figure 1.
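The top-down numbering scheme just described is easy to implement. The following Python sketch is our own illustration (the rooted tree and the bags are passed as hypothetical dicts):

```python
def consistent_numbering(children, root, bags):
    """Compute a consistent numbering p: V(H) -> {1,...,m} for a rooted
    tree decomposition, where children maps each node of T to its list of
    child nodes and bags maps each node to its bag (a set of H-vertices)."""
    m = max(len(b) for b in bags.values())
    p = {}

    def assign(node, parent_bag):
        kept = {p[v] for v in bags[node] & parent_bag}
        free = iter(sorted(set(range(1, m + 1)) - kept))
        for v in bags[node] - parent_bag:   # the "new" elements of this bag
            p[v] = next(free)               # we never run out of numbers
        for child in children.get(node, []):
            assign(child, bags[node])

    assign(root, set())
    return p
```

For the decomposition of Figure 1 (with the bags made explicit in the walkthrough below), this scheme yields, up to the order in which new vertices are enumerated, exactly the numbering used there: p(3) = 1, p(1) = p(5) = 2, p(2) = p(4) = 3, p(6) = 1, and p(7) = 3.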
Let us now define $\phi_{H,T,B}$. We may assume $V(H) = \{1, \dots, |V(H)|\}$. The first step is to bind all vertices of H to weak variables using $\exists \dot x_1 \cdots \exists \dot x_{|V(H)|}\,\bigl(\bigwedge_{i \neq j} \dot x_i \neq \dot x_j \land \psi\bigr)$, where ψ must now express that the bound elements form an embedding. To achieve this, we build ψ recursively, starting at the root r of T with $\psi = \psi_r$.
For our example, we would have:
$\phi_{H,T,B} = \exists \dot x_1 \cdots \exists \dot x_7 \,\bigl(\bigwedge_{i \neq j} \dot x_i \neq \dot x_j \land \psi_r\bigr).$
For any node $n \in V(T)$, let the elements of the set $B(n)$ be named $b_1$ to $b_s$ (these are just temporary names that have nothing to do with the consistent numbering p) and let the first t of them be new, that is, not present in the parent bag (for the root, $s = t$ and all elements are new; if there are no new elements, $t = 0$). The formula $\psi_n$ will now express the following: First, it binds the new elements using strong variables that are made equal to the weak variables representing the elements in the input structure. This makes the new strong variables distinct from one another and also from all other (images of) vertices of H. Second, we check, for all edges $\{x,y\} \in E(H)$ between elements x and y of $B(n)$, that their images (which we have bound to strong variables) are also connected in the input structure. Third, we require that these properties also hold for all children of n. In symbols, we set:
$\psi_n = \exists v_{p(b_1)} \cdots \exists v_{p(b_t)} \Bigl(v_{p(b_1)} = \dot x_{b_1} \land \cdots \land v_{p(b_t)} = \dot x_{b_t} \ \land \bigwedge_{\substack{x, y \in B(n),\\ \{x,y\} \in E(H)}} E\, v_{p(x)}\, v_{p(y)} \ \land \bigwedge_{c \in \mathrm{children}(n)} \psi_c \Bigr).$
For our example, let us start with the root r. Here, we have $B(r) = \{3\}$ and $p(3) = 1$ and there are no edges between the vertices in the bag (there is just one vertex, after all). This yields $\psi_r = \exists v_1\,(v_1 = \dot x_3 \land \psi_a \land \psi_b)$.
For the node a, a new vertex (1) enters the bag $B(a)$ with index $2 = p(1)$, but there are no intra-bag edges, so $\psi_a = \exists v_2\,(v_2 = \dot x_1 \land \psi_c \land \psi_d)$.
For the node b the situation is very similar, but there is now an edge $\{3,5\}$ in H. This means that we must check that the nodes $v_1$, representing 3, and $v_2$, representing 5, are connected in the input structure. This yields $\psi_b = \exists v_2\,(v_2 = \dot x_5 \land E\, v_1\, v_2 \land \psi_e)$.
For the node c, we only have one new vertex (2) with a new index ($3 = p(2)$), but now there are two intra-bag edges in H, namely $\{1,2\} \in E(H)$ and $\{2,3\} \in E(H)$. This yields $\psi_c = \exists v_3\,(v_3 = \dot x_2 \land E\, v_3\, v_1 \land E\, v_3\, v_2)$, where $E\, v_3\, v_1$ checks whether for $\{2,3\} \in E(H)$ there is a corresponding edge in the input structure (recall that $p(2) = 3$ and $p(3) = 1$) and $E\, v_3\, v_2$ checks the same for $\{1,2\}$.
In a similar way, we get $\psi_d = \exists v_3\,(v_3 = \dot x_4 \land E\, v_3\, v_1 \land E\, v_3\, v_2)$ and observe that the only difference is that $v_3$ is made equal to $\dot x_4$ instead of $\dot x_2$; the rest is the same.
Finally, for the node e, we bind two strong variables since there are two new vertices (6 and 7), but we reuse variable v 1 for 6 since the vertex 3 that used to have index 1 has dropped out of the bag. We get
$\psi_e = \exists v_1 \exists v_3\,(v_1 = \dot x_6 \land v_3 = \dot x_7 \land E\, v_1\, v_2 \land E\, v_1\, v_3 \land E\, v_2\, v_3).$
Putting it all together, we have the following ϕ H , T , B , whose structure closely mirrors T’s:
$\exists \dot x_1 \cdots \exists \dot x_7 \bigl( \bigwedge_{i \neq j} \dot x_i \neq \dot x_j \land \exists v_1 \bigl( v_1 = \dot x_3 \land \exists v_2 \bigl( v_2 = \dot x_1 \land \exists v_3 ( v_3 = \dot x_2 \land E v_3 v_1 \land E v_3 v_2 ) \land \exists v_3 ( v_3 = \dot x_4 \land E v_3 v_1 \land E v_3 v_2 ) \bigr) \land \exists v_2 \bigl( v_2 = \dot x_5 \land E v_1 v_2 \land \exists v_1 \exists v_3 ( v_1 = \dot x_6 \land v_3 = \dot x_7 \land E v_1 v_2 \land E v_1 v_3 \land E v_2 v_3 ) \bigr) \bigr) \bigr).$
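Since $\psi_n$ is defined by a purely mechanical recursion over T, it can be generated automatically from $(T,B)$, the consistent numbering p, and $E(H)$. A Python sketch (illustrative only; it produces the formula as a plain string with the words "exists" and "and" standing in for the logical symbols):

```python
def build_psi(node, parent_bag, bags, children, H_edges, p):
    """Generate psi_n as a string, mirroring the recursive definition:
    bind the new bag elements to strong variables v_{p(b)}, equate them
    with the corresponding weak variables x_b, check all intra-bag edges
    of H, and recurse into the children of the node."""
    edges = {frozenset(e) for e in H_edges}
    new = sorted(bags[node] - parent_bag)
    parts = [f"v{p[b]} = x{b}" for b in new]
    parts += [f"E v{p[u]} v{p[v]}"
              for u in sorted(bags[node]) for v in sorted(bags[node])
              if u < v and frozenset((u, v)) in edges]
    parts += [build_psi(c, bags[node], bags, children, H_edges, p)
              for c in children.get(node, [])]
    body = " and ".join(parts) if parts else "true"
    quantifiers = "".join(f"exists v{p[b]} " for b in new)
    return quantifiers + "(" + body + ")"
```

Called on the root of the decomposition from Figure 1 (with the bags of the walkthrough above), it produces a formula of exactly this shape, up to the order of conjuncts and of arguments in the edge atoms.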
It remains to argue that $\phi_{H,T,B}$ has the claimed properties. Clearly, by construction, the strong quantifier rank and the number of strongly bound variables are as claimed. The semantic correctness also follows from the construction: If the input structure is a model of the formula, then any assignment to the $\dot x_i$ that makes it true is an embedding, since, first, we require that all $\dot x_i$ are different and, second, for every edge $\{u,v\} \in E(H)$ somewhere in the formula we test whether $E\, v_{p(u)}\, v_{p(v)}$ holds, where $v_{p(u)}$ is equal to $\dot x_u$ and $v_{p(v)}$ to $\dot x_v$ (the edge $\{u,v\}$ lies in some bag, and the corresponding $\psi_n$ performs this test). The other way round, if an embedding ι of H into the input structure exists, then assigning $\iota(i)$ to $\dot x_i$ makes the formula true, since the $\iota(i)$ are pairwise distinct and every edge test $E\, v_{p(u)}\, v_{p(v)}$ succeeds. This concludes the proof of the claim.  □
With the claim established, we can now easily derive the statement of the theorem. To show $p_H$-EMB$_{\mathrm{td}(H) \le c}$ ∈ para-AC⁰, we must present a family $(\phi_H)_{H \in \mathrm{STRUC}[\tau],\, \mathrm{td}(H) \le c}$ that describes $p_H$-EMB$_{\mathrm{td}(H) \le c}$ and that has bounded strong quantifier rank. Clearly, we can just set $\phi_H$ to $\phi_{H,T,B}$ where $(T,B)$ is a tree-depth decomposition of H of depth at most c (which must exist by the assumption that $\mathrm{td}(H) \le c$). The second item of the claim immediately tells us that all $\phi_H$ will have a strong quantifier rank of at most c; and we can use the characterization of para-AC⁰ from Fact 1. For the second statement, $p_H$-EMB$_{\mathrm{tw}(H) \le c}$ ∈ para-AC⁰↑, we use a different family $(\psi_H)_{H \in \mathrm{STRUC}[\tau],\, \mathrm{tw}(H) \le c}$, this time setting $\psi_H$ to $\phi_{H,T,B}$ where $(T,B)$ is a tree decomposition of H of width at most c. Now the third item of the claim gives us the bound on the number of strong variables; and we can use the characterization of para-AC⁰↑ from Theorem 1. □

5. Conclusions

In the present paper, we showed how the color coding technique can be turned into a powerful tool for parameterized descriptive complexity theory. This tool allows us to show that important results from parameterized complexity theory (like the fact that the embedding problem for graphs of bounded tree width lies in FPT) follow just from the syntactic structure of the formulas that describe the problem. The core contribution of the paper does not lie in proving new membership results for para-AC⁰ or para-AC⁰↑, but in explaining why problems lie in these classes in terms of the syntax of formulas describing the problems. Apart from unifying previous results, we also get much shorter and simpler proofs.
In all our syntactic characterizations it was important that weak variables or color predicates were not allowed to occur within a universal scope. The reason was that literals, disjunctions, conjunctions, and existential quantifiers all have what we called the small witness property, which universal quantifiers do not have. However, there are other quantifiers, from more powerful logics that we did not explore, that also have the small witness property. An example is an operator that tests whether there is a path of length at most k from one vertex to another for some fixed k: if such a path exists, its vertices form a "small witness." Weak variables may be used inside these operators, leading to broader classes of problems that can be described by families of bounded strong quantifier rank. On the other hand, we cannot add the full transitive closure operator TC (for which it is well known that FO[TC] = NL) and hope that Theorems 2 and 3 still hold: If this were the case, we should be able to turn a formula that uses two colors $C_1$ and $C_2$ to express that there are two vertex-disjoint paths between two vertices into an FO[TC] formula, thus proving the unlikely result that the NP-hard disjoint path problem is in NL.
Another line of inquiry into the descriptive complexity of parameterized problems was already started in the repeatedly cited paper by Chen et al. [4]: They give a first set of syntactic properties of families of formulas describing weighted model checking problems that imply membership in para-AC⁰. We believe that it might be possible to base an alternative notion of weak quantifiers on these syntactic properties. Ideally, we would like to prove a theorem similar to Theorem 4 in which there are just more quantifiers that count as weak and, hence, even more families have bounded strong quantifier rank. This would allow us to prove for even more problems that they lie in FPT just because of the syntactic structure of the natural formula families that describe them.

Author Contributions

Formal analysis, M.B. and T.T.; Writing—original draft, T.T.; Writing—review & editing, M.B. The authors have contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fagin, R. Generalized First-Order Spectra and Polynomial-Time Recognizable Sets. Complex. Comput. 1974, 7, 43–74.
2. Flum, J.; Grohe, M. Parameterized Complexity Theory; Texts in Theoretical Computer Science; Springer: Abingdon, UK, 2006.
3. Alon, N.; Yuster, R.; Zwick, U. Color-Coding. J. ACM 1995, 42, 844–856.
4. Chen, Y.; Flum, J.; Huang, X. Slicewise Definability in First-Order Logic with Bounded Quantifier Rank. arXiv 2017, arXiv:1704.03167.
5. Bannach, M.; Tantau, T. Parallel Multivariate Meta-Theorems. In Proceedings of the Eleventh International Symposium on Parameterized and Exact Computation (IPEC 2016), Aarhus, Denmark, 24–26 August 2016; pp. 4:1–4:17.
6. Chen, H.; Müller, M. The Fine Classification of Conjunctive Queries and Parameterized Logarithmic Space. ACM Trans. Comput. Theory 2015, 7, 7:1–7:27.
7. Bannach, M.; Stockhusen, C.; Tantau, T. Fast Parallel Fixed-parameter Algorithms via Color Coding. In Proceedings of the Tenth International Symposium on Parameterized and Exact Computation (IPEC 2015), Patras, Greece, 16–18 September 2015; pp. 224–235.
8. Das, B.; Enduri, M.K.; Reddy, I.V. On the Parallel Parameterized Complexity of the Graph Isomorphism Problem. In Proceedings of the Twelfth International Conference and Workshop on Algorithms and Computation (WALCOM 2018), Dhaka, Bangladesh, 3–5 March 2018; pp. 252–264.
9. Pilipczuk, M.; Siebertz, S.; Toruńczyk, S. Parameterized circuit complexity of model-checking on sparse structures. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2018), Oxford, UK, 9–12 July 2018; pp. 789–798.
10. Flum, J.; Grohe, M. Describing Parameterized Complexity Classes. Inf. Comput. 2003, 187, 291–319.
11. Chen, Y.; Flum, J. Tree-depth, quantifier elimination, and quantifier rank. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2018), Oxford, UK, 9–12 July 2018; pp. 225–234.
12. Cai, L.; Chen, J. On fixed-parameter tractability and approximability of NP optimization problems. J. Comput. Syst. Sci. 1997, 54, 465–474.
13. Papadimitriou, C.; Yannakakis, M. Optimization, approximation, and complexity classes. J. Comput. Syst. Sci. 1991, 43.
14. Kolaitis, P.; Thakur, M. Logical definability of NP optimization problems. Inf. Comput. 1994, 115, 321–353.
15. Vollmer, H. Introduction to Circuit Complexity—A Uniform Approach; Texts in Theoretical Computer Science, An EATCS Series; Springer: Berlin/Heidelberg, Germany, 1999.
16. Immerman, N. DSPACE[n^k] = VAR[k+1]. In Proceedings of the Sixth Annual Structure in Complexity Theory Conference, Chicago, IL, USA, 30 June–3 July 1991; pp. 334–340.
17. Immerman, N. Descriptive Complexity; Springer: Berlin/Heidelberg, Germany, 1998.
18. Hüffner, F.; Wernicke, S.; Zichner, T. Algorithm Engineering for Color-Coding with Applications to Signaling Pathway Detection. Algorithmica 2008, 52, 114–132.
19. Bannach, M.; Tantau, T. Computing Hitting Set Kernels By AC0-Circuits. In Proceedings of the 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018), Caen, France, 28 February–3 March 2018; pp. 9:1–9:14.
20. Erdős, P.; Rado, R. Intersection theorems for systems of sets. J. Lond. Math. Soc. 1960, 1, 85–90.
21. Gaifman, H. On Local and Non-Local Properties. In Studies in Logic and the Foundations of Mathematics; Elsevier: Amsterdam, The Netherlands, 1982; pp. 105–135.
Figure 1. An example graph H together with a tree decomposition for it, consisting of the tree T and the bag function B indicated using the small gray mapping arrows. A consistent numbering p is indicated in red.
Table 1. Membership of the problems discussed in this paper in the two classes para-AC⁰ and para-AC⁰↑ and the formulas whose syntactic structures prove this. Except for the clique problem, formulas exist where at least the number of strong variables is independent of the parameter, but it is not always the most "natural" formulas that have this desirable property, see the vertex cover problem.
$p_k$-MATCHING ∈ para-AC⁰
Problem: Does a graph have a size-k matching?
Formulas: $\exists \dot x_1 \cdots \exists \dot x_{2k}\,\bigl(\bigwedge_{i \neq j} \dot x_i \neq \dot x_j \land \bigwedge_{i=1}^{k} E\, \dot x_{2i-1}\, \dot x_{2i}\bigr)$, see Equation (1)
Descriptive complexity: strong-qr = 0 , strong-vars = 0
$p_k$-TRIANGLE-PACKING ∈ para-AC⁰
Problem: Does a graph contain k vertex-disjoint triangles?
Formulas: $\exists \dot x_1 \cdots \exists \dot x_{3k}\,\bigl(\bigwedge_{i \neq j} \dot x_i \neq \dot x_j \land \bigwedge_{j=1}^{k} \exists x \exists y \exists z\,(\dot x_{3j-2} = x \land \dot x_{3j-1} = y \land \dot x_{3j} = z \land Exy \land Exz \land Eyz)\bigr)$, see Equation (2)
Descriptive complexity: strong-qr = 3 , strong-vars = 3
$p_k$-$H$-PACKING ∈ para-AC⁰ for a fixed graph $H = (V(H), E(H))$ (instead of triangles)
Problem: Does a graph contain k vertex-disjoint induced copies of H?
Formulas: $\exists \dot x_1 \cdots \exists \dot x_{k|V(H)|}\,\bigl(\bigwedge_{i \neq j} \dot x_i \neq \dot x_j \land \bigwedge_{j=1}^{k} \exists y_1 \cdots \exists y_{|V(H)|}\,(\bigwedge_{i=1}^{|V(H)|} y_i = \dot x_{(j-1)|V(H)|+i} \land \bigwedge_{(p,q) \in E(H)} E\, y_p\, y_q \land \bigwedge_{(p,q) \notin E(H)} \neg E\, y_p\, y_q)\bigr)$
Descriptive complexity: strong-qr = | V ( H ) | , strong-vars = | V ( H ) |
$p_k$-LONG-PATH ∈ para-AC⁰↑
Problem: Does a graph contain a path of length k?
Formulas: s t x ( E s x x ˙ 1 ( x ˙ 1 = x y ( E x y ( x ˙ k = x i j x ˙ i x ˙ j ) ) ) )
Descriptive complexity: strong-qr = k + 2 , strong-vars = 4 , see Equation (5)
$p_k$-VERTEX-COVER ∈ para-AC⁰
Problem: Does a graph have a size-k vertex cover?
Formulas 1: $\exists x_1 \cdots \exists x_k\, \forall u\, \forall v\, \bigl(Euv \to \bigvee_{i} (x_i = u \lor x_i = v)\bigr)$
Descriptive complexity: strong-qr = k + 2 , strong-vars = k + 2
Formulas 2: Describe the Buss kernelization, see Theorem 5
Descriptive complexity: strong-qr = O ( 1 ) , strong-vars = O ( 1 )
$p_{k,d}$-HITTING-SET ∈ para-AC⁰
Problem: Does a hypergraph with maximum edge size d have a size-k hitting set?
Formulas: Describe a kernel based on pseudo-sunflowers, see Theorem 6
Descriptive complexity: strong-qr = O ( 1 ) , strong-vars = O ( 1 )
$p_{\psi,\delta}$-MC(FO) ∈ para-AC⁰↑
Problem: Is a graph with maximum degree at most δ a model of ψ?
Formulas: Describe the disjointness of balls in the formula resulting from Gaifman’s Theorem using weak variables, see Theorem 7
Descriptive complexity: strong-qr = O ( locality-rank ( ψ ) ) , strong-vars = O ( 1 )
$p_H$-EMB$_{\mathrm{td}(H) \le c}$ ∈ para-AC⁰ and $p_H$-EMB$_{\mathrm{tw}(H) \le c}$ ∈ para-AC⁰↑
Problem: Can H, of tree depth or width c, be embedded into a graph G?
Formulas: Bind the vertices of H’s image to weak variables and use a formula whose syntactic structure exactly mimics an optimal tree decomposition of H, see Theorem 8
Descriptive complexity: strong-qr = O ( td ( H ) ) , strong-vars = O ( tw ( H ) )
$p_k$-CLIQUE ∈ W[1]
Problem: Does a graph contain a k-clique?
Formulas 1: $\exists x_1 \cdots \exists x_k\,\bigl(\bigwedge_{i \neq j} x_i \neq x_j \land \bigwedge_{i \neq j} E\, x_i\, x_j\bigr)$, see Equation (3)
Formulas 2: $\exists x_1 \cdots \exists x_k\, \forall u\, \forall v\, \bigl(\bigvee_{i \neq j} (x_i = u \land x_j = v) \to Euv\bigr)$, see Equation (19)
Descriptive complexity: strong-qr = k , strong-vars = k in both cases