Article

Decision Trees for Binary Subword-Closed Languages

Mikhail Moshkov
Computer, Electrical and Mathematical Sciences & Engineering Division and Computational Bioscience Research Center, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
Entropy 2023, 25(2), 349; https://doi.org/10.3390/e25020349
Submission received: 17 January 2023 / Revised: 7 February 2023 / Accepted: 9 February 2023 / Published: 14 February 2023
(This article belongs to the Section Complexity)

Abstract

In this paper, we study arbitrary subword-closed languages over the alphabet {0, 1} (binary subword-closed languages). For the set L(n) of words of length n belonging to a binary subword-closed language L, we investigate the depth of decision trees solving the recognition and the membership problems deterministically and nondeterministically. In the case of the recognition problem, for a given word from L(n), we should recognize it using queries, each of which, for some i ∈ {1, …, n}, returns the i-th letter of the word. In the case of the membership problem, for a given word of length n over the alphabet {0, 1}, we should recognize whether it belongs to the set L(n) using the same queries. With the growth of n, the minimum depth of decision trees solving the recognition problem deterministically is either bounded from above by a constant, or grows as a logarithm, or grows linearly. For the other types of trees and problems (decision trees solving the recognition problem nondeterministically, and decision trees solving the membership problem deterministically and nondeterministically), with the growth of n, the minimum depth of decision trees is either bounded from above by a constant or grows linearly. We study the joint behavior of the minimum depths of these four types of decision trees and describe five complexity classes of binary subword-closed languages.

1. Introduction

In this paper, we study arbitrary binary languages (languages over the alphabet E = {0, 1}) that are subword closed: if a word w_1 u_1 w_2 … w_m u_m w_{m+1} belongs to a language, then the word u_1 … u_m also belongs to this language. Subword-closed languages have attracted the attention of researchers in the field of formal languages for many years [1,2,3,4,5].
For the set L(n) of words of length n belonging to a binary subword-closed language L, we investigate the depth of decision trees solving the recognition and the membership problems deterministically and nondeterministically. In the case of the recognition problem, for a given word from L(n), we should recognize it using queries, each of which, for some i ∈ {1, …, n}, returns the i-th letter of the word. In the case of the membership problem, for a given word of length n over the alphabet E, we should recognize whether it belongs to L(n) using the same queries.
For an arbitrary binary subword-closed language, with the growth of n, the minimum depth of decision trees solving the recognition problem deterministically is either bounded from above by a constant, or grows as a logarithm, or grows linearly. For the other types of trees and problems (decision trees solving the recognition problem nondeterministically, and decision trees solving the membership problem deterministically and nondeterministically), with the growth of n, the minimum depth of decision trees is either bounded from above by a constant or grows linearly. We study the joint behavior of the minimum depths of these four types of decision trees and describe five complexity classes of binary subword-closed languages.
In [6], the following results were announced without proof: for an arbitrary regular language, with the growth of n, (i) the minimum depth of decision trees solving the recognition problem deterministically is either bounded from above by a constant, or grows as a logarithm, or grows linearly, and (ii) the minimum depth of decision trees solving the recognition problem nondeterministically is either bounded from above by a constant or grows linearly. Proofs for the case of decision trees solving the recognition problem deterministically can be found in [7,8]. To apply these results to a given regular language, it is necessary to know a deterministic finite automaton (DFA) accepting this language.
Each subword-closed language over a finite alphabet is a regular language [3]. In this paper, we do not assume that binary subword-closed languages are given by DFAs, so we cannot use the results from [6,7,8]. Instead, for binary subword-closed languages, we describe simple criteria for the behavior of the minimum depths of decision trees solving the problems of recognition and membership deterministically and nondeterministically.
This paper is a theoretical work in the field of formal languages and has no direct applications. In the theory of formal languages, various parameters of languages are studied, in particular, the growth of the number of words in a language as the word length grows and, for regular languages, the minimum number of states of an automaton accepting the language. For many years, the author has been introducing new language parameters into scientific use: the minimum depths of deterministic and nondeterministic decision trees for the recognition and membership problems related to the language [6,7,8,9]. The present paper continues this line of research.
There is now an extensive collection of methods for constructing decision trees. It includes (i) a variety of greedy heuristics based on uncertainty measures such as entropy and the Gini index [10,11,12], (ii) exact optimization algorithms based on dynamic programming, branch-and-bound search, SAT-based methods, etc. [13,14,15,16], and (iii) approximate optimization algorithms with accuracy bounds that are applicable to obtaining theoretical results about the complexity of decision trees [8,17].
In this paper, we find simple combinatorial parameters of binary subword-closed languages that make it possible to obtain bounds on the depth of decision trees without using the effective but rather complicated methods developed in the monographs [8,17].
The rest of this paper is organized as follows. In Section 2, we consider the main notions; in Section 3, the main results; in Section 4, the proofs; and in Section 5, short conclusions.

2. Main Notions

Let ω = {0, 1, 2, …} be the set of nonnegative integers and E = {0, 1}. By E*, we denote the set of all finite words over the alphabet E, including the empty word λ, and by E*(n) the set of words from E* of length n. Any subset L of the set E* is called a binary language. This language is called subword closed if, for any word w_1 u_1 w_2 … w_m u_m w_{m+1} belonging to L, the word u_1 … u_m belongs to L, where w_i, u_j ∈ E*, i = 1, …, m + 1, j = 1, …, m. For any natural n, we denote by L(n) the set of words from L of length n. We consider two problems related to the set L(n). The problem of recognition: for a given word from L(n), we should recognize it using attributes (queries) l_1^n, …, l_n^n, where l_i^n, i ∈ {1, …, n}, is the function from E*(n) to E such that l_i^n(a_1 … a_n) = a_i for any word a_1 … a_n ∈ E*(n). The problem of membership: for a given word from E*(n), we should recognize whether this word belongs to the set L(n) using the same attributes. To solve these problems, we use decision trees over L(n).
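The query model admits a direct brute-force simulation; the following Python sketch is illustrative only (the helper names l, E_star, and L0 are ours, and the language L_0 = {1^i 0^j : i, j ∈ ω} is taken from Example 1 in Section 3):

```python
from itertools import product

def l(i, n):
    """Attribute l_i^n: maps a word a_1 ... a_n from E*(n) to its i-th letter."""
    return lambda w: w[i - 1]  # words are strings over "01"; queries are 1-indexed

def E_star(n):
    """All words of E*(n)."""
    return ["".join(p) for p in product("01", repeat=n)]

def L0(n):
    """L_0(n) for L_0 = {1^i 0^j : i, j in omega}: words where no 0 precedes a 1."""
    return [w for w in E_star(n) if "01" not in w]

# A decision tree may inspect a word only through such queries, e.g.:
assert l(3, 4)("1100") == "0"
```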
A decision tree over L(n) is a marked finite directed tree with a root that has the following properties:
  • The root and the edges leaving the root are not labeled.
  • Each node that is neither the root nor a terminal node is labeled with an attribute from the set {l_1^n, …, l_n^n}.
  • Each edge leaving a node that is not the root is labeled with a number from E.
A decision tree over L(n) is called deterministic if it satisfies the following conditions:
  • Exactly one edge leaves the root.
  • For any node that is neither the root nor a terminal node, the edges leaving this node are labeled with pairwise different numbers.
Let Γ be a decision tree over L(n). A complete path in Γ is any sequence ξ = v_0, e_0, …, v_m, e_m, v_{m+1} of nodes and edges of Γ such that v_0 is the root, v_{m+1} is a terminal node, and, for i = 0, …, m, the node v_i is the initial and v_{i+1} is the terminal node of the edge e_i. We define a subset E(n, ξ) of the set E*(n) in the following way: if m = 0, then E(n, ξ) = E*(n). Let m > 0, the attribute l_{i_j}^n be assigned to the node v_j, and b_j be the number assigned to the edge e_j, j = 1, …, m. Then,
E(n, ξ) = {a_1 … a_n ∈ E*(n) : a_{i_1} = b_1, …, a_{i_m} = b_m}.
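For example, if n = 3, m = 1, the attribute l_2^3 is assigned to the node v_1, and the number 1 is assigned to the edge e_1, then E(3, ξ) = {a_1 a_2 a_3 ∈ E*(3) : a_2 = 1} = {010, 011, 110, 111}.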
Let L(n) ≠ ∅. We say that a decision tree Γ over L(n) solves the problem of recognition for L(n) nondeterministically if Γ satisfies the following conditions:
  • Each terminal node of Γ is labeled with a word from L(n).
  • For any word w ∈ L(n), there exists a complete path ξ in the tree Γ such that w ∈ E(n, ξ).
  • For any word w ∈ L(n) and for any complete path ξ in the tree Γ such that w ∈ E(n, ξ), the terminal node of the path ξ is labeled with the word w.
We say that a decision tree Γ over L(n) solves the problem of recognition for L(n) deterministically if Γ is a deterministic decision tree that solves the problem of recognition for L(n) nondeterministically.
We say that a decision tree Γ over L(n) solves the problem of membership for L(n) nondeterministically if Γ satisfies the following conditions:
  • Each terminal node of Γ is labeled with a number from E.
  • For any word w ∈ E*(n), there exists a complete path ξ in the tree Γ such that w ∈ E(n, ξ).
  • For any word w ∈ E*(n) and for any complete path ξ in the tree Γ such that w ∈ E(n, ξ), the terminal node of the path ξ is labeled with the number 1 if w ∈ L(n) and with the number 0 otherwise.
We say that a decision tree Γ over L(n) solves the problem of membership for L(n) deterministically if Γ is a deterministic decision tree that solves the problem of membership for L(n) nondeterministically.
Let Γ be a decision tree over L(n). We denote by h(Γ) the maximum number of nodes in a complete path in Γ that are neither the root nor the terminal node. The value h(Γ) is called the depth of the decision tree Γ.
We denote by h_L^{ra}(n) (respectively, h_L^{rd}(n)) the minimum depth of a decision tree that solves the problem of recognition for L(n) nondeterministically (respectively, deterministically). If L(n) = ∅, then h_L^{ra}(n) = h_L^{rd}(n) = 0.
We denote by h_L^{ma}(n) (respectively, h_L^{md}(n)) the minimum depth of a decision tree that solves the problem of membership for L(n) nondeterministically (respectively, deterministically). If L(n) = ∅, then h_L^{ma}(n) = h_L^{md}(n) = 0.

3. Main Results

Let L be a binary subword-closed language. For any a ∈ E and i ∈ ω, we denote by a^i the word a … a of length i (if i = 0, then a^i = λ). For any a ∈ E, let ā = 1 if a = 0 and ā = 0 if a = 1.
We define the parameter Hom(L) of the language L, which is called the homogeneity dimension of the language L. If, for each natural number m, there exists a ∈ E such that the word a^m ā a^m belongs to L, then Hom(L) = ∞. Otherwise, Hom(L) is the maximum number m ∈ ω such that there exists a ∈ E for which the word a^m ā a^m belongs to L. If L = ∅, then Hom(L) = 0.
We now define the parameter Het(L) of the language L, which is called the heterogeneity dimension of the language L. If, for each natural number m, there exists a ∈ E such that the word a^m ā^m belongs to L, then Het(L) = ∞. Otherwise, Het(L) is the maximum number m ∈ ω such that there exists a ∈ E for which the word a^m ā^m belongs to L. If L = ∅, then Het(L) = 0.
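When L is given by a membership predicate, both dimensions can be estimated by direct search up to a cut-off. The following sketch is illustrative (the names hom_dim, het_dim, and in_L are ours); note that a bounded search can witness a finite maximum only up to the cut-off and can merely suggest, never prove, that a dimension is infinite:

```python
def hom_dim(in_L, bound):
    """Largest m <= bound such that a^m ā a^m belongs to L for some a in E.
    Returning `bound` signals that Hom(L) may be infinite."""
    best = 0
    for m in range(bound + 1):
        if in_L("0" * m + "1" + "0" * m) or in_L("1" * m + "0" + "1" * m):
            best = m
    return best

def het_dim(in_L, bound):
    """Largest m <= bound such that a^m ā^m belongs to L for some a in E."""
    best = 0
    for m in range(bound + 1):
        if in_L("0" * m + "1" * m) or in_L("1" * m + "0" * m):
            best = m
    return best

# For L_0 = {1^i 0^j : i, j in omega} from Example 1 below:
in_L0 = lambda w: "01" not in w
assert hom_dim(in_L0, 50) == 0    # Hom(L_0) = 0
assert het_dim(in_L0, 50) == 50   # Het(L_0) is infinite: the search hits the cut-off
```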
Theorem 1. 
Let L be a binary subword-closed language.
(a) If Hom(L) = ∞, then h_L^{rd}(n) = Θ(n) and h_L^{ra}(n) = Θ(n).
(b) If Hom(L) < ∞ and Het(L) = ∞, then h_L^{rd}(n) = Θ(log n) and h_L^{ra}(n) = O(1).
(c) If Hom(L) < ∞ and Het(L) < ∞, then h_L^{rd}(n) = O(1) and h_L^{ra}(n) = O(1).
Example 1. 
Let us consider the binary subword-closed language L_0 = {1^i 0^j : i, j ∈ ω}. One can show that Hom(L_0) = 0 and Het(L_0) = ∞. By Theorem 1, h_{L_0}^{rd}(n) = Θ(log n) and h_{L_0}^{ra}(n) = O(1).
For a binary subword-closed language L, we denote by L^C its complementary language E* \ L. The notation |L| = ∞ means that L is an infinite language, and the notation |L| < ∞ means that L is a finite language.
Theorem 2. 
Let L be a binary subword-closed language.
(a) If |L| = ∞ and L^C ≠ ∅, then h_L^{md}(n) = Θ(n) and h_L^{ma}(n) = Θ(n).
(b) If |L| < ∞ or L^C = ∅, then h_L^{md}(n) = O(1) and h_L^{ma}(n) = O(1).
Example 2. 
One can show that, for the binary subword-closed language L_0 = {1^i 0^j : i, j ∈ ω} considered in Example 1, |L_0| = ∞ and L_0^C ≠ ∅ (for instance, 01 ∈ L_0^C). By Theorem 2, h_{L_0}^{md}(n) = Θ(n) and h_{L_0}^{ma}(n) = Θ(n).
To study all possible types of joint behavior of the functions h_L^{rd}(n), h_L^{ra}(n), h_L^{md}(n), and h_L^{ma}(n) for binary subword-closed languages L, we consider five classes of languages ℒ_1, …, ℒ_5, described in columns 2–5 of Table 1. In particular, ℒ_1 consists of all binary subword-closed languages L with Hom(L) = ∞ and L^C ≠ ∅. It is easy to show that the complexity classes ℒ_1, …, ℒ_5 are pairwise disjoint and that each binary subword-closed language belongs to one of these classes. The behavior of the functions h_L^{rd}(n), h_L^{ra}(n), h_L^{md}(n), and h_L^{ma}(n) for languages from these classes is described in the last four columns of Table 1. For each class, the results given in Table 1 follow from Theorems 1 and 2 and the following three remarks: (i) the condition Hom(L) = ∞ implies |L| = ∞; (ii) the condition Het(L) = ∞ implies |L| = ∞; and (iii) the condition Hom(L) < ∞ implies L^C ≠ ∅.
We now show that the classes ℒ_1, …, ℒ_5 are nonempty. To this end, we consider the following five binary subword-closed languages:
L_1 = {0^i 1 0^j, 0^i : i, j ∈ ω}, L_2 = E*, L_3 = {0^i 1^j : i, j ∈ ω}, L_4 = {0^i : i ∈ ω}, L_5 = {0}.
It is easy to see that L_i ∈ ℒ_i for i = 1, …, 5.

4. Proofs of Theorems 1 and 2

In this section, we prove Theorems 1 and 2. First, we consider two auxiliary statements. For a word w, we denote by |w| its length.
Lemma 1. 
Let L be a binary subword-closed language for which Hom(L) < ∞. Then, any word w from L can be represented in the form
w_1 a^i w_2 ā^j w_3,     (1)
where a ∈ E, i, j ∈ ω, and w_1, w_2, w_3 are words from E* of length at most 2 Hom(L) each.
Proof. 
Denote m = Hom(L). Then, the words 0^{m+1} 1 0^{m+1} and 1^{m+1} 0 1^{m+1} do not belong to L. Let w be a word from L. Then, for any a ∈ E, any entry of the letter a in w has at most m letters ā to the left of this entry (we call such an entry an l-entry of a) or at most m letters ā to the right of this entry (an r-entry of a). Let a ∈ E. We say that w is (i) an a-l-word if every entry of a in w is an l-entry; (ii) an a-r-word if every entry of a in w is an r-entry; and (iii) an a-b-word if w is neither an a-l-word nor an a-r-word. Let c, d ∈ {l, r, b}. We say that w is a cd-word if w is a 0-c-word and a 1-d-word. There are nine possible pairs cd. We divide them into four groups: (a) ll and rr; (b) lr and rl; (c) lb, rb, bl, and br; and (d) bb, and consider these groups separately. Let
w = a_1 … a_n.
We assume that w contains both 0s and 1s; otherwise, w can be represented in the form (1).
(a) Let w be an ll-word. Let a_n = 0 and let a_i be the rightmost entry of 1 in w. Because w is an ll-word, there are at most m 1s to the left of a_n and at most m 0s to the left of a_i. Denote w_1 = a_1 … a_i. Then, w_1 contains at most m 0s and at most m 1s, i.e., the length of w_1 is at most 2m. Moreover, to the right of a_i, there are only 0s. Thus, w = w_1 0^{n−i}, where |w_1| = i ≤ 2m, i.e., w can be represented in the form (1).
Let a_n = 1 and let a_i be the rightmost entry of 0 in w. Denote w_1 = a_1 … a_i. Then, w_1 contains at most m 0s and at most m 1s, i.e., |w_1| ≤ 2m. Moreover, to the right of a_i, there are only 1s. Thus, w = w_1 1^{n−i}, i.e., w can be represented in the form (1).
One can prove in a similar way that any rr-word can be represented in the form (1).
(b) Let w be an lr-word, a_i be the rightmost entry of 0, and a_j be the leftmost entry of 1. Then, either j = i + 1 or j < i. Let j = i + 1. Then, w = 0^i 1^{n−i}, i.e., w can be represented in the form (1). Let now j < i. Denote w_2 = a_j … a_i. The word w has at most m 0s to the right of a_j and at most m 1s to the left of a_i. Therefore, |w_2| ≤ 2m and w = 0^{j−1} w_2 1^{n−i}, i.e., w can be represented in the form (1).
One can prove in a similar way that any rl-word can be represented in the form (1).
(c) Let w be an lb-word, a_i be the rightmost entry of 1 such that there are at most m 0s to the left of this entry, and a_j be the next entry of 1 after a_i. It is clear that there are at most m 0s to the right of a_j, that j ≥ i + 2, and that the letters a_{i+1}, …, a_{j−1} are all equal to 0. Let a_k be the rightmost entry of 0. Then, there are at most m 1s to the left of a_k. It is clear that either k = j − 1 or k > j. Denote w_1 = a_1 … a_i. Then, |w_1| ≤ 2m. Let k = j − 1. In this case, w = w_1 0^{j−i−1} 1^{n−j+1}, i.e., w can be represented in the form (1). Let k > j. Denote w_2 = a_j … a_k. Then, |w_2| ≤ 2m. We have w = w_1 0^{j−i−1} w_2 1^{n−k}, i.e., w can be represented in the form (1).
One can prove in a similar way that any rb-, bl-, or br-word can be represented in the form (1).
(d) Let w be a bb-word, a_i be the rightmost entry of 0 such that there are at most m 1s to the left of this entry, and a_j be the next entry of 0 after a_i. Then, there are at most m 1s to the right of a_j, j ≥ i + 2, and w = a_1 … a_i 1 … 1 a_j … a_n. Denote A = {1, …, i}, B = {i + 1, …, j − 1}, and C = {j, …, n}. Let a_k be the rightmost entry of 1 such that there are at most m 0s to the left of this entry, and let a_l be the next entry of 1 after a_k. Then, there are at most m 0s to the right of a_l, l ≥ k + 2, and w = a_1 … a_k 0 … 0 a_l … a_n.
There are four possible types of location of a_k and a_l: (i) k ∈ A and l ∈ A; (ii) k ∈ A and l ∈ B (the combination k ∈ A and l ∈ C is impossible because all letters with indices from B are 1s, but all letters between a_k and a_l are 0s); (iii) k ∈ B and l ∈ C (the combination k ∈ B and l ∈ B is impossible for the same reason); and (iv) k ∈ C and l ∈ C. We now consider cases (i)–(iv) in detail.
(i) Let k ∈ A and l ∈ A. Then, w = a_1 … a_k 0 … 0 a_l … a_i 1 … 1 a_j … a_n. Denote w_1 = a_1 … a_k, w_2 = a_l … a_i, and w_3 = a_j … a_n. The length of w_1 is at most 2m because, to the left of a_k, there are at most m 0s and, to the left of a_i, there are at most m 1s. We can prove in a similar way that |w_2| ≤ 2m and |w_3| ≤ 2m. Therefore, w can be represented in the form (1).
(ii) Let k ∈ A and l ∈ B. Then, l = i + 1 and
w = a_1 … a_k 0 … 0 a_i a_{i+1} 1 … 1 a_j … a_n,
where a_i = 0 and a_{i+1} = 1. Denote w_1 = a_1 … a_k and w_3 = a_j … a_n. It is easy to show that |w_1| ≤ 2m and |w_3| ≤ 2m. Therefore, w can be represented in the form (1).
(iii) Let k ∈ B and l ∈ C. Then, k = j − 1 and
w = a_1 … a_i 1 … 1 a_{j−1} a_j 0 … 0 a_l … a_n,
where a_{j−1} = 1 and a_j = 0. Denote w_1 = a_1 … a_i and w_3 = a_l … a_n. It is easy to show that |w_1| ≤ 2m and |w_3| ≤ 2m. Therefore, w can be represented in the form (1).
(iv) Let k ∈ C and l ∈ C. Then, w = a_1 … a_i 1 … 1 a_j … a_k 0 … 0 a_l … a_n. Denote w_1 = a_1 … a_i, w_2 = a_j … a_k, and w_3 = a_l … a_n. It is easy to show that |w_1| ≤ 2m, |w_2| ≤ 2m, and |w_3| ≤ 2m. Therefore, w can be represented in the form (1). □
Lemma 2. 
Let L be a binary subword-closed language for which Hom(L) < ∞ and Het(L) < ∞. Then, there exists a natural number p such that |L(n)| ≤ p for any natural n.
Proof. 
Denote m = max(Hom(L), Het(L)). Then, the words 0^{m+1} 1^{m+1} and 1^{m+1} 0^{m+1} do not belong to L. Using Lemma 1, we obtain that each word w from L can be represented in the form w_1 a^i w_2 ā^j w_3, where a ∈ E, the length of w_k is at most t = 2m for k = 1, 2, 3, i, j ∈ ω, and i ≤ m or j ≤ m (if we had i ≥ m + 1 and j ≥ m + 1, then the word a^{m+1} ā^{m+1}, which does not belong to L, would be a subword of w). We now bound the number of such words of length n. Let k ∈ {1, 2, 3}. Then, the number of different words w_k is at most 2^0 + 2^1 + … + 2^t < 2^{t+1}. Let us assume that the words w_1, w_2, and w_3 are fixed and |w_1| + |w_2| + |w_3| ≤ n. Then, the number of different words a^i ā^j of length n − |w_1| − |w_2| − |w_3| is at most 4(m + 1) because i ≤ m or j ≤ m. Thus, the number of words in L(n) is at most p = 2^{3t+3}(2t + 4). □
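For concreteness, the count above can be assembled into p in one line (with t = 2m, so that 4(m + 1) = 2t + 4):

```latex
|L(n)| \le \underbrace{\left(2^{0} + 2^{1} + \cdots + 2^{t}\right)^{3}}_{\text{choices of } w_1, w_2, w_3}
\cdot \underbrace{4(m+1)}_{\text{choices of } a^{i}\bar{a}^{j}}
< \left(2^{t+1}\right)^{3} (2t+4) = 2^{3t+3}(2t+4) = p.
```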
Proof of Theorem 1. 
It is clear that h_L^{ra}(n) ≤ h_L^{rd}(n) for any natural n.
(a) Let Hom(L) = ∞ and let n be a natural number. Then, there exists a ∈ E such that a^n ā a^n ∈ L. Therefore, a^n ∈ L(n) and a^i ā a^{n−i−1} ∈ L(n) for i = 0, …, n − 1. Let Γ be a decision tree over L(n) that solves the problem of recognition for L(n) nondeterministically and has the minimum depth h_L^{ra}(n), and let ξ be a complete path in Γ such that a^n ∈ E(n, ξ). Let us assume that there is i ∈ {0, …, n − 1} such that the attribute l_{i+1}^n is not attached to any node of ξ that is neither the root nor the terminal node. Then, a^i ā a^{n−i−1} ∈ E(n, ξ), which is impossible: the terminal node of ξ would have to be labeled with both a^n and a^i ā a^{n−i−1}. Therefore, h(Γ) ≥ n and h_L^{ra}(n) ≥ n. It is easy to show that h_L^{rd}(n) ≤ n. Thus, h_L^{ra}(n) = h_L^{rd}(n) = n for any natural n.
(b) Let Hom(L) < ∞ and Het(L) = ∞. By Lemma 1, each word from L can be represented in the form w_1 a^i w_2 ā^j w_3, where a ∈ E, the length of w_k is at most t = 2 Hom(L) for k = 1, 2, 3, and i, j ∈ ω. Note that we may assume that either w_2 = λ or w_2 begins with ā and ends with a: otherwise, boundary letters of w_2 can be moved into a^i or ā^j.
Let n be a natural number such that n ≥ 10t. We now describe the work of a decision tree over L(n) that solves the problem of recognition for L(n) deterministically. Let w ∈ L(n). We represent this word as w = L_1 L_2 L_3 A R_3 R_2 R_1, where each of the words L_1, L_2, L_3, R_3, R_2, R_1 has length t. First, we recognize all letters in the words L_1, L_2, R_2, R_1 using 4t queries (attributes). We now consider four cases.
(i) Let L_2 = R_2 = a^t for some a ∈ E. Then, L_3 A R_3 = a^{n−4t}, and the word w is recognized.
(ii) Let L_2 = a^t for some a ∈ E, and let R_2 contain both 0 and 1. Then, R_2 has a nonempty intersection with the word w_2. It is clear that w_2 has no intersection with the word A and that L_3 A = a^{n−5t}. We recognize all letters of the word R_3. As a result, the word w is recognized.
(iii) Let R_2 = a^t for some a ∈ E, and let L_2 contain both 0 and 1. Then, L_2 has a nonempty intersection with the word w_2. It is clear that w_2 has no intersection with the word A and that A R_3 = a^{n−5t}. We recognize all letters of the word L_3. As a result, the word w is recognized.
(iv) Let L_2 = a^t and R_2 = ā^t for some a ∈ E. Then, we need to recognize both the word w_2 and its position. Beginning from the left, we divide L_3 A R_3 and, possibly, a prefix of R_2 into blocks of length t. As a result, we have k ≤ n/t blocks. We recognize all letters in the block with the number r = ⌈k/2⌉. If all letters in this block are equal to ā, then we apply the same procedure to the blocks with numbers 1, …, r − 1. If all letters in this block are equal to a, then we apply the same procedure to the blocks with numbers r + 1, …, k. If the considered block contains both 0 and 1, then we recognize the t letters before this block and the t letters after it and, as a result, recognize both the word w_2 and its position. After each iteration, the number of blocks under consideration is at most one-half of the previous number. Let q be the total number of iterations. Then, after iteration q − 1, at least one unchecked block remains. Therefore, k/2^{q−1} ≥ 1 and q ≤ log_2 k + 1.
In case (i), to recognize the word w, we make 4t queries. In cases (ii) and (iii), we make 5t queries. In case (iv), we make at most t log_2(n/t) + 7t queries. As a result, we have h_L^{rd}(n) = O(log n).
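The search in case (iv) is an ordinary binary search over length-t blocks. The following Python sketch is illustrative (it assumes a query function returning the i-th letter of the hidden word and, for simplicity, lets the blocks cover all positions 1, …, n rather than only L_3 A R_3 and a prefix of R_2):

```python
def find_mixed_block(query, n, t, a):
    """Binary search for a length-t block containing the boundary word w_2,
    given that the hidden word drifts from the letter a (left) to ā (right).
    query(i) returns the i-th letter (1-indexed); returns (start, end) of the
    mixed block, or None. Spends t queries per iteration, O(t log(n/t)) total."""
    abar = "1" if a == "0" else "0"
    blocks = [(s, min(s + t - 1, n)) for s in range(1, n + 1, t)]
    lo, hi = 0, len(blocks) - 1
    while lo <= hi:
        r = (lo + hi) // 2
        s, e = blocks[r]
        letters = {query(i) for i in range(s, e + 1)}
        if letters == {a}:
            lo = r + 1   # block is homogeneous in a: w_2 lies to the right
        elif letters == {abar}:
            hi = r - 1   # block is homogeneous in ā: w_2 lies to the left
        else:
            return s, e  # block mixes 0 and 1: it intersects w_2
    return None

# Example: w = 0^10 w_2 1^12 with w_2 = "10"; the mixed block is (10, 12).
w = "0" * 10 + "10" + "1" * 12
assert find_mixed_block(lambda i: w[i - 1], len(w), 3, "0") == (10, 12)
```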
Because Het(L) = ∞, for any natural n, the set L(n) contains, for some a ∈ E, the words a^i ā^{n−i} for i = 0, …, n. Then, |L(n)| ≥ n + 1, and each decision tree Γ over L(n) solving the problem of recognition for L(n) deterministically has at least n + 1 terminal nodes. One can show that the number of terminal nodes in Γ is at most 2^{h(Γ)}. Therefore, h(Γ) ≥ log_2(n + 1). Thus, h_L^{rd}(n) = Ω(log n) and, hence, h_L^{rd}(n) = Θ(log n).
We now prove that h_L^{ra}(n) = O(1). To this end, it is enough to show that there is a natural number c such that, for each natural n and each word w ∈ L(n), there exists a subset B_w of the set of attributes {l_1^n, …, l_n^n} such that |B_w| ≤ c and, for any word u ∈ L(n) different from w, there exists an attribute l_i^n ∈ B_w for which l_i^n(w) ≠ l_i^n(u). We now show that we can take c = 7t. In case (i), as the set B_w, we can choose all attributes corresponding to the 4t letters of the subwords L_1, L_2, R_2, and R_1. In case (ii), we can choose all attributes corresponding to the 5t letters of the subwords L_1, L_2, R_3, R_2, and R_1. In case (iii), we can choose all attributes corresponding to the 5t letters of the subwords L_1, L_2, L_3, R_2, and R_1. In case (iv), as the set B_w, we can choose all attributes corresponding to the 4t letters of the subwords L_1, L_2, R_2, and R_1 and to the 3t letters of the block containing both 0 and 1 and of its left and right neighbor blocks.
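In other words, B_w is a constant-size certificate identifying w within L(n), and the nondeterministic decision tree has, for each w ∈ L(n), one path that queries exactly the positions of B_w. The certificate property can be checked by brute force for small n (an illustrative helper, assuming L(n) is given as a list of strings):

```python
def is_certificate(w, positions, Ln):
    """True iff the letters of w at `positions` (1-indexed) distinguish w
    from every other word of L(n)."""
    return all(any(u[i - 1] != w[i - 1] for i in positions)
               for u in Ln if u != w)

# For L_0(4) = {1^i 0^(4-i)}: two well-chosen positions identify "1100".
Ln = ["1" * i + "0" * (4 - i) for i in range(5)]
assert is_certificate("1100", [2, 3], Ln)
```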
(c) Let Hom(L) < ∞ and Het(L) < ∞. By Lemma 2, there exists a natural number p such that |L(n)| ≤ p for any natural n. Let n be a natural number. Then, the set L(n) contains at most p words, and there exists a subset B of the set of attributes {l_1^n, …, l_n^n} such that |B| ≤ p^2 and, for any two different words u, w ∈ L(n), there exists an attribute l_i^n ∈ B for which l_i^n(w) ≠ l_i^n(u): it suffices to include in B one separating attribute for each of the at most p^2 pairs of distinct words from L(n). It is easy to construct a decision tree over L(n) that solves the problem of recognition for L(n) deterministically by sequentially computing the attributes from B. The depth of this tree is at most p^2. Therefore, h_L^{rd}(n) = O(1) and h_L^{ra}(n) = O(1). □
Proof of Theorem 2. 
It is clear that h_L^{ma}(n) ≤ h_L^{md}(n) for any natural n.
(a) Let |L| = ∞ and L^C ≠ ∅, and let w_0 be a word of minimum length in L^C. Because L is subword closed and |L| = ∞, we have L(n) ≠ ∅ for any natural n. Let n be a natural number such that n > |w_0|, and let Γ be a decision tree over L(n) that solves the problem of membership for L(n) nondeterministically and has the minimum depth. Let w ∈ L(n), and let ξ be a complete path in Γ such that w ∈ E(n, ξ). Then, the terminal node of ξ is labeled with the number 1. Let us assume that the number of nodes in ξ labeled with attributes is at most n − |w_0|. Then, we can change at most |w_0| letters in the word w so that the obtained word w′ satisfies the following conditions: w_0 is a subword of w′ and w′ ∈ E(n, ξ). However, this is impossible: in this case, w′ ∉ L(n) (since w_0 ∉ L and L is subword closed) and w′ ∈ E(n, ξ), but the terminal node of ξ is labeled with the number 1. Therefore, the depth of Γ is greater than n − |w_0|. Thus, h_L^{ma}(n) = Ω(n). It is easy to construct a decision tree over L(n) that solves the problem of membership for L(n) deterministically and has depth equal to n. Therefore, h_L^{md}(n) = O(n). Thus, h_L^{md}(n) = Θ(n) and h_L^{ma}(n) = Θ(n).
(b) Let |L| < ∞. Then, there exists a natural number m such that L(n) = ∅ for any natural n ≥ m. Therefore, for each natural n ≥ m, h_L^{md}(n) = 0 and h_L^{ma}(n) = 0.
Let L^C = ∅, let n be a natural number, and let Γ be the decision tree over L(n) that consists of the root, a terminal node labeled with the number 1, and an edge that leaves the root and enters the terminal node. One can show that Γ solves the problem of membership for L(n) deterministically and has depth equal to 0. Therefore, h_L^{md}(n) = 0 and h_L^{ma}(n) = 0. □

5. Conclusions

In this paper, we studied arbitrary binary subword-closed languages. For the set L(n) of words of length n belonging to a binary subword-closed language L, we investigated the depth of decision trees solving the recognition and the membership problems deterministically and nondeterministically. We proved that, with the growth of n, the minimum depth of decision trees solving the recognition problem deterministically is either bounded from above by a constant, or grows as a logarithm, or grows linearly. For the other types of trees and problems, with the growth of n, the minimum depth of decision trees is either bounded from above by a constant or grows linearly. We also studied the joint behavior of the minimum depths of the four considered types of decision trees and described five complexity classes of binary subword-closed languages.
In this paper, we did not assume that a binary subword-closed language is given by a deterministic finite automaton accepting it, so we could not use the parameters of such an automaton to study decision tree complexity, as was done in [6,7,8,9]. Instead, for binary subword-closed languages, we described simple combinatorial criteria for the behavior of the minimum depths of decision trees solving the problems of recognition and membership deterministically and nondeterministically.
In the future, we are planning to generalize this approach to some other classes of formal languages.

Funding

Research funded by the King Abdullah University of Science and Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Research reported in this publication was supported by the King Abdullah University of Science and Technology (KAUST).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Atminas, A.; Lozin, V.V. Deciding Atomicity of Subword-Closed Languages. In Developments in Language Theory: 26th International Conference, DLT 2022, Tampa, FL, USA, 9–13 May 2022; Lecture Notes in Computer Science; Diekert, V., Volkov, M.V., Eds.; Springer: Berlin/Heidelberg, Germany, 2022; Volume 13257, pp. 69–77. [Google Scholar]
  2. Brzozowski, J.A.; Jirásková, G.; Zou, C. Quotient Complexity of Closed Languages. Theory Comput. Syst. 2014, 54, 277–292. [Google Scholar] [CrossRef]
  3. Haines, L.H. On Free Monoids Partially Ordered by Embedding. J. Comb. Theory 1969, 6, 94–98. [Google Scholar] [CrossRef]
  4. Hospodár, M. Power, positive closure, and quotients on convex languages. Theor. Comput. Sci. 2021, 870, 53–74. [Google Scholar] [CrossRef]
  5. Okhotin, A. On the State Complexity of Scattered Substrings and Superstrings. Fundam. Inform. 2010, 99, 325–338. [Google Scholar] [CrossRef]
  6. Moshkov, M. Complexity of Deterministic and Nondeterministic Decision Trees for Regular Language Word Recognition. In Proceedings of the 3rd International Conference Developments in Language Theory (DLT 1997), Thessaloniki, Greece, 20–23 July 1997; Bozapalidis, S., Ed.; Aristotle University of Thessaloniki: Thessaloniki, Greece, 1997; pp. 343–349. [Google Scholar]
  7. Moshkov, M. Decision Trees for Regular Language Word Recognition. Fundam. Inform. 2000, 41, 449–461. [Google Scholar] [CrossRef]
  8. Moshkov, M. Time Complexity of Decision Trees. In Trans. Rough Sets III; Peters, J.F., Skowron, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3400, pp. 244–459. [Google Scholar]
  9. Moshkov, M. Decision trees for regular factorial languages. Array 2022, 15, 100203. [Google Scholar] [CrossRef]
  10. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 1984. [Google Scholar]
  11. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: Burlington, MA, USA, 1993. [Google Scholar]
  12. Rokach, L.; Maimon, O. Data Mining with Decision Trees-Theory and Applications; Series in Machine Perception and Artificial Intelligence; World Scientific: Singapore, 2007; Volume 69. [Google Scholar]
  13. AbouEisha, H.; Amin, T.; Chikalov, I.; Hussain, S.; Moshkov, M. Extensions of Dynamic Programming for Combinatorial Optimization and Data Mining; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2019; Volume 146. [Google Scholar]
  14. Aglin, G.; Nijssen, S.; Schaus, P. Learning optimal decision trees using caching branch-and-bound search. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 3146–3153. [Google Scholar]
  15. Narodytska, N.; Ignatiev, A.; Pereira, F.; Marques-Silva, J. Learning optimal decision trees with SAT. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 1362–1368. [Google Scholar]
  16. Verwer, S.; Zhang, Y. Learning optimal classification trees using a binary linear program formulation. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, Washington, DC, USA, 7–14 February 2019; pp. 1625–1632. [Google Scholar]
  17. Moshkov, M. Comparative Analysis of Deterministic and Nondeterministic Decision Trees; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2020; Volume 179. [Google Scholar]
Table 1. Joint behavior of the functions h_L^{rd}, h_L^{ra}, h_L^{md}, and h_L^{ma} for binary subword-closed languages.
        Hom(L)   Het(L)   |L|    L^C    h_L^{rd}    h_L^{ra}   h_L^{md}   h_L^{ma}
ℒ_1     ∞                        ≠ ∅    Θ(n)        Θ(n)       Θ(n)       Θ(n)
ℒ_2     ∞                        = ∅    Θ(n)        Θ(n)       O(1)       O(1)
ℒ_3     < ∞      ∞                      Θ(log n)    O(1)       Θ(n)       Θ(n)
ℒ_4     < ∞      < ∞      ∞             O(1)        O(1)       Θ(n)       Θ(n)
ℒ_5     < ∞      < ∞      < ∞           O(1)        O(1)       O(1)       O(1)
