Article

Finite Automata Capturing Winning Sequences for All Possible Variants of the PQ Penny Flip Game

1 Department of Informatics, Ionian University, 7 Tsirigoti Square, Corfu 49100, Greece
2 Department of History and Philosophy of Sciences, National and Kapodistrian University of Athens, Athens 15771, Greece
* Authors to whom correspondence should be addressed.
Mathematics 2018, 6(2), 20; https://doi.org/10.3390/math6020020
Submission received: 29 October 2017 / Revised: 17 January 2018 / Accepted: 26 January 2018 / Published: 1 February 2018
(This article belongs to the Special Issue Mathematical Game Theory)

Abstract

The meticulous study of finite automata has produced many important and useful results. Automata are simple yet efficient finite state machines that can be utilized in a plethora of situations. It comes, therefore, as no surprise that they have been used in classic game theory in order to model players and their actions. Game theory has recently been influenced by ideas from the field of quantum computation. As a result, quantum versions of classic games have already been introduced and studied. The PQ penny flip game is a famous quantum game introduced by Meyer in 1999. In this paper, we investigate all possible finite games that can be played between the two players Q and Picard of the original PQ game. For this purpose, we establish a rigorous connection between finite automata and the PQ game along with all its possible variations. Starting from the automaton that corresponds to the original game, we construct more elaborate automata for certain extensions of the game, before finally presenting a semiautomaton that captures the intrinsic behavior of all possible variants of the PQ game. What this means is that, from the semiautomaton in question, by setting appropriate initial and accepting states, one can construct deterministic automata able to capture every possible finite game that can be played between the two players Q and Picard. Moreover, we introduce the new concepts of a winning automaton and complete automaton for either player.

1. Introduction

Game theory studies conflict and cooperation between rational players. To this end, a sophisticated mathematical machinery has been developed that facilitates this reasoning. There are numerous textbooks that can serve as an excellent introduction to this field. In this paper, we shall use just a few fundamental concepts and we refer to [1,2] as accessible and user-friendly references, whereas [3] is a more rigorous exposition. The landmark work “Theory of Games and Economic Behavior” [4] by John von Neumann and Oskar Morgenstern is usually credited as being the one responsible for the creation of this field. Since then, game theory has been broadly investigated due to its numerous applications, both in theory and practice. It would not be an exaggeration to claim that today the use of game theory is pervasive in economics, political and social sciences. It has even been used in such diverse fields as biology and psychology. In every case where at least two entities are either in conflict or cooperate, game theory provides the proper tools to analyze the situation. The entities are called players; each player has his own goals, and the actions of every player affect the other players. Every player has at his disposal a set of actions, from which his set of strategies is determined. The outcome of the game from the point of view of each player is quantitatively assessed by a function that is called utility or payoff function. The players are assumed to be rational, i.e., every player acts so as to maximize his payoff.
Quantum computation is a relatively new field that was initially envisioned by Richard Feynman in the early 1980s. Today, there is a wide interest in this area and, more importantly, actual efforts toward building practical commercial quantum computing machines or at least quantum components. One could argue that quantum computing perceives the actual computation process as a natural phenomenon, in contrast to the known binary logic of classical systems. Technically, a quantum computer is expected to use qubits as the basic unit of computation instead of the classical bit. The transitions among quantum states will be achieved through the application of unitary matrices. It is hoped that the use of quantum or quantum-inspired computing machines will lead to an increase in computational capabilities and efficiency, since the quantum world is inherently probabilistic and non-classical phenomena, such as superposition and entanglement, occur. Up to now, the superiority of quantum methods over classical ones has only been proven for particular classes of problems; nevertheless, the performance gains in such cases are tremendous. In the PQ penny flip game described by Meyer in [5], the quantum player Q has an overwhelming advantage over the classical player Picard. The recent field of quantum game theory is devoted to the study of quantum techniques in classical games, such as coin flipping, the prisoners’ dilemma and many others.
Contribution. The main contribution of this work lies in establishing a rigorous connection between finite automata and the P Q game with all its finite variations. Starting from the automaton that corresponds to the original P Q game, we construct automata for various interesting variations of the game, before finally presenting a semiautomaton in Section 7.1 that captures the “essence” of the P Q game. By this we mean that this semiautomaton serves as a template for building automata (by designating appropriate initial and accepting states) that cover all possible finite games that can be played between Q and Picard. We point out that the resulting automata are almost identical, since they differ only in the initial state and/or their accepting states; however, these minor differences have a profound effect on the accepting language.
Furthermore, we introduce two novel notions, that of a winning automaton and that of a complete automaton for either player. A winning automaton for either Q or Picard accepts only those words that correspond to actions that allow him to win the game with probability 1.0, and a complete automaton (for Q or Picard) accepts all such words. This is a powerful tool because it allows us to determine whether or not an arbitrarily long sequence of actions guarantees that one of the two players will surely win just by checking if the corresponding word is accepted or not by the complete automaton for that player.
We clarify that the automata we construct do more than simply accept dominant strategies. They are specifically designed to accept sequences of actions by both players, i.e., sequences that contain the actions of both players. This gives a global overview of the evolution of the game from the point of view of both players. Moreover, no information is lost and, in case one wishes to focus only on dominant strategies for a specific player, this can be simply achieved by considering a substring from each accepted word; this substring will contain only the actions of the specific player, disregarding all actions by the other player.
The paper is organized as follows: Section 2 discusses related work; Section 3 explains the notation and definitions used throughout the rest of the paper; Section 4 lays the necessary groundwork for the connection of games with automata; Section 5 describes the automaton that corresponds to the standard P Q game; Section 6 analyzes how one may construct automata that correspond to specific variants of the P Q game; Section 7 contains the most important results of this work: the semiautomaton in Section 7.1 that captures all possible finite games between Q and Picard, and the concepts of winning and complete automata for Q or Picard; and Section 8 summarizes our results and conclusions and points to directions for future work.

2. Related Work

In 1999, Meyer [5] introduced the quantum version of the penny flip game with two players and a two-dimensional coin. In the original game, the two players are named Q and Picard (from a popular TV series). Picard is restricted to classical strategies, whereas Q is able to use quantum strategies. As a result, Q is able to apply unitary transformations in every possible state of the game. Meyer identifies a winning strategy for Q that boils down to the application of the Hadamard transform. Picard, on the other hand, who can either leave the coin as is or flip it, is bound to lose in every case.
Many articles extended the aforementioned game to an n-state quantum roulette using various techniques. Salimi et al. [6] used permutation matrices and the Fourier matrix as a representation of the symmetric group $S_n$. They viewed quantum roulette as a typical n-state quantum system and developed a methodology that allowed them to solve this quantum game for arbitrary n. As an example, they employed their technique for a quantum roulette with $n = 3$. Wang et al. [7] also generalized the coin tossing game to an n-state game. Ren et al. [8] developed specific methods that enabled them to solve the problem of quantum coin-tossing in a roulette game. Specifically, they used two methods, which they called the analogy and isolation methods, respectively, to tackle the above problem. All the previously mentioned articles focused on the expansion of states, essentially converting the coin into a roulette.
Quantum protocols from the fields of quantum and post-quantum cryptography are widely studied in the framework of quantum game theory. Several cryptographic protocols have been developed in order to provide reliable communication between two separate players regarding the coin-tossing game [9,10,11,12]. Nguyen et al. [9] analyzed how the performance of a quantum coin tossing experiment should be compared to classical protocols, taking into account the inevitable experimental imperfections. They designed an all-optical fiber experiment, in which a single coin is tossed whose randomness is higher than that of any classical protocol. In the same paper, they presented some easily realizable cheating strategies for Alice and Bob. Berlin et al. [10] introduced a quantum protocol which they proved to be completely impervious to loss. The protocol is fair when both players have the same probability for a successful cheating upon the outcome of the coin flip. They also gave explicit and optimal cheating strategies for both players. Ambainis [11] devised a protocol in which a dishonest party will not be able to ensure a specific result with probability greater than 0.75 . For this particular protocol, the use of parallelism will not lead to a decrease of its bias. In [12], Ambainis et al. investigated similar protocols in a context of multiple parties, where it was shown that the coin may not be fixed provided that a fraction of the players remain honest.
Many researchers have investigated turn-based versions of classical games such as the prisoners’ dilemma. One of the first works that associated finite automata with game theory was by Neyman [13], where he studied how finite automata can be used to capture the complexity of strategies available to players. Rubinstein [14] studied a variation of the repeated prisoners’ dilemma, in which each player is required to play using a Moore machine (a type of finite state transducer). Rubinstein and Abreu [15] investigated the case of infinitely repeated games. They used the Nash equilibrium as a solution concept, where players seek to maximize their profit and minimize the complexity of their strategies. Inspired by the Abreu–Rubinstein-style systems, Binmore and Samuelson [16] replaced the solution concept of Nash equilibrium with that of the evolutionarily stable strategy. They showed that such automata are efficient in the sense that they maximize the sum of the payoffs. Ben-Porath [17] studied repeated games and the behavior of equilibrium payoffs for players using bounded complexity strategies. The strategy complexity is measured in terms of the state size of the minimal automaton that can implement it. He observed that, when the size of the automata of both players tends to infinity, the sequence of values converges to a particular value for each game. Marks [18] also studied repeated games with the assistance of finite automata.
An important work in the field of quantum game theory by Eisert et al. [19] examined the application of quantum techniques in the prisoners’ dilemma game. Their work was later debated by others, such as Benjamin and Hayden in [20] and Zhang in [21], where it was pointed out that players in the game setting of [19] were restricted and therefore the resulting Nash equilibria were not correct. The work in [22] gave an elegant introduction to quantum game theory, along with a review of the relevant literature for the first years of this newborn field. Parrondo games and quantum algorithms were discussed in [23]. The relation between Parrondo games and a type of automata, specifically quantum lattice gas automata, was the topic of [24]. Bertelle et al. [25] examined the use of probabilistic automata, evolved from a genetic algorithm, for modeling adaptive behavior in the prisoners’ dilemma game. Piotrowski et al. [26] provided a historic account and outlined the basic ideas behind the recent development of quantum game theory. They also gave their assessment about possible future developments in this field and their impact on information processing. Recently, Suwais [27] examined different types of automata variants and reviewed the use for each one of them in game theory. In a similar vein, Almanasra et al. [28] reported that finite automata are suitable for simple strategies whereas adaptive and cellular automata can be applied in complex environments.
Variants of quantum finite automata, placing emphasis on hybrid models, were presented in [29] by Li and Feng, where they obtained interesting theoretical results demonstrating the advantages of these models. The use of such finite state machines for the representation of quantum games could, perhaps, constitute an alternative to classical automata, particularly in view of some encouraging results regarding their power and expressiveness (see [30,31,32]).
The relation of quantum games with finite automata was also studied in [33]. In that work, quantum automata accepting infinite words were associated with winning strategies for abstract quantum games. The current paper differs from [33] in the following aspects: (i) the focus is on the PQ penny flip game and all its variations; (ii) the automata are either deterministic or nondeterministic finite automata; and (iii) the words accepted by the automata correspond to moves by both players.

3. Preliminary Definitions

3.1. The PQ Game

Meyer in his landmark paper [5] introduced the penny flip game. This game is played by two players named Q and Picard. The names are inspired by a successful science fiction TV show. Picard is a classical, probabilistic player, in that he can only perform one of two actions:
  • leave the coin as is, which we denote by I, after the “identity” operator; or
  • flip the coin, which we denote by F, after the “flip” operator.
Q, on the other hand, is a quantum player, in that he can affect the coin not only in a classical sense, but also through the application of unitary transformations, such as the Hadamard operator, which is denoted by H. The game is played with the coin prepared in the initial state heads up. The two players act on the coin always following a specific order: Q plays first, then it is Picard’s turn, and, finally, Q plays one last time. Q wins if the coin is found heads up when the game is over; otherwise, Picard wins. Meyer presents a dominant strategy for Q based on the application of the Hadamard transform H: Q starts by applying the H operator, which in a sense makes Picard’s move irrelevant. After Picard makes his move, Q applies once more the H operator, which restores the coin to its initial state, granting him victory.
The game can be rephrased in a linear algebraic form:
  • The coin is a two-dimensional quantum system. The state of the coin is represented by a ray in the two-dimensional complex Hilbert space $\mathcal{H}_2$ (see [34] for details). A ray is an equivalence class of vectors whose elements differ by a multiplicative complex scalar. In the Dirac terminology and notation, which we follow in this work, vectors are called kets and are denoted by $|v\rangle$. Hence, in this case, a ray contains kets of the form $a|v\rangle$, for some $|v\rangle \in \mathcal{H}_2$, with $a \neq 0$ ranging over $\mathbb{C}$. The standard convention dictates that a normalized ket $|v\rangle$, i.e., a ket of unit length, is chosen as the ray representative. This representation of coin states by normalized kets greatly simplifies computations. Let us emphasize that the kets $|v\rangle$ and $e^{i\theta}|v\rangle$, where $\theta \in \mathbb{R}$, represent the same state because $|e^{i\theta}| = 1$.
  • The arbitrary state of the quantum coin can be expressed as
    $|v\rangle = a\,|heads\rangle + b\,|tails\rangle$.
    The fact that $|v\rangle$ is normalized implies that the complex probability amplitudes a and b satisfy the relation $|a|^2 + |b|^2 = 1$. The kets $|heads\rangle$ and $|tails\rangle$ describe the situation where the coin is heads up or tails up, respectively. These two kets are orthogonal unit vectors in the two-dimensional complex Hilbert space $\mathcal{H}_2$ and, as such, constitute an orthonormal basis of $\mathcal{H}_2$. It is customary to denote by $\{|0\rangle, |1\rangle\}$ the standard orthonormal basis of $\mathcal{H}_2$. Therefore, in this work, we shall interchangeably write $|heads\rangle$ instead of $|0\rangle$ and $|tails\rangle$ instead of $|1\rangle$ to emphasize that the coin is heads up or tails up, respectively. To avoid any possible source of confusion, we summarize our conventions below:
    $|heads\rangle = |0\rangle = (1 \;\; 0)^T$, $\quad |tails\rangle = |1\rangle = (0 \;\; 1)^T$.
  • The possible actions of the two players, I, F and H, are represented by unitary operators. Specifically, since $\mathcal{H}_2$ is two-dimensional, the operators can be represented by the following $2 \times 2$ matrices:
    $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, $\quad F = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, and $\quad H = \begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \end{pmatrix}$.
  • The state of the quantum coin $|v\rangle$ is measured with respect to the orthonormal basis $\{|heads\rangle, |tails\rangle\}$. After the measurement, the state of the coin will either be $|heads\rangle$ with probability $|a|^2$, or $|tails\rangle$ with probability $|b|^2$. In our context, this means that after the measurement the coin will turn out either heads up or tails up and this will be known to both players (a short numerical sketch of these conventions follows below).
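The following minimal Python sketch is our own illustration (it does not appear in the paper): it encodes the basis kets and the three operators with NumPy and verifies that Meyer's strategy, H followed by either of Picard's moves and then H again, returns the coin to $|heads\rangle$ with probability 1.0. The function name play is ours.

```python
import numpy as np

# Basis kets |heads> = |0> and |tails> = |1>
heads = np.array([1, 0], dtype=complex)
tails = np.array([0, 1], dtype=complex)

# The three unitary actions I, F and H (Equation (3))
I = np.eye(2, dtype=complex)
F = np.array([[0, 1], [1, 0]], dtype=complex)                 # flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard

def play(initial, actions):
    """Apply a sequence of unitary actions to the initial coin state."""
    state = initial
    for U in actions:
        state = U @ state
    return state

for picard_move in (I, F):
    final = play(heads, [H, picard_move, H])
    p_heads = abs(np.vdot(heads, final)) ** 2
    print(f"P(heads) = {p_heads:.1f}")   # 1.0 in both cases: Q surely wins
```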
In the rest of this paper, we shall refer to the P Q penny flip game simply as the P Q game.

3.2. Automata

For completeness, we now recall the definitions of deterministic and nondeterministic finite automata, which we will use in the following sections as a succinct tool to represent the PQ game, define new variants of the original game, and study strategies on these variants. The definitions are taken from [35].
Before giving the necessary definitions about finite automata, let us explain the rationale behind our choice of automata. Although other approaches, such as game trees, are also closely related to game theory (in the sense that they describe all possible moves), our goal here was to present a much more general tool (e.g., see the work in [36]) that would capture the character of the game. The finite termination aspect, inherent in finite automata, seems especially appropriate for describing winning strategies. Another advantage of using automata over game trees is the compact form of automata, where comparatively few states are adequate for describing the actions of the players. In general, there are many works in the literature that correlate game-theoretic notions with finite automata (see [37]).
Definition 1.
A deterministic finite automaton (DFA) is a tuple $(Q, \Sigma, \delta, q_0, F)$, where:
1. $Q$ is a finite set of states;
2. $\Sigma$ is a finite set of input symbols called the alphabet;
3. $\delta : Q \times \Sigma \to Q$ is the transition function;
4. $q_0 \in Q$ is the initial state; and
5. $F \subseteq Q$ is the set of accepting states.
The definition of nondeterministic finite automata (NFA) follows a similar pattern, save for some key differences: we replace the transition function δ seen above with $\delta : Q \times \Sigma_\epsilon \to \mathcal{P}(Q)$, where $\mathcal{P}(Q)$ is the powerset of Q and $\Sigma_\epsilon = \Sigma \cup \{\epsilon\}$, i.e., we also allow for ε transitions. We note that DFA and NFA are equivalent in expressive power [35,38].
Definition 2.
A NFA is a tuple $(Q, \Sigma, \delta, q_0, F)$, where:
1. $Q$ is a finite set of states;
2. $\Sigma$ is the alphabet;
3. $\delta : Q \times \Sigma_\epsilon \to \mathcal{P}(Q)$ is the transition function;
4. $q_0 \in Q$ is the initial state; and
5. $F \subseteq Q$ is the set of accepting states.
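To make Definition 1 concrete, here is a small Python sketch of a DFA over the alphabet Σ = {i, f, h}; it is our own illustration, not code from the paper, and the class name, field names and the toy transition table are ours. The accepts method simply iterates the transition function and checks whether the final state is accepting.

```python
from dataclasses import dataclass

@dataclass
class DFA:
    states: set        # Q
    alphabet: set      # Sigma
    delta: dict        # transition function: (state, symbol) -> state
    start: str         # q0
    accepting: set     # F

    def accepts(self, word: str) -> bool:
        """Run the DFA on `word` and report whether it ends in an accepting state."""
        q = self.start
        for symbol in word:
            if symbol not in self.alphabet:
                raise ValueError(f"symbol {symbol!r} is not in the alphabet")
            q = self.delta[(q, symbol)]
        return q in self.accepting

# Toy example: accept the words over {i, f, h} that contain an even number of h's.
even_h = DFA(
    states={"even", "odd"},
    alphabet={"i", "f", "h"},
    delta={("even", "h"): "odd", ("odd", "h"): "even",
           ("even", "i"): "even", ("even", "f"): "even",
           ("odd", "i"): "odd", ("odd", "f"): "odd"},
    start="even",
    accepting={"even"},
)
print(even_h.accepts("hifh"))   # True
```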

4. Games and Words

In this work, we intend to examine all finite games that can be played between Picard and Q. These games are in a sense “similar” to the original P Q game and can, therefore, be viewed as extensions that arise from modifications of the rules of the original game. First we must precisely state what we shall keep from the P Q game. Our analysis will be based on the following four hypotheses.
Hypothesis 1. (H1)
The two players, Picard and Q, are the stars of the game. Thus, they will continue to play against each other in all the two-person games we study. Although the games will be finite, their duration will vary. Most importantly, the pattern of the games will vary: Picard may make the first move, one player may act on the coin for a number of consecutive rounds while the other player stays idle, and so on.
Hypothesis 2. (H2)
The other cornerstone of the game is the two-dimensional coin, so the players will still act on the same coin. This means that our games take place in the two-dimensional complex Hilbert space H 2 and we shall not be concerned with higher dimensional analogs of the P Q game like those in [6,7].
Hypothesis 3. (H3)
Let us agree that the players have exactly the same actions at their disposal, that is Picard can use either I or F, and Q can use H. This will enable us to treat all games in a uniform manner by using the same alphabet and notation.
At this point, it is perhaps expedient to clarify why we have presumed that Q’s repertoire is limited to H. One of the fundamental assumptions of game theory is that the players are rational. This means that they always act so as to maximize their payoff (see references [2,3] for a more in-depth analysis). Rationality will force each player to choose the best action from a set of possible actions. In this case, Q, being a quantum entity, can choose his actions from the infinite set of unitary operators (technically, from the $U(2)$ group). For instance, Q is allowed to use I, F, something like $M = \begin{pmatrix} \frac{\sqrt{5}}{3} & \frac{2}{3} \\ \frac{2}{3} & -\frac{\sqrt{5}}{3} \end{pmatrix}$, etc. Nonetheless, Q will discard such choices and will eventually play a dominant strategy such as H, followed by H, as his rationality demands. It is this line of thought that has led us to assume that Q has just one action, namely H.
Hypothesis 4. (H4)
Finally, we assume that the coin can initially be at one of the two basic states $|0\rangle$ (the coin is placed heads up) or $|1\rangle$ (the coin is placed tails up), and this state is known to both players. We note that, for each game that begins with the coin in state $|0\rangle$, there exists an analogous game that begins with the coin in state $|1\rangle$ and vice versa. When the game is over, the state of the coin is measured in the orthonormal basis $\{|0\rangle, |1\rangle\}$, and, if it is found to be in the initial basic state, Q wins; otherwise, Picard wins. This settles the question of how the winner is determined.
From now on, we shall take for granted the hypotheses H1–H4 without any further mention.
Let N be the set of the two players {Picard, Q} and let $N^*$ be the set of all finite sequences over N. We agree that $N^*$ contains the empty sequence e. Each $\gamma \in N^*$ is called a sequence of moves because it encodes a game between Picard and Q. For instance, the sequence (Q, Picard, Q) expresses the original PQ game, while the sequence (Picard, Q, Picard, Q, Picard) represents a five-round game variant, where Picard moves during Rounds 1, 3 and 5, and Q during Rounds 2 and 4. This idea is formalized in the next definition.
Definition 3.
Each sequence of moves $\gamma \in N^*$ defines the finite game $G(|s\rangle, \gamma)$ between Picard and Q. The rules of $G(|s\rangle, \gamma)$ are:
  • The initial state of the coin is $|s\rangle$. In view of hypothesis H4, $|s\rangle$ is either $|heads\rangle$ or $|tails\rangle$.
  • If $\gamma = e$, then $G(|s\rangle, e)$ is the 0-round trivial game (neither Picard nor Q act on the coin, which remains at its initial state).
  • If $\gamma = (p_1, p_2, \ldots, p_n)$, where $p_i \in N$, $1 \le i \le n$, then $G(|s\rangle, \gamma)$ is a game that lasts n rounds and $p_i$ determines which of the two players moves during round i. Specifically, if $p_i =$ Picard, then it is Picard’s turn to act on the coin, whereas if $p_i =$ Q, then it is Q’s turn to act on the coin.
In this work, we shall employ sequences of moves as a precise, unambiguous and succinct way for defining finite games between Picard and Q. For instance, the move sequences (Picard, Picard, Q, Q, Picard, Picard) and (Picard, Q, Picard, Q, Picard, Q, Picard, Q, Picard) correspond to a six-round and a nine-round game, respectively. These particular games will be used in Section 7.
Considering that the actions of Picard and Q are just three, namely I, F and H, we define the set of actions $Act = \{I, F, H\}$. The set of all finite sequences of actions, which includes the empty sequence ϵ, is denoted by $Act^*$. In the original PQ game, there are just two possible such sequences: (H, I, H) and (H, F, H). Each action sequence is meaningful only in the appropriate game. For example, the sequence (F, H, H, I) is unsuitable for the PQ game, but it makes perfect sense in a four-round game where Picard plays during the first and fourth round and Q plays during the second and third round. The precise game for which a given sequence of actions is appropriate is defined below.
Definition 4.
The function $\chi : Act^* \to N^*$, which maps sequences of actions to sequences of moves, is defined as follows:
1. $\chi(\epsilon) = e$; and
2. if $\alpha = (U_1, \ldots, U_n)$, $U_i \in Act$, $1 \le i \le n$, then $\chi(\alpha) = (p_1, p_2, \ldots, p_n)$, where $p_i =$ Picard if $U_i = I$ or $U_i = F$, and $p_i =$ Q if $U_i = H$.
Every action sequence α is an admissible sequence for the underlying game $G(|s\rangle, \chi(\alpha))$.
If Q (Picard) wins the game $G(|s\rangle, \gamma)$ with the admissible sequence α with probability 1.0, we say that Q (Picard) surely wins $G(|s\rangle, \gamma)$ with α, or that α is a winning sequence for Q (Picard) in $G(|s\rangle, \gamma)$.
We employ the notation $Q(G(|s\rangle, \gamma), \alpha)$, respectively $P(G(|s\rangle, \gamma), \alpha)$, as an abbreviation of the foregoing assertion.
It is evident that χ is not an injective function. Take for example ( H , I , H ) and ( H , F , H ) ; both correspond to the same sequence of moves ( Q , Picard , Q ) . It is also clear that only admissible sequences are meaningful.
In this work, we shall examine several variants of the PQ game. To each one, we shall associate an automaton and study the language it accepts. As it will turn out, in every case the corresponding language has the same characteristic property. Automata are simple but fundamental models of computation. They recognize regular languages of words from a given alphabet Σ. The set of all finite words over Σ is denoted by $\Sigma^*$; we recall that $\Sigma^*$ contains the empty word ε. The operation of the automaton is very simple: starting from its start state, the automaton reads a word w and ends up in a certain state. It accepts (or recognizes) w if and only if this final state belongs to the set of accept states. The set of all the words that are accepted by the automaton is the language recognized (or accepted) by the automaton. We follow the convention of denoting by $L_A$ the language recognized by the automaton A.
To associate games with automata in a productive way, we must fix an appropriate alphabet Σ and map the actions of the players to the letters of Σ. Accordingly, the alphabet Σ must also contain three letters. Table 1 shows the 1-1 correspondence between the operators I, F and H and the letters of the alphabet $\Sigma = \{i, f, h\}$. In this work, we are interested only in finite games and, hence, in finite words and finite sequences of actions. For simplicity, we shall omit the adjective finite from now on and simply write game, word and sequence of actions.
Definition 5.
Given the set of actions $Act = \{I, F, H\}$ of Picard and Q, the corresponding alphabet is $\Sigma = \{i, f, h\}$.
We define the letter assignment function $\lambda : Act \to \Sigma$ and the operator assignment function $\mu : \Sigma \to Act$ as follows:
1. $\lambda(I) = i$, $\mu(i) = I$;
2. $\lambda(F) = f$, $\mu(f) = F$; and
3. $\lambda(H) = h$, $\mu(h) = H$.
The letter assignment function λ follows the obvious mnemonic rule of mapping each operator, which in the literature is typically denoted by an uppercase letter, to the same lowercase letter. Clearly, μ is the inverse of λ . All the automata we shall encounter share the same alphabet Σ = { i , f , h } .
Now, via λ, we can map finite sequences of actions to words and, via μ, we can map words to finite sequences of actions. For instance, the sequence (H, I, H) is mapped to hih, the sequence (H, F, H) is mapped to hfh, etc. In this fashion, every sequence of actions is mapped to a word $w \in \Sigma^*$. But this is a two-way street, meaning that each word from $\Sigma^*$ corresponds to a sequence of actions: hihhfh corresponds to (H, I, H, H, F, H).
At this point, we should clarify that, in the rest of this paper, action sequences will be written as comma-delimited lists of actions enclosed within a pair of left and right parenthesis. This is in accordance with the practice we have followed so far, e.g., when referring to the action sequences ( H , I , H ) , ( H , F , H ) or ( H , I , H , H , F , H ) . On the other hand, words, despite also being considered as sequences of symbols from the alphabet Σ , are always written as a simple concatenation of symbols, such as h i h , h f h or h i h h f h , and never ( h , i , f ) , etc. In this work, we shall adhere to this well-established tradition.
Formally, this correspondence between action sequences and words is achieved by properly extending λ and μ .
Definition 6.
The word mapping $\bar{\lambda} : Act^* \to \Sigma^*$ and the action sequence mapping $\bar{\mu} : \Sigma^* \to Act^*$ are defined recursively as follows:
1. $\bar{\lambda}(\epsilon) = \varepsilon$ and $\bar{\mu}(\varepsilon) = \epsilon$; and
2. for every $U \in Act$, every $\alpha \in Act^*$, every $l \in \Sigma$, and every $w \in \Sigma^*$:
$\bar{\lambda}((\alpha, U)) = \bar{\lambda}(\alpha)\,\lambda(U)$, $\quad \bar{\mu}(wl) = (\bar{\mu}(w), \mu(l))$.
Moreover, a word $w \in \Sigma^*$, via the corresponding sequence of actions $\bar{\mu}(w)$, can be thought of as describing the game $G(|s\rangle, \chi(\bar{\mu}(w)))$. For example, the word hfifh corresponds to a five-round game, where Q plays only during Rounds 1 and 5, whereas Picard gets to act on the coin during the consecutive Rounds 2, 3 and 4 (see the sketch below).
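The following Python sketch (ours, not from the paper) makes the mappings of Definitions 4–6 executable; action sequences are represented as tuples of operator names, and the function names word_of, actions_of and moves_of stand for λ̄, μ̄ and χ, respectively.

```python
# Letter assignment (lambda) and operator assignment (mu) from Definition 5
LETTER = {"I": "i", "F": "f", "H": "h"}
OPERATOR = {letter: op for op, letter in LETTER.items()}

def word_of(actions):
    """lambda-bar: map an action sequence, e.g. ("H", "I", "H"), to a word."""
    return "".join(LETTER[U] for U in actions)

def actions_of(word):
    """mu-bar: map a word, e.g. "hih", back to an action sequence."""
    return tuple(OPERATOR[l] for l in word)

def moves_of(actions):
    """chi: map an action sequence to the sequence of moves (who plays each round)."""
    return tuple("Q" if U == "H" else "Picard" for U in actions)

print(word_of(("H", "F", "H")))        # hfh
print(actions_of("hfifh"))             # ('H', 'F', 'I', 'F', 'H')
print(moves_of(actions_of("hfifh")))   # ('Q', 'Picard', 'Picard', 'Picard', 'Q')
```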

5. An Automaton for the PQ Game

As we have explained in previous sections, the coin in the PQ game is a two-dimensional system and so its state can be described by a normalized ket $|v\rangle \in \mathbb{C}^2$. The players act upon the coin via the unitary operators I, F and H, whose matrix representation is given in Equation (3).
The game proceeds as follows:
  • The initial state of the coin is $(1 \;\; 0)^T = |heads\rangle = |0\rangle$.
  • After Q’s first move (which is an action on the coin by H), the coin enters state $(\frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2})^T$. We call this state $s_2$ (see Figure 1 and Table 2).
  • $s_2$ is a very special state in the sense that, no matter what Picard chooses to play (Picard can act either by I or by F), after his move the coin remains in the state $s_2$.
  • Finally, Q wins the game by applying H one last time, which in effect sends the coin back to its initial state $|heads\rangle$.
The simple automaton A P Q shown in Figure 1 expresses concisely the states of the coin and the effect of the actions of the two players. The states of the automaton are in 1-1 correspondence with the states the coin goes through during the game (see Table 2). The actions of the players, that is the unitary operators I , F , H , are in 1-1 correspondence with the alphabet Σ = { i , f , h } of A P Q (see Table 1).
The effect of the actions of the players upon the coin is captured by the transitions between the states. Technically, $A_{PQ}$ is a nondeterministic automaton (see [35]) that has only two states: $|heads\rangle$ and $s_2$, where $|heads\rangle$ is the start and the unique accept state. The nondeterministic nature of $A_{PQ}$ stems from the fact that no outgoing transitions from $|heads\rangle$ are labeled with i or f. This is a feature, not a bug, because the rules of the game stipulate that Q makes the first move and Picard’s only move takes place when the coin is in state $s_2 = (\frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2})^T$. This means that Picard never gets a chance to act when the coin is in state $|heads\rangle = (1 \;\; 0)^T$. Hence, $A_{PQ}$ is specifically designed so that the only possible action while in state $|heads\rangle$ is by Q via H. This will have an effect on the words accepted by $A_{PQ}$, as will be explained below. Other than this subtle point, the behavior of $A_{PQ}$ can be considered deterministic.
According to the rules of the PQ game, there are just two admissible sequences of actions: (H, I, H) and (H, F, H). Both of them guarantee that Q will win with probability 1.0. The corresponding words are hih and hfh, both of which are accepted by $A_{PQ}$ and, thus, belong to $L_{A_{PQ}}$. Formally, these two words are the only ones that correspond to valid game moves.
Let us now take a step back and view $A_{PQ}$ as a standalone automaton. Its language $L_{A_{PQ}}$ can be succinctly described by the regular expression $(h(i \cup f)^*h)^*$ (for more about regular expressions, we refer again to [35]). Thus, $L_{A_{PQ}}$ contains an infinite number of words, but only two, namely hih and hfh, correspond to admissible sequences of game actions. What about the other words of $L_{A_{PQ}}$?
Even though the other words of $L_{A_{PQ}}$ do not correspond to permissible sequences of moves for the original PQ game, they do share a very interesting property. Given an arbitrary word $w \in L_{A_{PQ}}$, consider the game $G(|heads\rangle, \chi(\bar{\mu}(w)))$. If the sequence of actions $\bar{\mu}(w)$ is played, then Q will surely win, that is, Q will win with probability 1.0. Note that $\bar{\mu}(w)$, in general, will contain actions by both players. We emphasize that this property holds for every word of $L_{A_{PQ}}$. To develop a better understanding of this characteristic property, let us look at some concrete examples.
  • The empty word ε, which technically belongs to $L_{A_{PQ}}$, can be viewed as the representation of the trivial game, where no player gets to act on the coin, so the coin stays at its initial state $|heads\rangle$ and Q trivially wins.
  • Words such as hh and hhhh, i.e., having the form $(hh)^+$, correspond to the most unfair (for Picard) games, where the game lasts exactly 2n rounds, for some $n \ge 1$, and Q moves during each round (Picard does not get to make any move at all).
  • Words of the form $h(i \cup f)^n h$, where $n \ge 1$, represent games that last $n + 2$ rounds. In these games, Q plays only during the first and last round of the game, whereas Picard plays during the n intermediate rounds. These variants give Picard the illusion of fairness, without changing the final outcome.
  • Words of the form $(h(i \cup f)^*h)^*$, e.g., $h(i \cup f)^2h \, h(i \cup f)^3h$, correspond to more complex games. They are in effect independent repetitions of the previous categories of games (see the sketch below).
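As a quick sanity check of this characteristic property (our illustration, not part of the paper), the language $(h(i \cup f)^*h)^*$ can be tested with Python's re module, and the effect of each word on the coin can be computed with the matrices of Equation (3); every accepted word should leave the coin heads up with probability 1.0.

```python
import re
import numpy as np

# L(A_PQ) written as a Python regular expression: (h(i|f)*h)*
L_APQ = re.compile(r"(?:h[if]*h)*\Z")

I = np.eye(2)
F = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
OP = {"i": I, "f": F, "h": H}

def prob_heads(word):
    """Probability that the coin is measured heads up after the actions encoded by `word`."""
    state = np.array([1.0, 0.0])     # the coin starts heads up
    for letter in word:
        state = OP[letter] @ state
    return abs(state[0]) ** 2

for w in ["", "hh", "hifffh", "hih", "hihhfh", "hfhih"]:
    accepted = bool(L_APQ.match(w))
    print(w or "(empty)", accepted, round(prob_heads(w), 2))
# Every accepted word yields probability 1.0 for heads, i.e., a sure win for Q;
# the rejected word hfhih only gives probability 0.5.
```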
The formal definition of “winning” automata will be given in Section 7. The idea is very simple: a winning automaton for Q (Picard) accepts a word w only if Q (respectively Picard) surely wins the game G ( s , γ w ) with α w , where s is the initial state of the automaton, α w = μ ¯ ( w ) is the corresponding action sequence, and γ w = χ ( μ ¯ ( w ) ) is the corresponding move sequence. Therefore, a winning automaton for one of the players does not accept a single word for which, in the corresponding game, the associated sequence of actions will result in the other player winning with nonzero probability, for instance with probability 0.5 or 1 / 3 .

6. Variants of the Game and Their Corresponding Automata

6.1. Changing the Initial State of the Coin

Let us examine what happens if we change the initial state of the coin, while keeping the form of the game the same. Thus, there are still three rounds: Q acts during the first and the third (and final) round and Picard acts during the second round. The coin is initially at state $|tails\rangle = (0 \;\; 1)^T$. Q wins if the coin, after measurement, is found to be in the initial state $|tails\rangle$. We designate this game variant as $PQ_{\pi/2}$.
In this game, after Q’s first move, the coin will be in state $(\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T$. This state corresponds to state $s_4$ of the automaton $A_{PQ_{\pi/2}}$, depicted in Figure 2. Clearly, the coin will remain in this state if Picard decides to use I, because $I (\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T = (\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T$. The coin will also remain in this state even if Picard decides to use F. To see why, it suffices to write $F (\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T = (-\frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2})^T = (-1)(\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T$. This demonstrates that $(\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T$ and $(-\frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2})^T$ belong to the same ray and, thus, represent the same state. Q’s final action via H will send the coin to $|tails\rangle$. When the game is over and the state of the coin is measured in the orthonormal basis $\{|heads\rangle, |tails\rangle\}$, both players will find that the coin has ended up in its initial state. Thus, Q wins this game too with probability 1.0.
The previous analysis shows that in the $PQ_{\pi/2}$ game the coin may go through the states $\{|tails\rangle, s_4\}$. In view of the fact that these states are all “new” with respect to the original PQ game, we see that this variant introduces new states. Automaton $A_{PQ_{\pi/2}}$, depicted in Figure 2, captures the $PQ_{\pi/2}$ game. The states of the automaton are in 1-1 correspondence with the states the coin goes through during the game (see Table 2) and the actions of the players are mirrored by the transitions between the states. Like $A_{PQ}$, $A_{PQ_{\pi/2}}$ is nondeterministic because of the rules of the game, which imply that no outgoing transitions from $|tails\rangle$ are labeled with i or f.
In the $PQ_{\pi/2}$ game, the two admissible sequences of actions are again (H, I, H) and (H, F, H). Both of them lead to Q’s victory with probability 1.0. The corresponding words hih and hfh belong to $L_{A_{PQ_{\pi/2}}}$. The other words of $L_{A_{PQ_{\pi/2}}}$ do not correspond to permissible moves of the $PQ_{\pi/2}$ game. However, it is easy to establish that $A_{PQ_{\pi/2}}$, like $A_{PQ}$, is a winning automaton for Q. The following remarks, similar to the ones we made regarding $A_{PQ}$, hold for pretty much the same reasons:
  • The words of $L_{A_{PQ_{\pi/2}}}$ have the general form $(h(i \cup f)^*h)^*$.
  • Formally, hih and hfh are the only words that correspond to valid game moves.
  • Again, the empty word ε belongs to $L_{A_{PQ_{\pi/2}}}$ and can be thought of as expressing the trivial game, where Q trivially wins.
  • Like before, words of the form $(hh)^+$ correspond to games that last exactly 2n rounds, for some $n \ge 1$, and words of the form $h(i \cup f)^n h$, where $n \ge 1$, correspond to games that last $n + 2$ rounds. Q surely wins these games no matter what Picard’s strategies are.
  • Words of the form $(h(i \cup f)^*h)^*$ correspond to zero or more repetitions of the previous types of games. It is evident that Q also wins these complex games with probability 1.0.
Again, we reach the same conclusion: all words accepted by A P Q π / 2 encode sequences of actions for which Q will surely win in the corresponding game.

6.2. Variants with More Rounds

Let us suppose now that the duration of the game is increased. The original PQ game was a three-round game, so it makes sense to examine a six-round, a nine-round, or, in general, a $3n$-round, $n \ge 2$, variant of the game. We must, however, emphasize that these are not repeated PQ games. By repeated, we mean multistage games where the original PQ game is repeated at each stage. In other words, the moves of the players do not follow the pattern Q → Picard → Q → Q → Picard → Q, etc. Instead, we focus on games that follow the pattern Q → Picard → Q → Picard, etc. In these games, Q acts during the odd-numbered rounds and Picard acts during the even-numbered rounds. The initial state of the coin is $|heads\rangle$ and Q wins the game if, when the game is over, the state of the coin is measured and found to be $|heads\rangle$. Let us denote by $PQ_{3n}$, where $n \ge 2$, these $3n$-round games.
  • Initially, we examine the six-round game $PQ_6$. Clearly, after Round 3 (i.e., after Q’s second move) the coin is at state $|heads\rangle = (1 \;\; 0)^T$. It may remain in this state if Picard decides to use I but, if Picard decides to use F, the coin will enter state $|tails\rangle = (0 \;\; 1)^T$. Q’s subsequent move will send the coin to state $s_2 = (\frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2})^T$ in the first case, or to state $s_4 = (\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T$ in the second case. Thus, the coin may end up in $s_2$ or $s_4$, irrespective of whether Picard’s final action in the 6th round is I or F (recall from our previous analysis that $(\frac{\sqrt{2}}{2} \;\; -\frac{\sqrt{2}}{2})^T$ and $(-\frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2})^T$ represent the same state).
    The associated automaton $A_{PQ_6}$ is shown in Figure 3. As expected, its states correspond to the states of the coin (see Table 2) and its transitions to the actions of the players. Like the previous automata we have seen, $A_{PQ_6}$ is nondeterministic because of the rules of the game, which entail, for instance, that there is no outgoing transition labeled f from state $|tails\rangle$. An important observation we can make in this case is that, by extending the duration of the game, the automata $A_{PQ}$ and $A_{PQ_{\pi/2}}$ “merge” into $A_{PQ_6}$.
    Strictly speaking, the only possible valid moves in $PQ_6$ are: (H, I, H, I, H, I), (H, I, H, I, H, F), (H, I, H, F, H, I), (H, I, H, F, H, F), (H, F, H, I, H, I), (H, F, H, I, H, F), (H, F, H, F, H, I), and (H, F, H, F, H, F). The corresponding words are hihihi, hihihf, hihfhi, hihfhf, hfhihi, hfhihf, hfhfhi, and hfhfhf; none of them is recognized by $A_{PQ_6}$. This does not imply that $L_{A_{PQ_6}}$ is empty. On the contrary, $L_{A_{PQ_6}}$ is infinite. For example, hfhihfh belongs to $L_{A_{PQ_6}}$. This particular word corresponds to a 7-round game and Q will surely win in this game if the corresponding sequence of actions (H, F, H, I, H, F, H) is played by Q and Picard. $A_{PQ_6}$ is a winning automaton for Q that accepts the language $(i \cup h(i \cup f)^*h)^*$. It is therefore consistent with the winning property that all the words corresponding to the action sequences that are admissible for the $PQ_6$ game are rejected, because they do not guarantee that Q will surely win. As a matter of fact, with the admissible action sequences both Q and Picard have equal probability 0.5 to win.
  • Finally, we look at the general $3n$-round variant $PQ_{3n}$, for $n \ge 3$. According to our previous analysis, after Round 6, the coin may be at one of the states $s_2$ or $s_4$. Consequently, Q’s move will send it to one of $|heads\rangle$ or $|tails\rangle$. Picard’s action will either leave the coin in its current state or forward it to one of $|tails\rangle$ or $|heads\rangle$; in any case, after Picard’s move the coin will either be at $|heads\rangle$ or $|tails\rangle$. Finally, Q’s last action will result in the coin entering one of the states $s_2$ or $s_4$. This behavior is captured by the automaton $A_Q$, depicted in Figure 4. We can go on, but it should be clear by now that, no matter how many more rounds are played, no new states will appear.
    Up to this point, we have constructed the automata $A_{PQ_6}$ and $A_Q$, shown in Figure 3 and Figure 4, respectively. They are both winning automata for Q, exactly like $A_{PQ}$ and $A_{PQ_{\pi/2}}$. This is more or less evident, but we shall give a formal proof in the next section. We close this section with an important observation. $A_Q$ has four states and is the biggest, in terms of number of transitions, automaton we have encountered so far. In a way, $A_Q$ “contains” all the previous automata. The most striking difference with the previous automata is the fact that $A_Q$ is deterministic, whereas $A_{PQ}$, $A_{PQ_{\pi/2}}$, and $A_{PQ_6}$ were nondeterministic. Exactly three transitions, one for each letter i, f and h, emanate from every state. This gives $A_Q$ a type of completeness because, whatever action is taken by any player, the outcome will correspond to a state of $A_Q$. Hence, $A_Q$ is able to accurately mirror the behavior of the coin (a small executable sketch of $A_Q$ is given below).
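Since $A_Q$ is deterministic, it can be written down directly as a transition table. The sketch below is our reconstruction from the text (the state names follow Table 2 and the table itself is our assumption about Figure 4); it accepts exactly the words that drive the coin from $|heads\rangle$ back to $|heads\rangle$.

```python
# Reconstructed transition table of the deterministic automaton A_Q (our assumption).
DELTA = {
    ("heads", "i"): "heads", ("heads", "f"): "tails", ("heads", "h"): "s2",
    ("tails", "i"): "tails", ("tails", "f"): "heads", ("tails", "h"): "s4",
    ("s2", "i"): "s2",       ("s2", "f"): "s2",       ("s2", "h"): "heads",
    ("s4", "i"): "s4",       ("s4", "f"): "s4",       ("s4", "h"): "tails",
}

def accepts(word, start="heads", accepting=("heads",)):
    """Return True iff the automaton, started in `start`, accepts `word`."""
    state = start
    for letter in word:
        state = DELTA[(state, letter)]
    return state in accepting

print(accepts("hih"))      # True:  Meyer's original winning sequence
print(accepts("hfhihfh"))  # True:  the 7-round example mentioned above
print(accepts("ffhhff"))   # True:  a winning sequence for a game where Picard moves first
print(accepts("hihihi"))   # False: an admissible PQ_6 sequence that is not a sure win
```

Because the start and accepting states are passed as parameters, the same table can be reused for the automata of Section 7.1.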

7. Automata Capturing Sets of Games

In this section, we shall prove that A Q is a “better”, more “complete” representation of the finite games between Picard and Q compared to all the previous automata. As a matter of fact, in a precise sense A Q captures all the finite games between Picard and Q.
We begin by giving the formal definition of winning automaton.
Definition 7 (Winning automaton).
Consider an automaton A with initial state s, where s is either $|heads\rangle$ or $|tails\rangle$. Let $w \in \Sigma^*$ be a word accepted by A, let $\alpha_w = \bar{\mu}(w)$ be the corresponding sequence of actions, and let $\gamma_w = \chi(\bar{\mu}(w))$ be the corresponding sequence of moves.
If, for every word w accepted by A, Q surely wins in the game $G(|s\rangle, \gamma_w)$ with $\alpha_w$, then A is a winning automaton for Q.
Symmetrically, A is a winning automaton for Picard if, for each word w accepted by A, Picard surely wins in the game $G(|s\rangle, \gamma_w)$ with $\alpha_w$.
A more succinct way to express that A is a winning automaton for Q or Picard would be to write
$\forall w \in L_A : Q(G(|s\rangle, \gamma_w), \alpha_w)$, and
$\forall w \in L_A : P(G(|s\rangle, \gamma_w), \alpha_w)$,
respectively.
First, we consider all finite games between Picard and Q that satisfy the following conditions (recall the hypotheses at the beginning of Section 4):
  • Picard’s actions are either I or F and Q’s action is H.
  • The coin is initially at state 0 .
  • Q wins if, when the game is over and the state of the coin is measured, it is found to be in state 0 ; otherwise, Picard wins.
The proofs of the main results of this section are easy but lengthy, so they are given in the Appendix A.
Theorem 1 (Winning automata for Q).
The automata $A_{PQ}$, $A_{PQ_{\pi/2}}$, $A_{PQ_6}$, and $A_Q$ are all winning automata for Q.
Definition 8 (Complete automaton for winning sequences).
An automaton A with initial state s (s is either $|heads\rangle$ or $|tails\rangle$) is complete with respect to the winning sequences for Q if, for every finite game between Picard and Q in which the coin is initially at state $|s\rangle$, every sequence of actions that enables Q to win the game with probability 1.0 corresponds to a word accepted by A.
Symmetrically, A is complete with respect to the winning sequences for Picard if, for every finite game between Picard and Q and for every sequence of actions that enables Picard to win with probability 1.0, the corresponding word is accepted by A.
More formally, the completeness property can be expressed as follows:
$\forall \gamma \in N^* \; \forall \alpha \in Act^* : Q(G(|s\rangle, \gamma), \alpha) \Rightarrow \bar{\lambda}(\alpha) \in L_A$, and
$\forall \gamma \in N^* \; \forall \alpha \in Act^* : P(G(|s\rangle, \gamma), \alpha) \Rightarrow \bar{\lambda}(\alpha) \in L_A$.
Theorem 2 (Complete automaton for Q).
$A_Q$ is complete with respect to the winning sequences for Q.
To appreciate the importance of the completeness property, we point out that $A_{PQ_6}$ is not complete for Q. Consider the six-round game (Picard, Picard, Q, Q, Picard, Picard). In this game, Q surely wins if the action sequence (F, F, H, H, F, F) is played. The corresponding word is ffhhff, which belongs to $L_{A_Q}$ but not to $L_{A_{PQ_6}}$. Thus, $A_{PQ_6}$ fails to accept all winning sequences for Q, i.e., this counterexample demonstrates that $A_{PQ_6}$ is not complete for Q (a direct check is sketched below).
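A direct check with the matrix representations of Equation (3) confirms the counterexample (this snippet is ours): playing (F, F, H, H, F, F) on a coin prepared heads up returns it to $|heads\rangle$ with certainty, so Q surely wins this six-round game even though the corresponding word ffhhff is rejected by $A_{PQ_6}$.

```python
import numpy as np

heads = np.array([1.0, 0.0])
F = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

state = heads
for U in (F, F, H, H, F, F):    # the action sequence for the game (P, P, Q, Q, P, P)
    state = U @ state

print(abs(np.vdot(heads, state)) ** 2)   # 1.0: the coin is certainly heads up
```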

7.1. Devising Other Variants

We can be even more flexible by using the semiautomaton A shown in Figure 5. Technically, A is not an automaton because no initial state and no final states are specified. However, A captures the essence of all games between Picard and Q because it can serve as a template for automata that correspond to games that satisfy specific properties. This is easily seen by considering the examples that follow. Recall that we always operate under the assumption that Q wins if, when the game is over and the state of the coin is measured, it is found to be in the initial state; otherwise Picard wins.

7.1.1. Changing the Initial State of the Coin

Suppose we want to construct a complete winning automaton for Q for all the games in which the coin is initially at state t a i l s = 1 . Starting from the semiautomaton A of Figure 5 we define
  • state t a i l s as the initial state, and
  • state t a i l s as the only accept state.
The resulting automaton $A_Q'$ is depicted in Figure 6. The following theorem holds for $A_Q'$.
Theorem 3 (Complete and winning automaton II for Q).
$A_Q'$ is a complete and winning automaton for Q for all the games in which the initial state of the coin is $|tails\rangle = |1\rangle$.

7.1.2. Picard Surely Wins

By suitably modifying the semiautomaton A, we can also design a complete winning automaton for Picard for all the games in which the coin is initially at state h e a d s = 0 . We can do that by
  • setting h e a d s as the initial state; and
  • setting t a i l s as the only accept state.
This will result in the automaton $A_P$ depicted in Figure 7, for which one can easily prove the next theorem.
Theorem 4 (Complete and winning automaton for Picard).
$A_P$ is a complete and winning automaton for Picard for all the games in which the initial state of the coin is $|heads\rangle = |0\rangle$.
Similarly, we can define a complete winning automaton for Picard for all the games in which the coin is initially at state t a i l s = 1 . All we have to do is
  • set t a i l s as the initial state; and
  • set h e a d s as the only accept state.
This will result in the automaton $A_P'$ shown in Figure 8, for which one can easily show that the following theorem holds.
Theorem 5 (Complete and winning automaton II for Picard).
$A_P'$ is a complete and winning automaton for Picard for all the games in which the initial state of the coin is $|tails\rangle = |1\rangle$.

7.1.3. Fair Games

Up to this point, we have focused on winning action sequences for Q or Picard, that is sequences for which Q or Picard, respectively, wins the game with probability 1.0 . However, we can also capture action sequences for which both players have equal probability 0.5 to win the game. We call such sequences fair.
Definition 9.
Let α be an admissible sequence for the underlying game G ( s , χ ( α ) ) . If both Q and Picard have equal probability 0.5 to win the game G ( s , χ ( α ) ) using α, we say that α is a fair sequence for Q and Picard in G ( s , χ ( α ) ) .
An automaton A with initial state s (s is either h e a d s or t a i l s ) is complete with respect to the fair sequences if for every finite game between Picard and Q in which the coin is initially at state s , every fair sequence corresponds to a word accepted by A.
The semiautomaton A of Figure 5 can help in this case too. The states $s_2$ and $s_4$ of A correspond to the states $\frac{\sqrt{2}}{2}|0\rangle + \frac{\sqrt{2}}{2}|1\rangle$ and $\frac{\sqrt{2}}{2}|0\rangle - \frac{\sqrt{2}}{2}|1\rangle$ of the coin, respectively, as can be seen in Table 2. These states share a common characteristic: if the coin ends up in any of them, then, after the measurement in the orthonormal basis $\{|0\rangle, |1\rangle\}$, the state of the coin will either be the basic ket $|0\rangle$ with probability 0.5, or the basic ket $|1\rangle$ with equal probability 0.5. Hence, if the coin ends up in these states, then both Q and Picard have equal probability 0.5 to win. Therefore, we can design an automaton that accepts all the fair sequences for all the games in which the coin is initially at state $|heads\rangle = |0\rangle$ by
  • setting h e a d s as the initial state; and
  • setting s 2 and s 4 as the accept states.
Symmetrically, we can define an automaton that accepts all the fair sequences for all the games in which the coin is initially at state t a i l s = 1 by
  • setting t a i l s as the initial state; and
  • setting s 2 and s 4 as the accept states.
The resulting automata are $A_{1/2}$ and $A_{1/2}'$, shown in Figure 9 and Figure 10, respectively.
Theorem 6 (Complete automata for fair sequences).
$A_{1/2}$ and $A_{1/2}'$ are complete for fair sequences, that is, they accept all fair sequences for all the games in which the initial state of the coin is $|heads\rangle = |0\rangle$ and $|tails\rangle = |1\rangle$, respectively.
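All of the automata in this section share the transition structure of the semiautomaton A and differ only in their initial and accepting states. The sketch below is our own illustration: the transition table is our reconstruction from the text and the variable names only mirror the naming used above.

```python
# Shared transition table of the semiautomaton A (our reconstruction from the text).
DELTA = {
    ("heads", "i"): "heads", ("heads", "f"): "tails", ("heads", "h"): "s2",
    ("tails", "i"): "tails", ("tails", "f"): "heads", ("tails", "h"): "s4",
    ("s2", "i"): "s2",       ("s2", "f"): "s2",       ("s2", "h"): "heads",
    ("s4", "i"): "s4",       ("s4", "f"): "s4",       ("s4", "h"): "tails",
}

def run(word, start):
    state = start
    for letter in word:
        state = DELTA[(state, letter)]
    return state

def make_automaton(start, accepting):
    """Turn the semiautomaton into a DFA by fixing its initial and accepting states."""
    return lambda word: run(word, start) in accepting

A_Q       = make_automaton("heads", {"heads"})     # Q surely wins, coin starts heads up
A_Q_prime = make_automaton("tails", {"tails"})     # Q surely wins, coin starts tails up
A_P       = make_automaton("heads", {"tails"})     # Picard surely wins, coin starts heads up
A_P_prime = make_automaton("tails", {"heads"})     # Picard surely wins, coin starts tails up
A_half    = make_automaton("heads", {"s2", "s4"})  # fair sequences, coin starts heads up

print(A_Q("hih"), A_P("f"), A_half("hihihi"))      # True True True
```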

8. Conclusions and Further Work

Quantum technologies have attracted the interest of not only the academic community but also of the industry. This has led to further research on the relationship between classical and quantum computation. Standard and well-established notions and systems have to be examined and, if necessary, revised in the light of the upcoming quantum era.
In this work, we have presented a way to construct automata, and a semiautomaton, from the P Q game, such that the resulting automata and semiautomaton capture, in a specific sense, every conceivable variation and extension of the game. That is, the automata can be used to study possible variants of the game, and their accepting language can be used to determine strategies for any player, dominant or otherwise. Specifically, starting from the automaton that corresponds to the standard P Q game, we construct automata for various interesting variations of the P Q game, before finally presenting a semiautomaton that is in a sense “complete” with regard to the game and captures the “essence” of the generalized P Q game. This simply means that, by providing appropriate initial and final states for the semiautomaton, we can study any possible variation of the P Q game.
We remark that the automata presented here do much more than accepting dominant strategies. In game theory a strategy i for a player is strongly dominated by strategy j if the player’s payoff from i is strictly less than that from j. A strategy i for a player is a strongly dominant strategy iff all other strategies for this player are strongly dominated by i (see [1,2] for details). In our context, the strategy ( H , H ) for the original P Q game is a strongly dominant strategy for Q. The automata we have constructed accept sequences of actions by both players, i.e., sequences that contain the actions of both players. As we have explained in Section 7, they can be designed so as to accept all action sequences of all possible games between Picard and Q for which either Q surely wins, or Picard surely wins or even they both have probability 0.5 to win.
We believe that the current methodology can be easily extended to account for greater variation in the actions of Picard and Q. Our analysis was based on the premise that the set of actions is precisely $Act = \{I, F, H\}$. This set can be augmented by adding a finite number of actions, as long as these actions represent rotations $R_\theta$ through an angle θ about the origin and reflections $F_\varphi$ about a line through the origin that makes an angle φ with the positive x-axis, where $\theta = \frac{2\pi}{n}$ and $\varphi = \frac{2\pi}{m}$, with n, m positive integers. In that way, Q, being the quantum player, would have many more actions at his disposal, rather than only H. However, more actions may not necessarily mean more winning strategies for Q. Obviously, in such a case, the resulting finite automata would have more states than the automata presented in this paper.
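For concreteness, the standard matrix forms of such rotations and reflections are sketched below (our illustration; the function names and the example values of n and m are ours). Note that F and H themselves are reflections of this form, with φ = π/4 and φ = π/8, respectively.

```python
import numpy as np

def rotation(theta):
    """R_theta: rotation through angle theta about the origin."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(phi):
    """F_phi: reflection about the line through the origin at angle phi to the x-axis."""
    return np.array([[np.cos(2 * phi),  np.sin(2 * phi)],
                     [np.sin(2 * phi), -np.cos(2 * phi)]])

n, m = 8, 3                                   # example choices of the positive integers n and m
R = rotation(2 * np.pi / n)
Fm = reflection(2 * np.pi / m)
print(np.allclose(R @ R.T, np.eye(2)))        # True: rotations are orthogonal, hence unitary
print(np.allclose(Fm @ Fm, np.eye(2)))        # True: every reflection is an involution
```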
Future directions for this work are numerous, including the construction of automata expressing other quantum games, and the application of automata-theoretic notions to such games. The connection of standard finite automata with the players’ actions in a particular quantum game can only be seen as a first step in the direction of checking not only other games, but also different game modes on already known setups.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments that greatly contributed to improving the quality of the final version of the paper.

Author Contributions

All of the authors have contributed extensively to this work. T.A. and M.V. conceived the initial idea. K.G. and A.Sir. assisted T.A. in developing the theory presented in the main part of this paper. M.V., K.K., and A.Sin. thoroughly analyzed the current literature. T.A. and K.G. were responsible for supervising the completion of this work. A.Sir. and T.A. contributed by giving the formal definitions and the mathematical proofs used in the paper. All authors contributed to the writing of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PQ      Picard-Q
NFA     Nondeterministic finite automaton
DFA     Deterministic finite automaton

Appendix A. Proofs of the Main Results

It is clear from our prior analysis that, under the assumptions that the coin is initially at state heads = |0⟩ or tails = |1⟩ and the actions of the players are precisely I, F and H, the only states the coin may pass through are the states shown in Table 2. This fact prompts the following definition.
Definition A1.
The set of kets { |0⟩, (√2/2)|0⟩ + (√2/2)|1⟩, |1⟩, (√2/2)|0⟩ − (√2/2)|1⟩ } that represent the possible states of the coin is denoted by C. C ⊂ H₂ is a finite subset of the two-dimensional complex Hilbert space H₂.
For completeness, we state the following Lemma A1. Its proof is trivial and is omitted.
Lemma A1.
C is closed with respect to the actions I , F and H.
To prove the main theorems of this paper, we will have to give a few technical definitions.
Definition A2.
The transition function δ of a deterministic automaton A can be extended to a function δ̄ : K × Σ* → K, where K is the set of states and Σ the alphabet of A. Let q ∈ K, l ∈ Σ, and w₀, w ∈ Σ*; then δ̄ is defined recursively as follows:
δ̄(q, w) = { q, if w = ε;  δ(δ̄(q, w₀), l), if w = w₀l }.
If a deterministic automaton is in state q and reads the word w, it will end up in state δ̄(q, w). In this respect, the extended transition function is a convenient way to specify how an arbitrary word will affect the state of the automaton. For instance, A_Q, whose initial state is heads, will end up in state s₅ when fed with the input word fhf. In an analogous fashion, it will be useful to define a function that will specify how a sequence of actions will affect the state of the coin. Without further ado, we state the next definition.
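As a minimal illustration of Definition A2, the Python sketch below implements δ̄ iteratively over an explicit transition table (the function and table names are ours). The table shown is only a two-state fragment, containing the identity loops and the classical flips between heads and tails that are stated explicitly in the text; it is not the automaton A_Q of Figure 4.

```python
def extended_delta(delta: dict, q: str, w: str) -> str:
    """Extended transition function of Definition A2: process w letter by letter."""
    for letter in w:              # iterative form of the recursion on w = w0·l
        q = delta[(q, letter)]
    return q

# Illustrative fragment only: identity loops plus the classical flip transitions.
delta = {
    ("heads", "i"): "heads", ("tails", "i"): "tails",
    ("heads", "f"): "tails", ("tails", "f"): "heads",
}

print(extended_delta(delta, "heads", "ffif"))   # -> 'tails'
```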
Definition A3.
We define the function S : C × Act* → C, which gives the state of the coin after the application of the action sequence α, assuming that the coin is initially in state |s⟩. Formally,
S(|s⟩, α) = { |s⟩, if α = ϵ;  U(S(|s⟩, α₀)), if α = (α₀, U) },
where U ∈ Act and α₀, α ∈ Act*.
Consider for example the action sequence α = (I, F, H); then S(|0⟩, α) = (√2/2)|0⟩ − (√2/2)|1⟩ and S(|1⟩, α) = (√2/2)|0⟩ + (√2/2)|1⟩. Finally, we define the function φ and its inverse φ⁻¹. φ maps states of the automaton A_Q to states of the coin. This function conveys exactly the same information as Table 2 and it will enable us to rigorously express what we mean by saying that A_Q captures all the finite games between Picard and Q.
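These two values can be reproduced numerically. The sketch below is ours: it represents the coin as a vector in C², takes I, F and H to be the identity, Pauli-X and Hadamard matrices, and applies Definition A3 to α = (I, F, H).

```python
import numpy as np

I = np.eye(2)
F = np.array([[0, 1], [1, 0]])                # flip (Pauli X)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard

ket0 = np.array([1.0, 0.0])   # heads = |0>
ket1 = np.array([0.0, 1.0])   # tails = |1>

def S(state: np.ndarray, alpha) -> np.ndarray:
    """State of the coin after applying the action sequence alpha (Definition A3)."""
    for U in alpha:
        state = U @ state
    return state

alpha = (I, F, H)
print(S(ket0, alpha))   # approx [ 0.7071 -0.7071], i.e. (sqrt(2)/2)|0> - (sqrt(2)/2)|1>
print(S(ket1, alpha))   # approx [ 0.7071  0.7071], i.e. (sqrt(2)/2)|0> + (sqrt(2)/2)|1>
```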
Definition A4.
We define the function φ : K → C, where K is the set of states of the automaton A_Q:
φ(heads) = |0⟩,   φ(s₂) = (√2/2)|0⟩ + (√2/2)|1⟩,   φ(tails) = |1⟩,   φ(s₄) = (√2/2)|0⟩ − (√2/2)|1⟩.
Clearly, φ is a bijection, so it has an inverse function φ⁻¹ : C → K:
φ⁻¹(|0⟩) = heads,   φ⁻¹((√2/2)|0⟩ + (√2/2)|1⟩) = s₂,   φ⁻¹(|1⟩) = tails,   φ⁻¹((√2/2)|0⟩ − (√2/2)|1⟩) = s₄.
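In code, φ is simply a finite lookup table, and its inverse can be implemented by matching amplitudes. The short sketch below is ours (the state names follow Table 2, and matching is exact rather than up to a global phase).

```python
import numpy as np

s = np.sqrt(2) / 2
phi = {                                   # Definition A4 as a lookup table
    "heads": np.array([1.0, 0.0]),
    "s2":    np.array([s, s]),
    "tails": np.array([0.0, 1.0]),
    "s4":    np.array([s, -s]),
}

def phi_inv(coin_state: np.ndarray) -> str:
    """Inverse of phi: find the automaton state whose image matches the coin state."""
    for name, vec in phi.items():
        if np.allclose(vec, coin_state):
            return name
    raise ValueError("coin state not in C")

print(phi_inv(np.array([s, s])))   # -> 's2'
```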
The next Lemma states that A Q is a faithful representation of the coin.
Lemma A2 (Faithful representation Lemma).
The states and the transitions of the coin are faithfully represented by the states and the transitions of A_Q in the following precise sense:
∀w ∈ Σ* ∀q ∈ K : φ(δ̄(q, w)) = S(φ(q), μ̄(w)),   (A5)
and
∀α ∈ Act* ∀|s⟩ ∈ C : φ⁻¹(S(|s⟩, α)) = δ̄(φ⁻¹(|s⟩), λ̄(α)).   (A6)
Proof. 
The proof is by simultaneous induction on the length n of w and α.
  • When n = 0 , the only word of length 0 is the empty word ε . In this case, by Definition 6 μ ¯ ( ε ) = ϵ , by Definition A2 δ ¯ ( q , ε ) = q and, by Definition A3, S ( φ ( q ) , ϵ ) = φ ( q ) . Equation (A5) then reduces to φ ( q ) = φ ( q ) , which is trivially true.
    Similarly, when n = 0, α is the empty action sequence ϵ, in which case λ̄(ϵ) = ε (Definition 6), δ̄(φ⁻¹(|s⟩), ε) = φ⁻¹(|s⟩) (Definition A2), and S(|s⟩, ϵ) = |s⟩ (Definition A3). In this special case, Equation (A6) becomes φ⁻¹(|s⟩) = φ⁻¹(|s⟩), which is of course true.
  • We assume that (A5) and (A6) hold for n = k and for all q ∈ K and |s⟩ ∈ C.
  • It remains to prove Equations (A5) and (A6) for n = k + 1 .
    Consider an arbitrary word w over Σ of length k + 1 . w can be written as w 0 l where w 0 is a word of length k and l is one of i , f or h. By the induction hypothesis we know that
    ∀q ∈ K : φ(δ̄(q, w₀)) = S(φ(q), μ̄(w₀)).   (A7)
    There are three cases to consider, depending on whether l = i , l = f or l = h .
    If l = i, then w = w₀i and the transition function of A_Q (Figure 4) ensures that δ̄(q, w₀) = δ̄(q, w₀i) (†). At the same time, by Definition 6, μ̄(w₀i) = (μ̄(w₀), I) and, by Definition A3, S(φ(q), (μ̄(w₀), I)) = I(S(φ(q), μ̄(w₀))) = S(φ(q), μ̄(w₀)) (‡) because I is the identity operator. Using (†), (‡), and the induction hypothesis in Equation (A7), we get φ(δ̄(q, w₀i)) =(†) φ(δ̄(q, w₀)) =(A7) S(φ(q), μ̄(w₀)) =(‡) S(φ(q), (μ̄(w₀), I)). Thus, in this case, Equation (A5) holds.
    If l = f , then w = w 0 f . With respect to f the transition function of A Q (Figure 4) is a bit more complicated, which implies that each state of A Q must be examined separately. Let’s begin with state h e a d s , that is let’s assume that δ ¯ ( q , w 0 ) = h e a d s . Then, the transition function requires that δ ¯ ( q , w 0 f ) = t a i l s . Accordingly, Definition A4 implies that
    φ(δ̄(q, w₀)) = φ(heads) = |0⟩,   φ(δ̄(q, w₀f)) = φ(tails) = |1⟩.   (*)
    By the induction hypothesis in Equation (A7) and (*) we can deduce that
    S(φ(q), μ̄(w₀)) =(A7) φ(δ̄(q, w₀)) =(*) |0⟩.   (**)
    Combining Definitions 6 and A3 with (**) we derive that μ̄(w₀f) = (μ̄(w₀), F) and
    S(φ(q), (μ̄(w₀), F)) =(Def. A3) F(S(φ(q), μ̄(w₀))) =(**) F|0⟩ = |1⟩,   (***)
    because F is the flip operator. Therefore, if δ̄(q, w₀) = heads, then
    φ(δ̄(q, w₀f)) =(*) |1⟩ =(***) S(φ(q), (μ̄(w₀), F)),
    that is Equation (A5) holds. It is straightforward to repeat the same reasoning for the remaining states of A Q and verify in each case the validity of Equation (A5).
    If l = h , then w = w 0 h . As in the previous case, we have to examine each state of A Q separately. If δ ¯ ( q , w 0 ) = h e a d s , then, according to the transition function, δ ¯ ( q , w 0 h ) = s 2 . Recalling Definition A4 we see that
    φ(δ̄(q, w₀)) = φ(heads) = |0⟩,   φ(δ̄(q, w₀h)) = φ(s₂) = (√2/2)|0⟩ + (√2/2)|1⟩.   (†)
    By the induction hypothesis in Equation (A7) and (†) we conclude that
    S(φ(q), μ̄(w₀)) =(A7) φ(δ̄(q, w₀)) =(†) |0⟩.   (††)
    Together, Definitions 6 and A3 and (††) imply that μ̄(w₀h) = (μ̄(w₀), H) and
    S(φ(q), (μ̄(w₀), H)) =(Def. A3) H(S(φ(q), μ̄(w₀))) =(††) H|0⟩ = (√2/2)|0⟩ + (√2/2)|1⟩,   (†††)
    because H is the Hadamard operator. Hence, if δ̄(q, w₀) = heads, then
    φ(δ̄(q, w₀h)) =(†) (√2/2)|0⟩ + (√2/2)|1⟩ =(†††) S(φ(q), (μ̄(w₀), H)),
    showing that Equation (A5) holds. Repeating analogous arguments for the remaining states of A Q allows us to establish the validity of Equation (A5).
    We proceed now to show that Equation (A6) holds. Consider an arbitrary action sequence α of length k + 1 : α = ( α 0 , U ) , where α 0 is the prefix action sequence of length k and U is one of the unitary operators I , F or H. In this case the induction hypothesis becomes
    ∀|s⟩ ∈ C : φ⁻¹(S(|s⟩, α₀)) = δ̄(φ⁻¹(|s⟩), λ̄(α₀)).   (A8)
    Since U stands for one of I , F or H, we must distinguish three cases.
    If U is the identity operator I then, by Definition A3, S(|s⟩, (α₀, I)) = I(S(|s⟩, α₀)) = S(|s⟩, α₀) (†). Hence, φ⁻¹(S(|s⟩, α)) =(†) φ⁻¹(S(|s⟩, α₀)) =(A8) δ̄(φ⁻¹(|s⟩), λ̄(α₀)) (‡). The transition function of A_Q (Figure 4) guarantees that ∀w ∈ Σ* ∀q ∈ K : δ̄(q, w) = δ̄(q, wi). Therefore, δ̄(φ⁻¹(|s⟩), λ̄(α₀)) = δ̄(φ⁻¹(|s⟩), λ̄(α₀)i) =(Def. 5) δ̄(φ⁻¹(|s⟩), λ̄(α₀)λ(I)) =(Def. 6) δ̄(φ⁻¹(|s⟩), λ̄(α)) (§). Combining (‡) and (§), we conclude that φ⁻¹(S(|s⟩, α)) = δ̄(φ⁻¹(|s⟩), λ̄(α)), i.e., (A6) holds.
    If U is the flip operator F, then each ket of C must be examined separately. Let us begin with the ket |0⟩, that is, let us assume that S(|s⟩, α₀) = |0⟩. Then, by Definition A3, S(|s⟩, α) = S(|s⟩, (α₀, F)) = F(S(|s⟩, α₀)) = |1⟩. In this case Definition A4 implies that
    φ⁻¹(S(|s⟩, α₀)) = φ⁻¹(|0⟩) = heads,   φ⁻¹(S(|s⟩, α)) = φ⁻¹(|1⟩) = tails.   (*)
    By the induction hypothesis in Equation (A8) and (*) we see that
    δ̄(φ⁻¹(|s⟩), λ̄(α₀)) =(A8) φ⁻¹(S(|s⟩, α₀)) =(*) heads.   (**)
    Combining Definitions 6 and A2 with (**) we derive that λ̄(α) = λ̄(α₀)λ(F) = λ̄(α₀)f and
    δ̄(φ⁻¹(|s⟩), λ̄(α)) = δ̄(φ⁻¹(|s⟩), λ̄(α₀)f) =(Def. A2) δ(δ̄(φ⁻¹(|s⟩), λ̄(α₀)), f) =(**) δ(heads, f) = tails,   (***)
    by the transition function of A_Q (Figure 4). Consequently,
    φ⁻¹(S(|s⟩, α)) =(*) tails =(***) δ̄(φ⁻¹(|s⟩), λ̄(α)),
    that is Equation (A6) holds. It is straightforward to repeat the same reasoning for the remaining kets of C and verify in each case the validity of Equation (A6).
    The last case we have to examine is when U is the Hadamard operator H, in which case α = (α₀, H). As in the previous case, we have to check each ket of C. Let us consider first the case where S(|s⟩, α₀) = |0⟩. Then, by Definition A3, S(|s⟩, α) = S(|s⟩, (α₀, H)) = H(S(|s⟩, α₀)) = (√2/2)|0⟩ + (√2/2)|1⟩. In this case Definition A4 implies that
    φ⁻¹(S(|s⟩, α₀)) = heads,   φ⁻¹(S(|s⟩, α)) = s₂.   (†)
    By the induction hypothesis in Equation (A8) and (†) we see that
    δ̄(φ⁻¹(|s⟩), λ̄(α₀)) =(A8) φ⁻¹(S(|s⟩, α₀)) =(†) heads.   (††)
    Combining Definitions 6 and A2 with (††) we derive that λ̄(α) = λ̄(α₀)λ(H) = λ̄(α₀)h and
    δ̄(φ⁻¹(|s⟩), λ̄(α)) = δ̄(φ⁻¹(|s⟩), λ̄(α₀)h) =(Def. A2) δ(δ̄(φ⁻¹(|s⟩), λ̄(α₀)), h) =(††) δ(heads, h) = s₂,   (†††)
    by the transition function of A_Q (Figure 4). Finally,
    φ⁻¹(S(|s⟩, α)) =(†) s₂ =(†††) δ̄(φ⁻¹(|s⟩), λ̄(α)),
    that is Equation (A6) holds. Using similar arguments, we can prove Equation (A6) for the remaining kets of C.
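Equation (A5) can also be spot-checked numerically. The sketch below is ours: since Figure 4 is not reproduced in the text above, the transition table is reconstructed from the coin dynamics with states that differ only by a global phase identified, so the check should be read as an illustration of the faithful-representation property rather than as a transcription of A_Q.

```python
import itertools
import numpy as np

s = np.sqrt(2) / 2
phi = {"heads": np.array([1.0, 0.0]), "s2": np.array([s, s]),
       "tails": np.array([0.0, 1.0]), "s4": np.array([s, -s])}
ops = {"i": np.eye(2),
       "f": np.array([[0, 1], [1, 0]]),
       "h": np.array([[1, 1], [1, -1]]) / np.sqrt(2)}

# Transition table with states identified up to a global phase
# (our reconstruction, not a transcription of Figure 4).
delta = {("heads", "i"): "heads", ("heads", "f"): "tails", ("heads", "h"): "s2",
         ("tails", "i"): "tails", ("tails", "f"): "heads", ("tails", "h"): "s4",
         ("s2", "i"): "s2", ("s2", "f"): "s2", ("s2", "h"): "heads",
         ("s4", "i"): "s4", ("s4", "f"): "s4", ("s4", "h"): "tails"}

def delta_bar(q, w):
    for letter in w:
        q = delta[(q, letter)]
    return q

def S(state, word):
    for letter in word:
        state = ops[letter] @ state
    return state

def same_up_to_phase(u, v):
    return np.isclose(abs(np.vdot(u, v)), 1.0)

# Check phi(delta_bar(q, w)) == S(phi(q), mu_bar(w)) up to global phase
# for every state q and every word w of length <= 6.
for n in range(7):
    for w in map("".join, itertools.product("ifh", repeat=n)):
        for q in phi:
            assert same_up_to_phase(phi[delta_bar(q, w)], S(phi[q], w))
print("Equation (A5) verified up to global phase for all words of length <= 6.")
```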
Theorem A1 (Winning automaton).
A Q is a winning automaton for Q.
Proof. 
Recalling Definition 7 and taking into account that the initial state of A Q is h e a d s , we see that we must prove that
∀w ∈ L_{A_Q} : Q(G(|0⟩, γ_w), α_w),   (A9)
where α_w = μ̄(w) and γ_w = χ(μ̄(w)).
Let us first consider the special case where w is the empty word ε, which obviously belongs to L_{A_Q}. By Definition 6, ε corresponds to the empty action sequence ϵ, which, by Definition 4, corresponds to the empty sequence of moves e, which, by Definition 3, corresponds to the trivial game G(|0⟩, e). Q wins this game, so in this special case Q(G(|0⟩, e), ϵ) is true.
We consider now an arbitrary word w of L A Q . Applying Lemma A2 and taking into account that the initial state of A Q is h e a d s , we arrive at the conclusion that
φ(δ̄(heads, w)) = S(|0⟩, μ̄(w)).   (†)
The fact that w is accepted by A_Q means that δ̄(heads, w) = heads, which in turn implies (recall Definition A4) that
φ(δ̄(heads, w)) = |0⟩.   (‡)
Together, (†) and (‡) give
S(|0⟩, μ̄(w)) = |0⟩.
Hence, if the initial state of the coin is |0⟩, and the sequence of actions μ̄(w) is applied, then the coin will end up, prior to measurement, in state |0⟩. The outcome of the measurement in the orthonormal basis {|0⟩, |1⟩} will be |0⟩ with probability 1.0. Finally, by Definition 4, μ̄(w) is a winning sequence for G(|0⟩, χ(μ̄(w))). Therefore, Equation (A9) holds. ☐
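The content of Theorem A1 can also be illustrated by brute force: for every short word accepted by the phase-identified reconstruction of A_Q used in the sketch after Lemma A2, the corresponding action sequence leaves the coin in a state that yields outcome |0⟩, and hence a win for Q, with probability 1. The sketch below is ours and is subject to the same assumptions as before.

```python
import itertools
import numpy as np

ops = {"i": np.eye(2),
       "f": np.array([[0, 1], [1, 0]]),
       "h": np.array([[1, 1], [1, -1]]) / np.sqrt(2)}
# Same phase-identified transition table as in the sketch after Lemma A2.
delta = {("heads", "i"): "heads", ("heads", "f"): "tails", ("heads", "h"): "s2",
         ("tails", "i"): "tails", ("tails", "f"): "heads", ("tails", "h"): "s4",
         ("s2", "i"): "s2", ("s2", "f"): "s2", ("s2", "h"): "heads",
         ("s4", "i"): "s4", ("s4", "f"): "s4", ("s4", "h"): "tails"}

def final_state(w):
    q = "heads"                                   # initial state of A_Q
    for letter in w:
        q = delta[(q, letter)]
    return q

ket0 = np.array([1.0, 0.0])
for n in range(6):
    for w in map("".join, itertools.product("ifh", repeat=n)):
        if final_state(w) == "heads":             # w is accepted by A_Q
            coin = ket0
            for letter in w:
                coin = ops[letter] @ coin
            prob_heads = abs(coin[0]) ** 2        # Born rule for outcome |0>
            assert np.isclose(prob_heads, 1.0)
print("Every accepted word leaves Q winning with probability 1.")
```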
In an identical manner we can show the next Corollary.
Corollary A1.
The automata A P Q , A P Q π / 2 , and A P Q 6 are all winning automata for Q.
Theorem A2 (Complete automaton for Q).
A Q is complete with respect to the winning sequences for Q.
Proof. 
We must show that
∀γ ∈ N* ∀α ∈ Act* : Q(G(|0⟩, γ), α) ⇒ λ̄(α) ∈ L_{A_Q}.   (A11)
Let us first consider the special case where γ is the empty sequence of moves e, which, by Definition 3, corresponds to the trivial game G(|0⟩, e). In this case, the only admissible action sequence α is the empty sequence ϵ, which is a winning sequence for Q. Obviously, the corresponding word is the empty word, which is, of course, recognized by A_Q. Thus, in this special case, Equation (A11) is true.
We consider now an arbitrary sequence of moves γ and an arbitrary winning sequence α for the game G ( 0 , γ ) . Applying Lemma A2 and taking into account that the initial state of A Q is h e a d s , we arrive at the conclusion that
S(φ(heads), α) = S(|0⟩, α) = φ(δ̄(heads, λ̄(α))).   (†)
The fact that Q wins with probability 1.0 means that the final state of the coin, before measurement, is |0⟩, that is, S(|0⟩, α) = |0⟩, which, in view of (†), implies that φ(δ̄(heads, λ̄(α))) = |0⟩. Consequently, by Definition A4,
δ̄(heads, λ̄(α)) = heads.
Hence, A_Q, starting from the initial state heads, will surely end up in state heads upon reading the word λ̄(α). The fact that heads is an accepting state allows us to conclude that λ̄(α) belongs to L_{A_Q} and (A11) holds. ☐
Theorem A3 (Complete and winning automaton II for Q).
A_Q′ is a complete and winning automaton for Q for all the games in which the initial state of the coin is tails = |1⟩.
Proof. 
The proof is just a repetition of the proofs of Theorems A1 and A2, the only difference being that this time the games begin with the coin at state tails = |1⟩. ☐
Theorem A4 (Complete and winning automaton for Picard).
A_P is a complete and winning automaton for Picard for all the games in which the initial state of the coin is heads = |0⟩.
Proof. 
Again the proof is just a repetition of the proofs of Theorems A1 and A2. The difference now is that the accepting state is t a i l s . ☐
Theorem A5 (Complete and winning automaton II for Picard).
A_P′ is a complete and winning automaton for Picard for all the games in which the initial state of the coin is tails = |1⟩.
Proof. 
Once more we repeat the proofs of Theorems A1 and A2. In this case the games begin with the coin at state tails = |1⟩, the initial state of A_P′ is tails and the accepting state is heads. ☐
Theorem A6 (Complete automata for fair sequences).
A_1/2 and A_1/2′ are complete for fair sequences, that is, they accept all fair sequences for all the games in which the initial state of the coin is heads = |0⟩ and tails = |1⟩, respectively.
Proof. 
We first show that ∀γ ∈ N* ∀α ∈ Act*:
If Q and Picard have probability 0.5 to win G(|0⟩, γ) using α, then λ̄(α) ∈ L_{A_1/2}.
Before we give the proof let us point out that this time γ cannot be the empty sequence of moves e because, by Definition 3, it would correspond to the trivial game G(|0⟩, e). For the trivial game the only admissible action sequence α is the empty sequence ϵ, which is not a fair sequence. Naturally, the corresponding empty word is not accepted by A_1/2.
We consider now an arbitrary sequence of moves γ and an arbitrary fair sequence α for the game G ( 0 , γ ) . Applying Lemma A2 and taking into account that the initial state of A 1 / 2 is h e a d s , we arrive at the conclusion that
S(φ(heads), α) = S(|0⟩, α) = φ(δ̄(heads, λ̄(α))).   (†)
The fact that both Q and Picard have probability 0.5 to win means that the final state of the coin before measurement is either (√2/2)|0⟩ + (√2/2)|1⟩ or (√2/2)|0⟩ − (√2/2)|1⟩. This is guaranteed by Lemma A1, which asserts that the coin can only pass through the states in C. Hence, S(|0⟩, α) = (√2/2)|0⟩ + (√2/2)|1⟩ or S(|0⟩, α) = (√2/2)|0⟩ − (√2/2)|1⟩. In view of (†), this means that φ(δ̄(heads, λ̄(α))) = (√2/2)|0⟩ + (√2/2)|1⟩, or φ(δ̄(heads, λ̄(α))) = (√2/2)|0⟩ − (√2/2)|1⟩. Therefore, by Definition A4, δ̄(heads, λ̄(α)) is either s₂ or s₄. Thus, A_1/2, starting from the initial state heads, will end up in one of s₂ or s₄ upon reading the word λ̄(α). Since both of these states are accepting states, we conclude that λ̄(α) belongs to L_{A_1/2} and A_1/2 is complete for fair sequences.
In a similar manner we can show that A_1/2′ is also complete for fair sequences. ☐
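As with the winning sequences, the fair sequences can be spot-checked numerically. In the sketch below (ours, using the same phase-identified reconstruction of the transition table as in the earlier sketches), every short word that drives the automaton from heads into s2 or s4 indeed leaves the coin in a state for which the measurement gives each player probability 0.5.

```python
import itertools
import numpy as np

ops = {"i": np.eye(2),
       "f": np.array([[0, 1], [1, 0]]),
       "h": np.array([[1, 1], [1, -1]]) / np.sqrt(2)}
# Phase-identified transition table; s2 and s4 are the accepting states of A_1/2.
delta = {("heads", "i"): "heads", ("heads", "f"): "tails", ("heads", "h"): "s2",
         ("tails", "i"): "tails", ("tails", "f"): "heads", ("tails", "h"): "s4",
         ("s2", "i"): "s2", ("s2", "f"): "s2", ("s2", "h"): "heads",
         ("s4", "i"): "s4", ("s4", "f"): "s4", ("s4", "h"): "tails"}

def run(w, start="heads"):
    q = start
    for letter in w:
        q = delta[(q, letter)]
    return q

ket0 = np.array([1.0, 0.0])
for n in range(6):
    for w in map("".join, itertools.product("ifh", repeat=n)):
        if run(w) in {"s2", "s4"}:                # w accepted by A_1/2
            coin = ket0
            for letter in w:
                coin = ops[letter] @ coin
            # Measuring in {|0>, |1>} now gives each player probability 1/2.
            assert np.isclose(abs(coin[0]) ** 2, 0.5)
            assert np.isclose(abs(coin[1]) ** 2, 0.5)
print("Every word accepted by A_1/2 corresponds to a fair action sequence.")
```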

References

  1. Gintis, H. Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Interaction, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 2009.
  2. Tadelis, S. Game Theory: An Introduction; Princeton University Press: Princeton, NJ, USA, 2013.
  3. Myerson, R. Game Theory; Harvard University Press: Cambridge, MA, USA, 1997.
  4. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Science Editions; J. Wiley: Hoboken, NJ, USA, 1944.
  5. Meyer, D.A. Quantum strategies. Phys. Rev. Lett. 1999, 82, 1052–1055.
  6. Salimi, S.; Soltanzadeh, M. Investigation of quantum roulette. Int. J. Quantum Inf. 2009, 7, 615–626.
  7. Wang, X.B.; Kwek, L.; Oh, C. Quantum roulette: An extended quantum strategy. Phys. Lett. A 2000, 278, 44–46.
  8. Ren, H.F.; Wang, Q.L. Quantum game of two discriminable coins. Int. J. Theor. Phys. 2008, 47, 1828–1835.
  9. Nguyen, A.T.; Frison, J.; Huy, K.P.; Massar, S. Experimental quantum tossing of a single coin. New J. Phys. 2008, 10, 083037.
  10. Berlin, G.; Brassard, G.; Bussieres, F.; Godbout, N. Fair loss-tolerant quantum coin flipping. Phys. Rev. A 2009, 80, 062321.
  11. Ambainis, A. A new protocol and lower bounds for quantum coin flipping. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, Crete, Greece, 6–8 July 2001; pp. 134–142.
  12. Ambainis, A.; Buhrman, H.; Dodis, Y.; Rohrig, H. Multiparty quantum coin flipping. In Proceedings of the 19th IEEE Annual Conference on Computational Complexity, Amherst, MA, USA, 24 June 2004; pp. 250–259.
  13. Neyman, A. Bounded complexity justifies cooperation in the finitely repeated prisoners’ dilemma. Econ. Lett. 1985, 19, 227–229.
  14. Rubinstein, A. Finite automata play the repeated prisoner’s dilemma. J. Econ. Theory 1986, 39, 83–96.
  15. Abreu, D.; Rubinstein, A. The structure of Nash equilibrium in repeated games with finite automata. Econometrica 1988, 56, 1259–1281.
  16. Binmore, K.G.; Samuelson, L. Evolutionary stability in repeated games played by finite automata. J. Econ. Theory 1992, 57, 278–305.
  17. Ben-Porath, E. Repeated games with finite automata. J. Econ. Theory 1993, 59, 17–32.
  18. Marks, R.E. Repeated Games and Finite Automata; Australian Graduate School of Management, University of New South Wales: Sydney, Australia, 1990.
  19. Eisert, J.; Wilkens, M.; Lewenstein, M. Quantum games and quantum strategies. Phys. Rev. Lett. 1999, 83, 3077.
  20. Benjamin, S.C.; Hayden, P.M. Comment on “Quantum Games and Quantum Strategies”. Phys. Rev. Lett. 2001, 87, 069801.
  21. Zhang, S. Quantum strategic game theory. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 January 2012; ACM: New York, NY, USA, 2012; pp. 39–59.
  22. Flitney, A.P.; Abbott, D. An introduction to quantum game theory. Fluct. Noise Lett. 2002, 2, R175–R187.
  23. Lee, C.F.; Johnson, N. Parrondo games and quantum algorithms. arXiv 2002, quant-ph/0203043.
  24. Meyer, D.A.; Blumer, H. Parrondo games as lattice gas automata. J. Stat. Phys. 2002, 107, 225–239.
  25. Bertelle, C.; Flouret, M.; Jay, V.; Olivier, D.; Ponty, J.L. Adaptive behaviour for prisoner dilemma strategies based on automata with multiplicities. In Proceedings of the ESS 2002 Conference, Dresden, Germany, 23–26 October 2002.
  26. Piotrowski, E.W.; Sladkowski, J. The next stage: Quantum game theory. arXiv 2003, quant-ph/0308027.
  27. Suwais, K. Assessing the Utilization of Automata in Representing Players’ Behaviors in Game Theory. Int. J. Ambient Comput. Intell. 2014, 6, 1–14.
  28. Almanasra, S.; Suwais, K.; Rafie, M. The Applications of Automata in Game Theory. In Intelligent Technologies and Techniques for Pervasive Computing; IGI Global: Hershey, PA, USA, 2013; pp. 204–217.
  29. Li, L.; Feng, Y. On hybrid models of quantum finite automata. J. Comput. Syst. Sci. 2015, 81, 1144–1158.
  30. Zheng, S.; Li, L.; Qiu, D.; Gruska, J. Promise problems solved by quantum and classical finite automata. Theor. Comput. Sci. 2017, 666, 48–64.
  31. Li, L.; Qiu, D. Lower bounds on the size of semi-quantum finite automata. Theor. Comput. Sci. 2016, 623, 75–82.
  32. Gainutdinova, A.; Yakaryılmaz, A. Unary probabilistic and quantum automata on promise problems. Quantum Inf. Process. 2018, 17, 28.
  33. Giannakis, K.; Papalitsas, C.; Kastampolidou, K.; Singh, A.; Andronikos, T. Dominant Strategies of Quantum Games on Quantum Periodic Automata. Computation 2015, 3, 586–599.
  34. Preskill, J. Quantum Information and Computation. 2017. Available online: http://www.theory.caltech.edu/preskill/ph219/ph219_2017 (accessed on 18 December 2017).
  35. Sipser, M. Introduction to the Theory of Computation, 2nd ed.; Course Technology: Boston, MA, USA, 2006.
  36. Yakhnis, A.; Yakhnis, V. Gurevich–Harrington’s games defined by finite automata. Ann. Pure Appl. Logic 1993, 62, 265–294.
  37. Cox, E.; Schkufza, E.; Madsen, R.; Genesereth, M. Factoring general games using propositional automata. In Proceedings of the IJCAI Workshop on General Intelligence in Game-Playing Agents (GIGA), Pasadena, CA, USA, 13 July 2009; pp. 13–20.
  38. Rabin, M.O.; Scott, D. Finite automata and their decision problems. IBM J. Res. Dev. 1959, 3, 114–125.
Figure 1. This two state automaton A P Q captures the moves of the P Q game.
Figure 2. The two-state automaton A P Q π / 2 captures the possible moves of the P Q π / 2 game, in which the initial state of the coin is t a i l s . The accepting state now is t a i l s .
Figure 3. The automaton A P Q 6 corresponding to the six-round P Q 6 game.
Figure 4. The automaton A_Q corresponding to the 3n-round variant PQ_3n, for n ≥ 3.
Figure 5. The semiautomaton A capturing the essence of the P Q game and its variants.
Figure 6. The automaton A Q accepts all winning sequences for Q when the initial state of the coin is t a i l s .
Figure 7. The automaton A P accepts all winning sequences for Picard when the initial state of the coin is h e a d s .
Figure 8. The automaton A P accepts all winning sequences for P when the coin is initially tails up.
Figure 9. The automaton A 1 / 2 captures the fair action sequences when the coin is initially heads up.
Figure 10. The automaton A 1 / 2 captures the fair action sequences when the initial state of the coin is t a i l s .
Table 1. Correspondence between the operators I, F and H and the letters of the alphabet Σ = {i, f, h}.
(a) Operators vs. letters: I ↔ i, F ↔ f, H ↔ h.
(b) Letter assignment λ : {I, F, H} → {i, f, h}: λ(I) = i, λ(F) = f, λ(H) = h.
(c) Operator assignment μ : {i, f, h} → {I, F, H}: μ(i) = I, μ(f) = F, μ(h) = H.
Table 2. During the games played by Picard and Q, the coin may pass through the states shown in the left column of this table. The corresponding states of the automata that capture these games are shown in the right column.
Coin State → Automaton State
(1, 0)^T = heads = |0⟩ → heads
(√2/2, √2/2)^T = (√2/2)|0⟩ + (√2/2)|1⟩ → s₂
(0, 1)^T = tails = |1⟩ → tails
(√2/2, −√2/2)^T = (√2/2)|0⟩ − (√2/2)|1⟩ → s₄
