1. Introduction
Quantum physics has the potential to have a great impact on information technology, especially through the development of universal quantum computers. However, near-term quantum devices will not be capable of fault-tolerant, universal quantum computation. Fortunately, these devices will still be useful for information processing tasks, in particular as genuine random number generators. Certifiable (private) random numbers can then be used for cryptography, the simulation of physical systems, or other randomised algorithms. By certifiable we mean that there is a certificate guaranteeing that the randomness is private and unpredictable to any external agent (one not directly using the device). This certificate may be predicated on certain assumptions, which could be computational or physical in nature, depending on the degree of security desired.
It is now well established that quantum systems are capable of producing data that is unpredictable, and thus random to any external agent, even when one has perfect knowledge of the quantum system. Unfortunately, in practice it can be difficult to have perfect knowledge of quantum systems, especially if they are somewhat noisy, as near-term quantum devices will be. These (often classical) sources of noise can appear as unpredictable as the randomness resulting from the quantum systems, so one must have an excellent characterisation of the sources of noise to extract the true quantum randomness. Indeed, if the noise is just classical data then it could have been generated by some external process and thus an external agent could, in principle, keep a copy of this data and use it to predict the output data of a quantum device.
There does exist a convenient approach to certifiable quantum random number generation, which is
device-independent randomness certification. In this scenario one does not need a complete characterisation of a device; genuine randomness is certified by the violation of a Bell inequality [
1] between two or more devices. That is, certification is achieved via the statistics produced in a Bell test, without any specific assumptions made on the devices producing the statistics. The kinds of assumptions made in this approach are that the devices are quantum mechanical and that there are multiple, non-communicating devices that may share some resource. Furthermore, there are no computational assumptions made about the device producing the randomness. The downside of this approach is that a genuine violation of a Bell inequality is experimentally daunting, with the first loophole-free demonstrations emerging only recently [
2].
Given the experimental challenges of device-independent random number generation [3,4], a promising and practical route to certifiable randomness generation is within the scope of one-sided device-independent quantum information [
5]. In this setting, certain devices are assumed to be perfectly characterised (through trusted and characterised measurement devices) while others are not. Randomness can be certified based on the violation of a steering inequality [
6], which is the analogue of a Bell inequality for this new setting.
Within the framework of device-independent randomness certification it was shown that a single entangled pair of qubits (in a pure state) can be a source of “unbounded” random numbers, one qubit for each wing of the Bell experiment [
7]. That is, one can fix a value
N of random bits that one would like to obtain, and then construct a scheme with sequences of measurements on the two-qubit state that will produce
N bits of randomness. Thus by using a sequence of measurements, one can exceed the randomness possible from a single general measurement, which for a qubit is 2 bits [
8]. One issue is that this randomness certification scheme involves a large number of measurements (exponential in the length of the output random string) for one of the parties, which limits its utility for various protocols.
In this work, we study the adaptation of the above sequential measurement scenario to the one-sided device-independent scenario. In doing so, we develop a more robust scheme that no longer requires exponentially many measurements for one of the parties. We present an analytical bound on the min entropy of our randomness generation scheme, and then give numerical results that derive improved rates of randomness generation. Furthermore, we discuss how the scheme could be implemented in current architectures for networked quantum information processing. This is an extended version of the conference paper [
9].
Related Work
In
Table 1 we compare our work with that of [
7], showing how, by trusting one party’s measurements, we exponentially reduce the number of measurements required. In work by Skrzypczyk and Cavalcanti, it was shown how, by increasing the local Hilbert space dimension of the quantum state held by Alice and Bob, more randomness can be certified in the one-sided device-independent scenario [
In particular, the number of certifiable bits grows with the local dimension d. This work is built on a series of works in one-sided device-independent randomness certification, with [
11] establishing tools based on semi-definite programming.
Our cryptographic scenario is intermediate between the device-independent and the device-dependent scenarios. Another such example of an intermediate scenario is that of semi-device-independent quantum information [
12,
13], where one bounds the dimension (or energy) of the Hilbert space of the systems involved. Randomness certification has been shown in this scenario, with experimental implementations of various protocols [
14,
15]. This scenario is not comparable with that of one-sided device-independence due to the different assumptions, but it demonstrates that such intermediate scenarios are of broad interest.
2. One-Sided Device Independence and Randomness Certification
Before introducing the scenario, it is worthwhile to briefly motivate it from an experimental point of view. One particular kind of experimental set-up we have in mind is an atom-photon hybrid experiment, where one system is an atom in a cavity, and the other system is a photon emitted from the atom. Instead of an atom in a cavity, an ion in a trap is another possibility. Photons are convenient for long-range communication, and ion trap technology is associated with high-fidelity operations and excellent system control. As a result, the detection efficiency in an ion trap is very close to perfect but, in spite of recent advances, that of photodetectors is not. In a device-independent scheme, a lower detection efficiency can compromise the security of a protocol, so to circumvent these issues we can resort to the one-sided device-independent (1sDI) setting. In this setting, the photonic system is taken to be trusted and well characterised, thus ruling out detector-based attacks, and the atomic system is treated as a black box.
A further motivation for the 1sDI scenario arises when one considers sequences of measurements on the same system, as we will do. We need our technology to allow for the possibility of returning a quantum system after a measurement (thus being a non-trivial quantum instrument). This is experimentally challenging for photonic systems, but feasible within ion trap technology. Ideally, we would thus like our trusted system to perform only very simple operations, such as a single measurement that does not return a quantum state as an output. In this way, we can see one-sided device independence as exploiting the best features of a hybrid quantum information experiment. This will be pertinent when we come to discuss implementations of our randomness certification scheme.
The idea of producing certifiable randomness using steering was first studied by Law et al. [
16], and then by Passaro et al. [
11], who utilised the techniques of semi-definite programming. The broad scenario considered in 1sDI information processing for randomness generation is the following. There are two parties, Alice (A), and Bob (B), who can share some resource. We allow for the possibility of a third party, Eve (E), having prepared the shared quantum resource. Alice’s share of the resource is assumed to be a quantum system with a known Hilbert space, upon which Alice can perform arbitrary (characterised) quantum operations. In particular, Alice can perform tomographically complete measurements. Bob’s share of the resource is contained within a black box and he can only input classical data into the box and retrieve more classical data; he does not have any knowledge of the inner workings of the black box, only that it has a quantum description. Bob can only collect statistics of the input and output data.
Given this scenario, the way in which we certify the randomness generated is through a (slightly modified) non-local guessing game [
11,
17]. We give a schematic of this guessing game in
Figure 1. In this game, in each round, Eve prepares a quantum state |ψ⟩_ABE, which we can assume to be pure through the Stinespring dilation (we could dilate the Hilbert spaces of Bob and Eve, for example). One subsystem is then distributed to each of Alice and Bob so that they share the joint state ρ_AB = Tr_E[|ψ⟩⟨ψ|_ABE]. Since Alice has access to her respective subsystem she is able to characterise it, but Bob does not have direct access to his subsystem. When Bob inputs the classical variable y, which is his choice of measurement, into his device and obtains the output b, a measurement is made on Bob's subsystem, described by a positive operator M_{b|y} such that Σ_b M_{b|y} = 𝟙 for all y. Eve will then in each round perform a measurement that will generate an outcome z, which will be her guess of Bob's outcome b; this measurement is described by a positive operator M_z such that Σ_z M_z = 𝟙.
In this setting, Eve's goal is to optimise over the state |ψ⟩_ABE and the measurements {M_z} that will give her the best chance to guess the outcome of Bob's measurement. Importantly, Eve's strategy has to be compatible with the statistics that Alice and Bob observe. Note that in this game, the most compact way of describing what Alice and Bob observe (assuming Alice performs tomography on her system) is the assemblage {σ_{b|y}}, a set in which each element can be written as

σ_{b|y} = Tr_BE[(𝟙_A ⊗ M_{b|y} ⊗ 𝟙_E) |ψ⟩⟨ψ|_ABE],

which can be viewed as a sub-normalised density matrix describing the state of Alice's system after the measurement M_{b|y} is made, such that Tr Σ_b σ_{b|y} = 1 for all y. This is merely Alice's and Bob's observed assemblage; in full, every element is obtained in the following way:

σ_{b|y} = Σ_z σ_{b,z|y},

where we have coarse-grained over all of Eve's measurement outcomes, or guesses, and introduced

σ_{b,z|y} = Tr_BE[(𝟙_A ⊗ M_{b|y} ⊗ M_z) |ψ⟩⟨ψ|_ABE],

which can be seen as the sub-normalised state of Alice's system conditioned on Bob's and Eve's particular measurement outcomes.
Returning to the game, we quantify Eve's ability to guess Bob's outcome with the guessing probability. We first assume that Bob will aim to generate randomness from only one particular input, denoted by y*, and that Eve knows y*. The guessing probability for Eve's output z to correctly guess Bob's output b for choice y* is then

G(y*) = Σ_z Tr σ_{z,z|y*}.

This can be seen as the sum over z of the probabilities P(b, z|y*) when b = z [11].
We will now expand upon this set-up to allow Bob's measurement to be a sequence of measurements. That is, we take Bob's input y and output b to be tuples of length n, so that y = (y_1, …, y_n) and b = (b_1, …, b_n). That is, Bob makes a sequence of measurements where the ith measurement in the sequence corresponds to the measurement choice y_i with output b_i. We assume that the output b_i is obtained before the choice y_{i+1} is made, and thus we impose a constraint of causality: measurement outcomes in the past are independent of future measurement choices. A consequence of this, for example, is that P(b_1|y_1, y_2) = P(b_1|y_1), i.e., the probability of observing b_1 given y_1 is independent of the future choice of y_2. Since P(b|y) = Tr σ_{b|y} for every b and y, this then has consequences for the assemblage. For example, for n = 2,

Σ_{b_2} σ_{b_1 b_2 | y_1 y_2} = Σ_{b_2} σ_{b_1 b_2 | y_1 y_2′} for all y_2, y_2′,

and likewise for larger n. At this point it is worthwhile to point out that any assemblage that satisfies these causality constraints in addition to the non-signalling constraints, i.e., Σ_b σ_{b|y} = Σ_b σ_{b|y′} for all y, y′, can be realised by Alice and Bob sharing a quantum state and Bob making an appropriate sequence of measurements, as proven in [18].
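As an illustration, the non-signalling and normalisation constraints can be checked numerically for a concrete assemblage. The sketch below is ours, not taken from the paper: it assumes a partially entangled two-qubit state and Pauli-Z/X measurement choices for Bob, and verifies that the coarse-grained assemblage Σ_b σ_{b|y} is independent of Bob's choice y.

```python
import numpy as np

# Single-qubit projectors for the Pauli-Z and Pauli-X bases.
Z_PROJ = [np.diag([1., 0.]), np.diag([0., 1.])]
plus = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
X_PROJ = [np.outer(plus, plus), np.outer(minus, minus)]

def assemblage_element(rho_AB, M):
    """sigma_{b|y} = Tr_B[(1 ⊗ M_{b|y}) rho_AB] for a two-qubit state."""
    op = np.kron(np.eye(2), M) @ rho_AB
    # partial trace over Bob: contract the two Bob indices
    return op.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# A partially entangled pure state cos(t)|00> + sin(t)|11>.
t = 0.7
psi = np.zeros(4); psi[0], psi[3] = np.cos(t), np.sin(t)
rho = np.outer(psi, psi)

# Bob's two measurement choices y ∈ {0, 1}: Z basis or X basis.
povms = [Z_PROJ, X_PROJ]
sigma = {(b, y): assemblage_element(rho, povms[y][b])
         for y in range(2) for b in range(2)}

# Non-signalling: sum_b sigma_{b|y} must not depend on y
# (it equals Alice's reduced state), and must have unit trace.
marg = [sigma[(0, y)] + sigma[(1, y)] for y in range(2)]
assert np.allclose(marg[0], marg[1])
assert all(abs(np.trace(m) - 1.0) < 1e-12 for m in marg)
```

The same check extends to the sequential causality constraints by summing the assemblage over the final outcome b_2 and comparing across choices y_2.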
These are all of the constraints in the scenario that we are considering when allowing for sequences of measurements on a state. The goal, given all of these constraints, is to give bounds on the guessing probability G(y*) given an observed assemblage. One method for doing this is through semi-definite programming [11], and we will return to this technique when presenting numerical results. We will also give analytical results based on self-testing in the steering scenario [19]. One unifying aspect of our results is that instead of certifying randomness given the observed assemblages, we can certify randomness based on the violation of steering inequalities, which are analogous to Bell inequalities. More generally, a steering inequality violation results directly from statistics observed by Alice. Therefore we can certify randomness based on statistical tests given particular (known) measurements made by Alice. Throughout this work, it will be made clear how the guessing probability is being calculated.
Given the guessing probability G(y*), we can compute a related quantity, which is the certifiable min entropy of Bob's outcomes:

H_min = −log₂ G(y*).

As we can see, this is directly related to the guessing probability. That is, if the set of possible outcomes b has cardinality 2^m and G(y*) = 2^{−m}, then the min entropy associated with Bob's outcomes is m bits. In this way, Bob's device is a source of m bits of certifiable randomness.
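As a minimal illustration of this relation (our sketch, not part of the paper), the min entropy in bits is simply the negative base-2 logarithm of the guessing probability:

```python
from math import log2

def min_entropy(p_guess: float) -> float:
    """Certifiable min entropy (in bits) from Eve's guessing probability."""
    return -log2(p_guess)

# If Eve's best guessing probability is 2^-m, Bob certifies m bits.
assert min_entropy(0.5) == 1.0    # one perfectly random bit
assert min_entropy(0.25) == 2.0   # two bits, e.g. a length-2 sequence
```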
3. A Scheme for Unbounded Randomness Generation
In this section we will describe an honest strategy, in which a sequence of measurements made upon half of a two-qubit entangled state can result in a large amount of observed randomness. In the subsequent sections we will give methods to certify that this is genuine randomness, but for now we will not concern ourselves with certification.
The scheme is similar to that of [
7]. We will call this scheme the two-qubit sequential measurement (TQSM) scheme. Bob can implement non-projective measurements in “rotated versions” of the Pauli-X and Pauli-Z bases, and Alice has the functionality to implement a tomographically complete set of measurements, for example measurements of the Pauli observables X, Y, and Z, since this is sufficient for her to do quantum state tomography to certify Bob's random outcomes.
First, for simplicity, we will consider Bob making just one measurement, i.e., a sequence of n measurements for n = 1, so that y = y_1 and b = b_1. We have that Bob can make a choice between two dichotomic measurements, so that y, b ∈ {0, 1}. When Bob makes the choice y = 0 (y = 1), he will make a (possibly non-projective) rotated version of a measurement in the Pauli-Z (Pauli-X) basis.
We will now describe these “rotated” measurements in terms of their associated Kraus operators. These operators are of the form K^θ_{b|y}, where θ is an angle and b, y are the bits defined above. Consider the following operators:

K^θ_{0|0} = cos θ |0⟩⟨0| + sin θ |1⟩⟨1|,  K^θ_{1|0} = sin θ |0⟩⟨0| + cos θ |1⟩⟨1|,
K^θ_{0|1} = cos θ |+⟩⟨+| + sin θ |−⟩⟨−|,  K^θ_{1|1} = sin θ |+⟩⟨+| + cos θ |−⟩⟨−|.

The positive operator valued measure (POVM) constructed from these Kraus operators, which Bob implements on his half of the shared state, will be of the form M^θ_{b|y} = (K^θ_{b|y})† K^θ_{b|y}. These Kraus operators reduce to the usual projective Pauli-X and Pauli-Z basis projectors for θ = 0. Therefore, if Alice and Bob share a pure quantum state |ψ⟩_AB and Bob makes a measurement in, say, the rotated Pauli-X basis, and gets the outcome b = 0, the post-measurement state will be

(𝟙 ⊗ K^θ_{0|1}) |ψ⟩_AB / ‖(𝟙 ⊗ K^θ_{0|1}) |ψ⟩_AB‖.

Very similar expressions are then obtained for the other Kraus operators. It should be noted that for all pure states |ψ⟩_AB the post-measurement state will also be pure [7]. The post-measurement pure state shared by Alice and Bob after outcome b for input y will be

|ψ^{b|y}⟩_AB = (U_{b|y} ⊗ V_{b|y}) (cos θ_{b|y} |00⟩ + sin θ_{b|y} |11⟩),

where the unitaries U_{b|y} and V_{b|y} and the angle θ_{b|y} depend on the initial quantum state and the angle of the rotated measurement. We point out that such an angle and unitaries exist (and can be calculated).
What is the probability of getting the outcome b given y? This will be P(b|y) = ⟨ψ| 𝟙 ⊗ M^θ_{b|y} |ψ⟩. We will only care about the case y = 1, i.e., the rotated Pauli-X measurement, since for this case, if the shared state is of the form cos α |00⟩ + sin α |11⟩, we have that

P(b|y = 1) = 1/2 for b ∈ {0, 1}.

Therefore, assuming that Alice and Bob share such a state and Bob makes that measurement (in the honest setting), Bob's outcome for y = 1 will be perfectly random. This will then be the basis of the certified randomness in this scheme.
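This uniformity can be checked numerically. The sketch below is ours and assumes a specific rotated-X Kraus form, K_b = cos φ P_b + sin φ P_{1−b} with P_b the Pauli-X basis projectors (chosen so the measurement is projective at φ = 0, consistent with the text); under that assumption the outcome distribution is uniform for any entanglement angle t and any rotation φ.

```python
import numpy as np

plus = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
PX = [np.outer(plus, plus), np.outer(minus, minus)]  # X-basis projectors

def rotated_x_povm(phi):
    """POVM M_b = K_b^† K_b for the (assumed) rotated-X Kraus operators
    K_b = cos(phi) P_b + sin(phi) P_{1-b}; projective at phi = 0."""
    kraus = [np.cos(phi) * PX[0] + np.sin(phi) * PX[1],
             np.sin(phi) * PX[0] + np.cos(phi) * PX[1]]
    return [K.T @ K for K in kraus]

def outcome_probs(t, phi):
    """P(b) for the rotated-X measurement on Bob's half of cos t|00> + sin t|11>."""
    rho_B = np.diag([np.cos(t) ** 2, np.sin(t) ** 2])  # Bob's reduced state
    return [float(np.trace(rho_B @ M)) for M in rotated_x_povm(phi)]

# The POVM is complete, and the outcome is uniform for *any* entanglement
# angle t and rotation phi -- the basis of the scheme's randomness.
for t, phi in [(0.3, 0.1), (0.7, 0.5), (np.pi / 4, 0.0)]:
    M0, M1 = rotated_x_povm(phi)
    assert np.allclose(M0 + M1, np.eye(2))
    assert np.allclose(outcome_probs(t, phi), [0.5, 0.5])
```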
The above is what happens for a sequence consisting of one measurement. For sequences of measurements of length n with n > 1, the post-measurement state as described in (9) will be relevant. Note that, up to the unitaries on Alice's and Bob's sides, this state is of the form cos θ′ |00⟩ + sin θ′ |11⟩. Therefore, if after his first measurement Bob applies the appropriate correcting unitary to his share of the state, the joint state will be this entangled state up to a local unitary on Alice's side. After applying this unitary, Bob can make another measurement that is a rotated Pauli measurement. Bob's input y will now be a tuple of length 2, i.e., y = (y_1, y_2). For the second round, Bob's choices of measurement are again between two rotated Pauli basis measurements, where y_2 = 0 is for the Z basis and y_2 = 1 is for the X basis.

If y = (1, 1), then Bob performs the rotated X measurement, followed by a correcting unitary, then another rotated X measurement and another correcting unitary. The post-measurement state after this second measurement (and unitary) is again of the same entangled form, for appropriately chosen unitaries and angles.

The probability of getting the outcomes b = (b_1, b_2) for inputs y = (1, 1) is straightforwardly calculated to be P(b|y) = 1/4. Thus for a sequence of two measurements, with each being the rotated X basis measurement, we have two perfectly random outcomes b_1, b_2. In general, for this sequence of rotated measurement, correcting unitary, rotated measurement, and so on, if there are n measurements, then the probability is P(b|y) = 2^{−n}.
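The sequence of measurement, correcting unitary, measurement, and so on can be simulated directly. The sketch below is ours: it assumes the same rotated-X Kraus form as above, and stands in for the correcting unitaries by rotating the post-measurement state back to its Schmidt form via an SVD; every length-n outcome string then occurs with probability 2^{−n}.

```python
import numpy as np

plus = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
PX = [np.outer(plus, plus), np.outer(minus, minus)]  # X-basis projectors

def kraus_x(phi):
    # Assumed rotated-X Kraus operators; projective when phi = 0.
    return [np.cos(phi) * PX[0] + np.sin(phi) * PX[1],
            np.sin(phi) * PX[0] + np.cos(phi) * PX[1]]

def to_schmidt_form(psi):
    """Apply local 'correcting' unitaries mapping a two-qubit pure state
    back to the form cos(t)|00> + sin(t)|11>, via the SVD of its
    amplitude matrix; this plays the role of the correcting unitary."""
    amp = psi.reshape(2, 2)          # psi = sum_ij amp[i, j] |i>_A |j>_B
    s = np.linalg.svd(amp, compute_uv=False)
    return np.diag(s).reshape(4)     # state in the local Schmidt bases

def sequence_prob(outcomes, phi=0.3, t=1.0):
    """P(b_1, ..., b_n) for a sequence of rotated-X measurements, each
    followed by a correcting unitary, starting from cos(t)|00> + sin(t)|11>."""
    psi = np.zeros(4); psi[0], psi[3] = np.cos(t), np.sin(t)
    prob = 1.0
    for b in outcomes:
        psi = np.kron(np.eye(2), kraus_x(phi)[b]) @ psi
        norm_sq = float(psi @ psi)   # probability of this outcome
        prob *= norm_sq
        psi = to_schmidt_form(psi / np.sqrt(norm_sq))
    return prob

# Every length-n outcome string occurs with probability 2^-n:
# n perfectly random bits from a single entangled pair.
assert abs(sequence_prob([0]) - 0.5) < 1e-9
assert abs(sequence_prob([1, 0, 1]) - 0.125) < 1e-9
```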
This TQSM scheme thus gives us randomness assuming a particular state and sequence of measurements made by Bob. In subsequent sections the goal will be to remove the assumptions of the state and measurements but certify (almost) the same amount of randomness in the 1sDI scenario. It turns out that the randomness in the TQSM scheme can be certified. That is, to reproduce the observed assemblage (or statistics) between Alice and Bob, Eve would have to prepare devices that implement something equivalent to, or extremely close to, the TQSM scheme. Since this scheme produces a great deal of randomness, so will the certified version.
Before moving on, it is worthwhile to point out how this scheme differs from that presented in [7]. The important distinction is that in the scheme of [7], in addition to Bob making a sequence of measurements, Alice had to choose from a number of measurements that increased with the length of the sequence. This is because the certification was done in the device-independent setting, and not the 1sDI setting. In our scheme, the number of measurements Alice makes never depends on the number of measurements in the sequence; it depends only on the dimension of Alice's Hilbert space, since she needs to perform at most a tomographically complete measurement.
4. Certifiable Unbounded Randomness Generation
In this section we will give an analytical method for certifying the randomness in a sequential scenario that is suited to the TQSM scheme. In particular, we will show that the TQSM scheme can produce an unbounded amount of certifiable randomness: for an arbitrary integer N, there is a sequence of measurements that produces N bits of certifiable randomness.
In order to certify randomness in the 1sDI setting, we cannot assume the initial state shared by Alice, Bob, and Eve, nor the measurement sequence made by Bob; we can only assume the Hilbert space of Alice's system, which from now on will be taken to be two-dimensional, i.e., Alice holds a qubit. As mentioned earlier, we can assume that the state shared by Alice, Bob, and Eve is pure. We can additionally assume, for cryptographic purposes, that the measurements in Bob's sequence are all projective; for example, the non-projective measurements in the TQSM scheme can be simulated by projective measurements on a potentially larger Hilbert space (we outline such an approach in Appendix A).
We introduce notation to refer to Bob's measurements; in particular, we introduce observables for each of Bob's measurements in the sequence. For the first measurement in the sequence, the choices of measurement corresponding to y_1 = 0 and y_1 = 1 will have the observables M_{0|0} − M_{1|0} and M_{0|1} − M_{1|1} respectively, where M_{b_1|y_1} is Bob's POVM element corresponding to the outcome b_1 for input y_1. For subsequent measurements we introduce the notation that b̄_i and ȳ_i are the tuples of all values of b_j and y_j for j from 1 to i consecutively (and inclusive). The observables corresponding to the (i+1)th measurement in the sequence, after obtaining the outcomes b̄_i for choices ȳ_i, will be denoted analogously.
The method for certifying this randomness is for Alice to choose between the three Pauli measurements. Note that Alice does not have to choose between these measurements randomly: in each round of the guessing game, Alice can choose a different Pauli basis, and this choice can be made deterministically. The certification is then based on the statistics gathered from these three Pauli measurements and Bob's sequence of measurements. Note that every single-qubit observable can be written as a linear combination of Pauli matrices, so it is sufficient to make Pauli measurements and calculate the statistics for an arbitrary observable a posteriori. As part of the certification we have statistical criteria that the statistics obtained by Alice and Bob need to satisfy. If the statistics satisfy the criteria then this is the certificate that the outcomes of Bob's sequence of measurements are random; to wit, Eve will not be able to perfectly predict the outcomes of Bob's measurements. The statistical criteria will be based on the TQSM scheme.
From the TQSM scheme we have that after the ith measurement and correcting unitary, the state of Alice and Bob's two-qubit system will be of the entangled form given above. We will use the unitaries and angles in this post-measurement state to outline the statistical criteria. For each measurement in a sequence, there will be statistical criteria that should be satisfied. For simplicity we will start with the first measurement in the sequence.
The statistical criteria we will use can be derived from considering Alice and Bob both making Pauli-X and Pauli-Z measurements on a two-qubit pure entangled state of the form cos θ |00⟩ + sin θ |11⟩. The criteria essentially compare the observed statistics with those that would be obtained from perfect Pauli measurements on such an entangled state. These criteria will then be used for self-testing the devices, by showing that their behaviour cannot deviate far from Pauli measurements on an entangled state. For future work, it would be of interest to use a steering inequality instead of these three separate criteria. Recall that the TQSM scheme is very similar to pure Pauli measurements on a two-qubit pure entangled state, except for some rotation in the typically non-projective measurements; we wish to leverage this fact to produce certifiable randomness. The statistical criteria are
where Z and X are the Pauli-Z and Pauli-X observables respectively, and ε₁, ε₂ are real, positive numbers. The angle θ just comes from the target pure state cos θ |00⟩ + sin θ |11⟩ between Alice and Bob. For subsequent measurements in the sequence, after the ith measurement, we have the following criteria for the (i+1)th measurement in the sequence:
where the unitary and angle are the same as in (11). Just as with (11), the corresponding ε parameters are real, positive numbers. We will call the conjunction of the criteria in (11) and all criteria (12) for all i the sequential steering criteria (SSC).
It should be emphasised again that in the SSC, Alice does not need to make a measurement corresponding to a rotated observable such as U Z U†, say, since for a known unitary U this observable can be written as a real linear combination of Pauli matrices. Thus Alice only needs to measure the Pauli observables to recover the relevant expectation values.
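To make this concrete, here is a small sketch (ours, using a hypothetical rotation U) of recovering the Pauli coefficients of a rotated observable via the Hilbert-Schmidt inner product:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1. + 0j, -1. + 0j])
PAULIS = [I2, X, Y, Z]

def pauli_coeffs(obs):
    """Real coefficients c_k with obs = sum_k c_k sigma_k, via the
    Hilbert-Schmidt inner product c_k = Tr[sigma_k obs] / 2."""
    return np.array([np.trace(P @ obs).real / 2 for P in PAULIS])

# A rotated observable U Z U^† for some (hypothetical) known unitary U.
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
obs = U @ Z @ U.conj().T   # equals sin(2θ) X + cos(2θ) Z

c = pauli_coeffs(obs)
reconstructed = sum(ck * P for ck, P in zip(c, PAULIS))
assert np.allclose(reconstructed, obs)
# So Alice recovers <U Z U^†> from her Pauli expectation values alone.
```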
If we take the TQSM scheme and introduce parameters for the rotated measurements, then we can adjust the SSC parameters to suit the TQSM scheme. For each measurement in the sequence, for the measurement in the rotated Pauli-Z basis we fix the angle to be equal to zero, so that the POVM element for the outcome 0 (1) is the projector |0⟩⟨0| (|1⟩⟨1|). For the rotated Pauli-X measurement we fix the angle to be θ_i for the ith measurement in the sequence, which we can fix later, but it will be small and non-zero. The corresponding POVM elements then follow from the Kraus operators given above.
One point to make at this stage is that, for the choice of parameters giving the criteria in (13), whenever Bob makes the measurement choice y_i = 0 in any round, he makes a projective measurement. The problem with this is that the post-measurement state will then be a product state, and no longer entangled; entanglement is necessary to certify randomness in the 1sDI scenario we have here. To get around this issue, we alter the scheme, as suggested in [7], such that after any time Bob makes a projective measurement, he does not make any more measurements in the sequence. That is, a measurement in the (i+1)th round will only follow the measurement choice y_i = 1. Therefore, the only bit-strings y produced by Bob will consist of a final bit-value (0 or 1) prefixed by all ones. When we look at numerical approaches to randomness certification we will relax this constraint to look for optimal amounts of randomness.
When we put these details and values for the measurements into the SSC we obtain the following bounds:
We will use these values to certify the randomness produced by the TQSM scheme.
Coming back to randomness certification, for a sequence of n measurements, the sequence of inputs from which we will obtain a string of n random bits will be the all-ones string, i.e., y* = (1, 1, …, 1). The following (informally stated) result gives an upper bound on the probability of Eve correctly guessing Bob's sequence of measurement outcomes.
Theorem 1. Suppose that Bob makes a sequence of n measurements yielding the outcome bit-string b of length n, that Alice, Bob, and Eve share some initial state, and that Eve makes a measurement whose outcome z is her guess of Bob's outcome b. If, for each i, the SSC is satisfied with the parameters in its statement, then Eve's guessing probability is bounded above, and the bound approaches 2^{−n} as the SSC parameters approach their ideal values. The proof of this theorem can be found in the
Appendix A. This theorem uses techniques from self-testing in the 1sDI setting as developed in [
19]. Of independent interest we present a method to self-test all partially entangled two-qubit states in a robust manner.
Given Theorem 1 we can certify an unbounded amount of randomness, assuming all of the SSC are satisfied. In particular, for the TQSM scheme we can give bounds on the number of bits that will be certified, as indicated in the following result.
Theorem 2. If all statistics satisfy the SSC with the stated parameter values for all i, and with a free choice of angle that is assumed to be small, then the certifiable randomness for Bob's sequence of n measurements approaches n bits asymptotically. Furthermore, the TQSM scheme achieves this asymptotic behaviour, as its resulting statistics satisfy the SSC for the chosen values for all i. Proof. If we take the result of Theorem 1 and convert the probability into a min entropy we have
where in the third line we have that
, and in the fourth line we use the value of
from the statement of the theorem. In the fifth line we have that
and
for
, which will always be the case by construction. Then in the sixth line, we used the fact that
for
, which will always be the case by construction. In the conditions, we can choose
for constant
, such that
, thus completing the proof. □
Note that by appropriate choice of the measurement parameters for the rotated Pauli-X basis measurements we can get arbitrarily close to n bits of randomness by reducing the constant c in the statement of the theorem. We cannot reduce this constant to 0, since this would mean that one of the rotated Pauli-X measurements becomes projective, and we would then not be able to certify randomness.
5. Numerical Results
The previous analytical results indicate that unbounded randomness is possible, but the methods employed are perhaps sub-optimal in extracting the most randomness from the TQSM scheme. In this section we will employ numerical techniques, similar to those developed in [
11], to give an indication of how robust the scheme is for randomness generation.
The methods employed in this section are based on semi-definite programming (SDP). We take the approach of asking how much randomness can be certified given the violation of a steering inequality. A violation of a steering inequality implies that there must be certifiable randomness present; in this way, the violation of the steering inequality is the certificate for the randomness. First we outline how to derive a steering inequality from assemblages.
Given an assemblage, a method to determine its steerability via an SDP was derived by Skrzypczyk et al. [20]. The steering weight (SW) is given as the solution to the following SDP, (17):
where {σ_λ} is an assemblage that Eve could produce for Alice using hidden variables λ. This SDP has a corresponding dual program given by:
The dual program, (18), is the most relevant for this work: as shown in [20], the dual variables of the SDP (18) in fact define a steering inequality for which the assemblage produces an optimal violation, if one exists. We will use these steering inequalities as the fundamental building block for our sequential certification scheme.
We now return to calculating the certifiable randomness in terms of the guessing probability for Eve to guess Bob’s measurement outcomes. For simplicity, we will first study the case of a single measurement before giving the results for a sequence of measurements. With just a single measurement, the maximum guessing probability is given as the solution to the following SDP:
The steering inequality
is the one determined by the SDP (
18), which is optimally violated by the observed assemblage. The SDP (
19) allows Eve to create for Alice any assemblage, as long as it obeys the constraints in the SDP. The first constraint enforces that this assemblage reproduces the observed violation of the steering inequality, which Alice finds by computing the optimal values of the steering weight SDP (18). Of course, if the assemblage that Alice observes is not steerable, i.e., it produces a steering weight of 0, then no steering inequality will be violated. The second constraint enforces that Alice and Bob cannot communicate faster than the speed of light (the no-signalling condition), while the last constraint enforces that Eve must produce a valid assemblage for Alice, i.e., each element must be a positive semidefinite matrix.
We can now extend this scenario to one in which Bob implements a sequence of measurements on his half of the shared state. Defining the protocol for n rounds is therefore straightforward. The idea is that for each measurement in the sequence there will be a steering inequality and an observed violation. The steering inequalities and violations are obtained from the assemblages produced by the TQSM scheme, where the SW is calculated and a steering inequality generated for each measurement round in the sequence. Once we have this set of steering inequalities, Alice can determine the guessing probability for Eve as the solution of the following SDP:
The solution of this SDP is the guessing probability: the maximum of the trace over the assemblages that Eve can create for Alice at the end of the protocol, for the particular input string. Again, Eve knows the measurement settings from which Bob wants to extract randomness. The steering inequality violations can be calculated by Alice from the assemblage she observes. The constraints of the SDP are similar to the single-measurement case, except for the addition of one new set of constraints required for a sequence: these constraints enforce causality in the measurement sequence, as mentioned earlier. Recall that any assemblage satisfying these constraints can be implemented by Alice and Bob sharing a quantum state and by Bob making appropriate measurements [
18].
To obtain the most randomness, for the final measurement round, the measurement operators will become projective, i.e.,
and the state at round
should be a pure entangled state. In this case, it is possible to define the steering inequality explicitly, as done in [
20]:
where
is chosen sufficiently large. A choice of
was used for all numerical results in this paper. Clearly, this choice of steering inequality automatically gives a violation value of
.
5. Ideal Case
In this section, we present numerical results illustrating the performance of the TQSM scheme assuming ideal functionality of the devices. As a convention, it will be assumed that Bob always measures in the noisy X basis in the first round, with the final measurement round in the protocol being projective, in one basis or the other depending on whether n is odd or even. We also allow for the possibility that both of Bob’s measurement choices in each round of the sequence are non-projective.
For completeness, the min-entropy for one round of measurement is plotted as a function of the measurement angle used in the first round, with rotated X measurements for a range of values of
, as seen in
Figure 2. All measurements are applied to the following initial pure state:
The min-entropy was computed for values of:
. The solution of this SDP clearly reproduces the already known results for a single measurement round, as is done in [
11,
21], but using our SDP, which is slightly different from the one derived in those works. As expected, when
, no randomness can be certified, as the state becomes a product state. At the opposite end of the spectrum, for
, the maximal amount of randomness can be certified, since this state is maximally entangled between Alice and Bob.
Figure 3a,b shows the results after two measurement rounds. In
Figure 3a, the measurement in round one was taken to be in the noisy
X basis, with a range of initial angles
, and the measurement in round two was taken to be in the usual computational basis,
.
Figure 3b illustrates the difference in choosing different measurement choices for the second round, i.e., between
, or
, with maximal randomness certified after sequential measurements in alternating bases,
We cut the graphs at the extremes of the measurement angles in order to avoid the discontinuity that occurs as soon as the first-round measurement undergoes the transition from projective to non-projective.
An interesting feature of the protocol can be seen in
Figure 3a, for the case of
. It turns out that in this case a maximal amount of randomness can be certified, for all initial measurement angles,
. This behaviour illustrates the fundamental difference between the steering and fully device-independent scenarios, and the more robust nature of quantum steering. In the latter, one observes that the amount of certifiable randomness decreases monotonically, corresponding to the first-round measurement becoming non-projective. We leave a further analysis of this phenomenon to future work.
Finally,
Figure 4 illustrates numerical results for the protocol with three measurement rounds. The protocol proceeds in exactly the same manner as for one and two rounds. In particular, in the first round, Bob can choose between a non-projective measurement in the noisy
basis, or if the particular run of the protocol is a test, he will measure in the projective
basis. In the second round, he will choose to measure in the noisy
basis, or the
basis for a test run. In the final round, he will choose to measure in the projective (
)
basis, or the projective (
)
basis for a test. Again,
Figure 4b reiterates the optimality of using an alternating sequence of non-projective measurements, with the most randomness produced with the setting
in this example.
Figure 4c shows the results for various second-round measurement angles; the amount of randomness that can be certified increases with the measurement angle,
.
These results suggest that the amount of randomness that can be certified using the numerics is quite robust, which could make the scheme amenable to experiment. In the next section, we adapt these numerical techniques to examine the experimental feasibility of this randomness certification scheme.
6. Towards Experimental Implementations
6.1. Networked Ion Trap Implementation
The framework in which we have designed this protocol, assuming a malicious adversary, Eve, is general enough to include the scenario in which she is not intentionally trying to interfere with our randomness generation; instead, we can imagine that Eve simply made some error in building the devices. This would correspond to introducing some noise, for example, in our state preparation and/or measurement apparatus. This noise assumption is clearly a subcase of the malicious adversary scenario, and it allows us to use our protocol to evaluate the usefulness of some currently available technologies for randomness generation, in some simple cases. In particular, we will restrict to assuming that we only have noise in our state preparation, but that all other parts of the device work perfectly. To do so, we test the state introduced in [
22], which can be produced between two parties in a networked architecture of ion traps:
where
,
,
, and
are the standard 2-qubit Bell states. The state, (
22), is a mixed state assuming uniform depolarising noise. In [
22], this state is assumed to be one produced by two ion traps entangled by a photonic link. The simple noise model is chosen to allow use of a technique to purify the state. In particular, after three rounds of this purification protocol, the resulting states are given by:
where
is the state produced after
i rounds of the purification protocol.
Currently, raw entanglement between two ion traps, connected by a photonic link, has been achieved with a fidelity of about
[
23]. Starting with this level of raw infidelity, the purification protocol produces states of infidelity
, and
after one, two, and three rounds respectively. The fidelity is given by (
27) [
24], and taken to be between the actual state
, and the pure Bell state,
:
Given the levels of entanglement present in the states above, we test the advantage of using a sequence of measurements vs. a single measurement on a noisy entangled state.
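As a small illustration of this fidelity measure (a sketch with our own parameter values, not the exact states (23)–(26)), consider a Bell state under uniform depolarising noise; the overlap with the target Bell state can be evaluated directly:

```python
# For a uniformly depolarised Bell state rho(p) = p |Phi+><Phi+| + (1-p) I/4,
# the fidelity with the pure target |Phi+> is <Phi+| rho |Phi+> = p + (1-p)/4,
# so the infidelity is 3(1-p)/4.  Illustrative values of p only.
import numpy as np

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def depolarised_bell(p):
    """Bell state mixed with the maximally mixed state, weight (1-p)."""
    return p * np.outer(phi_plus, phi_plus) + (1 - p) * np.eye(4) / 4

def fidelity_with_pure(rho, psi):
    """For a pure target |psi>, the fidelity is <psi| rho |psi>."""
    return float(np.real(psi.conj() @ rho @ psi))

for p in (0.9, 0.99, 0.999):
    infid = 1 - fidelity_with_pure(depolarised_bell(p), phi_plus)
    print(p, infid)   # infidelity shrinks as 3(1-p)/4
```

This is the sense in which purification rounds reduce the infidelity: each round pushes the depolarising weight, and hence 3(1-p)/4, down.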
Figure 5a shows the result after a single X measurement on the states (choosing
) (
23)–(
26). Clearly, maximal randomness can be certified in the case where the measurement is projective, as expected. It can also be seen that by using the raw entangled state, (
23), very little randomness can be certified, with a maximum of approximately 0.15 bits.
Figure 5b illustrates the results after two rounds of measurements, where the second round measurements are projective,
. The case of
gives the same result as the single measurement scenario, since in this case the first measurement is projective and hence no randomness can be certified in the second round.
Unfortunately, it can be seen that no extra randomness can be certified in two measurement rounds on the raw entangled state, (
22). However, after two or more rounds of the purification protocol, indeed more randomness can be certified by using a sequence vs. a single measurement, as indicated by the peaks in
Figure 5b. The infidelity for which the sequence becomes more useful than a single measurement can be seen to be approximately in the interval
.
Finally,
Figure 6a shows the results after three rounds of measurements, where the third, and final round of measurements are projective with
. The second round of measurements is chosen in this case to be a noisy
Z measurement, with
.
Unfortunately, it can be seen that no extra randomness can be certified by implementing three measurements rather than two rounds. This is even the case for the purified states, (
24)–(
26), so even these levels of purity are not sufficient to extract more randomness from a single state with three rounds of measurements. The perfect pure state, with
is also plotted for comparison.
Clearly, one would expect there to exist some level at which the state becomes pure enough to be useful, so
Figure 6b shows the results of the protocol for very small infidelities, specifically:
It can be seen that for an infidelity approximately in the interval , the state is pure enough to certify more randomness with three rounds of measurement than with two. This corresponds to being able to create pure entangled states experimentally with fidelities greater than . This level could be reached by repeating the purification protocol more times, but doing so clearly decreases the efficiency of the protocol, as many extra qubits would need to be introduced to implement the purification. Furthermore, for four and more rounds of measurement, states with an even higher level of purity would be required to make the protocol worthwhile, i.e., so that a sequence of measurements on a single state would give better results than single measurements on fresh states each time.
6.2. Atom–Photon Implementation
We also examined a potential state arising from an atom–photon (AP) interaction. This case is even more applicable to the above 1sDI scenario as discussed in
Section 1. In light of this, it makes sense to consider a situation where an entangled state is produced by some process between an atom and a coherent photon state. As an example, we investigate the state produced in [
25], which is the simplest for our purposes since it only involves single photon and vacuum states. However, an alternative method, using coherent photon states, such as the approach of [
26,
27] could be studied. These scenarios are particularly relevant, as the authors aim to perform a Bell test and observe a violation of a Bell inequality.
It is possible to examine two cases in this scenario, since the setup is asymmetric: we can consider noise introduced by imperfections either on the atom side or on the photon side.
The ideal case considered in [
25] is given by (keeping our notation):
where
are two atomic states (held by Bob) and
are the photonic vacuum and single-photon states, respectively, held by Alice.
For simplicity, we will consider two of the cases presented in [
25] as sources of imperfections. The first error is introduced in the transmission efficiency, and we also consider the possibility that the photon was lost during the transmission. The transmission inefficiency is given by
, and if the photon is lost, we get an extra contribution to the overall state corresponding to
, with a weight of
, such that the final state is given by:
where:
Since both sets of atomic,
, and photonic,
, states are orthogonal to each other, we can make the translation to ‘logical’ basis states:
the state is given in the computational basis by:
where we have defined
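To make the role of the transmission efficiency concrete, the following sketch builds a toy version of such a lossy atom–photon state in the logical basis. The parametrisation (an entangling angle theta and efficiency eta) and the exact form of the loss term are our simplifying assumptions, not the precise state (29) of [25], which includes further imperfections.

```python
# Toy photon-loss model for an atom-photon entangled state (logical basis).
# Assumed model: the single-photon amplitude is damped by sqrt(eta), and with
# weight (1 - eta) sin^2(theta) the photon is replaced by the vacuum.
import numpy as np

ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
proj = lambda v: np.outer(v, v.conj())

def atom_photon_state(theta, eta):
    """Atom (Bob) tensor photon (Alice); eta is the transmission efficiency."""
    # Surviving branch: photon transmitted, amplitude damped by sqrt(eta)
    psi = (np.cos(theta) * np.kron(ket[0], ket[0])
           + np.sqrt(eta) * np.sin(theta) * np.kron(ket[1], ket[1]))
    # Lost-photon branch: the photonic mode collapses to the logical vacuum
    loss = np.kron(proj(ket[1]), proj(ket[0]))
    return proj(psi) + (1 - eta) * np.sin(theta) ** 2 * loss

rho = atom_photon_state(np.pi / 4, 0.8)
print(np.real(np.trace(rho)))   # ~1: the two branches together are normalised
```

As eta approaches 1 the loss branch vanishes and the state approaches the pure entangled state, which is the regime in which sequences of measurements become worthwhile in the results below.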
Figure 7 illustrates the results after one, two, and three measurement rounds for an atom–photon state (
29) with
. Values of the transmission efficiency,
were chosen for interest to correspond with those described in [
26]. In that paper, the authors examine Bell inequality violations where Bob (Alice in our case) has access to the photonic system, and can make either homodyne measurements or photon counting to determine his Bell statistics. This intuitively corresponds in our case to his choice of measurement basis. Alternatively, Bob might not use photon counting, but instead choose between homodyne measurements in two different quadratures. Values of
are the required levels of efficiency to produce a Bell violation if the measurements on the photonic side are either homodyne and photon counting, or both homodyne, respectively. It should be noted that in our case we would not need to distinguish between these two cases, as we do not need to reproduce statistics for binary outcomes on Alice’s side: she is fully trusted and needs only to do state tomography on the photonic mode. As such, the measurement scheme that allows her to do tomography most easily is the one that should be chosen in an actual implementation of our protocol. Also, a value of
was plotted as this is the level that would be required to close the locality loophole in the Bell violation, as stated in [
26].
From
Figure 7a, it can be seen that for a value of
, more randomness can be certified with a single measurement with the AP state, than with the one produced in two ion traps, with a fidelity of
, as in the latter case, only about 0.15 random bits could be certified, but in the former over 0.2 random bits can be certified.
In this implementation, we once again see the same general trends as with the ion trap apparatus. At some level of transmission efficiency, illustrated by
in
Figure 7b,c respectively, the state becomes pure enough for a sequence to become worthwhile. In particular, for
, two measurements on the state generate more certifiable randomness than is possible with one, and for
, we can get more than two certifiable random bits.
As a result, we can see that this particular atom–photon model has more promise for randomness certification than the ion trap model, although it may be unfair to directly compare transmission efficiency in the former case to state fidelity in the latter. However, while the state in [
22] only takes into account a very simple depolarising noise model, which may be unrealistic in practice, the atom–photon state, (
29), of [
26] takes into account all coupling errors in the state preparation between Alice and Bob. Another interesting property to investigate would be the detection efficiency of the photons and how it affects the protocol.
6.3. Nitrogen-Vacancy Center Implementation
Next, we consider an entangled state produced between Alice and Bob using qubits based on electronic spins of nitrogen-vacancy defect centers in diamond. In particular, we examine the state used in the first loophole-free Bell test, [
2,
28]. This state is again relevant due to its use in the Bell test, and as mentioned in [
2], the setup could readily be used for randomness certification, albeit in a fully device-independent scenario. The shared state between Alice and Bob in this experiment is given by the following density matrix:
where,
, and
V is the visibility that describes the indistinguishability of the photons used to create entanglement. The residual errors,
, are due to the spin-photon coupling, as described in [
2]. In this case, the ideal state is not the particular Bell state we have assumed above,
; instead, it is another Bell state,
. The best estimate for the visibility is given to be
, and the residual errors are found to be
. For these values, the fidelity of the state used in their Bell test is reported to be
, and
.
Figure 8 shows the results of the protocol when the electronic spin state, (
33), is used. In the experiment described in [
2], a very pure state was required to implement a reliable Bell test, and due to this, the state is substantially better for randomness certification than those available in the ion trap or atom–photon implementations, with it being possible to certify
random bits using electronic spins with a single measurement. Also, in both
Figure 8b,c, the effect of the residual errors can be seen to have a large consequence when it comes to randomness certification, and ultimately the state purity. For example, in
Figure 8b with a perfect visibility of
and using a value of
a maximum of 1.5 bits can be certified with two measurements, which is substantially less than the maximal amount of 2 bits that can be certified with a perfect pure state. A similar feature can be seen in
Figure 8c for three measurement rounds. It would also be interesting to study the effect of the,
, derived from the statistical uncertainties on
, on the amount of randomness producible by the state.
was assumed in our numerical results for clarity.
The sensitivity of the randomness certification to errors is especially apparent in
Figure 8c. For a reduction in visibility,
V, by only
, the amount of random bits drops by almost a full unit. Similarly, a reduction in
by
leads to a loss of
a random bit, and even with this small drop, the situation changes from one in which a sequence of three measurements can do better than is ever possible with two, to a scenario in which two measurement rounds produce a very similar amount of certifiable randomness, and the third measurement is almost unnecessary.
6.4. Implementation on Rigetti Forest Platform
As a final example, we implement the protocol using Rigetti’s Forest Platform, [
29]. This is done in a proof-of-principle way using the following circuit:
where we have defined the following two-qubit unitary gates, which effectively implement the non-projective measurements in the X and Z bases, denoted as
, respectively.
The index on the ancilla represents the measurement round in which it is used. The input string, y, for n measurement rounds is used as classical input to the circuit and, conditioned on this input for each round, either the noisy X or the noisy Z measurement is implemented. As mentioned above, it is the topmost ancilla that is used as a control qubit for each gate in the circuit. At the end of each round of the protocol, a single ancilla can be measured in the usual computational basis, where represents the measurement done in round k. Clearly, if the input , the noisy Z measurement is implemented, , while if , the noisy X measurement is implemented, , and the other is not. In this fashion, only one quantum gate acts on the state per measurement round. Also, the state is only Bob’s initial reduced state.
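The measurement-dilation idea behind these gates can be sketched as follows. The Kraus-operator parametrisation is our own illustration of a noisy X measurement, not Rigetti code: an angle phi interpolates between a projective X measurement (phi = 0) and no measurement at all (phi = pi/4).

```python
# Kraus operators for a noisy X measurement, realised on hardware by a
# two-qubit unitary acting as U |psi>|0> = M0|psi>|0> + M1|psi>|1>, followed
# by a computational-basis measurement of the ancilla.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
P_plus, P_minus = np.outer(plus, plus), np.outer(minus, minus)

def noisy_x_kraus(phi):
    """Assumed parametrisation: phi = 0 projective, phi = pi/4 trivial."""
    M0 = np.cos(phi) * P_plus + np.sin(phi) * P_minus
    M1 = np.sin(phi) * P_plus + np.cos(phi) * P_minus
    return M0, M1

M0, M1 = noisy_x_kraus(np.pi / 8)
# Completeness: the two outcomes form a valid measurement
print(np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2)))  # True
```

After outcome a, the (renormalised) state M_a|psi> is Bob's post-measurement state, which is exactly what enters the next round of the sequence.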
The circuit could be further improved, since it is possible to use only a single ancilla. This ancilla would undergo multiple measurements, with the addition of a series of CNOT gates on the ancilla wire in order to reset the ancilla after each measurement; these CNOT gates would return the ancilla to the usual state conditioned on the previous measurement outcome. It is actually essential that the measurements occur in a sequential manner, i.e., it is Bob’s post-measurement state that is rotated in the next round of the protocol. The measurements therefore cannot be deferred to the end of the circuit, since if this were done there would be a cheating strategy for Eve; causality is essential for the security of the protocol. However, the quantum hardware prohibits intermediate measurements in a quantum circuit, so it is necessary to defer all measurements to the end of the circuit. While this would not be sufficient for security against a malicious adversary, it is useful as a proof of principle, assuming any deviation arises from noise errors alone.
To implement the protocol, we proceed as described above and perform tomography on Alice’s qubit,
after the sequence of measurements on Bob’s qubit,
(deferred onto the ancillary qubits). We proceed using the simulator of the
Aspen quantum processing unit (QPU) with the sublattice
Aspen-4-3Q-A. With this scheme, we require an (n + 2)-qubit chip to implement
n sequential measurements. We perform direct inversion tomography [
30] by measuring the expectation values of the Pauli Observables,
to reconstruct the state:
Direct inversion tomography is the simplest method of state tomography, and it compensates for the fact that, due to measurement errors, the naively estimated state may lie outside the Bloch sphere (i.e., its Bloch vector has a norm greater than 1). If this is the case, the vector,
is simply rescaled by its norm in the following way:
where
is the
norm. The original vector is estimated by approximating the expectation values,
. This is achieved by counting the number of times the positive eigenvalue is observed, minus the number of times the negative eigenvalue is observed and normalising the answer, for each operator.
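A minimal sketch of this direct inversion procedure (our own illustration) is:

```python
# Direct inversion tomography for a single qubit: build the Bloch vector from
# Pauli expectation values and rescale it if noise pushes it outside the
# Bloch sphere.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def expval(n_plus, n_minus):
    """Estimate <P> from counts of the +1 and -1 eigenvalues."""
    return (n_plus - n_minus) / (n_plus + n_minus)

def direct_inversion(ex, ey, ez):
    r = np.array([ex, ey, ez])
    norm = np.linalg.norm(r)
    if norm > 1:          # unphysical estimate: rescale back onto the sphere
        r = r / norm
    return (I2 + r[0] * X + r[1] * Y + r[2] * Z) / 2

# Exact expectations for |+> recover the state itself ...
rho_plus = direct_inversion(1.0, 0.0, 0.0)
# ... while a noisy estimate with |r| > 1 is projected onto the Bloch sphere
rho_noisy = direct_inversion(0.9, 0.0, 0.5)
print(np.real(np.trace(rho_noisy @ rho_noisy)))   # purity ~1 after rescaling
```

The rescaling step is exactly the normalisation described above; it guarantees the reconstructed matrix is a valid qubit state.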
However, when implementing the protocol on the simulator, Alice can use her foreknowledge that Bob makes measurements only in the noisy bases. In this case, the steered states, , would have no Y contribution, so Alice would only be required to estimate . However, if the protocol were run on the physical hardware, it would be necessary to include measurements of the Y observable as well. To generate the full assemblage, this must be done for each of Bob’s measurement choices and outcomes, . The full protocol requires the assemblage after each round, , but it is sufficient to compute these from the final-round assemblage elements. This is due to the causality relationship .
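This causality relationship can be checked directly in a small simulation: summing a two-round assemblage over the second outcome recovers the one-round assemblage. The Bell state and the Kraus operators below are our illustrative choices, not the paper's data.

```python
# Coarse-graining a sequential assemblage: sigma_{a1} = sum_{a2} sigma_{a1 a2}.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # A x B
rho = np.outer(phi_plus, phi_plus)

def ptrace_B(rho_ab):
    """Partial trace over Bob (second factor) via einsum."""
    return np.einsum('ibjb->ij', rho_ab.reshape(2, 2, 2, 2))

# Round 1: noisy X measurement on Bob's side (assumed Kraus operators)
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
Pp, Pm = np.outer(plus, plus), np.outer(minus, minus)
phi = np.pi / 8
K = [np.cos(phi) * Pp + np.sin(phi) * Pm,
     np.sin(phi) * Pp + np.cos(phi) * Pm]
# Round 2: projective Z measurement on Bob's side
N = [np.outer(ket0, ket0), np.outer(ket1, ket1)]

sigma_a1 = [ptrace_B(np.kron(np.eye(2), K[a]) @ rho
                     @ np.kron(np.eye(2), K[a]).T)
            for a in range(2)]
sigma_a1a2 = [[ptrace_B(np.kron(np.eye(2), N[b] @ K[a]) @ rho
                        @ np.kron(np.eye(2), N[b] @ K[a]).T)
               for b in range(2)] for a in range(2)]

# Coarse-graining over the second outcome recovers the round-1 assemblage
print(all(np.allclose(sigma_a1a2[a][0] + sigma_a1a2[a][1], sigma_a1[a])
          for a in range(2)))  # True
```

This is why storing only the final-round assemblage elements suffices: every earlier round's assemblage is a marginal of them.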
Figure 9 illustrates the protocol using Rigetti’s Simulator of sublattices of the QPU containing three, four, and five qubits to implement one, two, and three measurement rounds in
Figure 9a,
Figure 9b, and
Figure 9c, respectively. For a single measurement, the results are encouraging, but this is because 100,000 measurements allow a good characterisation of the four assemblage elements received by Alice. It is apparent that the exponential scaling quickly overtakes the number of measurements, such that three measurement rounds do not increase the randomness certified over two; in fact, the third round reduces it. Unfortunately, we were not able to get sensible results when running the protocol on the QPU versions corresponding to the simulators in
Figure 9, even for a single measurement, due to noise. Potentially, this could be mitigated by using more sophisticated tomography techniques.
7. Discussion
We presented a novel protocol to certify an unbounded amount of random numbers from sequential measurements on one half of a quantum state shared by two parties, building on the work of [
7,
11,
21]. The ‘certificate’ in this case can be a set of statistical criteria or the states into which the other party is ‘steered’ as a result of the sequence of well-chosen measurements. We studied the behaviour of the scheme both in the ideal setting and in experimentally realistic settings, [
22], including those which have actually been implemented [
2,
25]. We also demonstrated the feasibility of the scheme in certifying multiple random bits from a single quantum state, rather than from multiple states each producing only one bit. This distinction is important given the valuable nature of controlled quantum systems, and hence represents an important step in resource reduction.
Our scheme could be readily turned into a protocol for randomness expansion, especially now that we have improved upon the work of [
7] in reducing the number of measurements required. We leave this to future work.
Interesting future work would be to investigate the reason behind the apparent anomaly in the steering scenario with two sequential measurements on a maximally entangled state, as discussed in
Section 5. Given our focus in this work on studying the behaviour of the protocol in realistic experimental implementations, it would be insightful to actually implement the protocol in a physical system, similar to those carried out in Bell testing, [
2].
8. Materials and Methods
All numerical results in this work were obtained using the Matlab convex optimisation package,
cvx, [
31] and a package for managing quantum states,
qetlab, [
32]. The code required to produce all figures in this work is available at [
33].