Article

Classical Modeling of a Lossy Gaussian Bosonic Sampler

by Mikhail V. Umanskii 1 and Alexey N. Rubtsov 1,2,*
1 Department of Physics, Lomonosov Moscow State University, Leninskie Gory 1, 119991 Moscow, Russia
2 Russian Quantum Center, Bolshoy Bulvar 30, bld. 1, Skolkovo, 121205 Moscow, Russia
* Author to whom correspondence should be addressed.
Entropy 2024, 26(6), 493; https://doi.org/10.3390/e26060493
Submission received: 22 April 2024 / Revised: 23 May 2024 / Accepted: 1 June 2024 / Published: 5 June 2024
(This article belongs to the Topic Quantum Information and Quantum Computing, 2nd Volume)

Abstract:
Gaussian boson sampling (GBS) is considered a candidate problem for demonstrating quantum advantage. We propose an algorithm for the approximate classical simulation of a lossy GBS instance. The algorithm relies on a Taylor series expansion, and increasing the number of expansion terms used in the calculation yields greater accuracy. The complexity of the algorithm is polynomial in the number of modes, provided that the number of terms is fixed. We describe the conditions on the input state squeezing parameter and loss level that provide the best efficiency for this algorithm (by efficient, we mean that the Taylor series converges quickly). In recent experiments that claim to have demonstrated quantum advantage, these conditions are satisfied; thus, this algorithm can be used to classically simulate those experiments.

1. Introduction

Quantum computers are computational devices that operate using phenomena described by quantum mechanics. Therefore, they can carry out operations that are not available to classical computers. The ability of a quantum computer to solve a specific task faster than any classical computer is usually referred to as quantum advantage. Although quantum algorithms that provide exponential speedup over classical ones are known, they are hard to implement in practice. One example is Shor’s algorithm for factoring integers [1], which runs in polynomial time, whereas all known classical algorithms require super-polynomial time. Modern quantum computers are far from experimentally demonstrating quantum advantage on basic problems like integer factorization.
Boson sampling [2] is a problem that was proposed as a good candidate for demonstrating quantum advantage, since it is believed to be classically hard while admitting a comparatively simple photonic implementation. A boson sampler is a linear-optical device that consists of non-classical sources of indistinguishable photons, a multichannel interferometer mixing photons of different sources, and photon detectors at the output channels of the interferometer. In the original proposal, the indistinguishable photons were prepared in Fock states. The problem then is to calculate the photon statistics after the interferometer given an input state and the interferometer matrix. The relevant parameters are the number of modes N and the total number of photons M injected into the interferometer. Experimentally, it corresponds to performing multiple measurements of the photon counts at the outputs of such a device [3].
Due to the technological complexity of generating Fock states, several variants of the original boson sampling problem have been proposed. They aim at improving the photon generation efficiency and increasing the scale of implementations. One such example is the scattershot boson sampling, which uses many parametric down-conversion sources to improve the single photon generation rate. It has been implemented experimentally using a 13-mode integrated photonic chip and six PDC photon sources [4].
Another variant is Gaussian boson sampling [5,6], in which Gaussian states are injected into the interferometer instead of Fock states. Gaussian input states can be generated using PDC sources, which allows the non-classical input states to be prepared deterministically. In this variant, the relative input photon phases can affect the sampling distribution. Experiments were carried out with N = 12 [7], N = 100 [8] and N = 144 [9,10], with up to 255 photons registered in one event. The latter implementations used PPKTP crystals as PDC sources and employed an active phase-locking mechanism to ensure a coherent superposition.
Any experimental setup, of course, differs from the idealized model considered in theoretical work. Bosonic samplers suffer from two fundamental types of imperfection. First, the parameters of a real device, such as the reflection coefficients of the beam splitters and the phase rotations, are never known exactly. A small change in the interferometer parameters can affect the sampling statistics drastically, so that modeling of the ideal device no longer makes much sense. The second type of imperfection is photon loss. Losses happen because of imperfections in photon preparation, absorption inside the interferometer, and imperfect detectors and coupling.
There are different ways of modeling losses: for example, by introducing extra beam splitters [11] or replacing the interferometer matrix by a combination of lossless linear optics transformations and the diagonal matrix that contains transmission coefficients [12]. In the algorithm described in this paper, we will assume that losses occur on the inputs of the interferometer, and we will describe the exact way that we model them.
Imperfections in middle-sized systems make them, in general, easier to emulate with classical computers [13]. It was shown [14] that as losses in a system increase, the complexity of the task decreases. When the number of photons that arrive at the outputs is less than $\sqrt{M}$, the problem of boson sampling can be efficiently solved using classical computers. On the other hand, if the losses are low, the problem remains hard for classical computers [15].
In this paper, we propose a classical algorithm for calculating the probabilities of output states in a GBS problem. The algorithm uses a Taylor series expansion, and its convergence rate depends on the parameters of the problem: namely, the level of losses in the system and the squeezing parameter of the input states. The higher the losses in the system, the fewer orders of the series are needed to approximate the probability of observing a given output state.
The work by Oh et al. [16] used the following approach to simulating GBS: the covariance matrix of the output Gaussian state was decomposed into “quantum” and “classical” parts, in which the “quantum” part was simulated using matrix product states and the “classical” part was simulated by random displacement. Thus, when the photon loss rate is high, the computational complexity of this algorithm is reduced.
The algorithm that we propose in this paper uses some similar ideas: namely, the zeroth order of the Taylor series may be considered the “classical” part that is computed quite easily, while the remaining terms are the “quantum” part that is more computationally complex. The contribution of this “quantum” part is smaller when the losses in the system are high; thus, our algorithm also has optimal conditions that depend on the magnitude of losses. We also analyze some recent GBS implementations to compare the conditions in those experiments with the optimal conditions for our algorithm.

2. Problem Specification

Let us first consider a lossless linear-optics interferometer with a transmission matrix U:
$$\hat a_i^\dagger = \sum_j U_{ij}\,\hat d_j^\dagger, \qquad \hat a_i = \sum_j U_{ij}^*\,\hat d_j,$$
where the creation operators acting on the i-th input and output modes are denoted $\hat a_i^\dagger$ and $\hat d_i^\dagger$, respectively. Suppose the input modes are injected with single-mode squeezed states:
$$|\psi\rangle = e^{\sum_i \frac{\alpha}{2}\left(\hat a_i^\dagger\right)^2}\,|0\rangle,$$
where we omit the state’s normalizing constant $\left(1-|\alpha|^2\right)^{N/4}$.
The goal is to calculate the probability of detecting n 1 photons in the first output mode, n 2 photons in the second output mode and so on. This probability can be calculated in the following way:
$$\mathrm{Tr}\left[\hat\rho_{\mathrm{out}}\,\hat n\right] = \mathrm{Tr}\left[\hat\rho_{\mathrm{out}}\prod_i |n_i\rangle\langle n_i|\right],$$
where ρ ^ o u t is the density matrix of the output state.

Modeling Losses

In real-life bosonic samplers, there will always be losses. Here, we will model them by substituting
$$\hat a_i^\dagger \;\to\; c\,\hat a_i^\dagger + s\,\hat b_i^\dagger,$$
where $\hat b_i^\dagger$ acts on a mode that we cannot observe, and $c^2+s^2=1$, $c,s\in\mathbb{R}$. Now, the goal is to compute the same probability $\mathrm{Tr}\left[\hat\rho_{\mathrm{out}}\hat n\right]$, but taking losses into account. The input state will now be
$$|\psi\rangle = e^{\sum_i \frac{\alpha}{2}\left(c\hat a_i^\dagger + s\hat b_i^\dagger\right)^2}\,|0_a 0_b\rangle$$
and we now take partial trace over all loss modes when calculating the density matrix:
$$\hat\rho = \mathrm{Tr}_b\left[e^{\sum_i \frac{\alpha}{2}\left(c\hat a_i^\dagger + s\hat b_i^\dagger\right)^2}\,|0_a 0_b\rangle\langle 0_a 0_b|\,e^{\sum_i \frac{\alpha}{2}\left(c\hat a_i + s\hat b_i\right)^2}\right].$$

3. Algorithm Derivation

Let us consider a single mode:
$$|\psi\rangle = e^{\frac{\alpha}{2}\left(c\hat a^\dagger + s\hat b^\dagger\right)^2}\,|0_a 0_b\rangle,$$
$$\hat\rho = \mathrm{Tr}_b\left[e^{\frac{\alpha}{2}\left(c\hat a^\dagger + s\hat b^\dagger\right)^2}\,|0_a 0_b\rangle\langle 0_a 0_b|\,e^{\frac{\alpha}{2}\left(c\hat a + s\hat b\right)^2}\right].$$

3.1. Calculating Partial Trace

We start by applying the Hubbard–Stratonovich transformation [17,18]
$$e^{\frac{\hat A^2}{2}} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{\xi\hat A - \frac{\xi^2}{2}}\,d\xi$$
to both exponents in the density matrix operator. This gives us the following:
$$\hat\rho = \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} \mathrm{Tr}_b\!\left[e^{\xi\sqrt{\alpha}\,(c\hat a^\dagger+s\hat b^\dagger)}|0_a0_b\rangle\langle0_a0_b|\,e^{\tilde\xi\sqrt{\alpha}\,(c\hat a+s\hat b)}\right]e^{-\frac{\xi^2+\tilde\xi^2}{2}}\,d\xi\,d\tilde\xi = \frac{1}{2\pi\alpha}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} \mathrm{Tr}_b\!\left[e^{\xi\sqrt{\alpha}\,(c\hat a^\dagger+s\hat b^\dagger)}|0_a0_b\rangle\langle0_a0_b|\,e^{\tilde\xi\sqrt{\alpha}\,(c\hat a+s\hat b)}\right]e^{-\frac{(\xi\sqrt{\alpha})^2+(\tilde\xi\sqrt{\alpha})^2}{2\alpha}}\,d(\xi\sqrt{\alpha})\,d(\tilde\xi\sqrt{\alpha}).$$
Let us redefine $\xi\sqrt{\alpha}\to\xi$, $\tilde\xi\sqrt{\alpha}\to\tilde\xi$ for convenience:
$$\hat\rho = \frac{1}{2\pi\alpha}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} \mathrm{Tr}_b\!\left[e^{\xi(c\hat a^\dagger+s\hat b^\dagger)}|0_a0_b\rangle\langle0_a0_b|\,e^{\tilde\xi(c\hat a+s\hat b)}\right]e^{-\frac{\xi^2+\tilde\xi^2}{2\alpha}}\,d\xi\,d\tilde\xi.$$
We can now calculate the partial trace over loss modes:
$$\mathrm{Tr}_b\!\left[e^{\xi(c\hat a^\dagger+s\hat b^\dagger)}|0_a0_b\rangle\langle0_a0_b|\,e^{\tilde\xi(c\hat a+s\hat b)}\right] = e^{\xi c\hat a^\dagger}|0_a\rangle\langle0_a|e^{\tilde\xi c\hat a}\cdot \mathrm{Tr}\!\left[e^{\xi s\hat b^\dagger}|0_b\rangle\langle0_b|e^{\tilde\xi s\hat b}\right] = e^{\xi c\hat a^\dagger}|0_a\rangle\langle0_a|e^{\tilde\xi c\hat a}\cdot\langle0_b|e^{\tilde\xi s\hat b}e^{\xi s\hat b^\dagger}|0_b\rangle.$$
The following expression can be simplified:
$$\langle0_b|e^{\tilde\xi s\hat b}e^{\xi s\hat b^\dagger}|0_b\rangle = \langle0_b|\left(1+\tilde\xi s\hat b+\tfrac{1}{2}(\tilde\xi s\hat b)^2+\dots\right)\left(1+\xi s\hat b^\dagger+\tfrac{1}{2}(\xi s\hat b^\dagger)^2+\dots\right)|0_b\rangle = 1+\xi\tilde\xi s^2+\tfrac{1}{2}\left(\xi\tilde\xi s^2\right)^2+\dots = e^{\xi\tilde\xi s^2}.$$
The density matrix now can be written in the following way:
$$\hat\rho = \frac{1}{2\pi\alpha}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} e^{\xi c\hat a^\dagger}|0\rangle\langle0|e^{\tilde\xi c\hat a}\cdot e^{-\frac{\xi^2+\tilde\xi^2}{2\alpha}+\xi\tilde\xi s^2}\,d\xi\,d\tilde\xi.$$

3.2. Switching between Probability Density Functions

We can view this integral as taking an expected value over a two-dimensional normal distribution. $\xi$ and $\tilde\xi$ then become normally distributed random variables with a mean vector equal to zero. Their covariance matrix has the following form:
$$\Sigma = \begin{pmatrix}1/\alpha & -s^2\\ -s^2 & 1/\alpha\end{pmatrix}^{-1} = \frac{1}{1/\alpha^2 - s^4}\begin{pmatrix}1/\alpha & s^2\\ s^2 & 1/\alpha\end{pmatrix}.$$
Then, we can write
$$\hat\rho = \frac{(\det\Sigma)^{1/2}}{\alpha}\cdot\frac{1}{2\pi(\det\Sigma)^{1/2}}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} e^{\xi c\hat a^\dagger}|0\rangle\langle0|e^{\tilde\xi c\hat a}\,e^{-\frac{\xi^2+\tilde\xi^2}{2\alpha}+\xi\tilde\xi s^2}\,d\xi\,d\tilde\xi = \frac{(\det\Sigma)^{1/2}}{\alpha}\cdot\mathbb{E}_{\mathcal{N}(0,\Sigma)}\left[e^{\xi c\hat a^\dagger}|0\rangle\langle0|e^{\tilde\xi c\hat a}\right],$$
where $\mathbb{E}_{\mathcal{N}(0,\Sigma)}$ denotes averaging over the two-dimensional normal distribution $\mathcal{N}(0,\Sigma)$.
The expression $e^{\xi c\hat a^\dagger}|0\rangle\langle0|e^{\tilde\xi c\hat a}$ is troublesome to calculate, since it contains two different variables $\xi$ and $\tilde\xi$. We want to arrive somehow at an expression with only one such variable, i.e., $e^{\xi c\hat a^\dagger}|0\rangle\langle0|e^{\xi c\hat a}$, which we will denote $\hat\nu(\xi c)$.
We now choose normally distributed random variables $\xi_0,\chi,\tilde\chi\in\mathbb{R}$ such that $\xi=\xi_0+\chi$ and $\tilde\xi=\xi_0+\tilde\chi$, and the distributions of $(\xi,\tilde\xi)$ and $(\xi_0+\chi,\xi_0+\tilde\chi)$ have the same moments:
$$\overline{\xi^2} = \overline{(\xi_0+\chi)^2} = \overline{\xi_0^2}+2\,\overline{\xi_0\chi}+\overline{\chi^2},\qquad \overline{\tilde\xi^2} = \overline{(\xi_0+\tilde\chi)^2} = \overline{\xi_0^2}+2\,\overline{\xi_0\tilde\chi}+\overline{\tilde\chi^2},\qquad \overline{\xi\tilde\xi} = \overline{(\xi_0+\chi)(\xi_0+\tilde\chi)} = \overline{\xi_0^2}+\overline{\xi_0\chi}+\overline{\xi_0\tilde\chi}+\overline{\chi\tilde\chi}.$$
We have some freedom in choosing these variables; we will set $\overline{\xi_0\chi}=\overline{\xi_0\tilde\chi}=0$ so that $\xi_0\perp\chi$ and $\xi_0\perp\tilde\chi$. Then, the covariance matrix $\Gamma$ of $(\xi_0,\chi,\tilde\chi)$ is determined by one parameter $h=\overline{\chi\tilde\chi}$:
$$\overline{\xi^2} = \overline{\xi_0^2}+\overline{\chi^2},\qquad \overline{\tilde\xi^2} = \overline{\xi_0^2}+\overline{\tilde\chi^2},\qquad \overline{\xi\tilde\xi} = \overline{\xi_0^2}+h,$$
$$\overline{\xi_0^2} = \overline{\xi\tilde\xi}-h = \frac{s^2}{1/\alpha^2-s^4}-h,\qquad \overline{\chi^2} = \overline{\tilde\chi^2} = \overline{\xi^2}-\overline{\xi\tilde\xi}+h = \frac{1/\alpha-s^2}{1/\alpha^2-s^4}+h = \frac{1}{1/\alpha+s^2}+h.$$
Note that $-\frac{1}{1/\alpha+s^2}\le h\le \frac{s^2}{1/\alpha^2-s^4}$. We will later find an optimal way to choose h. The density matrix in terms of the new variables $(\xi_0,\chi,\tilde\chi)\sim\mathcal{N}(0,\Gamma)$ is
$$\hat\rho = \frac{(\det\Sigma)^{1/2}}{\alpha}\cdot\mathbb{E}_{\mathcal{N}(0,\Gamma)}\left[e^{(\xi_0+\chi)c\hat a^\dagger}|0\rangle\langle0|e^{(\xi_0+\tilde\chi)c\hat a}\right].$$
Since $\xi_0\perp\chi$ and $\xi_0\perp\tilde\chi$, the distribution $\mathcal{N}(0,\Gamma)$ can be split into a product of distributions over $\xi_0$ and over $(\chi,\tilde\chi)$. The covariance matrix $\Lambda$ of $\chi$ and $\tilde\chi$ is
$$\Lambda = \begin{pmatrix}\overline{\chi^2} & \overline{\chi\tilde\chi}\\ \overline{\chi\tilde\chi} & \overline{\tilde\chi^2}\end{pmatrix} = \begin{pmatrix}\frac{1}{1/\alpha+s^2}+h & h\\ h & \frac{1}{1/\alpha+s^2}+h\end{pmatrix},$$
and the distribution $\mathcal{N}(0,\Gamma)$ factorizes as
$$\mathcal{N}(0,\Gamma) = \mathcal{N}\left(0,\overline{\xi_0^2}\right)\cdot\mathcal{N}(0,\Lambda).$$

3.3. Taylor Series Expansion

We now consider the Taylor series of the expression $e^{(\xi_0+\chi)c\hat a^\dagger}|0\rangle\langle0|e^{(\xi_0+\tilde\chi)c\hat a}$, leaving only $\xi_0$ in the exponent:
$$e^{(\xi_0+\chi)c\hat a^\dagger}|0\rangle\langle0|e^{(\xi_0+\tilde\chi)c\hat a} = e^{\chi c\hat a^\dagger}\,\hat\nu(\xi_0c)\,e^{\tilde\chi c\hat a} = \left(1+\chi c\hat a^\dagger+\frac{(\chi c\hat a^\dagger)^2}{2}+\dots\right)\hat\nu(\xi_0c)\left(1+\tilde\chi c\hat a+\frac{(\tilde\chi c\hat a)^2}{2}+\dots\right).$$
Each term in the expansion is proportional to
$$c^{n+m}\,\chi^n\tilde\chi^m\cdot(\hat a^\dagger)^n\,\hat\nu(\xi_0c)\,\hat a^m,$$
and since $\xi_0\perp\chi$ and $\xi_0\perp\tilde\chi$, the expectation over $(\xi_0,\chi,\tilde\chi)$ can be written as a product of expectations over $\xi_0$ and over $(\chi,\tilde\chi)$:
$$\mathbb{E}_{\mathcal{N}(0,\Gamma)}\left[c^{n+m}\chi^n\tilde\chi^m\cdot(\hat a^\dagger)^n\hat\nu(\xi_0c)\hat a^m\right] = c^{n+m}\cdot\mathbb{E}_{\mathcal{N}(0,\Lambda)}\left[\chi^n\tilde\chi^m\right]\cdot\mathbb{E}_{\mathcal{N}\left(0,\overline{\xi_0^2}\right)}\left[(\hat a^\dagger)^n\hat\nu(\xi_0c)\hat a^m\right].$$
The moments $\mathbb{E}_{\mathcal{N}(0,\Lambda)}\left[\chi^n\tilde\chi^m\right]$ can be calculated analytically using Wick’s probability theorem.

3.4. Choosing Γ

The idea consists of minimizing the “perturbation parameter” so that each subsequent order of the Taylor series expansion has less impact on the expression. Since higher orders of the expansion contain higher powers of $c^2$ and higher moments $\mathbb{E}_{\mathcal{N}(0,\Lambda)}\left[\chi^n\tilde\chi^m\right]$, and these moments can be calculated via the second moments $\overline{\chi^2}=\overline{\tilde\chi^2}$ and $\overline{\chi\tilde\chi}=h$, the role of the “perturbation parameter” is played by $\varepsilon = c^2\cdot\max\left(\overline{\chi^2},|\overline{\chi\tilde\chi}|\right)$.
Let us consider the conditions that must be satisfied by h. Firstly, h must satisfy $-\frac{1}{1/\alpha+s^2}\le h\le\frac{s^2}{1/\alpha^2-s^4}$, because $\overline{\xi_0^2}\ge0$ and $\overline{\chi^2}\ge0$. Secondly, since $\Gamma$ is a covariance matrix, its eigenvalues must be non-negative. The eigenvalues of $\Gamma$ are $\overline{\xi_0^2}$, $\overline{\chi^2}-h$ and $\overline{\chi^2}+h$. Thus, h needs to satisfy
$$\overline{\chi^2}+h\ge0 \;\Longrightarrow\; \frac{1}{1/\alpha+s^2}+2h\ge0 \;\Longrightarrow\; h\ge-\frac{1}{2}\cdot\frac{1}{1/\alpha+s^2}.$$
The minimum of $\max\left(\overline{\chi^2},|h|\right)$ is realized when $h = -\overline{\chi^2} = -\frac{1}{2}\cdot\frac{1}{1/\alpha+s^2}$.
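This choice is easy to sketch numerically; a minimal helper (the function name and return convention are arbitrary, not part of the algorithm specification):

```python
import numpy as np

def gamma_parameters(alpha, s):
    """Moments of (xi_0, chi, chi~) for the optimal h described above.

    alpha : squeezing parameter, 0 < alpha < 1
    s     : loss amplitude; s**2 is the loss level, c**2 = 1 - s**2
    """
    c2 = 1.0 - s**2
    # Optimal h minimizes max(var_chi, |h|): h = -1/2 * 1/(1/alpha + s^2)
    h = -0.5 / (1.0/alpha + s**2)
    var_chi = 1.0 / (1.0/alpha + s**2) + h      # equals -h = |h| at the optimum
    var_xi0 = s**2 / (1.0/alpha**2 - s**4) - h  # must be non-negative
    eps = c2 * max(var_chi, abs(h))             # perturbation parameter
    return h, var_chi, var_xi0, eps
```

For example, $\alpha = 0.694$ and $s^2 = 0.372$ (the conditions of [8] discussed in Section 5) give $\varepsilon \approx 0.173$.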

3.5. Multimode Case

Let us apply the steps described above to the case of N modes. We start with an input state
$$|\psi^{(N)}\rangle = \prod_{i=1}^N e^{\frac{\alpha}{2}\left(c\hat a_i^\dagger+s\hat b_i^\dagger\right)^2}\,|0_a0_b\rangle.$$
We construct a density matrix and take the partial trace over loss modes:
$$\hat\rho = \mathrm{Tr}_b\left[e^{\sum_i\frac{\alpha}{2}\left(c\hat a_i^\dagger+s\hat b_i^\dagger\right)^2}|0_a0_b\rangle\langle0_a0_b|e^{\sum_i\frac{\alpha}{2}\left(c\hat a_i+s\hat b_i\right)^2}\right].$$
We apply the Hubbard–Stratonovich transformation $2N$ times, resulting in an integral over $\prod_{i=1}^N d\xi_i\,d\tilde\xi_i$:
$$\hat\rho = \frac{1}{(2\pi)^N}\int_{\mathbb{R}^{2N}} \mathrm{Tr}_b\!\left[e^{\sum_i\xi_i\sqrt{\alpha}(c\hat a_i^\dagger+s\hat b_i^\dagger)}|0_a0_b\rangle\langle0_a0_b|e^{\sum_i\tilde\xi_i\sqrt{\alpha}(c\hat a_i+s\hat b_i)}\right]e^{-\sum_i\frac{\xi_i^2+\tilde\xi_i^2}{2}}\prod_i d\xi_i\,d\tilde\xi_i = \frac{1}{(2\pi\alpha)^N}\int_{\mathbb{R}^{2N}} \mathrm{Tr}_b\!\left[e^{\sum_i\xi_i\sqrt{\alpha}(c\hat a_i^\dagger+s\hat b_i^\dagger)}|0_a0_b\rangle\langle0_a0_b|e^{\sum_i\tilde\xi_i\sqrt{\alpha}(c\hat a_i+s\hat b_i)}\right]\cdot e^{-\sum_i\frac{(\xi_i\sqrt{\alpha})^2+(\tilde\xi_i\sqrt{\alpha})^2}{2\alpha}}\prod_i d(\xi_i\sqrt{\alpha})\,d(\tilde\xi_i\sqrt{\alpha}),$$
where the integral for each variable ξ i and ξ ˜ i is calculated over ( , + ) .
Again, we redefine $\xi_i\sqrt{\alpha}\to\xi_i$, $\tilde\xi_i\sqrt{\alpha}\to\tilde\xi_i$:
$$\hat\rho = \frac{1}{(2\pi\alpha)^N}\int_{\mathbb{R}^{2N}} \mathrm{Tr}_b\!\left[e^{\sum_i\xi_i(c\hat a_i^\dagger+s\hat b_i^\dagger)}|0_a0_b\rangle\langle0_a0_b|e^{\sum_i\tilde\xi_i(c\hat a_i+s\hat b_i)}\right]e^{-\sum_i\frac{\xi_i^2+\tilde\xi_i^2}{2\alpha}}\prod_i d\xi_i\,d\tilde\xi_i.$$
We compute partial trace over loss modes:
$$\hat\rho = \frac{1}{(2\pi\alpha)^N}\int_{\mathbb{R}^{2N}} e^{\sum_i\xi_ic\hat a_i^\dagger}|0\rangle\langle0|e^{\sum_i\tilde\xi_ic\hat a_i}\;e^{\sum_i\left(-\frac{\xi_i^2+\tilde\xi_i^2}{2\alpha}+\xi_i\tilde\xi_is^2\right)}\prod_i d\xi_i\,d\tilde\xi_i.$$
This expression can now be considered as taking an expected value over a $2N$-dimensional normal distribution where all variable pairs $(\xi_i,\tilde\xi_i)$ are independent. Every variable pair $(\xi_i,\tilde\xi_i)$ has covariance matrix $\Sigma$, and we can write this expression in the following way:
$$\hat\rho = \frac{(\det\Sigma)^{N/2}}{\alpha^N}\cdot\mathbb{E}_{\prod_i\mathcal{N}(0,\Sigma)}\left[e^{\sum_i\xi_ic\hat a_i^\dagger}|0\rangle\langle0|e^{\sum_i\tilde\xi_ic\hat a_i}\right].$$
For each variable pair ξ i , ξ ˜ i we now choose ξ 0 i , χ i , χ ˜ i in a way that is described above. Then,
$$\hat\rho = \frac{(\det\Sigma)^{N/2}}{\alpha^N}\cdot\mathbb{E}_{\prod_i\mathcal{N}(0,\Gamma)}\left[e^{\sum_i(\xi_{0i}+\chi_i)c\hat a_i^\dagger}|0\rangle\langle0|e^{\sum_i(\xi_{0i}+\tilde\chi_i)c\hat a_i}\right].$$
We now consider the Taylor series expansion (up to the second order) of the expression in the square brackets, which we will denote μ ^ :
$$\hat\mu = e^{\sum_i\chi_ic\hat a_i^\dagger}\,e^{\sum_i\xi_{0i}c\hat a_i^\dagger}|0\rangle\langle0|e^{\sum_i\xi_{0i}c\hat a_i}\,e^{\sum_i\tilde\chi_ic\hat a_i} \approx \prod_i\left(1+\chi_ic\hat a_i^\dagger+\frac{(\chi_ic\hat a_i^\dagger)^2}{2}\right)e^{\sum_i\xi_{0i}c\hat a_i^\dagger}|0\rangle\langle0|e^{\sum_i\xi_{0i}c\hat a_i}\prod_i\left(1+\tilde\chi_ic\hat a_i+\frac{(\tilde\chi_ic\hat a_i)^2}{2}\right).$$
The creation operators a ^ i that act on the input modes can be written in terms of the operators d ^ i that act on the output modes:
$$\hat\mu \approx \prod_i\left(1+\chi_ic\sum_jU_{ij}\hat d_j^\dagger+\frac{\left(\chi_ic\sum_jU_{ij}\hat d_j^\dagger\right)^2}{2}\right)e^{\sum_{ij}\xi_{0i}cU_{ij}\hat d_j^\dagger}|0\rangle\langle0|e^{\sum_{ij}\xi_{0i}cU_{ij}^*\hat d_j}\prod_i\left(1+\tilde\chi_ic\sum_jU_{ij}^*\hat d_j+\frac{\left(\tilde\chi_ic\sum_jU_{ij}^*\hat d_j\right)^2}{2}\right).$$
We will denote
$$\hat\nu(\xi_0c) = e^{\sum_{ij}\xi_{0i}cU_{ij}\hat d_j^\dagger}|0\rangle\langle0|e^{\sum_{ij}\xi_{0i}cU_{ij}^*\hat d_j}.$$
We can expand the brackets in the expression for μ ^ , leaving the terms up to the second order:
$$\prod_i\left(1+\chi_ic\sum_jU_{ij}\hat d_j^\dagger+\frac{\left(\chi_ic\sum_jU_{ij}\hat d_j^\dagger\right)^2}{2}\right) = 1+\sum_j\hat d_j^\dagger\sum_ic\chi_iU_{ij}+\sum_{jk}\hat d_j^\dagger\hat d_k^\dagger\left(\frac{1}{2}\sum_ic^2\chi_i^2U_{ij}U_{ik}+\sum_{i<l}c\chi_i\,c\chi_l\,U_{ij}U_{lk}\right)+\dots,$$
$$\prod_i\left(1+\tilde\chi_ic\sum_jU_{ij}^*\hat d_j+\frac{\left(\tilde\chi_ic\sum_jU_{ij}^*\hat d_j\right)^2}{2}\right) = 1+\sum_j\hat d_j\sum_ic\tilde\chi_iU_{ij}^*+\sum_{jk}\hat d_j\hat d_k\left(\frac{1}{2}\sum_ic^2\tilde\chi_i^2U_{ij}^*U_{ik}^*+\sum_{i<l}c\tilde\chi_i\,c\tilde\chi_l\,U_{ij}^*U_{lk}^*\right)+\dots$$
When we take the product of these two expressions, most of the resulting terms will have zero expected value because of the properties of the normal distribution. Then
$$\hat\mu = \hat\nu(\xi_0c) + \frac{1}{2}\sum_i\chi_i^2c^2\sum_{jk}U_{ij}U_{ik}\cdot\hat d_j^\dagger\hat d_k^\dagger\,\hat\nu(\xi_0c) + \frac{1}{2}\sum_i\tilde\chi_i^2c^2\sum_{jk}U_{ij}^*U_{ik}^*\cdot\hat\nu(\xi_0c)\,\hat d_j\hat d_k + \sum_i\chi_i\tilde\chi_ic^2\sum_{jk}U_{ij}U_{ik}^*\cdot\hat d_j^\dagger\,\hat\nu(\xi_0c)\,\hat d_k + \frac{1}{4}\sum_{ij}\chi_i^2\tilde\chi_j^2c^4\sum_{klmn}U_{ik}U_{il}U_{jm}^*U_{jn}^*\cdot\hat d_k^\dagger\hat d_l^\dagger\,\hat\nu(\xi_0c)\,\hat d_m\hat d_n + \sum_{ij}\chi_i\chi_j\tilde\chi_i\tilde\chi_jc^4\sum_{klmn}U_{ik}U_{jl}U_{im}^*U_{jn}^*\cdot\hat d_k^\dagger\hat d_l^\dagger\,\hat\nu(\xi_0c)\,\hat d_m\hat d_n.$$
The integrals over $\chi_i,\tilde\chi_i$ result in specific moments of the distribution, and the integral over $\xi_{0i}$ can be calculated using Monte Carlo methods. The final expression is
$$\mathrm{Tr}\left[\hat\rho_{\mathrm{out}}\hat n\right] = \frac{(\det\Sigma)^{N/2}}{\alpha^N}\,\mathbb{E}_{\prod_i\mathcal{N}\left(0,\overline{\xi_0^2}\right)}\Big[\mathrm{Tr}\left[\hat\nu(\xi_0c)\hat n\right] + \frac{1}{2}\overline{\chi^2}c^2\sum_{ijk}U_{ij}U_{ik}\cdot\mathrm{Tr}\left[\hat d_j^\dagger\hat d_k^\dagger\hat\nu(\xi_0c)\hat n\right] + \frac{1}{2}\overline{\tilde\chi^2}c^2\sum_{ijk}U_{ij}^*U_{ik}^*\cdot\mathrm{Tr}\left[\hat\nu(\xi_0c)\hat d_j\hat d_k\hat n\right] + \overline{\chi\tilde\chi}\,c^2\sum_{ijk}U_{ij}U_{ik}^*\cdot\mathrm{Tr}\left[\hat d_j^\dagger\hat\nu(\xi_0c)\hat d_k\hat n\right] + \frac{1}{4}c^4\sum_{ij}\left(\overline{\chi^2}^{\,2}+2\delta_{ij}\overline{\chi\tilde\chi}^{\,2}\right)\sum_{klmn}U_{ik}U_{il}U_{jm}^*U_{jn}^*\cdot\mathrm{Tr}\left[\hat d_k^\dagger\hat d_l^\dagger\hat\nu(\xi_0c)\hat d_m\hat d_n\hat n\right] + \overline{\chi\tilde\chi}^{\,2}c^4\sum_{ijklmn}U_{ik}U_{jl}U_{im}^*U_{jn}^*\cdot\mathrm{Tr}\left[\hat d_k^\dagger\hat d_l^\dagger\hat\nu(\xi_0c)\hat d_m\hat d_n\hat n\right]\Big],$$
where by Wick’s probability theorem $\overline{\chi_i^2\tilde\chi_j^2} = \overline{\chi_i^2}\cdot\overline{\tilde\chi_j^2}+2\,\overline{\chi_i\tilde\chi_j}^{\,2} = \overline{\chi^2}^{\,2}+2\delta_{ij}\overline{\chi\tilde\chi}^{\,2}$.

3.6. Calculating Traces

In order to calculate $\mathrm{Tr}\left[\hat\rho_{\mathrm{out}}\hat n\right]$, we need to be able to calculate expressions of the form $\mathrm{Tr}\left[\hat\nu(x)\hat n\right]$, $\mathrm{Tr}\left[\hat d_j^\dagger\hat d_k^\dagger\hat\nu(x)\hat n\right]$, $\mathrm{Tr}\left[\hat\nu(x)\hat d_j\hat d_k\hat n\right]$, $\mathrm{Tr}\left[\hat d_j^\dagger\hat\nu(x)\hat d_k\hat n\right]$, etc., for different $x$. The first one can be calculated fairly easily:
$$\mathrm{Tr}\left[\hat\nu(x)\hat n\right] = \mathrm{Tr}\left[e^{\sum_{ij}x_iU_{ij}\hat d_j^\dagger}|0\rangle\langle0|e^{\sum_{ij}x_iU_{ij}^*\hat d_j}\,|n\rangle\langle n|\right] = \langle0|e^{\sum_{ij}x_iU_{ij}^*\hat d_j}|n\rangle\langle n|e^{\sum_{ij}x_iU_{ij}\hat d_j^\dagger}|0\rangle = \prod_j\langle0|e^{\sum_ix_iU_{ij}^*\hat d_j}|n_j\rangle\langle n_j|e^{\sum_ix_iU_{ij}\hat d_j^\dagger}|0\rangle = \prod_j\frac{\left(\sum_ix_iU_{ij}^*\right)^{n_j}}{\sqrt{n_j!}}\cdot\frac{\left(\sum_ix_iU_{ij}\right)^{n_j}}{\sqrt{n_j!}} = \prod_j\frac{1}{n_j!}\left|\sum_ix_iU_{ij}\right|^{2n_j}.$$
Now, suppose we need to calculate $\mathrm{Tr}\left[(\hat d_1^\dagger)^{q_1}\cdots(\hat d_N^\dagger)^{q_N}\,\hat\nu(x)\,(\hat d_1)^{p_1}\cdots(\hat d_N)^{p_N}\,\hat n\right]$. First, we note that
$$\mathrm{Tr}\left[(\hat d_1^\dagger)^{q_1}\cdots(\hat d_N^\dagger)^{q_N}\,\hat\nu(x)\,(\hat d_1)^{p_1}\cdots(\hat d_N)^{p_N}\,\hat n\right] = \mathrm{Tr}\left[\hat\nu(x)\,(\hat d_1)^{p_1}\cdots(\hat d_N)^{p_N}\,\hat n\,(\hat d_1^\dagger)^{q_1}\cdots(\hat d_N^\dagger)^{q_N}\right] = \mathrm{Tr}\left[\hat\nu(x)\,|n-p\rangle\langle n-q|\right]\cdot\prod_j\sqrt{n_j(n_j-1)\cdots(n_j-p_j+1)}\sqrt{n_j(n_j-1)\cdots(n_j-q_j+1)} = \mathrm{Tr}\left[\hat\nu(x)\,|n-p\rangle\langle n-q|\right]\cdot\prod_j\sqrt{\frac{n_j!}{(n_j-p_j)!}}\sqrt{\frac{n_j!}{(n_j-q_j)!}},$$
where by, e.g., $|n-p\rangle$ we mean $\prod_i|n_i-p_i\rangle$.
$$\mathrm{Tr}\left[\hat\nu(x)|n-p\rangle\langle n-q|\right] = \prod_j\langle0|e^{\sum_ix_iU_{ij}^*\hat d_j}|n_j-p_j\rangle\langle n_j-q_j|e^{\sum_ix_iU_{ij}\hat d_j^\dagger}|0\rangle = \prod_j\frac{\left(\sum_ix_iU_{ij}^*\right)^{n_j-p_j}}{\sqrt{(n_j-p_j)!}}\cdot\frac{\left(\sum_ix_iU_{ij}\right)^{n_j-q_j}}{\sqrt{(n_j-q_j)!}} = \mathrm{Tr}\left[\hat\nu(x)\hat n\right]\cdot\prod_j\sqrt{\frac{n_j!}{(n_j-p_j)!}\cdot\frac{n_j!}{(n_j-q_j)!}}\cdot\frac{1}{\left(\sum_ix_iU_{ij}^*\right)^{p_j}\left(\sum_ix_iU_{ij}\right)^{q_j}}.$$
Finally, we can write
$$\mathrm{Tr}\left[(\hat d_1^\dagger)^{q_1}\cdots(\hat d_N^\dagger)^{q_N}\,\hat\nu(x)\,(\hat d_1)^{p_1}\cdots(\hat d_N)^{p_N}\,\hat n\right] = \mathrm{Tr}\left[\hat\nu(x)\hat n\right]\cdot\prod_j\frac{n_j!}{(n_j-p_j)!}\cdot\frac{n_j!}{(n_j-q_j)!}\cdot\frac{1}{\left(\sum_ix_iU_{ij}^*\right)^{p_j}\left(\sum_ix_iU_{ij}\right)^{q_j}}.$$
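These two closed forms translate directly into code; a minimal sketch, assuming $S_j=\sum_i x_iU_{ij}$ has already been computed (function names are arbitrary):

```python
import numpy as np
from math import factorial

def trace_nu(S, n):
    """Tr[nu(x) n_hat] = prod_j |S_j|^(2 n_j) / n_j!  with S_j = sum_i x_i U_ij."""
    return np.prod([abs(Sj)**(2*nj) / factorial(nj) for Sj, nj in zip(S, n)])

def trace_general(S, n, p, q):
    """Tr[(d1+)^q1 ... nu(x) (d1)^p1 ... n_hat]; zero when any p_j or q_j exceeds n_j."""
    out = trace_nu(S, n)
    for Sj, nj, pj, qj in zip(S, n, p, q):
        if pj > nj or qj > nj:
            return 0.0
        Fp = factorial(nj) // factorial(nj - pj)   # F_p^n = n!/(n-p)!
        Fq = factorial(nj) // factorial(nj - qj)
        out *= Fp * Fq / (np.conj(Sj)**pj * Sj**qj)
    return out
```

A convenient sanity check: for a single mode with $n_j = p_j = q_j = 1$, the two factorial fractions and the $1/(S_j^*S_j)$ factor exactly cancel the $|S_j|^2$ in $\mathrm{Tr}[\hat\nu(x)\hat n]$, so the result is 1 regardless of $S_j$.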

4. Algorithm Overview

The goal of the algorithm is to calculate the probability of a state $|n\rangle$, given $n$, $\alpha$, c, s and U. We assume that the Taylor series expansion is calculated up to the desired order before computation starts. The expectations over $\chi_i$ and $\tilde\chi_i$ should also be computed in advance (this can be done analytically via Wick’s probability theorem).
We start by calculating the two-variable covariance matrix $\Sigma$ using $\alpha$ and s. We then select $\Gamma$ in the way specified above, such that it minimizes the series expansion parameter $\varepsilon$. In order to compute the integrals over $\xi_{0i}$, we sample $\xi_{0i}$ for each i from the normal distribution $\mathcal{N}\left(0,\overline{\xi_0^2}\right)$.
We then compute $\mathrm{Tr}\left[\hat\mu\hat n\right]$, which by linearity reduces to computing traces of the form described above; for each sample $\xi_0$, we need only a polynomial number of operations.
Finally, we take an average over our samples and multiply by the necessary constant $\frac{(\det\Sigma)^{N/2}}{\alpha^N}\left(1-|\alpha|^2\right)^{N/2}$.
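The zeroth-order part of the procedure above can be sketched in a few lines of Python. This is a simplified illustration, not the full implementation: only the $\mathrm{Tr}[\hat\nu(\xi_0c)\hat n]$ term is kept under the expectation, and the function name and defaults are arbitrary:

```python
import numpy as np
from math import factorial

def zeroth_order_probability(n, alpha, c, s, U, num_samples=4096, seed=0):
    """Monte-Carlo estimate of the zeroth-order term of Tr[rho_out n_hat].

    n : target photon numbers per output mode
    alpha : squeezing parameter; c, s : loss amplitudes (c**2 + s**2 == 1)
    U : N x N interferometer matrix
    """
    rng = np.random.default_rng(seed)
    n = np.asarray(n)
    N = len(n)
    # Covariance matrix Sigma of each pair (xi_i, xi~_i), Section 3.2
    sigma = np.linalg.inv(np.array([[1/alpha, -s**2], [-s**2, 1/alpha]]))
    # Optimal h and the variance of xi_0, Section 3.4
    h = -0.5 / (1/alpha + s**2)
    var_xi0 = s**2 / (1/alpha**2 - s**4) - h
    # Multiplicative constant, including the squared state norm
    const = (np.sqrt(np.linalg.det(sigma)) / alpha)**N * (1 - alpha**2)**(N/2)
    n_fact = np.array([factorial(int(k)) for k in n])
    total = 0.0
    for _ in range(num_samples):
        xi0 = rng.normal(0.0, np.sqrt(var_xi0), size=N)
        S = (c * xi0) @ U                             # S_j = sum_i c xi0_i U_ij
        total += np.prod(np.abs(S)**(2*n) / n_fact)   # Tr[nu(xi0 c) n_hat]
    return const * total / num_samples
```

With $s = 0$ (no losses) and a vacuum target state, every sample contributes exactly 1, so the estimate reduces to the constant $(1-\alpha^2)^{N/2}$, which serves as a sanity check.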

5. Taylor Series Convergence for Actual Experimental Conditions

We have discussed above the fact that the role of the “perturbation parameter” in the series expansion is played by $c^2\cdot\max\left(\overline{\chi^2},|h|\right)$, which with the optimal choice of h equals $\varepsilon = \frac{1}{2}\cdot\frac{c^2}{1/\alpha+s^2}$. This parameter depends on the experimental conditions (i.e., the squeezing parameter of the input state $\alpha$ and the loss level $s^2$). The smaller this parameter is, the faster the series converges. Thus, the best conditions for this algorithm are achieved when the loss level $s^2$ is high and the squeezing parameter $\alpha$ is low. Let us consider actual experimental implementations of the Gaussian boson sampling problem and estimate how small this parameter is under those conditions.
Let us consider the relation between $\alpha$ and the average number of photons per mode $\bar n$. If the squeezing parameter is $\zeta = re^{i\varphi}$, then $\alpha = \tanh r$, while $\bar n = \sinh^2 r$.
In a paper by Zhong et al. [8], 25 PPKTP crystals were used to produce 25 two-mode squeezed states, which is equivalent to 50 single-mode squeezed states. The average number of photons registered by the detectors was 43. Thus, the average number of photons per mode $\bar n$ is around $43/50$; $r = \operatorname{arcsinh}\sqrt{\bar n}\approx0.855$, $\alpha = \tanh r\approx0.694$. The average collection efficiency is stated to be $c^2 = 0.628$. Then, $\varepsilon = \frac{1}{2}\cdot\frac{c^2}{1/\alpha+s^2}\approx0.18$.
In another paper by Zhong et al. [9], the average number of photons produced was increased to 70 at maximum pump intensity. This corresponds to $\alpha\approx0.76$. The overall transmission rate is stated to be between 48% and 54% for different settings, so we take $s^2\approx0.5$. This yields $\varepsilon\approx0.14$.
In the most recent experiment by Deng et al. [10], the average number of photons was increased even further, measuring states with 50, 75 and 100 photons on average at different pump intensities while still producing 25 two-mode squeezed states. The efficiency of the setup is stated to be 43%, yielding $\varepsilon\approx0.11$, $\varepsilon\approx0.12$ and $\varepsilon\approx0.12$, respectively.
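These estimates follow from $\alpha=\sqrt{\bar n/(1+\bar n)}$ (i.e., $\alpha=\tanh(\operatorname{arcsinh}\sqrt{\bar n})$) and the definition of $\varepsilon$; a short sketch reproducing the numbers for [10] (function name arbitrary):

```python
import numpy as np

def eps_from_experiment(n_bar, c2):
    """Perturbation parameter eps = c^2 / (2 (1/alpha + s^2)) from the
    mean photon number per mode n_bar and the transmission c^2."""
    alpha = np.sqrt(n_bar / (1 + n_bar))   # alpha = tanh(arcsinh(sqrt(n_bar)))
    s2 = 1 - c2
    return 0.5 * c2 / (1/alpha + s2)

# Deng et al. [10]: 50, 75, 100 photons over 50 modes, 43% efficiency
eps_values = [eps_from_experiment(k / 50, 0.43) for k in (50, 75, 100)]
# eps_values rounds to 0.11, 0.12, 0.12
```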
To estimate the expected accuracy of the algorithm, we can assume that the numerical values of each order are approximately equal, meaning that we can write
$$\mathrm{Tr}\left[\hat\rho_{\mathrm{out}}\hat n\right] = P_0+\varepsilon P_2+\varepsilon^2P_4+\dots,$$
where $P_k$ denotes the sum of all the terms of the k-th order, and $P_0\approx P_2\approx P_4\approx\dots\approx P_k$ is assumed. The expression then becomes a geometric progression with common ratio $\varepsilon$. Then, on average, the 0-th order contributes a fraction $1-\varepsilon$ of the probability, the second order contributes $\varepsilon(1-\varepsilon)$, the fourth contributes $\varepsilon^2(1-\varepsilon)$, etc.
Calculating up to the second order then discards a total contribution of $\varepsilon^2$, which is approximately $0.18^2\approx3.2\%$, $0.14^2\approx1.96\%$, $0.11^2\approx1.21\%$ and $0.12^2\approx1.44\%$ for the conditions analyzed above. When the calculation is performed up to the fourth order, the lost contribution is $\varepsilon^3$, which is approximately $0.18^3\approx0.58\%$, $0.14^3\approx0.27\%$, $0.11^3\approx0.13\%$ and $0.12^3\approx0.17\%$.
The conclusion that we draw is that even in large GBS experiments which are said to demonstrate quantum advantage, the conditions are such that ε is fairly small, and calculating up to the fourth order is enough for the lost contribution to be below 1%.

6. Implementation Details

6.1. Contraction Precomputation

Let us consider the term
$$\frac{1}{2}\overline{\chi^2}c^2\sum_{ijk}U_{ij}U_{ik}\cdot\mathrm{Tr}\left[\hat d_j^\dagger\hat d_k^\dagger\hat\nu(\xi_0c)\hat n\right].$$
We can rewrite it as
$$\frac{1}{2}\overline{\chi^2}c^2\sum_{jk}\mathrm{Tr}\left[\hat d_j^\dagger\hat d_k^\dagger\hat\nu(\xi_0c)\hat n\right]\sum_iU_{ij}U_{ik} = \frac{1}{2}\overline{\chi^2}c^2\sum_{jk}\mathrm{Tr}\left[\hat d_j^\dagger\hat d_k^\dagger\hat\nu(\xi_0c)\hat n\right]T_{jk},$$
where $T_{jk}=\sum_iU_{ij}U_{ik}$ is a contraction of U with itself. It depends only on U and can be calculated before sampling $\xi_0$, which reduces the number of operations required to calculate each probability sample from a $\xi_0$ sample.
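In NumPy, for example, this contraction (and the mixed one that appears in the $\hat d_j^\dagger\,\hat\nu\,\hat d_k$ term) is a one-liner; the random unitary below is purely illustrative:

```python
import numpy as np

# Illustrative 4-mode unitary obtained by QR decomposition of a random matrix
rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

T = np.einsum('ij,ik->jk', U, U)               # T_jk = sum_i U_ij U_ik
T_mixed = np.einsum('ij,ik->jk', U, U.conj())  # sum_i U_ij U*_ik
# For a unitary U the mixed contraction is the identity matrix
assert np.allclose(T_mixed, np.eye(4))
```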

6.2. Factorial Fractions Precomputation

In calculating traces of the form described above, we need to calculate factorial fractions of the form $\frac{m!}{(m-p)!}\equiv F_p^m$, where $0\le p\le m$. Since the target state $\hat n$ is fixed, $m\le\max(n_i)$, so all of these fractions can be precomputed.

6.3. Reusing $\sum_i x_iU_{ij}$

While calculating each trace, we can compute $\sum_i x_iU_{ij}$ only once for each $\xi_0$ sample and then reuse it, thus using fewer operations per trace. Let us denote $S_j = \sum_i x_iU_{ij}$, i.e., $S = U^Tx$. Then,
$$\mathrm{Tr}\left[\hat\nu(x)\hat n\right] = \prod_j\frac{1}{n_j!}\left|S_j\right|^{2n_j}$$
and
$$\mathrm{Tr}\left[(\hat d_1^\dagger)^{q_1}\cdots(\hat d_N^\dagger)^{q_N}\,\hat\nu(x)\,(\hat d_1)^{p_1}\cdots(\hat d_N)^{p_N}\,\hat n\right] = \mathrm{Tr}\left[\hat\nu(x)\hat n\right]\prod_j\frac{F_{p_j}^{n_j}F_{q_j}^{n_j}}{\left(S_j^*\right)^{p_j}S_j^{q_j}}.$$

7. Complexity Analysis

7.1. Precomputation

In this section, we will analyze the computational complexity of precomputation. By precomputation we mean the calculations that need to be carried out only once, before $\xi_0$ sampling and before calculating probability samples for each $\xi_0$. The multiplicative constant $\frac{(\det\Sigma)^{N/2}}{\alpha^N}\left(1-|\alpha|^2\right)^{N/2}$ can be calculated with $O(N)$ multiplication operations. For each term in the resulting sum, we define its order to be the number of variables $\chi$ and $\tilde\chi$ or, equivalently, the power of the loss parameter c. Thus, the term
$$\overline{\chi\tilde\chi}\,c^2\sum_{ijk}U_{ij}U_{ik}^*\cdot\mathrm{Tr}\left[\hat d_j^\dagger\hat\nu(\xi_0c)\hat d_k\hat n\right]$$
is of the second order. Each term of order K will contain a contraction of the form
$$\sum_{j_1\dots j_K}U_{i_1j_1}U_{i_2j_2}\cdots U_{i_Kj_K},$$
where some of the factors $U_{i_kj_k}$ can be conjugated. This leaves at most $K+1$ essentially different ways to conjugate the factors. Each contraction has K free indices, and calculating the sum requires $N^K$ additions and $N^K(K-1)$ multiplications. The total number of additions is $N^{2K}$ and the number of multiplications is $N^{2K}(K-1)$, where K is the maximum order we choose to calculate.
Calculating all $F_p^m\equiv\frac{m!}{(m-p)!}$ for $0\le p\le m\le\max(n_i)$ requires only around $\frac{\max(n_i)^2}{2}$ multiplications, since $F_0^m=1$, $F_1^m=m$, $F_2^m=m(m-1)=(m-1)F_1^m$, …, $F_k^m=(m-k+1)F_{k-1}^m$.
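This recurrence can be implemented directly; a sketch (the function name and table layout are arbitrary):

```python
def factorial_fractions(m_max):
    """F[p][m] = m!/(m-p)! for 0 <= p <= m <= m_max, built by the
    recurrence F_p^m = (m - p + 1) * F_{p-1}^m (entries with p > m stay 0)."""
    F = [[1] * (m_max + 1)]              # F_0^m = 1 for all m
    for p in range(1, m_max + 1):
        row = [0] * (m_max + 1)
        for m in range(p, m_max + 1):
            row[m] = (m - p + 1) * F[p - 1][m]
        F.append(row)
    return F
```

For example, `factorial_fractions(4)[2][4]` returns 12, which is $4!/2!$.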

7.2. Probability Sample Computation

Here, we will analyze the computational complexity of calculating a single probability sample given ξ 0 . We will assume that the terms are calculated up to some order K.
Calculating the trace
$$\mathrm{Tr}\left[\hat\nu(x)\hat n\right] = \prod_j\frac{1}{n_j!}\left|\sum_ix_iU_{ij}\right|^{2n_j}$$
requires one multiplication of an $N\times N$ matrix by an N-dimensional vector, N exponentiation operations and 2N multiplication operations. This calculation needs to be made only once for each $x$. Calculating any other trace of the form
$$\mathrm{Tr}\left[(\hat d_1^\dagger)^{q_1}\cdots(\hat d_N^\dagger)^{q_N}\,\hat\nu(x)\,(\hat d_1)^{p_1}\cdots(\hat d_N)^{p_N}\,\hat n\right]$$
requires 2 N exponentiation operations and 4 N multiplication operations (since factorial fractions are precomputed).
The number of terms of a given order K is $N^K$ times the number of different non-zero K-th order moments $\overline{\chi_{i_1}\cdots\chi_{i_r}\tilde\chi_{i_{r+1}}\cdots\tilde\chi_{i_K}}$. The exact number is hard to calculate, but the total number of moments (including those that are zero) is $(K+1)N^K$. Thus, the maximum number of terms required to compute is $(K+1)N^{2K}$.
Since the number of operations required to calculate each term is $O(N)$, the total computational complexity of calculating a probability sample for a given $\xi_0$ is $O\!\left(K\cdot N^{2K}\right)$.

8. Results

Below are the results of probability calculations for N = 5 for different output states. The calculated probabilities are compared to exact solutions. The parameters are $\alpha=0.9$, $c=s=\frac{\sqrt{2}}{2}$. The number of samples is 4096.
These results show that for calculating a single output state probability accurately, the number of samples needs to be on the order of 10 4 . Below are the results of using fewer samples per state, but instead of comparing individual probabilities, we look at the cosine similarity between the exact and approximated probability distributions over all two-photon states, Figure 1, Figure 2 and Figure 3.
The above graph suggests that the number of samples per state needed to approximate the distribution does not depend much on N. It is computationally hard to check this against the exact solution, but if we assume that the cosine similarity converges to a value close to 1, we can estimate how quickly it converges. Below, we look at the cosine similarity between a distribution calculated using H samples per state and a distribution calculated with H + ΔH samples per state for different H, where we choose ΔH = 10. This allows us to estimate how much the distribution changes with ΔH new samples: if the cosine similarity is close to 1, then new samples do not alter the distribution significantly. Figure 4 suggests more strongly that the number of samples per state required for accurate approximation is not really influenced by N. This can be explained by the fact that the number of two-photon states increases with N: if the number of samples per state is constant, then the total number of samples increases with N.
Below are benchmark results that show the average precomputation time, which depends only on N, and the time per sample, which depends on N and the amount of photons M in the target state, Figure 5 and Figure 6.
These results show that even N = 40 mode GBS can be simulated on an average laptop using this algorithm.
A direct comparison of the performance of our method with other published results is problematic, since our algorithm calculates the probability of output states rather than directly sampling states from some approximate distribution. The closest algorithm is described in [16]. However, the performance comparison is still problematic, because the error bars of the two methods cannot be compared directly. Nevertheless, judging by Figure 9 from [16], our algorithm is more memory-efficient: it uses about 0.1 GB of memory to run in conditions where the transmission rate is 0.5 and the number of modes is 40, whereas the algorithm of [16] uses about 1 TB, and its memory requirement grows exponentially with the number of modes.

9. Conclusions

In this paper, we have presented a new algorithm for the approximate calculation of the probability of observing a given output state in a Gaussian boson sampling instance. We have discussed various implementation details that help to reduce the number of operations needed to calculate each probability sample. We have also analyzed the total computational complexity, both of the calculations that need to be carried out once for each specific problem and of computing each probability sample.
This algorithm relies on a Taylor series expansion whose “perturbation” parameter depends on the problem conditions. The algorithm consists of calculating the terms of this Taylor series up to some finite order. For a fixed maximum order, the computational complexity of the algorithm is polynomial in N.
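As a toy illustration of the truncation idea (not the actual GBS series of this paper), one can measure the relative contribution of the discarded terms for a simple series with a known sum. Here exp(x) and the value x = 0.3 are stand-ins for a series with a small perturbation parameter:

```python
import math

def taylor_exp(x, max_order):
    # Partial sum of the Taylor series of exp(x) up to max_order.
    return sum(x**k / math.factorial(k) for k in range(max_order + 1))

def discarded_fraction(x, max_order):
    # Relative contribution of the terms beyond max_order.
    return abs(math.exp(x) - taylor_exp(x, max_order)) / math.exp(x)
```

For a small parameter, the discarded fraction drops rapidly with the maximum order (for x = 0.3, it is already below 1% at second order), which mirrors the behavior exploited by the algorithm: a fixed low truncation order suffices when the perturbation parameter is small.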
We have demonstrated that increasing the maximum order does increase the accuracy of the answer. We have also measured the precomputation and sampling times on a regular CPU, showing that even large instances of Gaussian boson sampling (N = 40) can be solved in reasonable time.
We have considered recent GBS experiments and estimated the parameters of the problem under those conditions. We conclude that the contribution of the terms discarded when the calculation is completed up to the second order is less than 5%, and if the calculation is completed up to the fourth order, this number drops to 1%.

Author Contributions

Methodology, A.N.R. and M.V.U.; Software, M.V.U.; Investigation, M.V.U.; Resources, A.N.R.; Data curation, M.V.U.; Writing—original draft, M.V.U.; Writing—review & editing, A.N.R.; Supervision, A.N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out in the framework of the Russian Quantum Technologies Roadmap.

Data Availability Statement

Data and program code are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shor, P.W. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. Siam J. Comput. 1997, 26, 1484–1509. [Google Scholar] [CrossRef]
  2. Aaronson, S.; Arkhipov, A. The Computational Complexity of Linear Optics. arXiv 2010, arXiv:1011.3245. [Google Scholar]
  3. Gard, B.T.; Motes, K.R.; Olson, J.P.; Rohde, P.P.; Dowling, J.P. An Introduction to Boson-Sampling. In From Atomic to Mesoscale; World Scientific: Singapore, 2015; pp. 167–192. [Google Scholar] [CrossRef]
  4. Bentivegna, M.; Spagnolo, N.; Vitelli, C.; Flamini, F.; Viggianiello, N.; Latmiral, L.; Mataloni, P.; Brod, D.J.; Galvão, E.F.; Crespi, A.; et al. Experimental scattershot boson sampling. Sci. Adv. 2015, 1, e1400255. [Google Scholar] [CrossRef] [PubMed]
  5. Hamilton, C.S.; Kruse, R.; Sansoni, L.; Barkhofen, S.; Silberhorn, C.; Jex, I. Gaussian Boson Sampling. Phys. Rev. Lett. 2017, 119, 170501. [Google Scholar] [CrossRef] [PubMed]
  6. Lund, A.P.; Laing, A.; Rahimi-Keshari, S.; Rudolph, T.; O’Brien, J.L.; Ralph, T.C. Boson Sampling from a Gaussian State. Phys. Rev. Lett. 2014, 113, 100502. [Google Scholar] [CrossRef] [PubMed]
  7. Zhong, H.S.; Peng, L.C.; Li, Y.; Hu, Y.; Li, W.; Qin, J.; Wu, D.; Zhang, W.; Li, H.; Zhang, L.; et al. Experimental Gaussian Boson sampling. Sci. Bull. 2019, 64, 511–515. [Google Scholar] [CrossRef] [PubMed]
  8. Zhong, H.S.; Wang, H.; Deng, Y.H.; Chen, M.C.; Peng, L.C.; Luo, Y.H.; Qin, J.; Wu, D.; Ding, X.; Hu, Y.; et al. Quantum computational advantage using photons. Science 2020, 370, 1460–1463. [Google Scholar] [CrossRef] [PubMed]
  9. Zhong, H.S.; Deng, Y.H.; Qin, J.; Wang, H.; Chen, M.C.; Peng, L.C.; Luo, Y.H.; Wu, D.; Gong, S.Q.; Su, H.; et al. Phase-Programmable Gaussian Boson Sampling Using Stimulated Squeezed Light. Phys. Rev. Lett. 2021, 127, 180502. [Google Scholar] [CrossRef] [PubMed]
  10. Deng, Y.H.; Gu, Y.C.; Liu, H.L.; Gong, S.Q.; Su, H.; Zhang, Z.J.; Tang, H.Y.; Jia, M.H.; Xu, J.M.; Chen, M.C.; et al. Gaussian Boson Sampling with Pseudo-Photon-Number Resolving Detectors and Quantum Computational Advantage. Phys. Rev. Lett. 2023, 131, 150601. [Google Scholar] [CrossRef] [PubMed]
  11. Oh, C.; Noh, K.; Fefferman, B.; Jiang, L. Classical simulation of lossy boson sampling using matrix product operators. Phys. Rev. A 2021, 104, 022407. [Google Scholar] [CrossRef]
  12. García-Patrón, R.; Renema, J.J.; Shchesnovich, V. Simulating boson sampling in lossy architectures. Quantum 2019, 3, 169. [Google Scholar] [CrossRef]
  13. Popova, A.S.; Rubtsov, A.N. Cracking the Quantum Advantage Threshold for Gaussian Boson Sampling. arXiv 2021, arXiv:2106.01445. [Google Scholar]
  14. Qi, H.; Brod, D.J.; Quesada, N.; García-Patrón, R. Regimes of Classical Simulability for Noisy Gaussian Boson Sampling. Phys. Rev. Lett. 2020, 124, 100502. [Google Scholar] [CrossRef] [PubMed]
  15. Aaronson, S.; Brod, D.J. BosonSampling with lost photons. Phys. Rev. A 2016, 93, 012335. [Google Scholar] [CrossRef]
  16. Oh, C.; Liu, M.; Alexeev, Y.; Fefferman, B.; Jiang, L. Classical Algorithm for Simulating Experimental Gaussian Boson Sampling. arXiv 2023, arXiv:2306.03709. [Google Scholar]
  17. Hubbard, J. Calculation of Partition Functions. Phys. Rev. Lett. 1959, 3, 77–78. [Google Scholar] [CrossRef]
  18. Stratonovich, R.L. On a Method of Calculating Quantum Distribution Functions. Sov. Phys. Dokl. 1957, 2, 416. [Google Scholar]
Figure 1. Probability calculation for 5 modes for different 2-photon output states.
Figure 2. Graph of the average probability and the standard deviation calculated up to different orders for different numbers of samples. The state for this graph is 2-photon.
Figure 3. Convergence of the cosine similarity between the estimated probability distribution over the set of all 2-photon states and the ground truth for different N.
Figure 4. Cosine similarity between the probability distributions over the set of all 2-photon states after H and after H + 10 samples per state for different N.
Figure 5. Precomputation time on an Intel i5 CPU in ms versus the number of modes.
Figure 6. Average time per sample on an Intel i5 CPU versus the number of modes for states with different photon numbers.