Article

Eliminating the Second-Order Time Dependence from the Time Dependent Schrödinger Equation Using Recursive Fourier Transforms

by
Sky Nelson-Isaacs
Independent Researcher, El Cerrito, CA 94530, USA
Quantum Rep. 2024, 6(3), 323-348; https://doi.org/10.3390/quantum6030021
Submission received: 23 May 2024 / Revised: 15 June 2024 / Accepted: 16 June 2024 / Published: 25 June 2024
(This article belongs to the Special Issue 100 Years of Quantum Mechanics)

Abstract

A strategy is developed for writing the time-dependent Schrödinger Equation (TDSE), and more generally the Dyson Series, as a convolution equation using recursive Fourier transforms, thereby decoupling the second-order integral from the first without using the time ordering operator. The energy distribution is calculated for a number of standard perturbation theory examples at first- and second-order. Possible applications include characterization of photonic spectra for bosonic sampling and four-wave mixing in quantum computation and Bardeen tunneling amplitude in quantum mechanics.

1. Introduction

The Time-Dependent Schrödinger Equation (TDSE), although unsolvable in exact terms, is often approached through various perturbative methodologies, such as those pioneered by Rayleigh–Schrödinger [1], Dirac [2], Dyson [3], Lippmann–Schwinger [4], WKB [5], Feynman [6,7,8], and others [9]. Under a weak time-dependent perturbation, the TDSE solution can be written in terms of the known eigenvectors of the time independent Schrödinger Equation (TISE), which results in a Dyson series—an infinite recursion of coupled time integrals. In field theory, it is standard to use the time-ordering operator for decoupling the integrals.
Using the observation that these coupled integrals can be written as repeated Fourier transforms and their inverse, along with appropriate phase factors, a method is developed for decoupling the integrals at second order.
Many methods exist to integrate the Schrödinger Equation [5,10,11,12,13,14]. In particular, some methods use the Fourier transform explicitly, but usually as a trick to make calculations easier, while in others such as split-step, multi-slice, or Fourier space filtration [15,16,17], the Fourier transform plays a more fundamental role relating to the Fourier dual spaces.
In this study, similar to split-step, we break the Hamiltonian into kinetic and potential energy portions and then employ a Recursive Fourier Transform (RFT) technique in a novel way to decouple the second-order integral from the first, bypassing the need to invoke time ordering. Thus, we can represent TDSE, plausibly to any order, as a convolution equation by invoking the Convolution Theorem. This presentation aims to offer an efficient second-order analytical solution to the TDSE, while also emphasizing the method’s utility as a first principle rather than a calculational trick [18].
Using this technique to efficiently and precisely calculate the spectral response of a time-limited perturbation has relevance for recent advances in single-photon generation [19,20,21,22], quantum computing [23,24,25,26,27,28,29,30,31,32], optical traps [33,34,35], quantum cryptography [36], quantum tunneling and microscopy [37,38,39,40,41], quantum information and entanglement [42,43,44], high harmonic generation [45,46,47], ultra fast light pulse generation [48,49,50], and measurements of gravity [51,52]. Applications to open systems may be possible by extending this method to, for example, the Lindblad master Equation [53]. This method may also hold pedagogical promise in physics education by expanding the range of calculable use cases for the TDSE [54,55].
The procedure will be applied here to basic examples, such as Gaussian potentials, but it is quite general to any potential whose Fourier transform exists. One may also find useful application in studying quantum Zeno dynamics, for instance in suppressing second-order photon exchange relative to first order for the purpose of generating entanglement [56].
The procedure may also be useful in quantum error correction techniques by characterizing spectral properties of noise sources leading to the decoherence of qubits.
A detailed breakdown of the paper is as follows: first-order RFT technique introduction to familiar use cases (Section 2.1), a new second-order RFT decoupling technique (Section 2.2), and experimental and theoretical applications (Section 3). Appendices with supplemental sections on basic definitions and concepts (Appendix A and Appendix B), mathematical property examination (Appendix C), numerical accuracy of the method (Appendix D), and interpretation of the results (Appendix E) are also included.

2. Methods

The TDSE will now be evaluated to first and second order using the recursive Fourier transform (RFT) method. We use as a starting point the standard formulation of TDSE, for instance as in [57].
The matrix element for the transition between the initial state $|\omega_i\rangle$ and the final state $|\omega_f\rangle$ is
$$\langle\omega_f|\psi(T)\rangle=\sum_i\langle\omega_f|\left(1-\frac{i}{\hbar}\int_0^T dt_1\,\hat V_I(t_1)+\left(\frac{-i}{\hbar}\right)^{2}\int_0^T dt_1\int_0^{t_1}dt_2\,\hat V_I(t_1)\,\hat V_I(t_2)+\cdots\right)|\omega_i\rangle\,c_i(0),\tag{1}$$
where the potential V ^ I is written in the interaction picture. The potential has both an operator component and a continuous time dependence. We will focus our discussion on the latter.

2.1. Formulating the First-Order Time Dependent Schrödinger Equation through Recursive Fourier Transforms (RFT)

The first-order term in Equation (1) has the well-known approximation [58]
$$\langle\omega_f|\psi(T)\rangle^{(1)}\approx\frac{1}{i\hbar}\sum_i\int_0^T dt_1\,\langle\omega_f|\hat V|\omega_i\rangle\,V(t_1)\,e^{i(\omega_f-\omega_i)t_1}\,c_i(0)\approx\frac{1}{i\hbar}\sum_i V_{fi}\,c_i(0)\,\tilde V(\omega_f-\omega_i),\tag{2}$$
where the tilde ($\sim$) indicates the Fourier transform, $c_i(0)$ is the amplitude of state $|\omega_i\rangle$ at $t=0$, and $V_{fi}=\langle\omega_f|\hat V|\omega_i\rangle$ is the matrix element of the potential connecting the initial and final states, which is not central to the present discussion. In the second line, a standard approximation was made by allowing the time interval to become infinite in both directions ("asymptotic time"), so that Equation (2) becomes the Fourier transform of the potential. The relationship to the Fourier transform becomes even more intriguing when Equation (2) is written in a suggestive way,
$$\langle\omega_f|\psi(T)\rangle^{(1)}=\frac{1}{i\hbar}\int_0^T dt_1\,e^{i\omega_f t_1}\,V(t_1)\sum_i e^{-i\omega_i t_1}\,\langle\omega_f|\hat V|\omega_i\rangle\,c_i(0).\tag{3}$$
Clearly this expression involves a transformation (though not formally a Fourier-type transform) from a frequency representation to a time representation, and a subsequent transformation back to a frequency representation.
Based on this intuition, we modify Equation (2) so that it does not assume asymptotic time, by inserting an indicator function (or mask) that is non-zero only within the specified time range, $0\to T$,
$$\langle\omega_f|\psi(T)\rangle^{(1)}=\frac{1}{i\hbar}\sum_i V_{fi}\,c_i(0)\int_{-\infty}^{\infty}dt_1\,e^{i\omega_f t_1}\,\mathrm{rect}\!\left(\frac{t_1}{T}-\frac{1}{2}\right)V(t_1)\,e^{-i\omega_i t_1}.\tag{4}$$
Everything on the right-hand side is written in the time domain over the parameter $t_1$, but each factor can be written as the inverse Fourier transform of its Fourier transform,
$$\begin{aligned}
\mathrm{rect}\!\left(\frac{t_1}{T}-\frac{1}{2}\right)&=\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{T\,e^{i\omega T/2}\,\mathrm{sinc}(\omega T/2)\right\}\\
V(t_1)&=\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\tilde V(\omega)\right\}\\
e^{-i\omega_i t_1}&=\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\delta(\omega-\omega_i)\right\},
\end{aligned}\tag{5}$$
we can write the integral over t 1 as follows:
$$\mathcal{F}_{t_1\to\omega}\!\left\{\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{T\,e^{i\omega T/2}\,\mathrm{sinc}(\omega T/2)\right\}\;\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\tilde V(\omega)\right\}\;\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\delta(\omega-\omega_i)\right\}\right\}\Big|_{\omega=\omega_f},\tag{6}$$
where $\mathrm{sinc}(x)\equiv\sin(x)/x$, and the symbol $\mathcal{F}_{t_1\to\omega}$ makes explicit that the transform converts an expression in $t_1$ into an expression in $\omega$.
Note that the outer operation in Equation (6) is not technically a Fourier transform to $\omega$ but rather a projection onto a single $e^{i\omega_f t_1}$ basis state. To accomplish this, we compute a Fourier transform to switch to a continuous energy basis and then evaluate the result at the discrete energy $\omega=\omega_f$. Conceptually, this is important because $\omega$ is a dummy convolution variable that is replaced by the measurable energy $\omega_f$.
By applying the convolution theorem to Equation (6) and inserting into Equation (4), we obtain an expression for the first-order transition amplitude,
$$\langle\omega_f|\psi(T)\rangle^{(1)}=\frac{T}{2\pi\,i\hbar}\sum_i V_{fi}\left[e^{i\omega T/2}\,\mathrm{sinc}(\omega T/2)\circledast\tilde V(\omega)\circledast\delta(\omega-\omega_i)\right]\Big|_{\omega=\omega_f}\,c_i(0).\tag{7}$$
This is the first main result: the first-order spectral response to the time-dependent perturbation V. Comparing with Equation (2), we observe variations in the frequency domain with a "spatial frequency" $4\pi/T$, where T is the duration of the measurement (see Figure 1b).
Two standard cases will be considered to illustrate the validity of the result above.

2.1.1. Example: Gaussian-Kicked Harmonic Oscillator

A simple system to consider is the harmonic oscillator “kicked” by a small Gaussian pulse,
$$V(t)\propto e^{-t^2/2\tau^2},\tag{8}$$
where τ is the characteristic time of the Gaussian. For instance, this can represent a cold atom in an optical trap [33,34].
To find the transition amplitude from the i to the f state using Equation (2), the asymptotic time approximation results in the expression
$$c^{(1)}(t)\propto\tau\,\Omega\,e^{-\omega^2\tau^2/2},\tag{9}$$
where Ω is a normalization constant owing to the fixed natural frequency of the oscillator, and ω parameterizes the frequency response of the Gaussian perturbation [58].
Using instead Equation (7), the same transition can be written as
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(1)}&\approx\frac{1}{i\hbar}\,V_{fi}\,c_i(0)\int_{-\infty}^{\infty}dt_1\,e^{i\omega_f t_1}\,\mathrm{rect}\!\left(\frac{t_1}{T}-\frac{1}{2}\right)e^{-t_1^2/2\tau^2}\,e^{-i\omega_i t_1}\\
&\approx\frac{\tau\,T}{2\pi\,i\hbar}\,V_{fi}\left[e^{i\omega T/2}\,\mathrm{sinc}(\omega T/2)\circledast e^{-\omega^2\tau^2/2}\circledast\delta(\omega-\omega_i)\right]\Big|_{\omega=\omega_f},
\end{aligned}\tag{10}$$
where T is the measurement interval, and τ is the characteristic width of the Gaussian. Equation (10) is valid under the usual conditions necessary for a Taylor series (weak interaction), but unlike Equation (9), it does not make the asymptotic time approximation.
The effect of convolution is to add a small ripple to the Gaussian (see Figure 1b). This ripple was ignored in the standard approach (Equation (9)), when the limits of integration are arbitrarily set to infinity.
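As a concrete check, the finite-window result in Equation (10) can be evaluated numerically by convolving the phase-shifted sinc kernel of Equation (7) with the Gaussian spectrum. The following is a minimal NumPy sketch (not the author's MATLAB implementation); the parameter values are illustrative, overall constants are dropped, and $\omega_i=0$ is assumed so that the final $\delta$-function convolution is the identity.

```python
import numpy as np

# Illustrative parameters (not from the paper): Gaussian width tau, window T
tau, T = 1.0, 20.0
omega = np.linspace(-10, 10, 4001)
domega = omega[1] - omega[0]

# Asymptotic-time result, Eq. (9): the Fourier transform of the Gaussian kick
gauss = tau * np.exp(-omega**2 * tau**2 / 2)

# Finite-window kernel from Eq. (7); np.sinc(x) = sin(pi x)/(pi x)
window = T * np.exp(1j * omega * T / 2) * np.sinc(omega * T / (2 * np.pi))

# Frequency-domain convolution of Eq. (10); with omega_i = 0 the delta factor is dropped
finite_T = np.convolve(window, gauss, mode='same') * domega / (2 * np.pi)

# The small ripple on top of the Gaussian is the finite-measurement-time effect
ripple = np.abs(finite_T) / np.abs(finite_T).max() - gauss / gauss.max()
print("maximum relative ripple:", np.abs(ripple).max())
```

As T grows relative to $\tau$, the kernel approaches $2\pi\,\delta(\omega)$ and the ripple disappears, recovering the asymptotic-time result.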

2.1.2. Example: Fermi’s Golden Rule

It will next be verified that Equation (7) reduces to the well-known Fermi Golden Rule. A typical example involves calculating the transition probability for an electron around the stationary atom absorbing a photon and transitioning from a bound state to a continuum of states. The Hamiltonian is
$$\begin{aligned}
\hat H&=\hat H_0 &&\text{for } t\le 0\\
\hat H&=\hat H_0+\hat V(t) &&\text{for } t>0\\
\hat V(t)&=2\hat V_0\cos(\omega_d t)
\end{aligned}\tag{11}$$
where V ^ ( t ) is the time-dependent perturbation, and H ^ 0 and V ^ 0 are independent of time.
The standard integral in Equation (2) results in the following i f transition amplitude:
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(1)}&=\frac{1}{i\hbar}\int_0^T dt_1\,e^{i(\omega_f-\omega_i)t_1}\left(e^{i\omega_d t_1}+e^{-i\omega_d t_1}\right)\langle\omega_f|\hat V|\omega_i\rangle\\
&=\frac{V_{fi}}{i\hbar}\left[\frac{e^{i(\omega_{fi}+\omega_d)T}-1}{i(\omega_{fi}+\omega_d)}+\frac{e^{i(\omega_{fi}-\omega_d)T}-1}{i(\omega_{fi}-\omega_d)}\right]\\
&\approx\frac{V_{fi}}{i\hbar}\,e^{i(\omega_{fi}-\omega_d)T/2}\,\frac{2\sin\!\big((\omega_{fi}-\omega_d)T/2\big)}{\omega_{fi}-\omega_d},
\end{aligned}\tag{12}$$
where $\omega_{fi}\equiv\omega_f-\omega_i$. We dropped the first term, as is customary, in favor of the second term, which dominates near the resonant frequency $\omega_{fi}\approx\omega_d$ [59].
Equation (7) obtains the same result. Computing
$$\mathcal{F}\{V(t)\}\propto V_0\left[\delta(\omega-\omega_d)+\delta(\omega+\omega_d)\right],\tag{13}$$
and dropping the $\delta(\omega+\omega_d)$ term for the same reason as above, gives
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(1)}&=\frac{1}{2\pi\,i\hbar}\,V_{fi}\,T\left[e^{i\omega T/2}\,\mathrm{sinc}(\omega T/2)\circledast\delta(\omega-\omega_d)\circledast\delta(\omega-\omega_i)\right]\Big|_{\omega=\omega_f}\\
&=\frac{1}{2\pi\,i\hbar}\,V_{fi}\,T\,e^{i(\omega_f-\omega_i-\omega_d)T/2}\,\mathrm{sinc}\!\big((\omega_f-\omega_i-\omega_d)T/2\big),
\end{aligned}\tag{14}$$
which is the standard result (up to constant factors).
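The finite-time factor in Equations (12) and (14) is easy to verify numerically: the integral $\int_0^T e^{i\Delta t}\,dt$ equals $T\,e^{i\Delta T/2}\,\mathrm{sinc}(\Delta T/2)$ exactly. A minimal sketch with hypothetical values of the detuning and window:

```python
import numpy as np

# Check that int_0^T exp(i*Delta*t) dt = T * exp(i*Delta*T/2) * sinc(Delta*T/2)
T, delta = 50.0, 0.3          # window length and detuning omega_fi - omega_d (illustrative)

t = np.linspace(0.0, T, 200001)
y = np.exp(1j * delta * t)
direct = np.sum((y[:-1] + y[1:]) / 2) * (t[1] - t[0])   # trapezoidal integration

closed = T * np.exp(1j * delta * T / 2) * np.sinc(delta * T / (2 * np.pi))
print(direct, closed)          # agree to the accuracy of the numerical integration
```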

2.2. Decoupling the Second-Order TDSE through RFT

Having examined the familiar first-order result using recursive Fourier transform methods, we now derive our second main result: an expression for the second-order term in the TDSE expansion.
The integrals for the second-order amplitudes are more complicated because the upper limit of integration for the nested integral is the integration parameter for the outer integral t 1 . Starting from Equation (1), by inserting a discrete basis of equally spaced states, the second-order transition amplitude is
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(2)}&=\frac{1}{(i\hbar)^2}\sum_k\sum_i\int_0^T dt_1\int_0^{t_1}dt_2\,\langle\omega_f|\hat V_I(t_1)|\omega_k\rangle\langle\omega_k|\hat V_I(t_2)|\omega_i\rangle\,c_i(0)\\
&=\frac{1}{(i\hbar)^2}\int_0^T dt_1\left\{e^{i\omega_f t_1}\,V(t_1)\sum_k e^{-i\omega_k t_1}\,V_{fk}\int_0^{t_1}dt_2\,e^{i\omega_k t_2}\,V(t_2)\sum_i e^{-i\omega_i t_2}\,V_{ki}\,c_i(0)\right\}
\end{aligned}$$
The integrals are therefore coupled, and the method in Section 2.1 must be modified. This is a Dyson series, which Dyson decoupled by introducing the time-ordering operator; that approach is used widely in quantum field theory [3].
Here, the integrals will be decoupled in a new way in the following four steps. We assume that the spectrum of energy eigenstates $\omega_k$ is discrete. For simplicity, we consider only the case in which they are equally spaced, that is, a simple harmonic oscillator. Then, we can write
$$\omega_k=k\,\omega^{(0)},\tag{15}$$
for integers k.
  • Step 1. Apply the convolution theorem to the nested integral
The limits of integration of the nested integral are extended to infinity, using a rectangular mask, as in Equation (4),
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{(i\hbar)^2}\int_0^T dt_1\,&e^{i\omega_f t_1}\,V(t_1)\sum_k e^{-i\omega_k t_1}\,V_{fk}\\
&\cdot\int_{-\infty}^{\infty}dt_2\,e^{i\omega_k t_2}\,\mathrm{rect}\!\left(\frac{t_2}{t_1}-\frac{1}{2}\right)V(t_2)\sum_i e^{-i\omega_i t_2}\,V_{ki}\,c_i(0)
\end{aligned}\tag{16}$$
Because we truncated the signal using a r e c t ( t ) mask, this step was exact. The integrals are still coupled via t 1 , but the coupling now parameterizes the width of the mask rather than the integration domain.
By following the steps in Equation (4), we can write each factor in the integrand of the second line of Equation (16) in the frequency domain,
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{2\pi(i\hbar)^2}\int_0^T dt_1\,e^{i\omega_f t_1}\,V(t_1)\sum_k\sum_i &e^{-i\omega_k t_1}\,V_{fk}\,V_{ki}\,c_i(0)\\
\cdot\,\mathcal{F}_{t_2\to\omega_k}\Big\{\mathcal{F}^{-1}_{\omega\to t_2}\!\left\{t_1\,e^{i\omega t_1/2}\,\mathrm{sinc}(\omega t_1/2)\right\}&\cdot\mathcal{F}^{-1}_{\omega\to t_2}\!\left\{\tilde V(\omega)\right\}\,\mathcal{F}^{-1}_{\omega\to t_2}\!\left\{\delta(\omega-\omega_i)\right\}\Big\},
\end{aligned}\tag{17}$$
and apply the convolution theorem,
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{2\pi(i\hbar)^2}\int_0^T dt_1\,e^{i\omega_f t_1}\,V(t_1)\sum_k\sum_i &e^{-i\omega_k t_1}\,V_{fk}\,V_{ki}\,c_i(0)\\
&\cdot\left(t_1\,e^{i(\omega-\omega_i)t_1/2}\,\mathrm{sinc}\!\big((\omega-\omega_i)t_1/2\big)\circledast\tilde V(\omega)\right)\Big|_{\omega=\omega_k}
\end{aligned}\tag{18}$$
In evaluating the Fourier transforms, we have transformed bases from the original parameter of integration, $t_2$, to $\omega$ and then to $\omega_k$, an intermediate basis of energy states. Note that the expression inside the parentheses on the last line of Equation (18) is a continuous distribution in the dummy parameter $\omega$, evaluated at the specific value $\omega=\omega_k$ after performing the convolution.
The nested integral is now a convolution in ω -space, but the s i n c function’s width depends on t 1 , which is coupled to the outer integral. How do we compute a convolution of a signal whose shape is changing as t 1 is integrated over?
  • Step 2. Discretize the integral over t 1 as a Riemann sum and move it inside the sum over k and i
It is easier to handle Equation (18) by writing the integral over $t_1$ as a Riemann sum of step size $\Delta T$ and rearranging the sums. (Switching the order of a sum and an integral in an infinite series can, in general, affect the convergence of the series, but for our purposes we only examine the second-order expansion; this poses the same limitation on validity as other perturbative approaches such as Feynman diagrams.)
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{2\pi(i\hbar)^2}\sum_k\sum_i V_{fk}\,V_{ki}\,c_i(0)\sum_{n=n_i}^{n_f}\Delta T\;&e^{i\omega_f n\Delta T}\,V(n\Delta T)\,e^{-i\omega_k n\Delta T}\\
&\cdot\left(n\Delta T\,e^{i(\omega-\omega_i)n\Delta T/2}\,\mathrm{sinc}\!\big((\omega-\omega_i)n\Delta T/2\big)\circledast\tilde V(\omega)\right)\Big|_{\omega=\omega_k}
\end{aligned}\tag{19}$$
Because the second line is a distribution in the frequency domain evaluated at a specific point, it is simply a c-number for each term in the Riemann sum.
  • Step 3. Allow the variation over time to vary the width of the distribution $\mathrm{sinc}(\omega\,n\Delta T)$
Here is the central insight to decouple the integrals. For each step in the Riemann sum over n (coupling variable), we identify the expression in the second row of Equation (19), in the continuous limit n Δ T t 1 , as having the form of an “impulse response” (in the time domain),
$$h_{ki}[t_1]\equiv\left.e^{i(\omega-\omega_i)t_1/2}\,\frac{\sin\!\big((\omega-\omega_i)t_1/2\big)}{(\omega-\omega_i)/2}\circledast\tilde V(\omega)\right|_{\omega=\omega_k}\tag{20}$$
Equation (20) is the impulse response of the system to a perturbation of duration t 1 , the nested integration variable. The intermediate frequency ω k is defined in Equation (15).
  • Step 4. Apply the convolution theorem to the outer integral
Now, we can change the Riemann sum back to an integral over t 1 . Crucially, h k i , which is an explicit distribution in ω -space, appears inside an integral over time t 1 . We can therefore interpret it as a function of time rather than frequency. We can now repeat the earlier technique of extending the integration domain in the first-order to ± and inserting a rectangular function of width T,
$$\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{2\pi(i\hbar)^2}\sum_k\sum_i V_{fk}\,V_{ki}\,c_i(0)\int_{-\infty}^{\infty}dt_1\,e^{i\omega_f t_1}\,\mathrm{rect}\!\left(\frac{t_1}{T}-\frac{1}{2}\right)V(t_1)\,e^{-i\omega_k t_1}\,h_{ki}[t_1].\tag{21}$$
The effect of the outer integral over t 1 on the nested integral is to vary the width t 1 of the impulse response across the duration of the measurement window from 0 T , and sample it at ω k to generate h k i [ t 1 ] (see Figure 2).
Because of Step 3, everything inside the t 1 integral in Equation (21) can be treated as a distribution in t 1 , and the convolution theorem can be used again. The t 1 integral becomes a Fourier transform by explicitly writing each factor in the ω -domain,
$$\mathcal{F}_{t_1\to\omega_f}\Big\{\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{T\,e^{i\omega T/2}\,\mathrm{sinc}(\omega T/2)\right\}\cdot\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\tilde V(\omega)\right\}\,\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\delta(\omega-\omega_k)\right\}\cdot\mathcal{F}^{-1}_{\omega\to t_1}\!\left\{\tilde H_{ki}(\omega)\right\}\Big\},\tag{22}$$
where $\tilde H_{ki}(\omega)\equiv\mathcal{F}_{t_1\to\omega}\{h_{ki}[t_1]\}$ is the Fourier transform of the impulse response, also called the amplitude transfer function (see Figure 3).
The final ω -domain expression for second-order transition amplitude from i f is
$$\langle\omega_f|\psi(T)\rangle^{(2)}=\sum_k\sum_i d_{ki}\,T\left[e^{i\omega_f T/2}\,\mathrm{sinc}\!\left(\frac{\omega_f T}{2}\right)\circledast\tilde V(\omega_f)\circledast\delta(\omega_f-\omega_k)\circledast\tilde H_{ki}(\omega_f)\right]\tag{23}$$
where
$$\tilde H_{ki}(\omega)=\mathcal{F}_{t_1\to\omega}\left\{\left.e^{i(\omega-\omega_i)t_1/2}\,\frac{\sin\!\big((\omega-\omega_i)t_1/2\big)}{(\omega-\omega_i)/2}\circledast\tilde V(\omega)\right|_{\omega=\omega_k}\right\}\tag{24}$$
For notational simplicity, we have defined $d_{ki}\equiv c_i(0)\,V_{fk}\,V_{ki}/(2\pi i\hbar)^2$.
This is the second main result, expressing the second-order contribution to the transition amplitude owing to an arbitrary time-limited perturbation of a system with evenly spaced energy eigenkets | ω k . A comparison between second order and first order (Equation (7)) distributions is given in Figure 4.
A detailed analysis of Equations (23) and (24) has been placed in Appendix C.
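To make the structure of Equations (23) and (24) concrete, the weak-potential limit $\tilde V(\omega)=\delta(\omega)$ discussed in Appendix C.1 can be evaluated directly: each intermediate state k contributes two copies of the phase-shifted sinc kernel, one at $\omega_i$ and one at $\omega_k$, weighted by $1/\omega_{ki}$. The sketch below (NumPy; all parameters and matrix elements are illustrative placeholders, and the $k=i$ term is handled separately as in Equation (A3)) reproduces the strong central peak with decaying wings shown in Figure 4.

```python
import numpy as np

# Second-order amplitude of Eq. (23) in the weak-potential limit V~(w) = delta(w):
# each k contributes shifted sinc copies at w_i and w_k (Eq. (A2)).
T, w0, w_i, kmax = 30.0, 1.0, 0.0, 25      # window, level spacing, initial state, cutoff
omega = np.linspace(-30, 30, 6001)

def kernel(center):
    """Phase-shifted sinc of the measurement window, centred on `center` (cf. Eq. (7))."""
    x = omega - center
    return T * np.exp(1j * x * T / 2) * np.sinc(x * T / (2 * np.pi))

amp2 = np.zeros_like(omega, dtype=complex)
for k in range(-kmax, kmax + 1):
    w_k, w_ki = k * w0, k * w0 - w_i
    if w_ki == 0:                          # k = i term, Eq. (A3)
        amp2 += kernel(w_i)
        continue
    # V_fk and V_ki are set to 1 here; the 1/w_ki weight comes from Eq. (A2)
    amp2 += (kernel(w_i) - kernel(w_k)) / (1j * w_ki)

# Strong peak at w_i, reinforced once per k, with decaying wings at the harmonics w_k
print(np.abs(amp2).max(), np.abs(amp2)[np.argmin(np.abs(omega - 3 * w0))])
```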

2.2.1. Example: Second-Order Harmonic Perturbation Golden Rule

To illustrate the results of Section 2.2, we compare the recursive Fourier transform method with the standard second-order approach [60,61].
Consider the ramped up oscillating potential,
$$V(t)=e^{\epsilon t}\,e^{-i\omega_d t},\tag{25}$$
where ϵ is a small positive constant which ensures ramp up of the potential from t = , and ω d is the driving frequency of the potential. This can model, for instance, light interacting with an atom [57].
Via the traditional application of the TDSE at second-order, we integrate the potential twice (Equation (1)) to obtain
$$\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{(i\hbar)^2}\,\frac{e^{i(\omega_f-\omega_i-2\omega_d)T}\,e^{2\epsilon T}}{\omega_f-\omega_i-2\omega_d-2i\epsilon}\sum_k\frac{V_{fk}\,V_{ki}}{\omega_k-\omega_i-\omega_d-i\epsilon}.\tag{26}$$
This can be interpreted as an amplitude for a particular transition through an intermediate state k, summed over all such possible paths (see [60]). Taking the time-derivative of the squared amplitude in the small ϵ limit results in an expression for the transition rate,
$$\lim_{\epsilon\to 0}\frac{d}{dT}\left|\langle\omega_f|\psi(T)\rangle^{(2)}\right|^2=\frac{1}{\hbar^2}\left|\sum_k\frac{V_{fk}\,V_{ki}}{\omega_k-\omega_i-\omega_d-i\epsilon}\right|^2\delta(\omega_f-\omega_i-2\omega_d)\tag{27}$$
where the δ -function comes from the small ϵ limit of
$$\lim_{\epsilon\to 0}\frac{2\epsilon}{(\omega_f-\omega_i-2\omega_d)^2+\epsilon^2}=\delta(\omega_f-\omega_i-2\omega_d).\tag{28}$$
This is the second-order version of Fermi’s Golden Rule.
A similar expression can be achieved using recursive Fourier transforms. Starting with Equations (23) and (24), the second order transition amplitude is
$$\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{1}{(i\hbar)^2}\sum_k V_{fk}\,V_{ki}\;t\left[e^{i\omega_f t/2}\,\mathrm{sinc}(\omega_f t/2)\circledast\tilde V(\omega_f)\circledast\delta(\omega_f-\omega_k)\circledast\tilde H_{ki}(\omega_f)\right],\tag{29}$$
where
$$\tilde H_{ki}(\omega_f)=\mathcal{F}_{t_1\to\omega_f}\left\{\left.e^{i(\omega-\omega_i)t_1/2}\,\frac{\sin\!\big((\omega-\omega_i)t_1/2\big)}{(\omega-\omega_i)/2}\circledast\tilde V(\omega)\right|_{\omega=\omega_k}\right\}.\tag{30}$$
In Equation (29), the convolution is over ω f , while in Equation (30), the convolution is over ω . For ϵ > 0 , the Fourier transform of the potential (Equation (25)) is
$$\tilde V(\omega)=\mathcal{F}_{t\to\omega}\left\{e^{-\epsilon|t|}\,e^{-i\omega_d t}\right\}=\frac{2\epsilon}{(\omega-\omega_d)^2+\epsilon^2},\tag{31}$$
which, in the limit ϵ 0 , becomes V ˜ ( ω ) = δ ( ω ω d ) (see Equation (28)). Then, the transfer function in Equations (A1) and (A3) becomes
$$\tilde H_{ki}(\omega_f)\approx\frac{2\pi}{i(\omega_k-\omega_i-\omega_d)}\Big[\delta\big(\omega_f+(\omega_k-\omega_i-\omega_d)\big)-\delta(\omega_f-\omega_d)\Big].\tag{32}$$
This creates a series of descending harmonic spikes as in Figure A4, but in this case centered on ω d . This is precisely the content of the summation terms in Equation (26), but the new calculation exposes the hidden structure in the harmonics, as described in Appendix C.
Inserting Equations (31) and (32) into Equation (29) obtains
$$\begin{aligned}
\langle\omega_f|\psi(T)\rangle^{(2)}&=\frac{1}{(i\hbar)^2}\sum_k V_{fk}\,V_{ki}\;t\left[e^{i\omega_f t/2}\,\mathrm{sinc}(\omega_f t/2)\circledast\delta(\omega_f-\omega_d)\circledast\delta(\omega_f-\omega_k)\circledast\frac{2\pi}{i(\omega_k-\omega_i-\omega_d)}\Big(\delta\big(\omega_f+(\omega_k-\omega_i-\omega_d)\big)-\delta(\omega_f-\omega_d)\Big)\right]\\
&=\frac{1}{(i\hbar)^2}\sum_k\frac{2\pi\,t}{i}\,\frac{V_{fk}\,V_{ki}}{\omega_k-\omega_i-\omega_d}\Big(e^{i(\omega_f-\omega_i-2\omega_d)t/2}\,\mathrm{sinc}\!\big((\omega_f-\omega_i-2\omega_d)t/2\big)-e^{i(\omega_f-\omega_k-2\omega_d)t/2}\,\mathrm{sinc}\!\big((\omega_f-\omega_k-2\omega_d)t/2\big)\Big)
\end{aligned}\tag{33}$$
In the limit of large t, the $\mathrm{sinc}$ function and its pre-factor approach a delta function, $\delta(\omega_f-\omega_i-2\omega_d)$. In the limit of small $\epsilon$, the factor $e^{2\epsilon t}\to 1$.
The time dependence of Equation (33) is the same as in Equation (27), only made more explicit by expressing the result in terms of frequency rather than in terms of energy. The time derivative (of the amplitude, in this case) is constant, as in Equation (27).
Thus, the new method given in Equation (29) and the traditional method given in Equation (27) give similar results for the measurable transition probability under the given conditions.

3. Discussion

The recursive Fourier transform method for decoupling the Dyson series has application in both experiment and theory. A few possibilities are discussed.

3.1. Bosonic Sampling and Quantum Computation

Bosonic sampling [23] with indistinguishable photons is a computational task believed to be intractable for classical computers; performing it at scale would therefore demonstrate so-called quantum supremacy.
Tamma and Laibacher explain that “for a given interferometric network, the interference of all the possible multi-photon detection amplitudes…depends only on the pairwise overlap of the spectral distributions of the single photons” [62]. They emphasize extracting quantum information from the “spectral correlation landscapes” of photons [24]. So characterizing the frequency spectrum of a single photon is an essential task.
Various physical properties are related to the spectrum of the photon. Further elaborating, Tamma and Laibacher assert that their results reveal the "ability to zoom into the structure of the frequency correlations to directly probe the spectral properties of the full N-photon input state…" [24], where "single-photon states"
$$|\psi\rangle:=\int_0^{\infty}d\omega\;c(\omega)\,e^{i\omega\Delta t_0}\,\hat a^{\dagger}(\omega)\,|0\rangle$$
are characterized by a spectral distribution c ( ω ) [24].
The indistinguishability of photon pairs, time delays between photons, generation of ultrashort photons, and the probability of detection in a multi-photon experiment can all be related to the spectral distribution.
The recursive Fourier transform approach we explored in this study allows us to calculate the spectral distributions c ( ω ) of photons with greater precision and efficiency, potentially leading to improvements in the above areas of research.

3.2. Quantum Field Theory

Dyson decoupled the nested integrals in higher-order TDSE (Equation (1)) by introducing a time-ordering operator that places all operators in order of increasing time from right to left. Then, TDSE can be written as a complex exponential,
$$\begin{aligned}
U_{fi}&=1-\frac{i}{\hbar}\int_{t_i}^{t_f}dt\,V_I(t)+\left(\frac{-i}{\hbar}\right)^{2}\int_{t_i}^{t_f}dt_1\int_{t_i}^{t_1}dt_2\,V_I(t_1)\,V_I(t_2)+\mathcal{O}(V_I^3)\\
&=1-\frac{i}{\hbar}\int_{t_i}^{t_f}dt\,V_I(t)+\frac{1}{2}\left(\frac{-i}{\hbar}\right)^{2}\int_{t_i}^{t_f}dt_1\int_{t_i}^{t_f}dt_2\,\mathcal{T}\{V_I(t_1)\,V_I(t_2)\}+\mathcal{O}(V_I^3)\\
&=\mathcal{T}\exp\!\left(-\frac{i}{\hbar}\int_{t_i}^{t_f}dt\,V_I(t)\right)
\end{aligned}$$
where $V_I$ is expressed in the interaction picture of Dirac. From this expression, the usual field-theory methods for calculating field correlation functions are derived.
In this study, we accomplished decoupling in a novel way, with no appeal to time ordering. This may be a more efficient method for directly calculating higher-order correlation functions or Feynman amplitudes by using convolution. It also removes the asymptotic time assumption, because the limited time intervals are computed exactly, without approximating the integration domain to be infinite.
If the recursive Fourier transform method allows for efficient calculation of higher order terms, one may be able to relax the constraint for small perturbations and allow for a broader range of potential strengths, moving out of the regime of weak coupling forces.

3.3. Bardeen Tunneling

Bardeen investigated electron tunneling across a voltage-biased junction between two conductors. Following [37], the tunneling potential between the tip and sample of a scanning tunneling microscope is described as $V(t)\propto e^{\eta t/\hbar}$, and after making several assumptions (for instance, keeping only first-order terms and assuming a small tunneling current), the solution to the TDSE is
$$c_f(t)=M_{if}\,\frac{e^{i(\omega_f-\omega_i+i\eta)t}}{\omega_f-\omega_i+i\eta},$$
where $M_{if}$ is the matrix element of the Hamiltonian perturbation. The energies $\hbar\omega_f$ and $\hbar\omega_i$ correspond to the sample and tip of the electron microscope, respectively. Electrons come in with energy $\hbar\omega_i$, and in a junction biased at voltage $V_0$, $\omega_i\to\omega_i+eV_0/\hbar$.
Next one typically estimates the tunneling current as the time rate of change of the tunneling probability, in the limit that η 0 .
$$I(\omega_f)=\lim_{\eta\to 0}\frac{d}{dt}\left|c_f(t)\right|^2=|M_{if}|^2\,\frac{d}{dt}\!\left[\frac{e^{2\eta t}}{(\omega_f-\omega_i-eV_0/\hbar)^2+\eta^2}\right]=2\pi\,|M_{if}|^2\,\delta(\omega_f-\omega_i-eV_0/\hbar),$$
where we used the identity $\lim_{\eta\to 0}\,\eta/(\omega^2+\eta^2)=\pi\,\delta(\omega)$.
Bardeen’s formulation is valid under certain conditions [38]. This result is useful because it describes tunneling in terms of time rather than space.
A related calculation can be executed using Equation (7) and identifying $\tilde V=\delta(E)$, since the potential is constant in time, and $\hbar\omega_i=E_i+eV_0$, since the initial state includes an offset due to the bias potential.
$$\begin{aligned}
c_f(t)&=\frac{2}{i\hbar}\,V_{if}\left[e^{i\omega t/2}\,\frac{\sin(\omega t/2)}{\omega}\circledast\delta(\omega-eV_0/\hbar)\circledast\delta(\omega-\omega_i)\right]\Big|_{\omega=\omega_f}\\
&=\frac{2}{i\hbar}\,V_{if}\;e^{i(\omega_f-\omega_i-eV_0/\hbar)t/2}\,\frac{\sin\!\big((\omega_f-\omega_i-eV_0/\hbar)t/2\big)}{\omega_f-\omega_i-eV_0/\hbar}
\end{aligned}$$
With increasing time, the sinc factor in Equation (7) converges to a $\delta$-function.
In the first case, we find that the tunneling current is non-zero when ω f = ω i + e V 0 . In the second case, we find that the transition amplitude is non-zero under the same condition, which has the same physical meaning.
The recursive Fourier transform approach efficiently determines the probability amplitude for each outgoing energy mode. This method could permit a more detailed description of the energy transitions across the tunneling barrier, including for arbitrarily short time-scales. For instance, in tunneling across a voltage-biased barrier, the energy profile Equation (7) correlates with the excess kinetic energy profile of an electron ensemble post-barrier-crossing. The ensemble’s velocity profile could be measured.
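The sharpening of the finite-time amplitude into the $\delta$-function selection rule $\omega_f=\omega_i+eV_0/\hbar$ can be seen directly. A minimal sketch ($\hbar=1$, all values illustrative) that tracks the width of the central lobe of $|c_f|^2$ as the measurement time grows:

```python
import numpy as np

# Finite-time Bardeen amplitude: width of the central lobe of |c_f|^2 shrinks as ~4*pi/t
w_i, eV0 = 0.0, 2.0                      # initial frequency and bias offset (hbar = 1)
w_f = np.linspace(-2.0, 6.0, 4001)
detune = w_f - w_i - eV0

for t in (5.0, 50.0, 500.0):
    amp = np.exp(1j * detune * t / 2) * (t / 2) * np.sinc(detune * t / (2 * np.pi))
    prob = np.abs(amp) ** 2
    above = w_f[prob >= prob.max() / 2]  # side lobes stay below half maximum
    print(f"t = {t:6.1f}:  FWHM of |c_f|^2 ~ {above[-1] - above[0]:.4f}")
```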

Example: 2nd Order Bardeen Tunneling

Although a second-order expression is not typically attempted with Bardeen’s approach, the recursive Fourier transform approach to the general TDSE allows us to guess at a second-order result for Bardeen tunneling.
First, we determine the transfer function for second-order Bardeen tunneling. Again using $\tilde V=\delta(E)$ and $\hbar\omega_i=E_i+eV_0$, Equation (24) yields
$$\begin{aligned}
\tilde H_{ki}(\omega_f)&=\mathcal{F}_{t_1\to\omega_f}\left\{\left.e^{i(\omega-eV_0/\hbar)t_1/2}\,\frac{\sin\!\big((\omega-eV_0/\hbar)t_1/2\big)}{\omega-eV_0/\hbar}\circledast\delta(\omega)\right|_{\omega=\omega_k}\right\}\\
&=\mathcal{F}_{t_1\to\omega_f}\left\{e^{i(\omega_k-eV_0/\hbar)t_1/2}\,\frac{\sin\!\big((\omega_k-eV_0/\hbar)t_1/2\big)}{\omega_k-eV_0/\hbar}\right\}
\end{aligned}$$
Note that ω is the convolution parameter. As usual, indices i , f , k correspond to initial, final, and intermediate energy states, respectively. Performing the Fourier transform results in
$$\tilde H_{ki}(\omega_f)=\frac{2\pi}{i(\omega_k-eV_0/\hbar)}\Big[\delta\big(\omega_f+(\omega_k-eV_0/\hbar)\big)-\delta(\omega_f)\Big].$$
Following the steps in Appendix C.1, we write
$$\delta(\omega_f-\omega_k)\circledast\tilde H_{ki}(\omega_f)=\frac{2\pi}{i(\omega_k-eV_0/\hbar)}\Big[\delta(\omega_f-eV_0/\hbar)-\delta(\omega_f-\omega_k)\Big],$$
so using Equation (23), the second-order Bardeen tunneling amplitude is
$$\langle\omega_f|\psi(T)\rangle^{(2)}=\frac{2\pi\,T}{i}\,\hat V_{fk}\,\hat V_{ki}\sum_k\frac{1}{\omega_k-eV_0/\hbar}\left[e^{i\omega_f T/2}\,\mathrm{sinc}\!\left(\frac{\omega_f T}{2}\right)\circledast\Big(\delta(\omega_f-eV_0/\hbar)-\delta(\omega_f-\omega_k)\Big)\right],\tag{38}$$
where V ^ a b denotes the matrix elements of the potential operator, and the convolution variable is ω f .
Equation (38) incorporates two series of complex $\mathrm{sinc}$ functions. The first series is centered at $\omega_f=eV_0/\hbar$. The distributions in the second series are centered at $\omega_f=\omega_k$, descending in amplitude away from $eV_0/\hbar$. See Figure A4.
This represents the distribution of kinetic energies of tunneled electrons described by Equation (38). This result is significant as a transient effect for small times only. It illustrates the usefulness of the RFT method for extending existing methods of calculation.

3.4. Joint Spectral Amplitude Function

Recent experiments in quantum optics [63,64,65,66,67,68] rely on the generation of entangled photons characterized by a joint spectral amplitude function (JSA), F ( ω s , ω i ) , such that
$$|\Psi(\omega_s,\omega_i)\rangle\propto\int d\omega_s\,d\omega_i\;F(\omega_s,\omega_i)\,|\omega_s\rangle|\omega_i\rangle$$
To be concrete, consider 4-wave mixing with signal, idler, and pump photons given by
$$\begin{aligned}
E_{s(i)}(t,z)&=\int d\omega_{s(i)}\;\hat a^{\dagger}_{s(i)}\,e^{-ik(\omega_{s(i)})z+i\omega_{s(i)}t}\\
E_{p}(t,z)&=e^{i\gamma Pz}\int d\omega_p\;e^{-\frac{(\omega_p-\omega_0)^2}{2\sigma^2}}\,e^{ik(\omega_p)z-i\omega_p t}
\end{aligned}$$
In this process, two pump photons at frequency ω 0 annihilate to generate two outgoing photons (signal and idler) at frequencies ω 0 ± Δ for some frequency detuning value Δ . This is a statement of energy conservation. The photons are assumed to be in a non-linear, dispersive medium with wave vector k ( ω ) (the non-linearity is contained in the expression with γ , but is not important for this derivation). At first order, the Schrödinger equation gives
$$\begin{aligned}
|\Psi(\omega_s,\omega_i)\rangle&\propto\int_0^T dt\int_{-L}^{0}dz\;E_s\,E_i\,E_p\,E_p\,|0\rangle\\
&\propto\int d\omega_s\,d\omega_i\int_0^T dt\int_{-L}^{0}dz\;e^{-i(k_s(\omega_s)z-\omega_s t)}\,e^{-i(k_i(\omega_i)z-\omega_i t)}\,E_p\,E_p\;|\omega_s\rangle|\omega_i\rangle\\
&\equiv\int d\omega_s\,d\omega_i\;F(\omega_s,\omega_i)\,|\omega_s\rangle|\omega_i\rangle
\end{aligned}\tag{40}$$
where F ( ω s , ω i ) is the JSA.
The wave vector mismatch (due to dispersion) is calculated from the Taylor expansion of the wave vectors,
$$\Delta k=2k(\omega_p)-k(\omega_s)-k(\omega_i)=(\omega_p-\omega_0)^2\,k''-\Delta k(\omega_s,\omega_i)$$
where
$$\Delta k(\omega_s,\omega_i)\equiv\frac{k''}{4}\,(\omega_s-\omega_i)^2.$$
The JSA can be written as
$$\begin{aligned}
F(\omega_s,\omega_i)&=\int_{-L}^{0}dz\;e^{-i(k(\omega_s)+k(\omega_i))z}\int_0^T dt\;e^{i(\omega_s+\omega_i)t}\left[e^{i\gamma Pz}\int d\omega_p\;e^{-\frac{(\omega_p-\omega_0)^2}{2\sigma^2}}\,e^{ik(\omega_p)z-i\omega_p t}\right]^2\\
&=\int_{-L}^{0}dz\;e^{i(\Delta k(\omega_s,\omega_i)+2\gamma P)z}\int_0^T dt\;e^{i(\omega_s+\omega_i)t}\left[\int d\omega_p\;e^{-\frac{(\omega_p-\omega_0)^2}{2\sigma^2}(1-2ik''\sigma^2 z)}\,e^{-i\omega_p t}\right]^2,
\end{aligned}$$
For Gaussian pump photons, the integral can be evaluated in closed form to obtain an expression (to second order) for the JSA [63],
$$\begin{aligned}
F(\omega_s,\omega_i)&=\int_{-L}^{0}dz\;\frac{1}{\sqrt{1-2ik''\sigma_p^2\,z}}\;e^{i(\Delta k+2\gamma P)z}\;e^{-\frac{(\omega_s+\omega_i-2\omega_0)^2}{4\sigma_p^2}}\\
&\approx e^{i(\Delta k+2\gamma P)L/2}\;\mathrm{sinc}\!\big((\Delta k+2\gamma P)L/2\big)\;e^{-\frac{(\omega_s+\omega_i-2\omega_0)^2}{4\sigma_p^2}},
\end{aligned}\tag{41}$$
where in the last step, the radical is set to unity using the fiber approximation $|k''|\ll 1/(\sigma_p^2 L)$.
Alternately, this expression can be derived using the recursive Fourier transform process developed in this paper. Rewrite the integrals in Equation (40) as
$$F(\omega_s,\omega_i)=\int dz\;\mathrm{rect}\!\left(\frac{z}{L_0}+\frac{1}{2}\right)e^{i\Delta k\,z}\int dt\;\mathrm{rect}\!\left(\frac{t}{T_0}-\frac{1}{2}\right)e^{i(\omega_s+\omega_i)t}\,e^{-2i\omega_0 t}\left[e^{i\gamma Pz}\int d\Omega\;e^{-\frac{\Omega^2}{2\sigma^2}}\,e^{\frac{i}{2}\Omega^2 k''z-i\Omega t}\right]^2$$
where $\Omega\equiv\omega_p-\omega_0$.
Using the methods of Section 2, we write
$$\begin{aligned}
F(\omega_s,\omega_i)=L_0\,T_0&\int dz\;e^{i\Delta k z}\;\mathcal{F}^{-1}_{k\to z}\!\left\{e^{ikL_0/2}\,\mathrm{sinc}(kL_0/2)\right\}\,\mathcal{F}^{-1}_{k\to z}\!\left\{\delta(k+2\gamma P)\right\}\\
&\cdot\int dt\;e^{i(\omega_s+\omega_i)t}\;\mathcal{F}^{-1}_{\omega\to t}\!\left\{e^{i\omega T_0/2}\,\mathrm{sinc}(\omega T_0/2)\right\}\,\mathcal{F}^{-1}_{\omega\to t}\!\left\{\delta(\omega-2\omega_0)\right\}\left[\int d\Omega\;e^{-i\Omega t}\,\exp\!\left(-\frac{\Omega^2}{2\sigma^2}\big(1-ik''\sigma^2 z\big)\right)\right]^2
\end{aligned}$$
Performing the Fourier transform over z first results in a factor $\delta(k+\Omega^2 k''/2)$, and using the convolution theorem, we arrive at
$$F=L_0\,T_0\left[e^{i\omega T_0/2}\,\mathrm{sinc}\!\left(\frac{\omega T_0}{2}\right)\circledast_{\omega}\delta(\omega-2\omega_0)\circledast_{\omega}e^{-\frac{\omega^2}{4\sigma^2}}\;\;e^{ikL_0/2}\,\mathrm{sinc}\!\left(\frac{kL_0}{2}\right)\circledast_{k}\delta(k+2\gamma P)\circledast_{k}\delta\!\left(k+\frac{\omega^2 k''}{2}\right)\Big|_{k=\Delta k(\omega_s,\omega_i)}\right]_{\omega=\omega_s+\omega_i}\tag{42}$$
The domain of convolution is specified with a subscript, and the distributions are evaluated at the given expressions of signal and idler frequencies.
To compare Equation (42) to the standard result, Equation (41), we assume asymptotic time ($T_0\to\infty$), in which case $\mathrm{sinc}(\omega T_0)\to\delta(\omega)$, so the convolution over $\omega$ reduces to the identity operation, and we assume that the pump photon dispersion, $\omega^2 k''$, is negligible. Under these conditions,
$$\begin{aligned}
F(\omega_s,\omega_i)&\approx L_0\left[\delta(\omega-2\omega_0)\circledast_{\omega}e^{-\frac{\omega^2}{4\sigma^2}}\;e^{i(\Delta k+2\gamma P)L_0/2}\,\mathrm{sinc}\!\big((\Delta k+2\gamma P)L_0/2\big)\right]_{\omega=\omega_s+\omega_i}\\
&\approx L_0\;e^{-\frac{(\omega_s+\omega_i-2\omega_0)^2}{4\sigma^2}}\;e^{i(\Delta k+2\gamma P)L_0/2}\,\mathrm{sinc}\!\big((\Delta k+2\gamma P)L_0/2\big)
\end{aligned}$$
which matches Equation (41).
The newly-derived expression allows for short-time calculations using easily computable routines. In place of a radical in the denominator, the higher order effects of the pump are incorporated into the convolution operations.
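For reference, the closed-form JSA of Equation (41), a Gaussian energy-conservation ridge multiplied by a sinc phase-matching factor, is straightforward to evaluate on a grid. The sketch below uses the quadratic mismatch $\Delta k(\omega_s,\omega_i)=(k''/4)(\omega_s-\omega_i)^2$ from this section; all fiber parameters are hypothetical illustration values.

```python
import numpy as np

# Closed-form joint spectral amplitude (Eq. (41)) on a (w_s, w_i) grid
w0, sigma_p = 1.0, 0.05                  # pump centre frequency and bandwidth
L0, gammaP, kpp = 100.0, 1e-3, 0.02      # interaction length, nonlinear phase, k''

ws = np.linspace(w0 - 0.3, w0 + 0.3, 401)
WS, WI = np.meshgrid(ws, ws)

delta_k = (kpp / 4.0) * (WS - WI) ** 2                     # quadratic wave-vector mismatch
phase = (delta_k + 2 * gammaP) * L0 / 2
jsa = (np.exp(1j * phase) * np.sinc(phase / np.pi)          # phase-matching factor
       * np.exp(-((WS + WI - 2 * w0) ** 2) / (4 * sigma_p ** 2)))  # pump envelope

print(np.abs(jsa).max())                 # |F| peaks along the ridge w_s + w_i = 2*w0
```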

3.5. Quantum Zeno Dynamics

The quantum Zeno effect would be a potentially fruitful experimental case for verifying the utility of the new method for calculating second-order amplitudes. This effect can be used, for instance, to generate entanglement by suppressing first-order terms relative to second-order terms in the Schrödinger equation [56]. Nodurft et al. use Fermi's Golden Rule to demonstrate a scheme in which a photonic waveguide coupled to a series of perturbing atoms eliminates outcomes in which both photons appear at the same output port, effectively entangling the photons by making the wave function inseparable. They write, "this imbalance between single and two photon absorption rates facilitates the Zeno effect by suppressing terms where both a left and right photon are found, but not when a single photon is found". Performing this calculation to higher precision using the RFT method presented here may provide more control and efficiency in creating entangled states, and the scheme would be an interesting test case for verifying that this method (see Section 2.1.2 and Section 2.2.1) does indeed improve upon Fermi's Golden Rule.
Other relevant potential applications of this calculation applied to the Zeno effect include error correction in quantum computers [69] and construction of quantum gates [70].

3.6. Solitons and Non-Linear Cases

The method provided here can be directly applied to any dynamical case in which the potential function has a Fourier transform. Its generality may make it particularly helpful for nonlinear versions of the Schrödinger equation. As an example, Bayindir et al. study q-deformed Rosen–Morse potentials to analyze the time evolution of solitons [71]. They utilize a spectral method to determine the stationary states of the system and the Runge–Kutta (RK) method to evolve the stationary states in time.
Non-linear terms are hard to handle because they involve convolution in the frequency domain, due to the convolution theorem. One typically avoids this by iterating them in the spatial domain, whereas the linear terms are iterated in the frequency domain. In the method presented here, the convolutions in the frequency domain are taken into account explicitly, rendering the spatial iteration of the nonlinear terms unnecessary. It seems that this method could be used as an alternative to RK to compute time evolution, readily handling linear and nonlinear terms.
It should be noted that the RK method results in a time domain wave function, whereas the methods here result in a frequency domain wave function. These can be easily compared using forward or inverse Fourier transforms.

3.7. Master Equations

When considering open quantum systems, the Lindblad equation can be used to account for dissipative or decoherence effects from the system to the environment. These effects are introduced through an additional (generally nonlinear) term in the Schrödinger equation, requiring careful treatment. There is still, however, a time integration that must be performed, which is often done using the Runge–Kutta method. The alternative method proposed here can likely be applied to that integration step, providing some utility in these cases; a careful analysis of these benefits has not been undertaken here.

3.8. Other Applications

The method proposed here, based upon Fourier transforms, has a quite general form and might be used in other scenarios. The TDSE describes the diffraction of the wave function around a small temporal perturbation. One might consider applications in the spatial domain, rederiving the usual single-slit diffraction formula and then extending it to a second-order calculation. Furthermore, in the spatial domain, one might apply the RFT technique to a tunneling barrier, for instance in a scanning tunneling microscope or in alpha decay.

4. Conclusions

In this work, we devised a novel technique that decouples nested integrals in the Dyson series for the time-dependent Schrödinger Equation (TDSE) using recursive Fourier transforms (RFT). This provides an approach which is particularly suited for computation on both classical and quantum computers.
This method shares similarities with existing multi-slice or split-operator techniques, but it is used to refine accuracy of wavefunction spectra rather than propagate a wavefunction over time. The RFT approach computes the temporal diffraction of a wavefunction under a perturbing force of finite duration. It can be used, for instance, in the characterization of single photons in cases where indistinguishability is important.
The decoupling of the integrals at second-order is achieved by shifting to the frequency domain to obtain a nested s i n c function, then interpreting the nested s i n c as a function of time while also swapping the order of operators to perform the outer time integral before the sum over energy. This varies the width of the s i n c function in the frequency domain, which can be sampled at a given frequency to extract an amplitude in the frequency domain. This allows the TDSE to be expressed as a sum of (non-nested) convolutions. We anticipate that this procedure can be iterated to higher orders.

Funding

This research received no external funding.

Data Availability Statement

The author declares that there are no data associated with this paper. Simulation code to generate figures can be supplied upon request.

Acknowledgments

The author is grateful to Jeff Butler, Richard Pham Vo, Marcin Nowakowski, Eliahu Cohen, Paul Borrill, Andrei Vazhnov, Stefano Gottardi, Daniel Sheehan, Joe Schindler, Mark Prusten, and Justin Kader for helpful comments and feedback.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RFT     Recursive Fourier transform
TDSE    Time-dependent Schrödinger equation

Appendix A. Definitions of the Fourier Transform

The following definitions of the Fourier transform and its inverse are used.
$$F(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\,e^{i\omega t}\,dt\qquad\qquad f(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}F(\omega)\,e^{-i\omega t}\,d\omega$$
$$F(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)\,e^{-ikx}\,dx\qquad\qquad f(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}F(k)\,e^{ikx}\,dk$$
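Numerically, these continuous-transform conventions can be matched to a discrete FFT by weighting with the sample spacing and the $1/\sqrt{2\pi}$ normalization. A minimal NumPy sketch (the paper's own code is in MATLAB; this example and its grid are illustrative) using a unit Gaussian, which is its own transform under this convention:

```python
import numpy as np

# Approximate the symmetric-normalization continuous transform with a DFT
N, dt = 4096, 0.01
t = (np.arange(N) - N // 2) * dt
f = np.exp(-t**2 / 2)                                     # unit-width Gaussian test signal

omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dt))
# F(w) ~ (1/sqrt(2*pi)) * sum_n f(t_n) e^{i*w*t_n} * dt  (sign irrelevant for an even signal)
F = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(f))) * N * dt / np.sqrt(2 * np.pi)

# A unit Gaussian transforms to a unit Gaussian under this normalization
print(np.allclose(F.real, np.exp(-omega**2 / 2), atol=1e-6))
```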

Appendix B. Using the Appropriate Dual Domain

In systems linked by Fourier transforms, a proper domain emphasis can sometimes be overlooked. For example, TDSE coefficients c ( t ) are written as functions of time to establish the time dependence of the wavefunction.
However, Equation (7) (Figure 1b) represents a ω -space distribution featuring an ω convolution. Time does not appear directly in this expression. Instead, T is a constant that shapes the oscillatory pattern of the distribution at a given moment. By varying T, we must recompute the convolution at each time step and then sample the distribution at point ω f to yield a meaningful transition amplitude. This requires distinguishing “integration parameters” from “coordinates”, in the sense of [18].
Consider Fermi’s Golden Rule: a bound state ω i transitions to a continuum state ω f under a driving frequency ω d , producing the transition amplitude expressed in Equation (12). By varying ω d , ω i , or ω f , Figure A1 is useful for identifying the relevant dependencies.
However, Figure A1 can also be interpreted time-wise because the s i n c function depends symmetrically on time and energy. Over time, the s i n c function (as a function of ω f i ω d ) becomes more peaked, and the image in Figure A1 is considered to be a snapshot of time. Thus, we interpret the amplitude as time-dependent, c ( t ) .
However, this interpretation misreads the proper domain. Amplitude c f i is a frequency distribution and not a time distribution. The time dependence is implicit; evolving time means updating the entire distribution, after which we can derive the frequency-dependent amplitude at that time.
Figure A1. Fermi's Golden Rule transition amplitude. There is more than one possible interpretation of this plot. (Top) For a given driving frequency, we treat the energy jump $\omega_{fi}$ as the variable. The greatest amplitude occurs when $\omega_{fi}$ matches the driving frequency $\omega_d$. (Bottom) At a later time, the resonance curve has changed, so the plot can serve to predict time evolution even though it is a distribution in $\omega$ space.
These processes are distinct. The former involves only reading a number from the graph, while the latter requires repeated graphing and sampling. The former is a function, whereas the latter is a functional.
The same reasoning applies to the usual kicked harmonic oscillator treatment (perturbed by a small Gaussian pulse, Section 2.1). The standard methods lead to the coefficient in Equation (9), which is implicitly defined by the elapsed time T but explicitly a function of ω . This can help determine the best pulse duration T to match the natural oscillator frequency ω , but it overshadows the more natural c = c ( ω ) dependence.
Each t value is a unique experiment leading to a different distribution. In varying t, Equation (9) becomes a functional by creating a configuration space for each t value.

Appendix C. Detailed Analysis

We will now examine Equations (23) and (24) in more detail.

Appendix C.1. Evaluating the Transfer Function

The form of the transfer function $\tilde H_{ki}$ can be evaluated first for the case of a negligible potential, $\tilde V(\omega)=\delta(\omega)$ (infinitely wide in the time domain). Because convolution with a $\delta$-function is the identity operation, we can evaluate $h_{ki}[t]$ explicitly and take its Fourier transform,
$$\begin{aligned}
\tilde H_{ki}&=\mathcal{F}_{t_1\to\omega}\left\{e^{i\omega_{ki}t_1/2}\,\frac{\sin(\omega_{ki}t_1/2)}{\omega_{ki}/2}\right\}\\
&=\frac{2\pi}{i\,\omega_{ki}}\Big[\delta(\omega+\omega_{ki}/2+\omega_{ki}/2)-\delta(\omega-\omega_{ki}/2+\omega_{ki}/2)\Big]\\
&=\frac{2\pi}{i\,\omega_{ki}}\Big[\delta(\omega+\omega_{ki})-\delta(\omega)\Big].
\end{aligned}\tag{A1}$$
In other words, the transfer function is composed of a series of discrete impulses spaced at integer multiples of $\omega^{(0)}$ (because $\omega_{ki}=k\,\omega^{(0)}$). See Figure 3.
Next, H ˜ k i is convolved with the other factors in Equation (24), resulting in
$$\delta(\omega-\omega_k)\circledast\tilde H_{ki}=\frac{2\pi}{i\,\omega_{ki}}\;\delta(\omega-\omega_k)\circledast\Big[\delta(\omega+\omega_{ki})-\delta(\omega)\Big]=\frac{2\pi}{i\,\omega_{ki}}\Big[\delta(\omega-\omega_i)-\delta(\omega-\omega_k)\Big]\tag{A2}$$
As shown in Figure A2 and Figure A3, each term in the sum over k contributes complex impulses at ω k and ω i .
The special case k = i must be handled separately. Here, the desirable properties of the s i n c function at the origin are required, and Equation (A1) is the Fourier transform of a constant as follows:
$$\tilde H_{00}=\mathcal{F}\{\text{constant}\}=2\pi\,\delta(\omega).\tag{A3}$$
This contributes a purely real amplitude at the origin, as shown in the middle graph of Figure A2.
Summing over all $\tilde H_{ki}$ contributions over the range $k=\pm 25$, it can be seen in Figure A4 that for $k\neq 0$, $\tilde H_{ki}$ contributes a real portion at $\omega_k=0$ which is amplified as more frequencies are included ($k\to\pm\infty$), whereas the real portion remains small and finite at every other $\omega_k$, vanishing when the distribution is normalized (top). Conversely, the imaginary portions cancel at $\omega_k=0$ but are significant everywhere else, decaying inversely with $|k|$ (middle).
An analogy can be drawn to the frequency-domain decomposition of sound signals. In the second-order calculation, the probability amplitude signal was deflected into a series of higher harmonics. Similarly, an acoustic musical instrument generates sound through the combination of a pluck (impulse) and resonant cavity that amplifies higher harmonics (impulse response). This is similar to the relationship between H ˜ k i in Equation (23) and the rest of that equation.

Appendix C.2. Stepping through the Algorithm for Transfer Function

A comparison was made between the direct integration of Equation (1) and the convolution approach in Equations (23) and (24) using MATLAB. The program begins by generating a nested impulse response, Equation (20). This has the form $\mathrm{sinc}\circledast\tilde V$ and width $t_1$, as shown in Figure 3 (far left).
Figure A2. Contributions to the second-order complex-valued "transfer function" $\tilde H_{ki}$ from the frequencies $k=-1,0,1$ (top to bottom, respectively). The cases $k=\pm 1$ contribute a real portion (solid) and an imaginary portion (dashed) at $\omega_k=\pm 1$. The case $k=0$ contributes only a real portion (solid). See Equations (A1) and (A3).
Figure A3. The frequency distributions for the $k=2,3$ contributions. (Top) The real (solid) and imaginary (dashed) parts of the complex $\mathrm{sinc}$ function in Equation (23), corresponding to the outer integral over $t_1$ in Equation (18). (2nd/3rd row) The transfer function $\tilde H_{ki}(\omega)$ in Equation (24) captures information from the nested $t_2$ integral as a series of spikes, $\delta(\omega-2\omega^{(0)})+\delta(\omega)$ (left) and $\delta(\omega-3\omega^{(0)})+\delta(\omega)$ (right). See Equation (A1). (4th row) The real (solid) and imaginary (dashed) parts of the convolution in Equation (23). (Bottom) The combined result from $k=2$ and $k=3$.
This impulse response (in $\omega$) is then varied over time across the integration limits $0<t_1<T$, generating a sequence of $\mathrm{sinc}$ graphs of varying widths. The sample value at the vertical line $\omega_k$ for each graph is stored as a new array $h_{ki}[t]$ (Figure 3, middle). For a given $\omega_k$, these samples oscillate with a frequency profile that depends on the physics of the experiment (such as the properties of the potential and the duration of the measurement window).
The Fourier transform of $h_{ki}[t]$ is $\tilde H_{ki}(\omega)$ (Figure 3, right panel). It is a series of impulses representing each intermediate contribution to the second-order amplitude (see Figure A2 and Figure A4). If the perturbation is negligible, we can write $\tilde V(\omega)=\delta(\omega)$. In this case, $\tilde H_{ki}$ is composed of $\delta$-function impulses at $\omega_k$ and $\omega_i$. If the potential is strong, other harmonics appear in this graph (see Figure A5, bottom right).
Finally, in Equation (23), H ˜ k i is convolved with a phase-shifted s i n c impulse response so that a copy of the impulse response is placed wherever H ˜ k i has a spike, as shown in Figure A3. This is performed for every possible intermediate state ω k , and the amplitude plots for each are summed. Each code loop over ω k contributes an impulse response centered at ω k and another centered at ω i (Figure A2).
After looping over all $2k_{max}+1$ intermediate distributions, the impulses centered on $\omega_i$ reinforce $2k_{max}+1$ times, whereas the second-order signal at each $\omega_k$ appears only once. The result is a strong central peak and decaying wings (Figure 4).
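A minimal NumPy rendering of the sampling loop just described, in the negligible-potential limit $\tilde V(\omega)=\delta(\omega)$ so that the convolution in Equation (20) is the identity (the author's implementation is in MATLAB; the grid sizes and the "cyclotron resonance" choice $\omega^{(1)}=\omega^{(0)}$ are the assumptions stated in Appendix C.3):

```python
import numpy as np

w0 = 1.0                          # level spacing omega^(0); window chosen so w^(1) = w^(0)
T = 2 * np.pi / w0
nt = 512
t1 = np.linspace(T / nt, T, nt)   # scan the width of the nested integration window

def h_sample(w_k, w_i=0.0):
    """Nested impulse response of Eq. (20) sampled at w = w_k for each window width t1."""
    d = w_k - w_i
    if d == 0:                    # k = i limit handled separately (cf. Eq. (A3))
        return t1.astype(complex)
    return np.exp(1j * d * t1 / 2) * np.sin(d * t1 / 2) / (d / 2)

for k in (1, 2, 3):
    h = h_sample(k * w0)                      # h_ki[t1], Fig. 3 (middle)
    H = np.fft.fftshift(np.fft.fft(h))        # H~_ki(w): two dominant spikes (Eq. (A1))
    print(k, sorted(np.argsort(np.abs(H))[-2:]))   # bin indices of the two impulses
```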
Figure A4. $\tilde H_{ki}(\omega)$ for $-25<k<25$. The contribution to $\tilde H_{ki}(\omega)$ at $\omega=0$ is non-zero for every $k\neq 0$, growing without bound as more momenta are included ($k\to\infty$); see Equation (A3). There is also a small, constant, real contribution at each $\omega_k$, which vanishes when the distribution is normalized (top). The imaginary portions cancel at $\omega_k=0$ and build up everywhere else, decaying inversely with $|k|$ (middle); see Equation (A1). The absolute magnitude of $\tilde H_{ki}$ is a harmonic series of impulses (bottom). These impulses are convolved with the first-order impulse response, Equation (7).

Appendix C.3. Domain and Resolution of Transfer Function

In the code implementation of Equation (20), the length of $h_{ki}[t_1]$ is not equal to the length of the original signal. This is because $h_{ki}[t_1]$ is generated by scanning $t_2$ over the variable range $0<t_2<t_1$. Thus, the corresponding resolution of its Fourier transform, $\tilde H_{ki}$, is reduced (Figure A5, top left).
To compensate for the band-limited spectrum, H ˜ k i was padded with copies of itself. This is necessary for the convolution operation to be well-defined. The time window T was chosen to be an integer fraction of the duration of the original signal so that the padding fits evenly (this is necessary to avoid artifacts in the Fourier transform).
This defines a fundamental harmonic frequency associated with the measurement,
$$\omega^{(1)}\propto\frac{\text{total duration of original signal}}{\text{duration of time integration}}=\frac{1}{T},$$
(see the harmonic spacing in Figure A4).
When the Gaussian potential is weak, the tiled instances of h k i [ t 1 ] line up smoothly, and H ˜ k i only contains two spikes, as in Equation (A1) and Figure A2. When the potential is stronger, h k i [ t 1 ] does not line up on its endpoints, and spectral artifacts occur at integer multiples of ω ( 1 ) .
For reasons that are not fully understood by the author, the interpretation of the second-order results is clear only when ω ( 1 ) = ω ( 0 ) , which is known as cyclotron resonance. This appears to be related to the interpretation of Equations (23) and (24) as a signal reconstruction problem using sinc-interpolation: this is the only case considered in this study.
Figure A5. Band-limited signal: Because the duration of the time integration of the TDSE is less than the full signal (5% shown here, top left), $h_{ki}[t]$ is smaller than $\tilde V$ by that factor, and the resolution is decreased (bottom left; the distinct spikes are only resolvable because the horizontal scale is expanded). In the top right, $h_{ki}[t]$ is padded with copies of itself to compensate for the limited bandwidth of the signal. This ensures its Fourier transform has the desired high resolution (bottom right). Shown here is the $k=+2$ term of Equation (24). A moderate-strength potential was used, resulting in harmonics at nearby states (see small spikes at $k=+1,+3$ and other integers).

Appendix C.4. Effect of Time Window Shift on the Form of the Transfer Function

In Figure A4, the imaginary part of $\tilde H_{ki}$ decays inversely with $|k|$. The time measurement window was shown to extend from the origin to $t$, leading to a translational factor of $\tfrac{1}{2}$ in the rect function. In the general case, the measurement window can be translated by $r$ units by shifting the rect function again, $\mathrm{rect}(t_1/t-\tfrac{1}{2}-r/t)$, leading to an overall phase shift in the frequency domain, $e^{i\omega_k(r/t)}$. This leads to an oscillating envelope for the impulses in Figure A4 (envelope not shown). In the MATLAB simulation, a phase shift of this sort was introduced to compensate for coding artifacts (the base index for the time window started at 1 instead of 0).

Appendix C.5. Normalizability of the Transfer Function

The appearance of $\omega_{ki}$ in the denominator of Equation (20), inside a summation over both $i$ and $k$, raises the question of whether this expression can be normalized. However, owing to the good properties of the $\mathrm{sinc}$ function, $h_{ki}$ and $\tilde H_{ki}$ are non-singular.
To observe this, note that when 1 / ω k i becomes singular, we use Equation (A3) (which is well defined) instead of Equation (A1).
In general, $\tilde H_{ki}$ is a series of harmonics of spacing $\omega^{(0)}$, as shown in Figure A4. The middle plot shows imaginary impulses at every non-zero integer $\omega_k$ that form a harmonic series, which is well known not to converge; therefore, it is not clear whether the final expression, Equation (23), is convergent. The upper plot shows an impulse at $\omega_i=0$ resulting from each term in the sum over $k$. The height of this impulse increases without bound as $k\to\pm\infty$. This is an ultraviolet divergence.
This can easily be resolved from a practical perspective. Because the height of the impulse at the origin is proportional to the size of the domain, | k m a x | , in the code implementation, this expression can be normalized by dividing by the maximum value of k, where only a finite number of terms are included.
From a theoretical perspective regarding the convergence of the second order, the issue is whether the sum of $\mathrm{sinc}$ functions, each of which is individually normalizable but which are arranged in a harmonic series (which does not converge), is itself normalizable. This was not addressed in this study.

Appendix D. Accuracy of Method

To analyze the accuracy of Equations (7), (23) and (24), we used MATLAB to compare the convolution method and the method of direct integration of the Schrödinger equation in Equation (14).

Appendix D.1. Comparing First-Order to Second-Order Convolution

The first-order contributions and second-order corrections for the convolution (or recursive Fourier transform) method were compared, as shown in Figure 4. The second-order contribution shortens the central peak, while heightening the wings of the distribution by a small amount. This is reasonable because we expect the potential to deflect the system away from its original state each time it is applied.

Appendix D.2. Frequency Profile versus Potential Strength

For a measurement of duration T, Figure A6a–c show the second-order amplitude calculated by direct integration and by convolution. In both methods, increasing the potential spreads the central peak and smooths the ripples in the distribution. For a given increase in the potential strength, the ripples are preserved to a greater extent in the convolution method. Furthermore, the smoothing occurs differently in the two cases. In particular, Figure A6b shows that for the convolution method, the odd-numbered zero-crossings are preserved longer than the even ones as the potential strength is increased.
Figure A6. Second-order comparison of spectra for various values of the potential for the convolution versus integration methods. The potential strength increases from the left panel to the right panel.

Appendix D.3. Frequency Profile versus Range of Intermediate States

Figure A7a,b demonstrate the distinct behavior of direct integration versus convolution with respect to the number of intermediate states $k_{max}$ summed over. The convolution method converges faster than direct integration as more intermediate terms are included. The convolution method relies heavily on non-local surrounding states; this is not surprising given the similarity of Equations (23) and (24) to $\mathrm{sinc}$-interpolation signal reconstruction (see Appendix E.2).
Figure A7. Second-order comparison of spectra for two values of the $k_{max}$ range for the convolution versus integration methods. The methods match each other more closely far from the origin as more $k$ terms are added. The left (right) panel shows $k_{max} = \pm 4$ ($\pm 10$).

Appendix E. Interpretation

Appendix E.1. Kicked Frequency and Natural Frequency for Harmonic Oscillator

Equation (24) presents two distinct energy scales: the unperturbed harmonic oscillator's discrete spacing $\omega^{(0)}$, and the spacing of the minima induced by the truncated perturbation, $\omega^{(1)} = 2\pi/T$. The latter corresponds to the zeros of the $\mathrm{sinc}$ function in Equations (7) and (24). This truncated perturbation is comparable to the periodic kick potential in Floquet theory, where the Hamiltonian splits into a free part (with a discrete eigenspectrum) and a kick part (with a continuous eigenspectrum).
The unitary evolution operator $\hat{U}$ comprises $\hat{U}_{\mathrm{free}}$, which determines $\omega^{(0)}$ and the unperturbed basis states, and $\hat{U}_{\mathrm{kick}}$, which establishes $\omega^{(1)}$.
The dynamics of a Floquet system hinge strongly on the relationship between the natural frequencies of the two parts of the Hamiltonian, $\omega^{(1)}$ and $\omega^{(0)}$. Engel [72] investigated the structured stochastic webs that arise in phase space for various values of these energy scales. When the two frequencies are in an integer ratio, the phase-space web is distinct, with clear allowed and forbidden regions. By contrast, when their ratio is irrational, the web structure collapses and the entire phase space becomes accessible. Floquet systems show that the discrete energy states defined by $\omega^{(0)}$ endure through time evolution, whereas the continuous states defined by $\omega^{(1)}$ disperse.
This scenario underscores the distinction between the oscillator frequency $\omega^{(0)}$ and the perturbation frequency $\omega^{(1)}$ [72]. We focused on the “cyclotron resonance” case in Figure A4 and the subsequent graphs, where $\omega^{(0)} = \omega^{(1)}$. Here, the periodic kicks of the Floquet system coincided with the natural motion of the unperturbed oscillator.

Appendix E.2. Frequency Sampling Interpretation

It is interesting to note the similarity between Equation (7) and the Shannon–Nyquist sampling theorem
$$ f(t) = \sum_{n=-\infty}^{\infty} f(nt)\,\mathrm{sinc}\!\left(\omega t - n\pi\right). $$
The theorem states that a band-limited signal $f(t)$ can be exactly reconstructed from its samples, $f(nt)$, using a series of ideal $\mathrm{sinc}$ interpolation functions centered on each sample and separated by the Nyquist period.
In the case of Equation (7), a signal in the frequency domain, $c(\omega)$, is smoothly reconstructed from discrete samples, $c(n\,\Delta\omega^{(0)})$, using a series of $\mathrm{sinc}$-like interpolation functions centered on each energy eigenstate. In the case of zero potential ($\tilde{V}(\omega) = \delta(\omega)$), if we constrain the measurement window to be the inverse of the oscillator frequency, $T = 1/\omega^{(1)} = 1/\omega^{(0)}$, then Equation (7) exactly reconstructs the original wavefunction with an ideal $\mathrm{sinc}$ interpolator. As the perturbation increases from zero, the interpolation function changes, and the reconstructed signal is no longer identical to the original.
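A short sketch of the sampling-theorem reconstruction referenced above (the test signal, sampling period, and truncation of the formally infinite sum are arbitrary illustrative choices):

```python
import numpy as np

# Reconstruct a band-limited signal from its samples via sinc interpolation.
ts = 0.5                                    # sampling period
nyquist = 1.0 / (2.0 * ts)                  # = 1 Hz
f = lambda t: np.cos(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 0.7 * t)
# both components lie below the Nyquist frequency, so the theorem applies

n = np.arange(-200, 201)                    # finite set of samples f(n*ts)
t_fine = np.linspace(-20.0, 20.0, 801)      # reconstruction points, well inside the samples
# f(t) = sum_n f(n*ts) * sinc((t - n*ts)/ts), with numpy's normalized sinc
recon = np.array([np.sum(f(n * ts) * np.sinc((t - n * ts) / ts)) for t in t_fine])

print(np.max(np.abs(recon - f(t_fine))))    # residual is the truncation error of the finite sum
```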

References

  1. Schrödinger, E. Quantisierung als Eigenwertproblem (Vierte Mitteilung). Ann. Phys. 1926, 386, 109–139. [Google Scholar] [CrossRef]
  2. Dirac, P.A.M. The Principles of Quantum Mechanics; Clarendon Press: Oxford, UK, 1930. [Google Scholar]
  3. Dyson, F.J. Divergence of Perturbation Theory in Quantum Electrodynamics. Phys. Rev. 1952, 85, 631–632. [Google Scholar] [CrossRef]
  4. Schwartz, M.D. Quantum Field Theory and the Standard Model; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  5. Walker, J.S.; Gathright, J. Exploring one-dimensional quantum mechanics with transfer matrices. Am. J. Phys. 1994, 62, 408–422. [Google Scholar] [CrossRef]
  6. Feynman, R.P. Space-Time Approach to Non-Relativistic Quantum Mechanics. Rev. Mod. Phys. 1948, 20, 367–387. [Google Scholar] [CrossRef]
  7. Feynman, R.P. A Relativistic Cut-off for Classical Electrodynamics. Phys. Rev. 1948, 74, 939–946. [Google Scholar] [CrossRef]
  8. Feynman, R.P. Relativistic Cut-Off for Quantum Electrodynamics. Phys. Rev. 1948, 74, 1430–1438. [Google Scholar] [CrossRef]
  9. Griffiths, D.J.; Schroeter, D.F. Introduction to Quantum Mechanics, 3rd ed.; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  10. Paganin, D.M.; Pelliccia, D. X-ray phase-contrast imaging: A broad overview of some fundamentals. Adv. Imaging Electron Phys. 2021, 218, 63–158. [Google Scholar]
  11. Strang, G. On the construction and comparison of difference schemes. SIAM J. Numer. Anal. 1968, 5, 506–517. [Google Scholar] [CrossRef]
  12. Kosloff, D.; Kosloff, R. A Fourier Method Solution for the Time Dependent Schrödinger Equation as a Tool in Molecular Dynamics. J. Comput. Phys. 1983, 52, 35–53. [Google Scholar] [CrossRef]
  13. Dateo, C.E.; Engel, V.; Almeida, R.; Metiu, H. Numerical solutions of the time-dependent Schrödinger equation in spherical coordinates by Fourier transform methods. Comput. Phys. Commun. 1991, 63, 435–445. [Google Scholar] [CrossRef]
  14. Van Dyck, D. Image Calculations in High-Resolution Electron Microscopy: Problems, Progress, and Prospects. Adv. Electron. Electron Phys. 1985, 65, 295–355. [Google Scholar]
  15. Taha, T.R.; Ablowitz, M.J. Analytical and Numerical Aspects of Certain Nonlinear Evolution Equations. II. Numerical, Nonlinear Schrödinger Equation. J. Comput. Phys. 1984, 55, 203–230. [Google Scholar] [CrossRef]
  16. Bandrauk, A.D.; Shen, H. Higher order exponential split operator method for solving time-dependent Schrodinger equations. Can. J. Chem 1992, 70, 555–559. [Google Scholar] [CrossRef]
  17. Hansson, T.; Wabnitz, S. Dynamics of microresonator frequency comb generation: Models and stability. Nanophotonics 2016, 5, 231–243. [Google Scholar] [CrossRef]
  18. Nelson-Isaacs, S.E. Spacetime Paths as a Whole. Quantum Rep. 2021, 3, 13–41. [Google Scholar] [CrossRef]
  19. Hong, C.K.; Ou, Z.Y.; Mandel, L. Measurement of subpicosecond time intervals between two photons by interference. Phys. Rev. Lett. 1987, 59, 2044–2046. [Google Scholar] [CrossRef] [PubMed]
  20. Davis, A.O.C.; Thiel, V.; Karpiński, M.; Smith, B.J. Measuring the Single-Photon Temporal-Spectral Wave Function. Phys. Rev. Lett. 2018, 121, 083602. [Google Scholar] [CrossRef] [PubMed]
  21. Mosley, P.J.; Lundeen, J.S.; Smith, B.J.; Wasylczyk, P.; U’Ren, A.B.; Silberhorn, C.; Walmsley, I.A. Heralded Generation of Ultrafast Single Photons in Pure Quantum States. Phys. Rev. Lett. 2008, 100, 133601. [Google Scholar] [CrossRef] [PubMed]
  22. Müller, P.; Tentrup, T.; Bienert, M.; Morigi, G.; Eschner, J. Spectral properties of single photons from quantum emitters. Phys. Rev. A 2017, 96, 023861. [Google Scholar] [CrossRef]
  23. Tamma, V.; Laibacher, S. Multiboson Correlation Interferometry with Arbitrary Single-Photon Pure States. Phys. Rev. Lett. 2015, 114, 243601. [Google Scholar] [CrossRef]
  24. Laibacher, S.; Tamma, V. Symmetries and entanglement features of inner-mode-resolved correlations of interfering nonidentical photons. Phys. Rev. A 2018, 98, 053829. [Google Scholar] [CrossRef]
  25. Tamma, V.; Laibacher, S. Boson sampling with random numbers of photons. Phys. Rev. A 2021, 104, 032204. [Google Scholar] [CrossRef]
  26. Tamma, V.; Laibacher, S. Scattershot multiboson correlation sampling with random photonic inner-mode multiplexing. Eur. Phys. J. Plus 2023, 138, 335. [Google Scholar] [CrossRef]
  27. Wang, X.J.; Jing, B.; Sun, P.F.; Yang, C.W.; Yu, Y.; Tamma, V.; Bao, X.H.; Pan, J.W. Experimental Time-Resolved Interference with Multiple Photons of Different Colors. Phys. Rev. Lett. 2018, 121, 080501. [Google Scholar] [CrossRef]
  28. Triggiani, D.; Psaroudis, G.; Tamma, V. Ultimate Quantum Sensitivity in the Estimation of the Delay between two Interfering Photons through Frequency-Resolving Sampling. Phys. Rev. Appl. 2023, 19, 044068. [Google Scholar] [CrossRef]
  29. Cui, L.; Li, X.; Zhao, N. Minimizing the frequency correlation of photon pairs in photonic crystal fibers. New J. Phys. 2012, 14, 123001. [Google Scholar] [CrossRef]
  30. Zhang, H.; Sun, L.; Hirschman, J.; Shariatdoust, M.S.; Belli, F.; Carbajo, S. Optimizing Spectral Phase Transfer in Four-Wave Mixing with Gas-filled Capillaries: A Trade-off Study. arXiv 2024, arXiv:2404.16993. [Google Scholar]
  31. Asavanant, W.; Furusawa, A. Multipartite continuous-variable optical quantum entanglement: Generation and application. Phys. Rev. A 2024, 109, 040101. [Google Scholar] [CrossRef]
  32. Bartolucci, S.; Birchall, P.; Bombín, H.; Cable, H.; Dawson, C.; Gimeno-Segovia, M.; Johnston, E.; Kieling, K.; Nickerson, N.; Pant, M.; et al. Fusion-based quantum computation. Nat. Commun. 2023, 14, 912. [Google Scholar] [CrossRef]
  33. Lu, W.; Krasavin, A.V.; Lan, S.; Zayats, A.V.; Dai, Q. Gradient-induced long-range optical pulling force based on photonic band gap. Light Sci. Appl. 2024, 13, 93. [Google Scholar] [CrossRef]
  34. Neuman, K.C.; Block, S.M. Optical trapping. Rev. Sci. Instrum. 2004, 75, 2787–2809. [Google Scholar] [CrossRef] [PubMed]
  35. Pérez-García, L.; Selin, M.; Ciarlo, A.; Magazzù, A.; Pesce, G.; Sasso, A.; Volpe, G.; Pérez Castillo, I.; Arzola, A.V. Optimal calibration of optical tweezers with arbitrary integration time and sampling frequencies: A general framework [Invited]. Biomed. Opt. Express 2023, 14, 6442–6469. [Google Scholar] [CrossRef] [PubMed]
  36. Panda, D.K.; Benjamin, C. Quantum cryptographic protocols with dual messaging system via 2D alternate quantum walks and genuine single particle entangled states. arXiv 2024, arXiv:2405.00663. [Google Scholar]
  37. Lounis, S. Theory of Scanning Tunneling Microscopy. arXiv 2014, arXiv:1404.0961. [Google Scholar]
  38. Gottlieb, A.D.; Wesoloski, L. Bardeen’s Tunneling Theory as Applied to Scanning Tunneling Microscopy: A Technical Guide to the Traditional Interpretation. Nanotechnology 2006, 17, R57. [Google Scholar] [CrossRef]
  39. Grewal, A.; Leon, C.C.; Kuhnke, K.; Kern, K.; Gunnarsson, O. Scanning Tunneling Microscopy for Molecules: Effects of Electron Propagation into Vacuum. ACS Nano 2024, 18, 12158–12167. [Google Scholar] [CrossRef] [PubMed]
  40. Dessai, M.; Kulkarni, A.V. Calculation of tunneling current across trapezoidal potential barrier in a scanning tunneling microscope. J. Appl. Phys. 2022, 132, 244901. [Google Scholar] [CrossRef]
  41. Gaida, J.H.; Lourenço-Martins, H.; Sivis, M.; Rittmann, T.; Feist, A.; de Abajo, F.J.G.; Ropers, C. Attosecond electron microscopy by free-electron homodyne detection. Nat. Photonics 2024, 18, 509–515. [Google Scholar] [CrossRef]
  42. Cao, A.; Eckner, W.J.; Yelin, T.L.; Young, A.W.; Jandura, S.; Yan, L.; Kim, K.; Pupillo, G.; Ye, J.; Oppong, N.D.; et al. Multi-qubit gates and ‘Schrödinger cat’ states in an optical clock. arXiv 2024, arXiv:2402.16289. [Google Scholar]
  43. Kawasaki, A. Real-time observation of picosecond-timescale optical quantum entanglement toward ultrafast quantum information processing. arXiv 2024, arXiv:2403.07357. [Google Scholar]
  44. Kawasaki, A. High-rate Generation and State Tomography of Non–Gaussian Quantum States for Ultra-fast Clock Frequency Quantum Processors. arXiv 2024, arXiv:2402.17408. [Google Scholar]
  45. Nishidome, H.; Omoto, M.; Nagai, K.; Uchida, K.; Murakami, Y.; Eda, J.; Okubo, H.; Ueji, K.; Yomogida, Y.; Kono, J.; et al. Influence of Laser Intensity and Location of the Fermi Level on Tunneling Processes for High-Harmonic Generation in Arrayed Semiconducting Carbon Nanotubes. ACS Photonics 2024, 11, 171–179. [Google Scholar] [CrossRef]
  46. Majidi, S.; Aghbolaghi, R.; Navid, H.A.; Mokhlesi, R. Optimization of cut-off frequency in high harmonic generation in noble gas. Appl. Phys. B 2023, 130, 11. [Google Scholar] [CrossRef]
  47. Farkas, G.; Tóth, C. Proposal for attosecond light pulse generation using laser-induced multiple harmonic conversion processes in rare gases. Phys. Lett. A 1992, 168, 447–450. [Google Scholar] [CrossRef]
  48. Lewenstein, M.; Balcou, P.; Ivanov, M.Y.; L’Huillier, A.; Corkum, P.B. Theory of high-harmonic generation by low-frequency laser fields. Phys. Rev. A 1994, 49, 2117–2132. [Google Scholar] [CrossRef]
  49. Ryabikin, M.Y.; Emelin, M.Y.; Strelkov, V.V. Attosecond electromagnetic pulses: Generation, measurement, and application. Attosecond metrology and spectroscopy. Phys. Usp. 2023, 66, 360–380. [Google Scholar] [CrossRef]
  50. Hänsch, T. A proposed sub-femtosecond pulse synthesizer using separate phase-locked laser oscillators. Opt. Commun. 1990, 80, 71–75. [Google Scholar] [CrossRef]
  51. Abele, H.; Jenke, T.; Leeb, H.; Schmiedmayer, J. Ramsey’s method of separated oscillating fields and its application to gravitationally induced quantum phase shifts. Phys. Rev. D 2010, 81, 065019. [Google Scholar] [CrossRef]
  52. Jenke, T.; Geltenbort, P.; Lemmel, H.; Abele, H. Realization of a gravity-resonance-spectroscopy technique. Nat. Phys. 2011, 7, 468–472. [Google Scholar] [CrossRef]
  53. Villegas-Martínez, B.M.; Soto-Eguibar, F.; Moya-Cessa, H.M. Application of Perturbation Theory to a Master Equation. Adv. Math. Phys. 2016, 2016, 9265039. [Google Scholar] [CrossRef]
  54. Krijtenburg-Lewerissa, K.; Pol, H.J.; Brinkman, A.; van Joolingen, W.R. Insights into teaching quantum mechanics in secondary and lower undergraduate education. Phys. Rev. Phys. Educ. Res. 2017, 13, 010109. [Google Scholar] [CrossRef]
  55. Singh, C.; Belloni, M.; Christian, W. Improving students’ understanding of quantum mechanics. Phys. Today 2006, 59, 43–49. [Google Scholar] [CrossRef]
  56. Nodurft, I.C.; Shaw, H.C.; Glasser, R.T.; Kirby, B.T.; Searles, T.A. Generation of polarization entanglement via the quantum Zeno effect. Opt. Express 2022, 30, 31971–31985. [Google Scholar] [CrossRef] [PubMed]
  57. Sakurai, J.J.; Napolitano, J. Modern Quantum Mechanics, 2nd ed.; Pearson Education Inc.: London, UK, 2011. [Google Scholar]
  58. Tokmakoff, A. Time-Dependent Quantum Mechanics and Spectroscopy. University of Chicago. 2014. Available online: https://tdqms.uchicago.edu/full-tdqms-notes-upload-december-2014/ (accessed on 23 May 2024).
  59. Zhang, J.M.; Liu, Y. Fermi’s golden rule: Its derivation and breakdown by an ideal model. Eur. J. Phys. 2016, 37, 065406. [Google Scholar] [CrossRef]
  60. Baym, G. Lectures on Quantum Mechanics, 1st ed.; CRC Press: Boca Raton, FL, USA, 1969. [Google Scholar] [CrossRef]
  61. Fowler, M. Time-Dependent Perturbation Theory. 2007. Available online: https://galileo.phys.virginia.edu/classes/752.mf1i.spring03/Time_Dep_PT.pdf (accessed on 26 December 2023).
  62. Tamma, V.; Laibacher, S. Boson sampling with non-identical single photons. J. Mod. Opt. 2016, 63, 41–45. [Google Scholar] [CrossRef]
  63. Li, X.; Ma, X.; Ou, Z.Y.; Yang, L.; Cui, L.; Yu, D. Spectral study of photon pairs generated in dispersion shifted fiber with a pulsed pump. Opt. Express 2008, 16, 32–44. [Google Scholar] [CrossRef]
  64. Chen, J.; Li, X.; Kumar, P. Two-photon-state generation via four-wave mixing in optical fibers. Phys. Rev. A 2005, 72, 033801. [Google Scholar] [CrossRef]
  65. Sharping, J.E.; Chen, J.; Li, X.; Kumar, P.; Windeler, R.S. Quantum-correlated twin photons from microstructure fiber. Opt. Express 2004, 12, 3086–3094. [Google Scholar] [CrossRef] [PubMed]
  66. Garay-Palmett, K.; McGuinness, H.J.; Cohen, O.; Lundeen, J.S.; Rangel-Rojo, R.; U’Ren, A.B.; Raymer, M.G.; McKinstrie, C.J.; Radic, S.; Walmsley, I.A. Photon pair-state preparation with tailored spectral properties by spontaneous four-wave mixing in photonic-crystal fiber. Opt. Express 2007, 15, 14870–14886. [Google Scholar] [CrossRef]
  67. Keller, T.E.; Rubin, M.H. Theory of two-photon entanglement for spontaneous parametric down-conversion driven by a narrow pump pulse. Phys. Rev. A 1997, 56, 1534–1541. [Google Scholar] [CrossRef]
  68. Rubin, M.H.; Klyshko, D.N.; Shih, Y.H.; Sergienko, A.V. Theory of two-photon entanglement in type-II optical parametric down-conversion. Phys. Rev. A 1994, 50, 5122–5133. [Google Scholar] [CrossRef] [PubMed]
  69. Erez, N.; Aharonov, Y.; Reznik, B.; Vaidman, L. Correcting quantum errors with the Zeno effect. Phys. Rev. A 2004, 69, 062315. [Google Scholar] [CrossRef]
  70. Franson, J.D.; Jacobs, B.C.; Pittman, T.B. Quantum computing using single photons and the Zeno effect. Phys. Rev. A 2004, 70, 062302. [Google Scholar] [CrossRef]
  71. Bayındır, C.; Altintas, A.A.; Ozaydin, F. Self-localized solitons of a q-deformed quantum system. Commun. Nonlinear Sci. Numer. Simul. 2021, 92, 105474. [Google Scholar] [CrossRef]
  72. Engel, U.M. On Quantum Chaos, Stochastic Webs and Localization in a Quantum Mechanical Kick System; Logos Verlag: Berlin, Germany, 2007. [Google Scholar]
Figure 1. In the first-order TDSE in Equation (7), a Gaussian potential's tails are truncated by the measurement. (a) The integration windows for the TDSE are marked as vertical lines. The tails of the Gaussian are excluded, leading to ringing in the frequency domain (not shown). (b) A Gaussian potential in $\omega$-space convolved with $\mathrm{sinc}(\omega T/2)$ in Equation (10) shows variations in the frequency domain with “spatial frequency” $4\pi/T$.
Figure 2. The width of the $\mathrm{sinc}$ function in the impulse response depends on $t_1$. As $t_1$ increases from top to bottom in the figure, we take samples at $\omega = \omega_k$ (defined in Equation (15) and shown as a diamond on the vertical line; here $k = 2$). The sample values trace out another $\mathrm{sinc}$ function.
Figure 3. We interpret Equation (20) as a function of $t_1$ instead of $\omega$. (Left) Nested $\mathrm{sinc}$ distribution (second line in Equation (18)), varied in width over $t_1$ while repeatedly sampled at $\omega_k = 4\,\omega^{(0)}$ (vertical line with a diamond marking the sample value). As the width of the $\mathrm{sinc}$ decreases, its height grows linearly with $t_1$, so the height of the sample oscillations is constant. (Middle) The samples $h_{ki}[t_1]$ oscillate as $t_1$ is varied. (Right) The Fourier transform $\tilde{H}_{ki}(\omega) = \mathcal{F}\{h_{ki}[t_1]\}$ is a series of spikes representing second-order impulses. These $\delta$-functions are convolved in Equation (23) to place a copy of the outer $\mathrm{sinc}$ at each impulse. Note that for illustrative purposes the potential was ignored, i.e., chosen such that $\tilde{V} = \delta(\omega)$.
Figure 4. Comparison of the first- and second-order transition amplitudes, relative to the initial state $\omega_i$, calculated with the RFT method introduced here. The second-order calculation includes paths through intermediate energies at $\pm 20\,\omega^{(0)}$. For the second-order calculation, the central peak is reduced, the wings are amplified, and the minima are raised. Only the range $\pm 10\,\omega^{(0)}$ is shown, but contributions from terms outside this range have a significant effect on the accuracy of the result. Not drawn to scale: the second-order contribution is in reality reduced by a further factor relative to first-order.