Article

Quantum Hopfield Model

by Masha Shcherbina 1,†, Brunello Tirozzi 2,*,† and Camillo Tassi 3,†
1 Institute for Low Temperatures, 61103 Kharkov, Ukraine
2 Department of Physics, University of “La Sapienza”, 00185 Rome, Italy
3 Department of Physics and Nanoscience Center, University of Jyväskylä, 40500 Jyväskylä, Finland
* Author to whom correspondence should be addressed.
† Current address: Centro Ricerche Frascati, Via Enrico Fermi, 00044 Frascati, Rome, Italy.
Physics 2020, 2(2), 184-196; https://doi.org/10.3390/physics2020012
Submission received: 1 March 2020 / Revised: 5 May 2020 / Accepted: 10 May 2020 / Published: 27 May 2020
(This article belongs to the Section Statistical Physics and Nonlinear Phenomena)

Abstract

We find the free energy, in the thermodynamic limit, of a one-dimensional XY model associated with a system of $N$ qubits. The coupling among the $\sigma_i^z$ is a long-range two-body random interaction. The randomness in the couplings is the typical interaction of the Hopfield model with $p$ patterns ($p < N$), where the patterns are $p$ sequences of $N$ independent identically distributed random variables (i.i.d.r.v.), assuming the values $\pm 1$ with probability $1/2$. We show also that in the case $p = \alpha N$, $\alpha \to 0$, the free energy is asymptotically independent of the choice of the patterns, i.e., it is self-averaging.

1. Introduction

Neural networks have been developed in the past decades in many different directions: applications to neural signals [1,2,3,4] and to other types of signals [5,6,7,8], and image retrieval [9]. The research has also touched the topics of retrieval and storage of patterns, and many different theoretical aspects have been developed. One of the most popular has been the use of statistical-mechanics methods, in particular the evaluation of the free energy in the thermodynamic limit $N \to \infty$, $N$ being the number of spins of the system; many different rigorous methods have been developed over the years. One approach has been based on the rigorous cavity method, with which the results obtained by the non-rigorous replica trick have been derived [10,11,12,13,14,15]. In the paper [16], the interpolation scheme of Guerra [17,18] and the cavity method were applied to solve the Boltzmann machine. Other rigorous results have been found by means of probabilistic arguments [19,20,21]; in the papers [22,23,24], rigorous results have been derived using Gaussian processes; in [25] the storage capacity has been discussed; and in [26] the fluctuation of the free energy of the Hopfield model has been studied. The research on quantum computers has produced many algorithms, for example Grover’s algorithm [27] and the quantum Fourier transform [28]. Many efforts have been made to connect the property of associative memory with quantum algorithms. In some papers the similarity between quantum evolution and the Hopfield dynamics [29] is developed; in other papers the Hopfield network is written as a quantum system where the classical spins are replaced by quantum spins. In [30], the similarity between associative memory and the Hopfield network is introduced: the idea is that the patterns of the Hopfield network correspond to eigenfunctions and the threshold dynamics is the quantum evolution of wave functions.
In this way one can store many different eigenstates in the system and retrieve them with the dynamics. Since the wave functions have a phase, quantum holography is also considered. This interpretation is also based on the fact that certain oscillations of the neural synapses in real neurobiological systems are a consequence of quantum effects, but the relevance of these phenomena in neurobiological research is marginal. Another example is the approach in [31], where the evolution of the neurons is driven by quantum signals propagating between neighbors, the external current of an integrate-and-fire (I&F) model being replaced by quantum signals: the neurons are classical I&F objects, but the signals are quantum. In [32], the Grover algorithm is proposed for search and retrieval of the patterns, demonstrating the possibility of using quantum operations for retrieving patterns, but there is no proposal and study of a quantum neural network, which is the aim of our paper.
The interest in the quantized Hopfield model lies in the fact that it has an experimental implementation [33], made possible by nuclear magnetic resonance (NMR) technology [34]. The quantum Hopfield model is a system of quantum spins with Hebbian random interaction, defined by the Hamiltonian $H_N$:
$$ H_N = -\frac{1}{2}\sum_{i,j=1}^{N} J_{ij}\,\sigma_i^z\sigma_j^z - \sum_{i=1}^{N} h_i\,\sigma_i^z - d\sum_{i=1}^{N} \sigma_i^x , \qquad (1) $$
where
$$ \sigma_i^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \sigma_i^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad (2) $$
are the Pauli matrices associated with the components of the spins in the $z$ and $x$ directions; each single-spin space is two-dimensional.
$\{\xi_i^\mu\}_{i=1,\dots,N,\;\mu=1,\dots,p}$ is a system of independent identically distributed random variables (i.i.d.r.v.), $\xi_i^\mu = \pm 1$ with probability $1/2$. The $\{\xi_i^\mu\}$ are by definition the patterns, i.e., the information, coded by $\pm 1$ bits, to be stored in the system; $p$ is the number of patterns and $\mu$ is the index of the pattern. Each pattern is coded by the $N$ bits $\{\xi_i^\mu\}_{i=1,\dots,N}$. The ratio $\alpha = p/N$ represents the capacity of the system. The $J_{ij}$ are the couplings among the spins and are given by the Hebbian interaction
$$ J_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^\mu\, \xi_j^\mu . \qquad (3) $$
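As a concrete illustration (ours, not from the paper), the Hebbian couplings can be generated numerically from random $\pm 1$ patterns; the sizes $N$ and $p$ below are arbitrary:

```python
import numpy as np

# Build the Hebbian coupling matrix J_ij = (1/N) sum_mu xi_i^mu xi_j^mu
# from p random +-1 patterns (illustrative sizes, not from the paper).
rng = np.random.default_rng(0)
N, p = 200, 20                         # capacity alpha = p / N = 0.1
xi = rng.choice([-1, 1], size=(p, N))  # p patterns of N bits each
J = (xi.T @ xi) / N                    # symmetric, rank at most p

# Each stored pattern is an approximate eigenvector of J:
# (1/N) xi^mu . J . xi^mu is close to 1 + alpha (signal plus cross-talk).
overlap = xi[0] @ (J @ xi[0]) / N
print(overlap)
```

The cross-talk of order $\sqrt{\alpha}$ visible here is what the random-matrix estimates in the proofs have to control.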
Thus it is the model of associative memory introduced by Hopfield [29], but with quantum spins instead of the classical ones. The quantum spins are placed on a regular lattice in the case of qubits or of a magnetic device, or distributed randomly in the case of NMR. In order to produce a magnetization, an applied magnetic field $h_i$ (in tesla or gauss units) is parallel to the $z$ direction and a transverse magnetic field $d$ is parallel to the $x$ direction. In this paper we study the asymptotic properties of the free energy associated with $H_N$ in the limit $N \to \infty$:
$$ f_N(\beta) = -\frac{1}{\beta N}\log \mathrm{Tr}\, e^{-\beta H_N}, \qquad Z_N = \mathrm{Tr}\, e^{-\beta H_N} , \qquad (4) $$
where $\beta$ is a parameter with the meaning of the inverse temperature and $Z_N$ is the partition function.
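For a handful of qubits the free energy of Equation (4) can be computed exactly by building $H_N$ as a $2^N \times 2^N$ matrix. The following sketch (ours; $N$, $p$, $d$, $\beta$ are arbitrary, and $h_i = 0$) does this by brute force:

```python
import numpy as np
from functools import reduce

# Brute-force evaluation of f_N (Equation (4)) for a tiny system.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def site_op(op, i, N):
    """Embed a single-qubit operator at site i: I x ... x op x ... x I."""
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

rng = np.random.default_rng(1)
N, p, d, beta = 6, 2, 0.5, 1.0
xi = rng.choice([-1, 1], size=(p, N))
J = (xi.T @ xi) / N

Sz = [site_op(sz, i, N) for i in range(N)]
H = -0.5 * sum(J[i, j] * Sz[i] @ Sz[j] for i in range(N) for j in range(N))
H -= d * sum(site_op(sx, i, N) for i in range(N))   # transverse-field term

E = np.linalg.eigvalsh(H)                # H is real symmetric
fN = -np.log(np.exp(-beta * E).sum()) / (beta * N)
print(fN)
```

This exact-diagonalization route is limited to small $N$, which is exactly why the asymptotic analysis of the paper is carried out analytically.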
In [33], an analogous model is implemented on an NMR quantum computer. Quantum Adiabatic Computation (QAC) [35] is used for building the two-qubit state. The retrieval states are obtained with a suitable input Hamiltonian; they were constructed with Quantum Annealing [36] and were the minima of the Hamiltonian. This is the first example of an implementation of a neural network of the Hopfield type with quantum spins. This proposal differs from the ones cited above [30,31,32,37], but we think that the system built in [33] is more adequate to the functions of retrieval and storage typical of neural networks. More recently, a quantum algorithm has been proposed for the pattern retrieval of the Hopfield neural network [38]. Another issue is that retrieval and storage in classical neural networks are connected with the self-averaging of the free energy [39] and of the overlap parameters, i.e., the network should have the same performance in storing and retrieving independently of the kind of information under consideration. Using a more formal language, one can state that there should be storage and retrieval for almost all patterns. The equality between the free energy and its average is not a simple question, and in the classical case it can be proven, with non-trivial arguments, only in the thermodynamic limit $N \to \infty$. We give here the proof of this important property for the quantum Hopfield model, also in the thermodynamic limit. The necessity of large values of $N$ arises from the application of probability estimates and from the extension of the law of large numbers, which is largely applied in studying these limiting properties. We are aware that the large-$N$ limit is far beyond the actual possibilities of the experiments, but we nevertheless think that it is important to establish all the useful concepts and theoretical results.
Recent research has concentrated on the implementation of quantum annealing on devices and on improving the algorithms. In [40], a nonlinear annealing path is implemented, resulting in a 50% improvement in the qubit performance. This result demonstrates a low-level quantum-control scheme which enhances the success probability of a quantum annealer. In [41], temporal planning has been tested on various state-of-the-art hardware architectures from leading quantum-computing companies. The results suggest that temporal planning can be an effective approach for more complex computing algorithms and architectures. In [42], the power of quantum signal processing (QSP), an algorithm which exactly implements matrix polynomials on quantum computers, has been explored. Asymptotic analysis of quantum algorithms based on QSP has shown that asymptotically optimal results can in principle be obtained for a range of tasks such as Hamiltonian simulation and the quantum linear problem. Given two $n$-dimensional vectors $u \equiv (u_1,\dots,u_n)$ and $w \equiv (w_1,\dots,w_n)$, with $u_i, w_i$ taking values 0, 1, disjointness consists in deciding whether $u_i = w_i = 1$ for some index $i$. In [43], the problem of computing this function is studied in a distributed-computing scenario in which the inputs $u$ and $w$ are given to the processors at the two extremities of a path of length $l$. In [44,45], the Quantum Approximate Optimization Algorithm (QAOA) was proposed as a heuristic method for solving combinatorial optimization problems on near-term quantum computers; it may be among the first algorithms to perform useful computations in the post-supremacy, noisy, intermediate-scale era of quantum computing.
The aim of QAOA is not very different from the aim of this paper, but with the important difference that we study the asymptotic properties for $N \to \infty$, while the optimization problem in [44] is for finite $N$; moreover, we have a Hamiltonian with random quadratic interaction among the spins, with the random Hopfield structure, while they study a problem with a simpler random Hamiltonian. Their aim is to calculate the quantum average
$$ F_N(\bar\gamma, \bar\eta) = \langle \psi_N(\bar\gamma, \bar\eta) |\, H_C \,| \psi_N(\bar\gamma, \bar\eta) \rangle , $$
where $\bar\gamma \equiv (\gamma_1,\dots,\gamma_N)$ and $\bar\eta \equiv (\eta_1,\dots,\eta_N)$ are two given $N$-dimensional constant vectors. The Hamiltonian $H_C$ is defined by
$$ H_C = \sum_{i<j} w_{ij}\, \frac{1 - \sigma_i^z \sigma_j^z}{2} , $$
and
$$ |\psi_N(\bar\gamma, \bar\eta)\rangle = e^{-i\eta_N H_B} e^{-i\gamma_N H_C} \cdots e^{-i\eta_1 H_B} e^{-i\gamma_1 H_C}\, |+\rangle , $$
where $\gamma_i$ and $\eta_i$, $i = 1,\dots,N$, are evolution times and $|\psi_N(\bar\gamma,\bar\eta)\rangle$ is the quantum state. The $w_{ij}$ are chosen with uniform probability in the interval $(0,1)$, $|+\rangle$ is the uniform-superposition state of the $N$ spins, and $H_B = \sum_j \sigma_j^x$.
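The alternating construction above can be sketched numerically for a few qubits (our illustration; the number of qubits $n$, the number of layers, the weights $w_{ij}$ and the angles below are arbitrary, and we take $H_B = \sum_j \sigma_j^x$ as in the text):

```python
import numpy as np
from functools import reduce

# Sketch of the alternating QAOA state for a small random instance.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def site_op(op, i, n):
    ops = [I2] * n
    ops[i] = op
    return reduce(np.kron, ops)

def evolve(H, t):
    """Unitary exp(-i t H) of a Hermitian H, via diagonalization."""
    e, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * e)) @ V.conj().T

rng = np.random.default_rng(2)
n, layers = 4, 3
w = np.triu(rng.uniform(0.0, 1.0, (n, n)), k=1)    # weights w_ij in (0, 1)
HC = sum(w[i, j] * 0.5 * (np.eye(2 ** n) - site_op(sz, i, n) @ site_op(sz, j, n))
         for i in range(n) for j in range(i + 1, n))
HB = sum(site_op(sx, i, n) for i in range(n))      # mixer H_B = sum_j sigma_j^x

psi = np.full(2 ** n, 2.0 ** (-n / 2), dtype=complex)   # |+>: uniform superposition
for gamma, eta in zip(rng.uniform(0, 1, layers), rng.uniform(0, 1, layers)):
    psi = evolve(HB, eta) @ (evolve(HC, gamma) @ psi)

F = float(np.real(psi.conj() @ (HC @ psi)))        # <psi| H_C |psi>
print(F)
```

In QAOA the angles would then be optimized to extremize $F$; here they are random, since the point is only the structure of the state.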
In this paper, we use a more complex Hamiltonian and study the asymptotic property of the free energy $f_N$. We also show the self-averaging property of $f_N$ with respect to the $\xi_j^\mu$. It is clear that asymptotically the state which minimizes $H_N$ gives the largest contribution to $f_N$. The model with the Hamiltonian of Equation (1) and the free energy of Equation (4) has also been extensively studied in a series of papers [46,47,48], and the corresponding spin-glass system in [49]. In particular, the question of the self-averaging of the free energy with respect to the random interaction has been investigated numerically in [49] for the quantum spin-glass system; it is sometimes called ergodicity, but we prefer to use self-averaging, since ergodicity hides the probabilistic problematic. In the papers [46,47,48], the saddle-point equations for the order parameters are derived using the replica trick. The quantum free energy is transformed using the Suzuki transformation based on the Trotter formula; the system obtained in this way is a classical spin system on two slices with Hopfield random interaction, so that it is possible to set up the computation of the replica system. The saddle-point technique is used for deriving the equations for the set of order parameters, which are the classical overlap parameter, the spin-glass parameter and the sum of the overlaps over the patterns which do not condense, with the addition of two new parameters: the spin-glass parameter for spins on the two Trotter slices, and a measure of the quantum fluctuations. The values of these order parameters are solutions of the saddle-point equations obtained by averaging with respect to the Gibbs measure and with respect to the patterns. In [37], a quantum neural network with spins described by Pauli matrices has been introduced, whose dynamics is the quantum Monte Carlo dynamics; the system is reduced, with the Suzuki–Trotter decomposition, to a classical system with $M$ slices, $M \to \infty$.
The capacity $\alpha = p/N$ is a main parameter there, as in our case. It is clear that the self-averaging property in Equation (6) of the free energy and of the order parameters is fundamental; we establish this property rigorously for the free energy. Moreover, the replica solution is not mathematically founded: the limit of the integer $n$ going to zero does not make sense. Our results open the way to deriving the saddle-point equations without using the replica trick, as we did in [50].
The plan of this work is the following: in the next section we state the results, and in the following section we give their proofs.

2. Results

In this work we treat the question of the self-averaging of the free energy of a one-dimensional system of qubits with Hebbian interaction and i.i.d.r.v. patterns taking the values $\pm 1$ with equal probability. We also compute the free energy and find a phase transition and the index for the critical temperature. The system is described by the Hamiltonian (1) and spins (2) of the type considered in [33]; thus it is the XY model in the one-dimensional case, with Hebbian long-range interaction and with the perturbation of the QAC term.
This Hamiltonian has also been considered in other papers, for example [46,47,48], and is the usual Hamiltonian for performing quantum measurements. The term multiplying $\sigma_i^x$ is the one used in QAC for constructing the interesting quantum states; its presence is fundamental for the phenomenology of the system. If $d = 0$, this system is the usual Hopfield network, which has already been solved in [51] with the non-rigorous replica trick and in [50] using the rigorous cavity method. For $d < 1$ we find, for $h_i = 0$, a critical point at an inverse temperature $\beta(d)$ which diverges as $d \to 1$, a transition different from the usual Ising ones [52].
We can establish the main results of this paper in the following two theorems.
Theorem 1.
Consider the Hopfield model in Equation (1) with $h_i$ independent of $\{\xi_j^\mu\}_{j \ne i}$ and such that $E\{h_i^2\} \le C$. Then, for any bounded $\alpha \ge 0$ and $\beta \ge 0$, if $p, N \to \infty$ with $p/N \to \alpha$, the free energy $f_N(\beta)$ is self-averaging in the limit $N \to \infty$:
$$ E\big\{ \big| f_N(\beta) - E\{ f_N(\beta) \} \big|^2 \big\} \le C/N . \qquad (6) $$
Here and below we denote by $E\{\cdot\}$ the averaging with respect to all random parameters of the problem.
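A rough numerical illustration of Theorem 1 (ours, not from the paper): for small systems one can sample $f_N$ over pattern realizations and watch its variance shrink as $N$ grows. The $\sigma^z\sigma^z$ part of $H_N$ is diagonal in the computational basis, which keeps the construction cheap:

```python
import numpy as np
from functools import reduce

# Sample f_N over realizations of the patterns (illustrative sizes).
def free_energy(J, d, beta):
    N = J.shape[0]
    # the sigma^z sigma^z part of H_N is diagonal in the computational basis
    s = 1 - 2 * np.array(list(np.ndindex(*(2,) * N)))   # all 2^N spin configs
    diag = -0.5 * np.einsum('ki,ij,kj->k', s, J, s)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    I2 = np.eye(2)
    X = sum(reduce(np.kron, [sx if j == i else I2 for j in range(N)])
            for i in range(N))
    E = np.linalg.eigvalsh(np.diag(diag) - d * X)
    return -np.log(np.exp(-beta * E).sum()) / (beta * N)

rng = np.random.default_rng(3)
def sample_var(N, p, samples=30):
    fs = []
    for _ in range(samples):
        xi = rng.choice([-1, 1], size=(p, N))
        fs.append(free_energy((xi.T @ xi) / N, d=0.5, beta=1.0))
    return np.var(fs)

print(sample_var(4, 2), sample_var(8, 2))   # Theorem 1 predicts decay like 1/N
```

At such tiny sizes the decay is only indicative; the theorem is an asymptotic statement.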
Remark 1.
We prove the self-averaging of the free energy, i.e., the fact that the free energy is asymptotically equal to its average with respect to the patterns. The problem of the difference between the limiting value of the free energy and its average was solved for the classical Hopfield model in [50]. The same problem was considered in [49] for the quantum spin glass, where it was studied by means of numerical simulations.
Remark 2.
One can easily see from the proof of Theorem 1 that the Bernoulli $\{\xi_i^\mu\}$ can be replaced by i.i.d.r.v. $\{\xi_i^\mu\}$ with any distribution satisfying the conditions $E\{\xi_i^\mu\} = 0$, $E\{|\xi_i^\mu|^2\} = 1$, and $E\{|\xi_i^\mu|^{4+\varepsilon}\} \le C$, where $\varepsilon$ is any positive number. Moreover, the operators $\sigma_i^z$, $\sigma_i^x$ of Equation (2) can be replaced by any bounded operators.
Theorem 2.
Consider the Hopfield model in Equation (1) with $h_i = h\,\xi_i^1$. Then the mean free energy $E\{f_N(\beta)\}$ satisfies the inequality
$$ \big| E\{ f_N(\beta) \} - \min_m f_0(m, h) \big| \le C\, \alpha^{1/3}, \qquad \alpha := p/N , \qquad (7) $$
where
$$ f_0(m, h) = -\frac{1}{\beta}\log\Big( 2\cosh\big( \beta\, \big( (m+h)^2 + d^2 \big)^{1/2} \big) \Big) + \frac{m^2}{2} \qquad (8) $$
is the free energy of the Curie–Weiss model (the Hopfield model with only one pattern). Here, m is the magnetization, and h is the constant defined above.
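A quick numerical check of Equation (8) (our sketch; the values of $\beta$, $d$, $h$ are arbitrary): minimize $f_0(m,h)$ on a grid and verify that the minimizer satisfies the stationarity condition $m = (m+h)\tanh(\beta r)/r$ with $r = ((m+h)^2 + d^2)^{1/2}$:

```python
import numpy as np

# Minimize the Curie-Weiss free energy f_0(m, h) of Equation (8) on a grid.
def f0(m, h, d, beta):
    r = np.sqrt((m + h) ** 2 + d ** 2)
    return -np.log(2.0 * np.cosh(beta * r)) / beta + m ** 2 / 2.0

beta, d, h = 3.0, 0.5, 0.1
m = np.linspace(-2.0, 2.0, 40001)
m_star = m[np.argmin(f0(m, h, d, beta))]

# stationarity: m = (m + h) * tanh(beta * r) / r,  r = sqrt((m + h)^2 + d^2)
r = np.sqrt((m_star + h) ** 2 + d ** 2)
residual = m_star - (m_star + h) * np.tanh(beta * r) / r
print(m_star, residual)   # residual is small (of the order of the grid step)
```

With these parameters ($d < 1$, $\beta$ well above the critical value) the minimizer is a non-zero magnetization, as discussed at the end of the paper.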
Remark 3.
Thus Equation (7) shows that for $\alpha \to 0$ the free energy coincides with that of the Curie–Weiss model, i.e., the classical Hopfield model with only one pattern, and this holds for any choice of the input patterns $\{\xi_i^\mu\}$.
Remark 4.
Since the free energy is a continuous function of $h$, it follows from Theorem 2 that the statement of the theorem is valid also for $h = 0$.
Remark 5.
From the proof of Theorem 2 it will be seen that here also the Bernoulli $\{\xi_i^\mu\}$ can be replaced by i.i.d. $\{\xi_i^\mu\}$ with any distribution satisfying the same conditions as in Remark 2. In this case the expression for $f_0(m, h)$ takes the form
$$ f_0(m, h) = -\frac{1}{\beta}\, E\Big\{ \log\Big( 2\cosh\big( \beta\, \big( (\xi_1^1 (m+h))^2 + d^2 \big)^{1/2} \big) \Big) \Big\} + \frac{m^2}{2} . \qquad (9) $$
Moreover, the matrices $\sigma_i^z$, $\sigma_i^x$ of Equation (2) can be replaced by spin matrices of any dimension; in this case the expression for $f_0(m, h)$ will be different from Equation (9).
The limiting value of the free energy and of the overlap parameters has been obtained in this case for $p \ll N$, i.e., in the limit $N \to \infty$ with $p/N \to 0$, so that the capacity of the network, defined as usual as $\alpha = p/N$, goes to zero in the thermodynamic limit.

3. Proofs

Proofs of Theorems 1 and 2 are based on the method proposed in [53]. We start from the following general proposition:
Proposition 1.
Let $H$, $H^1$ be arbitrary Hermitian $2^N \times 2^N$ matrices, let $H(t) := H + t H^1$,
$$ f_N(\beta, t) = -\frac{1}{\beta N}\log \mathrm{Tr}\, e^{-\beta H(t)}, \qquad Z_N(t) = \mathrm{Tr}\, e^{-\beta H(t)} , $$
and let $\{e_k\}_{k=1}^{2^N}$ be the eigenvectors of $H(t)$, such that
$$ H(t)\, e_k = E_k e_k, \qquad H^1_{jk} := (H^1 e_k,\, e_j), \qquad \dot H^1 := H^1 - \langle H^1 \rangle_{H(t)} . $$
Then
$$ \frac{\partial^2}{\partial t^2} f_N(\beta, t) = -\frac{1}{N Z_N} \sum_{k,j=1}^{2^N} |\dot H^1_{jk}|^2\, \frac{e^{-\beta E_j} - e^{-\beta E_k}}{E_k - E_j} \le 0 , \qquad (10) $$
and the Bogolyubov inequality [53] holds
$$ \frac{1}{N}\langle H^1 \rangle_{H(1)} \le f_N(\beta, 1) - f_N(\beta, 0) \le \frac{1}{N}\langle H^1 \rangle_{H(0)} , \qquad (11) $$
where
$$ \langle H^1 \rangle_{H(1)} = \frac{\mathrm{Tr}\, e^{-\beta H(1)}\, H^1}{Z_N(1)} $$
and
$$ \langle H^1 \rangle_{H(0)} = \frac{\mathrm{Tr}\, e^{-\beta H(0)}\, H^1}{Z_N(0)} . $$
Proof of Proposition 1.
According to the Duhamel formula [54] we have
$$ \frac{\partial^2}{\partial t^2} f_N(\beta, t) = -\frac{\beta}{N Z_N} \int_0^1 \mathrm{Tr}\Big( \dot H^1\, e^{-\beta H(t)\tau}\, \dot H^1\, e^{-\beta H(t)(1-\tau)} \Big)\, d\tau = -\frac{\beta}{N Z_N} \sum_{k,j=1}^{2^N} |\dot H^1_{jk}|^2 \int_0^1 e^{-\beta E_k \tau - \beta E_j (1-\tau)}\, d\tau = -\frac{1}{N Z_N} \sum_{k,j=1}^{2^N} |\dot H^1_{jk}|^2\, \frac{e^{-\beta E_j} - e^{-\beta E_k}}{E_k - E_j} . $$
To prove Equation (11) we observe that
$$ \frac{\partial^2}{\partial t^2} f_N(\beta, t) \le 0 \quad \Longrightarrow \quad \frac{\partial}{\partial t} f_N(\beta, 1) \le \frac{\partial}{\partial t} f_N(\beta, t) \le \frac{\partial}{\partial t} f_N(\beta, 0) . $$
Integrating the last inequality with respect to t from 0 to 1, we obtain Equation (11).□
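The Bogolyubov inequality of Equation (11) is easy to check numerically for random Hermitian matrices (our sketch; the matrix size and $\beta$ are arbitrary):

```python
import numpy as np

# Numerical check of Equation (11):
# (1/N)<H1>_{H(1)} <= f(1) - f(0) <= (1/N)<H1>_{H(0)}.
rng = np.random.default_rng(4)
N = 4                       # formal "number of spins": matrix size 2^N
dim, beta = 2 ** N, 2.0

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def free_energy(H):
    E = np.linalg.eigvalsh(H)
    return -np.log(np.exp(-beta * E).sum()) / (beta * N)

def gibbs_average(H, A):
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * E)
    rho = (V * (w / w.sum())) @ V.conj().T     # Gibbs density matrix
    return float(np.real(np.trace(rho @ A)))

H0, H1 = rand_herm(dim), rand_herm(dim)
lower = gibbs_average(H0 + H1, H1) / N         # (1/N) <H1>_{H(1)}
upper = gibbs_average(H0, H1) / N              # (1/N) <H1>_{H(0)}
delta = free_energy(H0 + H1) - free_energy(H0)
print(lower <= delta <= upper)                 # prints True
```

The inequality holds for any Hermitian pair, which is exactly how it is used below: the random Hopfield structure plays no role in this step.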
Proof of Theorem 1.
Denote by $E_k$ the averaging with respect to $\{\xi_i^\mu\}_{1 \le i \le k,\, 1 \le \mu \le p}$. Then, according to the standard martingale method (see [55]), we have
$$ \mathrm{Var}\{ f_N(\beta) \} = \sum_{k=1}^{N} E\big\{ \big| E_{k-1}\{ f_N(\beta) \} - E_k\{ f_N(\beta) \} \big|^2 \big\} . $$
Denote by $E^k$ the averaging with respect to $\{\xi_k^\mu\}_{1 \le \mu \le p}$ only, and set $H^{(k)} := H|_{\xi_k^\mu = 0,\, \mu = 1,\dots,p}$. Then, using the Schwarz inequality [56], we get
$$ \big| E_{k-1}\{ f_N(\beta, H) \} - E_k\{ f_N(\beta, H) \} \big|^2 = \big| E_{k-1}\big\{ f_N(\beta, H) - E^k\{ f_N(\beta, H) \} \big\} \big|^2 \le E_{k-1}\big\{ \big| f_N(\beta, H) - E^k\{ f_N(\beta, H) \} \big|^2 \big\} \le E_{k-1} E^k \big\{ \big| f_N(\beta, H) - f_N(\beta, H^{(k)}) \big|^2 \big\} . $$
Hence
$$ \mathrm{Var}\{ f_N(\beta) \} \le \sum_{k=1}^{N} E\big\{ \big| f_N(\beta, H) - f_N(\beta, H^{(k)}) \big|^2 \big\} . \qquad (13) $$
By the Bogolyubov inequality [57]
$$ -\frac{1}{N}\big\langle \Delta h_k\, \sigma_k^z \big\rangle_{H} - \Big\langle \frac{1}{N}\sum_{j=1}^{N} J_{kj}\, \sigma_k^z \sigma_j^z \Big\rangle_{H} \le f_N(\beta, H) - f_N(\beta, H^{(k)}) \le -\frac{1}{N}\big\langle \Delta h_k\, \sigma_k^z \big\rangle_{H^{(k)}} - \Big\langle \frac{1}{N}\sum_{j=1}^{N} J_{kj}\, \sigma_k^z \sigma_j^z \Big\rangle_{H^{(k)}} , $$
where $\Delta h_k = h_k - h_k|_{\xi_k^\mu = 0,\, \mu = 1,\dots,p}$. Hence
$$ E\big\{ \big| f_N(\beta, H) - f_N(\beta, H^{(k)}) \big|^2 \big\} \le 2\, E\Big\{ \Big\langle \frac{1}{N}\sum_{j=1}^{N} J_{kj}\, \sigma_k^z \sigma_j^z \Big\rangle_{H^{(k)}}^{2} \Big\} + 2\, E\Big\{ \Big\langle \frac{1}{N}\sum_{j=1}^{N} J_{kj}\, \sigma_k^z \sigma_j^z \Big\rangle_{H}^{2} \Big\} + \frac{C}{N^2}\, E\{ h_k^2 \} . \qquad (14) $$
Since $H^{(k)}$ does not depend on $\{\xi_k^\mu\}_{1 \le \mu \le p}$, averaging with respect to $\{\xi_k^\mu\}_{1 \le \mu \le p}$ and using that
$$ E\{ J_{ik} J_{jk} \} = N^{-1} J_{ij} , $$
we get
$$ \sum_{k=1}^{N} E\Big\{ \Big\langle \frac{1}{N}\sum_{j=1}^{N} J_{kj}\, \sigma_k^z \sigma_j^z \Big\rangle_{H^{(k)}}^{2} \Big\} = \frac{1}{N^3} \sum_{k=1}^{N} E\Big\{ \sum_{i,j=1}^{N} J_{ij}\, \big\langle \sigma_k^z \sigma_i^z \big\rangle_{H^{(k)}} \big\langle \sigma_k^z \sigma_j^z \big\rangle_{H^{(k)}} \Big\} \le \frac{E\{ \|J\| \}}{N} , $$
where | | J | | is the norm of the random matrix J i j .
For the second term on the right-hand side (r.h.s.) of Equation (14), after summation with respect to $k$ we get
$$ \sum_{k=1}^{N} E\Big\{ \frac{1}{N^2} \sum_{i,j=1}^{N} J_{ki} J_{kj}\, \big\langle \sigma_k^z \sigma_i^z \big\rangle_{H} \big\langle \sigma_k^z \sigma_j^z \big\rangle_{H} \Big\} \le \sum_{k=1}^{N} E\Big\{ \frac{1}{N^2} \sum_{i,j=1}^{N} J_{ki} J_{kj}\, \big\langle (\sigma_k^z)^2\, \sigma_i^z \sigma_j^z \big\rangle_{H} \Big\} = E\Big\{ \frac{1}{N^2} \sum_{i,j=1}^{N} (J^2)_{ij}\, \big\langle \sigma_i^z \sigma_j^z \big\rangle_{H} \Big\} \le \frac{E\{ \|J^2\| \}}{N} . $$
Since it is well known that (see, e.g., [39])
$$ E\{ \|J^2\| \} \le C , \qquad (15) $$
the last two bounds, combined with Equations (13) and (14), prove Theorem 1. □
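The random-matrix bound of Equation (15) can be checked numerically (our sketch): for $p = \alpha N$ the spectral norm of the Hebbian matrix is expected, by standard random-matrix results, to concentrate near the Marchenko–Pastur edge $(1 + \sqrt{\alpha})^2$, so $\|J\|$ and $\|J^2\| = \|J\|^2$ stay bounded as $N$ grows:

```python
import numpy as np

# Spectral norm of the Hebbian matrix J for growing N at fixed alpha.
rng = np.random.default_rng(5)
alpha = 0.25
norms = {}
for N in (100, 400):
    p = int(alpha * N)
    xi = rng.choice([-1, 1], size=(p, N))
    J = (xi.T @ xi) / N
    norms[N] = np.linalg.norm(J, 2)    # largest eigenvalue (J is PSD)
print(norms)   # both values are close to (1 + sqrt(alpha))**2 = 2.25
```
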
Proof of Theorem 2.
To prove Theorem 2, let us add to the Hamiltonian $H$ an additional Gaussian field:
$$ H(\bar\gamma) = H + \sqrt{N} \sum_{\mu=1}^{p} \gamma_\mu\, m_\mu , $$
where
$$ m_\mu := \frac{1}{N} \sum_{i=1}^{N} \xi_i^\mu\, \sigma_i^z $$
are the so-called overlap parameters, and $\{\gamma_\mu\}_{\mu=1}^{p}$ are Gaussian random variables with zero mean and variance 1, independent of the $\{\xi_j^\mu\}$ and of each other. Using the Bogolyubov inequality, Equation (11), and then the Schwarz inequality and Equation (15), we get
$$ \big| E\{ f_N(\beta, H(\bar\gamma)) \} - E\{ f_N(\beta, H) \} \big| \le \frac{1}{\sqrt{N}}\, E\Big\{ \Big| \Big\langle \sum_{\mu=1}^{p} \gamma_\mu m_\mu \Big\rangle_{H(\bar\gamma)} \Big| \Big\} \le \alpha^{1/2}\, E^{1/2}\Big\{ \sum_{\mu=1}^{p} \langle m_\mu \rangle_{H(\bar\gamma)}^{2} \Big\} = \alpha^{1/2}\, E^{1/2}\Big\{ \frac{1}{N} \sum_{i,j=1}^{N} J_{ij}\, \langle \sigma_i^z \rangle_{H(\bar\gamma)} \langle \sigma_j^z \rangle_{H(\bar\gamma)} \Big\} \le C\, \alpha^{1/2} . \qquad (18) $$
Consider also the “approximate” Hamiltonian of the form
$$ H_a(\bar\gamma, \bar c) = H(\bar\gamma) + \frac{N}{2} \sum_{\mu=1}^{p} (m_\mu - c_\mu)^2 = -N \sum_{\mu=1}^{p} m_\mu c_\mu - \sum_{i=1}^{N} h_i \sigma_i^z - d \sum_{i=1}^{N} \sigma_i^x + \frac{N}{2} \sum_{\mu=1}^{p} (c_\mu)^2 + \sqrt{N} \sum_{\mu=1}^{p} \gamma_\mu\, m_\mu . $$
Note that, similarly to Equation (18), we have, uniformly in $\bar c \in \mathbb{R}^p$,
$$ \big| E\{ f_N(\beta, H_a(\bar\gamma, \bar c)) \} - E\{ f_N(\beta, H_a(0, \bar c)) \} \big| \le C\, \alpha^{1/2} . $$
By the Bogolyubov inequality of Equation (11), for any $\bar c \in \mathbb{R}^p$,
$$ 0 \le f_N(\beta, H_a(\bar\gamma, \bar c)) - f_N(\beta, H(\bar\gamma)) \le \frac{1}{2} \Big\langle \sum_{\mu=1}^{p} (m_\mu - c_\mu)^2 \Big\rangle_{H(\bar\gamma)} . $$
Hence
$$ 0 \le \min_{\bar c \in \mathbb{R}^p} f_N(\beta, H_a(\bar\gamma, \bar c)) - f_N(\beta, H(\bar\gamma)) \le \frac{1}{2} \Big\langle \sum_{\mu=1}^{p} \dot m_\mu^2 \Big\rangle_{H(\bar\gamma)} . $$
Here and below we denote
$$ \dot m_\mu := m_\mu - \langle m_\mu \rangle_{H(\bar\gamma)} . $$
Let $\{\bar e_k\}_{k=1}^{2^N}$ be the basis in which $H(\bar\gamma)$ is diagonal and
$$ H(\bar\gamma)\, \bar e_k = E_k\, \bar e_k, \qquad M^\mu_{jk} = (\dot m_\mu\, \bar e_k,\, \bar e_j) . $$
It is easy to see that
$$ \Big\langle \sum_{\mu=1}^{p} \dot m_\mu^2 \Big\rangle_{H(\bar\gamma)} = \frac{1}{Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \frac{1}{2}\big( e^{-\beta E_j} + e^{-\beta E_k} \big) . $$
Using the inequality
$$ \cosh x \le \frac{\sinh x}{x} + |\sinh x| , $$
we can write
$$ \Big\langle \sum_{\mu=1}^{p} \dot m_\mu^2 \Big\rangle_{H(\bar\gamma)} = \frac{1}{Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \frac{1}{2}\big( e^{-\beta E_j} + e^{-\beta E_k} \big) \le \frac{1}{Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2 \Big[ \frac{1}{\beta}\, \frac{e^{-\beta E_j} - e^{-\beta E_k}}{E_k - E_j} + \frac{1}{2}\big| e^{-\beta E_j} - e^{-\beta E_k} \big| \Big] = -\frac{1}{\beta} \sum_{\mu=1}^{p} \frac{\partial^2}{\partial \gamma_\mu^2} f_N(\beta, H(\bar\gamma)) + \Sigma , $$
where we have used that according to Equation (10)
$$ \frac{1}{Z_N} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \frac{e^{-\beta E_j} - e^{-\beta E_k}}{E_k - E_j} = -\frac{\partial^2}{\partial \gamma_\mu^2} f_N(\beta, H(\bar\gamma)) . $$
To estimate $\Sigma$, we use the Hölder inequality [58], which yields
$$ \Sigma := \frac{1}{2 Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \big| e^{-\beta E_j} - e^{-\beta E_k} \big| \le \Big( \frac{1}{2 Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \frac{\big| e^{-\beta E_j} - e^{-\beta E_k} \big|}{|E_k - E_j|} \Big)^{2/3} \times \Big( \frac{1}{2 Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \big( e^{-\beta E_j} + e^{-\beta E_k} \big)\, |E_k - E_j|^2 \Big)^{1/3} = \Big( -\frac{1}{2} \sum_{\mu=1}^{p} \frac{\partial^2}{\partial \gamma_\mu^2} f_N(\beta, H(\bar\gamma)) \Big)^{2/3} \Sigma_1^{1/3} . $$
It is easy to see that
$$ \Sigma_1 := \frac{1}{2 Z_N} \sum_{\mu=1}^{p} \sum_{k,j=1}^{2^N} |M^\mu_{jk}|^2\, \big( e^{-\beta E_j} + e^{-\beta E_k} \big)\, |E_k - E_j|^2 = \sum_{\mu=1}^{p} \big\langle [m_\mu, H]\, [H, m_\mu] \big\rangle_{H(\bar\gamma)} = \sum_{\mu=1}^{p} \Big\langle \Big| \frac{1}{N} \sum_{j=1}^{N} \xi_j^\mu\, [\sigma_j^z,\, d\, \sigma_j^x] \Big|^2 \Big\rangle_{H(\bar\gamma)} = \frac{4 d^2}{N} \sum_{i,j=1}^{N} J_{ij}\, \langle \sigma_i^y \sigma_j^y \rangle_{H(\bar\gamma)} \le 4 d^2\, \|J\| . $$
On the other hand, averaging with respect to the Gaussian variables γ μ , we obtain a bound similar to Equation (18)
$$ \Big| E\Big\{ \sum_{\mu=1}^{p} \frac{\partial^2}{\partial \gamma_\mu^2} f_N(\beta, H(\bar\gamma)) \Big\} \Big| = \Big| E\Big\{ \sum_{\mu=1}^{p} \gamma_\mu\, \frac{\partial}{\partial \gamma_\mu} f_N(\beta, H(\bar\gamma)) \Big\} \Big| = \Big| E\Big\{ \frac{1}{\sqrt{N}} \sum_{\mu=1}^{p} \gamma_\mu\, \langle m_\mu \rangle_{H(\bar\gamma)} \Big\} \Big| \le \alpha^{1/2}\, E^{1/2}\Big\{ \frac{1}{N} \sum_{i,j=1}^{N} J_{ij}\, \langle \sigma_i^z \rangle_{H(\bar\gamma)} \langle \sigma_j^z \rangle_{H(\bar\gamma)} \Big\} \le C\, \alpha^{1/2} . $$
The above inequalities combined with Equations (22), (18) and (20) yield
$$ -C\, \alpha^{1/2} \le E\Big\{ \min_{\bar c \in \mathbb{R}^p} f_N(\beta, H_a(0, \bar c)) \Big\} - E\{ f_N(\beta, H) \} \le C\, \alpha^{1/3} . $$
In view of this bound, it suffices to find $E\{\min_{\bar c \in \mathbb{R}^p} f_N(\beta, H_a(0, \bar c))\}$. By using the convexity properties of the function $\log 2\cosh \beta x$ in $x$ for $x > 0$, we get, with $C_i = \xi_i^1 h + \sum_\mu \xi_i^\mu c_\mu$,
$$ f_N(\beta, H_a(0, \bar c)) = -\frac{1}{\beta N} \sum_{i=1}^{N} \log 2\cosh\big( \beta\, (C_i^2 + d^2)^{1/2} \big) + \frac{1}{2} \sum_{\mu=1}^{p} (c_\mu)^2 \ge -\frac{1}{\beta} \log 2\cosh\Big( \beta\, \Big( \frac{1}{N} \sum_{i=1}^{N} C_i^2 + d^2 \Big)^{1/2} \Big) + \frac{1}{2} \sum_{\mu=1}^{p} (c_\mu)^2 $$
$$ = -\frac{1}{\beta} \log 2\cosh\Big( \beta\, \Big( (c_1 + h)^2 + \sum_{\mu=2}^{p} (c_\mu)^2 + d^2 + \big( A(\bar c + h \bar e_1),\, (\bar c + h \bar e_1) \big) \Big)^{1/2} \Big) + \frac{1}{2} \sum_{\mu=1}^{p} (c_\mu)^2 $$
$$ \ge -\frac{1}{\beta} \log 2\cosh\Big( \beta\, \Big( (c_1 + h)^2 + \sum_{\mu=2}^{p} (c_\mu)^2 + d^2 \Big)^{1/2} \Big) + \frac{1}{2} \sum_{\mu=1}^{p} (c_\mu)^2 - \frac{\|A\|}{2} \big( h^2 + (\bar c, \bar c) \big) . $$
Here $\bar e_1 = (1, 0, \dots, 0) \in \mathbb{R}^p$, and the matrix $A$ is defined as
$$ A_{\mu\nu} = \frac{1}{N} (1 - \delta_{\mu\nu})\, (\xi^\mu, \xi^\nu) . $$
It is known (see e.g., [39]) that
$$ E\{ \|A\|^2 \} \le C\, \alpha . \qquad (24) $$
Moreover, it is easy to see that if c ¯ is a minimum point, we have that
$$ c_\mu = \langle m_\mu \rangle_{H_a}, \qquad \sum_{\mu=2}^{p} (c_\mu)^2 \le \frac{1}{N} \sum_{i,j=1}^{N} J_{ij}\, \langle \sigma_i^z \rangle_{H_a} \langle \sigma_j^z \rangle_{H_a} \le \|J\| . $$
Hence Equations (24) and (15) imply
$$ E\Big\{ \min_{\bar c \in \mathbb{R}^p} f_N(\beta, H_a(0, \bar c)) \Big\} \ge \min_{\bar c \in \mathbb{R}^p} \Big\{ -\frac{1}{\beta} \log 2\cosh\Big( \beta\, \Big( (c_1 + h)^2 + \sum_{\mu=2}^{p} (c_\mu)^2 + d^2 \Big)^{1/2} \Big) + \frac{1}{2} \sum_{\mu=1}^{p} (c_\mu)^2 \Big\} - C\, \alpha^{1/2} $$
$$ \ge \min_{r \ge 0,\; 0 \le \varphi \le 2\pi} \Big\{ -\frac{1}{\beta} \log 2\cosh\Big( \beta\, \big( (r \sin\varphi + h)^2 + r^2 \cos^2\varphi + d^2 \big)^{1/2} \Big) + \frac{r^2}{2} \Big\} - C\, \alpha^{1/2} $$
$$ \ge \min_{m \ge 0} \Big\{ -\frac{1}{\beta} \log 2\cosh\Big( \beta\, \big( (m + h)^2 + d^2 \big)^{1/2} \Big) + \frac{m^2}{2} \Big\} - C\, \alpha^{1/2} . $$
Let us now discuss briefly the equation for the point at which the r.h.s. of Equation (8) attains its minimum. It is easy to see that it has the form
$$ m = (m + h)\, \frac{\tanh\big( \beta\, ((m+h)^2 + d^2)^{1/2} \big)}{\big( (m+h)^2 + d^2 \big)^{1/2}} . $$
For h = 0 it takes the form
$$ m = m\, \frac{\tanh\big( \beta\, (m^2 + d^2)^{1/2} \big)}{(m^2 + d^2)^{1/2}} . \qquad (29) $$
It is evident that it always has the solution $m = 0$. To find other solutions we should study the equation
$$ (m^2 + d^2)^{1/2} = \tanh\big( \beta\, (m^2 + d^2)^{1/2} \big) . $$
For $d > 1$ this equation has no solutions, because the r.h.s. is less than 1 while the left-hand side (l.h.s.) is larger than 1. For $\beta < 1$ the equation also has no solutions, since it is well known that the equation $\tanh \beta v = v$ has no solutions except $v = 0$ for $\beta < 1$.
For $d < 1$ there is a critical point $\beta(d)$ such that for $\beta < \beta(d)$ Equation (29) has the unique solution $m = 0$, while for $\beta > \beta(d)$ there is also a non-zero solution of Equation (29). This critical value $\beta(d)$ is the solution of the equation
$$ d = \tanh(\beta d) . \qquad (31) $$
It is easy to see that $\beta(0) = 1$ and $\beta(d) \to \infty$ as $d \to 1$ ($d < 1$). One can also see that $\beta'(d) \ge 0$, since it follows from Equation (31) that
$$ \beta'(d) = \frac{\cosh^2(\beta d)}{d} \Big( 1 - \frac{\beta}{\cosh^2(\beta d)} \Big) $$
and the r.h.s. here is positive, since $\beta \cosh^{-2}(\beta d)$ is the derivative of the r.h.s. of Equation (31) with respect to $d$, and at the solution point of Equation (31) this derivative is less than 1. Moreover, since $\tanh(\beta d) \simeq 1 - 2 e^{-2\beta d}$ for $\beta d \gg 1$, we have from Equation (31) that
$$ e^{-2\beta d} \sim (1 - d), \qquad \beta(d) \sim \frac{1}{2} \log (1 - d)^{-1}, \qquad d \to 1 . $$
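The critical equation $d = \tanh(\beta d)$ of Equation (31) solves in closed form, $\beta(d) = \mathrm{arctanh}(d)/d$, which makes the limits just discussed easy to check numerically (our sketch):

```python
import numpy as np

# Critical inverse temperature from d = tanh(beta * d): beta(d) = arctanh(d) / d.
def beta_c(d):
    return np.arctanh(d) / d

print(beta_c(1e-6))    # tends to 1 as d -> 0
print(beta_c(0.5))
print(beta_c(0.999))   # grows like (1/2) * log(1 / (1 - d)) as d -> 1
```
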

Author Contributions

Conceptualization, M.S. and B.T.; supervision, M.S. and B.T.; writing—review and editing, C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feng, J.; Tirozzi, B. Stochastic resonance tuned by correlations in neural models. Phys. Rev. E 2000, 61, 4207. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Feng, J.; Brown, D.; Wei, G.; Tirozzi, B. Detectable and undetectable input signals for the integrate-and-fire model. J. Phys. A Math. Gen. 2001, 34, 1637. [Google Scholar] [CrossRef]
  3. Feng, J.; Tartaglia, G.; Tirozzi, B. A note on minimum-variance theory and beyond. J. Phys. A Math. Gen. 2004, 37, 4685. [Google Scholar] [CrossRef] [Green Version]
  4. Reutskiy, S.; Rossoni, E.; Tirozzi, B. Conduction in bundles of demyelinated nerve fibers: Computer simulation. Biol. Cybern. 2003, 89, 439–448. [Google Scholar] [CrossRef]
  5. Menna, M.; Rotundo, G.; Tirozzi, B. Distinguishing between chaotic and stochastic systems in financial time series. Int. J. Mod. Phys. C 2002, 13, 31–39. [Google Scholar] [CrossRef]
  6. Casaioli, M.; Mantovani, R.; Scorzoni, F.P.; Puca, S.; Speranza, A.; Tirozzi, B. Linear and nonlinear post-processing of numerically forecasted surface temperature. Nonlinear Process. Geophys. 2003, 10, 373–383. [Google Scholar] [CrossRef]
  7. Tirozzi, B.; Puca, S.; Pittalis, S.; Bruschi, A.; Morucci, S.; Ferraro, E.; Corsini, S. Neural Networks and Sea Time Series: Reconstruction and Extreme-Event Analysis; Birkhäuser: Basel, Switzerland, 2007. [Google Scholar]
  8. Puca, S.; Tirozzi, B. A Neural Algorithm for the Reconstruction of Space-Time Correlated Series. Semin. Fachbereich Math. Hagen 2003, 74, 81–89. [Google Scholar]
  9. Belardinelli, P.; Capoleoni, S.; Tirozzi, B.; Coluzza, C. Application of a segmentation algorithm to quantum dots study. J. Vac. Sci. Technol. B Microelectron. Nanometer Struct. Process. Meas. Phenom. 2004, 22, 588–591. [Google Scholar] [CrossRef]
  10. Feng, J.; Shcherbina, M.; Tirozzi, B. On the critical capacity of the Hopfield model. Commun. Math. Phys. 2001, 216, 139–177. [Google Scholar] [CrossRef] [Green Version]
  11. Shcherbina, M.; Tirozzi, B. Generalization and learning error for nonlinear perceptron. Math. Comput. Model. 2002, 35, 259–271. [Google Scholar] [CrossRef]
  12. Shcherbina, M.; Tirozzi, B. On the volume of the intersection of a sphere with random half spaces. Comptes Rendus Math. 2002, 334, 803–806. [Google Scholar] [CrossRef]
  13. Shcherbina, M.; Tirozzi, B. Rigorous solution of the Gardner problem. Commun. Math. Phys. 2003, 234, 383–422. [Google Scholar] [CrossRef] [Green Version]
  14. Shcherbina, M.; Tirozzi, B. Central Limit Theorems for the Free Energy of the Modified Gardner Model. Markov Process. Relat. Fields 2005, 11, 133–144. [Google Scholar]
  15. Shcherbina, M.; Tirozzi, B. A perturbative expansion for the Hopfield model. Helv. Phys. Acta 1995, 68, 470–491. [Google Scholar]
  16. Agliari, E.; Barra, A.; Tirozzi, B. Free energies of Boltzmann Machines: Self-averaging, annealed and replica symmetric approximations in the thermodynamic limit. J. Stat. Mech. Theory Exp. 2019, 2019, 033301. [Google Scholar] [CrossRef] [Green Version]
  17. Guerra, F. Broken replica symmetry bounds in the mean field spin glass model. Commun. Math. Phys. 2003, 233, 1–12. [Google Scholar] [CrossRef] [Green Version]
  18. Barra, A.; Guerra, F.; Mingione, E. Interpolating the Sherrington–Kirkpatrick replica trick. Philos. Mag. 2012, 92, 78–97. [Google Scholar] [CrossRef]
  19. Bovier, A.; Gayrard, V. Hopfield models as generalized random mean field models. In Mathematical Aspects of Spin Glasses and Neural Networks; Birkhäuser: Boston, MA, USA, 1998; pp. 3–89. [Google Scholar]
  20. Bovier, A.; Gayrard, V.; Picco, P. Gibbs states of the Hopfield model with extensively many patterns. J. Stat. Phys. 1995, 79, 395–414. [Google Scholar] [CrossRef]
  21. Bovier, A. Statistical Mechanics of Disordered Systems: A Mathematical Perspective; Cambridge University Press: Cambridge, UK, 2006; Volume 18. [Google Scholar]
  22. Talagrand, M. Spin Glasses: A Challenge for Mathematicians: Cavity and Mean Field Models; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 46. [Google Scholar]
  23. Talagrand, M. Rigorous results for the Hopfield model with many patterns. Probab. Theory Relat. Fields 1998, 110, 177–275. [Google Scholar] [CrossRef]
  24. Talagrand, M. Exponential inequalities and convergence of moments in the replica-symmetric regime of the Hopfield model. Ann. Probab. 2000, 28, 1393–1469. [Google Scholar] [CrossRef]
  25. Löwe, M.; Vermet, F. The storage capacity of the Hopfield model and moderate deviations. Stat. Probab. Lett. 2005, 75, 237–248. [Google Scholar] [CrossRef] [Green Version]
  26. Comets, F.; Kurkova, I.; Trashorras, J. Fluctuations of the free energy in the high temperature Hopfield model. Stoch. Process. Their Appl. 2004, 113, 1–35. [Google Scholar] [CrossRef]
  27. Grover, L.K. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 1997, 79, 325. [Google Scholar] [CrossRef] [Green Version]
  28. Weinstein, Y.S.; Pravia, M.; Fortunato, E.; Lloyd, S.; Cory, D.G. Implementation of the quantum Fourier transform. Phys. Rev. Lett. 2001, 86, 1889. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Perus, M.; Bischof, H. A neural-network-like quantum information processing system. arXiv 2003, arXiv:quant-ph/0305072. [Google Scholar]
  31. Shafee, F. Semiclassical neural network. arXiv 2002, arXiv:quant-ph/0202015. [Google Scholar]
  32. Ventura, D.; Martinez, T. Quantum associative memory. Inf. Sci. 2000, 124, 273–296. [Google Scholar] [CrossRef] [Green Version]
  33. Neigovzen, R.; Glaser, S.J.; Sollacher, R.; Neves, J. Quantum pattern recognition with liquid-state nuclear magnetic resonance. arXiv 2008, arXiv:0802.1592. [Google Scholar]
  34. Vandersypen, L.M.; Chuang, I.L.; Suter, D. Liquid-State NMR Quantum Computing. eMagRes 2010. [Google Scholar] [CrossRef]
  35. Farhi, E.; Goldstone, J.; Gutmann, S.; Sipser, M. Quantum computation by adiabatic evolution. arXiv 2000, arXiv:quant-ph/0001106. [Google Scholar]
  36. Sorella, S.; Santoro, G.E.; Becca, F. SISSA Lecture Notes on Numerical Methods for Strongly Correlated Electrons; International School for Advanced Studies (SISSA): Trieste, Italy, 2016. [Google Scholar]
  37. Inoue, J.I. Pattern-recalling processes in quantum Hopfield networks far from saturation. J. Phys. Conf. Ser. 2011, 297, 012012. [Google Scholar] [CrossRef] [Green Version]
  38. Rebentrost, P.; Bromley, T.R.; Weedbrook, C.; Lloyd, S. Quantum Hopfield neural network. Phys. Rev. A 2018, 98, 042308. [Google Scholar] [CrossRef] [Green Version]
  39. Shcherbina, M.; Tirozzi, B. The free energy of a class of Hopfield models. J. Stat. Phys. 1993, 72, 113–125. [Google Scholar] [CrossRef]
  40. Khezri, M.; Grover, J.A.; Basham, J.I.; Disseler, S.M.; Chen, H.; Novikov, S.; Zick, K.M.; Lidar, D.A. Anneal-path correction in flux qubits. arXiv 2020, arXiv:2002.11217. [Google Scholar]
  41. Do, M.; Wang, Z.; O’Gorman, B.; Venturelli, D.; Rieffel, E.; Frank, J. Planning for Compilation of a Quantum Algorithm for Graph Coloring. arXiv 2020, arXiv:2002.10917. [Google Scholar]
  42. Dong, Y.; Meng, X.; Whaley, K.B.; Lin, L. Efficient phase factor evaluation in quantum signal processing. arXiv 2020, arXiv:2002.11649. [Google Scholar]
  43. Magniez, F.; Nayak, A. Quantum Distributed Complexity of Set Disjointness on a Line. arXiv 2020, arXiv:2002.11795. [Google Scholar]
  44. Zhou, L.; Wang, S.T.; Choi, S.; Pichler, H.; Lukin, M.D. Quantum Approximate Optimization Algorithm: Performance, Mechanism, and Implementation on Near-Term Devices. arXiv 2018, arXiv:1812.01041. [Google Scholar]
  45. Headley, D.; Müller, T.; Martin, A.; Solano, E.; Sanz, M.; Wilhelm, F.K. Approximating the Quantum Approximate Optimisation Algorithm. arXiv 2020, arXiv:2002.12215. [Google Scholar]
  46. Suzuki, S.; Inoue, J.I.; Chakrabarti, B.K. Quantum Ising Phases and Transitions in Transverse Ising Models; Springer: Berlin/Heidelberg, Germany, 2012; Volume 862. [Google Scholar]
  47. Suzuki, M. Relationship between d-dimensional quantal spin systems and (d+1)-dimensional Ising systems: Equivalence, critical exponents and systematic approximants of the partition function and spin correlations. Prog. Theor. Phys. 1976, 56, 1454–1469. [Google Scholar] [CrossRef]
  48. Nishimori, H.; Nonomura, Y. Quantum effects in neural networks. J. Phys. Soc. Jpn. 1996, 65, 3780–3796. [Google Scholar] [CrossRef] [Green Version]
  49. Mukherjee, S.; Chakrabarti, B.K. On the Question of Ergodicity in Quantum Spin Glass Phase and Its Role in Quantum Annealing. J. Phys. Soc. Jpn. 2019, 88, 061004. [Google Scholar] [CrossRef] [Green Version]
  50. Pastur, L.; Shcherbina, M.; Tirozzi, B. The replica-symmetric solution without replica trick for the Hopfield model. J. Stat. Phys. 1994, 74, 1161–1183. [Google Scholar] [CrossRef]
  51. Amit, D.J.; Gutfreund, H.; Sompolinsky, H. Statistical mechanics of neural networks near saturation. Ann. Phys. 1987, 173, 30–67. [Google Scholar] [CrossRef]
  52. Gallavotti, G. Instabilities and phase transitions in the Ising model. A review. La Rivista del Nuovo Cimento (1971–1977) 1972, 2, 133–169. [Google Scholar] [CrossRef]
  53. Bogolyubov, N.N. A Method for Studying Model Hamiltonians: A Minimax Principle for Problems in Statistical Physics; Elsevier: Amsterdam, The Netherlands, 2013; Volume 43. [Google Scholar]
  54. John, F. Partial Differential Equations; Applied Mathematical Sciences; Springer: New York, NY, USA, 1991. [Google Scholar]
  55. Dharmadhikari, S.; Fabian, V.; Jogdeo, K. Bounds on the moments of martingales. Ann. Math. Stat. 1968, 39, 1719–1723. [Google Scholar] [CrossRef]
  56. Bityutskov, V. Bunyakovskii inequality. In Encyclopaedia of Mathematics; Hazewinkel, M., Ed.; Kluwer Academic Publishers: Berlin, Germany, 2001. [Google Scholar]
  57. Bogolyubov, N.N., Jr. On model dynamical systems in statistical mechanics. Physica 1966, 32, 933–944. [Google Scholar] [CrossRef]
  58. Rudin, W. Real and Complex Analysis; Mathematics Series; McGraw-Hill: New York, NY, USA, 1987. [Google Scholar]