Article

Investigation of Finite-Size 2D Ising Model with a Noisy Matrix of Spin-Spin Interactions

by Boris Kryzhanovsky, Magomed Malsagov and Iakov Karandashev *
Scientific Research Institute for System Analysis, Russian Academy of Sciences, 117218 Moscow, Russia
* Author to whom correspondence should be addressed.
Entropy 2018, 20(8), 585; https://doi.org/10.3390/e20080585
Submission received: 20 June 2018 / Revised: 13 July 2018 / Accepted: 2 August 2018 / Published: 7 August 2018
(This article belongs to the Special Issue Entropy and Complexity of Data)

Abstract:
We analyze changes in the thermodynamic properties of a spin system when it passes from the classical two-dimensional Ising model to the spin glass model, where spin-spin interactions are random in their values and signs. Formally, the transition reduces to a gradual change in the amplitude of the multiplicative noise (distributed uniformly with a mean equal to one) superimposed over the initial Ising matrix of interacting spins. Allowing for the noise, we obtain analytical expressions that are valid for lattices of finite size. We compare our results with the results of computer simulations performed for square N = L × L lattices with linear dimensions L = 50 ÷ 1000. We find experimentally the dependences of the critical values (the critical temperature, the internal energy, entropy and the specific heat), as well as of the ground-state energy and magnetization, on the amplitude of the noise. We show that when the variance of the noise reaches one, the ground state jumps from the fully correlated state to an uncorrelated one and its magnetization jumps from 1 to 0. At the same time, the phase transition that is present at lower noise levels disappears.

1. Introduction

Calculation of the partition function is an essential problem of statistical physics and informatics. Only a few conceptual models allow exact solutions [1,2,3,4,5,6]. Among these, the 2D Ising model [7], though simple, deserves special attention because of its importance for investigating critical effects. The Edwards-Anderson model [8] and the Sherrington-Kirkpatrick model [9], which contributed a lot to the development of spin glass theory, are also worth mentioning. However, there are not many models that permit exact solutions, which is why numerical methods are mostly used for tackling complex systems. Of these, two methods are most suitable for our purpose. The first is the Monte-Carlo method [10,11]. It enables us to analyze a system and determine its critical parameters quite accurately [12,13,14,15,16]. A thorough treatment of the method can be found in [17,18]. Unfortunately, the method needs a great deal of computation and does not allow direct calculation of the free energy. The second method uses the approach of [19,20], which has recently given rise to a fast algorithm [21,22] that finds the free energy by computing the determinant of a matrix. The algorithm is popular because it allows the user to compute the free energy quite accurately and, at the same time, to determine the energy and configuration of the ground state of the system.
The methods of statistical physics help researchers to understand the behavior of complex neural nets and to evaluate the capacity of neural-network storage systems [23,24,25,26,27,28]. Machine learning and computer-aided image processing need fast calculations of the partition function of specific interconnection matrices [29,30]. The realization of Hinton's ideas [31,32] gave rise to algorithms of deep learning and image processing [33,34,35,36]. Based on the optimization of the free energy of a spin (neuron) system, these algorithms, from the formal viewpoint, come down to the optimization of the spin correlation in neighboring layers or within a single layer of a neural net. It should be kept in mind that if the system has a phase transition, the spin correlation grows abruptly at the critical point (the correlation length becomes nearly as great as the size of the whole system). In this case the optimization of the neural network becomes temperature dependent, which makes the learning algorithm almost impracticable.
The aim of the paper is to study the properties of a finite spin system whose Hamiltonian is defined as the quadratic Functional (1). The functional is often used in machine learning and image processing. The quantities $s_i = \pm 1$ may stand either for the pixel class (object/background) in an image [35] or for the neuron activity indicator in a Bayesian neural network [36]. We will use the physical notation, calling the quantities $s_i = \pm 1$ spins. The model under consideration has two limiting cases. The conventional 2D Ising model with regular interconnections presents the first case; the Edwards-Anderson model is the second case. The properties of our model lie somewhere in between. We introduce adjusting parameters in Functional (1), which allows us to pass from the 2D Ising model to the Edwards-Anderson model in a smooth manner and to investigate the thermodynamic characteristics of the system in the transient regime.
To avoid misunderstanding, let us point out two things. First, our interest is in finite systems. For this reason, there is an expected discrepancy with the Onsager results obtained at $N \to \infty$. Second, we cannot use the results of spin glass theory to the full, because the finite system under consideration is ergodic: it has neither the multiple phase transitions caused by frustrations nor self-averaging [37,38].

2. Essential Expressions, the Equation of State

Let us consider a system described by the Hamiltonian:
$$E = -\frac{1}{N}\sum_{i>j}^{N} J_{ij}\, s_i s_j . \qquad (1)$$
This system consists of $N$ Ising spins $s_i = \pm 1$ ($i = 1, 2, \ldots, N$) positioned at the nodes of a planar grid, the nodes being numbered by the index $i$. Only interactions with the four nearest neighbors are taken into account. The spin-spin interactions $J_{ij}$ are random and defined as
$$J_{ij} = J\,(1 + \varepsilon_{ij}), \qquad (2)$$
where $\varepsilon_{ij}$ is a random zero-mean variable distributed uniformly over the interval $\varepsilon_{ij} \in [-\eta, \eta]$. We have chosen the uniform distribution of $\varepsilon_{ij}$ to be able to control the sign of $J_{ij}$: when $\eta \le 1$, all interactions are non-negative ($J_{ij} \ge 0$). For the sake of simplicity, we assume that $J = 1$.
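For illustration, here is a minimal sketch (our own, in Python/NumPy; not the code used in the paper) of how the noisy bond set (2) can be generated for an $L \times L$ lattice with free boundary conditions; the function name and array layout are ours.

```python
import numpy as np

def noisy_bonds(L, eta, J=1.0, seed=0):
    """Bonds J_ij = J*(1 + eps_ij), eps_ij ~ Uniform[-eta, eta],
    for an L x L square lattice with free boundary conditions."""
    rng = np.random.default_rng(seed)
    # bonds between site (i, j) and its right neighbour (i, j+1)
    Jh = J * (1.0 + rng.uniform(-eta, eta, size=(L, L - 1)))
    # bonds between site (i, j) and its lower neighbour (i+1, j)
    Jv = J * (1.0 + rng.uniform(-eta, eta, size=(L - 1, L)))
    return Jh, Jv

# For eta <= 1 every bond stays non-negative; for eta > 1 negative (frustrating) bonds appear.
for eta in (0.8, 1.5):
    Jh, Jv = noisy_bonds(400, eta)
    print(eta, min(Jh.min(), Jv.min()))
```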
Our interest is the free energy of the system:
$$f = -\frac{1}{N}\ln Z, \qquad (3)$$
where the partition function $Z = \sum_{S} e^{-N\beta E(S)}$ is defined as a sum over all possible configurations $S$, and $\beta = 1/kT$ is the inverse temperature. The knowledge of the free energy makes it possible to compute the basic measurable parameters of the system:
$$U = \frac{\partial f}{\partial \beta}, \qquad \sigma^2 = -\frac{\partial^2 f}{\partial \beta^2}, \qquad C = -\beta^2\frac{\partial^2 f}{\partial \beta^2}, \qquad (4)$$
where $U = \langle E\rangle$ is the ensemble average of the energy at a given $\beta$, $\sigma^2 = \langle E^2\rangle - \langle E\rangle^2$ is the variance of the energy and $C = \beta^2\sigma^2$ is the specific heat.
Along with that, we are interested in the configuration $S_0$ of the ground state, its energy $E_0 = E(S_0)$ and its magnetization $M_0 = \frac{1}{N}\sum_{i=1}^{N} S_{0i}$.
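The following brute-force sketch (ours, for illustration only; it is practical only for $L \le 4$) evaluates these definitions literally: it enumerates all $2^N$ configurations and computes $f$, $U = \langle E\rangle$ and the ground-state quantities $E_0$, $M_0$ for one realization of the noisy couplings.

```python
import numpy as np
from itertools import product

def brute_force(L, eta, beta, seed=0):
    """Exact f = -(1/N) ln Z, U = <E>, E0 and M0 for a tiny L x L lattice (Eqs. (1)-(4))."""
    rng = np.random.default_rng(seed)
    N = L * L
    Jh = 1.0 + rng.uniform(-eta, eta, (L, L - 1))   # horizontal bonds, J = 1
    Jv = 1.0 + rng.uniform(-eta, eta, (L - 1, L))   # vertical bonds
    configs = list(product((-1.0, 1.0), repeat=N))
    E = np.empty(len(configs))
    for k, s in enumerate(configs):
        S = np.array(s).reshape(L, L)
        E[k] = -(np.sum(Jh * S[:, :-1] * S[:, 1:]) + np.sum(Jv * S[:-1, :] * S[1:, :])) / N
    a = -N * beta * E                               # log Boltzmann weights
    lnZ = a.max() + np.log(np.exp(a - a.max()).sum())
    f = -lnZ / N                                    # Eq. (3)
    w = np.exp(a - a.max())
    U = np.sum(w * E) / w.sum()                     # ensemble average <E>
    k0 = int(np.argmin(E))                          # ground state
    M0 = abs(sum(configs[k0])) / N
    return f, U, E[k0], M0

print(brute_force(L=3, eta=0.5, beta=0.5))
```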
The properties of the system depend on the dimension of the system $N$ and on the adjusting parameter $\eta$. Unfortunately, we cannot allow for the effect of both parameters simultaneously, so we consider the contribution of each separately.

2.1. The Effect of the Finite Grid Dimension

Let us consider how the finite dimension of the grid affects its properties. Let us take $\eta = 0$ as the starting point. In this case the behavior of the system can be described by the expressions (see reference [39]) that hold for finite systems with free boundary conditions:
$$f = -\frac{\ln 2}{2} - \ln(\cosh z) - \frac{1}{2\pi}\int_0^{\pi}\ln\!\left(1 + \sqrt{1-\kappa^2\cos^2\theta}\,\right)d\theta,$$
$$U = -\frac{1}{1+\Delta}\left\{2\tanh z + \frac{\sinh^2 z - 1}{\sinh z\,\cosh z}\left[\frac{2}{\pi}K_1 - 1\right]\right\},$$
$$\sigma^2 = \frac{4J^2\coth^2 z}{\pi(1+\Delta)^2}\left\{a_1\left(K_1 - K_2\right) - \left(1-\tanh^2 z\right)\left[\frac{\pi}{2} + \left(2a_2\tanh^2 z - 1\right)K_1\right]\right\}, \qquad (5)$$
where
$$z = \frac{2\beta J}{1+\Delta}, \quad \kappa = \frac{2\sinh z}{(1+\delta)\cosh^2 z}, \quad \Delta = \frac{5}{4L}, \quad \delta = \frac{\pi^2}{L^2}, \quad a_1 = p\,(1+\delta)^2, \quad a_2 = 2p - 1, \quad p = \frac{\left(1-\sinh^2 z\right)^2}{(1+\delta)^2\cosh^4 z - 4\sinh^2 z}. \qquad (6)$$
Here $K_1 = K_1(\kappa)$ and $K_2 = K_2(\kappa)$ are the complete elliptic integrals of the first and second kind, respectively:
$$K_1(\kappa) = \int_0^{\pi/2}\left(1-\kappa^2\sin^2\varphi\right)^{-1/2}d\varphi, \qquad K_2(\kappa) = \int_0^{\pi/2}\left(1-\kappa^2\sin^2\varphi\right)^{1/2}d\varphi . \qquad (7)$$
Expressions (5)–(7) are the well-known Onsager solution [7], which is exact for $N \to \infty$, modified for the case of finite $N$. Though derived for $N \gg 1$, the expressions agree well with the experimental data even at relatively small grid dimensions ($L \sim 25$). As could be expected, when $N \to \infty$, Formulae (6) give $p \to 1$, $a_{1,2} \to 1$, $\Delta \to 0$, $\delta \to 0$, and Expressions (5) turn into the well-known ones [7].
Expressions (5) agree excellently with the experimental data: the relative error is less than 0.2% at $L = 50$. With growing $L$ the error decreases rapidly and is within the limits of the experimental error at $L = 1000$ ($\sim 10^{-5}$ for $\sigma^2$). By way of comparison, Figures 3, 6 and 7 give the plots of Functions (5) for $L = 400$.
Expressions (5) allow us to obtain the $N$-dependences of the critical values of the inverse temperature, the internal energy and the energy variance of the system:
$$\beta_{c0} = \beta^{*}\left(1 + \frac{1}{L}\right), \qquad U_{c0} = -\sqrt{2}\left(1 - \frac{1}{L}\right), \qquad \sigma_{c0}^2 = 2.4\left(\ln L - 0.5\right), \qquad (8)$$
where $\beta^{*} = \frac{1}{2}\ln(\sqrt{2}+1)$ is the critical value for $L \to \infty$ [7].
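A sketch of how the reconstructed Expressions (5), (6) and (8) can be evaluated numerically (Python with SciPy; this is our transcription of the formulae, so the finite-size coefficients $\Delta$ and $\delta$ in particular should be checked against [39]). It reproduces the limits $f(0) = -\ln 2$ and $f \approx -2\beta J$ at large $\beta$.

```python
import numpy as np
from scipy.integrate import quad

def f_finite(beta, L, J=1.0):
    """Free energy per spin from Eq. (5) with the finite-size parameters of Eq. (6)."""
    Delta = 5.0 / (4.0 * L)
    delta = np.pi**2 / L**2
    z = 2.0 * beta * J / (1.0 + Delta)
    kappa = 2.0 * np.sinh(z) / ((1.0 + delta) * np.cosh(z) ** 2)
    integral, _ = quad(lambda t: np.log(1.0 + np.sqrt(1.0 - (kappa * np.cos(t)) ** 2)),
                       0.0, np.pi)
    return -0.5 * np.log(2.0) - np.log(np.cosh(z)) - integral / (2.0 * np.pi)

def critical_values(L):
    """Zero-noise critical values of Eq. (8)."""
    beta_star = 0.5 * np.log(1.0 + np.sqrt(2.0))
    return (beta_star * (1.0 + 1.0 / L),
            -np.sqrt(2.0) * (1.0 - 1.0 / L),
            2.4 * (np.log(L) - 0.5))

print(f_finite(0.0, 400))      # -> -ln 2 = -0.6931...
print(critical_values(400))    # (beta_c0, U_c0, sigma_c0^2)
```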

2.2. The Effect of Noise

Let us now take into account the random character of the quantities $J_{ij}$ ($\eta \ne 0$). Let $D(E)$ be the number of states with energy $E$. Then the sum over states can be presented as $Z = \sum_E D(E)\, e^{-N\beta E}$. Passing from summation to integration, we get (to within an insignificant constant):
$$Z \sim \int e^{\,N[\Psi(E) - \beta E]}\, dE, \qquad (9)$$
where $\Psi(E) = \ln D(E)/N$. Applying the saddle-point method to the integral (9), we get $Z \approx \exp[-N f(\beta)]$, where
$$f(\beta) = \beta E - \Psi(E), \qquad \frac{d\Psi(E)}{dE} = \beta . \qquad (10)$$
The first expression in (10) defines the free energy; the second determines $E$ at the saddle point, where the derivative of the function $\Psi(E) - \beta E$ turns to zero.
The form of the spectral function $\Psi(E)$ is known only for the one-dimensional Ising model. That is why we turn to the so-called n-vicinity method [28] to calculate the spectral function. The idea of the method is to divide the whole space of $2^N$ states into $N+1$ classes ($n$-vicinities) and to approximate the energy distribution in each class by a corresponding Gaussian. In brief, the approach is as follows. Let us denote the ground-state configuration as $S_0$. Let the class $\Omega_n$ be the set of configurations $S_n$ that differ from $S_0$ in that they have $n$ spins directed oppositely to the spins in $S_0$. The number of configurations in the class is equal to the binomial coefficient $\binom{N}{n}$, all configurations having the same (relative) magnetization $m = N^{-1} S_n S_0^{T} = 1 - 2n/N$. The distribution of state energies within the n-vicinity was shown [28] to follow the normal distribution $D_n(E)$:
$$D_n(E) \approx \binom{N}{n}\sqrt{\frac{N}{2\pi\sigma_m^2}}\;\exp\left[-\frac{N}{2}\left(\frac{E - E_m}{\sigma_m}\right)^2\right], \qquad (11)$$
where
$$E_m = E_0\, m^2, \qquad \sigma_m^2 = 2\left(1 - m^2\right)\left(1 - \alpha m^2\right), \qquad \alpha = 1 - \sigma_{h0}^2/2 . \qquad (12)$$
Here $E_0$ is the ground-state energy and $\sigma_{h0}^2$ is the variance of the ground-state local fields. In our case $\sigma_{h0}^2 = \sigma_\eta^2/(1+\sigma_\eta^2)$, where $\sigma_\eta^2 = \eta^2/3$ is the variance of the interconnections $J_{ij}$.
The sought-for distribution $D(E)$ is found by summing $D_n(E)$ over all $n$. Using the Stirling formula and passing from summation to integration with respect to the variable $m = 1 - 2n/N$, we get for $D(E)$:
$$D(E) = \sum_{n=0}^{N} D_n(E) = \frac{N}{2\pi}\int_0^1 e^{-N F(m,E)}\,\frac{dm}{\sigma_m\sqrt{1-m^2}}, \qquad (13)$$
where
$$F(m,E) = -\ln 2 + \frac{1}{2}\left[(1-m)\ln(1-m) + (1+m)\ln(1+m) + \left(\frac{E - E_m}{\sigma_m}\right)^2\right]. \qquad (14)$$
If we evaluate the integral (13) by the saddle-point method, for the spectral function we get $\Psi(E) = -F(m,E)$, where $m$ is the solution of the equation $\partial F(m,E)/\partial m = 0$. Let us combine (13)–(14) and (9)–(10). Then the free energy can be written as
$$f(\beta) = F(m,E) + \beta E, \qquad (15)$$
where variables m = m ( β ) and E = E ( β ) are derived from the equations:
$$\ln\frac{1+m}{1-m} + 2\,\frac{E-E_m}{\sigma_m}\,\frac{\partial}{\partial m}\!\left(\frac{E-E_m}{\sigma_m}\right) = 0, \qquad \frac{E-E_m}{\sigma_m^2} + \beta = 0 . \qquad (16)$$
It is easy to notice that the set of Equations (16) is always solvable at $m = 0$. Correspondingly, when the value of $\beta$ is less than a certain critical value $\beta_c$, (16) and (12) give us $E_m = 0$, $\sigma_m^2 = 2$ and $E = -2\beta$, the free energy taking the form $f(\beta) = -\ln 2 - \beta^2$. The phase transition occurs when $\beta$ allows yet another solution of (16) with $m \ne 0$. Note that substituting the second equation of (16) into the first one allows us to eliminate the variable $E$. Doing so and performing several transformations, we obtain the equation of state that contains only one variable $m$:
$$\frac{1}{4m}\ln\frac{1+m}{1-m} = \bar{\beta} - \bar{\beta}^2\left(1-m^2\right)\left(1 + \tfrac{1}{2}\sigma_\eta^2\right), \qquad (17)$$
where $\bar{\beta} = \beta/r$. Here we introduced the adjusting coefficient $r$ to allow for the finite grid dimension: $r = 1$ when $L \to \infty$, while $r = 1.11$ gives excellent agreement with the experiments at $L = 400$. The critical temperature is defined as the value $\beta = \beta_c$ at which a nontrivial solution of (17) appears. This solution has to be found numerically: when $\beta > \beta_c$, we find $m \ne 0$ satisfying (17) and compute the corresponding value of the energy $E = E_m - \beta\sigma_m^2$. Substitution of these values into (15) yields the corresponding value of $f(\beta)$.
Unfortunately, the n-vicinity method has an essential limitation: it is applicable only when the condition $\left(\sum_{ij} J_{ij}\right)^2 / \left(N\sum_{ij} J_{ij}^2\right) \ge 4\ln 2$ holds. In our case this condition reduces to $(1+\sigma_\eta^2)\ln 2 \le 1$, that is, $\eta < 1.2$. For such relatively small values of $\eta$, Formulae (15)–(17) give $\beta_c$ and $f(\beta)$ that predict the experimental results well (see Figure 1 and Figure 2).
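A quick numerical check of this applicability condition (our reading of it, with the sums taken over ordered pairs so that every bond is counted twice; the ratio then reduces to $4/(1+\sigma_\eta^2)$ and the bound indeed gives $\eta \lesssim 1.2$):

```python
import numpy as np

def n_vicinity_condition(L, eta, seed=0):
    """Check (sum_ij J_ij)^2 / (N sum_ij J_ij^2) >= 4 ln 2 for one noisy L x L lattice."""
    rng = np.random.default_rng(seed)
    N = L * L
    bonds = 1.0 + rng.uniform(-eta, eta, 2 * L * (L - 1))  # all nearest-neighbour bonds, J = 1
    ratio = (2.0 * bonds.sum()) ** 2 / (N * 2.0 * np.sum(bonds ** 2))
    return ratio, bool(ratio >= 4.0 * np.log(2.0))

print(n_vicinity_condition(400, 1.0))   # ratio ~ 3.0 >= 4 ln 2 ~ 2.77: the method applies
print(n_vicinity_condition(400, 1.5))   # ratio ~ 2.3 <  2.77: outside the applicability range
```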

2.3. Evaluating the Spectral Density

The algorithm we use allows us to compute function f = f ( β ) and its derivatives. In turn, this allows us to investigate how energy distribution D ( E ) = exp [ N Ψ ( E ) ] varies with the noise amplitude. Indeed, it is easy to derive from Formulae (10) the equation for the spectral function:
$$\Psi(E) = \beta E - f(\beta), \qquad E = \frac{df}{d\beta}, \qquad (18)$$
and its derivatives
$$\frac{d\Psi}{dE} = \beta, \qquad \frac{d^2\Psi}{dE^2} = \left(\frac{d^2 f}{d\beta^2}\right)^{-1}. \qquad (19)$$
Note that $\Psi(E)$ is the entropy up to a constant, and Equations (18) are the well-known Legendre transformation, which is applicable for analyzing the spectral density of finite-dimension models [40,41]. It follows from these equations that when $\beta$ varies from $\beta = 0$ to $\beta = \infty$, $E$ changes from $0$ to $E_0$, and for each value of $\beta$ we have a pair of values of $E$ and $\Psi(E)$. In so doing we determine the form of the function $\Psi(E)$ and of its derivatives. The plots of the function $\Psi(E)$ and its derivatives presenting the experimental data are given in Section 4.
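A minimal sketch of this parametric Legendre transform (ours): given $f$ on a grid of $\beta$, it returns $E = df/d\beta$ and $\Psi = \beta E - f$; the toy check uses the paramagnetic-phase form $f = -\ln 2 - \beta^2$ derived above.

```python
import numpy as np

def spectral_function(beta, f):
    """Parametric Legendre transform of Eq. (18): E(beta) = df/dbeta, Psi = beta*E - f."""
    E = np.gradient(f, beta, edge_order=2)
    Psi = beta * E - f
    return E, Psi

beta = np.linspace(0.0, 0.3, 301)
E, Psi = spectral_function(beta, -np.log(2.0) - beta**2)
# expected in this regime: E = -2*beta and Psi = ln 2 - E^2/4
print(np.allclose(E, -2.0 * beta), np.allclose(Psi, np.log(2.0) - E**2 / 4.0))
```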
The minimum of the function $d^2\Psi/dE^2$ at the point $E = 0$ changes into a maximum as the noise amplitude grows. Let us find the value of $\eta$ at which this occurs. It can be noticed that as $E \to 0$ the entropy can be presented as the series:
$$\Psi(E) = \ln 2 - \frac{1}{2}\frac{E^2}{\sigma_J^2} + \frac{\mu_4}{4!}\frac{E^4}{\sigma_J^4}, \qquad \sigma_J^2 = 2\langle J_{ij}^2\rangle = 2\left(1 + \sigma_\eta^2\right), \qquad (20)$$
where $\mu_4 = \langle E^4\rangle/\sigma_J^4$ is the fourth cumulant, which in our case is described by the expression [28]:
$$\mu_4 = 4\left(5 - 6\sigma_\eta^2 - \tfrac{9}{5}\sigma_\eta^4\right)/\sigma_J^4 . \qquad (21)$$
From (20)–(21) it follows that at the center point of the curve ($E = 0$) the quantity $d^2\Psi/dE^2$ is determined by the expression:
$$\left.\frac{d^2\Psi}{dE^2}\right|_{E=0} = -\frac{1}{2\left(1+\sigma_\eta^2\right)}, \qquad (22)$$
and the fourth derivative d 4 Ψ / d E 4 | E = 0 = μ 4 / σ J 4 changes its sign at η = η c , when μ 4 = 0 :
$$\eta_c = \left[5\left(\sqrt{2}-1\right)\right]^{1/2}. \qquad (23)$$
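As a consistency check of (21)–(23) (using our transcription of the coefficients in (21)), the positive root of $\mu_4 = 0$ in the variable $\sigma_\eta^2$ indeed reproduces $\eta_c \approx 1.44$:

```python
import numpy as np

# mu_4 ~ 5 - 6*s - (9/5)*s^2, with s = sigma_eta^2 = eta^2/3 (our transcription of Eq. (21))
s_roots = np.roots([-9.0 / 5.0, -6.0, 5.0])
s_c = s_roots[s_roots > 0][0]
eta_c = np.sqrt(3.0 * s_c)
print(eta_c, np.sqrt(5.0 * (np.sqrt(2.0) - 1.0)))   # both ~ 1.439, i.e., Eq. (23)
```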

3. The Experiment Description

We make intensive use of the Kasteleyn-Fisher algorithm [19,20] to compute the free energy of the 2D square spin system. The algorithm gives exact results because the calculation of the partition function is reduced to the computation of the determinant of a matrix generated in accordance with the model under consideration. The algorithm permits us to calculate exactly the free energy of a spin system on an arbitrary planar graph with arbitrary links in polynomial time. More information about the algorithm can be found in [21]. In this paper, we use the implementation [22] of the algorithm, which gives the same results in a shorter time. Using this algorithm, we were able to examine the behavior of the free energy $f = f(\beta; \eta)$ and its derivatives for several lattices of different dimensions $N = L \times L$. Additionally, paper [22] offers an algorithm for finding the ground state. This algorithm helped us to investigate the energy and magnetization of the ground state as functions of the noise amplitude. For each value of $\eta$ we generated a large number of matrices, but the results were practically the same when we changed one matrix for another.
Let us point out that both algorithms we use are applicable only to planar lattices. This means that we considered only lattices with free boundary conditions, because lattices with periodic boundary conditions do not correspond to planar graphs. The linear size of the lattice varied from $L = 25$ to $L = 10^3$. Most of the plots present the results for $L = 400$; the results for other sizes did not differ qualitatively.
The free energy is computed to 15-digit accuracy after the decimal point. Because we use the finite-difference method to compute the derivatives, the number of significant digits after the decimal point is about 7 for $U(\beta)$ and 4 for $\sigma^2(\beta)$. With large grid dimensions ($L \sim 1000$) and with $\beta > 1$ the computation error becomes too large, and the plots of the second derivatives start exhibiting oscillations. It is interesting to note that the introduction of a little noise into the grid interconnections allows us to reduce these oscillations.
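A sketch of the differentiation step (ours): central differences of the computed $f(\beta)$ give $U$ and $\sigma^2$, and the position of the variance peak gives the critical point. As a stand-in for the output of the exact planar-graph algorithm, the example reuses the zero-noise finite-size expression f_finite from the sketch in Section 2.1.

```python
import numpy as np

def derivatives_of_f(beta, f):
    """U = df/dbeta and sigma^2 = -d^2 f/dbeta^2 by central finite differences (cf. Eq. (4))."""
    U = np.gradient(f, beta, edge_order=2)
    var = -np.gradient(U, beta, edge_order=2)
    return U, var

beta = np.linspace(0.05, 1.0, 951)
f = np.array([f_finite(b, 400) for b in beta])   # stand-in for the exact algorithm's output
U, var = derivatives_of_f(beta, f)
print(beta[np.argmax(var)], var.max())           # peak position close to beta_c0 of Eq. (8)
```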

4. Experimental Results

In the experiments, we calculate the free energy and its derivatives and find the ground-state configuration and energy. Particular attention is paid to finding the critical point and the corresponding quantities. The location of the maximum of the curve $\sigma^2 = \sigma^2(\beta)$ is used to find the critical temperature. The most important experimental data are presented in Figures 1–7 and Table 1.

4.1. The Free and Internal Energy

The experimental dependences $f = f(\beta)$ and $U = U(\beta)$ are shown in Figure 1 and Figure 2. It is seen from Figure 1 that the curves go down with $\eta$ because the ground-state energy grows in magnitude. When the noise is small ($\eta < 1.2$), the curves of the free energy $f(\beta)$ and of the internal energy $U(\beta)$ almost merge (Figure 1 and Figure 2). When $\eta < 1.7$, the curves $U(\beta)$ demonstrate a cusp (Figure 2) that corresponds to the phase transition. When $\eta \sim 1.7$ the cusp disappears, and a further increase of the noise changes only the asymptotic behavior of the curves $f(\beta)$ and $U(\beta)$ according to (26).

4.2. The Energy Variance

The curves $\sigma^2 = \sigma^2(\beta)$ are shown in Figure 3. Because the n-vicinity method gives only a piecewise-linear approximation of the energy variance, the red marks in Figure 3 indicate the values obtained by using the generalization of the Onsager solution to the finite-size case, Formula (5). The formula gives perfect agreement with the experimental data, yet it is applicable only in the zero-noise case.
The behavior of the curves $\sigma^2 = \sigma^2(\beta)$ near the point $\beta = 0$ is quite expected for any $\eta$: when $\beta = 0$, the energy variance is equal to $\sigma_J^2$ and, according to (20), grows in proportion to the noise variance $\sigma_\eta^2 = \eta^2/3$. At large $\beta$ the behavior of the curves $\sigma^2 = \sigma^2(\beta)$ depends strongly on $\eta$. It is seen in Figure 3 that the energy-variance peaks corresponding to the phase transition are observed only at $\eta < 1.7$. The peaks become lower with growing $\eta$ and at the same time move to the right. When $\eta > 1.8$, the peaks disappear altogether and only the maximum at $\beta = 0$ remains.
It is interesting that all the curves in Figure 3a have a common intersection point near $\beta \approx 0.29$. We could not find out why this is so. The intersection point moves slowly to the right with growing noise amplitude.

4.3. The Critical Temperature

The critical temperature is defined by the location of the maximum of the curve $\sigma^2 = \sigma^2(\beta)$ or by the presence of a cusp on it. Figure 4 shows how the location and height of the variance peak vary with growing noise. Holding true only for $\eta < 1.2$, the numerical solution of the equation of state (17) gives $\beta_c$ that agrees with the experimental data perfectly. For greater $\eta$ it is possible to use the approximate expression resulting from the experiment:
$$\beta_c \approx \beta_{c0}\left(1 + \frac{\sigma_\eta^2}{2}\right), \qquad (24)$$
where $\beta_{c0}$ is the zero-noise critical value given by (8). The peak height decreases linearly with the growing noise amplitude:
$$\sigma_c^2 \approx \sigma_{c0}^2\left(1 - \sigma_\eta\right), \qquad (25)$$
where $\sigma_{c0}^2$ is the variance at $\eta = 0$ defined in (8). It follows that as $\eta \to \sqrt{3}$, $\sigma_c^2$ falls to zero. This means that when $\eta > \sqrt{3}$, the variance peak disappears and we can say that the critical temperature is zero.
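A small sketch (ours) collecting the empirical dependences (24)–(25) together with the zero-noise values (8); the numbers it prints are the approximations themselves, not the measured data of Table 1.

```python
import numpy as np

def beta_c_approx(eta, L=400):
    """Eq. (24): the critical inverse temperature grows with the noise variance eta^2/3."""
    beta_c0 = 0.5 * np.log(1.0 + np.sqrt(2.0)) * (1.0 + 1.0 / L)
    return beta_c0 * (1.0 + (eta**2 / 3.0) / 2.0)

def sigma_c2_approx(eta, L=400):
    """Eq. (25): the variance peak vanishes as sigma_eta = eta/sqrt(3) approaches 1."""
    sigma_c0_sq = 2.4 * (np.log(L) - 0.5)
    return sigma_c0_sq * (1.0 - eta / np.sqrt(3.0))

for eta in (0.0, 0.5, 1.0, 1.5):
    print(eta, round(beta_c_approx(eta), 3), round(sigma_c2_approx(eta), 2))
```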

4.4. The Ground State

The results we obtained testify that when the noise amplitude $\eta \approx 1.7$ (i.e., $\sigma_\eta \approx 1$), the system changes qualitatively. The ground-state configuration experiences the most noticeable changes (see Figure 5). Clearly, with zero noise the ground state is fully correlated, that is, all spins are the same ($s_i \equiv 1$). This situation persists as long as all matrix elements $J_{ij} > 0$, that is, for $\eta < 1$. Moreover (see Figure 5), the ground-state energy proved to remain almost the same for $\sigma_\eta$ as large as $\sigma_\eta \approx 1$. Then it starts decreasing gradually and approaches the asymptotic value [42]:
$$E_0 = -1.317\,\sigma_\eta, \qquad (26)$$
which corresponds to the energy of the ground state in the Edwards-Anderson model. The ground-state magnetization changes stepwise from 1 to 0 when the noise deviation comes close to unity ($\sigma_\eta \approx 1$). A similar instability was discussed in [43,44].

4.5. The Entropy

The change of the ground-state configuration and energy results in a change of energy distribution density Ψ ( E ) . The curves of Ψ ( E ) and its derivatives are shown in Figure 6 and Figure 7.
The disappearance of the phase transition is easy to notice if we look at the curve of the second derivative $d^2\Psi/dE^2$. It is seen in Figure 7a that the dip in the middle of the curve ($E = 0$) rises with growing $\eta$ and, in agreement with (23), the minimum of $d^2\Psi/dE^2$ at $E = 0$ turns into a maximum when $\eta \approx 1.5$. The peaks at the points $E = \pm U_c$ move apart with growing $\eta$ ($U_c \to E_0$) and become lower (their height is $d^2\Psi/dE^2 = -\sigma_c^{-2}$), until they disappear completely at $\eta \approx 1.7$.
When η > 1.7 , curve d 2 Ψ / d E 2 has a noticeably convex shape and the phase transition peaks disappear. Moreover, in this case function d 2 Ψ / d E 2 is well described by the expression:
$$\frac{d^2\Psi}{dE^2} = -\frac{1}{\sigma_J^2\left(1 - \varepsilon^2\right)}, \qquad \varepsilon = \frac{E}{2E_0}\left(1 + \frac{E^2}{E_0^2}\right). \qquad (27)$$
Formula (27) gives a good approximation of the experimental data (accurate to 0.5% over the energy interval $0 \le |E| \le 0.91|E_0|$).
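A sketch evaluating the approximation (27) (our transcription) on the stated interval $0 \le |E| \le 0.91|E_0|$; at $E = 0$ it reduces to the value (22).

```python
import numpy as np

def d2Psi_dE2_approx(E, E0, eta):
    """Approximation (27) for the second derivative of the spectral function at strong noise."""
    sigmaJ2 = 2.0 * (1.0 + eta**2 / 3.0)
    eps = (E / (2.0 * E0)) * (1.0 + E**2 / E0**2)
    return -1.0 / (sigmaJ2 * (1.0 - eps**2))

E0, eta = -2.1, 2.0                              # illustrative values only
E = -np.linspace(0.0, 0.91 * abs(E0), 5)         # energies between 0 and 0.91*|E0|
print(d2Psi_dE2_approx(E, E0, eta))              # first entry equals -1/(2*(1+eta^2/3)), Eq. (22)
```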

5. Conclusions

In this paper, we have considered the Ising model on a two-dimensional grid with noise-polluted interconnections. In the limiting case $N \to \infty$ such a system demonstrates the following properties: with low noise it has all the characteristics of the conventional Ising model, while with high noise it turns into the Edwards-Anderson spin glass model. The goal of our experiments was to observe the transition between these two limiting cases in a finite-dimension system ($N \le 10^6$). It turned out that when the noise is weak ($\sigma_\eta < 1$), the behavior of the system is much like that of the conventional Ising model. We expected that with heavy noise ($\sigma_\eta \gg 1$) the behavior of the system would be like that of the Edwards-Anderson model. However, the experimental results differ significantly from this expectation. It turned out that even when the noise is relatively weak ($\sigma_\eta \sim 1$), the system undergoes considerable changes.
First, when $\sigma_\eta \sim 1$, the energy spectrum $D(E)$ changes radically (this is clearly seen in Figure 7): the curves of $d^2\Psi/dE^2$ have a two-humped form at $\sigma_\eta < 1$ and become simply convex at $\sigma_\eta > 1$. Moreover, the ground-state magnetization drops to zero when $\sigma_\eta > 1$. This means that when the threshold value $\eta = \sqrt{3}$ is surpassed, the ground-state configuration moves away from the initial state by a distance of $\frac{1}{2}N$ in Hamming terms. In other words, the system undergoes a zero-temperature phase transition. The transition is accompanied by the change of the ground-state energy from $E_0 = -2J$ to the asymptotic value (26).
Second, the experimental relation between the critical temperature and the noise variance differs greatly from the well-known [8] expression $kT_c = \left(\frac{2}{9}\sum_\alpha J_{i\alpha}^2\right)^{1/2}$, which in our terms takes the form:
$$\beta_c = \frac{3}{2\sqrt{2\left(1+\sigma_\eta^2\right)}}. \qquad (28)$$
We can see that the classical theory predicts that $\beta_c$ should fall with the growing noise deviation. Moreover, Expression (28) predicts finite values of $\beta_c$ for arbitrarily large $\eta$. The experiment yields the opposite result: in accordance with (24), $\beta_c$ grows in proportion to $\sigma_\eta^2$. The experiment also shows that $\beta_c$ grows with $\eta$, and when $\eta \to \sqrt{3}$ it reaches its maximum $\beta_c = 0.625$; the phase transition disappears at $\eta > \sqrt{3}$ ($\sigma_\eta > 1$). It can be said conceptually that when the threshold value $\eta = \sqrt{3}$ is surpassed, the jump $T_c \to 0$ occurs.
In our opinion, the difference between the experiment and the theoretical predictions is due to the finite dimension of the system. First, the finite system is ergodic and even at low temperatures does not have spontaneous magnetization, which can be tested easily with the help of a Monte-Carlo algorithm. Second, the self-averaging principle used for building the theory at $N \to \infty$ is not realizable for finite $N$. Additionally, the use of the terms "critical temperature" and "phase transition" is not quite correct in the description of finite-dimension systems. For our estimates, we use the approximate Expressions (5) and (6), which are valid for the special case of free boundary conditions and finite $L$. More general and more accurate estimates can be obtained using the results of papers [45,46], where the authors analyzed the Ising random-bond model with a tunable fraction of negative bonds, and of paper [47], where the finite size of the lattice was taken into account accurately.
Finite-dimension grids are of interest in image processing and machine learning. In our paper, the grid dimensions were $N = L \times L$ with $L = 25 \div 1000$. If we consider a planar grid as a model of a flat pixel image, such dimensions are very common. The main conclusion that can be drawn from our results is that learning algorithms based on free-energy optimization are temperature insensitive in the most common regime $\eta \gg 1$, because there is no observable phase transition in this case.

Author Contributions

Authors contributed equally. All authors participated in the design of the survey, its realization, and in the writing of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank V.S. Dotsenko for valuable discussions and helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baxter, R.J. Exactly Solved Models in Statistical Mechanics; Academic Press: London, UK, 1982. [Google Scholar]
  2. Stanley, H. Introduction to Phase Transitions and Critical Phenomena; Clarendon Press: Oxford, UK, 1971. [Google Scholar]
  3. Becker, R.; Doring, W. Ferromagnetism; Springer: Berlin, Germany, 1939. [Google Scholar]
  4. Huang, K. Statistical Mechanics; Wiley: New York, NY, USA, 1987. [Google Scholar]
  5. Kubo, R. An analytic method in statistical mechanics. Busserion Kenk. 1943, 1, 1–13. [Google Scholar]
  6. Dixon, J.M.; Tuszynski, J.A.; Clarkson, P. From Nonlinearity to Coherence, Universal Features of Nonlinear Behaviour in Many-Body Physics; Clarendon Press: Oxford, UK, 1997. [Google Scholar]
  7. Onsager, L. Crystal statistics. A two-dimensional model with an order–disorder transition. Phys. Rev. 1944, 65, 117–149. [Google Scholar] [CrossRef]
  8. Edwards, S.F.; Anderson, P.W. Theory of spin glasses. J. Phys. F Met. Phys. 1975, 5, 965. [Google Scholar] [CrossRef]
  9. Sherrington, D.; Kirkpatrick, S. Solvable model of a spin-glass. Phys. Rev. Lett. 1975, 35, 1792. [Google Scholar] [CrossRef]
  10. Metropolis, N.; Ulam, S. The Monte Carlo Method. J. Am. Stat. Assoc. 1949, 44, 335–341. [Google Scholar] [CrossRef] [PubMed]
  11. Fishman, G.S. Monte Carlo: Concepts, Algorithms, and Applications; Springer: Berlin, Germany, 1996. [Google Scholar]
  12. Bielajew, A.F. Fundamentals of the Monte Carlo Method for Neutral and Charged Particle Transport; The University of Michigan: Ann Arbor, MI, USA, 2001. [Google Scholar]
  13. Foulkes, W.M.C.; Mitas, L.; Needs, R.J.; Rajagopal, G. Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. 2001, 73, 33. [Google Scholar] [CrossRef] [Green Version]
  14. Lyklema, J.W. Monte Carlo study of the one-dimensional quantum Heisenberg ferromagnet near T = 0. Phys. Rev. B 1983, 27, 3108–3110. [Google Scholar] [CrossRef]
  15. Marcu, M.; Muller, J.; Schmatzer, F.-K. Quantum Monte Carlo simulation of the one-dimensional spin-S xxz model. II. High precision calculations for S = ½. J. Phys. A 1985, 18, 3189–3203. [Google Scholar] [CrossRef]
  16. Häggkvist, R.; Rosengren, A.; Lundow, P.H.; Markström, K.; Andren, D.; Kundrotas, P. On the Ising model for the simple cubic lattice. Adv. Phys. 2007, 5, 653–755. [Google Scholar] [CrossRef]
  17. Binder, K. Finite Size Scaling Analysis of Ising Model Block Distribution Functions. Z. Phys. B Condens. Matter 1981, 43, 119–140. [Google Scholar] [CrossRef]
  18. Binder, K.; Luijten, E. Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models. Phys. Rep. 2001, 344, 179–253. [Google Scholar] [CrossRef] [Green Version]
  19. Kasteleyn, P. Dimer statistics and phase transitions. J. Math. Phys. 1963, 4, 287–293. [Google Scholar] [CrossRef]
  20. Fisher, M. On the dimer solution of planar Ising models. J. Math. Phys. 1966, 7, 1776–1781. [Google Scholar] [CrossRef]
  21. Karandashev, Y.M.; Malsagov, M.Y. Polynomial algorithm for exact calculation of partition function for binary spin model on planar graphs. Opt. Mem. Neural Netw. (Inf. Opt.) 2017, 26, 87–95. [Google Scholar] [CrossRef] [Green Version]
  22. Schraudolph, N.; Kamenetsky, D. Efficient Exact Inference in Planar Ising Models. In NIPS. 2008. Available online: https://arxiv.org/abs/0810.4401 (accessed on 24 October 2008).
  23. Amit, D.; Gutfreund, H.; Sompolinsky, H. Statistical Mechanics of Neural Networks near Saturation. Ann. Phys. 1987, 173, 30–67. [Google Scholar] [CrossRef]
  24. Kohring, G.A. A High Precision Study of the Hopfield Model in the Phase of Broken Replica Symmetry. J. Stat. Phys. 1990, 59, 1077–1086. [Google Scholar] [CrossRef]
  25. Van Hemmen, J.L.; Kuhn, R. Collective Phenomena in Neural Networks. In Models of Neural Networks; Domany, E., van Hemmen, J.L., Shulten, K., Eds.; Springer: Berlin, Germany, 1992. [Google Scholar]
  26. Martin, O.C.; Monasson, R.; Zecchina, R. Statistical mechanics methods and phase transitions in optimization problems. Theor. Comput. Sci. 2001, 265, 3–67. [Google Scholar] [CrossRef]
  27. Karandashev, I.; Kryzhanovsky, B.; Litinskii, L. Weighted patterns as a tool to improve the Hopfield model. Phys. Rev. E 2012, 85, 041925. [Google Scholar] [CrossRef] [PubMed]
  28. Kryzhanovsky, B.V.; Litinskii, L.B. Generalized Bragg-Williams Equation for Systems with Arbitrary Long-Range Interaction. Dokl. Math. 2014, 90, 784–787. [Google Scholar] [CrossRef]
  29. Yedidia, J.S.; Freeman, W.T.; Weiss, Y. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. Inf. Theory 2005, 51, 2282–2312. [Google Scholar] [CrossRef]
  30. Wainwright, M.J.; Jaakkola, T.; Willsky, A.S. A new class of upper bounds on the log partition function. IEEE Trans. Inf. Theory 2005, 51, 2313–2335. [Google Scholar] [CrossRef]
  31. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  32. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  33. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef] [PubMed]
  34. Lin, H.W.; Tegmark, M.; Rolnick, D. Why does deep and cheap learning work so well? J. Stat. Phys. 2017, 168, 1223–1247. [Google Scholar] [CrossRef]
  35. Wang, C.; Komodakis, N.; Paragios, N. Markov random field modeling, inference & learning in computer vision & image understanding: A survey. Comput. Vis. Image Understand. 2013, 117, 1610–1627. [Google Scholar]
  36. Krizhevsky, A.; Hinton, G.E. Using Very Deep Autoencoders for Content-Based Image Retrieval. In Proceedings of the 9th European Symposium on Artificial Neural Networks ESANN-2011, Bruges, Belgium, 27–29 April 2011. [Google Scholar]
  37. Gorban, A.N.; Gorban, P.A.; Judge, G. Entropy: The Markov Ordering Approach. Entropy 2010, 12, 1145–1193. [Google Scholar] [CrossRef] [Green Version]
  38. Dotsenko, V.S. Physics of the spin-glass state. Phys.-Uspekhi 1993, 36, 455–485. [Google Scholar] [CrossRef]
  39. Karandashev, I.M.; Kryzhanovsky, B.V.; Malsagov, M.Y. The Analytical Expressions for a Finite-Size 2D Ising Model. Opt. Mem. Neural Netw. 2017, 26, 165–171. [Google Scholar] [CrossRef]
  40. Häggkvist, R.; Rosengren, A.; Andrén, D.; Kundrotas, P.; Lundow, P.H.; Markström, K. Computation of the Ising partition function for 2-dimensional square grids. Phys. Rev. E 2004, 69, 046104. [Google Scholar] [CrossRef] [PubMed]
  41. Beale, P.D. Exact distribution of energies in the two-dimensional Ising model. Phys. Rev. Lett. 1996, 76, 78–81. [Google Scholar] [CrossRef] [PubMed]
  42. Kryzhanovsky, B.; Malsagov, M. The Spectra of Local Minima in Spin-Glass Models. Opt. Mem. Neural Netw. (Inf. Opt.) 2016, 25, 1–15. [Google Scholar] [CrossRef]
  43. Colangeli, M.; Giardinà, C.; Giberti, C.; Vernia, C. Nonequilibrium two-dimensional Ising model with stationary uphill diffusion. Phys. Rev. E 2018, 97, 030103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Bodineau, T.; Presutti, E. Surface Tension and Wulff Shape for a Lattice Model without Spin Flip Symmetry. Ann. Henri Poincaré 2003, 4, 847–896. [Google Scholar] [CrossRef] [Green Version]
  45. Ohzeki, M.; Nishimori, H. Analytical evidence for the absence of spin glass transition on self-dual lattices. J. Phys. A Math. Theor. 2009, 42, 332001. [Google Scholar] [CrossRef] [Green Version]
  46. Thomas, C.K.; Katzgraber, H.G. Simplest model to study reentrance in physical systems. Phys. Rev. E 2011, 84, 040101. [Google Scholar] [CrossRef] [PubMed]
  47. Izmailian, N. Finite size and boundary effects in critical two-dimensional free-fermion models. Eur. Phys. J. B 2017, 90, 160. [Google Scholar] [CrossRef]
Figure 1. Free energy $f(\beta)$ at different noise amplitudes $\eta = 0; 0.4; 0.8; 1.2; 1.6; 2.0; 2.5; 3$. Lower curves correspond to greater values of $\eta$. The red marks indicate the values found by the n-vicinity method with the aid of Formulae (15)–(17) at zero noise amplitude. The grid dimension $L = 400$.
Figure 2. (a) Internal energy $U(\beta)$ at different noise amplitudes $\eta \in [0, 1.7]$ spaced by 0.1 intervals. The red marks indicate the values found by the n-vicinity method with the aid of Formulae (15)–(17) at zero noise amplitude. (b) $\eta \in [1.8, 3.0]$ spaced by 0.1 intervals; the lower curves correspond to greater $\eta$. The grid dimension $L = 400$.
Figure 3. The energy variance $\sigma^2(\beta)$ at different noise amplitudes $\eta$: (a) $\eta \in [0, 1.7]$ and (b) $\eta \in [1.8, 3.0]$, changing by 0.1 intervals. The red marks indicate the values of $\sigma^2$ produced by Formula (5). The grid dimension $L = 400$.
Figure 4. (a) The critical temperature $\beta_c$ and (b) the energy variance at the critical temperature $\sigma_c^2$ as functions of the noise amplitude $\eta$. The solid lines correspond to Formulae (24)–(25). $L = 400$.
Figure 5. (a) Energy $E_0$ and (b) magnetization $M_0$ of the ground state of the system as functions of the noise amplitude. $L = 400$.
Figure 6. (a) Spectral density $\Psi(E)$ and (b) its first derivative for noise amplitudes $\eta = 0; 0.3; 0.7; 1.1; 1.5; 1.8; 2.2; 2.5; 3$. The marks show the zero-noise curve. The grid dimension $L = 400$.
Figure 7. The second derivative of the spectral density $\ddot{\Psi}(E)$ at (a) $\eta \in [0, 1.7]$ and (b) $\eta \in [1.8, 3]$, with a spacing of 0.1. The marks denote the zero-noise curve in (a) and the curve for $\eta = 1.8$ resulting from (27) in (b). The grid dimension $L = 400$.
Table 1. The ground-state energy $E_0$ and magnetization $M_0$, and the critical values $\beta_c$, $f_c$, $U_c$ and $\sigma_c^2$, for different noise amplitudes.
| η   | E0     | M0     | βc    | fc      | Uc            | σc²    |
|-----|--------|--------|-------|---------|---------------|--------|
| 0   | −1.995 | 1      | 0.442 | −0.6931 | −1.978 × 10⁻⁵ | 12.958 |
| 0.1 | −1.995 | 1      | 0.443 | −0.6931 | −1.986 × 10⁻⁵ | 11.427 |
| 0.2 | −1.995 | 1      | 0.444 | −0.6932 | −0.0101       | 12.566 |
| 0.3 | −1.995 | 1      | 0.445 | −0.6932 | −0.0103       | 11.627 |
| 0.4 | −1.996 | 1      | 0.452 | −0.6933 | −0.0211       | 11.476 |
| 0.5 | −1.994 | 1      | 0.454 | −0.6934 | −0.0324       | 10.666 |
| 0.6 | −1.993 | 1      | 0.459 | −0.6936 | −0.0447       | 9.719  |
| 0.7 | −1.994 | 1      | 0.465 | −0.6939 | −0.0581       | 8.328  |
| 0.8 | −1.996 | 1      | 0.476 | −0.6946 | −0.0849       | 7.642  |
| 0.9 | −1.996 | 1      | 0.484 | −0.6957 | −0.1143       | 6.518  |
| 1.0 | −1.993 | 1      | 0.503 | −0.6979 | −0.1599       | 5.603  |
| 1.1 | −1.996 | 0.9998 | 0.515 | −0.7010 | −0.2109       | 4.656  |
| 1.2 | −1.995 | 0.9987 | 0.536 | −0.7065 | −0.2815       | 3.629  |
| 1.3 | −1.994 | 0.9943 | 0.562 | −0.7156 | −0.3747       | 2.775  |
| 1.4 | −1.996 | 0.9839 | 0.591 | −0.7327 | −0.5107       | 1.998  |
| 1.5 | −2.002 | 0.9602 | 0.623 | −0.7527 | −0.6414       | 1.380  |
| 1.6 | −2.014 | 0.9060 | –     | –       | –             | –      |
| 1.7 | −2.033 | 0.2155 | –     | –       | –             | –      |
| 1.8 | −2.065 | 0.0312 | –     | –       | –             | –      |
| 1.9 | −2.098 | 0.0241 | –     | –       | –             | –      |
| 2.0 | −2.139 | 0.0058 | –     | –       | –             | –      |
