Abstract. Using Monte Carlo simulations of Potts-type spin models, this work aims to reproduce the physical phenomenon of phase transitions. Two updating algorithms, Metropolis and Wolff, were implemented on a 50\(\times\)50 square lattice to evaluate their efficiency near the critical temperature.
Using Markov chains, this article applies the Ising model to reproduce the randomness of spin alignment at different temperatures. The effective Hamiltonian of the Ising model is $$ H = -J \sum_{\left < ij \right>} S_i S_j, $$ where critical phenomena can only be observed when the external magnetic field is insignificant, i.e. the magnetic-moment term does not play a significant role, $$ -\mu \sum_j h_j S_j \rightarrow 0. $$
In an ideal ferromagnet, in the absence of an external field, neighbouring dipoles align parallel to each other. To predict the thermal behaviour of the system, the partition function is calculated from $$ Z = \sum_{\{S_i\}} \exp \left(-\beta H\right), $$ where \(\beta = 1/kT\); the sum runs over all possible sets of dipole alignments.
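For very small lattices this sum can actually be carried out exhaustively, which makes clear why sampling is unavoidable for a 50\(\times\)50 lattice. The following sketch is only illustrative (it assumes a 3\(\times\)3 lattice with periodic boundaries and units \(J = k = 1\); function names are not from the actual implementation):

```python
import itertools
import numpy as np

def energy(spins, J=1.0):
    """Ising energy H = -J * sum over nearest-neighbour pairs (periodic boundaries)."""
    # np.roll pairs every site exactly once with its lower and right neighbour.
    return -J * np.sum(spins * (np.roll(spins, 1, axis=0) + np.roll(spins, 1, axis=1)))

L, beta = 3, 0.4407                      # tiny lattice, near the critical inverse temperature
Z = 0.0
for config in itertools.product([-1, 1], repeat=L * L):   # all 2**9 = 512 configurations
    spins = np.array(config).reshape(L, L)
    Z += np.exp(-beta * energy(spins))
print(Z)
```

Already a 3\(\times\)3 lattice requires 512 terms; a 50\(\times\)50 lattice would require \(2^{2500}\), hence the Monte Carlo approach below.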
The main idea of this local updating algorithm is Monte Carlo summation with importance sampling. The algorithm generates a subset of system states in which lower-energy states occur more frequently than higher-energy states. The transition probability is thus $$ \frac{W(1\rightarrow 2)}{W(2\rightarrow 1)} = \frac{(1/N)\exp\left\{-(U_2-U_1)/kT\right\}}{(1/N)} = \frac{\exp\left\{-U_2/kT\right\}}{\exp\left\{-U_1/kT\right\}}, $$ which is the ratio of the Boltzmann factors of the two states.
Consequently, for a transition between two states through any number of intermediate steps, the ratio of transition probabilities remains $$ \frac{W(1\rightarrow \dots\rightarrow 2)}{W(2\rightarrow \dots \rightarrow 1)} = \frac{\exp\left\{-U_2/kT\right\}}{\exp\left\{-U_1/kT\right\}}. $$ This justifies an implementation in which many iterations of these steps visit every state many times, so that the generated subset is an accurate representation of the full collection of system states. The main concern here is that the randomly generated states provide an accurate picture of the expected energy and magnetization of the system. It is therefore expected that a long time is needed to reach other probable states that differ significantly, i.e. states in which all dipoles are flipped.
![]()
Figure 1.1 Local updating scheme: Metropolis algorithm with input parameters \(\{N = 50, \beta = 0.4, J = 1.0\}\)
Using ideas from percolation theory, whole clusters of spins are allowed to move in one step, which mitigates the critical slowing down of the local updating scheme. Near the critical temperature, the spins display critical fluctuations in which large aligned spin domains appear. This phenomenon of critical slowing down reflects the difficulty of changing the magnetization of a correlated spin cluster, where each central spin has 4 nearest neighbours.
![]()
Figure 1.2 Block updating scheme: Wolff algorithm with input parameters \(\{N = 50, \beta = 0.4, J = 1.0\}\)
The probability of forming a cluster of spins in state A before the Wolff flip is the same as in state B after the flip. The probability of not adding a site to the cluster is \(1-P_{add}\). Here \(m\) denotes the aligned spins not added to the cluster in state A, while \(n\) denotes the aligned spins not added to the cluster in state B (after flipping all at once), as shown in the figure below.
![]()
Figure 1.3 Illustration adopted from (Walter, 2014)
The transition probability ratio is then written as $$\frac{W\left(A\rightarrow B\right)}{W\left(B\rightarrow A\right)} = \left(1 - P_{add}\right)^{m-n}.$$ Imposing the detailed balance condition on the full transition probabilities gives $$ \frac{W\left(A\rightarrow B\right)\cdot Y\left(A\rightarrow B\right)}{W\left(B\rightarrow A\right)\cdot Y\left(B\rightarrow A\right)} = \left(1 - P_{add}\right)^{m-n} \frac{Y\left(A\rightarrow B\right)}{Y\left(B\rightarrow A\right)} = \exp\left\{-\beta(E_B - E_A)\right\}.$$ With \(E_B - E_A = 2J(m-n)\), the acceptance probability ratio becomes $$\frac{Y\left(A\rightarrow B\right)}{Y\left(B\rightarrow A\right)} = \left[(1-P_{add})\exp\left\{2\beta J\right\}\right]^{n-m}.$$ Therefore, choosing \(P_{add} = 1-\exp\left\{-2\beta J\right\}\), the acceptance probabilities become \(Y\left(A\rightarrow B\right) = Y\left(B\rightarrow A\right) = 1\), i.e. the cluster is flipped automatically once it is formed. Thus, flipping one cluster is equivalent to one Wolff iteration; for the sake of comparison, however, this is not the case in the present implementation, which will be addressed in later sections.
In this article, only the ferromagnetic case \(J=1\) is investigated and, owing to the limitation of computing resources, only a square lattice of 2500 sites (\(N = 50\)) is considered. The study is therefore a small investigation of the convergence iteration times as \(\beta\) is varied near \(\beta_c\). Working in scaled units, we can write $$ \beta_c \equiv \frac{1}{T_c}.$$ The critical (transition) temperature of the 2D lattice is \(2.269\) in scaled units \(\epsilon/k\) (Walter, 2014), so the critical inverse temperature is approximately 0.4407. From Hoffmann, as shown in the following figure, the critical inverse temperature lies between 0.4 and 0.5.
![]()
Figure 2.1 Finite-size scaling behaviour of the specific heat of the 2D Ising model on \(L\times L\) lattices at different inverse temperatures.
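For reference, the exact Onsager solution of the square-lattice Ising model fixes this critical point (in units with \(J = k = 1\)): $$ \beta_c = \tfrac{1}{2}\ln\left(1+\sqrt{2}\right) \approx 0.4407, \qquad T_c = \frac{2}{\ln\left(1+\sqrt{2}\right)} \approx 2.269. $$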
![]()
Figure 2.2 Local updating scheme flow chart
Since the standard Metropolis algorithm is realized by local updates of single spins, the acceptance probability that satisfies detailed balance is as shown below: $$ W(\{S_i\}\rightarrow\{S_i'\}) = \begin{cases} {\exp\{-\beta \Delta E\}} &, \Delta E > 0\\ {1} &, \Delta E \leq 0 \end{cases}$$
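A minimal sketch of this acceptance rule (the function and variable names are illustrative, not taken from the actual implementation):

```python
import numpy as np

rng = np.random.default_rng()

def metropolis_accept(delta_E, beta):
    """Accept a single-spin flip with probability min(1, exp(-beta * delta_E))."""
    if delta_E <= 0:
        return True                                # moves that lower the energy are always accepted
    return rng.random() < np.exp(-beta * delta_E)  # otherwise accept with the Boltzmann weight
```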
The initialization of the lattice array is simply a 2D array of size \((N+2)\times(N+2)\) with elements 1 and -1, generated with the built-in pseudo-random number generator. The edge of the lattice is not visible to the user; it serves only for the calculation of the nearest-neighbour interaction energy.
![]()
Figure 2.3 Initialized lattice with expanded invisible edge
To apply the periodic boundary condition, simply copy each edge to the opposite invisible edge. The four outermost corners are never used in the calculation and are therefore unimportant.
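A minimal sketch of this padded-lattice construction with numpy (names such as `apply_pbc` are illustrative, and the real code may differ):

```python
import numpy as np

N = 50
rng = np.random.default_rng()

# (N+2) x (N+2) array: the inner N x N block is the physical lattice; the one-cell
# frame only holds copies used when summing nearest-neighbour interaction energies.
lattice = rng.choice([-1, 1], size=(N + 2, N + 2))

def apply_pbc(lat):
    """Copy each physical edge into the opposite invisible edge (corners are never read)."""
    lat[0, 1:-1]  = lat[-2, 1:-1]   # top ghost row    <- bottom physical row
    lat[-1, 1:-1] = lat[1, 1:-1]    # bottom ghost row <- top physical row
    lat[1:-1, 0]  = lat[1:-1, -2]   # left ghost col   <- right physical col
    lat[1:-1, -1] = lat[1:-1, 1]    # right ghost col  <- left physical col

apply_pbc(lattice)
```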
![]()
Figure 2.3 Wolff algorithm flow chart
Considering only nearest neighbours, the change of energy \(\Delta E\) has only 5 possible values, $$\Delta E \in \{-8J, -4J, 0, +4J, +8J\}.$$ To avoid calculating the Boltzmann factor repeatedly, a pre-calculation stores these values and their corresponding Boltzmann factors in an array for later lookup.
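A minimal sketch of such a pre-calculation (a plain dictionary is used here purely for illustration):

```python
import numpy as np

J, beta = 1.0, 0.4
# Only five energy changes are possible with nearest-neighbour coupling on the square lattice.
boltzmann = {dE: np.exp(-beta * dE) for dE in (-8 * J, -4 * J, 0.0, 4 * J, 8 * J)}
# During a sweep the factor is looked up instead of recomputed, e.g.
#   accept = (dE <= 0) or (rng.random() < boltzmann[dE])
```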
At (r, c): corresponds to a specific coordinate of the real lattice (not including the expanded edges).
It is essential to loop through each lattice site without revisiting already-visited sites. For the cluster algorithm, a fixed visiting agenda is needed so that every outer neighbouring site is visited. The output of the loop also depends on the random seed chosen as the central lattice site around which the crystal grows.
This is why, in Figure 1.1, the flipping surrounds one random seed. In the Metropolis algorithm there is no restriction on the order in which sites are visited, as long as each site is visited repeatedly. The spiral-out sequence is thus just one possible visualization/implementation of such an ordering.
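One possible way to generate such a spiral-out visiting sequence around a seed is sketched below (0-indexed coordinates on the inner \(N\times N\) lattice are assumed; the actual implementation may differ):

```python
def spiral_order(y, x, N):
    """Yield every site of an N x N lattice exactly once, spiralling outward from the seed (y, x)."""
    yield (y, x)
    cy, cx = y, x
    dy, dx = 0, 1                        # first leg moves to the right
    steps, visited = 1, 1
    while visited < N * N:
        for _ in range(2):               # two legs share the same length
            for _ in range(steps):
                cy, cx = cy + dy, cx + dx
                if 0 <= cy < N and 0 <= cx < N:   # skip points that spiral outside the lattice
                    visited += 1
                    yield (cy, cx)
            dy, dx = dx, -dy             # turn the walking direction by 90 degrees
        steps += 1                       # lengthen the legs after every two turns

# Example: visiting order for a 5 x 5 lattice, growing around the seed (2, 2).
order = list(spiral_order(2, 2, 5))
assert len(order) == 25
```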
![]()
Figure 2.6 Wolff algorithm flow chart
To preserve detailed balance in the cluster (Wolff) algorithm, the probability of adding an aligned neighbour to the cluster must satisfy $$ P_{add} = 1 - \exp\{-2\beta J\}.$$
The initialization of the lattice with periodic boundary conditions is no different from the one described in section 2.11a. In the cluster updating scheme, the lattice, the flipped array, and the cluster-list array are global arrays prepared at this step and updated throughout the iterations. This step also includes the generation of the starting seed.
Using the sequence loop from section 2.13, with the seed inherited from the initialization, the order in which sites are visited is prepared. To determine whether a site is to be flipped, it must pass the following conditions, where (cy, cx) is the current lattice site (from the sequence loop) and (y, x) is the seed site coordinate:
![]()
Figure 2.8 Example of 1 local updating iteration. Left: initial configuration of spins (A). Middle: after flipping (B). Right: formed crystal after one iteration
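In a standard Wolff-style implementation these conditions boil down to: the candidate site has not been visited, its spin is parallel to the seed spin, and a uniform random number falls below \(P_{add}\). The sketch below is only indicative (names are illustrative, and the actual conditions, shown in the flow chart, may differ because spins here are flipped site by site):

```python
import numpy as np

def try_add(lattice, flipped, cy, cx, seed_spin, p_add, rng):
    """Return True if the candidate site (cy, cx) should join the cluster grown around the seed."""
    if flipped[cy, cx]:                  # never consider a site that is already in the cluster
        return False
    if lattice[cy, cx] != seed_spin:     # only spins parallel to the seed can join
        return False
    return rng.random() < p_add          # activate the bond with probability P_add = 1 - exp(-2*beta*J)
```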
After each local update, the sites in the local cluster list are flipped one by one. Why not flip only after the whole cluster has been appended to one list? Since a random initial spin configuration is adopted, we need to compare the number of steps required for the system energy to converge with that of the Metropolis local updating scheme. If a whole-cluster flip were counted as one step, the step count would depend solely on the initial configuration of the system. Only by counting individual flips do we include the information of repeatedly arriving at the same site and trying to flip a not-yet-flipped spin in later iterations.
Practically speaking, flipping within individual local iterations should not affect the number of cluster iterations if the energy eventually converges, because whole-cluster iterations depend only on the initial configuration. However, this is the only way to demonstrate the transition probability in cases where some nearest neighbours are aligned but not flipped.
Thus, from the previous iteration, sweep through the array of coordinates of lattice sites and change the sign of each spin. (This cannot happen at the same instant, as that would not be a physical process; flipping them at the same time would imply that, within ONE iteration, all lattice sites in the cluster list are flipped.)
Sections 2.22 and 2.23 gave a detailed description of the workflow of the red loop. As the goal is to minimize the spin interaction energy, the while-loop condition asks whether any nearest neighbours are still aligned. This is equivalent to asking whether the island is complete, as determined by the initial configuration. If not, a new seed is chosen from the outermost cluster members of the formed crystal to perform "local updating" (2.22, 2.23) until the whole cluster is formed. This is the yellow while loop indicated in the flow chart; a sketch of this growth loop is given after Figure 2.9.
![]()
Figure 2.9 Example of 1 cluster iteration.
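As referenced above, the sketch below shows one cluster-growth iteration in the standard formulation, in which the whole cluster is flipped only at the end; the implementation described in this article instead flips site by site, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def wolff_step(lattice, beta, J=1.0):
    """Grow one Wolff cluster from a random seed, flip it, and return the cluster size."""
    N = lattice.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)
    y, x = rng.integers(N), rng.integers(N)
    seed_spin = lattice[y, x]
    cluster = {(y, x)}
    frontier = [(y, x)]                   # sites whose neighbours still have to be examined
    while frontier:                       # stop once no aligned neighbour can still be added
        cy, cx = frontier.pop()
        for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
            ny, nx = ny % N, nx % N       # periodic boundaries
            if (ny, nx) not in cluster and lattice[ny, nx] == seed_spin and rng.random() < p_add:
                cluster.add((ny, nx))
                frontier.append((ny, nx))
    for cy, cx in cluster:                # flip the whole cluster at once
        lattice[cy, cx] *= -1
    return len(cluster)
```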
It is important to bear in mind that each cluster is independent of the others, so the global cluster list is cleared before each new cluster iteration. The while condition is therefore:
![]()
![]()
![]()
Figure 2.11 Example of the whole growth of the crystal for top: \(\{N = 50, \beta = 0.3, J = 1.0\}\), middle: \(\{N = 50, \beta = 0.4, J = 1.0\}\), bottom: \(\{N = 50, \beta = 0.5, J = 1.0\}\)
Both the local and block updating schemes demonstrate energy minimization.
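Energy convergence can be monitored with a vectorised energy evaluation; the sketch below assumes a simple "energy unchanged over the last few records" criterion, which is an assumption and not necessarily the criterion used to produce the tables that follow:

```python
import numpy as np

def total_energy(spins, J=1.0):
    """Total nearest-neighbour Ising energy with periodic boundaries (each bond counted once)."""
    return -J * np.sum(spins * (np.roll(spins, 1, axis=0) + np.roll(spins, 1, axis=1)))

def has_converged(energy_history, window=50):
    """Converged once the recorded energy stays constant over the last `window` records."""
    recent = energy_history[-window:]
    return len(recent) == window and max(recent) == min(recent)
```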
| \(\beta\) | 0.1 | 0.2 | 0.3 | 0.4 | 0.41 | 0.42 | 0.43 | 0.44 | 0.45 | 0.46 | 0.47 | 0.48 | 0.49 | 0.50 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| \(i\) | 5060 | 5804 | 6790 | 7281 | 7189 | 7659 | 7037 | 7043 | 7561 | 7667 | 7466 | 7575 | 7050 | 7243 |

| \(\beta\) | 0.60 | 0.70 | 0.80 | 0.90 | 1.00 | 1.20 | 1.30 | 1.40 |
|---|---|---|---|---|---|---|---|---|
| \(i\) | 7346 | 7923 | 8093 | 8092 | 7737 | 7792 | 8201 | 8363 |
Table 3.1 Average convergence iteration times against the inverse temperature spectrum, over 36 runs
![]()
Figure 3.1 Average convergence iteration times against the inverse temperature spectrum, over 36 runs (interpolation by spline from the scipy library)
There is a general increasing trend in the number of iterations as the inverse temperature increases. At lower temperature, the acceptance probability decreases, so more iterations, revisiting the lattice sites, are needed to align all the spins.
![]()
Figure 3.3 Average convergence iteration times near the critical inverse temperature, over 36 runs
Owing to the lack of computational power, a particularly fine step in \(\beta\) could not be tested. From the above plot, the highest number of iterations, by interpolation, occurs at \(\beta \approx 0.457\), which is not especially close to the theoretical value. It also appears that around \(\beta\approx 0.42\) the iteration count is very high; this can be attributed to the step size and number of trials being insufficient to obtain a reliable result.
| \(\beta\) | 0.35 | 0.36 | 0.37 | 0.38 | 0.39 | 0.40 | 0.41 | 0.42 |
|---|---|---|---|---|---|---|---|---|
| \(i\) | 237 | 222 | 220 | 214 | 200 | 214 | 206 | 200 |

| \(\beta\) | 0.43 | 0.44 | 0.45 | 0.46 | 0.47 | 0.48 | 0.49 | 0.50 |
|---|---|---|---|---|---|---|---|---|
| \(i\) | 193 | 188 | 188 | 179 | 172 | 164 | 172 | 167 |
![]()
Figure 3.4 Average convergence iteration times against the inverse temperature spectrum, over 6 runs
It can be seen that the general trend over the range \(\beta \in \left[0.35, 0.50\right]\) is a decrease in iteration count with increasing inverse temperature. This makes sense given the form of the probability: the add probability \(P_{add} = 1 - \exp\{-2\beta J\}\) increases with increasing inverse temperature \(\beta\), so larger clusters are flipped per iteration and fewer iterations are needed.
Around the 2D Ising model critical inverse temperature there is no sudden peak, which shows that critical slowing down is not as prominent with the cluster update algorithm.
Since the iteration counts of the two algorithms are not comparable in magnitude, differing by at least an order of magnitude, only a comparison of trends can bring some conclusions.
![]()
Figure 4.1 Average convergence iteration times against the inverse temperature spectrum, over 36 runs
Plotting the local updating scheme over the same range of inverse temperatures shows that the number of iterations needed to converge with the local updating scheme is quite stable around the critical temperature, whereas it changes steeply for the cluster updating scheme.
However, this conclusion is not strong as the data gathered do not have a large enough sample size.
Even though the number of iterations of the local updating scheme is much larger than that of the cluster updating scheme, it actually takes much less time to run.
The cluster updating scheme was optimized by using MPI (pool.starmap) to run a spectrum of inverse temperatures in parallel. Algorithm-wise, the while-loop condition was changed from a more time-consuming function generating a binary output to the current use of the numpy library to calculate energy convergence.
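A sketch of such a parallel sweep over inverse temperatures using the standard-library multiprocessing `Pool`, which offers the same `starmap` pattern (the original run may have used an MPI pool instead, and `run_simulation` with its arguments is hypothetical):

```python
from multiprocessing import Pool

def run_simulation(beta, N=50, J=1.0):
    """Hypothetical driver: run one simulation and return the number of iterations to converge."""
    return 0  # placeholder for the Metropolis or Wolff run described above

if __name__ == "__main__":
    betas = [0.35 + 0.01 * k for k in range(16)]          # inverse-temperature spectrum
    with Pool() as pool:
        # starmap unpacks each tuple into run_simulation(beta, N, J)
        results = pool.starmap(run_simulation, [(b, 50, 1.0) for b in betas])
    print(dict(zip(betas, results)))
```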
It was also discovered that, if either algorithm is required to generate GIF files, the run becomes I/O bound, which greatly increases the computation time.
The computing resources used to generate the results are listed as follows:
Increasing the sample size may provide a more precise and accurate conclusion.
In this article, observables such as the heat capacity are not calculated, nor is the relationship of the critical exponents to the different algorithms investigated. These might be developed later.