Tuesday, April 8, 2014

Spectrum of the Anti-Ferromagnet and Ferromagnet:

While we will come to more details later, let us quickly make some statements that seem to make sense. If you have a Heisenberg AF in 2 spatial dimensions, then the ground state (GS) is not exactly a Neel state, since there are quantum fluctuations which destabilize the Neel state. Just to have an idea of the numbers, note that a two-spin system can form a singlet and a triplet. The singlet has the lower energy, -3J/4. Next consider a large system in the pure Neel state -- this has an energy of -J/4 per bond (i.e. -J/2 per site on the square lattice). Now, it is clear that because the pure Neel state is not an eigenstate of the AF Heisenberg model (AFHM), the GS energy must be lower; it is actually close to -0.67 J per site. This makes it clear that the GS of the AFHM is a superposition of spin-singlets formed by a spin with its different neighbors --- a single spin cannot form a singlet with more than a single neighbor.
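To make the two-spin numbers concrete, here is a minimal sketch (my own illustration, not from the post) that diagonalizes the two-site Heisenberg Hamiltonian H = J S1.S2 and recovers the singlet at -3J/4 and the triplet at +J/4:

```python
# Exact diagonalization of the two-site spin-1/2 Heisenberg model,
# H = J S1.S2, to check the singlet/triplet energies quoted above.
import numpy as np

J = 1.0
# Spin-1/2 operators from the Pauli matrices
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# H = J (Sx Sx + Sy Sy + Sz Sz) on the 4-dimensional two-spin space
H = J * sum(np.kron(s, s) for s in (sx, sy, sz))

print(np.linalg.eigvalsh(H))   # -> [-0.75, 0.25, 0.25, 0.25]
```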

 The excitations above the GS would be the so-called "rotor states" and they would carry spin. I will add more about them later.

Wednesday, January 11, 2012

Dressed Polyakov Loops and Dual Chiral Condensate

An interesting idea has been discussed in arXiv:0801.4051 by a group of people from Graz, Austria and Regensburg, Germany (here). The idea is to construct operators that are sensitive to both the chiral symmetry breaking and the deconfinement transitions.

Why is this necessary? Well, the deconfinement and chiral symmetry breaking transitions are both non-perturbative, and it is important to study them non-perturbatively (via the lattice). Such systematic studies should use order parameters to probe the respective symmetries that are being broken/restored in these transitions, if indeed there are any. The common viewpoint is that in the deconfinement transition the centre symmetry is spontaneously broken, while the chiral transition breaks the chiral symmetry spontaneously. While the Polyakov loop acts as the order parameter in the former transition, the chiral condensate is the order parameter in the latter. The caveat here is that these two quantities are defined in two opposite limits: the former in the pure gauge theory (or in the quenched theory, where the fermion masses are infinite) and the latter in the chiral limit of vanishing quark masses. At any intermediate quark mass, these quantities are not strictly order parameters.

The paper constructs an operator called the "dual chiral condensate" by imposing (unphysical) arbitrary boundary conditions on the fermions and then taking the Fourier transform with respect to the boundary condition angle in such a way as to project onto the "string" operators that wind around the lattice exactly once. While the finite quark mass allows the quarks to jiggle about, thereby creating operators with spatial displacement as well, the fact that they wind around the temporal direction exactly once ensures that these transform under the centre symmetry in the same way as the Polyakov loop. In fact, in the infinite mass limit they do become Polyakov loops.
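Schematically (my notation, which should be checked against the paper's), the construction is: compute the condensate $\Sigma(\varphi)$ with the temporal boundary condition $\psi(\vec{x}, \beta) = e^{i\varphi}\, \psi(\vec{x}, 0)$, and then project onto winding number one,

$$\tilde{\Sigma}_1 = -\int_0^{2\pi} \frac{d\varphi}{2\pi}\, e^{-i\varphi}\, \Sigma(\varphi),$$

so that only loops winding exactly once around the temporal direction survive the $\varphi$-integral.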


So far so good! They took the chiral condensate and obtained from it an operator sensitive to the centre symmetry. Next, how does it behave? Does it help in understanding these phenomena any better? Within the framework of quenched SU(3) lattice gauge theory, they show that even with quark masses of about 100 MeV, this quantity vanishes below Tc, the deconfinement temperature, but is finite above it. Rewriting the chiral condensate in terms of the eigenvalues of the (staggered) Dirac operator (with the phase angle), they show that it is the IR modes which dominate; larger eigenvalues contribute less, since they appear in the denominator.
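The spectral form behind that last statement is, schematically (in my notation): with $\lambda_k(\varphi)$ the Dirac eigenvalues at boundary angle $\varphi$,

$$\Sigma(\varphi) \sim \frac{T}{V} \sum_k \frac{1}{i\lambda_k(\varphi) + m} = \frac{T}{V} \sum_k \frac{m - i\lambda_k(\varphi)}{\lambda_k(\varphi)^2 + m^2},$$

so modes with $\lambda_k \gg m$ are suppressed and the dual condensate is dominated by the infrared part of the spectrum.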


Carrying the investigations further (see here), the SU(2) lattice gauge theory with adjoint fermions is considered. In this theory the confinement and chiral symmetry transitions are widely separated, with the latter (~870 MeV) occurring at about 4 times the deconfinement transition temperature. This is interesting since it gives a window where the system is no longer confined but chiral symmetry is still broken, and it is therefore an ideal testbed to check for the effects of these two transitions separately. (In the SU(3) case, with quenched or 2+1 flavours, these transitions occur simultaneously.) In their results they see that while the chiral condensate is not sensitive to the deconfinement transition, it is sensitive to the boundary condition above the deconfinement transition (but not below it), all the way up to the chiral transition, by which point it has completely melted away. The spectral gap remains closed until after the chiral transition. Therefore the dual chiral condensate vanishes below the deconfinement transition. Above it, and up to the chiral transition, it keeps increasing, since the chiral condensate there depends on the boundary condition. Beyond the chiral transition it depends on the mass of the fermion (for heavy fermions it keeps increasing, but it drops for lighter fermions). Thus, using light fermions, the dual chiral condensate is actually able to distinguish the deconfinement and chiral transitions.


A step further (see here) uses the configurations of the BMW group to look at this observable in full QCD. It is found that the low-energy spectrum of the Dirac operator is sensitive to the boundary conditions (above Tc), while the high-energy spectrum is not, which means that the dual condensate gets all its contributions from the IR modes. Again, while the quark condensate has no dependence on the boundary condition below Tc, it does above Tc, which accounts for the vanishing of the dual condensate below Tc and its non-zero value above it. It is (perhaps) interesting to note that the dual condensate is zero to the same level as the Polyakov loop is (at least judging from figure 5 of the paper).


It seems natural now to expect several more investigations using this operator and in this direction in gauge theories!

Thursday, September 1, 2011

Multi-links and Multi-levels

Multi-level and multi-link integration is a crucial technique in Monte-Carlo calculations. Most simulations of field theories, or stat-mech systems, involve calculating expectation values of observables over the thermal ensemble determined by the action or the Hamiltonian, as the case may be. Speaking loosely in the language of QFT, the fluctuations of these observables are dominated by UV contributions that appear at the lattice scale ~1/a. These need to be integrated out carefully. One way is to collect as much statistics as you possibly can, and then average over it.
However, one can be clever and try to integrate out only those fluctuations that occur at the scale ~1/a. This can be achieved, for example in SU(3) lattice gauge theory, by integrating over a link in the fixed background of its adjacent staples, keeping everything else in the problem (i.e. the links elsewhere on the lattice) fixed. Thus, having integrated out the short-distance fluctuations efficiently, one can do the averaging (integrals) over the rest of the lattice, giving a result that is much more stable. This is the so-called multi-link method. Note that one can do the integration over the staples numerically (which means using another MC-integration step) or semi-analytically (which means writing the integral in closed form in terms of some special functions, and then evaluating that closed form numerically).
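As a toy illustration (my own sketch, for compact U(1) rather than SU(3), since there the one-link integral is known in closed form), one can compare the brute-force link average against the Bessel-function formula:

```python
# One-link ("multi-hit") integration sketch for compact U(1) lattice
# gauge theory: the average of a link U = exp(i theta) in the fixed
# background of its staple sum V, with weight exp(beta Re[U conj(V)]),
# is (V/|V|) I1(beta|V|) / I0(beta|V|) in closed form.
import numpy as np
from scipy.special import i0, i1

beta = 2.0                # inverse bare coupling (illustrative value)
V = 1.5 * np.exp(0.3j)    # staple sum attached to the link, held fixed

# Semi-analytic one-link average
exact = (V / abs(V)) * i1(beta * abs(V)) / i0(beta * abs(V))

# Brute-force numerical average over the link angle theta
theta = np.linspace(-np.pi, np.pi, 20001)
w = np.exp(beta * np.real(np.exp(1j * theta) * np.conj(V)))
numeric = np.trapz(np.exp(1j * theta) * w, theta) / np.trapz(w, theta)

print(exact, numeric)     # the two agree to integration accuracy
```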
As often happens with lattice gauge theorists, you might want to take the continuum limit. This is the limit of large lattice coupling β, i.e. small bare coupling g². In this limit the gauge link matrices become close to unity and their fluctuations are rather small, so you'd expect the efficiency of this trick to decrease towards the continuum limit.
The multi-level algorithm is a generalization of the former and can be used to overcome the aforementioned difficulty. In the multi-level scheme, you divide the full lattice into several sub-lattices, say in the time direction. So if the thickness of a sub-lattice is d, then you have Nt/d sub-lattices at your disposal. The idea is the same as before: just integrate over each of the sub-lattices, keeping the others fixed. Of course, in this case no semi-analytical formula can be written down, and the procedure has to be performed numerically. Note that you will now be integrating over contributions at the scale ~1/(d*a), and this often has the effect of better averaging of the observables, i.e. less noise. The hallmark of a calculation with the multi-level scheme is an exponential reduction of the errors. In particular, this means that even if you are measuring a correlation function that decays exponentially, you can maintain a fixed error/mean ratio over the entire correlation function! (A schematic of the nested averaging is sketched below.)
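Here is a minimal sketch of the nested structure (my own pseudocode-style Python, in the spirit of the Luescher-Weisz two-level algorithm; update_boundaries, update_slab and measure_slab are placeholder callbacks, not a real gauge-theory implementation):

```python
# Schematic two-level estimate of a correlator C(t) = <O(0) O(t)>.
# Only the nesting of inner/outer averages is shown.
import numpy as np

def multilevel_correlator(n_outer, n_inner, n_slabs,
                          update_boundaries, update_slab, measure_slab):
    corr = np.zeros(n_slabs)
    for _ in range(n_outer):
        update_boundaries()                 # outer update of slab boundaries
        slab_avg = np.zeros(n_slabs)
        for s in range(n_slabs):
            acc = 0.0
            for _ in range(n_inner):
                update_slab(s)              # inner updates, boundaries frozen
                acc += measure_slab(s)      # e.g. time-slice average of O in slab s
            slab_avg[s] = acc / n_inner     # independently averaged slab factor
        # product of the separately averaged slab factors gives C(t);
        # errors on the product fall much faster than with naive averaging
        corr += slab_avg[0] * slab_avg
    return corr / n_outer
```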
The crucial requirement which ensures that the multi-level scheme works for the gauge theory is the locality of the action. With fermions present, this is not possible in the standard scheme of doing things, where the fermions are integrated out and written as a determinant in the partition function. Since the determinant is a non-local object, spanning the whole lattice, the division into different length scales cannot be made, and the multi-level scheme cannot be applied.

Tuesday, April 26, 2011

Some common questions

With the synopsis and the thesis defenses knocking at the door, I thought that refreshing some things might be useful. I came across things that I had never bothered to sit and think about before.

1. How do we know that there are gluons?
It basically involves the observation of three-jet events in experiments at PETRA, DESY. The idea is that three jets are formed when a quark emits a hard gluon. Further, there are stray hadrons in between two of the jets. For details, look up this Wiki-link.

2. The discovery of quarks came about through the deep inelastic scattering experiments. The same principle: shoot high-energy electrons at hadrons (baryons or mesons). Most of them go through, and the ones that get deflected show that they have been scattered from three points in the case of a baryon and two points for a meson. Of course, since the electrons and these scatterers interact electromagnetically, it was deduced that the scatterers carry fractional charge. And so hey presto: partons; and bingo: quarks! The entire process is inelastic, causing the initial hadron to break up completely.

Tuesday, April 19, 2011

Role of chiral symmetry in QCD vacuum

Let us consider the low-energy limit of QCD with 2 massless quarks (u, d). In terms of the left-handed and right-handed quark fields, the Lagrangian can be written as:

$$\mathcal{L} = \bar{q}_L\, i\gamma^\mu D_\mu q_L + \bar{q}_R\, i\gamma^\mu D_\mu q_R - m\,(\bar{q}_L q_R + \bar{q}_R q_L) - \tfrac{1}{4} G^a_{\mu\nu} G^{a\,\mu\nu}.$$

With m = 0, the following symmetry transformation holds:

$$q_L \to V_L\, q_L, \qquad q_R \to V_R\, q_R,$$

where $V_i \in U(2)$, $i = L, R$, since the gluon interactions do not change the helicity of the quarks. Then, according to Noether's theorem, each continuous symmetry parameter has a corresponding conserved current, so there are 8 conserved currents. Consider six of these currents:

$$V^a_\mu = \bar{q}\,\gamma_\mu \frac{\tau^a}{2}\, q, \qquad A^a_\mu = \bar{q}\,\gamma_\mu \gamma_5 \frac{\tau^a}{2}\, q, \qquad a = 1, 2, 3.$$

(The other two currents correspond to U(1)A and U(1)B, of which the former is anomalous.) When mq = 0, the charges $Q^a_V$ and $Q^a_A$ commute with HQCD:

$$[Q^a_V, H_{QCD}] = [Q^a_A, H_{QCD}] = 0.$$

This means that for a QCD eigenstate, $H_{QCD}|\Psi\rangle = E\,|\Psi\rangle$, the states $Q^a_V|\Psi\rangle$ and $Q^a_A|\Psi\rangle$ would be degenerate. But such parity doubling does not occur in Nature! The symmetry of the Hamiltonian is not respected in nature, and the axial part of the flavour symmetry, SU(2)A, is spontaneously broken. As evidence for that, there are Goldstone bosons, three in number: $\pi^0$, $\pi^\pm$.
(Here is the Goldstone theorem: for a generic continuous symmetry that is spontaneously broken, i.e. the currents are conserved but the action of the charge on the ground state does not leave it invariant, new massless particles appear in the spectrum, one for each symmetry generator that is broken.)
The vacuum condenses, producing a non-zero chiral condensate. It turns out that a low-energy effective theory (chiral perturbation theory) can be formulated to study the low-energy physics.
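For concreteness (standard notation, not spelled out in the original post), the order parameter is the chiral condensate, which visibly connects left- and right-handed fields and hence breaks the axial symmetry:

$$\langle \bar{q} q \rangle = \langle \bar{q}_L q_R + \bar{q}_R q_L \rangle \neq 0.$$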

Casher's argument: There is an argument due to Casher, which states that in the confining regime, chiral symmetry is necessarily broken. First, let's recall the definition of helicity: it is the projection of the spin onto the direction of momentum. With an equation:

$$h = \frac{\vec{S} \cdot \vec{p}}{|\vec{p}|}.$$

Consider a quark with three-momentum p moving along the z-axis. It's massless, so helicity is the same as chirality. For simplicity, choose $s_z = +1/2$. Confinement implies that the quark must turn back. Now, if chiral symmetry is unbroken, then the spin must also be flipped, so $\Delta s_z = -1$. The angular momentum needs to be compensated somehow! But the QCD vacuum cannot supply this angular momentum; it has no $L_z$.
Therefore we can naively conclude that if chiral symmetry is unbroken, the quark cannot turn back, and there is no confinement. On the other hand, if the quark does turn back, then at the point of turning back the helicity (which is the same as chirality in the massless limit) must flip, and hence a chiral-symmetry-breaking term must appear in the quark Green function. But the quarks are massless, so this term must arise dynamically. This is called dynamical chiral symmetry breaking of the QCD vacuum.

Saturday, February 12, 2011

Jack-knife and Boot-strap

Jack-knife and boot-strap are non-parametric methods of data analysis. They do not assume that your data follow a Gaussian distribution. They are useful in many cases where you need to estimate the error bars on an indirect quantity. Lots of examples come to mind. Consider, for example, the error estimation of a ratio (shall we say the Binder cumulant?). The distribution of this quantity is certainly not Gaussian, and it's a further pain if you only have a few measurements. Or say you are doing a (complicated) fit on a data set and need to estimate errors on the fit parameters. In each case we could come up with some way of accurately determining the errors, but a simple universal answer is to do the bootstrap and/or the jack-knife. It's not my aim now to describe why this works; rather, I'll just state what to do.

Consider the jack-knife method. Suppose you have independent data/measurements. (If you have auto-correlations, blocking the data is an effective way to make the data items independent.) Okay, so now you do whatever calculation you have to do with the whole data set to get the mean values of the parameters you want to determine. That's it! For the error, you do the analysis on the whole data set but removing one block at a time. Thus you'll get N estimates of the quantity you were looking for, where N is the number of blocks you have divided your data into. Now quote the error as:

$$\sigma^2 = \frac{N-1}{N} \sum_{n=1}^{N} \left(\theta_n - \Theta\right)^2,$$


where $\theta_n$ are the estimates of the parameter on each of the jackknife samples and $\Theta$ is the corresponding value on the whole data set. The error is then simply $\sigma$. Also note that the variance is calculated about the full-sample value $\Theta$, if that is the one being quoted. Sometimes the average of all the jackknife estimates, $\xi$, is used to construct a bias-corrected estimator, $\Theta - (N-1)(\xi - \Theta)$, which is sometimes quoted as the estimate of the mean.
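A minimal sketch of the procedure (my own illustration, for the ratio of two means standing in for something like a Binder cumulant; the data are assumed already blocked and independent):

```python
# Jackknife error estimate for a derived quantity, here <x>/<y>.
import numpy as np

def jackknife(x, y):
    N = len(x)
    theta_full = x.mean() / y.mean()                  # estimate on full sample
    theta = np.array([np.delete(x, n).mean() / np.delete(y, n).mean()
                      for n in range(N)])             # leave-one-out estimates
    var = (N - 1) / N * np.sum((theta - theta_full) ** 2)
    return theta_full, np.sqrt(var)

rng = np.random.default_rng(0)
x, y = rng.normal(1.0, 0.1, 100), rng.normal(2.0, 0.1, 100)
mean, err = jackknife(x, y)
```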

What about the boot-strap? This relies on the re-usability of data. Make up your mind on doing N boot-strap samples. Now select ndat values (where ndat is your total number of data points; this, I guess, is not absolute, but convenient) from the whole data set. Do this selection using a uniform random number generator, so that in each boot-strap sample some data are, in principle, reselected and some are not used at all. Now estimate the parameters you want in each sample. The mean of the estimates is your desired mean, and the standard deviation is the estimate of the error on the mean.
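The same ratio example, done with a bootstrap (again my own sketch; n_boot is an illustrative choice):

```python
# Bootstrap error estimate for <x>/<y>: resample with replacement.
import numpy as np

def bootstrap(x, y, n_boot=1000, seed=1):
    rng = np.random.default_rng(seed)
    ndat = len(x)
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, ndat, size=ndat)   # resample with replacement
        est[b] = x[idx].mean() / y[idx].mean()
    return est.mean(), est.std(ddof=1)           # mean and error estimate
```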
Some people argue about the "absoluteness" of the boot-strap in the sense that, if you did more and more measurements, then you'd come up with the same samples more than once. And thats what the boot-strap achieves by doing the random selection with replacement. But a little thought will tell you, that if you are measuring an operator that has a continuous distribution, then it is pretty likely that you'll generate values that are not contained within your data set.

Tuesday, January 18, 2011

Electrical Conductivity and Thermal Dilepton Rates

There is a recent paper that's come up by the Bielefeld group on an accurate calculation of the vector current correlation function in the deconfined phase. This is the arXived link to the paper: http://arxiv.org/abs/1012.4963 . I happened to present this paper in our journal club at TIFR, so I'll be able to quote a summary pretty easily.
The main methodology adopted in the paper is an accurate calculation of the continuum correlation function. This is essentially done by taking the continuum limit on the $N_t$ = 16, 24, 36 and 48 lattices. A $1/N_t^2$ extrapolation is done to get the continuum correlation functions. Note that the $N_t$ = 16 data are not used in the extrapolation.
The further technology introduced is the Taylor expansion of the vector correlation functions about $\tau T = 1/2$. The deviations from the free-field correlation functions are studied in detail, leading to the conclusion that for the interacting theory at 1.45 Tc these deviations are at most 10% for all $\tau$ values. Therefore they conclude that the free-theory spectral function ansatz, suitably modified to take into account the behavior of the interacting theory at low energy (e.g. smearing out the exact $\delta(\omega)$ function present in the spatial channel of the spectral function into a Breit-Wigner form) and at high energy (perturbative corrections to the free-field results), should describe the data: and it actually does. An MEM analysis of these results sees only small improvements. Note that the default model used for the MEM analysis is built from the parameters used to model the spectral function, determined by fits to the extrapolated continuum correlation function. Using a free-theory default model kills off the contribution at the origin and hence gives an unphysical value of the electrical conductivity.
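Schematically, the fit ansatz has the form (my paraphrase of the construction; the precise normalizations should be checked against the paper):

$$\rho_{ii}(\omega) = 2\chi_q\, c_{BW}\, \frac{\omega\,\Gamma/2}{\omega^2 + (\Gamma/2)^2} + \frac{3}{2\pi}\,(1 + k)\,\omega^2 \tanh\!\left(\frac{\omega}{4T}\right),$$

i.e. a Breit-Wigner peak replacing the free-theory $\delta(\omega)$, plus the free continuum dressed with a perturbative correction factor $(1+k)$; the width $\Gamma$, amplitude $c_{BW}$ and $k$ are fit parameters.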
A final comment on the results: they conclude that $\sigma/T \approx (0.37 \pm 0.01)\, C_{em}$ from the fits, while a more conservative estimate, obtained by changing the functional form used to represent the spectral function (and consistent with the MEM error estimate), is $1/3\, C_{em} \lesssim \sigma/T \lesssim 1\, C_{em}$. Contrast this with previous estimates: a) by Aarts et al. (2007): $(0.4 \pm 0.1)\, C_{em}$, and b) by S. Gupta (2004): $\sim 7\, C_{em}$.
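For reference (standard definitions, not spelled out in the post), the conductivity follows from the slope of the vector spectral function at the origin via a Kubo formula, with $C_{em}$ the sum of the squared quark charges:

$$\frac{\sigma}{T} = \frac{C_{em}}{6} \lim_{\omega \to 0} \frac{\rho_{ii}(\omega)}{\omega\, T}, \qquad C_{em} = \sum_f Q_f^2.$$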