A Short Introduction to Renormalisation Group Methods
A guide to renormalisation in condensed matter physics.
Renormalisation group is one of the most powerful methods in all of theoretical physics, in my opinion. It’s a technique that comes in various forms, but all share a common theme: starting from a microscopic view of a material – in terms of individual electrons or other small-scale degrees of freedom – we want to slowly ‘zoom out’ to see what the material is doing on larger scales. Renormalisation group (RG) is a mathematical procedure for doing this zooming-out process in a slow and controlled way, gradually throwing out information about unnecessary details which grow too small to be seen and keeping only the information that is most useful in describing the large-scale properties of a material. For example, we don’t care about what individual electrons in a piece of metal are doing: we only care whether the metal is a good conductor.
Recently I stumbled across the slides of a tutorial lecture on RG that I gave at the Ecole Polytechnique to other members of the Quantum Matter Group back in late 2019 when I was a postdoc there. With a few years of distance between me and that presentation, I reread the slides and thought actually this broad summary of renormalisation group techniques wasn’t too bad, and perhaps it deserved to see the light of day again. This week I posted the slides to FigShare, and in today’s post we’ll run through a very abridged version of the talk that went along with the slides. (But if you want the full thing, feel free to invite me to give a colloquium talk on the topic…!)
(NB: I start the page numbering with the slide after the title slide.)

One of the main goals of condensed matter / atomic physics is the description of macroscopic phenomena starting from microscopic models. However, describing the behaviour of 10^23 particles is hard! Renormalisation group (RG) is a way to ‘zoom out’ from the microscopic picture to look at the large-scale behaviour of quantum systems without having to exactly keep track of all the microscopic details. In other words, it takes us from an impossible problem with too many variables to handle, to a much simpler problem which keeps track of only the most important information.

Renormalisation group was historically developed (at least in condensed matter) to describe second-order phase transitions. Close to the transition (the critical point), the physics is controlled by a diverging correlation length. The behaviour close to the transition can be completely described by a set of numbers known as critical exponents. Somewhat incredibly, many very different physical systems turn out to exhibit precisely the same set of critical exponents: the microscopic differences are irrelevant, and only the universal properties matter. We say that different systems which are described by the same set of critical exponents are in the same universality class. This motivates the idea that in order to describe these systems, we want a method that ignores small-scale differences and only focuses on properties that matter on large length scales.

Renormalisation group techniques work on the principle of gradually reducing the number of degrees of freedom as we zoom out. Let’s start with real-space renormalisation group. Imagine a one-dimensional spin chain of N spins, with nearest-neighbour couplings between them. Rather than keeping all N of the spins, you could imagine getting a reasonably accurate description of the system by somehow getting rid of half of the spins, and describing the remaining spins in terms of some new effective coupling which takes into account the effects of the spins which we’ve gotten rid of. This is called decimation: in general, we will ‘trace out’ (or integrate over) some fraction of the spins at each step, leaving behind a system with fewer spins than we started with, but with modified coupling constants.

Here, we’ll go through a real example of Leo Kadanoff’s real-space RG method applied to a classical chain of Ising spins at some nonzero temperature. The partition function is indicated on the slide. The goal here is to perform the trace over half of the spins in the system – here we’ll trace over all odd-numbered lattice sites – and obtain a new partition function that has the same form as the old one, but with a modified coupling between the remaining spins. Let’s see how we can do this.

On this slide, we sketch the calculation for a very small system, just to give you an idea of how it works, but you can repeat this yourself for a larger system to check that the result comes out correct. But hang on, it’s not of the right form…?

By comparing the desired form of the partition function with the one we’ve calculated, we can obtain an expression for the rescaled coupling J’ which describes the interaction between our remaining spins. And in fact, now that we know how to do this decimation procedure once, we can do it again and all the details will be the same, so we can immediately write down an expression for a new coupling constant J’’ after tracing over half of the remaining spins, and then again to obtain J’’’ and so on. By applying this procedure again and again and again in a recursive manner, we slowly get rid of microscopic degrees of freedom and ‘zoom out’ to larger length scales.

We can implement this numerically, starting with J=1 for a variety of different temperatures, and see how the coupling constant behaves as we zoom out. Remarkably, in every case we find that the spin-spin interaction J shrinks as we perform more and more steps. We say that J renormalises to zero, and that ultimately it is an irrelevant variable for describing the large-scale properties of the material. In other words, at any nonzero temperature, thermal fluctuations dominate and the spin-spin interaction has no bearing on the overall magnetic properties of our spin chain. This means that the system must be in a paramagnetic phase! Note that J renormalises to zero much faster at high temperatures: this is because at high temperatures, thermal fluctuations are stronger and after only a few decimation steps we have zoomed out far enough to see the lack of order. At very low temperatures, the system looks ordered on increasingly large length scales, and so we need to perform the decimation step many more times before we start to see the true large-scale behaviour of the system. And, by the way, look how many steps we need to make: 500 steps. That doesn’t sound like much, but remember at each step we traced out half of the spins in our system. That means that to see paramagnetic behaviour at very low temperatures, we’d need a system with over 2^500 lattice sites. When was the last time you did a numerical simulation for a system that large!?
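As a concrete sketch of this numerical experiment: for the classical Ising chain, tracing out every other spin gives the standard textbook decimation relation tanh(K′) = tanh²(K) for the dimensionless coupling K = J/(k_B T). (The slides may use a slightly different convention; this is the usual one.) A few lines of Python reproduce the flow of J to zero, faster at higher temperatures:

```python
import numpy as np

def decimate(K):
    # Standard 1D Ising decimation: tracing out every other spin maps the
    # dimensionless coupling K = J / (k_B T) to K' = arctanh(tanh(K)^2).
    return np.arctanh(np.tanh(K) ** 2)

J = 1.0
for T in (0.1, 0.5, 1.0, 2.0):
    K = J / T
    steps = 0
    while K > 1e-6 and steps < 500:
        K = decimate(K)
        steps += 1
    print(f"T = {T}: coupling drops below 1e-6 after {steps} steps")
```

Running this shows exactly the behaviour described above: the coupling always renormalises to zero, but the colder the chain, the more decimation steps it takes.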

Okay, so now that we’ve seen the procedure in action for a simple system, let’s look at a more complicated case. Here we’ll take a quantum Ising chain with a spin-spin interaction J and a transverse field h. I’ll tell you for free that this model has a phase transition at h/J=1, but can we find this transition with RG?

For the quantum system, we’re going to use a different form of decimation. Rather than tracing out every second site, instead we’re going to combine pairs of sites into new effective ‘bigger’ spins.

Our ‘big’ spins are now built from two spin-1/2 objects, each with a local Hilbert space of size 2 (‘up’ and ‘down’ states respectively), leading to each ‘big’ spin having a local basis of size 4 (up-up, up-down, down-up and down-down). Our decimation step here will be to keep only the two lowest-energy states, thereby turning our ‘big’ spins back into ‘small’ spins which still capture the same low-energy physics. Note that different decimation procedures can capture different physics, and that by phrasing this in terms of density matrices and ‘most/least probable’ states, you end up with a scheme called the Density Matrix Renormalisation Group (DMRG), now most commonly understood in the language of matrix product states. But that’s a story for another time…!
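A minimal numerical sketch of this truncation step, assuming a two-site transverse-field Ising block with one common sign convention (the conventions on the slides may differ): build the 4×4 block Hamiltonian, diagonalise it, and project onto the two lowest-energy states to get back a 2×2 ‘small spin’ Hamiltonian.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], float)
sz = np.array([[1, 0], [0, -1]], float)
I2 = np.eye(2)

def block_hamiltonian(J, h):
    # Two-site transverse-field Ising block in one common convention:
    # H = -J sz(x)sz - h (sx(x)I + I(x)sx)
    return (-J * np.kron(sz, sz)
            - h * (np.kron(sx, I2) + np.kron(I2, sx)))

J, h = 1.0, 0.5
H = block_hamiltonian(J, h)             # 4x4: the 'big' spin
evals, evecs = np.linalg.eigh(H)        # eigenvalues in ascending order
P = evecs[:, :2]                        # keep the two lowest-energy states
H_eff = P.T @ H @ P                     # 2x2 effective 'small' spin
print(np.round(evals, 3))
```

Iterating this kind of build-diagonalise-truncate loop is the essence of the block-spin scheme (and, with a smarter choice of which states to keep, of DMRG).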

We can now derive the recursion relations for the couplings h and J (see the reference list at the end if you want to go through the details). Combining the two into a single ratio h/J, and picking different initial values and applying the recursion relations, we can see what happens to h/J as we perform successive decimation steps. We find that for an initial h/J>1, this ratio diverges and becomes infinite. This tells us that J is a less relevant variable than h, and so the material flows towards a paramagnetic phase (as if we had set J=0). On the other hand, for an initial h/J<1, we find that this ratio flows to zero, indicating that J is more relevant than h, and the material flows towards a ferromagnetic phase. Curiously, for h/J=1, nothing changes: it stays equal to one at every step. This is because second-order phase transitions are scale invariant: fluctuations near these critical points are so strong that they take place on every length scale, so the system looks the same on every length scale, and the application of RG doesn’t change anything. In the language of RG, a set of parameters left unchanged by the procedure is known as a fixed point, and unstable fixed points like this one signal phase transitions.
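The qualitative flow can be caricatured in one line: in one standard self-dual block-RG scheme for this model, each decimation step simply squares the ratio x = h/J. The exact recursion relations on the slides will differ in detail, so treat this as an illustrative stand-in with the same fixed-point structure:

```python
def step(x):
    # One decimation step for the ratio x = h/J in a simple self-dual
    # scheme: the ratio squares.  So x = 1 is a fixed point, x < 1 flows
    # to 0 (ferromagnet) and x > 1 flows to infinity (paramagnet).
    return x * x

for x0 in (0.9, 1.0, 1.1):
    x = x0
    for _ in range(6):
        x = step(x)
    print(f"h/J = {x0} flows to {x:.4g} after 6 steps")
```

Even starting as close to the transition as h/J = 0.9 or 1.1, six steps are enough to send the ratio to essentially zero or to several hundred, while h/J = 1 sits there unchanged forever.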

We can represent this using a flow diagram, with a fixed point at h/J=1, and arrows representing the flow of h/J for initial values greater than or less than one. This provides an at-a-glance description of how the macroscopic properties arise from the initial microscopic properties of the model.

We’ve used the words ‘relevant’ and ‘irrelevant’ a few times. These are actually the technical terms, and the classification is very general. Relevant variables grow under RG, and determine the behaviour at large length scales. These are important to keep track of. Irrelevant variables decay to zero under RG, and have no effect on the large-scale behaviour of the system. They can be safely ignored. Marginal variables are unchanged by the RG, and should be kept for safety. Dangerously irrelevant variables (my favourite kind!) decay to zero, but may strongly affect the flow of other variables (e.g. if da/dl = a + 1/b and b renormalises to zero, the 1/b term blows up, so we say b is a dangerously irrelevant variable). Phase transitions are given by renormalisation group fixed points where nothing changes under the RG. At a second-order phase transition, the system is scale invariant and fluctuations on all length scales determine the behaviour of the system.
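To make the ‘dangerously irrelevant’ case concrete, here is a crude forward-Euler integration of the toy flow da/dl = a + 1/b, db/dl = -b (purely illustrative equations, not from any particular model): b dutifully decays to zero, yet through the 1/b term it drives a to grow far faster than a would on its own.

```python
# Toy flow: b is irrelevant (db/dl = -b, so b decays to zero), but the
# 1/b term in a's equation grows without bound as b shrinks, so b is
# 'dangerously' irrelevant: it strongly affects the flow of a.
dl = 1e-3
a, b = 0.0, 1.0
for _ in range(5000):   # integrate the flow up to l = 5
    a, b = a + (a + 1.0 / b) * dl, b - b * dl
print(f"a = {a:.1f}, b = {b:.4f}")
```

By l = 5, b has shrunk to under 0.01 while a has exploded to several hundred, so ignoring b too early would get the flow of a badly wrong.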

Now how would we apply this scheme in two dimensions…?

Could we try the same trick of integrating out half of the spins at a time, maybe columnbycolumn? Let’s try.

Ah. Bad idea: we’ve ended up with a set of almost-decoupled 1D chains. This surely can’t capture the physics of a 2D system, can it?

This is better. We’ve integrated out half of the spins in a more democratic manner, using a checkerboard pattern. So we’re still left with a 2D square lattice?

Yes, we are, but look: it’s now rotated and the distance between sites has changed! We won’t go into details, but this is just to demonstrate the principle that doing real-space RG in two or three dimensions requires a bit more care, as it can be quite easy to change the lattice geometry by accident, and you may end up with some unexpected results.

Now we’ll look at a different way to do RG in dimensions greater than one: momentum-space RG! It’s basically the same procedure, but from a different point of view: instead of real space, we start in momentum space. This method was originally developed by Kenneth Wilson in the early 1970s. It works in d>1 (and in fact gives exact critical exponents for d>4). It does, however, require a field-theory description of the partition function! Once we have that, we want to integrate out the high-momentum physics (‘fast modes’, i.e. short-wavelength fluctuations) and retain only the long-wavelength behaviour (‘slow modes’) which describes the behaviour on large length scales.

When preparing this talk, I had only one textbook to hand (as I was at home in lockdown, and all my other books were in my office!), so I thought I’d look up what Lancaster and Blundell had to say about momentum-space RG in their excellent book (which I highly recommend). I found the following, which made me laugh: “We need to integrate over the quickly varying fields. This is usually impossible.”

Let’s do it anyway. Here are the details of how to integrate over the ‘fast’ fields to obtain a renormalised action. We won’t dwell on them, but they are included for completeness. Let’s instead focus on a more physical/intuitive explanation.

It’s traditional at this point to use an image of Kenneth Wilson, the man who developed momentum-shell RG, to illustrate the technique in action, but as he wasn’t available when I was writing this talk, instead we’ll use an image of Lyra the cat, probably my fuzziest collaborator. The momentum-shell RG method proceeds in three steps. First we start with a field theory that we want to renormalise. We then integrate over the fast modes (i.e. remove the high-frequency component of the wave, making the picture blurry and the waveform smooth) to get our ‘slow’ action. It doesn’t quite have the same form as our original, as we haven’t done the ‘zooming out’ step yet: we’re still looking at it too closely. To zoom out, we perform a rescaling of the fields, resulting in a field theory that looks qualitatively similar to what we started with (and a picture that is more zoomed out than the original, but which no longer looks blurry despite us throwing out the high-frequency information: it would be too small to see here anyway, so it makes no difference: it’s irrelevant information!).
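The three steps can be mimicked on a toy 1D ‘field’ with a Fourier transform. This is a crude caricature (a real RG integrates the fast modes out rather than simply discarding them, and rescales the fields as well as the lengths), but it shows the mode-elimination and rescaling structure:

```python
import numpy as np

# Mimic the momentum-shell steps on a 1D 'field' (a noisy signal):
# 1) start with a field phi(x); 2) discard the fast modes above a
# reduced cutoff; 3) rescale lengths so the new cutoff looks like the old.
rng = np.random.default_rng(0)
N = 256
phi = rng.normal(size=N)                # step 1: the original 'field'

phi_k = np.fft.rfft(phi)
cutoff = len(phi_k) // 2
phi_k[cutoff:] = 0.0                    # step 2: remove the fast modes
phi_slow = np.fft.irfft(phi_k, n=N)     # the blurry, smoothed field

phi_rescaled = phi_slow[::2]            # step 3: crude rescaling x -> x/2
print(phi_rescaled.shape)
```

After step 2 the field has visibly less fluctuation (its variance drops, since half the Fourier power is gone), and after step 3 it lives on a lattice of the original spacing again: qualitatively the same kind of object we started with.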

Let’s take a more mathematical example. We’ll start with a Gaussian action, describing a non-interacting theory. We can integrate out the ‘fast’ modes directly, and in fact they don’t rescale the remaining ‘slow’ modes at all (they just contribute an unimportant constant somewhere that we don’t need to worry about here). This is the first step of the procedure: the bit where we get rid of irrelevant information.

Next, we need to rescale to regain ‘resolution’. We do this as shown on the slide.

After integrating out an infinitesimal shell in momentum space, we can see how the coupling constant changes. This lets us obtain the renormalisation group flow equation (here a differential equation rather than a recursion relation, as it was earlier!). This is a pretty simple one: the variable r will always increase in magnitude unless it is zero, in which case it will stay zero. Let’s look at a more ‘real’ example.
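As a sanity check on this flow equation, we can integrate dr/dl = 2r numerically and compare with the exact solution r(l) = r(0) e^(2l). This plain forward-Euler integration is purely illustrative:

```python
import math

# The flow equation dr/dl = 2r has the solution r(l) = r(0) * exp(2 l):
# r grows under the flow unless it starts exactly at the fixed point r = 0.
def flow(r0, l, steps=100_000):
    dl = l / steps
    r = r0
    for _ in range(steps):
        r += 2 * r * dl       # forward-Euler step
    return r

print(flow(0.01, 2.0), 0.01 * math.exp(4.0))  # numeric vs exact
print(flow(0.0, 2.0))                          # the fixed point stays put
```

The numeric and exact values agree to a few parts in ten thousand, and r = 0 really does stay at zero: the simplest possible RG fixed point.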

This is the Bose-Hubbard model, which describes bosons hopping on a hypercubic lattice in any dimension you care to imagine. Its zero-temperature phase diagram is shown on the slide for d=3, containing Mott insulating (MI) and superfluid (SF) phases.

This is the strong-coupling field theory for the Bose-Hubbard model. I’m not going to go into any details about where this came from: you can verify it yourself if you wish by checking the references. My main point here is just to show you how a ‘real’ RG calculation looks for a more complicated non-Gaussian model.

We can go through the same steps as before and obtain a set of coupled differential equations for all the parameters in the field theory. This is more complicated for sure! But there is a simplification. So far I haven’t specified anything about the dimension, or the dynamical critical exponent d_z. Let’s work in d=3, where d_z=2. (I will state this without proof here.) This lets us simplify the equations on the left, to the ones on the right. Anything that decreases under the RG flow is irrelevant and can be neglected (i.e. set to zero). Look what happens: we get back the Gaussian result, dr/dl=2r!

This actually does tell us something. The value r=0 is an unstable fixed point, representing a phase transition. Either side of this, r will increase in magnitude, corresponding to either a superfluid (for r<0) or a Mott insulating phase (for r>0). Physically, r represents the mass term in the field theory, so it’s negative in the gapless SF and positive in the gapped MI, while being zero at the transition. We can use this to map out the phase boundary, by taking each point in the phase diagram and figuring out if r is positive, negative, or zero, as now we know that this variable encodes which phase the model is in on large length scales.

I can’t help myself. Let’s add some disorder. (This was my PhD work…!) If I add some randomness into the model, and you allow me to skip all the details, what happens is that an extra term is added into the field theory. How does this change the physics?

Well, of course it gives rise to some new RG flow equations. In this case, rather than dr/dl=2r being the only relevant equation, we now have two equations, which can be condensed into the variables shown on the slide. As the behaviour here is now much harder to see, we have to search for the fixed points, i.e. the points where the RG flow equations are equal to zero, which correspond to the possible phases and phase transitions of the system. We find that there are three such fixed points, corresponding to different pieces of physics. Let’s take a look on the next slide.

We can plot a flow diagram to illustrate the behaviour of these equations, and label the fixed points on them. We find that there is one stable fixed point (corresponding to the MI phase), and two unstable fixed points, corresponding to MI/SF and MI/BG phase transitions. The BG (Bose glass) is a new phase that did not exist in the non-disordered model. We’ve found a new phase of matter by studying the RG fixed points! We can now use this to plot a phase boundary, and look what’s happened: the Mott insulating region has shrunk! But, as we didn’t find a BG/SF fixed point, we have no idea where the BG phase ends and the SF begins. This is because of how our field theory was derived: we used a ‘strong coupling’ approximation which excludes the superfluid phase from the disordered calculation. Also, the following point is very important. There is an implicit assumption in RG that the short-scale behaviour is independent of the large-scale behaviour, allowing us to integrate out the ‘fast’ modes without caring about the ‘slow’ modes. This is absolutely not true in glassy systems – so be careful! We avoid the issue here by using the ‘replica trick’ to average over disorder and obtain a translationally invariant field theory, but the subtleties of disordered systems are extremely complex, so think twice before trying this at home!

We end with a brief summary of what we’ve seen, and some suggestions for neat things you might want to check out.

A list of useful references you might be interested in if you want to know more about anything discussed here!
Anything I missed? Are there any tricks I don’t know about, anything explained badly, or anything you think I got wrong? Want to invite me to come give a proper talk on the topic? ;) Let me know by dropping me an email or hitting me up on Twitter!