If you want to win a Nobel Prize in Physics by finding the unified field theory, it’s pretty obvious that you will have to learn how to make Quantum field theory (QFT) calculations. In the 1940s, Richard Feynman and Ernst Stueckelberg independently developed a notation (now known as Feynman diagrams) that greatly eases certain calculations in QFT.

Julian Schwinger complained that Feynman had made QFT accessible to the “masses”. He meant the “masses of physicists.” Between this post and the next, we’re going to take Feynman one better and make Feynman diagrams accessible to the masses, as in “masses of amateurs.” Our simplification will be to use qubits. For a reference in the arXiv literature, see Quantum Electrodynamics for Qubits, but our discussion will be simpler than this. This post will discuss Feynman diagrams the usual way.

I will try to write this introduction at a low enough level, and with enough links to explanations of the jargon, that it can be understood by people unfamiliar with particle physics. Accordingly, we will skim over a lot of details, but will include links to articles that will explain further, if needed.

As a first step, let’s describe what is so difficult about the usual way of doing Feynman diagrams. Maybe this can work as a short introduction to the method for those who haven’t seen it before. It’s fairly easy to write down Feynman diagrams. It’s more difficult to turn them into definite integrals, and evaluating those integrals is worse yet. We begin by discussing how Feynman diagrams enter into particle physics.

Feynman diagrams are used in perturbation theory to mathematically approximate a difficult-to-compute quantum calculation through a series expansion. One takes the calculation one wishes to make and turns off the parts of the interaction that are difficult to compute. This gives an equation that we can solve exactly. We then turn the interaction back on, but only as an infinitesimal correction. Since it is infinitesimal, we keep only terms that have no more than N factors of the infinitesimal. If we can solve the resulting equations (they get more and more difficult as N increases), we end up with a series, a sort of polynomial of order N, for example:

The above illustration shows the bare (i.e. without any interactions) photon propagator on the top row, plus the single 2nd order diagram in the second row, and the two 4th order diagrams in the next two rows. The coupling constant is the fine structure constant, α, and the places on the diagrams which correspond to interactions are marked as vertices. If these diagrams compute out to give the values A, B, C, and D, then the photon propagator, corrected (for the possible creation and annihilation of electron / positron pairs in the vacuum), is given by the sum of the bare propagator and these corrections. Note that since the photon is a vector particle, the values of A, B, C, and D are not real or complex numbers but instead are more complicated objects, as we will discuss later.
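To get a feel for how such a truncated series behaves, here is a toy numerical sketch (nothing to do with the actual QED integrals; the function and numbers are invented for illustration): we approximate an “exact” answer 1/(1 − g) by its perturbation series in the coupling g, kept to order N.

```python
# Toy illustration (not the actual QED calculation): a perturbation
# series is a polynomial in the coupling g that approximates an
# "exact" answer.  Here the exact answer is 1/(1 - g) and the
# Nth-order series is 1 + g + g^2 + ... + g^N, loosely analogous to
# summing repeated corrections onto a bare propagator.

def perturbative_sum(g, order):
    """Partial sum 1 + g + ... + g**order."""
    return sum(g**n for n in range(order + 1))

g = 1.0 / 137.0  # roughly the size of the fine structure constant
exact = 1.0 / (1.0 - g)
for order in range(4):
    approx = perturbative_sum(g, order)
    print(order, approx, abs(exact - approx))  # error shrinks like g**(order+1)
```

Because the coupling is small, each extra order buys roughly two more decimal digits of accuracy, which is why low-order diagrams suffice for most QED work.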

In the usual practice, the interaction part that is turned off is typically a coupling constant. A coupling constant is a constant that determines the strength of an interaction. An “interaction” means an event where quantum particles are created or annihilated. “Creation” and “annihilation” are the terms used in QFT to describe the event where a particle begins its existence or ends its existence.

Without interactions, there is no need for creation or annihilation, and we could use the simpler theory of quantum mechanics instead of QFT. In fact, QFT includes quantum mechanics as a subset. After a particle is created, in QFT we can use quantum mechanics to determine how its wave function changes with time, up until the moment when it is annihilated. In QFT, one almost always uses a Green’s function for the quantum mechanical wave equation that we would like the particle to obey. The physicists call such a Green’s function a propagator because it shows how the wave function spreads, or “propagates”, if it starts out concentrated at one point in spacetime (or the equivalent in momentum space). Momentum space comes from taking the Fourier transform of the position space wave functions. It is convenient for calculations in QFT because complicated <a href="http://en.wikipedia.org/wiki/Convolution">convolution</a> integrals become simple products when you Fourier transform them. Another way of saying that momentum space is convenient is that in momentum space, momentum and energy are conserved exactly; this is not the case in position space quantum mechanical calculations.
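The convolution claim is easy to check numerically. Here is a small sketch using numpy’s FFT (a toy check, not a QFT calculation): convolve two arrays directly, then via Fourier transform, and confirm the two answers agree.

```python
import numpy as np

# Why momentum space is convenient, in miniature: the Fourier
# transform turns convolution into pointwise multiplication.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

direct = np.convolve(f, g)     # position-space convolution, length 127
n = len(direct)                # zero-pad so circular convolution == linear
via_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real

print(np.allclose(direct, via_fft))  # True
```

The same trick, applied to the integrals of QFT, is what turns nested position-space convolutions into simple products of propagators in momentum space.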

In the usual way of doing physics, one obtains Feynman diagrams after making a guess at the Lagrangian density. Joseph Louis Lagrange was an 18th century mathematician. The Lagrangian is roughly the kinetic energy minus the potential energy. If we choose a particular form for the kinetic and potential energies, we can write down the Lagrangian. From the Lagrangian we can compute the equations of motion. We do this by varying the action (the integral of the Lagrangian over time), that is, by computing the action for a set of possible paths and picking a path for which small changes to the path do not change the action. Such a path is a possible sequence of values for the positions of our particles (and their momenta). The equations of motion show up as a set of coupled differential equations.
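The variational idea can be sketched numerically (a hedged toy, with all parameter values invented for illustration): discretize a free particle’s path, fix the endpoints, and relax the interior points downhill on the discretized action S = Σ (m/2)((x[i+1] − x[i])/dt)² dt. The stationary path should come out a straight line, which is the classical free-particle motion.

```python
import numpy as np

# Discretized "principle of stationary action" for a free particle:
# start from a random path with fixed endpoints and descend the
# gradient of the action until small changes no longer lower it.
m, dt, npts = 1.0, 0.1, 21
x = np.zeros(npts)
x[0], x[-1] = 0.0, 1.0                         # fixed endpoints
x[1:-1] = np.random.default_rng(1).standard_normal(npts - 2)

def action(path):
    v = np.diff(path) / dt
    return np.sum(0.5 * m * v**2 * dt)

before = action(x)
for _ in range(2000):                          # crude gradient descent on S
    grad = m * (2 * x[1:-1] - x[:-2] - x[2:]) / dt
    x[1:-1] -= 0.04 * grad

print(action(x) < before)                                  # True
print(np.allclose(x, np.linspace(0, 1, npts), atol=1e-3))  # True: straight line
```

The stationary path satisfies the discrete equation of motion 2x[i] = x[i−1] + x[i+1], i.e. zero acceleration, exactly as varying the continuum action would give.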

For a wave theory, like quantum mechanics, the kinetic and potential energies are defined at each point in space-time as functionals of the fields. With ψ the wave function, T the kinetic energy, and V the potential energy, one could write the Lagrangian as L = T − V. Instead of getting an equation of motion for the particles, we get a set of coupled partial differential equations. The partial derivatives show up because of the dependence on position.

If we turned off the interaction, the equations of motion we would get from the Lagrangian in the usual QFT technique would be something like Schrödinger’s equation or Dirac’s equation. The propagators (Green’s functions) for these equations of motion are well known. What are not known are the propagators for the more complicated equations of motion that come from the full Lagrangian. Such a propagator is called “exact”. We will direct our effort at this sort of problem, that is, finding the exact propagators (or an approximation to them) for complicated Lagrangians.

In quantum mechanics, we would compute the complex number C, the amplitude to go from the initial state to the final state. The probability would then be the absolute value of C squared, that is, P = |C|². In QFT, the exact (or approximated) propagator is used in a similar fashion. For example, if the propagator is given by G(x, t; x′, t′), where x and x′ stand for 3-vectors giving spatial positions, and the initial and final conditions are given by ψ_i and ψ_f, at times t′ and t, respectively, then the complex number computed from the propagator is C = ∫∫ ψ_f*(x) G(x, t; x′, t′) ψ_i(x′) d³x d³x′, where the integral is the definite integral twice over all space. The probability is then given by P = |C|², as in the quantum mechanical calculation.
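Here is a hedged sketch of this sandwich computation on a two-state toy system (the Hamiltonian and the states are invented for illustration): there the propagator is the matrix exp(−iHt), and the double integral over space becomes an ordinary matrix product C = ψ_f† U ψ_i.

```python
import numpy as np

# Toy version of C = <final | propagator | initial> and P = |C|^2,
# on a discrete 2-level system instead of continuous space.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # made-up 2-level Hamiltonian

def propagator(t):
    """U(t) = exp(-i H t), built from the eigendecomposition of H."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

psi_i = np.array([1.0, 0.0])       # start in state |0>
psi_f = np.array([0.0, 1.0])       # ask for state |1>

C = psi_f.conj() @ propagator(np.pi / 2) @ psi_i
print(abs(C)**2)                   # ~1.0: complete transfer at t = pi/2
```

The continuous-space calculation is the same sandwich, with the matrix indices replaced by the positions x and x′ and the sums replaced by integrals.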

Of course quantum field theory also allows calculations where the initial and final states are not the same particle states. These are used to model experiments where the type of particle is changed by the interaction. The method is similar to the above, but the propagator refers to different sorts of quantum states. Such a “mixed” propagator would be zero when the interaction is turned off.

T and V can depend on ψ in complicated ways; for example, they can depend on its derivatives. In addition, ψ is only rarely a simple real or complex valued function (which represents what the physicists call a scalar particle); instead it would be a spinor for a spin-1/2 fermion or a vector for a vector boson such as a photon. The use of spinors is very important and is taught to undergraduate physics students, though undergraduate mathematics majors may get a degree without learning of them.

Because spinors and vector bosons are complicated, most QFT textbooks begin with scalars. No elementary scalar particles are known, so this is a toy model used primarily to teach the subject. The problem with the full theory is that it is so complicated that even simple diagrams like the following result in amazingly messy calculations, which we will illustrate.

Our approach will be different from most introductions to QFT. We will first briefly discuss the subject in its full glory, examining the above diagram, and then reduce to a simplification that is not just a toy model, but is the QFT on qubits. For reference, the reader is directed to the recent arXiv paper by Iwo Bialynicki-Birula and Tomasz Sowiński, Quantum Electrodynamics for Qubits.

So now we will turn the above Feynman diagram into a calculation. To do this, we use Feynman rules. These rules convert the elements of the diagram into mathematical objects that will be present in a definite integral. On solving the definite integral, one finds the result of the computation.

The vast majority of work with Feynman diagrams is done in momentum space, that is, with momentum and energy instead of space and time. Calculations are much simpler in momentum space. In fact, in quantum mechanics it is easy to forget that there is anything other than momentum space, and sometimes people do forget. For example, see “coalesce into a single blob”, with respect to Bose-Einstein condensation.

The source of Feynman diagrams is approximately as follows. First, one chooses a quantum mechanical theory for how a particle moves. One then finds a propagator for this theory. In a theory with several different particles, one might need several different kinds of propagators. The Feynman rules will convert lines and arcs into these propagators.

Second, one chooses a set of interactions. The interactions define what can be created when other things are annihilated. The interactions tell how the lines and arcs are hooked together, that is, they define the vertices. For each type of vertex, a complex constant is given (the coupling constant), and if the arcs coming into the vertex have spinor or vector structure, then a way for these to relate to one another must be included in the vertex.

Generally, one defines the propagators and interactions by using a Lagrangian, but the details of this need not concern us here. For any given set of Feynman rules, we could define a Lagrangian that would give those rules, and vice versa. They are alternative ways of describing the same thing.

As I mentioned above, the rules one uses in momentum space are much simpler than the rules in position space. The primary reason for this is that when one Fourier transforms a derivative, one gets a nice simple factor of the momentum. In the usual QED theory one also has to treat the virtual particles differently from the “real” ones that are “on the mass shell”. For our purposes, it is enough to understand the virtual particles. With that said, the Feynman rules for QED, the interaction between photons and electrons (and positrons), are simply:

The above rules are from Peskin and Schroeder’s book. The mathematical objects on the right need a little explaining. The iε is an infinitesimally small imaginary constant that comes in when one does the Fourier transform. Its sign specifies whether the resulting pole will be taken to be just barely above or just barely below the real line in the complex plane. The “p” is the momentum carried by a particle. In relativity, momentum is a 4-vector, so there are four components. One usually writes the four components p0, p1, p2, p3. This is said to be convenient for summation conventions, but it seems to me that summation conventions can be what we wish them to be and don’t require integer designations. So I use the more geometric nomenclature p_t, p_x, p_y, p_z, which follows that of the Pauli matrices, which fit better with the “east coast metric”. But to be compatible with Peskin & Schroeder, I’ll keep to the west coast metric here. An advantage of using t,x,y,z to designate the components of momentum (and position) is that I can use the integers to designate different momenta without being at all confusing to the student. So the relativistic conventions I will use will be:
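These conventions are easy to put into code. The following is a hedged sketch (the mass and momentum values are invented for illustration) of the west coast metric with components labeled t, x, y, z, checking the mass-shell condition p·p = m² with c = 1.

```python
import numpy as np

# The "west coast" metric has signature (+, -, -, -).  A free
# particle on the mass shell satisfies p.p = m^2 (units with c = 1).
METRIC = np.diag([1.0, -1.0, -1.0, -1.0])

def mdot(p, q):
    """Minkowski dot product: p_t*q_t - p_x*q_x - p_y*q_y - p_z*q_z."""
    return p @ METRIC @ q

m = 0.511                        # roughly the electron mass in MeV
p_spatial = np.array([0.3, 0.0, 0.0])
E = np.sqrt(m**2 + p_spatial @ p_spatial)
p = np.array([E, *p_spatial])    # 4-momentum ordered (p_t, p_x, p_y, p_z)

print(np.isclose(mdot(p, p), m**2))  # True: on the mass shell
```

Virtual particles inside a diagram are exactly those whose 4-momentum fails this check: mdot(p, p) need not equal m² for an internal line.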

The reader should not take the above two figures as definitive. My purpose here is simply to show what goes into QFT calculations even as simple as the first correction to the photon propagator. (Did I mention that I’m ignoring the term that has a vacuum bubble?) My objective here is not to calculate it. If you would like to correct one of the above, please put it in the comments and I may or may not get around to correcting it. In addition to these problems, there are several different conventions for how to define electricity and magnetism. In short, even the conventions available for the simple theory of QED are a mess.

In addition to the above, one also requires that momentum is conserved at the vertices, and that “undetermined loop momenta” are integrated over. The conservation of momentum is obtained by adding to the vertices a delta function on the sum of the 4-momenta coming into the vertex. For example, if the three propagators coming into a vertex bring momenta of p1, p2, and p3, then we include a 4-delta function δ⁴(p1 + p2 + p3). This delta function makes sure that energy and momentum are (each separately) conserved at the vertex. This is part of the reason that momentum space is simpler: energy and momentum are conserved.
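The way the vertex delta function collapses an integration can be seen in a one-dimensional toy (the integrand here is a made-up stand-in, not a QED amplitude): integrating f(p) against δ(p − p0) just evaluates f at the conserving momentum p0.

```python
import sympy as sp

# A delta function under an integral replaces the integrated momentum
# by the value that satisfies the conservation condition.
p = sp.symbols('p', real=True)
f = sp.exp(-p**2)                  # stand-in for the rest of the integrand

# integral of f(p) * delta(p - 1) over all p evaluates f at p = 1
result = sp.integrate(f * sp.DiracDelta(p - 1), (p, -sp.oo, sp.oo))
print(result)                      # exp(-1)
```

In a real diagram the same mechanism, applied with one δ⁴ per vertex, eliminates four integration variables per vertex and leaves only the genuinely undetermined loop momenta to integrate.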

Finally, the gamma matrices are a set of four 4×4 matrices called the Dirac gamma matrices. In places where one is supposed to add a real or complex (i.e. scalar) number to a matrix, one first multiplies the scalar by the unit 4×4 matrix. As a 4-vector of matrices, we will use x,y,z,t notation to reference the four of them. As 4×4 matrices, the four individual matrices are somewhat arbitrary; they are just a representation of the Dirac algebra. In my own work I use a geometric notation which does not require the choice of a representation. So it is with some repugnance that here I use a particular representation:

In this choice of representation of the gamma matrices, the product that appears in the electron propagator, γ·p, expands out to be a 4×4 matrix. This sort of dot product appears over and over in Feynman diagrams. Feynman introduced an abbreviated notation for it called the “slash” notation, which can be expanded out (in our choice of gamma matrices) as follows:
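To see the slash machinery concretely, here is a numerical sketch using the Dirac representation of the gamma matrices (one common choice; as stressed above, the representation is a convention, and the momentum values below are invented). It checks the defining anticommutation relation and the identity slash(p)² = (p·p) times the unit matrix.

```python
import numpy as np

# Dirac representation of the gamma matrices, indexed by t, x, y, z,
# with the west coast metric (+, -, -, -).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

gamma = {
    "t": np.block([[I2, Z], [Z, -I2]]),
    "x": np.block([[Z, sx], [-sx, Z]]),
    "y": np.block([[Z, sy], [-sy, Z]]),
    "z": np.block([[Z, sz], [-sz, Z]]),
}
g = {"t": 1.0, "x": -1.0, "y": -1.0, "z": -1.0}  # diagonal metric entries

# Check {gamma^mu, gamma^nu} = 2 g^{mu nu} * identity
for mu in "txyz":
    for nu in "txyz":
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        expected = 2 * (g[mu] if mu == nu else 0.0) * np.eye(4)
        assert np.allclose(anti, expected)

def slash(p):
    """Feynman slash of a 4-momentum: sum over mu of g_mu * p[mu] * gamma^mu."""
    return sum(g[mu] * p[mu] * gamma[mu] for mu in "txyz")

p = {"t": 2.0, "x": 0.3, "y": 0.0, "z": 0.1}
pdotp = sum(g[mu] * p[mu]**2 for mu in "txyz")   # Minkowski p.p

# slash(p) squared should be (p.p) times the unit 4x4 matrix
print(np.allclose(slash(p) @ slash(p), pdotp * np.eye(4)))  # True
```

The t,x,y,z dictionary keys mirror the post’s preference for geometric labels over integer indices.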

Putting together all the Feynman rules, let’s write down the first self energy correction to the photon propagator. The Feynman diagram, as before, is:

When using the Feynman rules, one arranges for the “goes into” of one part of the diagram to fit into the “goes outa” of the next. The first place where this has to happen is at the ends of the photon propagators. Each photon propagator contributes the factor given in the rules above, which carries two indices, one for each end. These indices are just dummy variables that will be summed over when we add vertices to the ends of the propagator (where the photon is created and annihilated). In our diagram, we have two photon propagators, so we will end up with four dummy variables. I will label these as follows:

In the above, the left vertex is associated with one of these dummy indices and the right vertex with another. The finished correction to the propagator will carry the two remaining, external indices. These indices match up with the indices given in the Feynman diagrams under “QED Vertex”.

Once we get the indices correctly matched to the Feynman diagram, the second place where “goes into” has to match up with “goes out of” is in the 4×4 matrix structure of the fermion propagators. Each of the electron propagators is a 4×4 matrix of complex numbers. The QED Vertex is also a 4×4 matrix. We have to make sure that these matrices are used in the correct order.

With the photon propagators, it didn’t matter what order we put the two indices in. This is because the metric matrix g is symmetric. This is not the case for the fermion propagator; its matrix is not symmetric. This is why the fermion propagators have an arrow. The arrow indicates the direction in which we insert the 4×4 matrices.

For the diagram above, the fermions form a loop. Starting at the left vertex, the order around the loop is “left vertex”, “top electron propagator”, “right vertex”, “bottom electron propagator”, and back to the “left vertex”. This is the order in which we must put down the 4×4 matrices associated with these objects. Using the momenta as labeled in the diagram, the terms, in order and separated by parentheses, are:

The alert reader will notice that, taken as a product of 4×4 matrices, the above product is arbitrary in that I chose to begin it with the leftmost vertex. And in general, calculations in QFT reduce to complex numbers, not 4×4 matrices. There is another step, which is to take the trace of this product of matrices. The trace arises because we have to take a sum over all possible polarizations of the fermion loop. You can break the loop at any point, and when you sum over polarizations, you end up taking the trace. Since the trace of AB is the same as the trace of BA, it doesn’t matter where we start the loop; the trace function will give the same result.
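The cyclic property of the trace is quick to verify numerically (random matrices here, standing in for the vertex and propagator factors of the loop):

```python
import numpy as np

# tr(ABC) = tr(BCA) = tr(CAB): it does not matter where around the
# fermion loop we start writing down the 4x4 factors.
rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

t1 = np.trace(A @ B @ C)
t2 = np.trace(B @ C @ A)
t3 = np.trace(C @ A @ B)
print(np.isclose(t1, t2) and np.isclose(t2, t3))  # True
```

Note that the trace is cyclic, not fully symmetric: tr(ACB) generally differs from tr(ABC), which is why the arrow direction around the loop still matters.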

Okay, we’ve arranged for the 4×4 matrices to multiply in the correct order (i.e. the electron propagators are hooked up correctly). We’ve arranged for the photon indices to fit correctly into the vertices (i.e. the photon propagators are hooked up correctly). We can now write down the calculation needed:

The above is a definite integral of 4×4 matrix products with 8 variables of integration. The delta functions will simplify four of the integrals. This will leave an overall delta function, enforcing conservation of the external momentum, that factors out. What remains will still be quite a mess. Various calculational tricks allow computation of the matrix trace without having to actually multiply it out. The equivalent calculation in position space is considerably worse.

This discussion of Feynman diagrams hasn’t included various things that are needed to account for symmetry between diagrams. The above has a minus sign added because of the presence of a fermion loop. God knows what I’ve left out. Surely those in the know will complain in the comments. But these are details. What I’ve tried to do with this post is to give an idea what sort of things go on when one converts a Feynman diagram into a calculation.

In the following post we will redo this, but with qubits, and show how the theory simplifies to the point that amateurs can discover new results.

I sat down to continue writing and I already found errors. In switching the metric to east coast, I need to switch the Feynman rules also, otherwise integrals don’t blow up like they’re supposed to… I’ll get around to fixing it probably later today.

This is very clear and interesting. One thing about quantum field theory that worries me is the poor way that the theory is summarized: either with too much abstract maths which aren’t applied to concrete examples, or else with no maths at all. It should be possible to concisely explain it, and that’s what you’re doing.

One problem with popular Feynman diagrams is that they’re plots of time (ordinate, or y-axis) versus spatial distance (abscissa or x-axis).

Therefore, it’s a funny convention to draw particles as horizontal lines on Feynman diagrams, because such particles aren’t moving in time, just in distance, which is impossible.

[snip]For rest of comment, see Long Nigel Post #1. For Nigel’s copy of the comment, with further comments, see Nigel’s blog post on energy conservation comments.[/snip]

Pingback: Feynman Diagrams for the Masses (part 2) « Mass

I also find the website very interesting, especially the virtual photons and virtual electrons. Why is momentum conservation necessary? If energy conservation can be violated in time, then momentum conservation can be violated in space.

Little mistake.

Last matrix element is not -pz.

-pt.

[Carl: And that’s not going to be easy to fix…]

How can these diagrams be used for the wave function? Can they be combined with twistors in some way?

Each diagram describes a sequence of changes in a multi-particle wavefunction. For example, in Carl’s first diagram (if you read left to right), you start out with a single-photon wavefunction (represented by the wavy line), it changes into a one-electron,-one-positron wavefunction (the two straight lines with arrows), and then it changes back into a single-photon wavefunction (wavy line again).

The diagram describes both a process in space (photon becomes electron-positron pair which recombine) as well as a quantum probability (which comes from multiplying the algebraic expressions in the diagram labeled “Feynman QED Rules”).

A wavefunction is a superposition of all the possibilities. You can extend that view to talk about a superposition of possible “histories” or “processes”. The standard form of quantum mechanics involves the Schrödinger equation: you start with a wavefunction, and then that equation tells you how it changes over time. But the other way to do it is to look at all the possible sequences of events (the “histories”). Each history gets a complex number, and then the complex numbers add if the histories arrive at the same final state. This is the “sum over histories” or “path integral”, and mathematically it’s exactly the same as the other picture.

When particles can interact, the possible histories can become extremely complex. But in quantum electrodynamics (electron-photon interactions), the more often they interact, the less probable that history is. So the “higher order diagrams” only matter for precision calculations.

Feynman wrote a famous book about this, “QED”, which is meant to explain it all to the public.

Twistor theory also has “twistor diagrams” which are exactly the same idea. In fact they are now being used in QCD (gluons) and string theory.

Thanks for the answer, but you said nothing about the combination Feynman – twistors.

I’m a novice in these things, as you certainly saw. I am hunting the qubit thing and the degrees of freedom (P- and V-axis). In Wikipedia they said that Feynman diagrams were no good for solitons and superconductivity in general, and that is what it is about in my meridians and the nerve pulse.

The twistors are no good for particles, atoms and molecules. But they can have many superpositions at a time. It seems to me now that it is the twistors that are responsible for the wave collapse, through braiding. The wave is repelling, the particle is attracting?

Wikipedia says that Feynman diagrams are “no good for solitons” because Feynman diagrams are for perturbation theory and perturbation theory is no good for solitons and similar bound states.

My efforts have been in extending the methods of Feynman diagrams to exactly those deeply bound states. The idea is that you look at what happens in perturbation theory to give you an idea of what must happen in strongly bound states. See my latest paper on “Spin Path Integrals and Generations” for example calculations of bound states using Feynman path integrals.

Another way of describing the idea is that in those bound states, you have to agree that certain things are as true in bound states as they are in perturbational quantum mechanics, especially superposition. Now standard quantum mechanics handles bound states beautifully; the real problem is only that Feynman diagrams and modern elementary particle theory are more of a perturbational way of looking at things. So I take the tools defined by what we know from perturbation theory and turn them around to apply them to bound states.

As far as twistors go, I don’t think I know enough to competently comment on them except to say that Marni Sheppeard seems to think they’re useful, and it sure looks to me like they’re similar to the kind of stuff I’m doing.

The problem is the relaxation that happens in the braidings. The energy is freed. Can you really take the tools and just turn them around? There is also the time factor.

Also the hierarchy problem is there. The vectors/spinors are of different amplitude? What determines it?

It sounds logical to think this process is asymmetric and cannot be turned around.

I need Feynman diagrams for atomic and nuclear (bound state) reactions. Please provide some, if you can.