# The Quantum Zeno Paradox or Effect

Kea recently brought up the subject of Zeno of Elea and his now long lost book of 40 paradoxes dealing with the continuum. His nominal 2500th birthday should be celebrated relatively soon. Let me paraphrase an example paradox:

If one assumes that space and time are continuous, then an arrow shot from a bow, before reaching its target, must first travel half the distance. And then travel half the remaining distance. And so on. And therefore, there are an infinite number of distances to be travelled and the arrow could never reach the target. But arrows do reach targets. Therefore, space and time are not continuous.

Surprisingly, there is an echo of this thought in quantum mechanics. The echo is so close to the original paradox that it is known as the Quantum Zeno Effect, or sometimes the Quantum Zeno Paradox, depending on the writer. The subject is discussed in many arXiv articles.

In quantum mechanics, when one measures a system, the formalism requires that the system collapse to the result of the measurement. If one examines this carefully, one finds that if one measures a system at a sufficiently high rate, the effect of the repeated measurements is to prevent the quantum system from changing. In effect, if one examines the position of the arrow too frequently, the arrow cannot move. It’s worthwhile looking at the simple mathematics that causes this effect.

In the usual quantum mechanics, one represents a quantum system by a wave function or state vector. One obtains probabilities from this object by squaring its absolute magnitude. Given the state of the system at time t, a linear differential equation defines its state at later times. For instance, with Schroedinger's wave equation, one has:

$$i\hbar\,\frac{\partial\psi}{\partial t} = H\,\psi.$$

Suppose we have a very short time period $\Delta t$. How much will the wave function change? As for any first-order differential equation, we have:

$$\psi(t+\Delta t) \approx \psi(t) + \Delta t\,\frac{\partial\psi}{\partial t} = \psi(t) - \frac{i\,\Delta t}{\hbar}\,H\,\psi(t).$$

Now suppose that we are looking at a quantum state that decays. For example, a uranium atom. At time t=0, we measure the system and determine that it has not decayed. According to the laws of quantum mechanics, this puts the uranium atom into a pure "not decayed" state. If this seems a funny way of talking about a thing, some physicists think so too. You might try reading about Schroedinger's Cat, which will likely just confuse you further but may be more entertaining than one of those confusing foreign movies.

According to quantum mechanics, it is possible to define a complete set of basis states that fully describes the wave functions one could get by making a measurement on the system. One of those basis states is the undecayed state, and the rest consist of the various decayed states. At time t=0, we're supposing the state to be the undecayed state (because we measured or checked it, and that is what it was). After a short time, the state vector moves to become a mixture of the undecayed state and the decayed states. But since the motion is linear (i.e. Schroedinger's wave equation is linear), the amplitude of the decayed states must grow linearly in $\Delta t$ for small times.

To compute the probability of decay, we take the current state and compute its dot product with the possible state we might get from a measurement. This is the "amplitude", a complex number. Again, this amplitude must be linear in $\Delta t$ for small times. To get the probability, we compute the squared magnitude of the amplitude. Therefore, the probability of decay must be approximately proportional to the square of $\Delta t$.
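As a sanity check, here is a toy two-level calculation of my own (a hypothetical "undecayed" state coupled to a single "decayed" state with strength $\omega$, in units with $\hbar = 1$) showing that the short-time decay probability scales as $(\Delta t)^2$:

```python
import math

# Toy two-level system: |undecayed> couples to |decayed> with strength omega.
# The exact amplitude to be found "decayed" after time dt is -i*sin(omega*dt),
# so for small dt the amplitude is linear in dt and the probability quadratic.
omega = 1.0  # coupling strength (illustrative, hbar = 1)

def decay_probability(dt):
    """Probability of finding the decayed state after evolving for time dt."""
    amplitude = -1j * math.sin(omega * dt)   # <decayed|psi(dt)>
    return abs(amplitude) ** 2

p1 = decay_probability(1e-4)
p2 = decay_probability(2e-4)
print(p2 / p1)  # ~4: doubling dt quadruples the decay probability
```

Doubling the interval quadruples the probability, which is the quadratic behaviour the argument above predicts.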

## Exponential Decay

Now this is a very odd thing. When you have a radioactive atom, we assume that the rate at which it decays does not depend on time, so it should follow an exponential distribution. In exponential decay, the probability of a decay over a very short time interval $\Delta t$ is proportional to $\Delta t$, not to $(\Delta t)^2$. The linear behaviour is what we intuitively expect, but it is not what quantum mechanics gives at very short times.
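A quick numerical check (with a hypothetical decay constant) shows the linear short-time behaviour of the exponential law, in contrast to the quadratic quantum result:

```python
import math

lam = 0.693  # decay constant (illustrative, per unit time)

def p_decay(dt):
    """Exponential-decay probability of decaying within a short interval dt."""
    return 1.0 - math.exp(-lam * dt)

p1 = p_decay(1e-6)
p2 = p_decay(2e-6)
print(p2 / p1)  # ~2: linear in dt, unlike the quadratic quantum short-time law
```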

Since the square of small numbers gives even smaller numbers, the effect of this nonlinear decay rate is that decay is suppressed near a quantum measurement. If we measure the system again, then again the decay rate will be suppressed. And if we repeatedly do this, then the decay rate will be reduced accordingly. Furthermore, in the limit as the time delay between measurements goes to zero, decay is completely suppressed; its probability goes to zero. This is the Quantum Zeno Effect.
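The suppression is easy to see numerically. In this sketch (the same toy two-level model, my own illustration, with coupling $\omega$ and $\hbar = 1$), the survival probability after $n$ equally spaced measurements over a fixed total time $T$ is $\cos^{2n}(\omega T/n)$, which approaches 1 as the measurements become more frequent:

```python
import math

omega = 1.0  # coupling strength (illustrative)
T = 1.0      # total observation time

def survival_after_n_measurements(n):
    # Between measurements the system evolves freely for T/n; each
    # measurement projects it back onto the undecayed state with
    # probability cos^2(omega*T/n), so n measurements give cos^(2n).
    return math.cos(omega * T / n) ** (2 * n)

for n in (1, 10, 100, 1000):
    print(n, survival_after_n_measurements(n))
# The survival probability climbs toward 1 as n increases: the Zeno effect.
```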

As one can learn by reading the literature, the exponential distribution does obtain at longer times. Even more interestingly, the deviation from exponential decay reappears at extremely long times as well as short ones, and it is even possible for the decay rate to be increased (the "Quantum Anti-Zeno Effect"); for a reference, see papers like quant-ph/9708024.

## Density Matrices

From the density matrix point of view, part of the reason we see a paradox in the QZE is our natural inclination to treat a quantum state as if it were a little photograph of the condition of the system at a given time, and a sequence of states as if it were a movie showing how the system changes with time. And when we split a system into things being acted upon and things that are acting, of course it becomes confusing.

Consequently, adding a measurement to the system, for example by using a laser pulse, is a modification of the process. Leaving the laser pulse out of the equations (and treating the measurement as something that uses no matter or energy) is unphysical, and it is not surprising that the dynamics are unexpected. In fact, one can derive the decay rate change of the QZE without any collapse hypothesis by using density matrix theory that includes the laser pulse used to make the measurement. This is well known in the literature and will show up if you search for "density matrix" along with "quantum zeno". For example, see quant-ph/9611020:

> The QZE and this experiment have not only aroused considerable interest in the literature [8, 9], but the very relevance of the above experimental results for the QZE has given rise to controversies. In particular the projection postulate and its applicability in this experiment have been cast into doubt, and it was pointed out that the experiment could be understood without recourse to the QZE by simply including the probe laser in the dynamics, e.g. in the Bloch equations or in the Hamiltonian. Since the Bloch equations describe the density matrix of the complete ensemble, including the probe pulse as an interaction in them gives, however, no direct insight on how such a pulse acts on a single system.

Filed under History, physics

### 4 responses to “The Quantum Zeno Paradox or Effect”

1. nige cook

“And therefore, there are an infinite number of distances to be travelled and the arrow could never reach the target.”

Physically, Zeno was making a massive leap in assuming that there is some sense in dividing a journey into an infinite number of infinitesimal steps, and he was making an error in assuming that an infinite number of infinitesimal steps can't be made in a finite amount of time.

I agree with you that Zeno's crazy argument has echoes of quantum mechanics in it.

While I agree with quantum mechanics as an approximation for electrons in atoms and for alpha decay by quantum tunnelling, I prefer the Feynman interpretation of what is going on: the crazy looking effects are due to the virtual particles in the Dirac sea (or however the vacuum should be described physically) getting involved with fundamental particles. Virtual particle effects cancel out on large scales, just like the impacts from air or water molecules can be described classically as a continuous pressure on large scales. On small scales, they cause chaos and introduce unpredictability unless you average out the motion, just like Brownian motion which can be treated statistically by a path integral.

On the exponential radioactive decay equation, it’s maybe worth the physical (not mathematical) comparison to the dynamics of a capacitor discharging.

A simple charged capacitor is two charged conductors separated by an insulator, which at the simplest can be just a gap of vacuum. Hence a capacitor can physically be composed of charges separated by a distance of vacuum.

A nucleus about to decay by emitting a charged particle is conceptually somewhat similar to a capacitor plate about to discharge. We have to remember that in the universe there are similar numbers of positive and negative charged particles, so the “other” capacitor plate to balance the nucleus we are focussing on is elsewhere.

The analogy is interesting because if you have a large number of radioactive nuclei, they do decay as a whole giving the exponential decay curve, just as you get when a capacitor is discharged.

Physically, when a capacitor discharges, charge is removed from it.

Now here’s the fun part. Charge comes in lumps! At no time does a charged capacitor plate contain a continuously variable amount of charge. It only ever contains a discrete number of electrons. The maths of exponential decay of charge in the capacitor is a fiction. The amount of charge must fall in discrete steps as each electron leaves. So the classical theory of the exponential decay of charge in a discharging capacitor is a large-numbers approximation.

If you shrink the capacitor plates so that you are considering a tiny capacitor with just a few charges in it, the fall of charge when each electron leaves will no longer be a smooth exponential curve, but a series of steps.

In other words, the true model for the capacitor is not the differential model given at http://hyperphysics.phy-astr.gsu.edu/Hbase/electric/capdis.html

Current is quantized into lumps (fundamental particles like electrons), so it’s definitely not possible to represent it by I = dQ/dt where Q is charge. Q is not a continuous variable, dQ doesn’t exist, because the smallest amount of Q that exists is the charge of a unit fundamental particle. This smashes up the mathematics of the calculus completely. You can only apply the calculus statement I = dQ/dt as a crude approximation where you can ignore individual charges (fundamental particles) because the numbers of charges flowing are extremely large.

The (incorrect) non-stepwise formula for the proportion of charge remaining in a capacitor with capacitance C when discharged through resistance R for t seconds is:

exp[-t/(RC)]
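The stepwise picture described above is easy to simulate. Here is a minimal Monte Carlo sketch (my own illustration, not from the comment; all values arbitrary) in which the total departure rate is proportional to the number of electrons remaining, so the charge falls in discrete jumps rather than along a smooth exponential:

```python
import random

# With only a few electrons on the plate, discharge happens in discrete
# steps; exp[-t/(RC)] is only a large-numbers approximation.
random.seed(1)
RC = 1.0   # time constant, arbitrary units
N0 = 20    # initial number of electrons on the plate

def departure_times():
    """Times at which each electron leaves; total departure rate is N/RC."""
    n, t, times = N0, 0.0, []
    while n > 0:
        t += random.expovariate(n / RC)  # exponential waiting time to next jump
        times.append(t)
        n -= 1
    return times

steps = departure_times()
print(len(steps))  # 20 discrete steps, not a continuous curve
```

Averaged over many such runs and many electrons, the staircase converges to the classical exponential, which is the commenter's large-numbers point.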

Catt has analysed the capacitance of a fundamental particle with respect to the surrounding universe, see http://nige.files.wordpress.com/2008/04/http___wwwivorcattcom_1_3.pdf

Basically, you consider a capacitor made of two concentric spherical shells (each being a capacitor plate, with vacuum dielectric between them) with radii A (inner shell) and A + B (outer shell):

Capacitance, C = 4*Pi*[permittivity of free space]*A*(A + B)/B.

Where the outer charge shell is at large distance compared to the inner charge shell, as is the case when dealing with an isolated nucleus, with a shell of electron charges at distances many times the radius of the nucleus, A + B ~ B, so:

C ~ 4*Pi*[permittivity of free space]*A.

So we can calculate the capacitance of the nucleus of radius A. We only now need to estimate the resistance R against the alpha or beta particle being emitted, in order to perfectly represent radioactive decay as an electrical discharge of a charged capacitor plate-like nucleus.

The product RC in the capacitor discharge formula is identical to the mean life of a radioactive atom, which for exponential decay is always bigger than the statistical half-life by a factor 1/ln2 = 1.44. Hence the radioactive half-life is predicted to be RC*ln2 = 0.693*RC.
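Plugging numbers into the formulas above gives a sense of scale. This sketch uses a rough uranium nuclear radius of about 7.4 fm (an assumed value, for illustration only); the effective resistance R is left as a free parameter since the comment only outlines how to estimate it:

```python
import math

eps0 = 8.854e-12   # F/m, permittivity of free space
A = 7.4e-15        # m, rough uranium nuclear radius (assumed for illustration)

# C ~ 4*Pi*eps0*A for an isolated nucleus (outer "plate" far away)
C = 4 * math.pi * eps0 * A
print(C)  # about 8.2e-25 farads

def half_life(R):
    """Half-life in the capacitor analogy: RC * ln 2, for an effective R."""
    return R * C * math.log(2)
```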

Estimating the electrical resistance, R, to emission that a charged particle in the nucleus experiences is an interesting problem. Using Ohm’s law:

R = V/I.

For alpha decay, pion exchange is the strong attractive nuclear force which is acting against radioactive decay, while Coulomb repulsion is acting in favour of radioactive decay. Nucleons tend to have quantum numbers in the nucleus and bunch up into stable configurations like alpha particles in the outermost (most weakly bound) nuclear shells, in the nuclear shell model which is somewhat similar to the model of electron shells.

The Coulomb electric force field acts to accelerate the alpha particle away from the radioactive nucleus, while the strong nuclear force mediated by pions is effectively the source of electrical resistance, by hindering the motion of the alpha particle away from the nucleus.

This can be worked out by calculating the repulsive Coulomb force and attractive strong force operating on the alpha particle in the outer shell of the nucleus as it moves outward; the velocity the alpha particle gains will be equivalent to the drift velocity a charge in a circuit gains as a result of the balance between acceleration from the electric field and deceleration due to drag effects like collisions.

Since R = V/I, the calculation is fairly easy.

“The difference in voltage measured when moving from point A to point B is equal to the work which would have to be done, per unit charge, against the electric field to move the charge from A to B.” – http://hyperphysics.phy-astr.gsu.edu/hbase/electric/elevol.html (see also http://hyperphysics.phy-astr.gsu.edu/hbase/electric/ev.html#c2 )

So all we have to do is work out the voltage (from the electrical work done as an alpha particle is repelled away from the nucleus) and the effective electric current due to that moving alpha particle. This should allow the half-life to be calculated without the usual quantum mechanical Gamow obfuscation of quantum tunnelling. I think that mechanically, quantum tunnelling does make some sense: it works because the force fields aren't smooth, continuously operating forces. Instead, they are quantum fields of virtual particles acting at random intervals. On large scales they mimic classical approximations, but one difference is that they can't always trap an alpha particle in the nucleus. The rate of exchange of pions and electromagnetic gauge bosons is irregular and random, like the irregular clicking of a geiger counter. Sometimes there are random intervals with very few interactions, when an alpha particle has a chance to escape from the nucleus.

2. dorigo

Hi folks, not really able to make a dent in this interesting discussion, but I enjoyed reading about it.
Cheers,
T.

3. Dave

Really interesting stuff. So can the Quantum Zeno Effect be used to stave off the decay of, say, a muon? This would have very interesting applications when it comes to muon catalyzed fusion.
Could you possibly extend the lifetime of a muon in the ion's frame and possibly have a gain of greater than one? What constitutes a 'measurement' that says: yep, it's still a muon and not an electron yet? Could its inertia when accelerated in an electric field distinguish between a muon and an electron?
Any ideas?
Dave