Given a 3-vector of complex numbers, (A,B,C), define its discrete Fourier transform (a,b,c) as

a = A + B + C,
b = A + w B + w^2 C,
c = A + w^2 B + w C,

where w = exp(2 pi i/3). That is, I’ll use lower case letters to denote the discrete Fourier transforms of UPPER case letters. The above leaves off a factor of 1/sqrt(3), but it will do.
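The definition is short enough to sketch in a few lines of Python. (The symbols dropped from the post are presumably the cube root of unity and the factor 1/sqrt(3), so those are assumptions here.)

```python
import cmath

# w is the cube root of unity; the symbol was dropped from the post, so this
# is the standard guess w = exp(2*pi*i/3).
w = cmath.exp(2j * cmath.pi / 3)

def dft3(A, B, C):
    """Discrete Fourier transform of a 3-vector, leaving off the 1/sqrt(3)."""
    return (A + B + C,
            A + w * B + w**2 * C,
            A + w**2 * B + w * C)

# A vector satisfying A + B + C = 0 transforms to one whose first
# component vanishes.
a, b, c = dft3(1, 1, -2)
```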

Of interest today will be vectors (A,B,C) which happen to satisfy A+B+C = 0. These are eigenvectors of the democratic matrix

((D,D,D),(D,D,D),(D,D,D)),

that is, the matrix all of whose entries are equal to the complex number D. Of course their eigenvalues are zero. None of this is particularly interesting until we move from linearity to bilinearity and work with the discrete Fourier transforms of 3×3 matrices.

Define the Fourier transform of a 3×3 matrix U as u = F U F*, where F is the matrix

F = (1/sqrt(3)) ((1,1,1),(1,w,w^2),(1,w^2,w))

with w = exp(2 pi i/3) as before. With this definition, the discrete Fourier transform of the democratic matrix D is

((3D,0,0),(0,0,0),(0,0,0)).

This is a nice simplification.
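The simplification is quick to check numerically. A sketch, assuming the convention u = F U F* with F* the conjugate transpose (the post's equation images are missing, so the convention is my guess):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)
s3 = 3 ** 0.5
# DFT matrix F with entries w^(j*k)/sqrt(3).
F = [[w**(j * k) / s3 for k in range(3)] for j in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dagger(X):
    return [[X[j][i].conjugate() for j in range(3)] for i in range(3)]

D = 2 - 1j                                   # any complex number
demo = [[D, D, D], [D, D, D], [D, D, D]]     # the democratic matrix
d = matmul(matmul(F, demo), dagger(F))       # its discrete Fourier transform
# d has 3*D in the top-left corner and zeros everywhere else.
```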

Now let A+B+C = 0 and compute the discrete Fourier transforms of four kinds of matrices: 1-circulant, 2-circulant, and two new types I will call “bra” and “ket” for obvious reasons. The transforms fit together like the pieces of a jigsaw puzzle: the transform of the democratic matrix occupies the (1,1) corner, the 1-circulants fill out the rest of the diagonal, the 2-circulants take the remaining (2,3) and (3,2) entries, the bras the rest of the first row, and the kets the rest of the first column.

The discrete Fourier transform for 3×3 matrices is 1-1 and onto, so the above gives a proof that we can always write a 3×3 matrix uniquely as the sum of a democratic, a 1-circulant, a 2-circulant, a bra, and a ket matrix, each of which, other than the democratic, is subject to the constraint A+B+C = 0. This decomposes a 3×3 matrix into five parts, with the democratic part contributing one complex degree of freedom and each of the others two, for a total of nine complex degrees of freedom.
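The decomposition can be carried out explicitly: transform, mask entries, transform back. This is a sketch under two assumptions of mine (since the post's figures are missing): the convention u = F U F*, and the 0-indexed positions each piece occupies in the transform, reconstructed from how conjugation by F acts on each matrix type.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)
s3 = 3 ** 0.5
F = [[w**(j * k) / s3 for k in range(3)] for j in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dagger(X):
    return [[X[j][i].conjugate() for j in range(3)] for i in range(3)]

# Transform positions occupied by each piece (0-indexed); this assignment
# is an assumption reconstructed from how F acts on each type.
masks = {
    "democratic":  [(0, 0)],
    "1-circulant": [(1, 1), (2, 2)],
    "2-circulant": [(1, 2), (2, 1)],
    "bra":         [(0, 1), (0, 2)],
    "ket":         [(1, 0), (2, 0)],
}

def decompose(M):
    """Split M into its five pieces via transform -> mask -> inverse."""
    m = matmul(matmul(F, M), dagger(F))
    pieces = {}
    for name, spots in masks.items():
        P = [[0] * 3 for _ in range(3)]
        for (i, j) in spots:
            P[i][j] = m[i][j]
        pieces[name] = matmul(matmul(dagger(F), P), F)
    return pieces

M = [[1, 2, 3j], [4, 5j, 6], [7j, 8, 9]]
pieces = decompose(M)
```

Under these assumptions the five inverse-transformed pieces reproduce M exactly, the democratic piece has all entries equal, and the 1-circulant piece has cyclically shifted rows.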

**Unitary Matrices**

Some time ago, I gave what seems to be a complete parameterization of the magic unitary 3×3 matrices. It uses 4 real parameters. We can add five more parameters by multiplying rows and columns by arbitrary complex phases. This gives a total of nine parameters, just what is needed for an arbitrary unitary matrix. So I’m wondering how to prove whether this parameterization is complete.

Of the five components of a 3×3 matrix given above, three are magic, that is, their rows and columns all sum to the same value. The bra and ket forms have rows or columns that sum to the same value, but not both. Consequently, any magic matrix can be written uniquely as the sum of a democratic, a 1-circulant, and a 2-circulant, and this must also apply to the magic unitary matrices. Because the decomposition is 1-1 and onto, we know that the magic matrices have 1+2+2 = 5 complex degrees of freedom.

The 1-circulant, 2-circulant, and democratic matrices are closed under multiplication and addition, so the magic matrices form a subalgebra of the algebra of 3×3 matrices. The unitary matrices are closed under multiplication (they form a group rather than an algebra), and so are the 3×3 magic unitary matrices. Now, the 4-parameter parameterization I gave for them carried the additional restriction that the common sum of the rows and columns be equal to 1. We can generalize with a 5th parameter that multiplies the whole matrix by a complex phase, making the sum of each row and column an arbitrary complex phase. These 5 parameters define the magic unitary matrices completely.
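The closure claims are easy to spot-check numerically. A minimal sketch, with one 1-circulant and one 2-circulant that are magic by construction (the particular entries are arbitrary choices of mine):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def is_magic(M, tol=1e-9):
    """True when all six row and column sums agree."""
    sums = [sum(r) for r in M] + [sum(M[i][j] for i in range(3)) for j in range(3)]
    return all(abs(s - sums[0]) < tol for s in sums)

M1 = [[1, 2, 3], [3, 1, 2], [2, 3, 1]]        # 1-circulant, magic sum 6
M2 = [[1j, 2, -1], [2, -1, 1j], [-1, 1j, 2]]  # 2-circulant, magic sum 1+1j
P = matmul(M1, M2)                            # the product is again magic
S = [[M1[i][j] + M2[i][j] for j in range(3)] for i in range(3)]  # so is the sum
```

The magic sum multiplies: the product P has magic sum 6*(1+1j), the product of the two magic sums.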

This is kind of funny. Magic 3×3 matrices have five complex degrees of freedom, while magic 3×3 unitary matrices have five real degrees of freedom. Could there be a relationship between them?

All this discussion has to do with the democratic, 1-circulant, and 2-circulant portions of the matrices. What about the bra and ket portions?

The method I’ve been using to get a general unitary matrix into magic form is to multiply the rows and columns by arbitrary complex constants. Ignoring the overall complex phase, we get to pick two rows and two columns for this treatment:

Comparing with the table of discrete Fourier transforms, the matrices of complex phases in the above equation can be seen to be the discrete Fourier transforms of 1-circulant matrices, that is, they are of the form

where, as before, the complex numbers A,B, and C are related by A+B+C = 0. This gives two complex degrees of freedom or four real degrees of freedom, again just twice what we need to arrange for multiplication by two complex phases.

So I think I’m getting fairly close to cobbling together a proof that the new parameterization does cover all the unitary matrices, when it is extended by multiplying columns and rows by complex phases.

The funny thing about all this is that it has the feeling of complete triviality, but I’ve never seen the unitary matrices parameterized so nicely.

I think the thing is, people like group theory so much that they cannot think of 3×3 unitary matrices without thinking of U(3) and all the standard constructions thereon. From this point of view, the Fourier transform etc. is more fundamental than the idea of a continuous group, and so these structures appear more naturally.

A childish problem you gave me (together with your wrong answer):

Lubos, prove this:

A matrix is “magic” if every row and column sums to the same value. 3×3 complex magic matrices are closed under addition and multiplication. Every unitary 3×3 matrix can be obtained from a unique magic unitary matrix by multiplying rows and columns by arbitrary complex phases.

LM:

Dear Carl, first, you are surely using a non-standard (if not “wrong”) definition of a magic square: the sums over the two diagonals have to be equal to the same value, too. Because all these conditions are linear (without a right hand side, in your case, but that’s another non-standard thing), it is obvious that these matrices form a linear space – your first (double) statement.

The rest is a straightforward children’s game, too. I surely think that every talented child should be able to compute it at the age of 10.

I will assume the conditions for the 2 diagonals because I am sure that you ignored them, too – and the system would be far too overdetermined (and the proof of showing almost no solutions trivial). With your choices, the first row is (a,b,c), the second is (d,e,a+b+c-d-e), the third is (b+c-d,a+c-e,-c+d+e). The orthogonality of the 1st and 2nd rows gives me “e” in terms of the others. Now the whole matrix is in terms of a,b,c,d only. In this form, the inner product of the 1st and 3rd rows is actually d-independent: 2ab+2bc+2ac. This must vanish, too. That’s a=-bc/(b+c).

In terms of b,c,d, the inner product of 2nd and 3rd row is -2(b^2+bc+c^2)^2.(b-d).(c-d)/(b^2-c^2)^2. That’s amazingly factorized and must vanish. There are three factors that can vanish. The first, squared one, only vanishes if b=c=0, but if it does, the denominator over-cancels it, anyway. So it doesn’t work. Instead, we must have b=d or c=d. It turns out that all the other inner products – and equality conditions for squared rows/and/columns – are proportional to (b-d)(c-d), too.

Without a loss of generality, I can assume b=d. The whole matrix is then written in terms of b,c. Normalizing it, I actually get a one-parameter family of real solutions as a function of b, for example. There are four solutions to the simple quartic equation for b. All of them are circulants of a sort. 1/4 of the real solutions are

((u,b,v),(b,v,u),(v,u,b)) where v=-(1+b+sqrt(1-2b-3b^2))/2, while u=-2v/(1-b+sqrt(1-2b-3b^2)). Easy to check. At any rate, your statement is wrong: the solution is not unique, it has an adjustable real parameter. Adding the conditions for the diagonals, 2v would have to be equal to u+b. That’s overdetermined and has no solutions. At any rate, your statement was wrong.

Let me give you some examples because they’re easier for you than mathematics. Math class is hard, Barbie.

The unitary magic matrix can be written as ((a,b,c),(b,c,a),(c,a,b)). To show you numbers that are different than your “unique” ones, let me start with my birthday. I can insert, for example, the following three numbers to the first row:

0.19731205, -0.16386129, 0.96654924,

-0.16386129, 0.96654924, 0.19731205,

0.96654924, 0.19731205, -0.16386129.

The other rows were obtained by rotations of the first one. All your conditions are satisfied and the first number is my birthday, year, month, day. This was a 10-minute exercise. It’s just amazing that you and Kea spend your two lives with this problem and your (“unique” WTF) solution is still completely wrong after those lives. Why didn’t you simply ask a mathematician?
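The example can be checked in a few lines (my check, not from the thread): the circulant built from these three numbers has unit-length rows, orthogonal rows, and every row and column summing to exactly 1.

```python
# Lubos's first row; the other rows are its cyclic rotations.
r = [0.19731205, -0.16386129, 0.96654924]
M = [r, r[1:] + r[:1], r[2:] + r[:2]]

row_sum = sum(r)                                  # magic sum: exactly 1
norm = sum(x * x for x in r)                      # row length squared: ~1
dot = sum(M[0][i] * M[1][i] for i in range(3))    # row orthogonality: ~0
```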

Luboš, thanks for taking an interest in this. Of course the applications are CKM / MNS matrices. In the literature, “magic” is defined the way I’m using it. For example, see Magic Neutrino Mass Matrix and the Bjorken-Harrison-Scott Parameterization, C. S. Lam, Phys.Lett. B640 (2006) 260-262

Your calculation is correct for real magic unitary matrices, but in physics, “unitary” means complex matrices. I would have thought you would know this, LOL.

Keep grinding!

Dear Carl,

I hope I have correctly understood your definition of a magic unitary matrix. Rows are obtained by cyclic permutations of the components of a fixed vector.

In the real case you get a 2-D subspace of orthogonal matrices. The a+b+c = C condition, with c fixed by the condition that the determinant equals one, means that the allowed matrices define a 2-D surface in 3-D space.

Then you multiply the rows and columns by phases. That is, you multiply by a diagonal unitary matrix from the right or left. This gives a unitary matrix. You get a 2+3+3 dimensional space for real numbers a,b,c.

The determinant of the resulting matrix is the product of the determinants of the two unitary matrices and that of the original real matrix, and is therefore a complex phase. You want however SU(3) matrices, and when you fix one of the phases you get a 7-D space, call it X. This is not SU(3).

I hope that this helps and that I have not completely misunderstood your problem;-).

Matti

You want to show that a 3×3 unitary matrix can be transformed to a magic matrix by multiplying rows and columns by phases.

I can prove it but non-constructively (if I have understood the problem correctly).

A magic matrix by your definition is one whose columns and rows sum to the same constant L.

I.e. multiply the elements of the ith row by a phase factor a_i and the jth column by a phase factor b_j. Make a column vector a with components a_i and a row vector b with components b_j. The requirement can be written

Ua = L b*

bU = L a*

The * operator is transpose and complex conjugate.

L must be a pure phase because U is unitary and |a| = |b| = sqrt(3). We can take L=1 because the magic matrix is undetermined up to an overall phase. You can also see that these two equations are equivalent because U is unitary, so we only need the first one. I.e. we want

Ua = c where the components of a and c are pure phases.

This is three equations but any two imply the other because |c|^2 = |a|^2 by unitarity.

There are also only two independent unknowns because multiplying the three phases a_i by an overall phase changes c by a phase. So set one of the components (say a_3) to 1.

Then the first equation expands to

x a_1 + y a_2 + z = c_1

where x, y and z are the top row of the matrix and |x|^2 + |y|^2 + |z|^2 = 1.

For the next step think graphically. The locus of values of the left hand side is an annulus centred on z with inner radius ||y| – |x|| and outer radius |x| + |y|. The solutions are found where this annulus intersects the circle of unit radius centred on the origin. Since |z| = 1 there is a non-empty set of solutions which break into either one or two connected parts depending on whether ||y| – |x|| + |z| > 1 or not.

The space of values of a_1 and a_2 is a torus, and from the graphical analysis you can see that the solutions form a curve or two curves that wrap once around the torus in one of the two directions. The direction depends on which of |x| or |y| is larger.

Now look at the second equation in the same way

u a_1 + v a_2 + w = c_2

Suppose for example that |x| > |y| and |u| < |v|, then the solution of each equation would wrap round the torus in different orthogonal directions, so they must intersect giving a simultaneous solution for both equations and therefore for the whole problem. If these inequalities (or their reverse) are not met for the first and second equation we could get a solution by looking at the first and third or second and third together.

The only way to have no solutions would be if all magnitudes of components of the second column of the unitary matrix are greater than (or less than) all the corresponding magnitudes of the first column, but this cannot be because the columns are unit vectors.

Do you also need the solutions to be unique up to a phase? It looks like there might be at least two solutions in some cases but that needs checking.

Phil, I think that’s exactly what I’ve been looking for.

I’m writing a paper to be titled “Quarks, Leptons, and the Discrete Fourier Transform”. How can I credit you for the proof?

The reason this is of interest is that the MNS matrix is, to experimental error, in so-called tribimaximal form. When you rewrite the tribimaximal matrix in magic form, it becomes very simple. See doubly magic matrices and the MNS.

The general parameterization of magic unitary 3×3 matrices uses 6 intermediate values I, J, K, R, G, and B. See A New Parameterization for 3×3 unitary matrices. You just provided the proof that the parameterization is complete for unitary as well as magic unitary matrices. I suspected it was true from counting parameters and computer calculations but didn’t have a proof.

And then when you use the same I, J, K, R, G, and B to solve the 3×3 complex matrix idempotency equation UU = U (rather than the magic unitarity problem), you find that the weak quantum numbers of the quarks and leptons are given by the Hermitian solutions. See section 5 of this paper for the algebra.

Dear Carl,

I assumed that you were speaking about real magic matrices.

If you allow the components of the magic unitary matrix to be complex you have a 3-dimensional space of magic matrices: six real parameters subject to 3 conditions from unitarity means 3 real parameters (the length of a row equal to 1 gives one condition, and the orthogonality with the other rows gives 2 conditions for magic matrices).

What I did not realize in the previous argument is that you have only 3+2 phases, since multiplications from the right and left by a phase multiple of the unit matrix are equivalent by their commutativity with U(3). You get 8 parameters. This is not enough for U(3).

In the case of SU(3) you must eliminate one phase since the determinant must be equal to one. This means that you still have 7 parameters and it does not seem to work.

If this worked, the space of magic unitary matrices could be identified as the coset space SU(3)/U(1)^2_RxU(1)^2_L (R and L refer to multiplication from the right and left) by selecting a magic matrix as a representative of each coset. This would require that the magic unitary matrices form a 4-D rather than 3-D space.

So my opinion is that this does not work but I might be wrong.

Best Regards,

Matti

so that a transformation to a magic matrix might well be possible.

If your hypothesis is correct, the 3-D coset space of magic matrices can be identified with the coset space U(3)/U(1)^3_LxU(1)^3_R in the sense that one can always find in a given coset a representative which is a magic matrix.

Matti, the magic unitary matrices have 5 real parameters. There is a 4-parameterization at my blog post A New Parameterization. The 5th parameter is an overall complex phase.

To get from this 5-parameterization to the full 9-parameterization of 3×3 unitary matrices, multiply two rows and two columns by complex phases. According to the proof from PhilG, this gives a complete parameterization of the 3×3 unitary matrices using the magic matrices as the foundation.

I think it’s far more elegant than the usual parameterizations of the 3×3 unitary matrices seen in elementary particle theory, which are quite arbitrary and are arranged to make comparison with experiment easy (rather than to make them theoretically elegant). But the real advantage is in the MNS matrix, which becomes quite trivial in this parameterization.

Carl, I’m pleased the proof is useful to you.

The solution for transforming from unitary matrices to magic unitary is not always unique even when you fix the sums in the magic matrix to be one. To see this, just consider the real case, i.e. real orthogonal matrices. There will be special cases where the transformation phases are +1 or -1 but in most cases complex phases will be required. Then a second distinct solution is given by taking the complex conjugate of the phases.

For use in your paper a small acknowledgement to Philip Gibbs would be more than enough. The wording of the proof is a bit hand wavy with insufficient care taken over extremal cases, and it could be improved. It should be good enough for physicists. If the referees get picky we can work on it.

By the way, the method of the proof can be applied to larger unitary matrices. The number of degrees of freedom still matches. The topological part of the argument gets a bit more hair-raising, and topology is not my strong point, but it looks good enough for me to at least conjecture that the result holds for any size of unitary matrix.

Good luck with your research.

Dear Carl,

with my meaning for the word “magic” (rows are related to each other by cyclic permutations) I end up with 3 parameters in the case of SU(3) by checking the orthonormality conditions.

a) a,b,c gives 3 complex parameters since the rows are obtained by cyclic permutations of a,b,c.

b) Then you pose |a|^2+|b|^2+|c|^2=1, eliminating one parameter, and then the orthogonality of the first and second rows gives two conditions (real and imaginary part). The other orthogonality conditions reduce to the same condition, so that you have 3 conditions and 3 real parameters.

c) There are 5 phases since right and left multiplication with a phase multiple of the unit matrix commute with the U(3) matrix. One of the phases must be eliminated since you require det=1. This gives 7 parameters.

%%%%%%%%%%%%%%%%%%

You can do the same for SU(2). You get 4-3=1 parameter for magic matrices, which are just the matrices representing rotation around the z-axis in the spinor representation. You have 4-1-1=2 phases available, so that all SU(2) matrices should be reducible to magic ones, and this is indeed the case since you can take a matrix representing a rotation around the z-axis and multiply the rows and columns with phases given by the Euler angles.

%%%%%%%%%%%%%%%%%%%%%%%

For SU(4) you would get 4*2 real parameters (a,b,c,d). The unit vector property gives one condition, and by writing the orthogonality conditions explicitly one finds that they reduce to orthogonality conditions for two inner products, giving 4 conditions: 5 conditions altogether, so that the magic matrices form an 8-5=3-D space. There are 4+4-1-1=6 phases available, so that you get a 9-D space of matrices reducible to magic ones, whereas SU(4) is 15-dimensional.

%%%%%%%%%%%%%%%

The general case can be also understood.

a) The orthogonality conditions are of form

Sum_i a_i a_(i+k)^* = 0, for k = 1, ..., N-1.

b) For k and N-k the conditions are equivalent since there is just a cyclic permutation in the sum. This gives for N=3 just one condition. For N=4 and N=5 2 conditions. For N=6 and N=7 3 conditions and so on.

c) For N=2*n and N=2*n+1 one obtains 2*n+1 conditions from normalization and orthogonality. The dimension of the magic subspace is therefore

d = 4*n - 2*n-1 = 2*n-1 for N=2*n

and

d = 2*n+1 for N=2*n+1

(d=1 for SU(2) and d=3 for SU(3), for instance).

d) The space of the unitary matrices reducible to magic ones has dimension

d = 2*n-1 + 4*n-2 = 6*n-3 for N=2*n

and

d = 2*n+1 + 4*n = 6*n+1 for N=2*n+1

(d=3 for SU(2) and d=7 for SU(3)).

Best Regards,

Matti


Carl, bad news, there is a flaw in my proof. In the case where ||y| – |x|| + |z| < 1 the solution curve does not wrap round the torus as I thought. (I told you I was no good at topology.) That doesn’t mean that there are no solutions, it just requires more work to show they exist. It might be possible to complete the proof by considering different cases with different inequalities in the components until the whole space of unitary matrices is covered. Perhaps there is a tidier solution.

An interesting case is the unitary matrix used in the discrete Fourier transform. All the matrix components have the same size, 1/sqrt(3), so you get ||y| – |x|| + |z| = 1/sqrt(3) < 1 for every row and column. By trying different phases I found six distinct solutions for this matrix!
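The count of six can be corroborated by brute force. A sketch, assuming the standard FT matrix with entries w^(jk)/sqrt(3): restricting the search to phases that are cube roots of unity (with the first phase fixed to 1 by overall phase freedom) already turns up exactly six vectors a for which Fa is again a vector of phases.

```python
import cmath
from itertools import product

w = cmath.exp(2j * cmath.pi / 3)
s3 = 3 ** 0.5
F = [[w**(j * k) / s3 for k in range(3)] for j in range(3)]

def maps_to_phases(a, tol=1e-9):
    """True when every component of F*a has modulus 1."""
    return all(abs(abs(sum(F[i][k] * a[k] for k in range(3))) - 1) < tol
               for i in range(3))

# Fix a_1 = 1 and try cube-root-of-unity phases for the other two slots.
solutions = [(m, n) for m, n in product(range(3), repeat=2)
             if maps_to_phases([1, w**m, w**n])]
```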

Hi Carl,

still about the magic matrices. If only phase multiplication is allowed, then the subgroup SU(2) subset SU(3), represented as direct sums of a 1×1 unit matrix and a 2×2 matrix representing an element of SU(2), is certainly not transformable to magic matrices by mere phase multiplication.

The situation changes if you are talking about matrices transformable to magic matrices by a unitary automorphism A –> UAU^(-1) followed by phase multiplications.

As a matter of fact, rotations around the z-axis in the case of SU(2) are magic only in the sense that the first row is (a,b) and the second row is (-b,a). I have not checked whether the phase multiplication induced by right and left multiplication allows one to transform these matrices to strictly magic ones.

Matti

Phil, your observation about Uu = v for vectors of phases u and v is more than enough for me to put a little note in. I’ve been working on the physical interpretation of u and v quite busily since you wrote that.

All this stuff is based on the assumption that the leptons are composed of a wave function with three states (i.e. 3-d Hilbert space). A complete set of basis states would be r=(1,0,0), g=(0,1,0), and b=(0,0,1). The corresponding operators are R=diag(1,0,0), etc. Because it is a 3-d Hilbert space, it is natural to get three generations of particles.

Given a vector |a>, the expectation value of R is (a|R|a). For a state to be color neutral, it must have

(a|R|a)=(a|G|a)=(a|B|a).

This means, that in the color basis, the vector |a) has to be a vector of phases.

It’s not quite as simple as described above because our information about the generations is in the mass basis, not the color basis and perhaps not the left-handed state basis (i.e. only the left handed states participate in the weak interaction), but this gives the general idea of why vectors of phases are important. And of course the Fourier transform is a matrix of orthonormal vectors of phases.

Writing “>” doesn’t work in comments here.

Matti,

You might be interested to know that the magic unitary 2×2 matrices can be elegantly parameterized as follows. First, note that a unitary matrix can be converted into a matrix of probabilities by taking squared magnitudes of the elements. The most general such matrix of probabilities is:

[p, q]

[q, p]

where p+q=1.

Then a magic unitary matrix that gives the above probabilities is

[p+i sqrt(pq), q-i sqrt(pq)]

[q-i sqrt(pq), p+i sqrt(pq)].
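A quick numerical check of this parameterization (my sketch; unitarity and the magic sums follow from p + q = 1):

```python
import math

def magic_unitary_2x2(p):
    """2x2 magic unitary whose squared magnitudes give [[p,q],[q,p]], q = 1-p."""
    q = 1 - p
    s = math.sqrt(p * q)
    return [[p + 1j * s, q - 1j * s],
            [q - 1j * s, p + 1j * s]]

M = magic_unitary_2x2(0.3)
# Rows are orthonormal, every row and column sums to 1, and the
# squared magnitude of the top-left entry is p itself.
```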

Carl, I am still thinking about this problem from time to time. Here is one approach that looks promising.

We need to show that there are solutions to the equation Ua = c for any unitary matrix U when a and c are constrained to be vectors whose components have modulus one. Call the components of the matrix

u1,u2,u3|v1,v2,v3|w1,w2,w3

so we need to find phases a1,a2,a3 such that

|u1 a1 + u2 a2 + u3 a3| = 1

|v1 a1 + v2 a2 + v3 a3| = 1

The equation for the third row is implied by these two due to unitarity. Square these equations up and use the fact that

|u1 a1|^2 + |u2 a2|^2 + |u3 a3|^2 = 1

so we get

Re(u1 u2* x1 + u2 u3* x2 + u3 u1* x3) = 0

Re(v1 v2* x1 + v2 v3* x2 + v3 v1* x3) = 0

where x1, x2, x3 are of modulus one and x1 x2 x3 = 1

I can’t go much further than this in the general case yet, but the real case is doable (i.e. real orthogonal matrices). The equations then become

u1u2 y1 + u2u3 y2 + u3u1 y3 = 0

v1v2 y1 + v2v3 y2 + v3v1 y3 = 0

y1 = cos(t1)

y2 = cos(t2)

y3 = cos(t3)

t1 + t2 + t3 = 0

(all variables are now real)

The first two equations are linear and always have a non-zero solution in the vector (y1,y2,y3). It can be multiplied by a real factor to get other solutions (in certain singular cases there will be more solutions) so in the 3D real space the solutions are given by at least one straight line through the origin.

The remaining trigonometric equations define a surface in the same 3D space. This surface is shaped like a humbug centred on the origin (a humbug is a mint sweet in the shape of a bloated tetrahedron, which I hope is as familiar worldwide as it is in the UK). The key fact is that this surface totally encloses the origin, so any line through the origin will intersect it in two places. (I suppose this needs a formal proof.) In fact this gives four distinct solutions to the problem in most cases because cos(t) = cos(-t).

So for most real orthogonal matrices there are four transformations to a magic unitary matrix, which come in two pairs of complex conjugates.

In the general case you can try to picture the equations in 3 complex or 6 real dimensions. The linear part resolves to a four dimensional hyperplane. The surface defined by the phases is a two dimensional compact surface so it cannot fully enclose the origin in six dimensions like the humbug does in three. In fact there are 4d hyperplanes which do not intersect the phase surface so more work is required to show that the hyperplanes derived from the unitary matrix always do intersect it. I am not even totally convinced now that it is true. It might be worth searching for counterexamples.

I don’t know if you have already noted any connections between this magic matrix question and MUBs, but in case you have not, here is one for you. (Forgive my slowness if you already noticed this.)

Remember how I said that transforming the discrete Fourier transform matrix to a magic matrix is an interesting case and that it has at least six distinct solutions. I thought about it some more and realised why this arises from an MUB and how it generalises to larger matrices.

Consider an MUB in N dimensions where we choose the first basis matrix to be the identity matrix and the second to be the discrete FT matrix, then extend this to the largest possible collection of MUBs. In the case where N is a prime power we know that we can add N-1 more basis matrices to the collection, for N+1 in total.

The components of all these matrices are of course a complex phase factor times 1/sqrt(N). It follows from the MUB property that any column of the N-1 MUBs when multiplied by the FT matrix must also be a vector of complex phase factors times 1/sqrt(N). So each of these columns provides a solution for transforming the FT matrix to a magic matrix.

Furthermore all these solutions are distinct because the scalar product of any two different columns is zero if they come from the same MUB matrix, or it is a complex phase factor times 1/sqrt(N) if they come from different matrices. This shows that when N is a power of a prime there are at least N(N-1) distinct ways to transform the discrete FT matrix to a magic matrix. In the case N=3 this confirms the six solutions I found by hand.

Phil, the extra matrices are known from the papers of Combescure and given (for prime dimensions p) by the top row phase entries of omega^(-0.5k(k+1)), where k goes from 0 through p-1.

But nice observation of yours about the magicness!

I know how to complete the proof now.

Remember that there were different cases depending on whether ||y| – |x|| + |z| > 1 or not for the elements of a row in the matrix. If this inequality fails for all rows then my first attempt at proof is OK, so I have to complete the case where the inequality holds for at least one row.

In that case the unit circle cuts the annulus in one continuous connected curve.

I also showed that the problem is equivalent to the simultaneous solution of these two equations

Re(u1 u2* x1 + u2 u3* x2 + u3 u1* x3) = 0

Re(v1 v2* x1 + v2 v3* x2 + v3 v1* x3) = 0

with

x1 = exp(i t1)

x2 = exp(i t2)

x3 = exp(i t3)

t1 + t2 + t3 = 0

Without loss of generality I’ll take the first row u1,u2,u3 to be the one satisfying the inequality so the solution of the first equation is a continuous connected curve. I can also take u1,u2,u3 to be real and non-negative because I can use phases to transform them. (This step is not essential but it makes the situation clearer). The first equation is then

u1u2 cos(t1) + u2u3 cos(t2) + u3u1 cos(t3) = 0

Look for solutions where cos(t1) = 0,

i.e. t1 = pi/2 or t1 = 3 pi/2 and

u2 cos(t2) + u3 cos(t3) = 0

t1+t2+t3 = 0, so if t1 = pi/2 then cos(t3) = cos(pi/2 + t2) = -sin(t2) and we get

u2 cos(t2) – u3 sin(t2) = 0

This has two solutions t2 = T and t2 = T+pi

The other case t1 = 3 pi/2 gives

u2 cos(t2) + u3 sin(t2) = 0

with solutions t2 = -T and t2 = -T+pi

So altogether we now know four points on the solution curve for the first equation, which is a connected closed curve (if T=0 or T=pi there are only two distinct points, but that does not affect the argument).

Now substitute the four points into the expression in second equation

A(t1,t2,t3) = Re(v1 v2* exp(i t1) + v2 v3* exp(i t2) + v3 v1* exp(i t3))

If A(t1,t2,t3) = 0 for any of the points we are done. Otherwise consider the sum of A(t1,t2,t3) taken over the four points. Because the points have values of t1, t2 and t3 that fall into pairs differing by pi, the sum is zero. This means that the four real values of A(t1,t2,t3) must either be all zero, or at least one of them must be positive and at least one must be negative. But the four points lie on the solution curve for the first equation and this curve is continuously connected. By the intermediate value theorem it follows that there must be at least one point on the solution curve for which A(t1,t2,t3) = 0 and this gives us the simultaneous solution we require.
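The existence claim can be probed numerically. A minimal sketch (the matrix and angles are arbitrary choices of mine, not from the thread): fix a_3 = 1 and coarsely grid-search the other two phases for a sample real orthogonal (hence unitary) matrix; near a true solution the residual max_i ||(Ua)_i| - 1| dips toward zero.

```python
import cmath
import math

# Sample 3x3 real orthogonal matrix: a rotation about z followed by one
# about x (angles chosen arbitrarily for illustration).
ca, sa = math.cos(0.7), math.sin(0.7)
cb, sb = math.cos(1.1), math.sin(1.1)
Rz = [[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]]
Rx = [[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]]
U = [[sum(Rx[i][k] * Rz[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

def residual(t1, t2):
    """How far U*a is from a vector of phases, with a = (e^it1, e^it2, 1)."""
    a = [cmath.exp(1j * t1), cmath.exp(1j * t2), 1.0]
    return max(abs(abs(sum(U[i][k] * a[k] for k in range(3))) - 1.0)
               for i in range(3))

# Two-degree grid over the torus of (t1, t2).
step = 2 * math.pi / 180
best = min(residual(i * step, j * step)
           for i in range(180) for j in range(180))
# best should come out close to zero, consistent with a solution existing.
```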

I’ve written a less disjointed version of the proof which you can find at http://www.weburbia.com/magic.html. Feel free to use this in your papers if it is useful to you. You can edit it as you wish.

Carl,

You might look at http://arxiv.org/abs/0905.3936 which appeared this morning (5/26).