## A matrix problem

Suppose you are given n linearly independent vectors in n-dimensional Euclidean space. You move the vectors so that each vector becomes longer, but their inner products remain the same. What happens to the volume of the parallelepiped they span?

This month I am in Cambridge, at the Isaac Newton Institute, pretending to be a statistician. (The picture shows one of the striking John Robinson artworks outside the Institute.) Dennis Lin, from Penn State, asked me a very interesting question, which so far I can’t solve. In a post on team games, I discussed conference matrices. Let me repeat the facts here. A conference matrix is a square matrix C with entries 0 on the diagonal and ±1 elsewhere, which satisfies CC^T = (n−1)I. Conference matrices exist only for even order n; they are equivalent to symmetric (resp. skew-symmetric) matrices if n is congruent to 2 (resp. 0) mod 4.

If C is a skew-symmetric conference matrix, and H = C+I, then H is a Hadamard matrix, that is, it has all entries ±1 and satisfies HH^T = nI. Such a matrix is called a skew-Hadamard matrix (it is skew-symmetric apart from the +1s on the diagonal). Conversely, given a skew-Hadamard matrix H, the matrix C = H−I is a skew-symmetric conference matrix.
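These identities are easy to check numerically. A minimal sketch (the order-4 skew conference matrix below is an illustrative example of my own choosing; any skew conference matrix would do):

```python
import numpy as np

# A skew-symmetric conference matrix of order n = 4 (illustrative example).
C = np.array([
    [ 0,  1,  1,  1],
    [-1,  0, -1,  1],
    [-1,  1,  0, -1],
    [-1, -1,  1,  0],
])
n = C.shape[0]

assert np.array_equal(C.T, -C)                       # skew-symmetric
assert np.array_equal(C @ C.T, (n - 1) * np.eye(n))  # C C^T = (n-1) I

# H = C + I is then a skew-Hadamard matrix: H H^T = n I.
H = C + np.eye(n, dtype=int)
assert np.array_equal(H @ H.T, n * np.eye(n))
print(round(np.linalg.det(H)))  # n^(n/2) = 16
```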

Let us widen the enquiry to deal with the case where no conference or Hadamard matrix exists. The letters C and H suggest a fanciful terminology which I will use. We define a hot matrix to be a square matrix with entries ±1, with +1 on the diagonal, whose off-diagonal part is skew-symmetric; a cold matrix is a skew-symmetric matrix with 0 on the diagonal and all off-diagonal entries ±1. There is an obvious bijection between cold and hot matrices, given by H = C+I.

Now a theorem of Hadamard implies that a hot matrix has determinant at most n^(n/2), with equality if and only if it is a skew-Hadamard matrix. For consider the rows of such a matrix as vectors in n-dimensional space. Each vector has length n^(1/2), so the volume of the parallelepiped they span is at most n^(n/2), with equality if and only if the rows are pairwise orthogonal. Similarly, a cold matrix has determinant at most (n−1)^(n/2), with equality if and only if it is a skew conference matrix.
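The bound is easy to probe empirically. A minimal sketch (the helper `random_hot` is my own, not from the post):

```python
import numpy as np

def random_hot(n, rng):
    """Random hot matrix: +1 diagonal, ±1 skew-symmetric off-diagonal part."""
    H = np.eye(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            s = int(rng.choice([-1, 1]))
            H[i, j], H[j, i] = s, -s
    return H

rng = np.random.default_rng(0)
n = 6
bound = n ** (n / 2)  # Hadamard: |det| <= n^(n/2) = 216 for n = 6
for _ in range(200):
    assert abs(np.linalg.det(random_hot(n, rng))) <= bound + 1e-6
```

For n = 6 the bound 216 is never attained, since no skew-Hadamard matrix of that order exists; the true maximum (see below) is 160.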

Thus, if a skew-Hadamard matrix exists (as is conjectured to be the case for all orders divisible by 4), then it has the maximum determinant among all hot matrices, and the corresponding skew conference matrix has the maximum determinant among all cold matrices.

What happens if n is not a multiple of 4? This was Lin’s question.

If n is odd, then the determinant of a cold matrix is zero. (More generally, any skew-symmetric matrix of odd order has determinant zero.) So there is nothing to say in this case.

Dennis Lin asked whether it is true that, for n congruent to 2 (mod 4), if H is a hot matrix with maximum determinant, then H−I is a cold matrix with maximum determinant. Perhaps it goes the other way as well.

Computation shows that the assertion is correct for n = 6: the largest possible determinants for hot and cold matrices are 160 and 81 respectively, and the matrices representing the maxima correspond under the natural bijection. There are too many matrices for n = 10 for an exhaustive search; and in lieu of doing anything clever, I simply looked at a few million random matrices, and found that the matrices with maximum determinant which showed up did indeed correspond. (The best I found were 64000 for hot and 33489 for cold.)
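For n = 6 the exhaustive computation is easy to reproduce: a cold matrix is determined by one sign per unordered pair, giving 2^15 = 32768 cases. A sketch:

```python
import itertools
import numpy as np

n = 6
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]  # 15 free signs

best_hot, best_cold, cold_at_best = 0, 0, 0
for signs in itertools.product((1, -1), repeat=len(pairs)):
    C = np.zeros((n, n), dtype=int)
    for (i, j), s in zip(pairs, signs):
        C[i, j], C[j, i] = s, -s
    det_cold = round(np.linalg.det(C))
    det_hot = round(np.linalg.det(C + np.eye(n, dtype=int)))
    if det_hot > best_hot:
        best_hot, cold_at_best = det_hot, det_cold
    best_cold = max(best_cold, det_cold)

# Expected: 160 81, with cold_at_best also 81 if the maxima correspond
# as reported in the post.
print(best_hot, best_cold, cold_at_best)
```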

The connection with my introductory remarks simply comes from the observation that, if hot and cold matrices correspond under our bijection (that is, H = C+I), then the rows of H, regarded as vectors, are longer than those of C (length n^(1/2) as opposed to (n−1)^(1/2)), but the inner products of corresponding pairs of distinct rows in the two matrices are the same. However, I haven’t been able to use this observation to get anywhere.

Let me end on a cautionary note. The correspondence between hot and cold matrices does not induce a monotone map on their determinants. In other words, the orderings might agree at the top end, but definitely differ lower down.

### 7 Responses to A matrix problem

1. Will Orrick says:

Dear Peter,

Since Hadamard’s bound was improved by Ehlich and by Wojtas for matrices whose size is congruent to 2 (mod 4), I wanted to ask whether anything is known about the conditions under which hot matrices can attain the Ehlich-Wojtas bound. Your 6×6 matrix does attain the bound, but your 10×10 matrix does not. (And I was unable to do any better.) Interestingly, there is a 14×14 hot matrix that does attain the bound, but so far I haven’t found any hot matrices of larger size that do.

Ehlich found examples of matrices attaining his bound by restricting attention to matrices of the form {{A, B}, {-B^T, A^T}} with A and B circulant of size n/2. If one further requires that the matrix be hot, then A must be hot, and B must be symmetric. Indeed, the 6×6 and 14×14 matrices above are of this form. Denoting the row sums of A and B by a and b, the condition a^2+b^2=2n-2 must hold. For hot matrices, we have a=1, so 2n-2 must be of the form 1+b^2. The solutions for n=6 and n=14 correspond to b=3 and b=5. The next possible case where we might hope to find a hot matrix of Ehlich’s form is b=7, n=26, but this appears not to work. After that, the next case is b=9, n=42. It should be possible to check exhaustively whether there is a solution.

2. Will Orrick says:

I made a mistake in my previous comment: There’s no reason that B has to be symmetric. Dropping this restriction, one finds that there are two-circulant hot matrices of sizes 26 and 42 that attain the Ehlich-Wojtas bound.

I’m curious about whether it is possible to construct hot matrices that attain the Ehlich-Wojtas bound in the case when 2n-3 is not a perfect square. It’s much more difficult to do an exhaustive search when one cannot impose the two-circulant condition. So far, nothing…

3. Will Orrick says:

OK, so I now realize that the condition that 2n-3 be a perfect square is generally necessary in order for a hot matrix to attain the Ehlich-Wojtas bound. Let H be a hot matrix attaining the bound. By suitable permutations and negations of rows, H may be transformed into a matrix such that the Gram matrix, HH^T, has the standard optimal form, (n-2)I+2 I_2\otimes J_{n/2}. Here I_2 is the 2×2 identity, J_{n/2} is the (n/2) x (n/2) all ones matrix, and \otimes represents the Kronecker product. Performing the same permutations and negations on columns that we performed on rows preserves hotness, and therefore H^T H has the same form.

Writing H = {{A, B}, {C, D}}, where A, B, C, D are (n/2) x (n/2) matrices, Ehlich showed that the row/column sums of the four submatrices, which we denote a, b, c, d, satisfy a = d, b = -c, a^2 + b^2 = 2n-2. Since A must be hot, we have a=1 as before, and therefore b^2 = 2n-3.
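The admissible orders are then easy to enumerate: n = (b^2 + 3)/2 for odd b, and since b^2 ≡ 1 (mod 8) for odd b, every such n is automatically congruent to 2 (mod 4). A quick check:

```python
# Orders n with 2n - 3 a perfect square: n = (b^2 + 3) / 2 for odd b.
orders = [(b, (b * b + 3) // 2) for b in range(1, 12, 2)]
assert all(n % 4 == 2 for _, n in orders)
print(orders)  # [(1, 2), (3, 6), (5, 14), (7, 26), (9, 42), (11, 62)]
```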

• Peter Cameron says:

Will, thanks for these comments. In order to calculate the determinant of the corresponding cold matrix, one needs the eigenvalues (which of course can be calculated in the case of your circulant construction).

• Will Orrick says:

Peter,

Let C = -C^T and let H = I+C, so that H^T = I-C. Let G denote the Gram matrix of H. We have G = HH^T = H^T H = I - C^2. If v is an eigenvector of C, then it is also an eigenvector of H, of H^T, and of G. The eigenvalues of C are imaginary; when n is even, they occur in conjugate pairs ir_j, -ir_j, where the r_j are nonnegative real numbers and j ranges from 1 to n/2. The corresponding eigenvalues of G are duplicates of each other, [1+(r_j)^2], [1+(r_j)^2]. Hence the eigenvalues of C and H are determined by the eigenvalues of G.

Therefore, knowledge of the eigenvalues of G allows us to compute the determinants of C and H without imposing any additional assumptions on H such as Ehlich’s two-circulant structure. We get

det G = \prod [1+(r_j)^2]^2
det H = \prod [1+(r_j)^2]
det C = \prod (r_j)^2

where j ranges from 1 to n/2 in each product.
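These identities can be spot-checked numerically; a sketch (the random cold matrix is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# A random cold matrix C and its hot partner H = I + C.
C = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        s = int(rng.choice([-1, 1]))
        C[i, j], C[j, i] = s, -s
H = np.eye(n, dtype=int) + C
G = H @ H.T

# G = H H^T = (I + C)(I - C) = I - C^2.
assert np.array_equal(G, np.eye(n, dtype=int) - C @ C)

# Eigenvalues of C are +-i r_j; keep the nonnegative imaginary parts.
r = np.sort(np.linalg.eigvals(C).imag)[n // 2:]

assert np.isclose(np.linalg.det(G), np.prod((1 + r**2) ** 2))
assert np.isclose(np.linalg.det(H), np.prod(1 + r**2))
assert np.isclose(np.linalg.det(C), np.prod(r**2))
```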

When G is the Ehlich-Wojtas optimal form,

G = (n-2)I+2 I_2\otimes J_{n/2},

the eigenvalues of G are 2n-2, which occurs twice, and n-2, which occurs n-2 times. Therefore

det H = (2n-2) (n-2)^((n-2)/2),
det C = (2n-3) (n-3)^((n-2)/2).
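Plugging in small orders reproduces the numbers quoted earlier in the post (160 and 81 for n = 6, where the Ehlich-Wojtas bound is attained); a quick arithmetic check:

```python
# det H = (2n-2)(n-2)^((n-2)/2) and det C = (2n-3)(n-3)^((n-2)/2),
# valid when the Gram matrix has the Ehlich-Wojtas optimal form.
def det_hot_ew(n):
    return (2 * n - 2) * (n - 2) ** ((n - 2) // 2)

def det_cold_ew(n):
    return (2 * n - 3) * (n - 3) ** ((n - 2) // 2)

print(det_hot_ew(6), det_cold_ew(6))    # 160 81
print(det_hot_ew(14), det_cold_ew(14))  # 26 * 12^6 and 25 * 11^6
```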

4. Peter Cameron says:

This is why I think the conjecture (and more) may be true. If you look at the formulas for det(C) and det(H), it is clear that the second is not a function of the first; but, the more closely bunched the eigenvalues of C are, the less is the possible variability in det(H). So the spread of possible values of det(H), given the value of det(C), will be rather small if the eigenvalues are close together, as they are near the optimum. I don’t know how to convert this vague insight into a proof, though.

• Will Orrick says:

I agree with this intuition. However, we don’t have a good handle on the size of the maximal determinant of general {-1, 1} matrices except in cases where a construction attaining the upper bound is known. My feeling is that there exists a constant c, independent of n, such that one can always find a determinant exceeding c n^(n/2), and that c is at least 0.5, but I know of no proof of this. I think it is fair to say that we are similarly ignorant about the situation for hot and cold matrices. Therefore it may turn out that maximal determinants will not always be sufficiently large for the intuition to apply.

There are, in fact, examples where the non-monotone phenomenon you mention occurs at an uncomfortably high point in the range of the determinant function. For example, consider the 10×10 hot matrices

+--+--+++-
++-++----+
+++-+-+++-
--+++---++
+---++--++
++++-+--++
-+-++++-+-
-+-+++++++
-+------++
+-+---+--+

and

+--+--+++-
++-++-+--+
+++---+-+-
--+++-----
+-+-++--++
++++-+--+-
---++++-+-
-+++++++++
-+-+----++
+-++-++--+

which have determinants 88×2^9 and 90×2^9 respectively. The corresponding cold matrices have determinants 147^2 and 143^2, that is, the order is reversed. The maximal determinant for hot matrices of size 10 appears to be 125×2^9 and the maximal determinant for cold matrices appears to be 183^2. Up to equivalence, there appear to be only a few matrices with determinants lying between those of the above examples, and the maximal determinants. Unless it can be shown that these examples differ in some fundamental way from maximal-determinant matrices, I would consider them to be near counterexamples.

On the other hand, I strongly suspect that in those cases where there exists a hot matrix attaining the Ehlich-Wojtas bound, the conjecture is true and probably not hard to prove. As you observe, the map from hot to cold does not affect the inner products of distinct rows, and therefore the Gram matrix of a hot matrix differs from that of the corresponding cold matrix only by the identity matrix. My guess is that the analysis that leads to the optimal Gram matrix for {-1, 1} matrices will require only minor modification for cold matrices, and that the optimum will turn out to be the expected one. The analysis I am thinking of is described in the following papers:

H. Ehlich, Determinantenabschätzungen für binäre Matrizen, Math. Z. 83 (1964) 123-132.
M. Wojtas, On Hadamard’s inequality for the determinants of order non-divisible by 4, Colloq. Math. 12 (1964) 73-83.
J. H. E. Cohn, On determinants with elements ±1, II, Bull. London Math. Soc. 21 (1989) 36-42.

If I ever get a spare moment, I will try to take a look at this.

Interestingly, the set of possible determinants of cold matrices of a given size becomes extremely small in comparison with the set of possible determinants of hot matrices as the size grows. The determinant of a cold matrix must be the square of a positive odd number (indeed, it is probably more natural to consider the Pfaffian rather than the determinant), while that of a hot matrix must be divisible by 2^(n-1). The Hadamard bound gives a comparable upper bound in the two cases. While the map from hot to cold determinants is not one-to-one in either direction, the many-to-one phenomenon is much more pronounced in the hot-to-cold direction than in the reverse direction.
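Both divisibility facts are easy to spot-check numerically; a sketch for n = 6 (random cold matrices, my own generator):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n = 6
for _ in range(50):
    C = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            s = int(rng.choice([-1, 1]))
            C[i, j], C[j, i] = s, -s
    det_cold = round(np.linalg.det(C))
    det_hot = round(np.linalg.det(C + np.eye(n, dtype=int)))

    pf = math.isqrt(det_cold)
    assert pf * pf == det_cold and pf % 2 == 1  # square of an odd Pfaffian
    assert det_hot % 2 ** (n - 1) == 0          # divisible by 2^(n-1) = 32
```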
