There are many different ways to regard a finite symmetric group as a metric space. These have been used for various purposes. Here I would like to say something about them.
One application of such metrics is in non-parametric statistics. Suppose that we observe two attributes of a set of observational units, but the precise numerical values are unimportant; only the relative sizes count. The order of the second attributes is some permutation of the order of the first (assuming there are no ties). If the attributes are highly correlated, the permutation will be close to the identity; if uncorrelated, it will be essentially random. Information about the distribution of values of the metric can tell us which alternative is likely to be the case. What is needed is the number of permutations at each possible distance from a given one (either exactly or approximately). For right-invariant metrics this number is independent of the permutation chosen.
Any of the metrics described here could potentially be used in this way, though only a few have been.
Also, any of these metrics could be the basis of a study of the symmetric group on coding theory lines, asking questions such as the maximum size of a subset or subgroup with given minimum distance, or the minimum size of a subset or subgroup with given covering radius. This has been extensively studied for the Hamming metric; I described some of the results in an earlier post. Little has been done for other metrics. Taoyang Wu and I showed that finding the minimum or maximum weight of a subgroup with given generators is, with a few slightly surprising exceptions, NP-hard; with Christoph Buchheim, we gave similar results for the distance from a given element to a subgroup.
For more information about these metrics, look at Persi Diaconis’ book on Group Representations in Probability and Statistics.
The most natural metrics are those which are invariant under left and/or right translation. If a metric d is invariant under right translation, then d(g,h) = d(1,hg⁻¹), so once we have defined the distances of all elements from the identity, the metric is completely specified. We can think of d(1,g) as a “norm” on G. If the metric is also invariant under left translation, then we have d(1,x) = d(g,xg) = d(1,g⁻¹xg), so the norm must be invariant under conjugation.
I will call a metric invariant under right translation a Cayley metric, and one invariant under both left and right translation a normal Cayley metric, adapting the terminology for Cayley graphs. But note that
- some people use “Cayley” to mean invariant under left, rather than right, translation;
- some people use “normal” in an entirely different sense, meaning that the group of translations is a normal subgroup of the full automorphism group of the Cayley graph or metric.
The commonest normal Cayley metric is the Hamming metric, where the distance from the identity to an element g is the number of points moved by g. This is by far the best-studied metric, but I will say least about it here; I discussed some of the combinatorics of Hamming distance on the symmetric group in an earlier post in this series.
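As a minimal sketch (the function name and the choice of n = 4 are mine), the Hamming distance and its distribution from the identity can be computed by brute force for small n:

```python
from itertools import permutations

def hamming(g, h):
    """Number of points on which the permutations g and h (image form) differ."""
    return sum(1 for a, b in zip(g, h) if a != b)

# Distribution of Hamming distances from the identity in S_4.
n = 4
identity = tuple(range(n))
dist = {}
for g in permutations(range(n)):
    k = hamming(g, identity)
    dist[k] = dist.get(k, 0) + 1
# Note that no permutation is at distance 1: a single moved point
# would have nowhere to go.
```

For n = 4 this gives counts 1, 6, 8, 9 at distances 0, 2, 3, 4, matching the formula "binomial coefficient times derangement number".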
Transposition and movement metrics
Another example is the graph distance in the Cayley graph of G with respect to a set S which is a union of conjugacy classes generating G. The best-known case is that where S is the set T of transpositions in G. In this case, the distance from the identity to g is the smallest number of transpositions whose product is g, which is equal to n−c(g), where c(g) is the number of cycles of g (including fixed points).
This metric dT is closely related to the Hamming metric dH: for every non-identity permutation we have dT+1 ≤ dH ≤ 2dT.
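These formulas can be illustrated in a short sketch (function names are mine): dT is computed from the cycle count as n−c(g), and the relation to the Hamming distance is checked by brute force on S_5.

```python
from itertools import permutations

def cycle_count(g):
    """Number of cycles of the permutation g (image form), fixed points included."""
    seen, cycles = set(), 0
    for i in range(len(g)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = g[j]
    return cycles

def d_T(g):
    """Transposition distance from the identity: n - c(g)."""
    return len(g) - cycle_count(g)

def d_H(g):
    """Hamming distance from the identity: number of non-fixed points."""
    return sum(1 for i, gi in enumerate(g) if gi != i)

# Check d_T + 1 <= d_H <= 2*d_T for every non-identity element of S_5.
for g in permutations(range(5)):
    if d_T(g) > 0:
        assert d_T(g) + 1 <= d_H(g) <= 2 * d_T(g)
```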
Another normal Cayley metric is based on the norm called movement, which has been studied by Cheryl Praeger and others. The movement of a permutation g is the maximum, over all subsets A of the domain, of the size of the symmetric difference of A and Ag. Thus the movement of g is n−o(g), where o(g) is the number of cycles of g of odd length (including fixed points).
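Here is a brute-force check of the formula n−o(g) on S_4, a sketch assuming the symmetric-difference definition of movement given above (helper names are mine):

```python
from itertools import combinations, permutations

def movement(g):
    """Movement of g: max over all subsets A of |A symmetric-difference Ag|."""
    n = len(g)
    best = 0
    for r in range(n + 1):
        for A in combinations(range(n), r):
            Ag = {g[i] for i in A}
            best = max(best, len(set(A) ^ Ag))
    return best

def odd_cycles(g):
    """Number of cycles of g of odd length (fixed points included)."""
    seen, count = set(), 0
    for i in range(len(g)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = g[j]
                length += 1
            if length % 2 == 1:
                count += 1
    return count

# Verify movement(g) = n - o(g) over all of S_4.
for g in permutations(range(4)):
    assert movement(g) == 4 - odd_cycles(g)
```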
Right-invariant metrics
As noted above, a right-invariant metric is derived from a norm N by the rule d(g,h) = N(gh⁻¹).
A norm also gives rise to a left-invariant metric d′ by the rule d′(g,h) = N(g⁻¹h). One can easily convert between the two metrics, since d′(g,h) = d(g⁻¹,h⁻¹). We will see an example below.
There are so many metrics invariant under right translation (for example, the graph metric in any Cayley graph) that all we can do is to concentrate on interesting ones.
The most interesting case is the Cayley graph with respect to the Coxeter generators (1,2), (2,3), …, (n−1,n). I will call this the Coxeter metric, though many other names have been used; we will see that it is connected to many other mathematical topics.
Finding a path from the identity to g in this Cayley graph is precisely the sorting procedure known as Bubblesort, described in elementary books on programming and looked down on by real computer scientists because it is so inefficient (its complexity is about n², as opposed to n log n for Mergesort, for example).
Bubblesort works as follows. Given a list to be sorted (which we suppose is a permutation of [1,2,...,n]), we make repeated passes through the list. At each pass, if the element we are looking at is greater than its immediate neighbour to the right, we swap these two elements; then we move one place right and continue. When a complete pass has involved no swaps, we are finished. The name comes from the way that large elements bubble up to the top of the list.
A word of explanation is required. Bubblesort exchanges elements in adjacent positions, whereas multiplying on the right by a Coxeter generator (i,i+1) interchanges adjacent elements i and i+1, wherever they are. This problem is easily resolved. Multiplying on the left by (i,i+1) interchanges the elements in positions i and i+1.
In some sense, Bubblesort is optimally efficient. If we wish to sort a permutation by adjacent transpositions, each pair of elements which are out of order must be swapped at some point; Bubblesort swaps each such pair exactly once, and never swaps a pair in the correct order. So it shows that the distance of a permutation g from the identity in the Coxeter metric is equal to the number of pairs of elements in g which are out of order. In particular, the permutation at maximum distance from the identity is the involution which reverses the order.
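The agreement between Bubblesort's swap count and the inversion count can be checked directly; this sketch (names are mine) verifies it on all of S_4:

```python
from itertools import permutations

def bubblesort_swaps(perm):
    """Sort a list by adjacent swaps (Bubblesort), returning the number of swaps."""
    a = list(perm)
    swaps = 0
    done = False
    while not done:
        done = True
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swaps += 1
                done = False
    return swaps

def inversions(perm):
    """Number of pairs (i, j) with i < j and perm[i] > perm[j]."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

# The Coxeter distance from the identity is the inversion count.
for g in permutations(range(4)):
    assert bubblesort_swaps(g) == inversions(g)
```

The reversal of n = 4 elements attains the maximum distance n(n−1)/2 = 6.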
I cannot resist mentioning an application which Celia Glass and I found recently. Given a graph X on n vertices, an acyclic orientation of X is an assignment of directions to the edges so that there are no directed cycles. It is not hard to show that any acyclic orientation arises from a total ordering of the vertices, by directing all edges from smaller to larger.
Given two acyclic orientations of X, is it possible to pass from one to the other by a sequence of edge reversals without leaving the class of acyclic orientations, and if so, how many edge reversals are required? Each acyclic orientation comes from an ordering of the vertices, and we can Bubblesort one orientation into the other. Now interchanging consecutive vertices has no effect if the vertices are not joined, and is an edge reversal preserving the acyclic property if they are joined. So the answer to the question is yes, and the edges which must be reversed are just those whose directions differ in the two orientations (each is reversed just once by the algorithm).
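The argument above can be sketched in code, under the assumption that each acyclic orientation is represented by a vertex ordering; the helper names and the 4-cycle example are my own:

```python
def orientation(order, edges):
    """Orient each edge of the graph from the earlier to the later vertex
    in the given total ordering (this is always acyclic)."""
    pos = {v: i for i, v in enumerate(order)}
    return {(u, v) if pos[u] < pos[v] else (v, u) for u, v in edges}

def reversals_between(order1, order2, edges):
    """Bubblesort order1 into order2, counting edge reversals:
    swapping consecutive vertices reverses an edge only if they are joined."""
    target = {v: i for i, v in enumerate(order2)}
    a = list(order1)
    reversals = 0
    done = False
    while not done:
        done = True
        for i in range(len(a) - 1):
            if target[a[i]] > target[a[i + 1]]:
                if (a[i], a[i + 1]) in edges or (a[i + 1], a[i]) in edges:
                    reversals += 1
                a[i], a[i + 1] = a[i + 1], a[i]
                done = False
    return reversals

# Two orientations of the 4-cycle, from opposite orderings of the vertices.
edges = {(1, 2), (2, 3), (3, 4), (4, 1)}
r = reversals_between([1, 2, 3, 4], [4, 3, 2, 1], edges)
diff = orientation([1, 2, 3, 4], edges) - orientation([4, 3, 2, 1], edges)
# r equals the number of edges whose directions differ: each is reversed once.
```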
The Coxeter metric on the symmetric group is of great importance in the study of its representation theory, geometry and combinatorics; but that is a topic for another time.
The corresponding rank correlation statistic is Kendall’s tau; it is shifted to be symmetric about zero, so that its value is the number of pairs in the correct order minus the number of pairs which are reversed.
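A minimal sketch of this shifted statistic (the function name is mine): count concordant pairs minus discordant pairs of two rankings.

```python
def kendall_tau_statistic(x_ranks, y_ranks):
    """Concordant pairs minus discordant pairs for two rankings."""
    n = len(x_ranks)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = x_ranks[i] - x_ranks[j]
            b = y_ranks[i] - y_ranks[j]
            if a * b > 0:       # pair in the same order in both rankings
                s += 1
            elif a * b < 0:     # pair reversed
                s -= 1
    return s
```

Identical rankings of length n give +n(n−1)/2, and opposite rankings give −n(n−1)/2, so the statistic is indeed symmetric about zero.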
These belong to a general class of metrics on n-tuples of real numbers derived from norms of the form

N(x) = F⁻¹(F(|x1|) + … + F(|xn|)),

where F is a convex function on the positive real numbers. We get lp when F(x) = x^p, for p ≥ 1. As p → ∞, the norm tends to N(x) = max(|x1|,…,|xn|).
Note that the metric defined in the “natural” way from such a norm, by d(g,h)=N(g−h), is left-invariant, if we take gi to be the value of g at i, that is, if we write the permutation in image form as an n-tuple. In order to get a right-invariant metric, we have to take gi to be the position in which i occurs in the n-tuple.
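This invariance can be checked by brute force on S_3, a sketch using the convention that a product means "apply the first factor, then the second" (as in the discussion above); the helper names are mine. The l1 norm of g−h in image form is left-invariant, while applying it to the position (inverse) form gives a right-invariant metric.

```python
from itertools import permutations

def compose(a, b):
    """Product ab in the convention 'apply a, then b' (image-form tuples)."""
    return tuple(b[a[i]] for i in range(len(a)))

def l1(g, h):
    """l1 distance between two tuples."""
    return sum(abs(gi - hi) for gi, hi in zip(g, h))

def inverse(g):
    """Position form: inverse(g)[i] is the position in which i occurs in g."""
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

perms = list(permutations(range(3)))
for g in perms:
    for h in perms:
        for k in perms:
            # left-invariance of l1 on image tuples
            assert l1(compose(k, g), compose(k, h)) == l1(g, h)
            # right-invariance of l1 on position (inverse) tuples
            assert l1(inverse(compose(g, k)), inverse(compose(h, k))) \
                   == l1(inverse(g), inverse(h))
```

Left translation permutes the coordinates of the image tuple, which any norm of the above form ignores; that is why the image form gives left-invariance.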
The statistic obtained from the l2 metric is Spearman’s rank correlation.
Another interesting metric is Ulam’s, where the distance between g and h is n−k, where k is the length of the longest increasing subsequence of gh⁻¹ (written in image form). The permutation furthest from the identity is the reversal, as for the Coxeter metric.
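A sketch of the Ulam distance (names are mine), with the longest increasing subsequence found by patience sorting, and gh⁻¹ formed in the "apply g, then h-inverse" convention used above:

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of a longest increasing subsequence (patience sorting)."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def ulam(g, h):
    """Ulam distance: n minus the length of the longest increasing
    subsequence of g h^{-1} in image form."""
    n = len(g)
    hinv = [0] * n
    for i, hi in enumerate(h):
        hinv[hi] = i
    ghinv = [hinv[g[i]] for i in range(n)]
    return n - lis_length(ghinv)
```

The reversal on n points is at distance n−1 from the identity, the maximum possible.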
The Lee metric is of some importance in coding theory, and features in the fairly recent work on codes over the integers mod 4. Like the Hamming and l1 metrics, we sum the distances between coordinates of the permutations in image form. For the Hamming metric, the distance between any two distinct coordinates is 1; for the l1 metric, it is the modulus of their difference (as integers); for the Lee metric, it is the shortest distance apart when the values 1,2,…,n are written around a circle.
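A minimal implementation of the Lee distance between two permutations in image form (the function name is mine; values are taken as 0,…,n−1 around the circle rather than 1,…,n, which does not affect the distances):

```python
def lee_distance(g, h):
    """Lee distance: sum over coordinates of the circular distance
    between values, with 0..n-1 arranged around a circle."""
    n = len(g)
    total = 0
    for a, b in zip(g, h):
        d = abs(a - b)
        total += min(d, n - d)  # shorter way around the circle
    return total
```

For comparison, the Hamming distance would count each differing coordinate as 1, and the l1 distance would use |a−b| itself in place of the circular minimum.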
Any sensible metric on a group should have some connection with the group structure; right invariance is the most natural condition to take. There are some metrics which are neither right-invariant nor left-invariant, but are invariant under conjugation. One example is the distance in the commuting graph, which I discussed in a recent post. (I showed there that the commuting graph of Sn is connected provided that neither n nor n−1 is a prime, and that the diameter in this case is bounded by an absolute constant; but the distribution of distances is unknown, as far as I know.)