Computational group theory is the art or science of using a computer to learn something about a group. I was introduced to it by John Cannon in the early 1980s. It seems like a black art to many mathematicians (myself included), and perhaps also to computer scientists; I am not sure. (I mean no disparagement here; “black” is the colour of mystery. See below.) I think there may be several reasons for this.

- First, there are many different ways in which a group may be presented to the computer: by a set of concrete generators (which may be permutations, matrices, or something else entirely); a presentation by generators and relations; as the symmetry group of something; as the homotopy group of some topological space; and so on. These may require very different techniques. For example, given generating permutations on a finite set, we can learn everything about the group in a finite amount of time; but given a presentation, it is undecidable even whether the group is trivial or not.
- What seems easy to a human may be hard for a computer, and *vice versa*. I can say, “Let *p* be a prime divisor of the order of *G*, and let *P* be a Sylow subgroup of *G*.” But, even if I know the group order and can factorise it (!), standard proofs of the existence of Sylow subgroups are worse than exponential. It was a huge and somewhat shocking breakthrough when Bill Kantor showed how to find them in polynomial time.
- Randomness plays a big part in computational group theory. Many important algorithms are either *Las Vegas* (if they run, they give the right answer, but they may fail with small probability) or, even worse, *Monte Carlo* (which might give the wrong answer with small probability). This is disturbing to mathematicians, even those who entrust their personal and financial details to a system whose security depends on numbers which are “probably prime”.
- There is also a distinction between theorists and practitioners, which was graphically highlighted by Laci Babai at a conference where he referred to them as the “reds” and the “greens” (I cannot now remember which was which). A theorist devises an algorithm which runs in time “soft O of *n* squared”, with constants depending on something or other, where “soft O” allows an arbitrary power of log *n*; the practitioners have a program, written in GAP or Magma, which runs on huge groups in a few seconds or hours on such and such a machine.
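The “probably prime” remark can be made concrete. The Miller–Rabin test (my illustration, not something from the discussion above) is a classic Monte Carlo algorithm: a prime is always declared “probably prime”, but a composite number can slip through, with probability at most 4<sup>−*k*</sup> after *k* independent rounds.

```python
import random

def miller_rabin(n, rounds=20):
    """Monte Carlo primality test: may wrongly call a composite
    'probably prime', with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1) if n > 4 else 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: n is definitely composite
    return True  # no witness found: n is probably prime
```

Note the asymmetry: a `False` answer is certain, while a `True` answer is only probably correct, which is exactly what makes the algorithm Monte Carlo rather than Las Vegas.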

(In connection with the last point, Dick Lipton and Ken Regan have discussed the concept of galactic algorithms, which are theoretically efficient but will never be run, either because the constants are too large, or because the power of *n* is too large. They hope that some of these can be brought down to earth in the future.)

The basic things that a computer can do with a group are to multiply group elements, invert an element, and return the identity. This led to the notion of a *black box group*, where the group elements are represented (not necessarily uniquely) by bit strings of fixed length, and the black box performs these operations in a specified time which may be taken as the unit for measuring the complexity. I remember a talk at the ICM in Kyoto in 1990 in which Babai illustrated this by a picture of a Japanese-style street vending machine, with three slots: you put group element *g* in the first slot, *h* in the second, and ¥100 in the third, and the product *gh* is delivered to you. A black box algorithm will thus run on any finite group in which the basic group operations are computable, and multiplying its complexity by the time taken for a multiplication will give the complexity of the algorithm in the practical situation.
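To make the black box concrete, here is a toy sketch (entirely my own illustration, with invented names): elements are opaque byte strings, and the caller may use only the three oracle operations. The hidden implementation here happens to be addition modulo *n*, but a black box algorithm, such as the naive order computation below, must not peek inside.

```python
class BlackBoxGroup:
    """Toy black box: elements are opaque byte strings; the box only
    multiplies, inverts, and returns the identity.  The hidden
    implementation (the cyclic group Z/n) is an assumption made purely
    for illustration; callers must not rely on it."""

    def __init__(self, n):
        self._n = n  # hidden inside the box

    def identity(self):
        return (0).to_bytes(8, "big")

    def multiply(self, g, h):
        a = int.from_bytes(g, "big")
        b = int.from_bytes(h, "big")
        return ((a + b) % self._n).to_bytes(8, "big")

    def invert(self, g):
        a = int.from_bytes(g, "big")
        return ((-a) % self._n).to_bytes(8, "big")


def element_order(box, g):
    """A black box algorithm: computes the order of g using only the
    three oracle operations, by repeated multiplication."""
    e, x, k = box.identity(), g, 1
    while x != e:
        x = box.multiply(x, g)
        k += 1
    return k
```

Because `element_order` touches only the oracle, the same code would run unchanged on any group plugged into the box; its cost, measured in oracle calls, is the black-box complexity.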

So, for example, if your groups consist of permutations on *n* symbols, you can encode elements as bit strings of length *n* log *n* and apply black box algorithms to learn about your group. But you probably wouldn’t do that. Given a permutation, it is easy to compute its cycle lengths; their least common multiple is the order of the element. Given a set of elements, the Schreier–Sims algorithm, and various refinements, efficiently give a canonical form for elements, the group order, and a membership test.
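The cycle-length computation really is easy; a minimal sketch (my own, assuming a permutation given as a list where `perm[i]` is the image of `i`):

```python
from math import gcd

def permutation_order(perm):
    """Order of a permutation: the lcm of its cycle lengths."""
    n = len(perm)
    seen = [False] * n
    order = 1
    for start in range(n):
        if seen[start]:
            continue
        # Trace the cycle through start and record its length.
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        order = order * length // gcd(order, length)  # lcm so far
    return order
```

For instance, the permutation (0 1 2)(3 4), encoded as `[1, 2, 0, 4, 3]`, has cycle lengths 3 and 2 and hence order 6, found here without ever multiplying permutations.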

A compromise is provided by various sorts of “grey boxes” which provide a bit more information, for example the orders of group elements.

DADA + DATA = DATDA (Defense Against The Dark Arts)