## Discrete mathematics in Derby

This week I have been at a conference on “Theoretical and Computational Discrete Mathematics” at the University of Derby, under the auspices of the Institute for Mathematics and its Applications.

The University of Derby was founded as the Derby Diocesan Institution for the Training of Schoolmistresses in 1851 and became one of the “new universities” in 1992. Finding mathematics on their website is difficult; it is in the college of engineering and technology, rather than any of a number of equally plausible-sounding colleges, and is part of the school of computing and mathematics. The order is significant; mathematics follows computer games, computer science and information technology in their list of subject areas. The mathematics website, when I found it, was entirely about teaching, with no mention of research or a staff list. So I had little idea what to expect.

Peter Larcombe, head of mathematics, introduced the conference, and told us that this is the first ever mathematics conference in Derby, but they are hoping to make it the first of a series, possibly every other year. I wish them success in this! Then Chris Linton, current IMA president, told us a bit about the IMA and what it does.

I spoke first, about synchronization, which you have heard enough about if you read what I put here. There was quite a lot of interest, including a question about whether you can produce a parameterised version of the result that finding the shortest reset word for an automaton is NP-hard.
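
Although finding a shortest reset word is NP-hard in general, on small automata it can be done exactly by breadth-first search over subsets of states. A minimal sketch (the function name is my own, for illustration), tried on the three-state Černý automaton, whose shortest reset word has the extremal length (n−1)² = 4:

```python
from collections import deque

def shortest_reset_word(n_states, letters):
    """Breadth-first search over subsets of states: the first singleton
    subset reached gives a shortest reset (synchronizing) word."""
    start = frozenset(range(n_states))
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:
            return word
        for letter, f in letters.items():
            image = frozenset(f[s] for s in subset)
            if image not in seen:
                seen.add(image)
                queue.append((image, word + letter))
    return None  # the automaton is not synchronizing

# Cerny automaton on 3 states: 'a' cycles 0->1->2->0, 'b' merges 0 into 1.
letters = {"a": (1, 2, 0), "b": (1, 1, 2)}
w = shortest_reset_word(3, letters)
print(w)  # baab
```

The search is exponential in the number of states, as one would expect from the hardness result, but it is a handy way to experiment with small examples.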

Other talks on the first day:

• Armen Petrosyan talked about things he called “arithmetic graphs”. The vertex set is a set N of positive integers, possibly with repetitions; there is another set M of positive integers, and you join two vertices if their sum is in M. Taking N={3,4,5} and M={7,8,9} you get the famous right-angled triangle known to the Egyptians, which he calls the “Egyptian graph”. The bulk of his talk consisted of defining a notion of “information” based on the numbers on the edges of the graph, finding representations of various molecules as arithmetic graphs (where N is the set of atomic weights of the atoms), and then finding that various physical properties of the molecules such as solubility in water are highly correlated with “information” (all his reported correlations are over 0.9). Mysterious!
• Colin Wilmott gave a lovely talk about quantum circuits. Most quantum gates have one input and one output, but you need a gate with 2 in and 2 out, called “controlled NOT” or CNOT. The CNOT gate is the most difficult to implement so he wants to find a circuit for a given job which uses the fewest CNOT gates. This led to some interesting questions about linear recurrences.
• Dorin Andrica from Romania was introduced as the person who conjectured that the difference between the square roots of consecutive primes is always less than 1. He started his talk with the discussion of “derivatives”, real functions f which are derivatives of some other function. The class of derivatives is larger than the class of continuous functions but smaller than the class of “Darboux functions” (those satisfying the Intermediate Value Theorem). Trying to construct derivatives of certain specific forms such as products of functions of the form cos(a/x) led to combinatorial questions about Erdős–Surányi sequences, with some interesting results and open problems. Here was someone whose mathematical mansion has no internal walls!
• Robert Hancock, a St Andrews undergraduate now working with Andrew Treglown in Birmingham, talked about their generalisations of Cameron–Erdős to sets excluding solutions of other linear equations such as px+qy = rz. They have some very precise estimates.
• Ovidiu Bagdasar talked about Horadam sequences, certain special classes of solutions to linear recurrence relations. He asked a very simple question: for what values of the four parameters (the two coefficients in the recurrence and the two starting values) can such a sequence be periodic? Some nice formulae and pretty patterns in the complex plane emerged.
• Ibrahim Ozbek presented a new threshold secret sharing scheme. (This is a scheme which shares a secret among n people, such that any k of them can recover the secret if they cooperate, but k−1 can get no information whatever.) His example is built from error-correcting codes. We take a t-error-correcting code which can’t correct any t+1 errors; the secret is a word in this code. In essence, it goes like this. The “dealer” chooses n = t+k positions and makes errors in those positions. He also chooses n words from a 1-error-correcting code, such that the first letter of the ith word is equal to the ith coordinate which was changed in the secret; he deletes this first letter and gives the rest to the ith participant. Now the ith participant can recover the deleted coordinate of his word, by putting any old rubbish there and then correcting; so k of the participants can change the published word to one with only n−k = t errors, which they can then correct, but fewer than k participants cannot correct the errors.
• Michael Alphonse talked about an algorithm to find the domination number of a circulant graph with two step lengths.
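
The “Egyptian graph” is easy to reproduce from the definition above. A minimal sketch (the function name is my own): take N = {3,4,5} and let M = {7,8,9} contain the pairwise sums, so that all three edges appear.

```python
from itertools import combinations

def arithmetic_graph(N, M):
    """Edge list of the arithmetic graph on vertex list N:
    join two vertices when their sum lies in M."""
    return [(x, y) for x, y in combinations(N, 2) if x + y in M]

# The 3-4-5 "Egyptian" triangle: every pairwise sum (7, 8, 9) lies in M,
# so all three possible edges are present.
edges = arithmetic_graph([3, 4, 5], {7, 8, 9})
print(edges)  # [(3, 4), (3, 5), (4, 5)]
```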

The second day’s fare:

• Vadim Lozin explained some very interesting stuff about structural reasons why some graph problems are hard. He began with the observation that the independent set problem is NP-complete, but the matching problem (which is the independent set problem in line graphs) is polynomial (Jack Edmonds’ famous algorithm). Now line graphs form a hereditary class defined by nine forbidden subgraphs, one of which is the claw. The claw is the one that counts. The independent set problem is polynomial in graphs forbidding a claw, but is NP-complete in the class of graphs forbidding the other 8 forbidden subgraphs for line graphs. He went on from there to his theory of boundary classes, a bit technical, but containing some lovely results and a very nice conjecture.
• Jessica Enright has worked for the Scottish Epidemiology Centre, and looked at the question: given a graph, how many edges do we have to remove to break it into connected components of size bounded by a given constant? Even the decision problem is NP-complete. However, for graphs of bounded treewidth, there is an algorithm with time linear in the number of vertices, the constant being a function of the treewidth bound (in other words, the problem is fixed-parameter tractable). Remarkably, it turns out that the graph of cattle movements between Scottish farms has extremely small treewidth, so the methods are applicable. This seems to me the real mystery from her talk. Afterwards, we had a discussion about this, and decided that perhaps networks designed by humans have small treewidth (because large treewidth makes the graphs hard to think about), while those that evolve by some process may have large treewidth. For example, the “neighbour” relation on Scottish farms (two farms adjacent if they share a boundary) has large treewidth.
• Nicholas Korpelainen talked about cliquewidth, a related parameter (perhaps). For a hereditary graph class, being well quasi-ordered by induced subgraphs is related to bounded cliquewidth (though a conjecture that these are equivalent has recently been refuted).
• Yury Kartinnik talked about finding minimum-size maximal k-matchings (sets of edges which are independent in the k-th power of the given graph).
• Kitty Meeks gave a nice talk about counting induced subgraphs with some particular property. There are six versions of the problem she considers: does such a subgraph exist? can we count them approximately? can we count them exactly? can we find one? can we sample at random? can we find them all? For a given type of subgraph, some of these problems may be tractable, but there are relationships: for instance, if we can solve the decision problem affirmatively then we can find a witness. For example, for a k-clique, delete vertices one at a time until there is no longer a k-clique; the last vertex deleted must lie in a k-clique, so put it into the witness, delete it from the graph, and replace k by k−1.
• Shera Marie Pausang found Nash equilibria for the “inspection game” where an inspector with limited resources has to inspect a number of people who may or may not be complying when the inspector arrives: a very practical problem!
• Eric Fennessey talked about some of the practical problems he has to deal with in his daily job at Portsmouth dockyard looking after the navy’s destroyers. The problems seem remarkably basic. One of them is as follows. We have a number of bags of jelly beans, each bag labelled with the number and total weight of beans it contains. Find maximum-likelihood estimators for the mean and variance of the weight of a jelly bean. He remarked that for the variance he has to assume that the distribution of weight is normal, which of course means that there is a non-zero probability that a jelly bean has negative weight!
• Fionn Murtagh talked about an ultrametric, or p-adic, approach to problems of clustering in large datasets. He remarked that this sort of thinking has been used in publications on psychoanalysis (Ignacio Matte Blanco), fundamental physics (Igor Volovich), and cosmology.
• I.-P. Popa talked about linear discrete-time systems in Banach spaces.
• Hitoshi Koyano said that, while a lot of data consists of numbers, for which there are well-developed statistical techniques, increasing amounts of data consist of strings, where techniques are much less developed; he is developing techniques for this.
• Finally, Sugata Gangopadhyay talked about a subject close to my heart, bent functions; specifically, cubic bent functions which are invariant under cyclic permutation of the variables (he calls this property “rotation invariance”). A bent function is a Boolean function which lies at maximum Hamming distance from the set of linear functions. He is interested in finding bent functions at maximum distance from the quadratic functions. (In fact, it is not obvious that the functions furthest from quadratic functions are necessarily bent, but that is another question!)
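
The decision-to-witness step from Kitty Meeks’s talk can be sketched concretely. The version below is a slight variant of the deletion argument described above (my own illustrative code, with a brute-force oracle standing in for whatever efficient decision procedure is available): repeatedly delete any vertex whose removal still leaves a k-clique; when no vertex can be deleted, every survivor lies in every remaining k-clique, so the survivors are themselves a k-clique.

```python
from itertools import combinations

def has_k_clique(vertices, edges, k):
    """Brute-force decision oracle: does the graph contain a k-clique?"""
    E = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in E for p in combinations(c, 2))
               for c in combinations(sorted(vertices), k))

def find_k_clique(vertices, edges, k):
    """Self-reduction from decision to search: peel off vertices whose
    removal keeps a k-clique; the irremovable survivors form one."""
    V = set(vertices)
    assert has_k_clique(V, edges, k)  # a witness must exist to be found
    changed = True
    while changed:
        changed = False
        for v in sorted(V):
            if has_k_clique(V - {v}, edges, k):
                V.remove(v)
                changed = True
    return sorted(V)

# Triangle {1,2,3} with a pendant vertex 4: the oracle-guided deletion
# strips 4 and leaves exactly the triangle.
witness = find_k_clique([1, 2, 3, 4], [(1, 2), (1, 3), (2, 3), (1, 4)], 3)
print(witness)  # [1, 2, 3]
```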
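
For tiny numbers of variables, the distance defining bentness can be computed by exhaustion. A minimal sketch (my own illustrative code): measure the Hamming distance from a Boolean function to the affine functions (the linear functions and their complements); f(x1,x2) = x1·x2 attains the maximum possible value 2^(n−1) − 2^(n/2−1) = 1 for n = 2, so it is bent.

```python
from itertools import product

def nonlinearity(f, n):
    """Minimum Hamming distance from Boolean function f (a dict on
    {0,1}^n) to the affine functions, by exhaustive comparison."""
    points = list(product([0, 1], repeat=n))
    best = len(points)
    for coeffs in product([0, 1], repeat=n + 1):  # a_1..a_n and constant c
        *a, c = coeffs
        dist = sum(f[x] != (sum(ai * xi for ai, xi in zip(a, x)) + c) % 2
                   for x in points)
        best = min(best, dist)
    return best

# f(x1, x2) = x1*x2 is bent: its distance to the affine functions is
# 2^(n-1) - 2^(n/2 - 1) = 1, the maximum possible for n = 2.
f = {x: (x[0] * x[1]) % 2 for x in product([0, 1], repeat=2)}
nl = nonlinearity(f, 2)
print(nl)  # 1
```

The exhaustive loop over all 2^(n+1) affine functions is only feasible for very small n; for larger n one would use the Walsh–Hadamard transform instead.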

A small but extremely wide-ranging conference; I look forward to the next one. 