3rd Scottish Combinatorics Meeting

This week we had the third in the series of Scottish Combinatorics Meetings. The series was begun by Kitty Meeks in Glasgow; she ran it for two years, and this year it moved to St Andrews. Fittingly, Kitty was invited to speak at this year’s meeting.

At the weekend we had two beautiful spring days; bluebells are out in force, and rarer plants such as water avens were putting on good displays. Monday morning dawned sunny, but by lunchtime the weather had reverted briefly to winter. Since everything was in the Mathematical Institute this didn’t matter too much.

Here are notes on just a few of the talks that struck me.

The opening lecture was by Simon Blackburn, who talked about network coding (a subject I knew a bit about, thanks to working with Søren Riis and Max Gadouleau at Queen Mary; I wrote about it here). Briefly, in a network through which some physical commodity flows, there can be bottlenecks; but if the network carries data, then giving the nodes some simple processing power can allow faster transmission. The famous butterfly network illustrates this; I started my account with it, and Simon also began with it.
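
To make this concrete, here is a tiny sketch in Python (my own illustration, not taken from Simon's talk) of the butterfly trick: two source bits must reach two sinks through a single shared bottleneck, and transmitting their XOR at the bottleneck lets each sink recover both bits.

    # Plain forwarding lets only one packet through the bottleneck per use;
    # coding (here, XOR) at the bottleneck node serves both sinks at once.

    def butterfly(b1: int, b2: int):
        """Send bits b1, b2 from two sources to two sinks via one bottleneck."""
        coded = b1 ^ b2            # the bottleneck node transmits the XOR
        # Sink 1 hears b1 directly plus the coded bit; sink 2 hears b2 plus the coded bit.
        sink1 = (b1, b1 ^ coded)   # recovers (b1, b2)
        sink2 = (b2 ^ coded, b2)   # recovers (b1, b2)
        return sink1, sink2

    for b1 in (0, 1):
        for b2 in (0, 1):
            s1, s2 = butterfly(b1, b2)
            assert s1 == (b1, b2) and s2 == (b1, b2)
    print("both sinks recover both packets in every case")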

There are n packets of information which have to be sent through the network to a number of sinks. Unlike the simple butterfly, real networks are large and complicated, and so a sink will not know which functions of the input packets it is receiving, and so will not know how to recover the original messages. If (as we assume) the operations of intermediate nodes are linear, then all that the sinks know is that what they receive lies in the subspace of F^n spanned by the input data. So we have to shift our viewpoint, and regard the information being sent as a subspace of a vector space, rather than a specific collection of vectors.

The number of k-dimensional subspaces of an n-dimensional vector space over a field of order q is a Gaussian or q-binomial coefficient. As my Advanced Combinatorics students know, this is a monic polynomial in q of degree k(n−k). So, for large q, it is about q^{k(n−k)}. This number is equal to the number of subspaces which are the row spaces of matrices of the form (I | D), where I is the identity matrix of order k and D is an arbitrary k×(n−k) matrix. So we can restrict attention to these. Whatever the sink receives is a matrix with the same row space; by performing row operations it can reduce it to the above form and recover the data matrix D.
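
As a quick numerical illustration (mine, not from the talk), the following snippet computes the Gaussian binomial coefficient and compares it with q^{k(n−k)} for a few values of q.

    def gaussian_binomial(n, k, q):
        """Number of k-dimensional subspaces of an n-dimensional space over GF(q)."""
        num = den = 1
        for i in range(k):
            num *= q ** (n - i) - 1
            den *= q ** (k - i) - 1
        return num // den

    n, k = 5, 2
    for q in (2, 3, 101):
        exact = gaussian_binomial(n, k, q)
        approx = q ** (k * (n - k))          # number of matrices of the form (I | D)
        print(q, exact, approx, exact / approx)   # the ratio approaches 1 as q grows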

What if some nodes in the network are not working correctly? This could be either because of mechanical faults or the result of malicious hacking. If t nodes are affected, then the actual subspace received at the sink will be at distance t or less from the correct one, where the distance between two subspaces U and W is the maximum of dim(U)−dim(U∩W) and dim(W)−dim(U∩W). So we have a new setting in which to do error-correction, with subspaces replacing words and the above distance replacing Hamming distance. Simon explained the analogue of the Johnson bound for constant-dimension subspace codes, and some recent work of his with Tuvi Etzion and Jessica Claridge on this.
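
Here is a small sketch of this distance (my own, working over GF(2) for simplicity), computed from ranks via dim(U∩W) = dim(U) + dim(W) − dim(U+W).

    def rank_gf2(rows):
        """Rank of a list of 0/1 rows over GF(2), by Gaussian elimination."""
        rows = [list(r) for r in rows]
        rank = 0
        ncols = len(rows[0]) if rows else 0
        for col in range(ncols):
            pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for i in range(len(rows)):
                if i != rank and rows[i][col]:
                    rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
            rank += 1
        return rank

    def subspace_distance(U, W):
        dU, dW = rank_gf2(U), rank_gf2(W)
        d_int = dU + dW - rank_gf2(U + W)    # dimension of the intersection
        return max(dU - d_int, dW - d_int)

    U = [[1, 0, 0, 1], [0, 1, 0, 1]]
    W = [[1, 0, 0, 1], [0, 0, 1, 1]]
    print(subspace_distance(U, W))           # 1: the two planes meet in a line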

Kitty Meeks talked about a new version of graph colouring, interactive sum list colouring. In ordinary graph colouring we have a fixed set of k colours, and ask for the smallest k for which a given graph can be coloured. List colouring generalises this: each vertex is given its own list of k colours from a possibly much larger set, and we ask for the smallest k such that the graph can be properly coloured from the lists, however they are chosen. Sum list colouring extends this further, by allowing the lists given to vertices to be of different sizes, and minimising the sum of the list sizes. The new generalisation takes this one step further, and is best thought of as a game between Alice, who is trying to colour a graph, and Bob, who is trying to thwart her. At each round Alice is allowed to ask for a new colour for a particular vertex, different from those already provided by Bob for that vertex; each request has a cost of 1, and she is trying to minimise (and Bob to maximise) the total cost of reaching a position from which she can properly colour the graph using the colours provided. The corresponding colouring number is the cost when both Alice and Bob play optimally.

Let scn(G) and iscn(G) be the minimum sums required in the two cases. An easy argument shows that scn(G) is at most the sum of the numbers of vertices and edges of G; we call G sc-greedy if this bound is met. Many examples of sc-greedy graphs are known.

Not too much is known yet about iscn(G). It is an interesting exercise to show that the three-vertex path has iscn equal to 4. (Ask for a colour for each vertex first, and then consider possible cases.) This graph has scn equal to 5 (it is sc-greedy).
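
For the scn claim, a brute-force verification is easy (this is my own naive sketch, not anything from the talk): check that no size function summing to 4 makes the path colourable from every choice of lists, while one summing to 5 does. It uses the observation that the lists may be assumed to come from a palette no larger than the sum of the list sizes.

    from itertools import combinations, product

    EDGES = [(0, 1), (1, 2)]                  # the three-vertex path

    def choosable(sizes):
        """Can the path always be coloured from lists of the given sizes?"""
        palette = range(sum(sizes))
        options = [list(combinations(palette, s)) for s in sizes]
        for lists in product(*options):       # every possible list assignment
            if not any(all(col[u] != col[v] for u, v in EDGES)
                       for col in product(*lists)):
                return False                  # some assignment defeats every colouring
        return True

    def exists_choosable(total):
        """Is some size function with this exact sum choosable?"""
        return any(choosable(s) for s in product(range(1, total + 1), repeat=3)
                   if sum(s) == total)

    print(exists_choosable(4))   # False (and sum 3 fails too), so scn(P3) > 4
    print(exists_choosable(5))   # True, e.g. sizes (1, 3, 1): hence scn(P3) = 5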

It is known that iscn(G) does not exceed scn(G), and is equal to it in the case of complete graphs; they (Kitty and her collaborator Marthe Bonamy) have shown that it is strictly smaller for other graphs. Indeed, the difference between the two numbers is at least (n−ω(G))/2, where ω(G) is the clique number of G.

The first day concluded with a talk by Max Gadouleau. As I said before, Max has done some good work on network coding, but in this talk he stepped back. Networks are studied in many areas of mathematics and science, and there is inevitably a certain amount of multiple discovery and of calling the same thing by distinct names.

Max actually talked about discrete dynamical systems. A dynamical system is just a set with a function on it, and we are interested in iterating the function. If the set has N elements, there are N^N functions, and the long-term behaviour is simple: there is a subset of the points on which the function acts as a permutation (the recurrent or periodic points), and every point arrives at a periodic point after a finite number of steps. Max was interested in making general statements about the number of fixed points, the number of periodic points, and the size of the image of the function.

The connection with networks arises as follows. We are particularly interested in the case where there is a set V of n nodes, each of which can be in one of a finite number q of states; so N = q^n, and a function f from the set of states to itself can be regarded as n coordinate functions, each mapping [q]^n to [q], where [q] is the set of states of a single node. Typically, in systems which arise in practice, these functions depend only on a small number of arguments; that is, the transition at a node depends only on the states of a few “neighbouring” nodes. We can represent this dependence by a directed graph (the interaction graph of the dynamical system), and so we are doing network theory in a very general way.
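
To make this concrete, here is a toy example (my own, with an arbitrary choice of local rules): three Boolean nodes updating simultaneously, each reading only a couple of neighbours, with the image size, periodic points and fixed points found by brute force.

    from itertools import product

    n, q = 3, 2

    def f(state):
        x0, x1, x2 = state
        # node 0 reads nodes 1, 2; node 1 reads nodes 0, 2; node 2 reads node 0
        return (x1 & x2, x0 | x2, x0)

    states = list(product(range(q), repeat=n))    # all q^n = 8 states
    image = {f(s) for s in states}

    # iterating the image q^n times leaves exactly the periodic (recurrent) points
    recurrent = set(states)
    for _ in range(len(states)):
        recurrent = {f(s) for s in recurrent}

    fixed = [s for s in states if f(s) == s]
    print("image size:", len(image))              # 5
    print("periodic points:", sorted(recurrent))  # 4 of them, two lying on a 2-cycle
    print("fixed points:", fixed)                 # (0, 0, 0) and (1, 1, 1)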

Apparently this has real meaning. Fixed points in a gene regulation system may, for example, correspond to different modes of functioning of the cell, and may include cancer.

One thing that one can do now is to ask about dynamical systems with a given interaction graph. Another is to fix the graph and vary q, or restrict the kind of functions allowed (linear, monotone, threshold, etc.). Yet again we could have different rules for the updates. Max considered only the case where all vertices update simultaneously.

One of the few things known is the analogue of the Singleton bound in coding theory: the number of fixed points is at most q^τ, where τ is the size of the smallest feedback set (a set of vertices whose complement contains no directed cycles). Max gave exact values for the size of the image and the number of periodic points in terms of graph-theoretic parameters of the interaction graph, such as the maximum number of independent arcs, or the maximum number of vertices covered by disjoint cycles. (One of these equalities holds only for q > 2, and is just a bound in the case q = 2.)
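
As a sanity check of the first of these (on a toy network of my own devising, not one from the talk), here is a three-node Boolean system in which every directed cycle of the interaction graph passes through node 0; so {0} is a feedback set, τ = 1, and the bound allows at most q^τ = 2 fixed points.

    from itertools import product

    def f(state):
        x0, x1, x2 = state
        # node 0 reads nodes 1 and 2; node 1 reads node 0; node 2 reads nodes 0 and 1
        return (x1 ^ x2, x0, x0 & x1)

    fixed = [s for s in product((0, 1), repeat=3) if f(s) == s]
    print(len(fixed), "fixed point(s); bound q^tau =", 2 ** 1)
    assert len(fixed) <= 2                     # here there is in fact only one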

The next day, David Bevan gave the first of two nice talks on permutation patterns. (I have discussed these here also.) Briefly, a short permutation π is involved in a longer permutation σ if, when we remove some points and push everything down to the shorter interval, we obtain π. (In my preferred way of thinking about it, a permutation is a pair of total orders on a set; involvement is just the “induced substructure” relation.) A permutation of length n is k-prolific if the patterns of length n−k obtained by deleting k of its points are all distinct, so that it involves as many permutations of length n−k as possible. They (David and co-authors Cheyne Homberger and Bridget Tenner) have found that k-prolific permutations of length n exist if and only if n ≥ k^2/2+2k+1. He outlined the proof, which involves an interesting detour through packings of diamond-shaped tiles in the plane.
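
Both notions are easy to test by brute force for small permutations; the sketch below (my own) flattens subsequences to check involvement, and checks k-prolificity by asking whether the patterns obtained by deleting k points are pairwise distinct.

    from itertools import combinations
    from math import comb

    def flatten(seq):
        """Relabel a sequence of distinct values as a pattern on 1..len(seq)."""
        order = sorted(seq)
        return tuple(order.index(x) + 1 for x in seq)

    def involves(sigma, pi):
        """Is the pattern pi involved in sigma?  (Naive check over subsequences.)"""
        k = len(pi)
        return any(flatten([sigma[i] for i in pos]) == tuple(pi)
                   for pos in combinations(range(len(sigma)), k))

    def is_k_prolific(sigma, k):
        """Do all deletions of k points from sigma give pairwise distinct patterns?"""
        n = len(sigma)
        patterns = {flatten([sigma[i] for i in pos])
                    for pos in combinations(range(n), n - k)}
        return len(patterns) == comb(n, k)

    print(involves((3, 1, 4, 2), (2, 1)))    # True: e.g. the subsequence 3, 1
    print(is_k_prolific((2, 4, 1, 3), 1))    # True: the four 1-point deletions differ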

Anders Claesson gave us a number of different proofs that there are the same number of subsets of even and odd size of an n-set, for non-zero n. The proofs, of increasing complexity, involved manipulations of exponential generating functions, and in particular the use of sign-reversing involutions. He went on to further applications, including an explicit count for interval orders. He ended by remarking that species form a semiring, which can be extended to a ring of virtual species in the same way that the natural numbers are extended to the integers, and that these give a way of handling “negative objects”.
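
The simplest of these proofs can even be run as code. Here is a small sketch (my own): toggling a fixed element is a sign-reversing involution pairing the even-sized subsets with the odd-sized ones, so the signed sum over all subsets vanishes for n ≥ 1.

    from itertools import chain, combinations

    def subsets(n):
        items = range(1, n + 1)
        return chain.from_iterable(combinations(items, r) for r in range(n + 1))

    def toggle(subset):
        """The involution: add element 1 if absent, remove it if present."""
        return tuple(sorted(set(subset) ^ {1}))

    for n in (1, 2, 3, 5):
        signed_sum = sum((-1) ** len(s) for s in subsets(n))
        involution_ok = all(toggle(toggle(s)) == s and len(toggle(s)) % 2 != len(s) % 2
                            for s in subsets(n))
        print(n, signed_sum, involution_ok)   # signed sum 0, and the involution checks out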

Robert Brignall talked about the notorious problem of counting permutations excluding 1324. He quoted Doron Zeilberger: “Even God doesn’t know the value of Av_1000(1324)”. Not even the exponential growth constant is known, although simulations suggest it is around 11.6. Robert and his co-authors (David Bevan, Andrew Elvey Price and Jay Pantone) have improved the upper and lower bounds for this constant to 13.5 and 10.24 respectively. The ingenious proof encloses the diagram of such a permutation in a staircase made up of dominoes with similar structure, but is difficult to describe without pictures!
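
For what it is worth, here is a brute-force count (my own, feasible only for tiny n) of 1324-avoiding permutations, together with a crude n-th root; at this scale the root is still far below the conjectured growth rate.

    from itertools import combinations, permutations

    def contains_1324(sigma):
        for pos in combinations(range(len(sigma)), 4):
            a, b, c, d = (sigma[i] for i in pos)
            if a < c < b < d:       # the four entries form the pattern 1324
                return True
        return False

    for n in range(1, 9):
        count = sum(1 for p in permutations(range(1, n + 1)) if not contains_1324(p))
        print(n, count, round(count ** (1 / n), 3))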

The conference closed with a talk by László Babai. Laci is in St Andrews for a week, and will be giving five talks, including two ninety-minute talks at the British Colloquium for Theoretical Computer Science today and tomorrow. So I will defer my comments on his talks for a while …
