Multifractal analysis

Yesterday, the Edinburgh Mathematical Society met in St Andrews. Thomas Jordan gave a talk on multifractal analysis. I had no idea what this was, and went along without too much expectation of being enlightened. But Thomas began with a well-chosen example (dating from before the term “multifractal analysis” was invented) that gave the flavour very clearly. So here is a summary of the first ten minutes or so of the talk.

Any real number x in [0,1] has a base 2 expansion. The density of 1s in this expansion (the limit of the ratio of the number of 1s in the first n digits to n) need not exist; but, according to the Strong Law of Large Numbers, for almost all x, the limit exists, and is 1/2.

So it might seem perverse to consider, for arbitrary p, the set Xp of numbers for which the limit exists and is equal to p. But that is precisely what Besicovitch did, and he proved a nice theorem about the Hausdorff dimension of this set.

Briefly, given a subset of [0,1], take a cover of it by intervals of length at most δ, and take the sum of the sth powers of the lengths of the intervals. Now take the infimum of this quantity over all such coverings, and the limit of the result as δ tends to 0. The result is the s-dimensional Hausdorff measure of the set. There is a number s0 such that the measure is ∞ for s < s0, and is 0 for s > s0; for s = s0, it may take any value. This s0 is the Hausdorff dimension of the set.
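
In symbols (just restating the definition above, for a set E ⊆ [0,1] covered by intervals I_i):

\[
\mathcal{H}^s_\delta(E) = \inf\Bigl\{\sum_i |I_i|^s : E \subseteq \bigcup_i I_i,\ |I_i| \le \delta\Bigr\},
\qquad
\mathcal{H}^s(E) = \lim_{\delta\to 0}\mathcal{H}^s_\delta(E),
\]

and the Hausdorff dimension is the critical exponent

\[
\dim_H(E) = \inf\{s \ge 0 : \mathcal{H}^s(E) = 0\} = \sup\{s \ge 0 : \mathcal{H}^s(E) = \infty\}.
\]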

Besicovitch calculated the Hausdorff dimension of the sets Xp defined earlier. It turns out to be equal to the binary entropy function of p; this is H(p) = −p log p − (1−p) log(1−p), where the logarithms are to base 2. The function H(p) takes its maximum value 1 at p = 1/2; and 1-dimensional Hausdorff measure coincides with Lebesgue measure. So Besicovitch’s result handles this case correctly.

The entropy function suggests a relation to binomial coefficients and Stirling’s formula, which is indeed involved in the proof. (The logarithm of the binomial coefficient {n choose pn} is asymptotically nH(p), as follows easily from Stirling’s approximation for n!.)
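
For the record, here is that easy Stirling computation (all logarithms to base 2, with pn treated as an integer):

\[
\log\binom{n}{pn} = \log n! - \log (pn)! - \log\bigl((1-p)n\bigr)!,
\]

and since \(\log m! = m\log m - m\log e + O(\log m)\),

\[
\log\binom{n}{pn} = n\log n - pn\log(pn) - (1-p)n\log\bigl((1-p)n\bigr) + O(\log n) = nH(p) + O(\log n).
\]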

All this can be phrased in terms of the dynamics of the map 2x mod 1 on the unit interval (which acts as the left shift on the base 2 expansion), which suggests a good direction for generalisation, and suggests too that this generalisation will involve concepts from ergodic theory such as entropy and pressure. Most of the lecture was about this.

(Perhaps worth mentioning that all these sets are negligible in the sense of Baire category, according to which almost all real numbers have the property that the lim inf of the density is 0 and the lim sup is 1.)


3rd Scottish Combinatorics Meeting

This week we had the third in the series of Scottish Combinatorics Meetings. The series was begun by Kitty Meeks in Glasgow; she ran it for two years, and this year it moved to St Andrews. Fittingly, Kitty was invited to speak at this year’s meeting.

At the weekend we had two beautiful spring days; bluebells are out in force, and rarer plants such as water avens were putting on good displays. Monday morning dawned sunny, but by lunchtime the weather had reverted briefly to winter. Since everything was in the Mathematical Institute this didn’t matter too much.

Here are notes on just a few of the talks that struck me.

The opening lecture was by Simon Blackburn, who talked about network coding (a subject I knew a bit about, thanks to working with Søren Riis and Max Gadouleau at Queen Mary; I wrote about it here). Briefly, in a network through which some physical commodity flows, there can be bottlenecks; but if the network carries data, then giving the nodes some simple processing power can allow faster transmission. The famous butterfly network illustrates this; I started my account with it, and Simon also began with it.

There are n packets of information which have to be sent through the network to a number of sinks. Unlike the simple butterfly, real networks are large and complicated, and so a sink will not know which functions of the input packets it is receiving, and hence will not know how to recover the original messages. If (as we assume) the operations of intermediate nodes are linear, then all that the sinks know is that what they receive lies in the subspace of F^n spanned by the input data. So we have to shift our viewpoint, and regard the information being sent as a subspace of a vector space, rather than a specific collection of vectors.

The number of k-dimensional subspaces of an n-dimensional vector space over a field of order q is a Gaussian or q-binomial coefficient. As my Advanced Combinatorics students know, this is a monic polynomial in q of degree k(n−k). So, for large q, it is about q^(k(n−k)). This number is equal to the number of subspaces which are the row spaces of matrices of the form (I D), where I is the identity matrix of order k and D an arbitrary k×(n−k) matrix. So we can restrict attention to these. Whatever the sink receives is a matrix with the same row space; by performing row operations it can reduce it to the above form and recover the data matrix D.
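
As a quick numerical check of that degree count (a throwaway sketch of my own, using the standard product formula for the Gaussian coefficient):

```python
from math import prod

def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of an n-dimensional space over a field of order q."""
    return prod(q**(n - i) - 1 for i in range(k)) // prod(q**(i + 1) - 1 for i in range(k))

n, k = 7, 3
for q in (2, 3, 101):
    exact = gaussian_binomial(n, k, q)
    # the coefficient is monic of degree k(n-k), so this ratio tends to 1 as q grows
    print(q, exact, exact / q**(k * (n - k)))
```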

What if some nodes in the network are not working correctly? This could be either because of mechanical faults or the result of malicious hacking. If t nodes are affected, then the actual subspace received at the sink will be at distance t or less from the correct one, where the distance between two subspaces U and W is the maximum of dim(U)−dim(U∩W) and dim(W)−dim(U∩W). So we have a new setting in which to do error-correction, with subspaces replacing words and the above distance replacing Hamming distance. Simon explained the analogue of the Johnson bound for constant-dimension subspace codes, and some recent work of his with Tuvi Etzion and Jessica Claridge on this.
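
This distance is easy to compute from generator matrices, since dim(U∩W) = dim(U)+dim(W)−dim(U+W). Here is a small sketch over GF(2) (my own made-up matrices, not anything from the talk):

```python
def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    m = [list(r) for r in rows]
    rank, ncols = 0, len(m[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(len(m)):
            if i != rank and m[i][col]:
                m[i] = [(a + b) % 2 for a, b in zip(m[i], m[rank])]
        rank += 1
    return rank

def subspace_distance(U, W):
    """The metric above: max(dim U, dim W) - dim(U ∩ W),
    using dim(U ∩ W) = dim U + dim W - dim(U + W)."""
    dU, dW, dSum = rank_gf2(U), rank_gf2(W), rank_gf2(U + W)   # stacked matrix spans U + W
    return max(dU, dW) - (dU + dW - dSum)

U = [[1, 0, 0, 1], [0, 1, 0, 0]]      # generator matrices over GF(2)
W = [[1, 0, 0, 1], [0, 0, 1, 1]]
print(subspace_distance(U, W))        # 1: they share only the span of (1,0,0,1)
```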

Kitty Meeks talked about a new variant of graph colouring, interactive sum list colouring. In ordinary graph colouring we have a fixed set of k colours, and ask for the smallest k for which a given graph can be coloured. List colouring generalises this, by giving each vertex a list of k colours from a possibly much larger set, and asking for the smallest k for which the graph can be coloured whatever lists are assigned. Sum list colouring extends this further, by allowing the lists given to vertices to have different sizes, and minimising the sum of the list sizes. The new generalisation takes one more step, and is best thought of as a game between Alice, who is trying to colour the graph, and Bob, who is trying to thwart her. At each round Alice is allowed to ask for a new colour for a particular vertex, different from those already provided by Bob for that vertex; each request has a cost of 1, and she is trying to minimise (and Bob to maximise) the total cost. The corresponding colouring number is the cost if both Alice and Bob play optimally.

Let scn(G) and iscn(G) be the minimum sums required in the two cases. An easy argument shows that scn(G) is at most the sum of the numbers of vertices and edges of G; we call G sc-greedy if this bound is met. Many examples of sc-greedy graphs are known.

Not too much is known yet about iscn(G). It is an interesting exercise to show that the three-vertex path has iscn equal to 4. (Ask for a colour for each vertex first, and then consider possible cases.) This graph has scn equal to 5 (it is sc-greedy).

It is known that iscn(G) does not exceed scn(G), and is equal to it in the case of complete graphs; they (Kitty and her collaborator Marthe Bonamy) have shown that it is strictly smaller for all other graphs. Indeed, the difference between the two numbers is at least (n−ω(G))/2, where n is the number of vertices of G and ω(G) is its clique number.

The first day concluded with a talk by Max Gadouleau. As I said before, Max has done some good work on network coding, but in this talk he stepped back. Networks are studied in many areas of mathematics and science, and there is inevitably a certain amount of multiple discovery and of calling the same thing by distinct names.

Max actually talked about discrete dynamical systems. A dynamical system is just a set with a function on it, and we are interested in iterating the function. If the set has N elements, there are N^N functions, and the long-term behaviour is simple: there is a subset of the points on which the function acts as a permutation (the recurrent or periodic points), and every point arrives at a periodic point after a finite number of steps. Max was interested in making general statements about the number of fixed points, the number of periodic points, and the size of the image of the function.

The connection with networks arises thus. We are particularly interested in the case where there is a set V of n nodes, each of which can be in one of a finite number q of states; so we have N = q^n, and a function f from the set to itself can be regarded as n coordinate functions, each taking the n node states as arguments and returning a single state from the set [q] of states. Typically, in systems which arise in practice, these functions depend only on a small number of arguments; that is, the transition at a node only depends on the states of a few “neighbouring” nodes. We can represent this dependence by a directed graph (the interaction graph of the dynamical system), and so we are doing network theory in a very general way.
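
A minimal sketch of such a system (my own toy example with q = 2 and three nodes, not anything from the talk): write down the update rules, iterate, and read off the image, the periodic points and the fixed points.

```python
from itertools import product

# Each node's update rule depends only on a few "neighbouring" nodes:
# node 0 depends on nodes 1 and 2, node 1 on node 0, node 2 on node 1.
# These dependencies are the arcs of the interaction graph.
def f(state):
    x0, x1, x2 = state
    return (x1 & x2, x0, 1 - x1)

states = list(product((0, 1), repeat=3))          # N = q^n = 8 states
image = {f(s) for s in states}

# Periodic (recurrent) points: those that survive repeated application of f.
current = set(states)
for _ in range(len(states)):
    current = {f(s) for s in current}
periodic = current

fixed = [s for s in states if f(s) == s]
print("image size:", len(image))
print("periodic points:", sorted(periodic))
print("fixed points:", fixed)
```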

Apparently this has real meaning. Fixed points in a gene regulation system may, for example, correspond to different modes of functioning of the cell, and may include cancer.

One thing that one can do now is to ask about dynamical systems with a given interaction graph. Another is to fix the graph and vary q, or restrict the kind of functions allowed (linear, monotone, threshold, etc.). Yet again we could have different rules for the updates. Max considered only the case where all vertices update simultaneously.

One of the few things known is the analogue of the Singleton bound in coding theory: the number of fixed points is at most q^τ, where τ is the size of the smallest feedback vertex set (a set of vertices whose complement contains no directed cycles). Max gave exact values for the size of the image and the number of periodic points in terms of graph-theoretic parameters of the interaction graph, such as the maximum number of independent arcs, or the maximum number of vertices covered by disjoint cycles. (One of these equalities holds only for q > 2, and is just a bound in the case q = 2.)

The next day, David Bevan gave the first of two nice talks on permutation patterns. (I have discussed these here also.) Briefly, a short permutation π is involved in a longer permutation σ if, when we remove some points of σ and push everything down to the shorter interval, we obtain π. (In my preferred way of thinking about it, a permutation is a pair of total orders on a set; involvement is just the “induced substructure” relation.) A permutation of length n is k-prolific if the patterns of length n−k obtained by deleting k of its points are all distinct; that is, different (n−k)-point subsets always induce different permutations. They (David and co-authors Cheyne Homberger and Bridget Tenner) have found that k-prolific permutations of length n exist if and only if n ≥ k²/2+2k+1. He outlined the proof, which involves an interesting detour through packings of diamond-shaped tiles in the plane.
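
A brute-force sketch of both notions (my own illustration, nothing from the talk): flatten a subsequence to the pattern it forms, test involvement, and test the k-prolific property as described above.

```python
from itertools import combinations
from math import comb

def pattern(points):
    """Flatten a subsequence of distinct values to the permutation pattern it forms."""
    ranks = {v: i for i, v in enumerate(sorted(points))}
    return tuple(ranks[v] for v in points)

def involves(sigma, pi):
    """Is the short permutation pi involved in sigma?"""
    return any(pattern(sub) == tuple(pi) for sub in combinations(sigma, len(pi)))

def is_k_prolific(sigma, k):
    """Do all (n-k)-point subsets of sigma induce distinct patterns?"""
    n = len(sigma)
    pats = {pattern(sub) for sub in combinations(sigma, n - k)}
    return len(pats) == comb(n, k)

print(involves((2, 4, 1, 3, 0), (1, 0)))     # True: e.g. the pair 4, 1 forms the pattern 1 0
print(is_k_prolific((0, 1, 2, 3), 1))        # False: deleting 0 or deleting 1 gives the same pattern
```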

Anders Claesson gave us a number of different proofs that there are the same number of subsets of even and odd size of an n-set for non-zero n. The proofs, of increasing complexity, involved manipulations of exponential generating functions, and in particular the use of sign-reversing involutions. He went on to further applications, including an explicit count for interval orders. He ended up by remarking that species form a semiring, which can be extended to a ring of virtual species in the same way that the natural numbers are extended to the integers, and that these give a way of handling “negative objects”.
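
The simplest sign-reversing involution is worth recording (a minimal sketch in my own notation, not Anders’s): toggling whether the smallest element of the ground set belongs to the subset pairs each even subset with an odd one, so the signed count is zero.

```python
from itertools import chain, combinations

def subsets(n):
    """All subsets of {0, 1, ..., n-1}, as sorted tuples."""
    return chain.from_iterable(combinations(range(n), k) for k in range(n + 1))

def toggle_min(A):
    """Sign-reversing involution: add or remove the element 0."""
    return tuple(sorted(set(A) ^ {0}))

n = 5
print(sum((-1) ** len(A) for A in subsets(n)))     # 0: equally many even and odd subsets
assert all(toggle_min(toggle_min(A)) == tuple(sorted(A)) for A in subsets(n))   # an involution
assert all(len(toggle_min(A)) % 2 != len(A) % 2 for A in subsets(n))            # parity-reversing
```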

Robert Brignall talked about the notorious problem of counting permutations avoiding 1324. He quoted Doron Zeilberger: “Even God doesn’t know the value of Av1000(1324)”, the number of 1324-avoiding permutations of length 1000. Not even the exponential growth constant is known, although simulations suggest it is around 11.6. Robert and his co-authors (David Bevan, Andrew Elvey Price and Jay Pantone) have improved the upper and lower bounds for this constant to 13.5 and 10.24 respectively. The ingenious proof encloses the diagram of such a permutation in a staircase made up of dominoes with similar structure, but is difficult to describe without pictures!

The conference closed with a talk by Laszlo Babai. Laci is in St Andrews for a week, and will be giving five talks, including two ninety-minute talks at the British Colloquium on Theoretical Computer Science today and tomorrow. So I will defer my comments on his talks for a while …


Sum-free sets

This semester, Han Yu has run a reading group on Arithmetic Combinatorics. Mostly we have been following the excellent notes by Tuomas Orponen. I gave two talks. The first was a proof of the result of Ellenberg and Gijswijt on cap-sets over GF(3). The paper is just over 3 pages long, so no additional notes are necessary; in any case, I discussed it here last year after hearing Ben Green’s talk in Singapore.

My second talk was on sum-free sets, and was rather more discursive, so here is a summary.

The context

A set of natural numbers is k-AP-free if it contains no k-term arithmetic progression, and is sum-free if it contains no solution to x+y = z with x, y, z in the set (x and y not necessarily distinct).

Three big theorems in additive combinatorics are:

Theorem 1 (van der Waerden) For a natural number k > 2, the set N cannot be partitioned into finitely many k-AP-free sets.

Theorem 2 (Roth–Szemerédi) For a natural number k > 2, a k-AP-free set has density zero.

Theorem 3 (Schur) The set N cannot be partitioned into finitely many sum-free sets.

At first sight, one would like a theorem to complete the pattern, asserting that a sum-free set has density zero. But this is false, since the set of all odd numbers is sum-free and has density 1/2. What follows could be motivated as an attempt to find a replacement for the missing fourth theorem.

A bijection with Cantor space

The Cantor space C can be represented as the set of all (countable) sequences of zeros and ones. It carries the structure of a complete metric space (the distance between two sequences is a monotonic decreasing function of the index of the first position where they differ) and of a probability space (corresponding to a countable sequence of independent tosses of a fair coin).

We define a bijection between Cantor space and the set S of all sum-free subsets of N. Given a sequence x in C, we construct S as follows:

Consider the natural numbers in turn. When considering n, if n is the sum of two elements already put in S, then of course n is not in S. Otherwise, look at the first unused element of x; if it is 1, then put n into S, otherwise, leave n out of S. Delete this element of the sequence and continue.

For example, suppose that x = 10110…

  • The first element of x is 1, so 1 ∈ S.
  • 2=1+1, so 2 is not in S.
  • 3 is not in S+S; the next element of x is 0, so 3 is not in S.
  • 4 is not in S+S; the next element of x is 1, so 4 is in S.
  • 5=1+4, so 5 is not in S.
  • 6 is not in S+S; the next element of x is 1, so 6 is in S.

So S = {1,4,6,…}.
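
A few lines of Python make the construction concrete (a throwaway sketch of my own; it simply stops when the supplied prefix of the sequence is used up):

```python
def sum_free_from_bits(bits, limit):
    """Map a 0/1 sequence to (an initial segment of) the corresponding sum-free set."""
    S, pos = [], 0
    for n in range(1, limit + 1):
        if any(n - s in S for s in S if 2 * s <= n):
            continue                      # n is already forced out: n = x + y with x, y in S
        if pos == len(bits):
            break                         # prefix of the sequence exhausted
        if bits[pos]:
            S.append(n)
        pos += 1
    return S

print(sum_free_from_bits([1, 0, 1, 1, 0], 10))   # [1, 4, 6], as in the example above
```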

Baire category

The notion of “almost all” in a complete metric space is captured by residual sets; a set is residual if it contains a countable intersection of dense open sets. Thus, residual sets are non-empty (by the Baire Category Theorem); any countable collection of residual sets has non-empty intersection; a residual set meets every non-empty open set; and so on.

A sum-free set is called sf-universal if everything which is not forbidden actually occurs. Precisely, S is sf-universal if, for every n and every subset A of {1,…,n}, one of the following occurs:

  • there are i, j ∈ A with i < j and j−i ∈ S;
  • there exists N such that S ∩ [N+1,…,N+n] = N+A,

where N+A = {N+a : a ∈ A}.

Theorem The set of sf-universal sets is residual in S.

Theorem (Schoen) An sf-universal set has density zero.

Thus our “missing fourth theorem” asserts that almost all sum-free sets (in the sense of Baire category) have density zero.

There is a nice application. Let S be an arbitrary subset of N. We define the Cayley graph Cay(Z,S) to have vertex set Z, with x joined to y if and only if |y−x|∈S. Note that this graph admits the group Z acting as a shift automorphism on the vertex set.

Theorem

  • Cay(Z,S) is triangle-free if and only if S is sum-free.
  • Cay(Z,S) is isomorphic to Henson’s universal homogeneous triangle-free graph if and only if S is sf-universal.

So Henson’s graph has uncountably many non-conjugate shift automorphisms.
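
A finite-window sanity check of the first part (a rough sketch of my own; it inspects only the vertices 0,…,m, so it can exhibit triangles but cannot prove their absence):

```python
from itertools import combinations

def has_triangle(S, m):
    """Look for a triangle in Cay(Z, S) among the vertices 0, ..., m."""
    adj = lambda x, y: abs(x - y) in S
    return any(adj(a, b) and adj(b, c) and adj(a, c)
               for a, b, c in combinations(range(m + 1), 3))

sum_free = {1, 4, 6}                 # sum-free, so no triangle should appear
not_sum_free = {1, 4, 5}             # 1 + 4 = 5, giving the triangle 0, 1, 5
print(has_triangle(sum_free, 50), has_triangle(not_sum_free, 50))   # False True
```

Indeed, a triangle a < b < c has b−a, c−b and c−a all in S, and (b−a)+(c−b) = c−a is exactly a forbidden sum; that is the whole proof of the first part.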

Measure

In a probability space, large sets are those which have measure 1, that is, complements of null sets. Just as for Baire category, these have the properties one would expect: the intersection of countably many sets of measure 1 has measure 1; a set of measure 1 intersects every set of positive measure; and so on.

The first surprise is that measure and category give entirely different answers to what a typical set looks like:

Conjecture The set of sf-universal sets has measure zero.

Although this is not proved yet (to my knowledge), it is certain that this set does not have measure 1.

Given the measure on S, and our interest in density, it is natural to ask about the density of a random sum-free set. This can be investigated empirically by computing many large sum-free sets and plotting their density. Here is the rather surprising result.
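
An experiment of this kind produces the plot below (a rough sketch under my own conventions: generate the coin flips on the fly up to a cutoff and record the density of each resulting set):

```python
import random

def random_sum_free(limit):
    """One sample of a random sum-free set, restricted to [1, limit]."""
    S = set()
    for n in range(1, limit + 1):
        if any(n - s in S for s in S if 2 * s <= n):
            continue                       # n is a sum of two elements of S
        if random.random() < 0.5:          # toss the fair coin
            S.add(n)
    return S

limit, samples = 1000, 200
densities = [len(random_sum_free(limit)) / limit for _ in range(samples)]

# crude text histogram of the empirical density spectrum
for i in range(30):
    lo = i / 100
    count = sum(lo <= d < lo + 0.01 for d in densities)
    print(f"{lo:.2f} {'#' * count}")
```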

[Figure: density spectrum of random sum-free sets]

The spike on the right corresponds to density 1/4 and is explained by the following theorem.

Theorem

  • The probability that a random sum-free set consists entirely of odd numbers is about 0.218 (in particular is non-zero).
  • Conditioned on this, the density of a random sum-free set is almost surely 1/4.

A word about this theorem. Suppose that we have tossed the coin many times and found no even numbers. Then we have chosen the odd numbers approximately independently, so the next even number is very likely to be excluded as the sum of two odd numbers, whereas the next odd number is still completely free. So the probability of no even numbers is not much less than the probability of no even numbers in the first N coin tosses. Moreover, since the odd numbers are approximately independent, we expect to see about half of them, and so about a quarter of all numbers.

Other pieces can also be identified. Let Z/nZ denote the integers modulo n. We can define the notion of a sum-free set in Z/nZ in the obvious way. Such a sum-free set T is said to be complete if, for every z in (Z/nZ)\T, there exist x, y in T such that x+y = z in Z/nZ. Now the theorem above extends as follows. Let S(n,T) denote the set of all sum-free sets which are contained in the union of the congruence classes t (mod n) for t ∈ T.

Theorem Let T be a sum-free set in Z/nZ.

  • The probability of S(n,T) is non-zero if and only if T is complete.
  • If T is complete then, conditioned on S ∈ S(n,T), the density of S is almost surely |T|/2n.

In the figure it is possible to see “spectral lines” corresponding to the complete modular sum-free sets {2,3} (mod 5) and {1,4} (mod 5) (at density 1/5), and {3,4,5} (mod 8) and {1,4,7} (mod 8) (at density 3/16).
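
These examples are easy to verify by machine (a small sketch; “complete sum-free in Z/nZ” exactly as defined above, with the odd numbers appearing as the case T = {1} mod 2):

```python
def is_complete_sum_free(T, n):
    """Is T a complete sum-free set in Z/nZ?"""
    T = {t % n for t in T}
    sums = {(x + y) % n for x in T for y in T}
    sum_free = sums.isdisjoint(T)
    complete = all(z in sums for z in range(n) if z not in T)
    return sum_free and complete

for T, n in [({1}, 2), ({2, 3}, 5), ({1, 4}, 5), ({3, 4, 5}, 8), ({1, 4, 7}, 8), ({1, 2}, 5)]:
    print(sorted(T), "mod", n, is_complete_sum_free(T, n), "density", len(T) / (2 * n))
```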

The density spectrum appears to be discrete above 1/6, and there is some evidence that this is so. However, a recent paper of Haviv and Levy shows the following result.

Theorem The values of |T|/2n for complete sum-free sets T in Z/nZ are dense in [0,1/6].

However, this is not the end of the story. Neil Calkin and I showed that the event that 2 is the only even number in S has positive (though rather small) probability. More generally,

Theorem Let A be a finite set and T a complete sum-free set modulo n. Then the event A ⊆ S ⊆ A∪(T (mod n)) has positive probability.

Question Is it true that a random sum-free set almost surely has a density? Is it further true that the density spectrum is discrete above 1/6 but has a continuous part below 1/6? Is it also true that the probability that a sum-free set has zero density is 0?

In this connection, consider the following construction of Calkin and Erdős. Let α be an irrational number, and define S(α) to be the set of natural numbers n for which the fractional part of nα lies between 1/3 and 2/3. It is easy to see that S(α) is sum-free and has density 1/3. However this does not resolve the question, since the event S ⊆ S(α) for some α has probability zero. Still, there might be other examples along these lines …
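
One can play with the construction directly (a quick sketch, taking α = √2; the checks are only on an initial segment, of course):

```python
from math import sqrt

def S_alpha(alpha, limit):
    """Calkin–Erdős set: the n with fractional part of n*alpha in (1/3, 2/3)."""
    return [n for n in range(1, limit + 1) if 1/3 < (n * alpha) % 1 < 2/3]

S = S_alpha(sqrt(2), 3000)
sums = {x + y for x in S for y in S}
print(sums.isdisjoint(S))          # True: no x + y = z within the segment
print(len(S) / 3000)               # close to 1/3
```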

Ultimate periodicity

A sequence x is ultimately periodic if there exist positive integers n and k such that x_{i+k} = x_i for all i ≥ n. Analogously, a sum-free set S is ultimately periodic if (i ∈ S) ⇔ (i+k ∈ S) for all i ≥ n.

It is easy to see that, in our bijection from sequences to sum-free sets, a sequence which maps to an ultimately periodic sum-free set must itself be ultimately periodic. What about the converse?

Question Is it true that the image of an ultimately periodic sequence is an ultimately periodic sum-free set?

After some investigation, Neil Calkin and I conjectured that the answer is “no”. There are some ultimately periodic sequences (the simplest being 01010 repeated) for which no sign of periodicity has been detected despite computing nearly a million terms. These sets are fascinating, and seem sometimes to exhibit an “almost periodic” structure; they settle into a period, which is then broken and a longer period established, and so on.
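
The experiment is easy to set up (a rough sketch of my own: build the set from the repeated block 01010 using the map above, then look for a period that holds on a tail of its characteristic sequence):

```python
def sum_free_from_periodic(block, limit):
    """Apply the bits-to-sum-free-set map with the bit sequence = block repeated forever."""
    S, used = set(), 0
    for n in range(1, limit + 1):
        if any(n - s in S for s in S if 2 * s <= n):
            continue
        if block[used % len(block)]:
            S.add(n)
        used += 1
    return S

limit = 10000
S = sum_free_from_periodic([0, 1, 0, 1, 0], limit)
chi = [int(n in S) for n in range(1, limit + 1)]      # characteristic sequence of S

# look for a period k that holds on the second half of the range
found = [k for k in range(1, 1000)
         if all(chi[i] == chi[i + k] for i in range(limit // 2, limit - k))]
print(found if found else "no period below 1000 detected on this range")
```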


A message

Last week my daughter and her family came for a short holiday.

We went to East Sands where the children wrote messages on driftwood and threw them into the sea. Looking for replies, they found this:

HIC

Is it a message from departed spirits, or did the ancient Romans come here?


Oligomorphic Permutation Groups

In 1988, there was an LMS Durham symposium on model theory and groups. I had been developing the theory of oligomorphic permutation groups for some time: these are the permutation groups G on Ω with the property that the number of G-orbits on Ω^n is finite for all natural numbers n. I spoke about these at the meeting. Subsequently it turned out that there would be no proceedings of the meeting; so, when I wrote up the notes of my lectures, I felt free to incorporate interesting bits from other speakers’ talks into my own account. In this way it became a sort of de facto proceedings.

It was written in rather a hurry, and it shows. But when I took my copy down from the shelf last week, I found in it a few notes about possible changes that might be made in a second edition, if there were ever to be one. Needless to say, there was no second edition, and now we have reached the point where the subject has moved on in some directions (much less so in others) and rather than a second edition there would need to be a complete rewrite.

I have returned to the topic on several occasions, most recently for the proceedings of Groups St Andrews 2005. I have not, however, produced a monograph-length account, and maybe never will. In lieu of such a thing, I have decided to post my rather brief notes for the revision with the oddments on the Lecture Notes page. If you look at them, you may be disappointed (as I was) at how modest my ambitions were back then (but it was only shortly after the original was published).

In brief: an oligomorphic group should probably be regarded as a species, and the numbers of orbits on ordered n-tuples of distinct elements and on n-element subsets of the domain coincide with the sequences counting, respectively, labelled and unlabelled structures in the species. Moreover, the “modified cycle index” of an oligomorphic group is the cycle index of the species.

What distinguishes the species of oligomorphic groups from the general case? In terms of the counting sequences above, two special features of the group case are that the sequences grow rapidly and smoothly. On the first point, the best result is still Macpherson’s Theorem, according to which, if G is primitive (that is, it is transitive and preserves no non-trivial equivalence relation), then either the number of orbits on n-sets is 1 for all n (the group is highly homogeneous), or the number grows at least exponentially. What is lacking to date is a proof of the conjecture that the exponential constant cannot be smaller than 2. The best known result is by Francesca Merola, giving a lower bound of about 1.324.

On smoothness, it is known that the sequence counting orbits on n-sets is the Hilbert series of a graded algebra, and the most significant result is that of Maurice Pouzet: if the group G has no finite orbits, then the algebra has no zero-divisors. (It takes a little imagination to see this as a smoothness result for the rate of growth, but it can be done.)

So these are some of the additions which I would now put on a list, in the hope that sometime in the future I might get round to it …


ZFC is consistent!

A large cardinal

Here is a small model of a large cardinal, from the Albertina Gallery in Vienna. His existence shows that ZFC is consistent.


The march of truth

You can find a thing I wrote about this on the Institute of Art and Ideas website at https://iainews.iai.tv/articles/the-march-of-truth-auid-788, as part of a “conversation” on The Limits of Reason.
