This week I was at a workshop on Graphs and Probability at Eurandom in Eindhoven. And yes, there was quite a bit of discussion of trees! Here are a couple of things that caught my eye (not the most significant results presented).
Benjamin Doerr discussed his work with Marvin Künnemann on a model of rumour spreading. This happens in discrete time steps. At the start, one person knows the rumour. At each time step, each person who knows the rumour calls a randomly chosen other person and tells them. How long until everyone knows? This was first considered by Frieze and Grimmett in 1985; the answer is log₂n + logₑn with some error, and subsequent analysis has involved improving bounds on the error. (I am happy to report that the proofs haven’t got monotonically longer as the results have improved.) But why the strange mixture of logs?
- In the early stages of the process, the number of people who know the rumour roughly doubles at each step, so after log₂n steps a substantial fraction of the population will be informed.
- In the late stages of the process, when nearly everybody knows the rumour, the probability that a given individual who doesn’t know it fails to learn it at the next round is about ((n−1)/n)ⁿ, which is about 1/e. So the number of people who don’t know the rumour shrinks by a factor about 1/e each time, and informing everyone will take about logₑn steps.
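Both regimes show up in a quick simulation. Here is a minimal Python sketch of the process as I understand it (the function name and choice of parameters are my own, not from the talk); the total number of rounds comes out close to log₂n + logₑn.

```python
import math
import random

def rumour_rounds(n):
    """One run of the process: each informed person calls a random other person."""
    informed = {0}          # person 0 starts with the rumour
    rounds = 0
    while len(informed) < n:
        newly = set()
        for p in informed:
            q = random.randrange(n - 1)  # pick uniformly among the other n-1 people
            if q >= p:
                q += 1                   # skip p itself
            newly.add(q)
        informed |= newly
        rounds += 1
    return rounds

n = 10_000
runs = [rumour_rounds(n) for _ in range(30)]
prediction = math.log2(n) + math.log(n)  # about 22.5 for n = 10000
print(sum(runs) / len(runs), prediction)
```

The average over a few runs lands within a few rounds of the prediction; the interesting mathematics is, of course, in pinning down exactly how the two regimes hand over to each other.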
The detailed arguments involve making these two statements precise and looking at the interface between the two regimes. Their result was that the error is a “constant” plus a tiny random variable; but the “constant” is not actually constant: it depends on the arithmetic structure of n (the variation is only about 10⁻¹¹).
Stefanie Gerke talked about a problem she’d worked on with Paul Balister. This concerned a theorem which had appeared in Nature on maximum matchings in bipartite graphs with degree constraints. In the Nature paper, the theorem had been “proved” by a heuristic, and “verified” by simulation. Stefanie showed us that the theorem was false, and gave us the correct version with a mathematical proof. (Part of the problem was that the simulations had been restricted to some very special cases where the true and false results happen to agree.)
Finally, a very small thing; so small that I forgot to record whose talk I learned it from. Suppose that a graph on n vertices is regular, with degree d. Then there is an independent set (one containing no edges) of size at least n/(1+d). This can be found by the following simple procedure: repeatedly choose a vertex, add it to the independent set being built, and delete it and its neighbours from the graph. Since each round deletes at most 1+d vertices, this runs for at least n/(1+d) rounds. But what do we do if the graph is not regular? Not at all obvious, at first sight.
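The deletion procedure is easy to try out. A small Python sketch (the representation of the graph as a dictionary of neighbour sets is my own choice), run on the cycle Cₙ, which is 2-regular:

```python
def greedy_independent_set(adj):
    """Repeatedly pick a remaining vertex, keep it, delete it and its neighbours."""
    remaining = set(adj)
    independent = []
    while remaining:
        v = remaining.pop()
        independent.append(v)
        remaining -= adj[v]   # each round deletes at most 1 + d(v) vertices
    return independent

# the cycle on n vertices: every vertex has degree 2
n = 30
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
ind = greedy_independent_set(adj)
print(len(ind))   # at least n/(1+2) = 10
```

No chosen vertex can be adjacent to a later one, since its neighbours are deleted the moment it is chosen; that is all the independence argument needs.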
It turns out that there is an independent set of size at least ∑1/(1+d(v)), where the sum is over all vertices, and d(v) is the degree of vertex v. The proof is very simple. Choose a random ordering of the vertices of the graph. The probability that a vertex v comes before all of its neighbours is clearly 1/(1+d(v)), and so the given sum is the expected number of vertices which come before all their neighbours. But these vertices form an independent set!
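The random-ordering argument translates directly into a few lines of code. This is an illustrative sketch (the graph and all names are invented for the example): shuffle the vertices, keep those that come before all their neighbours, and compare against the bound ∑1/(1+d(v)).

```python
import random

def before_all_neighbours(adj):
    """Vertices that precede all their neighbours in a random ordering."""
    order = list(adj)
    random.shuffle(order)
    position = {v: i for i, v in enumerate(order)}
    return [v for v in adj if all(position[v] < position[u] for u in adj[v])]

# a small irregular graph: a star with four leaves, plus a separate edge
adj = {
    0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0},
    5: {6}, 6: {5},
}
bound = sum(1 / (1 + len(adj[v])) for v in adj)  # 1/5 + 4*(1/2) + 2*(1/2) = 3.2
picked = before_all_neighbours(adj)
print(len(picked), bound)
```

Any single run may fall below the bound, but the expected size of `picked` is exactly ∑1/(1+d(v)), so some ordering must achieve it; and two adjacent vertices can never both precede each other, so the set is always independent.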