Where does randomness come from?

In Agenda, the “what’s on in Cambridgeshire” magazine for June 2010, there is an article about Improbable Games, an event created by David Spiegelhalter, the Winton Professor of the Public Understanding of Risk in the University of Cambridge. What a pity I missed it by more than a year! (It was David who programmed the “monkeys and typewriters” for the BBC Horizon programme To Infinity and Beyond last year.)

I once read the opinion that, if the human lifespan were a million years, nobody would ever cross the road: it would be far too dangerous! A nice way of saying that, even if probabilities are in some sense absolute, our understanding of them is relative.

Anyway, to the topic. Why is the world unpredictable? What is randomness, and where does it come from?

I think that, in all but rather extreme cases, randomness is simply ignorance. You toss a coin; the only reason I cannot reliably guess whether it will come down heads or tails is that I lack detailed information about its starting position and about the momentum and angular momentum imparted to it by the toss. The work of Persi Diaconis, Susan Holmes and Richard Montgomery on coin tossing is well worth reading. When I was a student, one of my lecturers could toss a coin so that the axis perpendicular to the plane of the coin precessed about the vertical; without very close inspection it was difficult to see that the coin was not spinning conventionally. A useful trick!
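
To make the “ignorance” point concrete, here is a toy sketch (my own, and far cruder than the Diaconis–Holmes–Montgomery analysis): model the toss as a coin spinning at a fixed rate for a fixed flight time, so the outcome is completely determined by those two numbers. A difference of a few milliseconds in the flight time is enough to change the face that shows, and that is exactly the information the person guessing does not have.

    import math

    def toss_outcome(omega, t, start_heads_up=True):
        # Deterministic toy model: a coin spinning at angular velocity omega
        # (radians per second) for a flight time of t seconds.  The face showing
        # at the end depends only on the number of completed half-turns.
        half_turns = int((omega * t) / math.pi)
        flipped = (half_turns % 2 == 1)
        return "heads" if (start_heads_up != flipped) else "tails"

    print(toss_outcome(omega=125.0, t=0.500))   # tails (19 half-turns)
    print(toss_outcome(omega=125.0, t=0.504))   # heads (20 half-turns): only 4 ms longer in the air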

Indeed, teaching probability to first-year students led me to the view that “all probability is conditional”: the probability I assign to an event depends on what I already know (and what I know may itself amount to no more than probabilities of other events). When I learn more, I update my probabilities according to Bayes’ Theorem. (Well, I probably don’t actually do that – humans are not very good at thinking about probability – but I know that this is what I should be doing.)
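
As a hypothetical worked example (mine, not from the post): suppose I am unsure whether a coin is fair or double-headed, with no reason to favour either, and I then watch it land heads three times. Bayes’ Theorem turns the 50:50 prior into a posterior of about 8:1 in favour of the double-headed coin:

    # Hypothetical example: is this coin fair or double-headed?
    # Prior belief is 50:50; we then observe three heads in a row.
    prior = {"fair": 0.5, "double-headed": 0.5}
    likelihood = {"fair": 0.5 ** 3, "double-headed": 1.0}    # P(three heads | hypothesis)

    evidence = sum(prior[h] * likelihood[h] for h in prior)  # P(three heads)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
    print(posterior)   # {'fair': 0.111..., 'double-headed': 0.888...}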

But two developments in twentieth-century science suggest that there is some randomness in the universe which is not just our ignorance. Laplace’s view that an omniscient being who knew the positions and momenta of everything in the universe at one moment could predict the state of the universe in the indefinite future has been challenged, first by quantum mechanics, and then by chaos theory.

Quantum mechanics is a completely deterministic theory: the state of something (for example, the universe) is described by a wave function, which evolves in time according to Schrödinger’s equation. So how does randomness come in? If we apply the principles of quantum mechanics to a system which we observe from the outside, then a mysterious “collapse” of the wave function occurs to an eigenfunction of the linear operator associated with the measurement we make (and the value we measure is the corresponding eigenvalue). There is no way to predict in advance which eigenfunction the system will “choose”; we can only assign probabilities to them. Einstein famously thought that this was tantamount to “God playing dice”, and rejected it on these and other grounds.
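
For concreteness, here is a minimal sketch (my own toy example, not anything from the post) of where those probabilities come from in the textbook formalism: expand the state in the eigenvectors of the measured operator, and the probability of obtaining each eigenvalue is the squared magnitude of the corresponding coefficient (the Born rule). The formalism gives the probabilities, but is silent about which outcome a single measurement will actually produce.

    import numpy as np

    # A two-level system ("qubit") in a superposition; the state and the
    # observable below are arbitrary illustrative choices.
    state = np.array([1.0, 1.0j]) / np.sqrt(2)      # normalised wave function
    observable = np.array([[0.0, 1.0],
                           [1.0, 0.0]])             # Hermitian operator being measured

    eigenvalues, eigenvectors = np.linalg.eigh(observable)

    # Born rule: P(outcome k) = |<eigenvector_k | state>|^2
    amplitudes = eigenvectors.conj().T @ state
    probabilities = np.abs(amplitudes) ** 2

    for value, p in zip(eigenvalues, probabilities):
        print(f"eigenvalue {value:+.0f} observed with probability {p:.2f}")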

If we could apply the principles of quantum mechanics to the entire universe, then this source of randomness would disappear, since observation from outside would not be possible. This is seen as a possible way to reconcile quantum theory with relativity. But it has not been carried through successfully yet.

An intermediate stage, where a quantum effect has a macroscopic consequence, was proposed by Schrödinger, in the famous “Schrödinger’s cat” thought experiment. If the quantum system remains in a superposition of states until observed, and if it has macroscopic effects, then we are faced with a difficult philosophical problem: what exactly constitutes an observation? Can the cat be an observer of its own fate?

I don’t know if the experiment has ever been performed. But instances in the real world where quantum effects have macroscopic consequences are very difficult to imagine. (Roger Penrose has proposed that this happens in microtubules in our brains, as a possible way in which free will could be introduced into the universe; but his proposal is not generally accepted.)

In my view, chaos theory poses an even more serious philosophical dilemma. As far as I know, all models of chaos theory which could be observed in the world depend on assuming that the world is modelled by real numbers; and a single real number is an infinite object, in the sense that we need to know infinitely many bits of information to specify it uniquely. Laplace’s omniscient being needs to have an infinite amount of information to know even one coordinate of the position of one object. We assume that the world is governed by deterministic differential equations; if we can solve them analytically, then all is well and good, but if we are forced to resort to numerical calculation, then rounding errors will necessarily occur, and chaos is the principle that such errors (no matter how small) can sometimes grow exponentially and completely swamp the accuracy of the calculation after a finite time.
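
A standard toy illustration (not one from the post) is the logistic map x -> 4x(1-x): start two trajectories an almost imperceptible distance apart, and the gap roughly doubles at each step until the trajectories have nothing to do with one another.

    # Sensitive dependence on initial conditions in the logistic map
    # x -> 4x(1 - x): two starting points differing by 10^(-10).
    def logistic(x):
        return 4.0 * x * (1.0 - x)

    x, y = 0.2, 0.2 + 1e-10
    for step in range(1, 61):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print(f"step {step:2d}:  x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.1e}")
    # By about step 35 the initial difference of 10^(-10) has grown to order 1,
    # and the two trajectories are effectively unrelated.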

It is claimed, for example by Ian Stewart in his book Does God play dice, that this makes the universe fundamentally unpredictable in a theoretical, rather than just practical, sense.

However, the principal difficulty I see is not whether, with bigger and bigger computers and more and more detailed observations, we can get more and more accurate predictions of the weather. It is whether the universe, at bottom, is continuous or discrete. If there is a lower limit on distances in the universe (not merely on our ability to measure them), then measurement to that limit cannot lead to chaotic results, since there will be no smaller errors to propagate.

Some modern theories of the universe, such as loop quantum gravity and Rafael Sorkin’s causal sets, propose that the universe is discrete, and only appears continuous to us because its natural scale is far too small for us to perceive. If this is so, then we can return to the idea that randomness is ignorance rather than any in-built physical limitation of the universe. Perhaps Einstein was right about this.

Remarkably, there is some evidence. Loop quantum gravity (along with some other theories such as noncommutative geometry) predicts that electromagnetic radiation does not all travel at the same speed: when we observe an extremely distant event, radiation of different frequencies should reach us at slightly different times. This phenomenon has now been observed.


13 Responses to Where does randomness come from?

  1. Anthony says:

    Nice post!
    Do you have a reference for the observation you mention at the end concerning loop quantum gravity?
    Thanks!

    • I don’t recall exactly but will try to find it and let you know.

      For noncommutative geometry, this information comes from my colleague Shahn Majid, who featured in a New Scientist article a couple of weeks ago.

  2. Pingback: Reimann, average calculations for cryptanalysis | Peter's ruminations

  3. The New Scientist article can be found at http://www.newscientist.com/article/mg21128241.700-beyond-spacetime-welcome-to-phase-space.html?full=true

    As I noted, it refers to curved momentum space, which is claimed to be a feature of noncommutative geometry.

    The connection to loop quantum gravity is described in a New Scientist article at
    http://www.newscientist.com/article/mg20327210.900-late-light-reveals-what-space-is-made-of.html?full=true

  4. Anthony says:

    Ok! Thanks a lot for the references!

  5. Ralph Dratman says:

    You suggest that if the universe turns out to be discrete, we can go back to assigning uncertainty (“randomness”) to ignorance, rather than to theory. Two points: first, that would be true only if the discrete laws turn out to be deterministic. They too could be probabilistic in nature. Second, you would still have all the experimental evidence from quantum mechanics to cope with, indicating that entangled particles behave as though they had knowledge of events outside their causal light-cone. Unless your discrete theory fundamentally re-maps the geometry of space and time, you would still be faced with that obstacle to determinism.

  6. Yes, I agree. I expressed myself too strongly there.

  7. Pingback: Weekly Picks « Mathblogging.org — the Blog

  8. Jochen says:

    Even in a completely deterministic universe, Laplace’s demon would still face some problems in predicting the behaviour of sufficiently complex systems, since such prediction is in general equivalent to solving the halting problem. This is easy to see if, as your system, you take, say, a computer: the question of whether it will evolve to a certain state is then equivalent to the question of whether or not it will halt. Nor is this a pathology confined to a small class of systems (computers); it applies to any system one could in principle build a computer out of, or view as being capable of universal computation, which includes even rather simple systems.

    Somewhat related to that, there’s an ‘irreducible randomness’ even in pure mathematics: for instance, in the digits of the binary expansion of the probability that a given universal Turing machine, supplied with a random program, will eventually halt (Chaitin’s constant Ω). Any given formal system can derive only finitely many bits of this expansion, meaning that the rest have their values ‘without reason’, randomly. Of course, this is really just a restatement of the unsolvability of the halting problem, as knowing Ω would enable one to solve it (though in a very inefficient way).

    So, even in a deterministic, computable universe, there’s unpredictability and randomness: take as a simple example the Game of Life cellular automaton, which is capable of universal computation, and about whose evolution there therefore exist formally undecidable questions.

    (And not to nit-pick, but I think wrt Lorentz violation, newer data from the FERMI telescope sees no such effect, and in fact puts rather stringent constraints on its existence. However, I also understand that it’s not entirely clear whether LQG actually leads to Lorentz violation…)

  9. Laplace’s demon is not to be thought of as bound by either the physical or the logical constraints of computability. It can instantly work out the acceleration on any particle due to Newtonian forces from every other particle in the universe, and integrate them.

    By the way, I attended a talk once in which the speaker proposed that general relativity could be used to compute non-computable things, such as the consistency of Peano arithmetic. All(!) you have to do is construct a space-time containing an infinite time-like geodesic lying in the past of some point P (such things exist, I believe), and send a Turing machine along this geodesic deriving all consequences of the Peano axioms. If it ever derives a contradiction, it sends a signal. So if no signal is received at P, we know that Peano arithmetic is consistent.

    Of course, in practice, a Turing machine has an infinite tape, hence presumably infinite mass; not only would it distort the carefully-constructed space-time, but it would presumably instantly collapse into a black hole … They regarded their work as of theoretical interest only!

  10. Jochen says:

    Hmm, but what’s the meaning of prediction if there’s no physically possible device capable of carrying out that prediction? Seems like you might as well endow it with God-level omniscience, then — which of course is something one might talk about, but it’s difficult to see how one might extract anything of relevance to the physical universe out of it.

    You’re right that there are proposals for physically possible hypercomputation, though; all of which (well, those I’m familiar with, anyhow!) I think suffer from severe problems — but of course, that doesn’t mean that you can’t come up with something that works. I’m personally not a fan of the possibility of nature being non-computable, since this pretty much implies to me that it is incomprehensible, as well, but of course, I’ve got no business telling nature how to behave!

  11. In my view, it is first and foremost a question of principle: if Laplacian determinism is true, then God can’t change the predetermined outcome of any future event without breaking one of his laws.

    There is also the point that, if prediction is theoretically possible, there is at least a chance that approximate prediction in the short term is practically possible, i.e. weather forecasts are not a complete waste of time.

  12. Re-reading Lee Smolin’s book Three roads to quantum gravity reminded me that there is another source of randomness (though maybe not one that affects our everyday lives much), namely black holes.

    In “empty space”, according to quantum theory, pairs of virtual particles appear and disappear on very short timescales. If such a pair appears very close to the event horizon of a black hole, one particle may cross the event horizon and be unable to annihilate its partner, which is “promoted” to being a real particle. But since the two particles are entangled, a complete description of the real particle involves knowledge of its partner, which is now inaccessible. Our ignorance is mathematically an increase in entropy outside the black hole, or in other words, randomness. (A toy calculation of this sort of entropy is sketched at the end of this comment.)

    Smolin’s book also reminded me that information is observer-dependent. If I fall into the black hole, on the way in I will have information about the other particle which is not accessible to you on the outside.
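
    Here is the toy calculation promised above (my own sketch, not Smolin’s): a maximally entangled pair of two-level particles. The joint state is pure, but once one particle is out of reach, the best description of the remaining one is obtained by tracing out its partner, and what is left is a mixed state carrying one full bit of entropy.

        import numpy as np

        # Maximally entangled pair of two-level particles: (|00> + |11>)/sqrt(2).
        bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
        rho = np.outer(bell, bell.conj())               # pure joint state, zero entropy

        # Trace out the particle behind the horizon (indices: particle1, particle2,
        # particle1', particle2'), leaving the state of the accessible particle.
        rho_outside = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

        eigenvalues = np.linalg.eigvalsh(rho_outside)
        entropy = -sum(p * np.log2(p) for p in eigenvalues if p > 1e-12)
        print(rho_outside)   # [[0.5 0.], [0. 0.5]] -- an even mixture
        print(entropy)       # 1.0 bit: the information now hidden from us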
