In nature, the hardest things to do are usually asymptotic to perfection.
Here’s the continuity problem: If you bounced a ball on the floor and waited until it came to rest, you’d probably wait a few seconds, at most a minute, before that happened. But if you measured the height to which it bounced up each time, and used those heights to solve a Newtonian equation of motion, you’d find that each bounce removes only a fraction of the ball’s energy, never all of it.
Why? Because all energy has to be conserved, and if some of it is lost on every bounce (as sound and heat), then the ball must rebound a little less high each time – but never to exactly zero. Taken naively, the ball should come to rest never, or only at infinity, and keep bouncing until, somehow, it can bring 0.000…001 units of energy down to 0 units.
(My UG thesis was on the application of Nernst’s theorem – the third law of thermodynamics – in a magnetic refrigerator, and there a similar argument arises: although the energy is lost over infinitely many steps, the duration of each step becomes smaller and smaller, leaving us with a finite energy loss in a finite time!)
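That resolution can be checked numerically. Here’s a minimal sketch, assuming an idealised ball whose rebound height is a fixed fraction e² of the previous apex (e is a hypothetical coefficient of restitution, not a value from any real experiment): the bounces are infinite in number, but their flight times form a geometric series that sums to a finite total.

```python
import math

G = 9.81  # m/s^2, illustrative value for gravitational acceleration

def total_bounce_time(h0, e, n_bounces=100_000):
    """Sum the flight times of a ball dropped from height h0, where
    each bounce reaches e**2 times the previous apex height."""
    t = math.sqrt(2 * h0 / G)            # duration of the first fall
    h = h0
    for _ in range(n_bounces):
        h *= e * e                       # apex height after the next bounce
        t += 2 * math.sqrt(2 * h / G)    # time up to the apex and back down
    return t

h0, e = 1.0, 0.8
numeric = total_bounce_time(h0, e)
# The same series in closed form: sqrt(2*h0/G) * (1 + 2*e/(1 - e))
closed = math.sqrt(2 * h0 / G) * (1 + 2 * e / (1 - e))
print(numeric, closed)  # infinitely many bounces, finite total time
```

With e = 0.8, a ball dropped from one metre stops bouncing after only about four seconds, even though no finite bounce is ever its last.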
So, what I meant by “asymptotic” is that the closer you get to attaining the ideal, the perfect, the harder it becomes to get any closer. You saw this happen with the ball. You’ll see it happening even in accelerator physics where, as the confidence levels 1-sigma (σ), 2σ, 3σ, 4σ, 5σ and 6σ are each subsequently established, the probability of a statistical fluke that must be ruled out shrinks by an ever larger factor at each step. The chart below better illustrates this idea – look at the consecutive differences between the sigma levels.
From CERN's July 4, 2012 announcement
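For a rough numerical feel – a sketch, not CERN’s actual analysis – the one-sided Gaussian tail probabilities behind those sigma levels can be computed directly, along with how much each extra sigma shrinks the allowed fluke probability:

```python
import math

def tail_probability(n_sigma):
    """One-sided Gaussian tail probability beyond n_sigma standard
    deviations (the convention used in particle-physics discovery claims)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

probs = [tail_probability(n) for n in range(1, 7)]
for n, p in enumerate(probs, start=1):
    print(f"{n} sigma: {p:.3e}")

# The shrink factor grows with every step -- the "harder as you get
# closer" effect: each extra sigma costs more than the one before it.
ratios = [probs[i] / probs[i + 1] for i in range(len(probs) - 1)]
print(ratios)
```

At 1σ the tail probability is about 16%; at 5σ it is down to roughly 3 in 10 million, and each successive sigma buys a bigger multiplicative improvement than the last.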
In my opinion, the classical problem that is the most impossible – yes, more impossible than all the rest – to perfect is the generation of random numbers. Everything – every technique, tool or method – we could use to generate ten random numbers from a set of numbers is deterministic.
While there are different definitions for this word, I prefer one inspired by Michel Foucault’s idea of order in The Order of Things: An Archaeology of the Human Sciences (1994 edition): order is a sorting of priorities. And for a system to be deterministic, it must have clearly defined states of orderliness, and thus clearly defined sets of priorities for its different states.
The presence of priorities in each state is what makes the system yield to human manipulation, and so makes it deterministic. Random numbers, on the other hand, are ideally purely random, which means they have no priorities in any state: a parameter that has value x in one state need not take any particular value y in another state, although in an ordered system it would. So, how possible is it, really, to generate a purely random number using a mostly deterministic tool, technique or method?
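To see how thoroughly deterministic our usual tools are, here’s a toy sketch: a linear congruential generator, the textbook pseudo-random number generator (the constants below are the common textbook ones, used purely for illustration). Seed it identically twice and it yields the same “random” sequence both times.

```python
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """A linear congruential generator: a 'random' number source
    that is in fact fully determined by its seed."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m   # every next value is fixed by the previous one
        out.append(x)
    return out

run1 = lcg(seed=42, n=10)
run2 = lcg(seed=42, n=10)
print(run1 == run2)  # True: same seed, same 'random' numbers, every time
```

The numbers look patternless, but each state has exactly one successor – a clearly defined priority, in the sense above – so the whole sequence is an ordered system wearing a random costume.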
As a gedanken (thought) experiment, the problem is this: generating random numbers using deterministic principles effectively involves eliminating determinism over a period of time, until only the emergent chaos remains.
If you look closer, it’s the continuity problem all over again! Using determinism to eliminate determinism is an asymptotic process, because the more determinism you eliminate, the less determinism there is left to do the eliminating (that’s the simplest way I can put it)!
Unless you’re able to create a random number at the very instant you destroy a determinate number (that is, a number bound to other numbers in some well-defined way), a perfectly random number can never be generated.
Just this morning, I came across a pre-print paper titled Explaining Quantum Contextuality to Generate True Random Numbers [arXiv:1301.5364]. This paper, in turn, drew on a thought experiment conceived and published in 2008 as Simple Test for Hidden Variables in Spin-1 Systems [Phys. Rev. Lett. 101, 020403 (2008)]. Both papers describe the violation of a quantum mechanical assumption called noncontextuality, a close relative of locality – the principle that any object is directly influenced only by its immediate surroundings.
Surprisingly, despite how elegant and intuitive the principle of locality seems, it is violated in nature. This was verified by the French physicist Alain Aspect in 1982. He performed what’s called a “two-channel” Bell test, and it goes like this.
Say there’s a source of photons, S, which generates them in pairs and sends them scurrying in opposite directions, left and right. Along each direction, at a fixed distance, sits a precisely oriented polarisation analyser, which sorts each photon into one of two channels according to its polarisation. Let’s call them (–) and (+). Behind each channel is a detector that tells the experimenter whether the photon came out as (–) or (+).
So, for every pair of photons, the overall reading must be (+,+), (–,–), (+,–) or (–,+).
Since the outcome at each analyser is individually random, the final joint readings should also be randomly distributed. To check that this was happening, Aspect set up a system to monitor how often each of the (+,+), (–,–), (+,–) and (–,+) pairs occurred. When he found one particular pair being produced more often than chance allows – say it was (+,–) – it was an indication that the left photon’s (+) value was somehow correlated with the right photon’s (–) value.
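This counting step is easy to mimic. The sketch below is a deliberately simplified toy model, not Aspect’s apparatus: an “entangled-like” source is modelled as producing perfectly anti-correlated pairs, while an independent source flips two coins. Tallying the four joint outcomes immediately exposes the difference.

```python
import random

def pair_counts(n_pairs, correlated, rng):
    """Count the (+,+), (+,-), (-,+), (-,-) joint outcomes for n_pairs
    photon pairs. Correlated pairs always emerge with opposite signs
    (a toy stand-in for entanglement); uncorrelated pairs are
    independent coin flips."""
    counts = {("+", "+"): 0, ("+", "-"): 0, ("-", "+"): 0, ("-", "-"): 0}
    for _ in range(n_pairs):
        left = rng.choice("+-")
        if correlated:
            right = "-" if left == "+" else "+"   # perfectly anti-correlated
        else:
            right = rng.choice("+-")
        counts[(left, right)] += 1
    return counts

rng = random.Random(0)  # seeded, so this demo is (ironically) deterministic
independent = pair_counts(100_000, correlated=False, rng=rng)
entangled_like = pair_counts(100_000, correlated=True, rng=rng)
print(independent)      # all four outcomes land near 25,000
print(entangled_like)   # only (+,-) and (-,+) ever occur
```

Each detector on its own still sees a fair 50/50 stream of (+) and (–); only the joint tally reveals the hidden order between the two sides.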
Of course, it wasn’t really much of a surprise to him, because he was experimentally verifying a theorem that John Stewart Bell had proposed in 1964.
Since we’re all so attuned to thinking in terms of locality and local realism, we immediately suspect there’s something wrong with the photon source, S – that it’s churning out photons that are able to communicate with each other before they’re polarised. Over the years, however, this doubt has been fully assuaged: even greatly improved photon sources keep generating these weird photons. So… what the heck?!
One argument is that there are hidden variables – unseen but supposedly present influences that interfere with the experiments and confound the results. However, Aspect’s experiment, and those of others before and after him, abided by theoretical constraints dictated by Bell’s theorem, the Kochen-Specker theorem, the CHSH inequalities, etc.
In other words, these are rules that dictate how the experiment must be designed, and if the experiment has been designed accordingly, then (local) hidden variables cannot explain the results.
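That constraint can itself be illustrated numerically. Below is a sketch of a toy local-hidden-variable model (my own illustration, not one from the papers above): both photons carry a shared hidden angle, and each analyser answers deterministically from that angle. At the standard CHSH analyser settings, such a model’s CHSH value cannot exceed 2, whereas quantum mechanics predicts 2√2 ≈ 2.83 for entangled photons at the same settings.

```python
import math
import random

def chsh_lhv(n_samples, rng):
    """Monte-Carlo CHSH value for a toy local-hidden-variable model:
    both photons share a hidden polarisation angle lam, and an analyser
    at angle t deterministically answers sign(cos(2*(t - lam)))."""
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8

    def outcome(t, lam):
        return 1 if math.cos(2 * (t - lam)) >= 0 else -1

    def corr(x, y):
        total = 0
        for _ in range(n_samples):
            lam = rng.uniform(0, math.pi)   # the shared hidden variable
            total += outcome(x, lam) * outcome(y, lam)
        return total / n_samples

    return corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2)

rng = random.Random(1)
s_lhv = chsh_lhv(200_000, rng)
s_quantum = 2 * math.sqrt(2)   # the quantum prediction at these settings
print(s_lhv, s_quantum)        # the hidden-variable model stays at or below 2
```

No matter how the deterministic response function is chosen, this kind of model is pinned at |S| ≤ 2; the experimentally observed values above 2 are what rule the whole family out.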
Anyway, this behaviour of photons – or of any particles that share a strange form of communication that lets them preserve order where we would expect randomness – is called quantum entanglement, and the particles participating in it are said to be entangled. And the relevance of quantum entanglement to generating random numbers is the confirmation that there’s a leverage for orderliness against randomness.
By suitably designing an experiment that entangles two photons, you increase the amount of statistical locality. By this, I mean that if there’s a system that is capable of being random in some parts and orderly in others, then by forcing the orderliness to occur only in certain specific channels, the amount of randomness in the other channels can be increased!
This is a metaphysical argument, quite the parallel to the gedanken experiment introduced earlier in this post: To get the purely random, the deterministic itself needs to eliminate the purely deterministic. If you’re interested in getting full details on this, read this (it’s simple if you read it from start to end instead of starting from somewhere in between).
Can you think of other ways to resolve this thought-experiment in a finite period of time?