In a previous post I discussed a startling new view concerning the relationship between quantum and classical realities, an issue which has troubled thoughtful observers since the time of Einstein. This view is supported by the work of Wojciech Zurek and by papers by Muller and Masanes, which suggest that classical reality is a subset of quantum reality: not a faithful mirroring of quantum systems, but consisting only of the information quantum systems are able to exchange. Classical reality is composed of this quantum information exchange. Within quantum theory the process of information exchange has also been described as 'measurement' or 'wave function collapse'.
Only information describing an extremely small portion of the quantum reality of one system may be exchanged with another, but it is this information which forms classical reality. Any entity within classical reality can only affect or influence another through one of the four fundamental forces: electromagnetism, the strong nuclear force, the weak nuclear force and gravity. An entity can participate in classical reality only via these forces, and an entity that does not communicate via one of these forces cannot be said to have an existence within classical reality.
It turns out that, in a mathematical sense, classical reality is governed by a form of logic which is likewise a small subset of quantum logic: probability theory. Hence the claim that probability theory is the logic of classical reality. But how are we to understand this claim; what does it mean?
Fortunately a clear path is already documented in the scientific literature. Within information theory, and perhaps contrary to popular usage, information and probability are defined in terms of one another.
The mathematical relationship is:
I = -log(P)
Information is the negative logarithm of probability. Intuitively this means that if we assign a probability that a particular outcome will take place, and then find out that the event did actually occur, the amount of information we receive (in bits, when the logarithm is taken to base 2) grows as the probability we initially assigned shrinks. In other words, if we considered something likely to happen and it does, we receive little information; if we considered it unlikely but it occurs anyway, we receive a greater amount of information.
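A minimal sketch in Python may make this concrete (the function name information_bits is my own; taking the logarithm to base 2 gives the answer in bits):

import math

def information_bits(p):
    # Information (in bits) received when an outcome of probability p occurs.
    return -math.log2(p)

print(information_bits(0.5))   # a fair coin lands heads: 1.0 bit
print(information_bits(0.25))  # a 1-in-4 outcome occurs: 2.0 bits
print(information_bits(0.99))  # a near-certain outcome occurs: ~0.0145 bits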
Assigning probabilities to discernible states or to outcomes is only feasible if all the possible states are known and are assigned probabilities which sum to 1. Such a collection of probabilities is called a probability distribution, and we may consider it a model for the possible outcomes of an event.
In the beginning, if the model has no prior data to suggest which outcomes are more plausible than others, it must assign an equal probability to each possible outcome. If the model describes n possible outcomes then each is assigned probability 1/n.
A little magic takes place when the model receives information pertinent to the outcome which actually occurs. Mathematically this data implies changes to the model: on receiving the new data some outcomes must now be considered more likely and others less likely, but the probabilities must still sum to 1.
The mathematical procedure for calculating the new probability is called the Bayesian Update.
P2 = R*P1
That is, the initial probability (P1) for the occurrence of each outcome is multiplied by a ratio R to produce a new probability for the outcome (P2). The ratio R is the likelihood that the new information would be received if the outcome in question were the actual outcome, divided by the overall probability of receiving that information; the division is what keeps the updated probabilities summing to 1. By incorporating information into a model in this fashion, the model can come to describe more accurately the event it models. This process of building knowledge is called inference.
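As a sketch of how such an update might be computed (the function name bayesian_update is my own; note that with the uniform prior described above, the posterior is simply the normalized likelihoods):

def bayesian_update(prior, likelihood):
    # Multiply each prior probability by the likelihood of the new data
    # under that outcome, then divide by the total so the result sums to 1.
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Three possible outcomes and no prior data, so each starts at 1/3.
prior = [1/3, 1/3, 1/3]

# How likely the observed data would be under each outcome.
likelihood = [0.8, 0.15, 0.05]

print(bayesian_update(prior, likelihood))  # [0.8, 0.15, 0.05]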
To recap this short excursion into information theory: we have seen that the existence of information mathematically implies probability theory and inference. However, mathematics is not reality, so we are still left with the question 'in what sense is probability theory the logic of classical reality?'
I will argue, over several posts, that probability theory and inference are instantiated within classical reality through the actions of Darwinian processes and that Darwinian processes are the physical implementation of the mathematics of inference and have created the sum of classical reality. In this view Darwinian processes may be characterized as the engineers of classical reality.
At the bottom of classical reality we have quantum information exchange, which Wojciech Zurek has described as the process of Quantum Darwinism. In a sense this information exchange entirely explains classical reality, and it may be understood as a Darwinian process. However, a complete description of classical systems in terms of quantum systems is only possible in the micro-realm of sub-atomic particles, atoms, molecules and chemistry.
Further Darwinian processes are involved in the creation and evolution of more complex classical systems, including life, nervous systems and culture. I will leave a detailed treatment of quantum Darwinism and these other important Darwinian processes for later posts, but will attempt, in the remainder of this post, to demonstrate the plausibility of an isomorphism between Darwinian processes in general and the mathematics of inference. A more formal argument to this effect may be found in my paper Bayesian methods and universal Darwinism.
Natural selection was the original Darwinian process described by science and has come to be understood as the 'central organizing principle' of biology. Indeed Darwin himself, in On the Origin of Species, speculated that the principle involved in natural selection might operate in other realms, such as the evolution of languages, and he was thus the first to grasp the generalization of natural selection to what we now understand as a Darwinian process.
The isomorphism between natural selection and inference is clear. Population biologists understand that many characteristics of living things are determined by the particular genetic alleles (the possible variations at the site of each gene) an organism actually possesses. Within a given population there are only a finite number of alleles at the site of each gene, and each allele may be assigned a probability describing its relative frequency within the population.
Of particular interest to population biologists is the change due to natural selection which takes place in these probabilities between generations. The formula they use to describe this shift in allele frequency has the same form as the Bayesian update:

P2 = (w/W)*P1

That is, an allele's frequency in the next generation (P2) is its current frequency (P1) multiplied by the ratio of the fitness of organisms carrying that allele (w) to the mean fitness of the population (W). Fitness plays the role of likelihood: alleles which perform better than average become more probable, and those which perform worse become less probable.
Thus natural selection may be understood as a form of inference; the process by which species gain knowledge concerning the implementation of strategies for reproductive success.
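To make the isomorphism concrete, here is a small sketch of my own in Python; one generation of selection has exactly the form of the Bayesian update above, with fitness supplying the ratio R:

def select(frequencies, fitnesses):
    # One generation of natural selection: each allele frequency is
    # multiplied by the ratio of its fitness to the population's mean
    # fitness -- the same form as the Bayesian update P2 = R*P1.
    mean_fitness = sum(f * w for f, w in zip(frequencies, fitnesses))
    return [f * w / mean_fitness for f, w in zip(frequencies, fitnesses)]

# Two alleles, initially equally common; the first confers a 10% advantage.
freqs = [0.5, 0.5]
for generation in range(3):
    freqs = select(freqs, fitnesses=[1.1, 1.0])
    print(freqs)  # the fitter allele's frequency rises each generation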
Part of human nature, expressed across all cultures, is to seek out explanations of our origins. Science has gone further than any other human endeavour in providing satisfying answers to this quest. In general the view offered by science is that we are children of the universe; we are not freaks or a special case but conform to the laws of the universe like everything else. In particular, an understanding of Darwin's natural selection allows us to trace our ancestry back to the first life form; it reveals all life forms to be an extended family and details our relationships to all other living things within this family of life.
It is my belief that in recent years science has made great progress in extending knowledge of our origins all the way back to the basic emergence of classical reality from the quantum substrate. While this view has great import for our quest to discover our origins, surprisingly it remains little noticed. A particularly elegant feature of this understanding is that the processes which produced both our reality and ourselves may be viewed as a generalization of natural selection.
This view provides detailed understanding in support of philosopher Daniel Dennett's claim that Darwin's idea was the best one ever: