Thursday, 17 November 2022

Death

John O. Campbell

Death comes for us all. Over a long life, before our own end, we will witness life wink out in many of those close to us. Our mortal lives are truly doomed, leaving little consolation. This foreseeable and inevitable tragedy may shape our lives, inclining us to depression, denial, and futile escapism (1).

Religions attempt consolation with promises of rebirth or of a continuing life in heaven. But since Nietzsche declared that even God has died (2), such promises ring hollow to the modern mind. Yet these promises may hold clues to an understanding that does offer some consolation. I refer mainly to the promise of a Christian heaven and to those aspects of it that might be understood and believed within a scientific context.

My father was a devout Christian and expected upon his death to be reunited with my sister, who had died at a young age. He died at ninety-three with severe dementia – he could not recognize me or anyone else. Once, when I was telling him family stories to try to jog his memory, he said he found it hard to believe that I could be his son. When I asked why, he said it was because I was so much older than he was.


Upon his death I wondered about his ascension to heaven. Would he remain demented, or would he be restored to his prime? Could he recognize his daughter if he encountered her? Would his saintly qualities be the only part of him to endure? How much of his earthly identity would make it into heaven?

Biblical texts offer very few details and, given the conflicting accounts of self-styled priests and experts, we are pretty much free to imagine it as we wish. But one thing is clear: if we are washed of our sins upon entering heaven, we are not the same individuals we were on earth. The drive for reproductive success is probably absent or severely constrained, as are the joys of the high life, high status, and worldly ambition. The usual impression is that we are immersed in bliss and that all inhabitants have only saintly characteristics.

All of this hints that the transformation at death, even in the Christian account, does not retain much of our personal identity – that the heavenly I is very different from the earthly I. Once earthly sins and ambitions are washed away, presumably heaven's inhabitants are all quite similar; only their very best traits remain in existence.

The religious vision teaches that our individual mortal lives are only part of a much bigger picture in which our earthly life is but a short prelude in which we must do our best. And the religious consolation is that this bigger picture is immortal and that by doing our best we might earn access to it, finally escaping death.

Science views the situation in a similar light. Life has two complementary aspects, phenotypic and genotypic. The phenotypic mortal life, from birth to death, is short and often brutish. The genotypic life is more perfect and nearly immortal, retaining only successes accumulated over billions of years.

And science even suggests the purpose of our mortal phenotypic lives: largely through doing our best, with a measure of trial and error, we generate those rare successes and bestow eternal life upon them within the immortal genome.

Phenotypes engage in what Darwin termed a struggle for existence and their success in this struggle provides evidence that updates their species’ genomic model. In part, the phenotype is an experiment whose purpose is to provide evidence for the evolutionary process, evidence that increases the species' knowledge in its struggle for existence.

Some might consider this inferential view of life a degradation of the phenotypic role to that of disposable or mortal experimental probes – that our purpose as phenotypes is merely the testing and selection of knowledge. In this view, the purpose of life is to evolve a nearly timeless and immortal repository of knowledge concerning increasingly powerful strategies for life's existence, and phenotypes exist merely to provide experimental evidence as grist for this mill of evolving knowledge.

It may be difficult for us to accept a diminished role for phenotypes. After all, for centuries before the existence of genetics was even known, the phenotype was biology’s sole focus of study. More personally, we identify with the phenotypic aspect of our lives. We experience ourselves and our short mortal lives as phenotypes as we strut and fret our hour upon the stage. We are actors following genetic, neural and cultural scripts, and those scripts almost completely define us. A timeless and endlessly creative bard wrote those scripts, and surely the bard’s role is more admirable than is the poor player’s.

It is difficult to accept that all our strutting and fretting may be mainly in vain, may signify nothing, and that any lasting influence or meaning is unlikely. This bleak outlook offers us little consolation in our brief, perhaps even disposable, phenotypic roles. We might gain some solace if we shift our focus from our mortal phenotypes to the more nearly immortal aspects of ourselves, our generalized genotypes in the form of genomes, learned cognitive models and cultural knowledge.

While it is true that those aspects of our generalized genotypes marking us as unique individuals are nearly as mortal as our phenotypes, some may prove meaningful for the future. Regardless of our personal contribution, it is our generalized genotypes that directly connect us to a more nearly immortal chain of being. In this context, we assume a cosmic identity within an eternally evolving nature, or as Spinoza and Einstein saw it, a perpetually evolving God.

On the other hand, the purpose of our timeless genetic knowledge is to achieve existence, and existence is the realm of the phenotype. In this view, the purpose of life is to discover phenotypes that can exist. For ourselves, our life as a generalized phenotype motivates us to take part in the cutting edge of evolution and to create novel forms that may have a continued existence. These activities may include having children and participating in the cultural and social issues of our time.

The present is our time, as phenotypic players, to strut our stuff on the stage of existence. It is our turn to bring to life, one more time, that dead, musty knowledge stored in timeless genotypes. We are the growing green shoots of evolution; we revive that dead knowledge and give it a dynamic new interpretation. We play our role in a reinvigorated drama of exploration into the unformed and unknown future, maybe even leaving behind some novel ad-libs for the inspiration of future players.

Perhaps a middle ground in deciding the relative merits of our identities, either as generalized genotypes or generalized phenotypes, is in order; we must consider that both are but two sides of the same coin. They are two elements of an inferential system, and their synergies may provide a more suitable context for understanding our existence than either element alone. As an inferential system, our generalized genotypes contain timeless knowledge for bringing structures and behaviours (generalized phenotypes) into the world, and the evidence generated from their existential success updates generalized genotypes to greater existential knowledge (3). Perhaps a balanced identity for us is that of an inferential system as it validates both our short-term strutting and fretting and our longer-term role in the evolution of timeless knowledge.

At birth, our phenotypic life begins with all possible advantages, in the form of biological, learned, and cultural inheritances capturing successes of our ancestors. But though our lives are subject to many harsh experiences and a brutal end, this view of life may offer some consolations. Our lives are truly part of a bigger picture that is nearly immortal. In this heavenly realm the failures of phenotypic life are washed away - only the best of all prior generations is retained.

References

1. Becker, E. The Denial of Death. Free Press, 1973.

2. Nietzsche, Friedrich. The Gay Science. 1882.

3. Campbell, John O. The Knowing Universe. KDP, 2022.

Saturday, 8 October 2022

The Good Scientist and the Good Engineer at the Foundations of Existence

 

John O. Campbell

A scientific revolution focused on the free energy principle is in progress. This is perhaps an inevitable revolution, driven by a new but solid scientific consensus: information must be included along with mass and energy as a foundational form of existence underlying the scientific worldview (1; 2; 3). It is revolutionary because information is the first addition to the scientific understanding of foundational substances since the time of Newton – and we must expect its incorporation to have numerous profound and unanticipated consequences.

In 1929 Leo Szilard (1898–1964) explained the thermodynamic paradox of Maxwell's Demon by establishing the equivalence of energy and information (1 bit = ln(2) kT joules) in his brilliant paper On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings (4). This paper does not mention information but supplies a definition of what would be given that name nineteen years later in Shannon's ground-breaking 1948 paper (1) – and Szilard's paper was only translated into English in 1964. Although it had little impact at the time, the equivalence of information and energy would join the more famous 1905 Einstein equation demonstrating the equivalence of energy and mass (E = mc²). Both equations have moved from theoretical understanding to empirical fact through recent experiments demonstrating their interchangeability (5). We now understand that the fundamental substance of existence comes in at least three different but equivalent forms.
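Szilard's conversion factor is simple enough to evaluate directly. A minimal sketch (the Boltzmann constant and the choice of room temperature are standard physics, not taken from this post):

```python
import math

# Szilard's equivalence: one bit of information corresponds to at least
# k * T * ln(2) joules of free energy at absolute temperature T.
K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K (exact SI value)

def energy_per_bit(temperature_kelvin: float) -> float:
    """Minimum free energy equivalent of one bit at the given temperature."""
    return K_BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (~300 K) a bit is worth roughly 2.9e-21 joules -
# which is why information is so thermodynamically cheap to process.
e_bit = energy_per_bit(300.0)
print(f"one bit at 300 K = {e_bit:.3e} J")
```

The tiny magnitude of this number is the point: the same accounting that balances mass against energy also balances energy against bits.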

The information revolution was launched with the publication of Shannon’s paper and its central concepts rapidly began to transform many branches of science. For example, in 1957, Edwin Jaynes (1922 – 1998) derived statistical mechanics, and thereby thermodynamics, using Shannon’s formulation of information (6).  

The current phase of this revolution is perhaps led by the free energy principle (FEP), a mathematically based principle stating that all existing systems and things built from this foundational substance (information/energy/mass) must act to reduce the surprise they experience. But information, defined as -ln(p), is surprise, and so the free energy principle states that all things attempt to reduce the magnitude of the information they receive. The probability p in the definition of information is the initial probability assigned to a specific outcome of some event. If that outcome does indeed happen, then -ln(p) is the amount of information or surprise experienced. The higher the initial probability assigned, the more the event was expected and the less information or surprise.
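The definition of surprise as -ln(p) can be illustrated with a few outcome probabilities (the specific values below are illustrative, not from the post):

```python
import math

def surprise(p: float) -> float:
    """Self-information of an outcome assigned prior probability p, in nats."""
    return -math.log(p)

# The more an outcome was expected, the less surprise it delivers:
for p in (0.99, 0.5, 0.01):
    print(f"p = {p:<5} surprise = {surprise(p):.3f} nats")
```

An outcome given probability 0.99 carries almost no surprise, while one given probability 0.01 carries a great deal – exactly the quantity the free energy principle says systems act to minimize.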

So why is the reduction of surprise the key to understanding existence? The short answer offered by the FEP is that all existence depends upon a strategy for existence encoded within a generative model of the system. At this stage in the development of our explanation we will use biological examples – and in the biological case, (epi) genetic models are the generative model. Logically, there are only two methods of reducing surprise within a system: either cause the outcomes to be more as predicted by the model, or improve the model to predict more realistically achievable outcomes. Using our biological example, either the phenotypic outcomes can be made more as predicted by the (epi) genetic model, a process described by developmental biology, or the (epi) genetic model can be made to predict more successful phenotypes, as described by natural selection.

Existence is thus seen as a rare and difficult process where the many obstacles standing in its way, summarized by the second law of thermodynamics, may only be overcome by precisely following knowledgeable strategies stored within the generative models of existing systems and by evolving these models to predict realistic forms of existence more accurately.

 

More formally, a system's expected free energy, which it acts to minimize, may be factored (7):
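The factorization itself appears to have been an image that did not survive. A standard form consistent with the term-by-term description that follows (a reconstruction drawn from (7), offered as an assumption rather than a copy of the original figure) is:

$$
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,Q(s)\;\middle\|\;P(s\mid o)\,\right]}_{\text{divergence from the hidden optimal model}} \;\;\underbrace{-\,\ln P(o)}_{\text{extrinsic value (surprise)}}
$$

where $Q(s)$ is the system's tentative model over hidden states $s$, $P(s\mid o)$ is the hidden optimal model given outcome $o$, and $-\ln P(o)$ is the information carried by the outcome.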

The far right-hand term, extrinsic value, or -ln(p), is just information, dependent upon the outcome. This term is minimized when, as above, the outcome is close to that predicted by the model. The left term is the difference between a hidden optimal model and the system's tentative model. This term is minimized when the tentative model approaches the optimal model – or, as above, when the model predicts more realistically achievable outcomes.

Interestingly, epistemic value has been labelled a good scientist and extrinsic value a crooked scientist (8; 7) – the latter because this formulation seems to imply that extrinsic value skews the evidence to support the model, as a crooked scientist would manipulate evidence to support a favoured theory, an activity that is verboten for scientists. But this is a misunderstanding, as extrinsic value is maximized when a generative model is followed precisely so that the outcome is as predicted. Generative models, such as (epi) genetics, are autopoietic: they are detailed instructions for the creation and maintenance of a generated thing. A less pejorative term than a crooked scientist is a good engineer, who precisely follows engineering models, for example of a bridge, and endeavours to have the actual bridge turn out precisely as the model predicts. Or, following our biological example, a phenotype develops precisely in accord with its (epi) genetic model. We may say that the good engineer constructs the evidence it seeks by precisely following the instructions of its generative model.

The good scientist takes the other tack in minimizing surprise. It acts to build more knowledgeable models, ones that are more autopoietically effective in predicting realistically achievable forms. As all knowledge is ultimately evidence-based, it does this by using the evidence supplied by model outcomes to inferentially update the model – as with natural selection.

But the good engineer and the good scientist are each, in themselves, incomplete methods for minimizing surprise. The good engineer transforms model knowledge into evidence, leaving open the question of the source of that model knowledge, while the good scientist transforms evidence into model knowledge, leaving open the question of the source of that evidence.

Together these expected free energy factors may achieve synergy; the model knowledge required by the good engineer can be produced by the good scientist and the evidence required by the good scientist can be produced by the good engineer. Following our biological example, the good engineer precisely follows (epi) genetic model knowledge, in the developmental process from zygote to mature phenotype, and the good scientist transforms the evidence produced by the good engineer (evidence of phenotypic fitness) into model knowledge, in the inferential process of natural selection (9).

Thus, the free energy principle, by incorporating information and knowledge into the fundamental components of existence, provides us with a revolutionary explanation of existence, one that is cyclical and evolutionary.

 

So far, we have exemplified this process in terms of biological processes, but the free energy principle, and this factorization, applies to all forms of existence. This view encompasses the many forms of existence found in nature. In addition to biological existence, it includes physical existence where the quantum wave function serves as a generative model and classical reality as the generated thing (10; 11; 12), the existence of neural-based behaviours where mental models serve as generative models and neural-based behaviours as the generated thing (13), and cultural existence where cultural models serve as generative models and cultural structures and processes as generated things (14). It is claimed that consensus scientific theories from quantum mechanics to those for cultural evolution are process theories implementing the free energy principle (15).

Figure 1: A general process of existence described by the free energy principle.

For us, perhaps the most revolutionary understanding revealed by the free energy principle is the relationship between humanity and the natural world. Previously, the scientific vision of the world, developed since Newton, has largely been reductionist and mechanistic. In this paradigm, all natural systems from the cosmos itself to biological cells to cultural interactions are viewed as finely tuned machines and while these processes have been described in ever greater detail, the highly unlikely existence of these many mechanisms has not been explained.

Where did they come from and how did they achieve their present state? The foremost theory for much of scientific history was a creator God and this was only partially undermined by Darwin whose evolutionary theory provided a naturalistic explanation of the emergence and development of life. But the free energy principle offers us an expanded view embracing the emergence and evolution of all existence. And crucially, humans are not above this process acting as observers, but are immersed in it maintaining and evolving new cultural forms.

We might be skeptical of the good engineer and good scientist metaphors as anthropomorphic, as attempts to portray nature in human terms. But we suggest precisely the opposite is the case: they portray human activity and existence in terms of the same principle used throughout nature. In other words, the good scientist and the good engineer form synergistic processes operating throughout nature, creating and evolving new forms within existence. Human scientists and engineers are merely the latest rediscoveries of this timeless process within a cultural context.

 

References

 

1. Shannon, Claude. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.

2. Bekenstein, Jacob. Information in the Holographic Universe. Scientific American, August 2003.

3. Wheeler, John Archibald. Information, Physics, Quantum: The Search for Links. In Proceedings III International Symposium on Foundations of Quantum Mechanics, Tokyo, 1989, pp. 354-358.

4. Szilard, Leo. On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings. Behavioral Science 9(4):301-310, 1964. doi: 10.1002/bs.3830090402.

5. Toyabe, Shoichi, et al. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality. Nature Physics 6, 988-992, 2010. doi: 10.1038/nphys1821.

6. Jaynes, Edwin T. Information Theory and Statistical Mechanics I. Physical Review 106, 620, 1957.

7. Parr, T. and Friston, K.J. Generalised free energy and active inference. Biological Cybernetics, 2019.

8. Bruineberg, J., Kiverstein, J. and Rietveld, E. The anticipating brain is not a scientist: the free-energy principle from an ecological-enactive perspective. Synthese, 2016.

9. Campbell, John O. Universal Darwinism as a process of Bayesian inference. Frontiers in Systems Neuroscience, 2016. doi: 10.3389/fnsys.2016.00049.

10. Friston, Karl. A free energy principle for a particular physics. arXiv:1906.10184 [q-bio.NC], 2019.

11. Ramstead, M., Sakthivadivel, D., Heins, C., Koudahl, M., Millidge, B., Da Costa, L., Klein, B. and Friston, K. On Bayesian Mechanics: A Physics of and by Beliefs. arXiv, 2022. doi: 10.48550/arXiv.2205.11543.

12. Zurek, Wojciech. Quantum Darwinism, Classical Reality and the Randomness of Quantum Jumps. Physics Today 67(10), 44, 2014.

13. Friston, Karl. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience 11(2):127-138, 2010. doi: 10.1038/nrn2787.

14. Ramstead, M.J., Veissière, S.P. and Kirmayer, L.J. Cultural Affordances: Scaffolding Local Worlds Through Shared Intentionality and Regimes of Attention. Frontiers in Psychology 7:1090, 2016. doi: 10.3389/fpsyg.2016.01090.

15. Campbell, John O. The Knowing Universe. KDP, 2022.

 

Tuesday, 20 September 2022

Reductionwasism

 John O. Campbell

Since its beginnings, western science has evolved a reductionist vision, one explaining more complex phenomena as compositions of simpler parts, and one that is amazingly good at describing and explaining most entities found in the universe. Wikipedia provides a commonly accepted definition of reductionism (1):

Reductionism is any of several related philosophical ideas regarding the associations between phenomena, which can be described in terms of other simpler or more fundamental phenomena. It is also described as an intellectual and philosophical position that interprets a complex system as the sum of its parts.

One reductionist triumph is the field of statistical mechanics which describes the behaviour of large-scale materials completely in terms of the behaviour of individual atoms or molecules.

But in recent decades reductionism has fallen out of favour, especially in the public imagination, due to a desire for more wholistic (sic) explanations. Richard Dawkins writes (2):

If you read trendy intellectual magazines, you may have noticed that ‘reductionism’ is one of those things, like sin, that is only mentioned by people who are against it. To call oneself a reductionist will sound, in some circles, a bit like admitting to eating babies.

This popular rejection may partially reflect rejection of scientific explanations in general, but for several decades thoughtful scientists have been adding their voices, offering serious criticisms that are difficult to refute (3; 4; 5). Still, most scientists continue to prefer some version of the reductionist explanation. Richard Dawkins offers a particularly spirited defence of what he calls hierarchical reductionism (2):

The hierarchical reductionist, on the other hand, explains a complex entity at any particular level in the hierarchy of organization, in terms of entities only one level down the hierarchy; entities which, themselves, are likely to be complex enough to need further reducing to their own component parts; and so on.

Here we explore this controversy. It is perhaps an inevitable controversy, one sure to grow into a scientific revolution due to recently discovered additions to the small number of fundamental entities at the bottom of the reductionist hierarchy. These new, revolutionary discoveries are information and knowledge. Traditionally, reductionist explanations have ultimately been cast in terms of mass, energy and forces, but once information enters the mix, as an equivalent form of energy and mass, all theories are ripe for transformative reinterpretation – interpretations explaining how information, acting together with energy and mass, shapes all things at the most basic level (5; 6). It is as if scientific explanations developed since Newton and Descartes must now be supplemented with an additional dimension, an informational dimension.

Figure 1: Descartes proposed that animals were essentially machines, machines that should be understood in terms of the workings of their parts.

Reductionism itself is a victim of this kind of revolutionary transformation. Once reductionism expands to encompass the fundamental substance of information, it is no longer reductionism but transforms into something more complete – a two-way inferential process between levels, not only drawing, in a reductionist manner, on lower physical levels of mass and energy but also on information and knowledge existing at a higher level.

And the discovery of information may have arrived just in time to save reductionism from a severe, possibly fatal problem: the problem of the evolutionary origins of order and complexity, phenomena that have never had satisfactory reductionist explanations (3). Energy, mass, and forces, on their own, tend towards equilibrium; undirected, they cannot create order or complexity. Order and complexity may only arise if the equations describing mass, energy and forces are subject to extremely fine-tuned initial or boundary conditions. Biology, for example, explains the growth of biological complexity as due to natural selection, which accumulates and refines the attempts of generations of phenotypes to maintain themselves within existence and encodes the more successful experiences within DNA. Thus the life of each new individual in each new generation is guided by its (epi) genetics, the finely tuned initial and boundary conditions accumulated by former generations in the tree of life. And this implies that life is not a reductionist process; its guiding knowledge is not due to the uninformed chemistry and physics of lower levels. Life occurs when these lower levels are guided and informed by knowledge gathered over evolutionary time, and this knowledge exists at a higher level than uninformed physics and chemistry.

This view of the origin of order answers the question 'Where did all the knowledge, in the form of fine-tuned initial conditions, come from?' Reductionism cannot answer this. Any attempt to explain life in terms of only mass, energy, and forces is already at the bottom of the hierarchy and cannot account for these special conditions in terms of any lower level. These conditions contain knowledge (e.g., (epi) genetic knowledge [1]) and the origins of that knowledge demand an explanation. Reductionism posits linear, bottom-up causation but can offer no explanation of how the knowledge to create ever more complex entities, a trend away from equilibrium, is encoded into the most fundamental of physical building blocks. It is linear and offers no opportunities for feedback or other sources of control or regulation required for the existence of complex entities.

All knowledge is essentially evidence-based knowledge that accumulates through the process of (Bayesian) inference. Abstractly, inference is an interplay between two levels: the level of a model, or family of hypotheses, and the level of evidence spawned by those models. It is a cyclical process that is inherently non-reductionist.
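This two-level interplay can be made concrete with a minimal sketch (the coin hypotheses and the numbers are illustrative assumptions, not from the post): beliefs over a family of hypotheses are updated by the evidence those hypotheses predict.

```python
def bayes_update(prior, likelihood_heads, observed_heads):
    """One cycle of inference: evidence at the lower level updates
    beliefs over the model level via Bayes' rule."""
    unnorm = {}
    for hyp, p in prior.items():
        like = likelihood_heads[hyp] if observed_heads else 1.0 - likelihood_heads[hyp]
        unnorm[hyp] = p * like
    total = sum(unnorm.values())
    return {hyp: v / total for hyp, v in unnorm.items()}

# Two hypotheses about a coin, initially equally credible.
likelihood_heads = {"fair": 0.5, "biased": 0.9}
beliefs = {"fair": 0.5, "biased": 0.5}

# Three heads in a row: each observation cycles back to reshape the model level.
for _ in range(3):
    beliefs = bayes_update(beliefs, likelihood_heads, observed_heads=True)
print(beliefs)  # belief has shifted strongly toward the biased-coin hypothesis
```

The cycle runs both ways: the hypotheses determine what evidence is expected, and the evidence in turn reshapes the standing of the hypotheses.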

Clearly, one-way causation is insufficient to explain the existence of complex entities, and better explanations must involve some form of two-way causation where the causal forces of the building blocks are orchestrated and regulated by knowledge residing at a higher level in the hierarchy. Our first clue to an answer is this: the process is a two-way street involving a cyclical structure.

We are quite familiar with the conditions that impose order on complex entities such as organisms – they are the conditions set by the (epi) genome, and this source of regulation is not spontaneously encoded at the more fundamental level of atoms but exists at the same level as biology itself. It is ironic that Dawkins, the champion of gene-centred biology, seems to have missed this point. As a process of two-way causation, we might view knowledge encoded in the (epi) genome as structuring, orchestrating, and selecting chemical and physical potentials to form the phenotype – the upward causation of chemical potential is regulated by downward genetic control. We may view the developmental process of organisms in terms of an (epi) genetic model that is followed as faithfully as possible to produce the phenotype.

But, in general, where did the model knowledge come from? In some cases, such as biology, we know quite well. Darwin presented heredity, or the passing of knowledge between generations, as a conceptual pillar in his theory of natural selection. And the on-going micro-biological revolution demonstrates that much of that knowledge is encoded within DNA. In biology, (epi) genetic knowledge is accumulated by natural selection, which is, in general terms, a Bayesian process, one in which evidence of what worked best in the last generation inferentially updates the current genomic models to greater adaptedness (7).

This biological example can be abstracted into a two-step cyclical process in which the first step involves the transformation of an autopoietic model into a physical entity, and the second step involves the evidence provided by that physical entity’s quest to achieve existence, serving to update the model’s knowledge.

This describes an evolutionary process, with each cycle beginning with model knowledge acting to constrain the building blocks, explaining the formation of phenotypes under the influence of genotypes and, more generally, the formation of entities under the influence of information. Then, to complete the cycle, the produced entity acts as evidence of what worked to update the model. With each cycle phenotypic adaptedness improves, and this improvement updates the population-level model of adaptedness encoded by the genotype.
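The cycle can be sketched as a toy simulation, with the caveat that the genotypes, fitness values, and the reduction of selection to a bare replicator step are illustrative assumptions, not the post's own model. One discrete generation of selection has exactly the shape of a Bayesian update: genotype frequencies play the role of a prior, relative fitness the role of a likelihood, and mean fitness the normalizing evidence (cf. (7)).

```python
def generation(frequencies, fitness):
    """One cycle: genotype frequencies (prior) are updated by
    phenotypic success (likelihood), normalized by mean fitness (evidence)."""
    weighted = {g: f * fitness[g] for g, f in frequencies.items()}
    mean_fitness = sum(weighted.values())
    return {g: w / mean_fitness for g, w in weighted.items()}

pop = {"A": 0.7, "B": 0.3}   # current population-level model
fit = {"A": 1.0, "B": 1.5}   # evidence: B phenotypes fare better

for _ in range(5):           # five cycles of the two-step process
    pop = generation(pop, fit)
print(pop)  # the model has shifted toward the better-adapted genotype B
```

Each pass through the loop is one turn of the cycle described above: model knowledge produces a population of phenotypes, and their relative success feeds back to update the model.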

Arguably, with this tradition of including information and knowledge within its fundamental explanations, biology presaged the information revolution that would begin to diffuse through most branches of science in the latter half of the twentieth century. But even here reductionist biology has had an uneasy relationship with informational biology. We might argue that (epi) genetic knowledge merely forms life's initial and boundary conditions and that their workings are satisfactorily explained in terms of chemistry and physics. But this leaves unexplained the origin of these conditions that encode knowledge. And an explanation cannot be found at the lower level of fundamental machinery; it is only found at the higher level of evolutionary biology.

Reductionist explanations, when forced to accommodate information and knowledge, are extended beyond their possibilities and must transform into explanations that are no longer reductionist. This reductionwasism leaves an explanatory void, one that we will explore for hints of a way forward. But even at the start, with just the understanding that information and knowledge must play a large role in any explanation, we may detect a clear outline.

Prior to the information revolution, some researchers attempted, in a reductionist manner, to fill the biological explanatory void with some additional, vital force, but by the 1930s that view had been almost completely excluded from the scientific literature (8) – biological processes were beginning to appear amenable to reductionist explanations in terms of only chemistry and physics. Nonetheless some scientists, including the leading biologist J.B.S. Haldane (1892–1964), insisted that biology could not be completely understood within a reductionist framework.

The applicability of reductionist explanations to life was clarified by the quantum physicist Erwin Schrödinger (1887–1961) in his 1944 book What Is Life? (10). Schrödinger claimed that all biological processes could be explained in terms of chemical and physical masses and forces – along with derivative concepts such as energy, the effect of a mass moved by a force. But he also acknowledged inherited biological knowledge allowing organisms to maintain a low-entropy state in defiance of the second law of thermodynamics, even speculating that this knowledge might be stored in aperiodic crystals – four years before Shannon introduced the mathematical concept of information (9) and nine years before the discovery of DNA's structure and its role in heredity.

Schrödinger thus began to form an outline of biological processes as a synergy: reductionism guided by knowledge. While all biological processes can be explained purely in terms of physics and chemistry, these processes are orchestrated, regulated, or governed by knowledge stored in the (epi) genome, which selects and specifies those processes.

One way in which this problem is currently framed is as downward versus upward causation, where downward causation consists of information, or evidence-based knowledge, regulating the process of upward causation, which plays out according to the laws of physics and chemistry. George Ellis (born 1939) uses the informative example of computation (3). At the bottom level, the flow of electrons through the computer's components is well explained by physics in terms of upward causation; but the orchestration and regulation of those electrons, causing them to perform a useful computation, is supplied by knowledge contained in the algorithm coding the computation – a form of downward causation imposed on the electron flow. While a computation can be explained as patterns of electron flow, the patterns performing the computation are specified by conditions imposed by the algorithm which directs it.
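Ellis's point can be put in miniature. In the toy sketch below (an illustration, not a model of real hardware), every operation reduces to the same low-level primitive – a NAND gate, standing in for the physics of electron flow – yet which function the circuit computes is specified entirely by how the algorithm organizes those identical primitives:

```python
# Downward causation in computation, in miniature: every operation below
# reduces to one physical primitive (NAND), but the function computed is
# specified by the algorithm's organization of those primitives.

def nand(a: int, b: int) -> int:
    """The single low-level primitive: the 'upward causation' layer."""
    return 1 - (a & b)

# Two different organizations of the same primitive: 'downward causation'
def xor(a: int, b: int) -> int:
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a: int, b: int) -> int:
    t = nand(a, b)
    return nand(t, t)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "XOR:", xor(a, b), "AND:", and_(a, b))
```

The physics of each gate is identical in both circuits; only the organization imposed from above differs, and that organization is what determines the computation performed.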

In this view it appears, in biology and computation at least, that two separate forms of causation are at work: one causing processes that play out as physics and chemistry, and one orchestrating, specifying, or selecting those processes from the multitude of possibilities. But any fundamental difference between these forms of causation appears unlikely, as even life's DNA is chemistry governed by physical laws – the upward and downward perspectives on causality must share a fundamental unity.

To explore a possible form of this unity we first revisit the fundamental physical concepts forming the building blocks of reductionism: mass and force. Also of great importance to reductionist explanations is energy, which relates mass and force: energy is transferred as work when a force moves a mass. In simple terms, upward causation occurs when forces move masses and change their energy. Thanks to Einstein we know that mass and energy are fundamentally equivalent and proportional to each other: E = mc² – a little mass is equivalent to a lot of energy. And so we might view upward causation as ultimately starting from a single building block of mass/energy and the forces it generates.

Can we view downward causation similarly? What would correspond to a force, and what to mass or energy? The answer suggested here is that all three of these physical concepts basic to upward causation have a single counterpart in downward causation: information. And thus we may identify information as the missing vital force thought necessary for biological explanations in the nineteenth century.

This startling claim might gain plausibility if we consider Szilard's equation showing the equivalence of energy and information: 1 bit of information = kT ln 2, or about 3 × 10^-21 joules at room temperature – a little energy represents a lot of information. As the PhysicsWorld website recounts (12):

In fact, Szilárd formulated an equivalence between energy and information, calculating that kTln2 (or about 0.69 kT) is both the minimum amount of work needed to store one bit of binary information and the maximum that is liberated when this bit is erased, where k is Boltzmann’s constant and T is the temperature of the storage medium.

1 gram of mass = 9 × 10^13 joules of energy ≈ 3 × 10^34 bits (at room temperature)

As the quantity of computer memory existing in the world today is roughly 10^23 bits, 1 gram of mass represents a quantity of information about 3 × 10^11 times larger.
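The arithmetic behind these figures is easy to check. A short sketch, using rounded standard values for the constants and assuming room temperature and the ~10^23-bit estimate of worldwide memory quoted above:

```python
import math

# Physical constants (SI units, rounded)
c = 3.0e8      # speed of light, m/s
k = 1.38e-23   # Boltzmann's constant, J/K
T = 300        # assumed room temperature, K

# Einstein: energy equivalent of 1 gram of mass
mass = 1e-3                # kg
energy = mass * c**2       # joules
print(f"1 g of mass = {energy:.1e} J")       # ~9e13 J

# Szilard/Landauer: energy equivalent of one bit at temperature T
joules_per_bit = k * T * math.log(2)         # ~2.9e-21 J
bits = energy / joules_per_bit
print(f"1 g of mass = {bits:.1e} bits")      # ~3e34 bits

# Compare with the assumed ~1e23 bits of worldwide computer memory
print(f"ratio: {bits / 1e23:.1e}")           # ~3e11
```

The chain of equivalences – mass to energy via E = mc², energy to information via kT ln 2 – reproduces the figures in the text.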

And it has since been experimentally confirmed that energy can be converted to information and vice versa (13). Thus mass, energy, and information are equivalent forms, transforming one into another, and in recognition of this and other evidence, it may now be a consensus that information must join mass and energy as a fundamental component of all existence (5; 6).

The inclusion of information along with mass and energy as different aspects or states of the basic substance composing reality has suggested a type of monism to some researchers (14). But this is a different type of monism than might be expected to emerge from a reductionist account, because it includes information or knowledge – which, we suggest, acts as a force performing downward causation, thus breaking the reductionist mould.

We might note that the concept of force is closely aligned with the fundamental reductive substance of mass/energy, and ask whether the addition of information to this fundamental substance is also intimately related to a force. This force cannot be one of the four fundamental forces of upward causation known to science; it must be something different. The answer suggested by Steven Frank and others is that information acts as both a substance and a force (15). It is a force acting in the process of inference: a force that moves probabilistic models away from the ignorance of the uniform distribution, where everything is equally likely, towards a more knowledgeable, spiked distribution denoting greater certainty – a movement essential to the accumulation of evidence-based knowledge.

An important derived concept in the reductionist framework is work: the movement of a mass by a force. For example, work is performed when the mass of a car is moved by the force provided by its burning fuel. We might ask whether something analogous happens with the force of information – is it capable of moving something in a process we might call work? The answer, as demonstrated by Steven Frank, is yes (15).

This involves two different aspects of information: the first as evidence, and the second as the information contained in the model. Evidence acts as the force, and model information acts as the mass that is moved (keeping in mind that information in this sense is equivalent to mass). In this view the force of evidence in Bayesian inference moves the information of the model. The distance that a model describing hidden states is moved is called information gain: the distance towards certainty that is forced by the evidence (19). We may name this form of work knowledge – the distance a model is moved by the force of evidence.

Frank offers the example of natural selection as the force of evidence moving the genomic model to perform work (15):

For example, consider natural selection as a force. The strength of natural selection multiplied by the distance moved by the population is the work accomplished by natural selection.

Having introduced information as both a basic substance and a force producing work, we may now examine the implications this holds for the reductionist philosophical interpretation of complex systems as the sum of their parts. A straightforward approach is to consider information and inferred knowledge as additional parts forming the sum, along with the mass, energy, and physical forces employed by reductionist explanations. At a stroke this inclusion accommodates downward causation, or knowledgeable regulation, as a complement rather than an alternative to upward causation. In this view, knowledgeable generative models may orchestrate or regulate the many possibilities offered by upward causation to form complex entities capable of existence.

Once this possibility is entertained, we find examples of blended downward and upward causation everywhere. For example, if a society wants to build a bridge across a gorge, it starts by composing a knowledgeable generative model – a model that will generate the envisioned bridge. This engineering model specifies the required reductionist components: the masses to be used, the forces which must be balanced, and the energies which must be applied. Once the engineering model is complete, construction begins by following the generative model's instructions for placing and connecting the reductionist components as closely as possible. The generative model exercises downward causation through its orchestration and regulation of the reductionist components, while the components perform upward causation by following the laws of physics and chemistry to which they are subject.

Another familiar example of synergy between downward and upward causation is found in biology where a knowledgeable generative model formed by the genome orchestrates and regulates the masses, forces and energies of reductive components involved in upward causation to form the phenotype.  The proverb ‘mighty oaks from little acorns grow’ is usually taken to mean that great things have modest beginnings, but a deeper meaning might be that the acorn, given the right conditions, contains sufficient autopoietic knowledge to orchestrate physical and chemical processes into the creation and maintenance of a mighty oak tree. 

We might note that these examples are illustrative of a general autopoietic process, in which the downward causation of generative models brings things into existence through the orchestration of upward causation – in complete accord with the free energy principle as developed by Karl Friston and colleagues (16). The free energy principle states that all systems act to minimize the surprise they experience, where surprise is defined as the informational distance (KL divergence) between the expectations of their generative model and the actual generated thing – between what is expected and what is achieved. General processes implementing this surprise-reduction strategy employ a generative model to specify and control an autopoietic, or self-creating and self-maintaining, process orchestrating the forces of lower-level building blocks to bestow existence on a generated thing (15). Through careful conformance to its model, the generated thing is more likely to turn out as specified and thus achieve minimum surprise.

Figure 2: A general process of entity formation. Exercising downward causation, a knowledgeable generative model orchestrates and regulates mass and energy acting in upward causation through physical laws.
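The surprise measure invoked above can be illustrated with a toy calculation (this is only the KL-divergence bookkeeping, not the full variational machinery of the free energy principle; the outcome distributions are invented for illustration):

```python
import math

def kl(p, q):
    """KL divergence in bits between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A generative model's expectations over four possible outcomes
expected = [0.7, 0.1, 0.1, 0.1]

# Two hypothetical ways the generated thing might actually turn out
close_to_spec = [0.65, 0.15, 0.1, 0.1]
far_from_spec = [0.1, 0.3, 0.3, 0.3]

# Surprise as the divergence between what is achieved and what is expected
print("conforming outcome:", round(kl(close_to_spec, expected), 4), "bits")
print("deviant outcome:   ", round(kl(far_from_spec, expected), 4), "bits")
```

An outcome that conforms closely to the model's expectations carries little surprise; one that departs from them carries much more, and a surprise-minimizing system is driven towards the former.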

But this synergy between upward and downward causation still leaves much unexplained. As shown in figure 2, the process is linear, raising the question of both the origin of the generative model and the effect of the generated thing.

What is the origin of the generative model? In the example of biology above the answer is well known – the (epi) genetic generative model has accumulated its knowledge through natural selection acting over billions of years. And what is the effect of the generated thing? In biology the experience of the generated thing or phenotype in achieving an existence (reproductive success) composes the evidence updating the genetic model, at the population level, to greater existential knowledge.

In this light the generative model and the generated thing assume a cyclical rather than a linear relationship: the generative model sculpts the generated thing from the clay of its building blocks, and the generated thing updates the generative model to greater knowledge with the evidence of its existential success.

Figure 3: The cyclical relationship between the generative model and the generated thing: the model generates the thing, and the evidence of the thing's existential success updates the model.

I have referred to systems that achieve and maintain their existence through this two-step process as inferential systems (16). In these systems, upward causation operates in terms of the building blocks' physical and chemical potentials, and downward causation is exercised by the generative model during the autopoietic step of the process. Thus the inclusion of information and knowledge as fundamental physical concepts forces an expansion of the reductionist framework into one that can no longer be called reductionist, because it relies upon knowledge at a higher level to supplement fundamental processes at a lower level.

New to this paradigm is its second step, in which autopoietic success is used as evidence to update the model that generates it. Natural selection serves as an example of this model updating: it is a process in which the genetic model is updated with the evidence of the phenotypes it brings into existence (17; 18; 16). The evidence that updates the genetic model is the evidence of fitness, in the form of reproductive success. In each generation, the population-level genome forms a probabilistic model composed of genetic alleles, each allele assigned a probability equal to its relative frequency within the population, and these probabilities are updated, in a Bayesian manner, by the evidence of fitness they produce.
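This correspondence can be sketched in a few lines (the allele frequencies and fitness values below are hypothetical): the discrete replicator update of population genetics has the same form as Bayes' rule, with current allele frequencies as the prior, relative fitness as the likelihood, and the next generation's frequencies as the posterior.

```python
# Natural selection as a Bayesian update: allele frequencies are the
# prior, relative fitness plays the role of the likelihood, and the
# next generation's frequencies are the posterior.

def select(frequencies, fitnesses):
    """One generation of the discrete replicator equation."""
    weighted = [f * w for f, w in zip(frequencies, fitnesses)]
    mean_fitness = sum(weighted)   # analogous to the marginal likelihood
    return [w / mean_fitness for w in weighted]

# Hypothetical three-allele population
freqs = [0.5, 0.3, 0.2]     # prior: current allele frequencies
fitness = [1.0, 1.2, 0.8]   # evidence: relative reproductive success

for gen in range(3):
    freqs = select(freqs, fitness)
    print(f"generation {gen + 1}:", [round(f, 3) for f in freqs])
```

With each generation the fitter allele gains probability mass at the expense of the others, exactly as evidence concentrates a Bayesian posterior.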

Evidence is information which acts as a force in Bayesian inference – it moves the probabilistic model to a new shape containing greater existential knowledge. Crucially, all evidence is evidence of existence: to qualify as evidence, it must have an actual existence. And in turn, this means that evidence can only have relevance for, and can only update, models that hypothesize about possible forms of existence.

As we have seen, information acts as a force in the process of Bayesian inference, where the degree of plausibility granted to each hypothesis composing a model is updated by information in the form of evidence, transforming the prior distribution into a posterior probability:

P(hᵢ | o) = P(hᵢ) × P(o | hᵢ) / P(o)

Where hᵢ is the ith hypothesis in a family of exhaustive and mutually exclusive hypotheses describing some event, together forming a model of that event, and o is a new observation or evidence. A prior probability, P(hᵢ), is transformed into a posterior probability, P(hᵢ | o). The term describing the force doing the heavy lifting in moving the prior to the posterior is the likelihood term, P(o | hᵢ) / P(o), describing the informational force of the current observation (15).

Reductive explanations, even Dawkins' hierarchical reductionism, must ultimately explain phenomena in terms of fundamental physical forces, energies, and masses. For example, Dawkins' reductionist example of a car describes it in terms of parts such as its carburettor. But a fundamental explanation must progress down this hierarchy, ultimately to an explanation in terms of atoms or subatomic constituents and their abstract description in terms of forces, energies, and masses.

We have suggested a transformation of reductive explanations required by the addition of information and knowledge to nature's small repertoire of fundamental components. Reductionist explanations on their own have brought us a great deal of understanding, but they struggle to explain why, out of the nearly infinite possible configurations of mass, energy, and forces, only a few complex forms such as life have achieved an existence. Reductionist explanations of complexity depend on special initial and boundary conditions, and they can offer no explanation for the knowledge contained in the conditions which beget complexity (20).

One effect of this reductionist deficiency is that it precludes a unified understanding of natural processes. Given the laws of nature and the operative initial and boundary conditions, reductionism can explain a specific system's further evolution. But the conditions orchestrating each type of system appear quite different: atoms appear to operate differently than cells, cells differently than brains, and brains differently than cultures.

The explanatory void left in the wake of reductionism may be remedied by the introduction of information and knowledge as fundamental entities. At a stroke these offer a solution to the problem of initial and boundary conditions: the knowledgeable conditions orchestrating complexity evolve from the evidence generated by their own actions (16). As illustrated in figure 3, this may lead to an evolutionary process of knowledge accumulation forming the initial and boundary conditions for each generation. And as demonstrated in various lines of emerging research, such as the free energy principle, the inferential and evolutionary accumulation and testing of knowledge is a common mechanism of all existing things. This makes the universe much more easily knowable – if you know a little about inferential processes, you know a little about everything.

 

References

1. Wikipedia. Reductionism. Wikipedia. [Online] [Cited: July 13, 2022.] https://en.wikipedia.org/wiki/Reductionism.

2. Dawkins, Richard. The Blind Watchmaker. s.l. : Oxford University Press, 1986. ISBN 0-19-286092-5.

3. How Downwards Causation Occurs in Digital Computers. Ellis, George. s.l. : Foundations of Physics, , 2019, Vols. 49(11), 1253-1277 (2019).

4. Ulanowicz, R.E. Ecology: The Ascendant Perspective. s.l. : Columbia University Press, 1997. ISBN 0-231-10828-1.

5. Developmental Ascendency: From Bottom-up to Top-down Control. Coffman, James. s.l. : Biological Theory, 2006, Vols. 1, 165-178. doi: 10.1162/biot.2006.1.2.165.

6. Information in the holographic universe. Bekenstein, Jacob. August 2003, Scientific American.

7. Information, Physics, Quantum: The Search for Links. Wheeler, John Archibald. Tokyo : Proceedings III International Symposium on Foundations of Quantum Mechanics, 1989, pp. 354-358.

8. Universal Darwinism as a process of Bayesian inference. Campbell, John O. s.l. : Front. Syst. Neurosci., 2016, System Neuroscience. doi: 10.3389/fnsys.2016.00049.

9. Wikipedia. Vitalism. [Online] 2013. [Cited: November 10, 2013.] http://en.wikipedia.org/wiki/Vitalism.

10. Schrödinger, Erwin. What is life? s.l. : Cambridge University Press, 1944. ISBN 0-521-42708-8.

11. A Mathematical Theory of Communication. Shannon, Claude. 1948, Bell System Technical Journal.

12. Cartlidge, Edwin. Information converted to energy. physicsworld. [Online] 2010. [Cited: May 31, 2022.] https://physicsworld.com/a/information-converted-to-energy/#:~:text=In%20fact%2C%20Szil%C3%A1rd%20formulated%20an,is%20the%20temperature%20of%20the.

13. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality. Toyabe, Shoichi, et al. s.l. : Nature Physics, 2010, Vols. 6, 988-992. doi: 10.1038/nphys1821.

14. Sentience and the Origins of Consciousness: From Cartesian Duality to Markovian Monism. Friston, Karl J., Wiese, Wanja and Hobson, J. Allan. s.l. : Entropy, 2020, Vol. 22. doi:10.3390/e22050516.

15. Simple unity among the fundamental equations of science. Frank, Steven A. s.l. : arXiv preprint, 2019.

16. Free Energy, Value, and Attractors. Friston, Karl and Ping Ao. s.l. : Comp. Math. Methods in Medicine, 2012.

17. Campbell, John O. The Knowing Universe. s.l. : CreateSpace, 2021. ISBN-13: 979-8538325535.

18. —. Darwin does physics. s.l. : CreateSpace, 2015.

19. Active inference and epistemic value. Friston K, Rigoli F, Ognibene D, Mathys C, Fitzgerald T, Pezzulo G. s.l. : Cogn Neurosci., 2015, Vols. 6(4), 187-214. doi: 10.1080/17588928.2015.1020053.

20. Temporal Naturalism. Smolin, Lee. s.l. : arXiv preprint, 2013. http://arxiv.org/abs/1310.8539.

21. Laplace, Pierre-Simon. A Philosophical Essay on Probabilities. [trans.] Frederick Wilson Truscott and Frederick Lincoln Emory. New York : John Wiley & Sons, 1902.


[1]  In the sense that knowledge encoded within DNA is the brains of a process essential to the creation and smooth running of living things – making genetics the custodian of knowledge forming the initial and boundary conditions.