Thursday, 3 February 2022

The Free Energy Principle For Dummies

This is an excerpt from the book: The Knowing Universe.

A philosophical and scientific theory with the potential to explain existence appears to be coalescing around the variational free-energy principle (FEP). Almost two decades ago, Karl Friston introduced the FEP as a unified neuroscientific theory. His initial papers are now some of the most cited in the field and have inspired thousands of others developing the principle and applying it to domains far beyond the brain. Great excitement is building from a growing body of evidence hinting that the same general strategy brains use to keep their hosts in existence may be common to all forms of existence.

While this approach has gained tremendous popularity among researchers, it has also baffled many experts. As Wikipedia tells us (Wikipedia):

The free energy principle has been criticized for being very difficult to understand, even for experts.

Although this simple-sounding principle is transforming cognitive neuroscience and is considered by many (myself included) to be the most promising approach to a theory of everything, the bafflement it induces in smart people is legendary.

In contrast to its supposed difficulty, I marvel that it makes such clear sense. How could I so easily comprehend this principle, which seems to escape much brighter people? Perhaps the answer is that my unusual intellectual journey arrived independently at many of the conclusions underlying this principle, and without those distinctive insights, it may well have remained incomprehensible to me. The upside is that this background may offer some assistance to those struggling to understand the FEP.

We can easily state the principle: everything attempts to minimize the surprise it experiences. It sounds pretty innocuous for a theory of everything; in fact, it has an almost Zenlike simplicity. But like a Zen koan, its meaning is elusive. Many papers fail to fully explain the principle before diving into complex mathematics and computer simulations, and some readers are left wondering about the claimed links between surprise and existence.

The ability of all things to experience surprise contains one critical assumption that is rarely made explicit. The assumption is this: the existence of every ‘thing’ depends on a model having knowledge for its self-creation and maintenance. This model provides an expected roadmap for existence, and surprise occurs when its expectations are unmet. For many of us, the idea of everything having built-in models that can be surprised is a little hard to accept. But consider that all life has genetic models, complex animals have neural models, and humans have cultural models. Each of these models can be surprised by the evidence. Friston sometimes uses the example of a fish whose genetic and neural models expect it to be in the water and are surprised if it is not. Surprised genetic and neural models are often precursors of death, and surprised cultural models are often precursors of cultural extinction. That is why all things attempt to avoid surprising their models; things that fail to do so trend towards non-existence.

What about physical existence? As we discuss in chapter 8, it turns out that the quantum wave function may form a similar model for quantum existence, but the argument is somewhat more complicated (Friston, 2019), so here we start with more familiar examples.

This connection between models and existence is profound and deserves some explanation. Why should existence require a model? The short answer is that the challenges to existence are formidable, and existence does not occur without following a detailed, knowledgeable model. But we see existence all around us. In what sense is it challenging to achieve? A law of nature, the second law of thermodynamics, summarizes the challenges to existence: disorder increases in all things. If a thing's disorder increases enough, it ceases to be that thing; it becomes non-existent. As we see a little later, the second law and the free energy principle say much the same thing, but while the second law focuses on existence's challenges, the free energy principle focuses on how things circumvent those challenges by reducing their models' surprise.

But how do things act to minimize surprising evidence? There are two answers: things can accurately follow their models and produce evidence confirming their models' predictions, or they can improve their models to make better predictions. In short, entities can either cause reality to conform to their model or cause their model to conform better to reality. The first strategy is easy to comprehend: our genetic, neural, and cultural models predict existence-enhancing outcomes and, as a bonus, provide algorithms for achieving those outcomes. Thus this route to minimal surprise involves only following the models as accurately as possible; a thing's best strategy for existence is to reduce errors in executing its finely-honed models. The second answer is the evolutionary process that creates and hones more knowledgeable models. This process, called inference, uses a thing's relative ability to achieve existence as evidence and uses that evidence to update its models' accuracy; think of natural selection, where evidence generated by the struggle for existence updates the genetic model. The more knowledgeable the model, the fewer surprises it experiences in the world.

This principle's beauty is in its mathematical depth; Friston and colleagues have developed mathematics to approximate the surprise experienced in complex, real-world phenomena. Here we only scratch the mathematical surface to reveal a bit of its potential.

We should probably start with the mathematical definition of surprise: it is -ln(p), where p is the probability that some hypothesis is true. How does evidence create this surprise? When sufficient evidence reveals a particular hypothesis to be true, -ln(p) is the surprise experienced; if the initial probability assigned to the hypothesis was small but the evidence indicates that the hypothesis is true, there is much surprise.
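
A few lines of code make the definition concrete (a minimal sketch; the fish is Friston's example from above, but the probabilities are invented for illustration):

```python
import math

def surprise(p):
    """Surprise (in nats) of learning that a hypothesis with probability p is true."""
    return -math.log(p)

# A fish's model assigns high probability to 'in water' and low to 'out of water'.
print(surprise(0.99))  # ~0.01 nats: expected evidence, little surprise
print(surprise(0.01))  # ~4.6 nats: unexpected evidence, much surprise
```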

What does -ln(p) have to do with an entity's model? Models used by real-world things to achieve their existence are probabilistic models. Genetic, neural and cultural models involve a family of competing hypotheses, each of which is assigned a probability that it is the one true hypothesis. For example, at each of an organism's genetic locations, or loci, various individuals in the population may have different genetic sequences, or alleles. The probability assigned to each specific sequence is its relative frequency within the population, and this probability is the fitness of the sequence. If, over many generations, a population evolves from having multiple alleles at a locus to having only one, the probability for that sequence is 1, and we might say that the evidence has proven it the fittest among the initial family of alleles; it is the one proven to produce the least surprise among the options.
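
This fixation story can be sketched in code (a toy model with made-up fitness values): relative frequencies play the role of prior probabilities, relative fitness plays the role of evidence, and the per-generation update has exactly the form of Bayes' rule.

```python
# Toy model: allele frequencies updated by relative fitness each generation.
# The update has the Bayesian form: posterior ∝ prior × likelihood
# (here, frequency × fitness).
freqs = {"A1": 0.5, "A2": 0.3, "A3": 0.2}     # prior: relative frequencies
fitness = {"A1": 1.1, "A2": 1.0, "A3": 0.9}   # evidence from the struggle for existence

for generation in range(200):
    weighted = {a: freqs[a] * fitness[a] for a in freqs}
    total = sum(weighted.values())
    freqs = {a: w / total for a, w in weighted.items()}

print(freqs)  # A1 approaches fixation (p ~ 1), so its surprise -ln(p) approaches 0
```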

We can consider the hypothesis assigned probability p as one in a mutually exclusive and exhaustive family of hypotheses offering solutions to a real-world existential challenge. Being mutually exclusive and exhaustive has a couple of consequences. The first is that one and only one of the hypotheses must be true. The second is that the sum of the probabilities over the family of hypotheses must equal 1. If the probabilities add to less than 1, then the hypotheses are not exhaustive; some other possibility exists. If the probabilities add to more than 1, they are not mutually exclusive; the hypotheses have some logical overlap.

Real-world instances simplify these mathematical complexities. For example, the family of alleles at a genetic locus within a population of organisms is naturally mutually exclusive and exhaustive. It is mutually exclusive because each allele is unique, and it is exhaustive because the family consists of all the alleles within the population. Thus the sum of the relative frequencies of alleles in the population must equal 1 as that is implicit in the meaning of relative frequency.

Because the probabilities assigned to the family of hypotheses sum to 1, they form a probability distribution, and a good deal of mathematical machinery is available for analyzing probability distributions. For example, every probability distribution has an entropy, or amount of expected surprise: the sum of -p ln(p) over the family of hypotheses. Thus minimizing free energy is equivalent to minimizing model entropy. But the second law of thermodynamics states that the entropy of isolated systems must always increase.
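
To make 'expected surprise' concrete, here is a minimal sketch (illustrative numbers only); entropy is just the probability-weighted average of the surprises defined above:

```python
import math

def entropy(dist):
    """Expected surprise of a probability distribution, in nats."""
    return sum(-p * math.log(p) for p in dist if p > 0)

print(entropy([0.98, 0.01, 0.01]))  # peaked model: low expected surprise
print(entropy([1/3, 1/3, 1/3]))     # spread-out model: high expected surprise
```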

It is in this seeming contradiction that it all comes together. Systems having unconstrained entropy are subject to unconstrained surprise and dissipate into non-existence. An alternative statement of the free energy principle is that existence depends on minimal surprise.  Systems only achieve existence if they know how to avoid isolation and exploit outside energy sources to decrease their entropy. And they must accomplish this while following the second law in producing entropy increases in the combined system plus environment. For example, a photosynthetic cell's existence depends on its genetic knowledge for using the sun's energy to counter the second law's tendency towards disintegration; the combined cell-plus-sun system's entropy increases as dictated by the second law and more than pays for the cell's entropy reduction.

Existing systems follow their models' knowledge to navigate the environment and fend off nature's relentless forces towards dissipation. In short, existence is fiendishly tricky; it requires a great deal of knowledge to achieve, and that knowledge must be followed without errors or surprises. The free-energy principle is important because it is a roadmap, perhaps nature's only roadmap, for achieving existence, and that is why it provides a principled account of all things.

 

References

Friston, K. (2019). A free energy principle for a particular physics. arXiv:1906.10184 [q-bio.NC].

Raviv, S. (2018, November 13). The Genius Neuroscientist Who Might Hold the Key to True AI. Wired. https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/

Wikipedia. Free energy principle. Retrieved March 11, 2019, from https://en.wikipedia.org/wiki/Free_energy_principle

 

Friday, 28 January 2022

Following the FEP to self-actualization

 

 John O. Campbell

Looking back over a long life, my efforts at self-development or self-actualization appear meandering and ineffectual. Where did I go wrong, and what might I have done to navigate a more direct course? At long last I may have found an answer – one that might be of some utility for those setting course towards this destination.

But first, a quick review of the goal. What is self-actualization, or as some call it, self-authenticity?

Perhaps its roots in western philosophy extend to Friedrich Nietzsche (1844–1900), who is largely remembered for his pronouncement that God is dead and for his esteem for the will to power and the superman. These memes may seem incongruent with self-actualization, but properly understood they reinforce it. Nietzsche was an atheist and viewed religious indoctrination of the young as one of the main barriers to their formation of authentic worldviews, a necessary accomplishment for the strong moral character of a ‘superman’ immune to the herd mentality. Having studied Darwin, he concluded that Christianity had lost its hold on humanity and that this offered both the potential for self-actualization and the terror of being cast adrift in an unfamiliar universe.

This theme was developed by the existentialist philosophers of the mid-twentieth century, perhaps reaching its culmination in the writings of the great humanist psychologist Abraham Maslow, best remembered for introducing a hierarchy of human needs and placing self-actualization at its pinnacle.

 


But he gradually became aware of an ultimate stage even beyond self-actualization. Just prior to his untimely death in 1970, Maslow had become increasingly convinced that self-actualization is healthy self-realization on the path to self-transcendence. And psychological studies have since demonstrated that self-actualization shows a strong positive correlation with increased feelings of oneness with the world (Kaufman, 2018).

Young adults, especially students, have long been prone to rebellious tendencies. A natural urge towards freedom drives young adults to cast off the social constraints imposed by prior generations, in forms such as religion and the strident demands of a consumer society, in favour of developing one's true self. But the road to freedom is not easy, and after a few years of the highs and lows of turning on, tuning in, and dropping out, for example, many revert to more traditional worldviews.

What has been lacking is a reliable road map that might aid us in our journey towards freedom and self-actualization. Well, science may now have provided an answer, in the form of the variational free energy principle (FEP). In a nutshell, the FEP, developed by renowned neuroscientist Karl Friston, states that all existence depends upon reducing the surprise, or difference, between an entity's model for existence and its experience in achieving existence. This can be done in two fundamental ways: either make the model conform more closely to reality or cause reality to follow the model more closely.

We may illustrate these two methods as a cyclical process in which entities are created through the autopoietic process of carefully following the model, and the entity's experience in achieving existence is then used as evidence to update the model in a Bayesian manner.

 

For example, the model underlying biological existence might be the inherited genetic and epigenetic model; autopoietic creation takes place as described by developmental biology; the existing entity is the resulting phenotype; and the experience of the phenotype in achieving existence updates the model through natural selection. This describes an evolutionary process in which knowledge for more resilient forms of existence accumulates in the genetic model (Campbell, 2022).
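
In Bayesian terms, the update half of this cycle takes a familiar form (a schematic rendering, not notation taken from the cited sources): the posterior credence in each hypothesis h composing the model is proportional to its prior credence times how well following h predicted the experienced evidence e.

```latex
% Schematic Bayesian update over the model's competing hypotheses:
% P(e|h) scores how well hypothesis h predicted the experienced evidence e.
P(h \mid e) = \frac{P(e \mid h)\, P(h)}{\sum_{h'} P(e \mid h')\, P(h')}
```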

So, what could this have to do with self-actualization? Well, a self-actualized person is an existing entity whose evolution is described by this paradigm. Her model is her worldview, including plans and expectations for her life - the self she strives to be. Her autopoietic self-creation occurs through faithful adherence to her model - she attempts to follow her life plans, and these attempts result in an actual self that may conform to the plan in some areas and deviate from it in others. And this experienced actual self provides a test of her model, or life plan, that she may use to update and fine-tune it. Best of all, this is an evolutionary process; over many cycles she continuously approaches the person she envisions, in a process of self-actualization.

One extremely important recent finding is that, evolving under the FEP over time, the model becomes ever closer to the world it is modelling (Fields et al., 2022). Applied to self-actualization, this might imply the eventual realization of Maslow's goal of self-transcendence – becoming one with the world.

But wait a minute: how could an abstract scientific model lead to self-transcendence? In part, the answer might be that the FEP is moving from the realm of scientific abstraction to the realm of natural processes, one capable of typifying all other natural processes. If we were to align our self-development with this process, we could begin the long journey of connecting to the world and cognitively merging with it.

References

Campbell, J. O. (2022). The Knowing Universe. KDP. Retrieved from https://www.amazon.com/Knowing-Universe-John-Campbell/dp/B09JJFF771/ref=sr_1_1?crid=MKS7ZHOB6X6Y&keywords=the+knowing+universe&qid=1643396653&sprefix=the+knowing+universe%2Caps%2C122&sr=8-1

Fields, C., Friston, K., Glazebrook, J. F., & Levin, M. (2022). A free energy principle for generic quantum systems. arXiv preprint. Retrieved from https://arxiv.org/pdf/2112.15242.pdf

Kaufman, S. B. (2018, November). What Does It Mean to Be Self-Actualized in the 21st Century? Scientific American. Retrieved from https://blogs.scientificamerican.com/beautiful-minds/what-does-it-mean-to-be-self-actualized-in-the-21st-century/

 

 

 


Monday, 13 December 2021

Does science describe the world-in-itself?

This is the conclusion of The Knowing Universe

This book proposed an inferential systems definition of existence. In this interpretation, the world-in-itself is portrayed as a vast hierarchically nested series of inferential systems, each diligently investigating the possibilities for specific forms of existence. One benefit of this interpretation is its casting of the universe in an easily knowable form; a little knowledge of inferential systems provides a little knowledge of everything.

Our interpretation of scientific knowledge within an inferential systems framework reveals science itself in a new light; science is an inferential system that accumulates knowledge in essentially the same manner as all other naturally occurring inferential systems. This view of science may inform some problems that have long plagued western philosophy.

One of those problems is the general relationship between existing entities and our perceptions or mental concepts portraying them. During the scientific revolution, roughly between 1543 and 1687 (1), it became apparent that a combination of sensory experience and rational hypotheses could form synergetic intellectual models having powerful, pragmatic effects. Some scientific pioneers, such as Francis Bacon (1561 - 1626), assumed the scientific method would provide ultimate and infallible knowledge of the universe (2). But questions soon emerged regarding the relationship between scientific models and the entities they described.

David Hume

Fifty years after the scientific revolution, David Hume cast some shade on empirical claims of certain knowledge, noting that many natural processes, such as causation, are not entirely amenable to sensory evidence. However, he retained the critical caveat that, instead of certainty, sensory evidence can provide probabilistic knowledge. Using this essentially Bayesian insight, Hume argued that hypotheses judged unlikely in our prior experience require a greater weight of evidence, or, as more recently framed, extraordinary claims require extraordinary evidence (3). He most famously used this insight as a basis for skepticism regarding Christian miracles. At the time, Hume's views failed to carry the day; although he is currently rated the greatest philosopher in British history, his contemporaries excluded him from holding a university post due to his alleged atheistic tendencies (4). Scientific and philosophical insights were as yet too weak to pose a severe threat to established religious models.

Forty years later, Immanuel Kant built upon Hume's empirical skepticism, claiming an essential dichotomy between sensory-based knowledge and the true nature of things in themselves (5):

And we indeed, rightly considering objects of sense as mere appearances, confess thereby that they are based upon a thing in itself, though we know not this thing as it is in itself, but only know its appearances, viz., the way in which our senses are affected by this unknown something.

Kant describes an apparent gulf between things in themselves and any possible empirical knowledge we can have of them. A recent statement of this dichotomy uses the example of the gulf between maps and the territory they model (and admonishes us not to confuse the two). In Kant's time, many perceived this as a trivial intellectual gulf posing no reason for despair, as it did not yet challenge the near-universal religious models. Although lacking empirical foundations, the Christian model of the universe was considered an accurate portrayal of the world-in-itself. Indeed, Kant's contemporaries understood his radical philosophy as resonating quite well with Christian doctrine. One of his early commentators noted (6):

And does not this system itself cohere most splendidly with the Christian religion? Do not the divinity and beneficence of the latter become all the more evident?  

But scientific and philosophical developments were building bridges between perceptual models and the world-in-itself, bridges that would come to challenge those offered by religious teaching and to undermine our cozy place within Christian models of the universe. Astronomers such as William Herschel (1738 – 1822) found evidence of a vast universe, suggesting that both earth and humanity played apparently insignificant roles – a scenario challenging Christian doctrine. And then Darwin demonstrated that counter to religious teaching, people have descended from earlier forms of life. These and countless other scientific findings undermined the Christian worldview among the intelligentsia.

Not only did this new learning contradict many religious teachings, it also illustrated the constraints that religion placed upon knowledge. Most Protestant sects identified the Bible as revealed truth and considered it a complete worldview for humanity. But the scope of biblical knowledge is relatively minimal. How far could knowledge grow within this confine? For example, the Bible makes only passing references to stars. One of the more explicit passages is:

And God made the two great lights—the greater light to rule the day and the lesser light to rule the night—and the stars.

It does not offer any detailed knowledge concerning stars-in-themselves; they are only bit players in this God-centric tale. And it provides no path to greater knowledge of stars. For that, we must look elsewhere.

Under these influences, acceptance of biblical teachings as literal descriptions of the world-in-itself became increasingly untenable. Finally, in 1882, Friedrich Nietzsche (1844 – 1900) announced God's murder and held humanity responsible:

God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?  

After fifteen hundred years, Europe's foundational model of the universe crumbled, leaving no handy alternative. As Martin Heidegger (1889 – 1976), a leading 20th-century metaphysician, explained, humanity was left exposed to its most significant source of anxiety, the anxiety experienced when we face the finite nature of our existence.

And worse was to come. Ludwig Wittgenstein (1889 – 1951), perhaps the twentieth century’s most influential philosopher (7), described an even more profound metaphysical abyss – claiming that scientific understanding, the slayer of our old theological models, was incapable of offering a replacement model of the world-in-itself. He believed existence had no logical explanation and that its nature must forever remain a mystery (8).

It is not how things are in the world that is mystical, but that it exists.

Wittgenstein had experienced the terrors of WWI’s trench warfare, and although he rose to the occasion displaying remarkable valour, it left him deeply shaken and his philosophy practically a denial of any possibility for human meaning. Wittgenstein was not alone in his visceral reaction to the terrors of WWI. Many began to consider that at best, God was only remotely concerned with the world's workings and that to understand those workings and perhaps even shape them, we would have to look elsewhere. 

Together Nietzsche, Heidegger and Wittgenstein brought western philosophy to the brink of nihilism (9). We had outgrown the Gods providing meaning for millennia and found ourselves instead in a vast, uncaring universe devoid of meaning. Even worse, we were unable to conjure up any convincing alternative model of the world-in-itself. This era perhaps marked a low point for western philosophy. Existentialism, which succeeded this philosophical movement, also lamented human meaninglessness but supplemented despair with a growing sense that meaning was within us, that it was only necessary to pull ourselves up by our bootstraps. Existentialists, such as Soren Kierkegaard (1813 - 1855), Jean-Paul Sartre (1905 - 1980) and Albert Camus (1913 - 1960), maintained that living an authentic life, or at least persevering through the absurdities of life, could lead to meaning.

But even while philosophy despaired, science had quietly begun developing models detailing the human relationship to the world-in-itself. This scientific awakening offered a revolutionary new perspective on our place in the universe, one that converged on several different fronts into a single idea: people are a part of nature; as the old saying goes, just like the trees and the stars, you have a right to be here.

Just as western philosophy flirted with nihilism, science confirmed that everything in the universe, ourselves included, is made of the same hundred-odd elements. This discovery soon developed into the understanding that all elements are forged in stars from the same simple building blocks and spread to the rest of the universe when stars die. We, along with everything else, are stardust.

But science soon moved beyond a simple unification of nature based on a shared universal composition. Perhaps the scientific finding having the greatest impact on our perceived relationship to the universe was the theory of natural selection developed by Charles Darwin. This theory offered an alternative to God's special creation, which had placed humans above all other living things, describing humanity instead as a recent evolutionary design directly related to all other organisms. Darwin's brilliant description of natural selection in On The Origin Of Species included many examples from the natural world supporting a theory incompatible with the Christian account of creation by design. Many open-minded readers found Darwin's arguments devastating to the biblical account, causing a collapse in credibility, a profound cultural shock recorded by pessimistic philosophers such as Nietzsche and Heidegger.

Gone was our favoured place among the gods, and we found ourselves instead exposed to a sense of meaninglessness within a vast, uncaring universe. Yet, once over that initial shock, a closer reading of Darwin reveals new, more profound meaning. As noted by the Darwinian champion Thomas Huxley (1825 – 1895) in his great book Evidence As to Man's Place In Nature (10):

Mr. Darwin's hypothesis is not, so far as I am aware, inconsistent with any known biological fact; on the contrary, if admitted, the facts of Development, of Comparative Anatomy, of Geographical Distribution, and of Palaeontology, become connected together, and exhibit a meaning such as they never possessed before

We were no longer the favoured children of an all-powerful God, but we had gained membership in the more tangible family of all living things. As Darwin succinctly noted, this context provides us with significant meaning (11):

There is grandeur in this view of life

Although scientific explanations, such as natural selection, offer meaning through our context within nature, they often provide only a vague summary of nature-in-itself. Natural selection reduces almost to tautology in its central claim that only the fittest organisms exist, because fitness is the relative frequency of existence. It escapes tautology only due to the non-statistical or functional aspects of fitness: a vastly complex network of adaptations working in conjunction to retain an organism within existence. Although natural selection, by itself, does not go far in exposing the thing-in-itself of existence, science began developing knowledge of those mechanistic details.

Kant’s dichotomy, although softened by scientific understanding, remained stark. The modern scientific philosopher Alfred Korzybski (1879 – 1950) framed the Kantian dichotomy with a caveat offering a way forward (12):

A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.

Here he makes two claims. The first supports Kant's view that maps (models) and territories (things in themselves) are separate and should not be confused. He describes a dichotomy that is still somewhat faddishly used as a putdown of proposed explanations, dismissing them as confusing the map with the territory, mistaking mere description for the thing-in-itself. His second point is more substantial than the first: any map is a good map to the extent that it shares its territory's logical structure.

Ensuing developments during the scientific age have explored this caveat bridging Kant’s dichotomy. As one example, Darwinian evolution argues that animals’ sensory perceptions should accurately portray the world-in-itself; senses and perceptions have evolved for the practical purpose of allowing animals to navigate challenges posed by the world-in-itself. It identifies perception as an adaptation for accurately portraying aspects of the world-in-itself and describes a mechanism for improving this perceptual accuracy over evolutionary time. In other words, the world-in-itself and animals’ perceptions of it must share much the same logical structure.

Nature's many domains, such as life, may be viewed, within the framework of inferential systems, as instances where autopoietic models, such as genomes, transform into existing entities such as organisms. Here the map transforms into the territory, revealing both as composed of the same logical structure. This transformative relationship allows scientific models to extend natural models and transform them into natural processes wielding tremendous powers. How could this kind of power be possible unless the underlying scientific models are, to a large extent, true?

A favourite, perhaps apocryphal, example involves one of the first physicists to understand the process of nucleosynthesis within stars. While stargazing with a girlfriend, he bragged that he was the only person who truly understood why stars shone[1]. His brag implied a close correspondence between the scientific model he had discovered and stellar things-in-themselves, a correspondence borne out when his newly discovered model was used as a recipe for building thermonuclear bombs a few decades later. When carefully followed, this scientific model transforms into mini stars here on earth. How can models that prove so powerful fail to faithfully represent, or share the logical structure of, the things-in-themselves?

The development of Covid-19 vaccines provides a further example illustrating convergence between scientific models and the world-in-itself. The virus' genetic code, published just weeks after it became a concern in China, stimulated a few medical labs to construct computer models of the genetic sequences expressing proteins on the virus' surface. When these models are transformed into mRNA and injected into people, they alert the body's immune system, just as an actual infection would, ramping up antibodies capable of defeating future viral infections. This transformation from scientific model to immune-system stimulation is the cause of the vaccine's effectiveness and indicates convergence between scientific models and the world-in-itself.

As a final and perhaps decisive example of the convergence between scientific models and the world-in-itself, we consider computation's power to model natural processes. The Church-Turing thesis states that a Turing machine, the technical name for a classical computer having unlimited resources, can compute any computable function (13). As models describing natural systems are computable functions, a computational model is possible for every natural system. In other words, algorithms capturing scientific models of any natural system can simulate that system. In turn, comparisons between these simulations and observations of natural systems become evidence, updating the scientific models and algorithms to greater accuracy. In this manner, scientific models and their simulations join in an inferential system that progressively bridges the chasm between scientific models and the world-in-itself.
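
A toy sketch of that simulate-compare-update loop (everything here is invented for illustration: the 'natural system' is a simple exponential decay, and the model's one parameter is fitted to noisy observations):

```python
import math
import random

random.seed(1)

# 'Nature': an exponential decay whose rate the model does not know.
TRUE_RATE = 0.7
observations = [math.exp(-TRUE_RATE * t) + random.gauss(0, 0.01) for t in range(10)]

# Scientific model: same functional form, with an adjustable rate parameter.
def simulate(rate):
    return [math.exp(-rate * t) for t in range(10)]

def mismatch(rate):
    """Squared error between the simulation and the observations."""
    return sum((s - o) ** 2 for s, o in zip(simulate(rate), observations))

# Inferential loop: compare simulation to observation, update the model parameter.
rate = 0.1
for _ in range(2000):
    gradient = (mismatch(rate + 1e-4) - mismatch(rate)) / 1e-4
    rate -= 0.01 * gradient  # crude gradient step toward lower mismatch

print(round(rate, 2))  # converges near 0.7: the model comes to share nature's structure
```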

The recent theoretical discovery of quantum computing has propelled this thesis into the Church-Turing-Deutsch principle (14). Noting a one-to-one correspondence between quantum phenomena and quantum computation, and that all physical systems have a quantum description, this principle states that any physical system may be simulated to any degree of accuracy by a universal quantum computer. It is tantamount to claiming that the world-in-itself, at the quantum level, is equivalent to a computational process, leaving little distinction between scientific models and the world-in-itself.

As illustrated by these examples, scientific understanding has evolved beyond merely descriptive models to models capable of duplicating or simulating nature's many mechanisms and bringing entities to exist within the world-in-itself. In other words, we can view the world-in-itself as essentially similar to our scientific models – resolving the Kantian dichotomy.

Scientific models simulate both nature's models, or generalized genotypes, and their resulting physical forms, or generalized phenotypes, existing as the world-in-itself. But science goes beyond simulating existing forms and can serve as a generalized genotype that brings new technologies into existence.

The world-in-itself and its scientific models are vastly complex, hierarchical nestings of inferential systems; each system is engaged in a cyclical two-step inferential process, evolving and implementing knowledge for existence. These two steps are consequences of the free-energy principle, which states that systems maximize the evidence predicted by their models and thereby reduce the surprise they experience. Systems may do this in two ways:

1) They may accurately follow their models' knowledge (active inference), causing the world-in-itself to conform to their models.

2) They may update their models to greater accuracy using evidence of their existence within the world-in-itself (learning or evolution), causing their models to conform to reality.

These two steps form cyclic inferential systems in which knowledgeable models create and maintain the world-in-itself (e.g. genotypes create and maintain phenotypes), and the phenotype's experience within the world-in-itself updates model knowledge (e.g. natural selection). This universal dualism explains the accumulation of knowledge in the universe and identifies science as a recent, powerful, but essentially natural method of knowledge accumulation. We suggest that in this manner, the chasm between scientific models and the natural world-in-itself is bridged.
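
A schematic of the two steps as one loop (not the author's formalism; the set point, noise level and learning rate are invented for illustration):

```python
import random

random.seed(0)

expected_state = 5.0   # what the system's model predicts ('being in the water')
model_policy = 0.0     # the model's current knowledge of how to produce that state
world_state = 0.0

for cycle in range(50):
    # Step 1 - active inference: act so the world conforms to the model.
    world_state = model_policy + random.gauss(0, 0.1)  # actions have imperfect effects

    # Step 2 - learning: treat the experienced outcome as evidence updating the model.
    prediction_error = expected_state - world_state    # the surprise registered
    model_policy += 0.3 * prediction_error             # nudge the model toward reality

print(round(world_state, 1))  # world and model converge on the expected state (~5.0)
```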

Inferential systems provide a general scientific description of existence and an account of the world-in-itself, where each entity within the vast web of existence is inferentially engaged in exploring possibilities for existence. Here, the polar concepts of scientific description and the world-in-itself converge in the nascent field of Bayesian mechanics (421), which, like Newtonian mechanics, serves as both a scientific description and an account of the world-in-itself. Science's inferential nature provides optimal models, those sharing maximal logical structure with the vast inferential system composing the world-in-itself. In other words, science provides potent maps of the world's territory because of their shared logical structure.

Part I of this book claimed that all forms of existence are examples of inferential systems. Part II explored and described inferential systems, and Part III interpreted modern scientific findings in the domains of cosmology, quantum phenomena, biology, neural-based behaviour and culture as inferential systems. Almost certainly, many of this account's details are incorrect, but more accurate, universalist explanations of existence may soon emerge because many convergent frameworks are under development. And we may have confidence that this eventual explanation will describe existence in terms of knowledge and knowledge in terms of inferential processes.

 

References

1. Wootton, David. The Invention of Science: A New History of the Scientific Revolution. s.l. : Harper Perennial; Reprint edition, 2016. ISBN-10: 0061759538.

2. Wikipedia. Francis Bacon. Wikipedia. [Online] [Cited: September 26, 2021.] https://en.wikipedia.org/wiki/Francis_Bacon.

3. —. Sagan standard. Wikipedia. [Online] [Cited: July 11, 2021.] https://en.wikipedia.org/wiki/Sagan_standard.

4. —. David Hume. Wikipedia. [Online] [Cited: November 22, 2020.] https://en.wikipedia.org/wiki/David_Hume.

5. Kant, Immanuel. Prolegomena to Any Future Metaphysics. 1783.

6. Wikipedia. Immanuel Kant. Wikipedia. [Online] [Cited: February 14, 2021.] https://en.wikipedia.org/wiki/Immanuel_Kant.

7. —. Ludwig Wittgenstein. Wikipedia. [Online] [Cited: February 15, 2021.] https://en.wikipedia.org/wiki/Ludwig_Wittgenstein.

8. Wittgenstein, Ludwig. Tractatus Logico-Philosophicus. New York : [Reprinted, with a few corrections] Harcourt, Brace, 1933.

9. Wikipedia. Nihilism. Wikipedia. [Online] [Cited: February 24, 2021.] https://en.wikipedia.org/wiki/Nihilism.

10. Huxley, Thomas Henry. Evidence as to Man's Place in Nature. s.l. : Williams & Norgate, 1863.

11. Darwin, Charles. The Origin of Species. Sixth edition. New York : The New American Library, 1958. First published 1872. pp. 391-392.

12. Korzybski, Alfred. Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. s.l. : International Non-Aristotelian Library Publishing Company, 1933.

13. Wikipedia. Church-Turing thesis. Wikipedia. [Online] [Cited: February 23, 2021.] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis.

14. —. Church-Turing-Deutsch principle. Wikipedia. [Online] [Cited: February 23, 2021.] https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle.

 



[1] Sometimes these eureka-moment celebrations can be deflating. The cognitive scientist Geoffrey Hinton (born 1947) is reported to have announced to his family that he believed he had finally figured out how the human brain worked. His fifteen-year-old daughter replied: ‘Oh Daddy, not again!’.


Friday, 18 June 2021

Existential Knowledge

John O. Campbell

This is an excerpt from the book The Knowing Universe

Dictionary.com chose existential as its 2019 word of the year due to a spike in internet searches, partly in response to Bernie Sanders' and Greta Thunberg's descriptions of global warming as an existential crisis (1). Dictionary.com defines existential as:

concerned with existence, especially human existence as viewed in the theories of existentialism.

Historically, our species has tended to view its existence as given and unalterable, and this current interest in existential threats is heartening. Perhaps our view of existence is shifting from a static conception to a more dynamic one. Static concepts of existence dominated the western philosophical tradition before Darwin; the consensus view held that all existing things came into existence in a single event and would retain their initial forms until the end. For example, it was a consensus among western intellectuals that God created all species during the six days of creation and that this original group of species had since remained static.

Darwin's findings compel us to consider existence in a new, dynamic light; even entire species may have only brief durations. Since the beginning, novel entities embodying new strategies have winked into existence, and those no longer able to resist natural tendencies towards dissipation have winked out. The current popularity of the word existential might be due to some lingering, justified unease about prospects for our continued existence.

As this book argues, existence is fragile and entirely dependent upon knowledge inferred from the evidence. Cultures are not immune to this winnowing process. Human history documents an unending succession of cultures whose shockingly brief time in the sun is cut short through fatal flaws in inferential abilities. We have cause for worry when dire circumstances outmatch our nascent inferential abilities.

Some aspects of cultural inferences are near flawless. Our many scientific inferences efficiently develop powerful economies and technologies; these scientific models are flawlessly updated with the latest available experimental evidence, becoming ever more powerful. However, other cultural models, especially models concerning our use of technology, are neglected, and there is resistance to updating them with the evidence. Models concerning the usage of technologies form an essential part of cultural regulation, and neglect of these models may pose our most considerable existential challenge.

It is common for entities to wink out of existence when their models for achieving existence fall into error and are no longer sufficient to maintain them within changed circumstances. For example, trilobites, among the most advanced life forms during the Late Ordovician, met extinction due to the volcanic release of greenhouse gases and subsequent planetary warming (2). Trilobites' genetic knowledge for maintaining their existence was overwhelmed by new and unforeseen circumstances. Our cultural existence is not currently challenged by volcanic greenhouse gases but by greenhouse gases whose production is entirely under our control.

During almost all of our species' history, developing models for the wise application of technology has been relatively straightforward: use any available technology to maximize resource extraction in the service of biological existence. This strategy of maximizing resource extraction has been hard-wired into our biological ancestors since the beginning, and the exponential increase in our numbers evidences its success. And our conservative inclinations counsel us not to tinker with a successful strategy. After all, existence is difficult, and if we already have a model that successfully achieves existence, why fix something that is working?

The wisdom of conservative strategies raises the interesting question of when we should alter our basic cultural models. What weight of evidence is sufficient to motivate adjustments to our primary strategies for existence? The short answer is that we should heed the scientific evidence.

Science is by far our most powerful process for understanding the workings of natural systems. Powerful scientific models are responsible for the modern technologies that vastly increase our health and wealth and provide the improved circumstances in which we exist. It only stands to reason that if we are going to use science to alter our world, we should also use science to understand, as fully as possible, the implications these alterations hold for our existence. The failure to correctly apply scientific knowledge to strategies for existence opens the possibility of our extinction. Only scientific knowledge is capable of guiding us through this challenge.

The scientific method is routinely applied as a rigorous inferential system in laboratories and universities worldwide to understand natural processes and employ them in technological applications. Science and technology provide great power, but that power is inherently dangerous to us. Great power tends to provide increased forces towards dissipation, which is as true within the cultural domain as in any other. As we have seen, the only way great power can be compatible with existence is through efficient regulation discovered through careful inference. One of my mother's wise sayings that has stuck with me is:

Fire is a good servant but a bad master.

Fire is an example of a great power bestowing great benefits, but if we have a fire in our homes, it is best contained in a stove where it can be regulated and held subservient. If we allow it to burn unconstrained outside of the stove, it follows an independent trajectory, one that may threaten our existence. Given that human actions are now the most powerful influences on our planet, we must model their effects as accurately as possible and regulate our actions so that our planet's future states provide for our existence.

Perhaps the primary obstacle preventing us from coming to grips with our predicament is a predilection for outmoded biological values such as resource accumulation and status. These values emerge as strivings for wealth in a cultural context, and although science and technology could easily provide sufficient resources for all, we remain obsessed with these strivings. Societies have become structured to facilitate competition for wealth, and this competition has concentrated wealth in a small portion of the population, often known as the one percent.


Within the US, for example, the top 1% now own much more wealth than the bottom 90% of the population (3), a concentration of wealth and power that Pope Francis lambasted as obscene (4):

The earth, entire peoples and individual persons are being brutally punished. And behind all this pain, death and destruction there is the stench of what Basil of Caesarea called "the dung of the devil". An unfettered pursuit of money rules. The service of the common good is left behind. Once capital becomes an idol and guides people's decisions, once greed for money presides over the entire socioeconomic system, it ruins society, it condemns and enslaves men and women, it destroys human fraternity, it sets people against one another and, as we clearly see, it even puts at risk our common home.

His last point, implicating current levels of wealth inequality as a risk to our common home on mother earth, deserves some unpacking within the context of existential threats.

References

1. Dictionary.com. Dictionary.com's word of the year for 2019 is existential. Dictionary.com. [Online] [Cited: July 16, 2020.] https://www.dictionary.com/e/word-of-the-year/.

2. Late Ordovician mass extinction caused by volcanism, warming, and anoxia, not cooling and glaciation. Bond, David P.G. and Grasby, Stephen E. s.l. : Geological Society of America, 2020, Vol. 48. https://doi.org/10.1130/G47377.1.

3. Saez, Emmanuel and Zucman, Gabriel. Wealth Inequality in the United States Since 1913: Evidence From Capitalized Income Tax Data. [NBER Working Paper] s.l. : National Bureau of Economic Research, 2014.

4. Kozlowska, Hanna. Pope Francis: Unfettered capitalism is "the dung of the devil". Quartz. [Online] [Cited: 3 3, 2016.] http://qz.com/450445/pope-francis-unfettered-capitalism-is-the-dung-of-the-devil/.

 

Monday, 24 August 2020

Are we phenotypes or genotypes?


This post is an excerpt from the book: The Knowing Universe.

The relationship between biological genotypes and phenotypes forms the central subject matter of population genetics and evolutionary biology. Every organism, including us, may be viewed from either perspective - a duality that may pose a challenge to an integrated self-identity. For us this duality is even more problematic because our lives take place within cognitive and cultural arenas as well as the biological one. And each of these arenas may be understood in terms of a generalized genotype, including mental, cultural, and genetic models, and a generalized phenotype, including behaviours, cultural practices and biological phenotypes.

Phenotypes engage in what Darwin termed a struggle for existence, and their success in this struggle provides evidence that updates their species' genomic model. In part, the phenotype is an experiment whose purpose is to provide evidence for the evolutionary process, evidence that increases the species' knowledge in its struggle for existence.

Some might consider that this inferential view of life reduces the role of phenotypes to that of disposable or mortal experimental probes - that our purpose as phenotypes is merely the testing and selection of genetic knowledge. This view suggests that the purpose of life is to evolve a nearly timeless and immortal repository of knowledge concerning increasingly powerful strategies for life's existence, and that phenotypes exist merely to provide experimental evidence as grist for this mill of evolving knowledge.

It may be difficult for us to accept a diminished role for phenotypes. After all, for centuries before the existence of genetics was even known, the phenotype was biology’s sole focus of study. More personally, we identify with the phenotypic aspect of our lives. We experience ourselves and our short mortal lives as phenotypes as we strut and fret our hour upon the stage. We are actors following genetic scripts, and those scripts almost completely define us. A timeless and endlessly creative bard wrote those scripts, and surely the bard’s role is more admirable than is the actor’s.

It is difficult to accept that all our strutting and fretting may be mainly in vain, that any lasting influence or meaning is unlikely. This bleak view offers us little consolation in our brief, perhaps even disposable, phenotypic roles. We might gain some solace if we shift our focus from our mortal phenotypes to the more nearly immortal aspects of ourselves, our generalized genotypes in the form of our genomes, learned cognitive models and cultural knowledge.

While it is true that those aspects of our generalized genotypes marking us as unique individuals are nearly as mortal as our phenotypes, some may prove meaningful for the future. Regardless of our contribution, it is our generalized genotypes that directly connect us to a more nearly immortal chain of being. In this context, we assume a cosmic identity within an eternally evolving nature, or as Spinoza and Einstein saw it, a perpetually evolving God.

On the other hand, the purpose of our timeless genetic knowledge is to achieve existence, and existence is the realm of the phenotype. In this view, the purpose of life is to discover phenotypes that can exist. For ourselves, our life as a generalized phenotype motivates us to take part in the cutting edge of evolution and to create novel forms that may have a continued existence. These activities may include having children and participating in the cultural and social issues of our time.

The present is our time, as phenotypic players, to strut our stuff on the stage of existence. It is our turn to bring to life, one more time, that dead, musty knowledge stored in timeless genotypes. We are the growing green shoots of evolution; we revive that dead knowledge and give it a dynamic new interpretation. We play our role in a reinvigorated drama of exploration into the unformed and unknown future, maybe even leaving behind some novel ad-libs for the inspiration of future players.

Perhaps a middle ground in deciding the relative merits of our identities, either as generalized genotypes or generalized phenotypes, is in order; we must consider that both are but two sides of the same coin. They are two elements of an inferential system, and their synergies may provide a more suitable context for understanding our existence than either element alone. As an inferential system, our generalized genotypes contain timeless knowledge for bringing structures and behaviours (generalized phenotypes) into the world, and the evidence generated from their existential success updates generalized genotypes to greater existential knowledge. Perhaps a more balanced identity for us is that of an inferential system as it endorses both our short-term strutting and fretting and our longer-term role in the evolution of knowledge.


Thursday, 21 November 2019

Inferential Systems and the Causal Revolution

John O. Campbell

This is an excerpt from the book: The Knowing Universe.

As we have discussed, an important aspect of inferential systems is their autopoietic form; inferential systems cause entities to exist. That is, the knowledge which models accumulate is causal knowledge capable of causing the specific structures, adaptations and regulation which form the entity and allow it to resist natural forces towards dissipation.

Unfortunately, scientists and statisticians often consider ‘cause’ something of a forbidden concept because data or evidence, on its own, can indicate a correlation but not a causal relationship. An oft-quoted example is that the rising sun and the crowing rooster are correlated, but the rooster's crowing does not cause the sun's rising. Something beyond data is required to establish causality.

A recent scientific innovation led by Judea Pearl, sometimes called the ‘causal revolution’ (128; 129), may have completed the logic of science as described by Bayesian probability (61). In particular, the causal revolution provides some important new understanding that illuminates the autopoietic nature of the inferential systems underlying existence.

While the causal revolution has provided methods of understanding new to science, these methods may have been discovered unconsciously in the course of human genetic evolution. It is widely understood that sometime between 70,000 and 30,000 years ago, a series of mutations in the genes specifying our neural structures resulted in a ‘cognitive revolution’ that powered our species' cultural evolution and provided key abilities enabling our (perhaps temporary) ascendancy over other biological forms. Some researchers attribute this cognitive revolution to a subconscious realization of causal reasoning, the same causal reasoning that is now becoming consciously understood and that forms the basis of the scientific causal revolution (130; 129; 131).

In this view, the current scientific revolution is a mere rediscovery, in conscious terms, of a fitness-enhancing reasoning process which natural selection placed in our subconscious minds tens of thousands of years ago.

Unfortunately, so far the causal revolution has acknowledged causal reasoning as an ability possessed only by our species. In this section we will demonstrate that causal reasoning, like the more general process of Bayesian inference, is a key component of the process of evolutionary change which has brought into existence and evolved reality's many domains, including the cosmological, quantum, biological, neural and cultural domains (13).

The causal revolution is better cast as a recent scientific discovery of a general law of nature which underlies the many existing forms composing our universe and which has operated since the beginning, long before the evolution of our species. In that sense the causal revolution provides powerful tools for understanding inferential systems and general evolutionary processes as described by the theory of universal Darwinism (4; 6; 13; 9).

We will not only demonstrate that the relationship between causal reasoning and inferential systems provides a generalized context in which causal reasoning participates in the universe's many evolutionary processes but will also provide specific details of the causal mechanisms operating in inferential systems. The causal revolution, understood within the context of autopoietic inferential systems, provides insights into the widespread role of causality within natural systems.

Contrary to traditional statistics and the mantra of the ‘big data’ movement that ‘data is everything and everything is data’[1], Pearl points out that this perspective severely constrains the scope of statistics. Traditional statistics views data as describing only correlations. Correlations are not causes, as illustrated by the cliché of the crowing rooster and the rising sun.

Pearl demonstrates that descriptions of causal relationships require an ingredient beyond data: they also require a model that hypothesizes causal relationships. As Pearl describes it (129):

The model should depict, however qualitatively, the process that generates the data – in other words, the cause and effect forces that operate in the environment and shape the data generated.

The generative model described by Pearl is the same model that operates within inferential systems, a model composed of a competing family of hypotheses that explain how the evidence was generated or caused. Thus, effective models are constrained to model how existing entities and processes are generated or brought into existence.

Pearl’s preferred type of causal model is the causal diagram, which depicts the variables involved in a process as points and connects these points with arrows that hypothesize their cause-and-effect relationships. He is clear that these causal diagrams are hypotheses, best guesses as to the actual causal relationships, and that they must be subjected to the scrutiny of the data or evidence.
If the evidence does not support the diagram, Pearl suggests we should take another, perhaps informed, guess as to the actual relationships and test its implications against the data. We should continue with these guesses until a diagram is discovered that is consistent with all the data (129):

If the data contradict this implication, then we need to revise our model.

Rather than testing one hypothesis at a time, a more systematic approach, often seen in inferential systems, is to consider a model composed of a complete family of competing hypotheses describing the possible causal connections between the variables. Applying the data to the model then consists of updating the probability assigned to each hypothesis. Importantly, this type of model often simplifies quickly as the data eliminate unsupported hypotheses.
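
A minimal sketch of this approach, under my own simplifying assumptions (three hypothetical causal hypotheses that differ only in how strongly a cause X is supposed to produce an effect Y), might look like this:

```python
# Each hypothesis predicts P(Y=1 | X=1); the data update their probabilities.
hypotheses = {
    "X strongly causes Y": 0.9,
    "X weakly causes Y":   0.6,
    "X does not cause Y":  0.5,   # Y is a coin flip regardless of X
}
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

# Observed values of Y on occasions when X = 1 (illustrative data).
evidence = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

for y in evidence:
    # Bayes update: weight each hypothesis by the likelihood of this datum...
    for h, p in hypotheses.items():
        posterior[h] *= p if y == 1 else (1 - p)
    # ...then renormalize so the probabilities sum to one.
    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}

for h, v in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{v:.3f}  {h}")
# Hypotheses the data do not support lose probability with every observation,
# so the family of competing hypotheses quickly 'simplifies'.
```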

We might note Pearl’s reference above to ‘cause and effect forces’ described by causal models. As noted in previous sections, system regulators generically overcome the challenges to existence posed by physical law through the application of cause-and-effect forces. It is in this sense that the knowledge of inferential systems initiates a causal chain of forces which form and maintain systems within existence, and in this sense that inferential systems are autopoietic.

It is important to understand what Pearl means by causation:

For the purpose of constructing the diagram, the definition of ‘causation’ is simple, if a little metaphorical: a variable X is a cause of Y if Y ‘listens’ to X and determines its value in response to what it hears.

Although Pearl’s description is both clear and accurate, the term ‘listens to’ may be a little anthropomorphic when applied to abstract variables. Perhaps a better term is ‘receives information from’, and his definition of causation then becomes:

A variable X is a cause of Y if Y receives information from X and determines its value in response to that information.

In these terms, causation as defined by Pearl is an exact analog of inference: the models within inferential systems are updated by information or evidence in the sense that the probability of each hypothesis composing the model is updated in response to the information or evidence it receives. This update or response has been described in terms of force; the evidence may be said to ‘force’ some of a model’s hypotheses to be assigned greater probability and some less (21). Thus the model is shaped by the force of the evidence, and we may say that the evidence causes the resulting model.
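
One standard way to make this force metaphor concrete (my presentation, written in the spirit of Frank’s treatment (21), not a formula quoted from it) is to put Bayes’ theorem in log form:

\[
P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e)}
\qquad\Longrightarrow\qquad
\log P(h \mid e) = \log P(h) + \log \frac{P(e \mid h)}{P(e)}
\]

The second term is positive for hypotheses that predicted the evidence better than average and negative for those that predicted it worse, so the evidence acts as an additive displacement, a ‘force’, applied to each hypothesis’s prior weight.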

This analogy between causation and force is widely accepted. As Wikipedia tells us (132):

Causal relationships may be understood as a transfer of force. If A causes B, then A must transmit a force (or causal power) to B which results in the effect. 

In these terms, considering the close analogy between causation, inference and force, inferential systems may be considered the fundamental form of causation. It is forces which cause effects, and forces may always be understood in terms of the updating of inferential systems. This generic concept of force has been explored by Steven Frank (born 1957), who concludes that the process of inference may be considered as the force of data applied to models (21).

As we have seen in our previous discussion of physical forces, at the fundamental physical level a force is produced when information contained in a gauge boson updates the wave function of another quantum system. In Pearl’s terms we might say that the quantum system listens to the gauge boson and determines or updates its value, for example the value of its momentum, in response to what it hears.

Although nature has employed inferential systems since the beginning as its primary engine of existence, scientific understanding is only now becoming able to describe them more fully. The causal revolution provides important tools for describing the autopoietic aspect of inferential systems. In short, it allows us to describe inferential systems scientifically as initiating a cascade of causes whose effects are existing systems. The mechanisms by which causes lead to effects may be understood in terms of orchestrated or regulated forces, and these forces, as we have seen, are essential to overcoming the many natural obstacles to existence. The initiated causes must be highly orchestrated to achieve the effect of existence, and the knowledge required to achieve this orchestration is accumulated by the inferential system through the process of Darwinian evolution.

A breakthrough made by the causal revolution is its understanding that a full scientific description of a causal process must include a causal model of the hypothesized causal cascade. Central to our understanding of inferential systems is that this same causal model is necessary to orchestrate or regulate existing systems, as required by the good regulator theorem. In other words, causal models are an essential component of existing systems, and a scientific description that omitted them would therefore be incomplete.

Central to the causal revolution’s understanding is that causation often involves interventions in the normal or spontaneous unfolding of events. If an effect is caused, the normal range of possible outcomes is constrained to a single outcome, and this selection is described as occurring through an intervention in the normal course of events (128). In terms of inference, circumstances are causally orchestrated so that the received data update the model to strongly predict or initiate a single outcome or effect.
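
A minimal structural sketch of such an intervention (my construction, in the spirit of Pearl’s do-operator; the sprinkler example and its probabilities are illustrative assumptions):

```python
import random

def sample(do_sprinkler=None):
    """One day's events; `do_sprinkler` optionally overrides the sprinkler."""
    rain = random.random() < 0.3
    # Intervention point: the sprinkler's usual cause (a timer) is severed
    # and its value is fixed from outside.
    sprinkler = (random.random() < 0.1) if do_sprinkler is None else do_sprinkler
    wet_grass = rain or sprinkler
    return wet_grass

random.seed(1)
n = 10_000
p_wet = sum(sample() for _ in range(n)) / n
p_wet_do = sum(sample(do_sprinkler=True) for _ in range(n)) / n
print(f"P(wet grass), spontaneous course of events: {p_wet:.2f}")
print(f"P(wet grass) under do(sprinkler = on):      {p_wet_do:.2f}")
# The intervention constrains an outcome that was otherwise fairly unlikely
# (~0.37) to certainty, selecting a single outcome from the normal range.
```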

At the core of autopoietic inferential systems is their ability to cause outcomes: to intervene in the spontaneous course of events and select a single outcome that might otherwise be extremely unlikely. As a biological example, consider that the knowledge contained in an organism’s genome causes specific outcomes or effects. A given three-letter genetic codon identifies a single specific amino acid to be added to a protein, and a gene, composed of a string of codons, identifies a specific complete protein. In turn, that protein, if it is an enzyme, may catalyze a specific biochemical reaction, an outcome or effect that is extremely unlikely to occur without the participation of the enzyme. This biological cascade of knowledgeable causes, resulting in specific effects, illustrates the tight causal control or regulation exercised by autopoietic inferential systems.
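
The first step of that cascade is concrete enough to sketch in a few lines (the abridged codon table is standard biochemistry; the gene string and function are only my illustration):

```python
# Abridged standard codon table: each three-letter codon causes one
# specific amino acid to be added (or translation to stop).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly",
    "GAA": "Glu", "AAA": "Lys", "TAA": "STOP",
}

def translate(gene):
    """Read the gene three letters at a time, emitting the encoded protein."""
    protein = []
    for i in range(0, len(gene) - 2, 3):
        amino_acid = CODON_TABLE[gene[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCGAATAA"))  # ['Met', 'Phe', 'Gly', 'Glu']
```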

Pearl describes the causal revolution in terms of a ladder having three rungs which together fully describe causal systems. The first rung is characterized by correlations, the traditional subject matter of statistics. The second rung is characterized by causal interventions of the type we have just examined. The third rung involves counterfactuals, hypotheses regarding what might be rather than what is. A dictionary example of a counterfactual hypothesis is ‘If kangaroos had no tails, they would topple over’ (133).

While causal interventions describe the autopoietic aspect of inferential systems, in that they are required to ensure that the system’s knowledge causes a specific process of self-creation and self-maintenance, counterfactual hypotheses describe the evolutionary aspect of inferential systems, as they pose hypotheses which have never been tested before. Counterfactual hypotheses are tools for searching through the space of possibilities, a search for those possibilities which may be made actual, which may be brought into existence.

For example, the counterfactual hypothesis involving kangaroos and their tails may be coded in the genome of a kangaroo, where a mutational cause produces the effect of a tailless kangaroo offspring. The mutant gene codes the counterfactual hypothesis, which in effect asks whether a kangaroo with no tail would be reproductively successful (would not always topple over). If the resulting tailless kangaroo did achieve reproductive success, the possibility would be made actual and tailless kangaroos would achieve existence in the world.
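
As a toy model (entirely my own illustration; the mutation rate and fitness values are invented for the example), this counterfactual search amounts to mutation proposing ‘what if’ variants and selection deciding which of them become actual:

```python
import random

random.seed(2)

def survival_probability(has_tail):
    # Assumed payoffs: tailless kangaroos topple more often.
    return 1.0 if has_tail else 0.4

population = [True] * 100                 # all kangaroos start with tails
for generation in range(50):
    offspring = []
    for parent in population:
        trait = parent if random.random() > 0.01 else not parent  # mutation poses the counterfactual
        if random.random() < survival_probability(trait):         # selection tests it
            offspring.append(trait)
    population = offspring or population   # guard against total extinction

tailless = sum(1 for t in population if not t)
print(f"Tailless kangaroos after selection: {tailless}/{len(population)}")
# Under these assumed payoffs the counterfactual is tried and rejected;
# reverse the payoffs and tailless kangaroos would instead become actual.
```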

The causal revolution has extended scientific understanding of causation beyond correlations to include causal models, interventions and counterfactual hypotheses. I have provided examples from biology to illustrate how this extended understanding describes actual causal processes central to biology, and in chapter 3 we will see that this understanding is also central to domains of reality other than biology. In effect, autopoietic inferential systems in all domains cause their own existence and evolve through their exploration of counterfactual hypotheses.

However, it appears that those who forged the causal revolution consider it more a revolution in scientific calculation than in the understanding of actual phenomena found in nature. Pearl, for example, writes extensively on how causal models can assist the computation of relationships between variables, but nowhere does he suggest that these models actually exist in the natural world or that they play an essential role in bringing actual phenomena into existence (129). Instead he understands the findings of the causal revolution as calculational devices that allow us to process data and arrive at causal conclusions.

This is a common misunderstanding when a scientific revolution first brings to light some important new physical phenomenon for which there is little direct observational evidence. It occurred, for example, in the late 1800s, when many considered the controversial concept of atoms a mere shorthand for making scientific calculations rather than a reference to actual phenomena. When Boltzmann tried to publish his work deriving thermodynamics from atomic theory, his journal editors insisted that he refer to atoms as ‘Bilder’, merely models or pictures that were not ‘real’ (134). Again, in the early 1900s, when Mendel’s concept of genes was rediscovered, there ensued a great debate among biologists over whether Mendel’s theory conflicted with Darwin’s. The one thing agreed upon by most leading biologists was that Mendel’s ‘genetics’ was not a physical process but merely a means of making calculations. As recounted by the philosopher of science David Hull (37):

As much as Bateson might disagree with Pearson and Weldon about the value of Mendelian genetics, he agreed with them that it was unscientific to postulate the existence of genes as material bodies. They were merely calculation devices.

As in the case of atoms and genes, I am confident that history will demonstrate that the new calculational methods of the causal revolution describe actual physical reality. The task of science is to describe nature, and when new calculational tricks are discovered that provide powerful methods of describing reality at a deeper level, we may rest assured that this is because nature also uses those same tricks to create and maintain the actual phenomena; in other words, science has merely rediscovered and described nature’s own methods. This is the proper role of science.

References

1. Causal inference in statistics: an overview. Pearl, Judea. s.l. : Statistics Surveys, 2009, Vol. 3, pp. 96-146.

2. Pearl, Judea and Mackenzie, Dana. The Book of Why: The New Science of Cause and Effect. s.l. : Basic Books, 2018. ISBN-10: 046509760X.

3. Jaynes, Edwin T. Probability Theory: The Logic of Science. s.l. : Cambridge University Press, 2003.

4. Harari, Yuval Noah. Sapiens: A brief history of humankind. s.l. : Harvill Secker, 2014.

5. Boyer, Pascal. Minds make societies: How Cognition Explains the World Humans Create. s.l. : Yale University Press, 2018.

6. Universal Darwinism as a process of Bayesian inference. Campbell, John O. s.l. : Front. Syst. Neurosci., 2016, System Neuroscience. doi: 10.3389/fnsys.2016.00049.

7. Dennett, Daniel C. Darwin's Dangerous Idea. New York : Touchstone Publishing, 1995.

8. Blackmore, Susan. The Meme Machine. Oxford, UK : Oxford University Press, 1999.

9. Bayesian Methods and Universal Darwinism. Campbell, John O. s.l. : AIP Conference Proceedings, 2009. BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: The 29th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP Conference Proceedings, Volume 1193. pp. 40-47.

10. Simple unity among the fundamental equations of science. Frank, Steven A. s.l. : arXiv preprint, 2019.

11. Wikipedia. Causal reasoning. Wikipedia. [Online] [Cited: May 26, 2019.] https://en.wikipedia.org/wiki/Causal_reasoning.

12. Google dictionary. Counterfactual. Google dictionary. [Online] [Cited: June 2, 2019.] https://www.google.com/search?q=counterfactual.

13. Wikipedia. Ludwig Boltzmann. Wikipedia. [Online] [Cited: June 7, 2014.] http://en.wikipedia.org/wiki/Ludwig_Boltzmann.

14. Hull, David L. Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. Chicago and London : The University of Chicago Press, 1988.



[1] If in doubt as to the widespread use of this meme, try googling it.