Winning the Universal Lottery: God, the Multiverse and Fine Tuning


The following is an excerpt from “Does the Universe Paint God Out of the Picture?” by Luke Baxendale. This is part four of four in the book.

“Astronomy leads us to a unique event, a universe which was created out of nothing, one with the very delicate balance needed to provide exactly the right conditions required to permit life, and one which has an underlying (one might say ‘supernatural’) plan.”
— Arno Penzias, Physics Nobel Prize winner

A Life-Permitting Universe

In the realm of astrophysics, the name Sir Fred Hoyle is likely to resonate with many. Hoyle, who began his scientific career as a staunch atheist, held the conviction that there was no evidence of God in the universe. He argued that “religion is but a desperate attempt to find an escape from the truly dreadful situation in which we find ourselves… No wonder then that many people feel the need for some belief that gives them a sense of security, and no wonder that they become very angry with people like me who think that this is illusory.”[i] His atheism significantly influenced his scientific perspective, predisposing him to dismiss the notion that the universe had a beginning.

However, Hoyle’s atheism was shaken by a groundbreaking discovery. He identified a set of parameters, now known as the “fine-tuning” parameters of the universe, which revealed that numerous properties of the universe fall within exceptionally narrow and improbable ranges. These properties are essential for the chemistry that supports complex life, indeed for any conceivable form of life. Physicists have since labelled the fortunate values of these factors as “anthropic coincidences” and the convergence of these coincidences as the “anthropic fine-tuning” of the universe.

Since the 1950s, every scientific discovery has added to the kaleidoscopic picture of an increasingly complex and finely balanced universe. It has become apparent that the existence of life in the universe relies on a highly improbable dance of forces, features, and a delicate equilibrium among them. Our “Goldilocks universe” (not too hot, not too cold, but just right) seems to be characterised by fundamental forces of physics with just the right strengths, contingent properties with the perfect characteristics, and an initial distribution of matter and energy that constituted the precise configuration to support life. Even the slightest difference in these properties would have rendered complex chemistry and life impossible. The fine-tuning of these properties has not only bewildered physicists due to their extreme improbability, but also because there appears to be no underlying physical reason or necessity for their existence according to the fundamental laws of physics or mathematics.

For instance, carbon-based life is the sole known form of life, and carbon possesses unique qualities that render it ideal for complex chemistry and life. Throughout his career, Hoyle contemplated the factors that needed to be perfectly calibrated for carbon to be readily produced within stars. These factors include the strengths of the strong nuclear and electromagnetic forces, the ratios between fundamental forces, the precise kinetic energy of beryllium and helium, the strength of gravitational forces within stars, and the excitation energy of carbon. Hoyle deduced that these factors required exquisite tuning and coordination within remarkably narrow tolerances to facilitate the synthesis of substantial amounts of carbon inside stars.

Astounded by these “cosmic coincidences” and numerous others that physicists have uncovered since the 1950s, Hoyle became convinced that an intelligent force must have orchestrated the intricate balance of forces and factors in nature, rendering the universe life-permitting. Nevertheless, the fine-tuning parameters Hoyle discovered represent only a fraction of the parameters necessary to ensure a universe that could allow for life.

While some examples of fine-tuning are subject to dispute and complex debates surrounding probability calculations, numerous well-established instances of fine-tuning are widely accepted by most scientists. These examples highlight the exceedingly narrow probabilities of finely tuned constants necessary for the existence of life:

  • Gravitational constant: 1 part in 10^34
  • Electromagnetic force versus the force of gravity: 1 part in 10^37
  • Cosmological constant: 1 part in 10^90
  • The mass density of the universe: 1 part in 10^59
  • The expansion rate of the universe: 1 part in 10^55

A conservative estimate might suggest around 20 to 30 such constants and parameters are commonly considered when discussing the fine-tuning of the universe, though this number can vary based on the breadth of factors included in the analysis.

To truly appreciate the magnitude of these probabilities, imagine the task of firing a bullet towards the other side of the universe, twenty billion light-years away, and accurately striking a one-inch target. This awe-inspiring feat underscores the sheer improbability of the finely tuned constants essential for the existence of life.
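To make the analogy concrete, the odds can be estimated with a few lines of arithmetic. Below is a minimal Python sketch; it assumes a one-inch-diameter target and treats the shot as equally likely to travel in any direction, so the figures are purely illustrative:

```python
import math

LIGHT_YEAR_M = 9.461e15            # metres in one light year
distance = 20e9 * LIGHT_YEAR_M     # twenty billion light years, in metres
target_radius = 0.0254 / 2         # one-inch-diameter target, in metres

# Chance of a randomly aimed shot hitting the target: target area
# divided by the surface area of a sphere at that distance.
p = (math.pi * target_radius**2) / (4 * math.pi * distance**2)
print(f"Roughly 1 chance in 10^{-math.log10(p):.0f}")  # about 1 in 10^57
```

That order of magnitude is comparable to the tolerances listed above.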

However, these examples merely scratch the surface of the intricate fine-tuning within our universe. Following Hoyle, one of the world’s most renowned mathematicians, Sir Roger Penrose, delved deeper into the precision of the universe’s fine-tuning. Penrose meticulously examined the fine-tuning of the initial distribution of mass-energy, also known as the “initial entropy” fine-tuning, and his findings revealed an even more astonishing level of precision in our universe’s delicate balance.

Initial-Entropy Fine-Tuning

To start, let’s talk about entropy. Entropy, often associated with disorder, measures the number of ways a system’s particles, like atoms or subatomic particles, can be arranged. When entropy is low, systems tend to be more ‘ordered,’ and as entropy increases, ‘disorder’ typically increases too. For the universe to develop structured systems like galaxies and solar systems, it had to start from a state of relatively low entropy, where the distribution of mass and energy was highly specific and uniform.
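The standard statistical definition, due to Boltzmann, makes this “number of arrangements” idea precise:

```latex
S = k_B \ln W
```

where S is the entropy, k_B is Boltzmann’s constant, and W is the number of microscopic arrangements (microstates) compatible with the system’s macroscopic state. Few possible arrangements means low entropy and high order; many arrangements means high entropy.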

Consider black holes: they epitomise high-entropy states. Within a black hole, the conventional concepts of space and matter break down. This entropy doesn’t imply chaos in the usual sense, but reflects a vast number of ways matter and energy could be organised at the event horizon, the black hole’s boundary.

In contrast, our universe reflects a state of lower entropy. This is evident in the formation of structured, organised entities like galaxies, solar systems, and stars. These cosmic structures formed through the pull of gravity, organising matter into complex patterns that seem to defy entropy’s increase. However, this organisation on a cosmic scale is consistent with the overall increase in entropy, according to the laws of thermodynamics.

Exploring further, the early state of our universe, especially its mass and energy distribution, was characterised by low entropy. This crucial condition set the stage for the development of large-scale cosmic structures like galaxies over time. In a universe with high entropy, matter would likely either be too evenly dispersed or end up trapped within black holes, thus hindering the formation of galaxies and stars. Therefore, the presence of organised cosmic structures in our universe is a clear indication of its low-entropy origins.

Sir Roger Penrose sought to determine the probability of our universe exhibiting the low-entropy, highly ordered arrangement of matter observed today. He understood that by answering this question, he could gauge the fine-tuning of the initial arrangement of matter and energy at the beginning of the universe. Penrose concluded that the formation of a universe like ours, replete with highly ordered configurations of matter, necessitated an astoundingly improbable low-entropy set of initial conditions. Penrose used principles from thermodynamics, general relativity, and cosmology to analyse the initial conditions of the universe. He considered the gravitational degrees of freedom related to the distribution of matter and energy at the beginning of the universe. By comparing the phase space volume corresponding to the observed low-entropy state to the total phase space volume of all possible configurations, Penrose could calculate the probability of our universe starting in the highly ordered, low-entropy state that it did. Considering the vast range of potential entropy values for the early universe, he calculated that the likelihood of a universe possessing initial conditions conducive to life is 1 in 10^(10^123).[ii] That is 10 raised to the power of 10^123: a 1 followed by 10^123 zeros. That’s a big number.
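Simplifying heavily, the skeleton of Penrose’s estimate can be written down. Since phase-space volume grows exponentially with entropy (in the Boltzmann form given earlier), the fraction of all configurations as ordered as our universe’s initial state is roughly:

```latex
P \sim \frac{W_{\text{initial}}}{W_{\text{max}}}
  = \frac{e^{S_{\text{initial}}/k_B}}{e^{S_{\text{max}}/k_B}}
  \approx \frac{1}{10^{10^{123}}}
```

where S_max ≈ 10^123 k_B is Penrose’s figure, obtained via the Bekenstein–Hawking black-hole entropy formula, for the maximum entropy available to the observable universe’s mass-energy.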

To put this figure into perspective, it is worth noting that physicists estimate the entire universe contains a mere 10^80 elementary particles, an insignificant fraction of 10^(10^123). Even if all the matter in the universe were converted into paper, it would still be insufficient to print the zeros required to write this number out in full.

This probability quantifies the extraordinary precision of the fine-tuning of the universe’s initial conditions. In other words, Penrose’s calculated entropy suggests that, among the nearly infinite potential configurations of mass and energy at the universe’s beginning, only a minute fraction would lead to a universe resembling ours.

Theistic Proposal for Fine-Tuning

The Stanford Encyclopedia of Philosophy notes that “the apparent probability of all the necessary conditions sufficient to allow just the formation of planets (let alone life) coming together by chance is exceedingly minuscule.” This observation raises a fundamental question within the realm of scientific inquiry: how can we account for these extraordinary “coincidences”, these royal flushes turning up hand after hand? Could it be reasonable to consider the possibility that a purposeful entity has orchestrated the system? In the context of the fine-tuning conundrum, is it justifiable to propose a grand designer as an explanatory hypothesis?

Conversely, might a thoroughly naturalistic explanation suffice to account for the fine-tuning? Is the fine-tuning, as Richard Dawkins has framed the issue, “just what we would expect” if “at bottom there was no purpose, no design… nothing but blind pitiless indifference?” In a similar vein, can we identify a coherent series of explanations for the fine-tuning of the laws and constants of physics, as well as the initial conditions of the universe, “which ultimately reaches to the fundamental laws of nature and stops,” that the theoretical physicist and philosopher Sean Carroll says naturalism requires?

These contrasting perspectives exemplify the ongoing debate within the scientific and philosophical communities surrounding the fine-tuning of the universe. On one hand, some argue that the improbable arrangement of the universe’s properties suggests the involvement of an intelligent force in orchestrating the system. Conversely, advocates of naturalism assert that the fine-tuning can be fully accounted for through a succession of interconnected natural phenomena, obviating the need for a designer.

We will start by presenting arguments that support the idea of an intentional cause or intelligence behind the fine-tuning of the universe. This assertion is based on the observation that the fine-tuning of our universe meets two criteria: immense improbability and functional specification. In our experience, these criteria consistently indicate the involvement of a designing intelligence.

Intriguingly, these finely balanced variables of our universe are characterised as being:

  • Contingent (they could have been different, e.g. the mass of a proton or the expansion rate of the universe could have been quite different from what they actually are);
  • Extraordinarily improbable, balanced to within functionally infinitesimal tolerances;
  • Independently specifiable (they correspond precisely to the conditions necessary for life).

These three features collectively constitute what is referred to as the ‘design filter.’ In our experience, such features arise exclusively from the actions of intelligent agents, much like the fine-tuning required in engineered systems. For example, the precise engineering of a machine necessitates the involvement of a skilled engineer who intentionally adjusts each component to function correctly.

To further illustrate this concept (although with a major oversimplification), consider the simple analogy of baking a cake. The ingredients for a cake—such as flour, sugar, eggs, and baking powder—can vary in quantity. However, for the cake to turn out well, these ingredients need to be measured with precision. If you alter the quantity significantly, the cake might not rise or could taste terrible. The exact measurements and timing are crucial. The oven temperature and baking time need to be exact; otherwise, the cake could burn or remain uncooked. This precision is similar to how certain constants in the universe are finely tuned to allow life to exist. Each step in the recipe corresponds to specific conditions necessary for the desired outcome, akin to how certain conditions in the universe precisely meet the requirements for life. For example, the order in which ingredients are mixed and the method of mixing can affect the texture and structure of the cake. The process of baking a good cake can be seen as a ‘recipe filter.’ In our experience, such precise outcomes from a recipe generally arise from the deliberate actions of a baker, who carefully measures and mixes ingredients to achieve a specific result.

By extrapolating this cause-effect relationship, we could reasonably suggest that the fine-tuning of the universe most likely required intelligent input.

Furthermore, mathematician William Dembski’s work indicates that physical systems or structures exhibiting a highly improbable combination of factors, conditions, or arrangements of matter, and embodying a significant “set of functional requirements,” invariably originate from intelligent design rather than undirected material processes. This is consistent with our uniform experience.

The universe contains a multitude of “dials” (constants of nature) that could adopt a wide array of alternative settings (values). Yet, each dial is calibrated precisely to permit the emergence of life. The apparently miraculous assignment of numerical values to these fundamental constants fosters the inescapable impression that the current structure of the universe has been meticulously conceived. To be fair, this is an intuitive response.

Consider this analogy: suppose you purchased a lottery ticket and won with the chosen numbers, then continued to buy tickets and win millions every weekend for the rest of your life. At some point, you would cease to perceive this as mere coincidence and instead infer that the lottery system has been manipulated in your favour; perhaps someone has tweaked the system. Likewise, the fine-tuning of our world, many times more unlikely by chance than winning every single lottery, logically suggests the presence of an ultimate fine-tuner, whom theists refer to as “God.”

The core of this argument is that the universe’s fine-tuning displays two key characteristics — extreme improbability and functional specification — that consistently evoke a sense of, and justify an inference to, intelligent design. The renowned Cambridge physicist and Nobel laureate Brian Josephson has put his confidence in intelligent design as the optimal explanation for the conditions that enable evolution at “about 80%.”[iii] The late Henry Margenau, an esteemed professor of physics at Yale, stated that “there is a mind which is responsible for the laws of nature and the existence of nature and the whole universe. And this is consistent with everything we know.”[iv]

Intriguingly, even physicists who maintain a materialistic perspective have acknowledged the implications of fine-tuning as suggestive of intelligent design. Atheist physicist George Greenstein admitted that despite his materialistic inclinations, “the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that, suddenly, without intending to, we have stumbled upon scientific proof for the existence of a supreme being? Was it a God who providentially stepped in and crafted the cosmos for our benefit?”[v] Richard Dawkins, a renowned and outspoken atheist, acknowledged the persuasive nature of the fine-tuning argument during his discussion with Francis Collins on the Premier Unbelievable podcast. Although not endorsing the fine-tuning argument himself, Dawkins admits that it presents an intriguing case.

Stephen C. Meyer, in his book ‘Return of the God Hypothesis’, has argued for the theistic implications of fine-tuning in a slightly different manner. His argument can be summarised as follows:

  • Major Premise: Based on our knowledge of intelligently designed objects, if an intelligent agent acted to design the universe, we might expect the universe to exhibit (a) discernible functional outcomes (such as living organisms) dependent on (b) finely tuned or highly improbable conditions, parameters, or configurations of matter.
  • Minor Premise: We observe (b) highly improbable conditions, parameters, and configurations of matter in the fine-tuning of the laws and constants of physics and the initial conditions of the universe. These finely tuned parameters (a) make life (a discernible functional outcome) possible.
  • Conclusion: We have reason to believe that an intelligent agent acted to design the universe.

If the concept of intelligent design is the most plausible explanation for the universe’s fine-tuning, it logically leads to the proposal of an intelligent agent that exists beyond the confines of the universe. This is because the agent must possess the ability to set the universe’s fine-tuning parameters and its initial conditions from the very moment of creation. Evidently, no being within the universe, having emerged after its inception, could have influenced the fine-tuning of the physical laws and constants essential for its existence and evolution. Such a being would also be incapable of setting the initial conditions upon which the universe’s subsequent evolution and existence rely. Therefore, an intelligence within the universe, such as an extraterrestrial entity (alien), is insufficient to explain the origin of this cosmic fine-tuning.

The overarching fine-tuning of the universe aligns more coherently with an intelligent agent that transcends the material cosmos. Theistic perspectives, which envisage God as existing independently of the material universe in a timeless, eternal realm, align with this explanation. Theism can provide a causally adequate account for the universe’s origin in time, its fine-tuning from the onset, and the emergence of specific information after the universe’s beginning, necessary for the genesis of the first living organisms.

Many might quickly dismiss a theistic interpretation of the universe’s fine-tuning — one that attributes the precise conditions to the deliberate intentions of a higher intelligence. However, these dismissals are often based more on personal bias than on solid scientific reasoning. Often, reluctance to accept this viewpoint may stem from discomfort with a theistic explanation. Thus, it is vital to engage in these discussions with intellectual honesty and an openness to various perspectives, always prioritising the pursuit of truth. This case for intelligent design rests on a robust scientific basis. Reflecting on his discovery, Sir Fred Hoyle stated, “a common-sense interpretation of the facts suggests that a super-intellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.”[vi]

Some time ago I discussed the topic of fine-tuning with a naturalistic atheist, suggesting that such precision seemed more in line with a theistic perspective of a transcendent intelligence than with strict naturalistic assumptions. He did not take my suggestion kindly and became angry and rude in his response. As the conversation intensified, he unexpectedly admitted, perhaps more out of frustration than intent, that he would rather entertain any other explanation than the notion that the cosmic fine-tuning could have originated from a deliberate, intelligent choice. His resistance was not based on logical reasoning but rather on a deep-seated anti-theistic stance.

The cosmological fine-tuning we observe aligns more closely with the workings of a deliberate, designing mind than with a random, aimless process. This is not the universe of “blind, pitiless indifference” that Richard Dawkins claimed it to be. Our experiences and observations suggest that such precise calibration typically originates from intelligent agency. Given that philosophical naturalism rejects any pre-universe intelligent agent, its adherents would logically expect a universe where phenomena are exclusively explained by fundamental physical laws, without the need for fine-tuning. Yet, these laws themselves do not account for either the fine-tuning of the initial conditions of the universe or the contingent features of the physical laws (the fine-tuning of their constants) necessary for sustaining a life-permitting universe.

To be clear, science is not beholden to the constraints of naturalism, and naturalism cannot claim exclusive dominion over the advancement of scientific knowledge. Instead, science illuminates the path towards the metaphysical paradigm that most elegantly aligns with our provisional understanding.

The fact is, our observations lead us to understand that systems exhibiting such fine-tuning are usually the result of intelligent intervention. Naturalism, denying any intelligence predating the universe, would seem unable to account for an entity capable of influencing the observed fine-tuning.

Naturalistic Explanations for the Fine-Tuning

“He who knows only his own side of the case knows little of that.”
— John Stuart Mill

I’m sure we’ve all been there, listening intently to someone’s viewpoint, only to realise you’ve been too quick to judge them based on an oversimplified version of their argument. It’s human nature — our tendency to jump to conclusions or craft a straw man to knock down easily. Just like a detective would avoid jumping to hasty conclusions or an experienced chess player would think twice before making an impulsive move, we should treat our judgements and conclusions with similar caution.

One of the most enlightening ways to avoid making premature conclusions is to hold them up against alternative views, contrasting and comparing, letting them spar in the arena of ideas. By understanding and challenging other perspectives, we refine our own.

So, we need to be cautious about our conclusions regarding the cosmic mystery known as the fine-tuning phenomenon. Before wrapping ourselves in the cosy blanket of a theistic interpretation, it’s our duty to explore every nook and cranny of naturalistic explanations. This part of the book delves into which perspective, theistic or naturalistic, stands up to the rigours of analysis and provides the most coherent explanation for the marvel of our finely-tuned universe.

Physicist Paul Davies has marvelled, “the really amazing thing is not that life on earth is balanced on a knife-edge, but that the entire universe is balanced on a knife-edge, and would be total chaos if any of the natural ‘constants’ were off even slightly.”[vii] Stephen Hawking, in relation to the fine-tuning of cosmological constants, observed, “the remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”[viii] Slight variations in the strength of any of these constants or their ratios would preclude the possibility of life. Martin Rees, an emeritus professor of cosmology and astrophysics at the University of Cambridge, aptly encapsulated the situation: “Nature does exhibit remarkable coincidences.”[ix]

Wait, what… Coincidences? Given the staggeringly slim odds for a universe to be stable enough to support life, dismissing it as mere “coincidence” seems intellectually untenable. There must be an explanation, which has prompted many to undertake the challenge of providing alternative explanations. These alternative perspectives undoubtedly merit further examination and discussion.

The Weak Anthropic Principle

In 1974, physicist Brandon Carter offered a naturalistic explanation for the universe’s fine-tuning, known as the “weak anthropic principle” (WAP). This principle suggests that it shouldn’t surprise us to find ourselves in a universe fine-tuned for life, as only a fine-tuned universe could produce conscious observers like us. This leaves open the question of what causes the finely tuned constants to be set as they are, but seeks to reduce our interest in the question.

Consider the following analogy to help explain the weak anthropic principle: imagine you are a fish living in a small pond filled with water. You might wonder why the pond is filled with water, since that is the very substance that enables you and other fish to survive. According to the fish version of the principle, which we could call the weak ichthyic principle, you shouldn’t be surprised to find that the pond is filled with water, because if it were not, you wouldn’t be there to observe it in the first place. Given that you exist, the pond must contain water to support you, so its being filled is unexceptional.

However, while it is true that we shouldn’t be surprised to find ourselves in a universe suited for life (since we are alive), is it not strange that the conditions necessary for life are so exceedingly improbable? For instance, consider the scenario of a blindfolded man who miraculously survives an execution by a firing squad of one hundred expert marksmen. The fact that he is alive is indeed consistent with the marksmen’s failure to hit him, but it does not account for why they missed in the first place. The prisoner ought to be astonished by his survival, given the marksmen’s exceptional skills and the minuscule probability of all of them missing if they intended to kill him. Evidently, the WAP commits a logical error by conflating the statement of a necessary condition for an event’s occurrence (in this case, our existence) with the elimination of the need for a causal explanation of the conditions enabling the event.
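Purely for illustration, suppose each expert marksman independently misses with probability one in a hundred. Then:

```latex
P(\text{all 100 miss}) = \left(10^{-2}\right)^{100} = 10^{-200}
```

The prisoner’s survival is consistent with that number, but it still cries out for explanation.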

Furthermore, it appears that advocates of the WAP are concentrating on the incorrect phenomenon of interest. The crucial question is not why we observe a universe consistent with our existence, but rather what caused the fine-tuning of the universe in the first place.

The Strong Anthropic Principle

The strong anthropic principle (SAP) is an extension and a more assertive version of the WAP. While the WAP suggests that we shouldn’t be surprised to find ourselves in a universe that supports life, the strong anthropic principle takes it a step further by asserting that the universe must have properties that enable the development of intelligent life.

The strong anthropic principle was introduced by astrophysicist John D. Barrow and mathematical physicist Frank J. Tipler in their 1986 book, “The Anthropic Cosmological Principle.” It essentially states that the existence of intelligent life is a necessary feature of the universe, and that the universe’s fundamental properties are such that they lead to the development of intelligent observers like humans.

One popular version of the SAP states, “The universe must have properties that eventually permit the emergence of observers within it.” This implies that the existence of intelligent life is an inevitable consequence of these cosmic properties, not merely a chance result of them.

In a sense, I agree with the SAP: it is just stating the obvious. It restates the observation of fine-tuning and its implications for the emergence of life, but does not itself explain the underlying reasons for this fine-tuning.

Some proponents take a step beyond the SAP, grounding their explanation of fine-tuning in an interpretation of a strange phenomenon in the field of quantum physics.

In the field of quantum physics, the famous double-slit experiment demonstrates the strange behaviour of particles when they are observed. When particles, such as photons or electrons, pass through a barrier with two slits, they create an interference pattern on a screen behind the barrier, as if they are behaving like waves. However, when the particles are observed or measured as they pass through the slits, the interference pattern disappears, and the particles behave like individual particles instead of waves.
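The contrast between the two cases can be mimicked numerically. The toy Python sketch below (arbitrary units, small-angle approximation, not a full quantum treatment) adds the two paths’ amplitudes when no which-path measurement occurs, and adds their probabilities when one does:

```python
import numpy as np

wavelength = 1.0              # arbitrary units
slit_sep = 5.0                # distance between the slits
screen_dist = 100.0           # slits-to-screen distance
x = np.linspace(-40, 40, 9)   # sample points across the screen

# Path-length difference between the two slits, expressed as a phase
phase = 2 * np.pi * (slit_sep * x / screen_dist) / wavelength

# Unmeasured: add the two amplitudes, then square -> interference fringes
fringes = np.abs(1 + np.exp(1j * phase)) ** 2

# Measured at the slits: add the two probabilities -> the fringes vanish
no_fringes = np.abs(1) ** 2 + np.abs(np.exp(1j * phase)) ** 2

print(fringes.round(2))       # oscillates between 0 and 4
print(no_fringes.round(2))    # flat 2.0 everywhere
```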

Applying the principles of this quantum oddity to the SAP suggests that conscious observation plays a pivotal role in determining the behaviour of particles and the very fabric of reality itself.

This calls to mind the Participatory Anthropic Principle (PAP), proposed by physicist John Archibald Wheeler, whom I greatly admire. It suggests that human observations are necessary to bring the entire universe into existence, merging ideas of consciousness with quantum mechanics. While the SAP posits that the universe must have properties that allow life to develop within it at some stage in its history, the PAP posits that conscious life somehow actively and retroactively causes these conditions to come into being.

Now, it’s important to note that the PAP takes a stronger position on the role of conscious observation than most physicists would support based solely on quantum mechanics. In quantum physics, “observation” refers to any interaction that leads to the collapse of a particle’s wave function, not necessarily involving a conscious observer. The PAP, while philosophically intriguing, makes a leap beyond what I would conclude from quantum mechanics alone.

Either way, these interpretations endeavour to explain not only the existence of conditions necessary for observation, but also the underlying cause or design that allows the observer to significantly impact the experiment’s outcome. Proponents of PAP argue that, akin to an electron’s specific location being contingent on observation, the universe itself might depend on an observer for its existence. Thus, this extension of the strong anthropic principle affirms that:

  1. The universe must have those properties which allow life to develop within it at some stage in its history.
  2. There exists one possible universe ‘designed’ with the goal of generating and sustaining ‘observers’.
  3. And observers are necessary to bring the universe into being.

There are a few issues with this proposal. Firstly, there’s what can be described as the grandfather paradox, a theoretical conundrum that arises from the concept of time travel. Here’s how it goes: imagine you have a time machine and decide to travel back in time to a period when your grandfather was a young man, before he had any children. Now, let’s say you interfere in his life in such a way that he never has children. This action, in turn, would mean that one of your parents would never be born, and consequently, neither would you; so you could never have travelled back in time to stop your grandfather from becoming your ancestor in the first place.

Similarly, in the PAP, if the existence of conscious observers (the effect) is contingent on a finely-tuned universe (the cause), yet the universe’s nature is posited to depend on those very observers, we find ourselves entwined in a causal loop reminiscent of the paradox. Even if we accept the possibility of two entities continually causing each other in an eternal cycle, this doesn’t clarify why such a looping system exists in the first place.

Moreover, this line of reasoning presupposes consciousness existing in a dualistic relationship with material reality, a move that veers towards a non-materialistic interpretation of reality. This, in turn, challenges the principles of naturalism.

To reiterate in a slightly different way, a key issue with the PAP springs from the intriguing temporal anomaly within its premise. In this scenario, observers are attributed with the responsibility of causing the fine-tuning of the universe. However, these observers are making their observations billions of years after the universe’s fine-tuning event, not before it. Traditionally, causality insists on a preceding cause leading to a subsequent effect.

Even when considering quantum phenomena, where observer effects have been proposed, the classical temporal relationship between cause and effect is maintained. For example, when an observer causes a quantum wave function to collapse, the cause (the act of measurement) precedes the effect (the collapse of the wave function).

Therefore, to uphold the belief that consciousness somehow brings finely-tuned spacetime into existence necessitates the proposition of a conscious mind that causally precedes our spacetime, rather than one that has emerged within it, as is the case with human minds.

Recap — Unravelling the Threads of the Cosmic Display

In the quest for an explanation of the cosmic fine-tuning, two interpretations frequently rise to the surface: (1) the Weak Anthropic Principle (WAP) and (2) the Strong Anthropic Principle (SAP). Yet neither provides a naturalistic explanation.

Interestingly, a growing number of scientific materialists are themselves unsatisfied by these interpretations of fine-tuning. In response, they’ve created a hypothesis as audacious as it is clever: the multiverse. This hypothesis revives the chance argument, striving to recast the cosmological fine-tuning from an improbable anomaly to an anticipated outcome in an infinite cosmic lottery.

The Multiverse — String Theory and Inflationary Models

To avoid the theistic implications of a finely-tuned universe, some theorists propose the existence of not just one, but an almost infinite number of universes. The theory suggests that with a nearly infinite number of universes, it’s less surprising to find at least one—ours—that supports life. This reasoning suggests that with enough trials, even the most unlikely outcomes can happen, eliminating the need for a higher intelligence to explain our universe’s conditions.

Proponents of the multiverse theory often describe our universe as having luckily won a cosmic lottery. They compare the universe-generating process to a slot machine, where each ‘spin’ produces a new universe. While most of these universes do not support life, some, like ours, do.

Two major cosmological models have been proposed to explain the potential origin of new universes. The first model, proposed by Andrei Linde, Alan Guth, and Paul Steinhardt, is known as inflationary cosmology (we talked about this earlier in the book). The second model is rooted in string theory. Both models were initially developed to resolve specific quandaries in physics but were later adapted to provide multiverse explanations for our universe’s fine-tuning.

Inflationary Multiverse Model

Let’s clarify the inflationary cosmological model. Immediately following the Big Bang, it is understood that the universe underwent an extraordinarily rapid expansion. This wasn’t a slow stretch but a sudden surge in size, happening just fractions of a second after the universe’s birth and ending almost as quickly as it started. This short but dramatic phase set the stage for how the universe would continue to grow at a steadier pace.

Initially, scientists developed this inflation model to solve some big puzzles that standard Big Bang cosmology struggled to explain. These include the universe’s relative homogeneity (especially the temperature uniformity of cosmic background radiation), its apparent flatness, and the absence of magnetic monopoles.

As our understanding evolved, so did the models of inflation. One prominent version is known as “eternal chaotic inflation,” proposed by Andrei Linde. This model refines the inflationary framework by introducing a dynamic and stochastic element to the inflationary field, characterised by vacuum energy. According to the eternal chaotic inflation scenario, this field acts as a catalyst for the expansion of spacetime, within which our observable universe and potentially numerous others have emerged.

In the context of this model, the inflation field is not a uniform entity but varies across different regions of spacetime. This variation allows for the spontaneous nucleation of lower-energy bubble universes within the higher-energy inflationary field. These bubble universes, including ours, are envisaged as being causally disconnected due to their rapid and divergent expansion rates. This model thus predicts a multiverse where these bubble universes are nested within an ever-expanding inflationary backdrop.

Advocates of this model argue that the inflation field’s potential to generate an infinite number of universes implies that any physical possibility, no matter how improbable in a single universe, is rendered not only possible but inevitable across the multiverse. This line of reasoning extends to the anthropic principle: the finely-tuned conditions necessary for life, as observed in our universe, are not a mere cosmic coincidence but an inevitable outcome in the vast ensemble of universes. Consequently, it was only a matter of time before some universe manifested the finely tuned conditions necessary to support life. Our universe just happened to be the fortunate one.
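The probabilistic intuition behind this argument is straightforward. If each universe independently has some fixed probability p > 0 of being life-permitting, then across N universes:

```latex
P(\text{at least one life-permitting universe}) = 1 - (1 - p)^N \;\longrightarrow\; 1 \quad \text{as } N \to \infty
```

However small p is, a large enough ensemble makes a “winning” universe all but guaranteed, which is precisely the work the multiverse is being asked to do.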

The String Theory Model

String theory is a complex and confusing concept that offers an alternative explanation for the fine-tuning of the laws and constants of physics. It’s a lot to wrap your head around; even I am still not entirely there, so this might be a bit overwhelming…

String theory suggests that the fundamental building blocks of matter are not elementary particles like photons, quarks, or electrons, but rather tiny, one-dimensional strings or filaments of energy. These tiny filaments can vibrate in different patterns in many more dimensions than we can perceive, forming both “open” and “closed” strings. All elementary particles, according to string theory, are manifestations of these differently vibrating strings.

Early iterations of string theory, which described only bosons (force-carrying particles), required a twenty-six-dimensional spacetime. However, as string theory expanded to account for both matter and force particles, it was found that only six or seven extra spatial dimensions are necessary (in addition to the four dimensions of spacetime). This raises a question: where are these unobserved dimensions? Current string theory postulates that these additional dimensions are compacted into small topological structures, invisible to us, at a scale smaller than 10^-35 metres, what physicists call the Planck length. This is the scale at which quantum gravitational phenomena are expected to occur.
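For reference, the Planck length is not peculiar to string theory; it is built from three fundamental constants:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}
```

where ħ is the reduced Planck constant, G the gravitational constant, and c the speed of light.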

String theorists envision that within these minuscule structures, energy strings vibrate in the six or seven extra spatial dimensions. The variations in these vibrations give rise to the particle-like phenomena we observe in our familiar three dimensions of space. At its core, string theory is a quantum-scale, particle physics-based theory that aims to unify all fundamental forces, including gravity. One outcome of this theory is the proposed existence of “gravitons,” which are understood as massless, closed strings that transmit gravitational forces across long distances at the speed of light. Different vibrational states of strings, not just gravitons, are hypothesised to be responsible for the various fundamental particles, including those that carry the other three fundamental forces of physics (electromagnetic, weak, and strong forces).

At its foundation, string theory was originally formulated to explain the universe’s fundamental forces. However, it didn’t account for matter’s existence. This prompted string theorists to incorporate a principle known as “supersymmetry” to bridge this gap. According to supersymmetry, for every bosonic elementary particle (those responsible for forces) that exists, there must also be a complementary fermionic particle (those constituting matter), and vice versa. This inclusion reduced the number of required spacetime dimensions in string theory from an unwieldy twenty-six to a manageable ten, making string theory applicable to our universe, which hosts both forces and matter.

However, a plot twist arose: the mathematical framework underpinning string theory didn’t yield a single, unique solution reflecting the physics of our universe. Rather, it unveiled countless solutions, each depicting a distinct physical reality. Initially, physicists viewed this surplus of solutions as an embarrassment, a glaring flaw in the model. But some string theorists, with an innovative twist, turned this perceived vice into a virtue. They proposed that each of the vast number of possible ways to compactify the extra dimensions leads to a different vacuum state, or “solution,” of the string theory equations, and hence to a different set of physical laws and constants. Each solution, in effect, represents a different possible universe: the shape of the folded spaces associated with it determines the laws of physics in the observable spatial dimensions, while the number of flux lines determines the constants of physics.

In essence, some scientists argue that string theory, with its additional dimensions and vibrating strings, combined with the principles of supersymmetry and the hypothetical graviton, leads to the idea of a multiverse.

These string theorists further proposed a mechanism that could generate a staggering 10^500 to 10^1,000 possible universes, each corresponding to one of these solutions, thus making the fine-tuning of the laws and constants in our universe probable.

This mechanism started with a high-energy compactification of space, representing a universe housing one quantum gravitational field. As this field’s energy decayed, it gave rise to new universes with different physical laws and constants. This ongoing process of energy decay would sequentially morph one universe into another. Through this process, the vast landscape of potential universes would be explored, making the fine-tuned parameters of our life-friendly universe an inevitable result of a random exploration process.

One God vs Many Universes

Let’s pause for a moment and consider: do the cosmological models of inflationary cosmology or string theory adequately account for the fine-tuning of the laws and constants of physics and the initial conditions of the universe? Are they better at explaining cosmic fine-tuning than the concept of intelligent design?

Remember, these theories are still subjects of heated debate and intensive research within the realm of theoretical physics. Most physicists consider the multiverse hypothesis a mere speculative metaphysical theory rather than a scientific one. They argue that since we can’t observe or measure either other universes or God, choosing between these two theories boils down to a matter of personal preference. They assert there are no compelling evidential or theoretical reasons to favour one hypothesis over the other.

I don’t agree with that. Both scientific and metaphysical hypotheses can be evaluated by comparing their explanatory power against their competitors. In this context, we can weigh the pros and cons of the theistic design hypothesis against the multiverse concept. And there are substantial reasons to view intelligent design as a more plausible explanation than the multiverse theory…

First, it seems more reasonable to prefer the simpler explanation for the cosmic fine-tuning. As the Oxford philosopher Richard Swinburne has argued:

“It is the height of irrationality to postulate an infinite number of universes never causally connected with each other, merely to avoid the hypothesis of theism. Given that… a theory is simpler the fewer entities it postulated, it is far simpler to postulate God than an infinite number of universes, each different from each other.”[x]

Secondly, it’s crucial to understand that neither inflationary cosmology nor string theory fully tackles the fine-tuning conundrum. To address both types of fine-tuning, a multiverse solution necessitates the acceptance of two different universe-generating mechanisms. While inflationary cosmology could theoretically account for the fine-tuning of the universe’s initial conditions, it falls short of explaining the origin of the fine-tuning of the laws and constants of physics. As I understand it, this is because the inflation field operates consistently with the same laws of physics across its expansive space. As it spawns new bubble universes, these offshoots retain the same laws and constants, with only the configurations of mass-energy being novel. On the other hand, string theory could potentially clarify the fine-tuning of the laws and constants of physics, but in most models, it fails to generate multiple sets of initial conditions corresponding to each choice of physical laws. This implies that to conceive a multiverse theory capable of addressing both types of fine-tuning, physicists must speculate on two distinct types of universe-generating mechanisms working in tandem, one rooted in string theory and the other in inflationary cosmology. This requirement has led many theoretical physicists to adopt a hybrid multiverse model called the “inflationary string landscape model.”

Granted, the inflationary string landscape model can account for the entirety of fine-tuning phenomena, but it does so by what philosophers of science refer to as “bloated ontology” — postulating a vast number of purely speculative and abstract entities for which we lack direct evidence. The fusion of inflationary cosmology and string theory necessitates the acceptance of a multitude of hypothetical entities, abstract assumptions, and unobservable processes. Moreover, proponents of the multiverse must also affirm that the combination of an inflation field and a string theory universe-generating mechanism can conjointly spawn a sufficient number of diverse universes to render the origin of our universe’s finely-tuned initial conditions, laws, and constants probable. The theory’s reliance on unseen extra dimensions and its struggle for concrete evidence complicate the model further.

Contrarily, a theistic design hypothesis offers a far simpler explanation for cosmological fine-tuning than the multiverse. Theistic design posits one straightforward postulate: the activity of a transcendent mind or fine-tuner. It thereby avoids the extravagant multiplication of abstract theoretical entities required by the inflationary string multiverse. However, it’s important to recognise that the intelligent design proposition, while seemingly more straightforward, does not serve as an exhaustive answer but rather as a more reasonable framework for exploration. The concept that an ultimate mind has fine-tuned the universe inevitably leads to further probing questions, such as how it was done.

Overall, I do think it’s more reasonable to lean towards an intelligent design hypothesis of an ultimate mind. Our extensive experience with intelligent agents crafting finely tuned systems, be it Swiss watches, gourmet recipes, integrated circuits, or written texts, supports this. Given that fine-tuning a physical system to achieve a specific, propitious end is precisely what intelligent agents are known to do, it follows that to invoke a “Supermind” to explain the fine-tuning of the universe is a natural extension of our experiential knowledge of intelligent agents’ causal capacities. In contrast, the universe-generating mechanisms proposed by various multiverse theories lack any comparable experiential reference.

Additionally, to account for the fine-tuning of our universe, both inflationary cosmology and string theory (including versions of the multiverse that meld them) posit universe-generating mechanisms that demand unexplained fine-tuning themselves. Put differently, even if the multiverse could plausibly justify the fine-tuning of our local universe, it would then need some mechanism capable of spawning these universes, a shared cause responsible for the multiverse. Yet, this universe-generating mechanism would necessitate further fine-tuning, pushing the problem up the chain. Therefore, it’s unclear whether the multiverse theory can address fine-tuning without appealing to prior fine-tuning.

At a minimum, string theory necessitates the delicate fine-tuning of initial conditions, as evidenced by the scarcity of the highest-energy solutions (approximately 1 part in 10^500) within the array of possible solutions or compactifications. Additionally, inflationary cosmology demands more fine-tuning than it was designed to explain. The theoretical physicists Sean Carroll and Heywood Tam have shown that the fine-tuning associated with chosen inflationary models is roughly 1 part in 10^66,000,000, thus exacerbating the fine-tuning problem inflation was meant to alleviate.
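These exponents are far too large for ordinary floating-point arithmetic, but the comparison is easy in logarithms. A minimal Python sketch, simply juxtaposing the two figures quoted above for illustration:

```python
# Quoted figures, taken at face value purely for illustration
log10_universes = 500            # ~10^500 string-landscape solutions
log10_p = -66_000_000            # Carroll & Tam: 1 part in 10^66,000,000

# Expected number of 'successes' in N trials of probability p is N * p;
# in base-10 logarithms, that is just the sum of the exponents.
log10_expected = log10_universes + log10_p
print(f"Expected life-permitting outcomes: ~10^{log10_expected}")
# -> ~10^-65999500: the landscape's size barely dents odds of this kind
```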

It should be noted that scientists are deeply divided over multiverse theories. Several eminent physicists, including Sir Roger Penrose, Avi Loeb, and Paul Steinhardt have dismissed multiverse inflationary cosmology. Penrose criticises it for the fine-tuning problems. Loeb challenges the theory’s lack of falsifiability, arguing that it cannot be empirically tested or verified, thereby questioning its scientific validity. Steinhardt, originally a proponent of the inflationary model, now disputes its predictability and testability, concerned that its adaptability to various observations makes it unscientifically irrefutable.

Furthermore, string theory necessitates “supersymmetry” as a crucial outcome of its attempt to unify the fundamental forces of physics. Yet, Large Hadron Collider experiments have persistently failed to detect such supersymmetric particles. Coupled with other failed predictions and the embarrassment of an infinite number of string theory solutions, scepticism about string theory has been growing among many leading physicists. I am reminded of the words of Nobel-Prize-winning theoretical physicist Gerard ‘t Hooft:

“I would not even be prepared to call string theory a ‘theory,’ rather a ‘model,’ or not even that: just a hunch. After all, a theory should come with instructions on how… to identify the things one wishes to describe, in our case, the elementary particles, and one should, at least in principle, be able to formulate the rules for calculating the properties of these particles, and how to make new predictions for them. Imagine that I give you a chair, while explaining that the legs are missing, and that the seat, back, and armrests will be delivered soon. Whatever I gave you, can I still call it a chair?”[xi]

Moreover, the efforts to explain the evidence through the invocation of multiple other universes seem to hint at a sort of metaphysical special pleading. Renowned theoretical physicist John Polkinghorne, a colleague of Stephen Hawking and the former president of Queens’ College, Cambridge, is celebrated for distinguished scholarship and brilliance in his field. Having been at the vanguard of high-energy physics for over three decades, he staunchly maintains in his book, The Quantum World, that the intricate and intelligible nature of our universe is not adequately explained by random processes of chance. With reference to the multiverse proposition, he argues:

“Let us recognise these speculations for what they are. They are not physics, but, in the strictest sense, metaphysics. There is no purely scientific reason to believe in an infinite ensemble of universes… A possible explanation of equal intellectual respectability – and to my mind, greater elegance – would be that this one world is the way it is because it is the creation of the will of a Creator who proposes that it should be so.”

In the context of discussing the inherent factors within this universe, particularly with reference to quantum theory, Dr. Polkinghorne, during a seminar at Cambridge, wittily remarked, “there is no free lunch. Somebody has to pay, and only God has the resources to put in what was needed to get what we’ve got.”

Godly Intrusion

So, what makes the multiverse the favoured explanation for cosmological fine-tuning, despite its many drawbacks? The answer may lie in a statement made by theoretical physicist Bernard Carr: “to the hard-line physicist, the multiverse may not be entirely respectable, but it is at least preferable to invoking a Creator.”[xii] Bound by a naturalistic worldview, many rule out a Creator as a plausible explanation. It’s their entrenched worldview, not scientific reasoning, that has confined their thinking. Naturalism (or materialism) has become a straitjacket for science, hindering scientists from following or even recognising promising leads.

Take a moment to appreciate the irony of the multiverse argument. To sidestep the consideration of God, some have suggested the existence of other universes — entities that we can’t see, touch, examine, or scientifically validate. Yet these are the very qualities that have been used to argue against the existence of God.

Physicists have proposed explanations for the origin of fine-tuning without invoking a higher intelligence. However, these proposals either fail to account for fine-tuning (as with the weak and strong anthropic principles) or they resort to explaining it by surreptitiously invoking other sources or prior unexplained fine-tuning. Yet, the phenomenon of fine-tuning exhibits precisely those characteristics — extreme improbability and functional specificity — that instinctively and consistently lead us to infer the presence of intelligent design based on our uniform and repeated experiences. In the face of this, intelligent design stands as a worthwhile explanation for the fine-tuning of the laws and constants of physics and the initial conditions of the universe.


[i] Hoyle, F. (1960). The Nature of the Universe.

[ii] Penrose, R. ‘The Emperor’s New Mind’, 341-344.

[iii] Josephson, B. Interview by Robert Lawrence Kuhn for the PBS series Closer to Truth.

[iv] Margenau, H. Interview at Yale University, March 2, 1986.

[v] Greenstein, G. The Symbiotic Universe, 27.

[vi] Hoyle, F. ‘The Universe’.

[vii] “The Anthropic Principle,” May 18, 1987, Episode 17, Season 23, Horizon, BBC.

[viii] Hawking, S. ‘A Brief History of Time’, 26.

[ix] Rees, M. ‘Just Six Numbers’, 22.

[x] Swinburne, R. ‘Science and Religion in Dialogue’, 2010, 230.

[xi] ’t Hooft, G. ‘In Search of the Ultimate Building Blocks’, 163-164.

[xii] Carr, B. “Introduction and Overview”, 16.
