Winning the Universal Lottery: God, the Multiverse and Fine Tuning


“Astronomy leads us to a unique event, a universe which was created out of nothing, one with the very delicate balance needed to provide exactly the right conditions required to permit life, and one which has an underlying (one might say ‘supernatural’) plan.”
— Arno Penzias, Physics Nobel Prize winner

A Life-Permitting Universe

In the world of astrophysics, Sir Fred Hoyle is a name that stands out. Hoyle, who started his career as a firm atheist, believed there was no evidence of God in the universe. He argued that “religion is but a desperate attempt to find an escape from the truly dreadful situation in which we find ourselves… No wonder then that many people feel the need for some belief that gives them a sense of security, and no wonder that they become very angry with people like me who think that this is illusory.”[i] His atheism significantly influenced his scientific perspective, predisposing him to dismiss the notion that the universe had a beginning.

(This is an excerpt from “Does the Universe Paint God Out of the Picture?” by Luke Baxendale. It is part four of the book.)

However, Hoyle’s atheism was shaken by a groundbreaking discovery. He identified a set of parameters, now known as the “fine-tuning” parameters of the universe, which revealed that numerous properties of the universe fall within exceptionally narrow and improbable ranges. These properties are essential for the chemistry that supports complex life, and indeed for any conceivable form of life. Physicists have since labelled the fortunate values of these factors “anthropic coincidences” and the convergence of these coincidences the “anthropic fine-tuning” of the universe.

Since the 1950s, every scientific discovery has added to the kaleidoscopic picture of an increasingly complex and finely balanced universe. It has become apparent that the existence of life in the universe relies on a highly improbable dance of forces, features, and a delicate equilibrium among them. Our “Goldilocks universe” (not too hot, not too cold, but just right) seems to be characterised by fundamental forces of physics with just the right strengths, contingent properties with the perfect characteristics, and an initial distribution of matter and energy that constituted the precise configuration to support life. Even the slightest difference in these properties would have rendered complex chemistry and life impossible. The fine-tuning of these properties has not only bewildered physicists due to their extreme improbability, but also because there appears to be no underlying physical reason or necessity for their existence according to the fundamental laws of physics or mathematics.

For instance, carbon-based life is the sole known form of life, and carbon possesses unique qualities that render it ideal for complex chemistry and life. Throughout his career, Hoyle contemplated the factors that needed to be perfectly calibrated for carbon to be readily produced within stars. These factors include the strengths of the strong nuclear and electromagnetic forces, the ratios between fundamental forces, the precise kinetic energy of beryllium and helium, the strength of gravitational forces within stars, and the excitation energy of carbon. Hoyle concluded that these factors required exquisite tuning and coordination within remarkably narrow tolerances to facilitate the synthesis of substantial amounts of carbon inside stars.

Astounded by these “cosmic coincidences” and numerous others that physicists have uncovered since the 1950s, Hoyle became convinced that an intelligent force must have orchestrated the intricate balance of forces and factors in nature, rendering the universe life-permitting. Nevertheless, the fine-tuning parameters Hoyle discovered represent only a fraction of the parameters necessary to ensure a universe that could allow for life.

While some examples of fine-tuning are subject to dispute and complex debates surrounding probability calculations, numerous well-established instances of fine-tuning are widely accepted by most scientists. These examples highlight the exceedingly narrow probabilities of finely tuned constants necessary for the existence of life:

  • Gravitational constant: 1 part in 10^34
  • Electromagnetic force versus the force of gravity: 1 part in 10^37
  • Cosmological constant: 1 part in 10^90
  • The mass density of the universe: 1 part in 10^59
  • The expansion rate of the universe: 1 part in 10^55

A conservative estimate might suggest around 20 to 30 such constants and parameters are commonly considered when discussing the fine-tuning of the universe, though this number can vary based on the breadth of factors included in the analysis.
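If we treat the odds listed above as independent (a simplifying assumption; how these parameters relate to one another is itself debated), they compound multiplicatively. A minimal sketch in Python makes the arithmetic concrete by adding exponents in log10 space, since the raw numbers overflow ordinary floating-point arithmetic:

```python
# Compounding the "1 part in 10^N" odds quoted above. Independence between
# the parameters is a simplifying assumption, not an established fact.
fine_tuning_exponents = {
    "gravitational constant": 34,
    "electromagnetic force vs gravity": 37,
    "cosmological constant": 90,
    "mass density of the universe": 59,
    "expansion rate of the universe": 55,
}

# Multiplying probabilities of the form 10^-N means adding their exponents,
# so we work in log10 space rather than with the raw (overflowing) numbers.
combined_exponent = sum(fine_tuning_exponents.values())
print(f"Combined odds: 1 part in 10^{combined_exponent}")  # 1 part in 10^275
```

Even this five-parameter toy calculation lands at odds far beyond everyday intuition; the 20 to 30 commonly cited parameters would push the exponent higher still.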

To appreciate the magnitude of these probabilities, imagine firing a bullet at a one-inch target on the other side of the universe, twenty billion light-years away, and hitting it. That is roughly how improbable the finely tuned constants essential for the existence of life are.

However, these examples merely scratch the surface of the intricate fine-tuning within our universe. Following Hoyle, one of the world’s most renowned mathematicians, Sir Roger Penrose, delved deeper into the precision of the universe’s fine-tuning. Penrose meticulously examined the fine-tuning of the initial distribution of mass-energy, also known as the “initial entropy” fine-tuning, and his findings revealed an even more astonishing level of precision in our universe’s delicate balance.

Initial-Entropy Fine-Tuning

To start, let’s talk about entropy. You might vaguely remember hearing about it back in high school physics—though for many of us, that probably feels like a lifetime ago. Put simply, entropy is the rather sobering concept that nothing lasts forever. Everything we create eventually breaks. Everyone we cherish will one day pass away. Any semblance of order or stability we manage to build is destined to unravel over time. On a cosmic scale, the entire universe moves relentlessly toward a state of ultimate disorder. To describe and quantify this universal tendency toward decay and chaos, physicists use the term “entropy.” And yes, I know—that’s not exactly uplifting!

Entropy is often described as a measure of disorder, and it stems from the second law of thermodynamics—one of the most unyielding principles in nature. This law states that the total entropy of the universe always increases over time. In other words, the universe is wired to favour messiness and disarray. Order, meanwhile, is fragile and fleeting. For instance, hand-making a beautiful vase might take weeks of meticulous effort, yet a single careless kick of a football can shatter it in an instant. Similarly, the second law dictates that no machine can ever be perfectly efficient—every system wastes some energy during its processes. Ultimately, any structure that arises in the universe, from a star blazing in space to a living organism metabolising food, exists only to further dissipate energy into the cosmos.

The story of entropy began around 200 years ago with French engineer Sadi Carnot, who, in 1824, explored the limits of heat engine efficiency and discovered that some energy is always lost as heat during energy conversion. Later, German physicist Rudolf Clausius formalised these ideas, laying the groundwork for the second law of thermodynamics. In 1865, Clausius coined the term “entropy,” from the Greek entropē (meaning “transformation”), to describe energy’s natural tendency to degrade and become less available for work. He also noted that in any spontaneous process, entropy never decreases. Austrian physicist Ludwig Boltzmann then linked entropy to the microscopic behaviour of particles. He showed that entropy isn’t just about wasted energy—it’s about the vast number of ways a system’s components (its “microstates”) can arrange themselves to produce the same overall appearance (its “macrostate”). Boltzmann revealed that disorder is statistically favoured because there are far more disordered arrangements than ordered ones. His work bridged thermodynamics and statistical mechanics, highlighting how entropy connects probabilities, energy distribution, and the natural drift toward disorder.

In simple terms, entropy reflects the number of ways the particles in a system—like atoms—can be arranged. A system with low entropy is highly ordered, while higher entropy means more disorder.
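Boltzmann’s insight is captured by his famous relation S = k·ln W, where W counts the microstates compatible with a given macrostate. A minimal sketch (the four-particle box is my own toy example, not from the original text) shows why ordered states carry less entropy:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def boltzmann_entropy(microstate_count: int) -> float:
    """Entropy S = k_B * ln(W) for a macrostate realisable in W ways."""
    return K_B * math.log(microstate_count)

# Toy example: 4 distinguishable particles spread over two halves of a box.
# "All four on the left" can happen in exactly 1 way (highly ordered),
# while "two on each side" can happen in C(4, 2) = 6 ways (more disordered).
print(boltzmann_entropy(1))                # 0.0 J/K -- the ordered macrostate
print(boltzmann_entropy(math.comb(4, 2)))  # ~2.47e-23 J/K -- the favoured one
```

The disordered macrostate wins simply because there are more ways to realise it, which is all the second law needs.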

So, how does this relate to the universe’s fine-tuning? For the universe to form structured systems like galaxies and solar systems, it had to start in a state of relatively low entropy. This means that, initially, the mass and energy were distributed in a very specific and uniform way.

Consider black holes: they represent high-entropy states. Within a black hole, the conventional concepts of space and matter break down. This entropy doesn’t imply chaos in the usual sense; instead, it represents the vast number of ways matter and energy can be organised at the event horizon, the boundary of the black hole.

In contrast, our universe reflects a state of lower entropy. This is evident in the formation of structured, organised entities like galaxies, solar systems, and stars. These cosmic structures formed through the pull of gravity, organising matter into complex patterns that seem to defy entropy’s increase. However, this organisation on a cosmic scale is consistent with the overall increase in entropy, according to the laws of thermodynamics.

The early state of our universe, especially its mass and energy distribution, was characterised by low entropy. This crucial condition set the stage for the development of large-scale cosmic structures like galaxies over time. In a universe with high entropy, matter would likely either be too evenly dispersed or end up trapped within black holes, thus hindering the formation of galaxies and stars. Therefore, the presence of organised cosmic structures in our universe is a clear indication of its low-entropy origins.

Sir Roger Penrose wanted to determine the probability of our universe having the low-entropy, highly ordered arrangement of matter observed today. He understood that by answering this question, he could gauge the fine-tuning of the initial arrangement of matter and energy at the beginning of the universe. Penrose concluded that the formation of a universe like ours, replete with highly ordered configurations of matter, required an astoundingly improbable low-entropy set of initial conditions. Penrose used principles from thermodynamics, general relativity, and cosmology to analyse the initial conditions of the universe. He considered the gravitational degrees of freedom related to the distribution of matter and energy at the beginning of the universe. By comparing the phase-space volume corresponding to the observed low-entropy state to the total phase-space volume of all possible configurations, Penrose could estimate the probability of our universe starting in the highly ordered, low-entropy state that it did. Considering the vast range of potential entropy values for the early universe, he calculated that the likelihood of a universe possessing initial conditions conducive to life is 1 in 10^(10^123).[ii] That is 10 raised to the power of 10^123: a 1 followed by 10^123 zeros. That’s a big number.

To put this figure into perspective, it is worth noting that physicists estimate the entire universe contains 10^80 elementary particles, an insignificant fraction of 10^(10^123). Even if all matter in the universe were converted into paper, this would still be insufficient to print the number of zeros required to express this probability as a percentage.
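A quick back-of-the-envelope calculation, using the 10^80 particle estimate above, shows just how hopeless the printing task is: the denominator of Penrose’s probability has about 10^123 digits, so even one digit per elementary particle leaves us short by dozens of orders of magnitude.

```python
# How badly does the universe's "paper supply" fall short of Penrose's number?
# Writing 10^(10^123) out in full requires roughly 10^123 digits.
digits_needed = 10**123        # digits in 10^(10^123)
particles_available = 10**80   # rough count of elementary particles

shortfall = digits_needed // particles_available
print(f"Even at one digit per particle, we fall short by a factor of 10^{len(str(shortfall)) - 1}")
# -> Even at one digit per particle, we fall short by a factor of 10^43
```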

This probability quantifies the extraordinary precision of the fine-tuning of the universe’s initial conditions. In other words, Penrose’s calculated entropy suggests that, among the nearly infinite potential configurations of mass and energy at the universe’s beginning, only a minute fraction would lead to a universe resembling ours.

Theistic Proposal for Fine-Tuning

The Stanford Encyclopedia of Philosophy notes that “the apparent probability of all the necessary conditions sufficient to allow just the formation of planets (let alone life) coming together by chance is exceedingly minuscule.” This observation raises a fundamental question within the realm of scientific inquiry: How do we account for these extraordinary “coincidences”—these royal flushes turning up hand after hand? Could it be reasonable to consider the possibility that a purposeful entity has orchestrated the system? In the context of the fine-tuning conundrum, is it reasonable to suggest a grand designer as an explanatory hypothesis?

On the other hand, could a purely naturalistic explanation suffice to account for this fine-tuning? Is the fine-tuning, as Richard Dawkins has framed the issue, “just what we would expect” if “at bottom there was no purpose, no design… nothing but blind pitiless indifference?” In a similar vein, can we identify a coherent series of explanations for the fine-tuning of the laws and constants of physics, as well as the initial conditions of the universe, “which ultimately reaches to the fundamental laws of nature and stops,” that the theoretical physicist Sean Carroll says naturalism requires?

So, while some argue that the universe’s improbable arrangement of properties hints at an intelligent force orchestrating the system, proponents of naturalism maintain that this fine-tuning can be entirely explained by a series of interconnected self-explaining natural phenomena, eliminating the need for a guiding intelligence.

You can probably figure out by now which way I lean on this. The question of whether the universe’s fine-tuning points to some kind of intentional design is as big as it gets, and people have wrestled with it for decades. When I look at the astonishing precision of the universe, it’s hard not to follow the intuition that there’s something deliberate going on.

The argument starts with two key observations about the fine-tuning of our universe: it is both immensely improbable and functionally specific. In our everyday experience, whenever we encounter something that combines these two traits, it almost always points to the involvement of a designing intelligence.

What’s particularly intriguing is that these finely balanced variables of our universe are characterised as being:

  • Contingent (they could have been different, e.g. the mass of a proton or the expansion rate of the universe could have been quite different from what they actually are);
  • Extraordinarily improbable, balanced within infinitesimally narrow tolerances;
  • Independently specifiable (they correspond precisely to the conditions necessary for life).

When we encounter something in everyday life that’s contingent, improbable, and specific—like a carefully engineered machine or a beautifully designed app—we naturally assume there’s an intelligence behind it. Scientists call this combination of traits the “design filter.” And while applying it to the universe is a bold move, it’s hard not to see the parallels.

To make this idea more relatable (albeit with a major oversimplification), let’s consider a simple analogy: baking a cake. When you bake a cake, you use ingredients like flour, sugar, eggs, and baking powder, each measured with precision. If you change the quantities too much, the cake might not rise, or it could taste awful. The exact measurements and timing, along with the oven temperature, are crucial. Too hot, and the cake burns; too cool, and it remains uncooked. This precision is similar to how certain constants in the universe are finely tuned to allow life to exist. Each step in the recipe corresponds to specific conditions necessary for the desired outcome, akin to how certain conditions in the universe precisely meet the requirements for life. For example, the order in which ingredients are mixed and the method of mixing can affect the texture and structure of the cake. The process of baking a good cake can be seen as a ‘recipe filter.’ In our experience, such precise outcomes from a recipe generally arise from the deliberate actions of a baker, who carefully measures and mixes ingredients to achieve a specific result.

By extending this cause-and-effect relationship, we might suggest that the fine-tuning observed in the universe likely required intelligent input. This idea is supported by mathematician William Dembski’s work, which suggests that physical systems or structures showing a highly improbable combination of factors, conditions, or arrangements of matter, and embodying a significant “set of functional requirements,” invariably originate from intelligent design rather than undirected material processes. This is consistent with our uniform experience.

The universe contains hundreds, if not thousands, of “dials” (constants of nature) that could adopt a wide array of alternative settings (values). Yet, each dial is calibrated precisely to allow the emergence of life. The apparently miraculous assignment of numerical values to these fundamental constants fosters the inescapable impression that the current structure of the universe has been meticulously conceived. To be fair, this is an intuitive response.

Imagine buying a lottery ticket and winning, then consistently winning every weekend for the rest of your life. At some point, you’d likely conclude that the system must be rigged in your favour. Similarly, the extraordinary fine-tuning of the universe—far more improbable than a continuous lottery streak—suggests the presence of an ultimate “fine-tuner” who many theists refer to as “God.”

The core of this argument is that the universe’s fine-tuning displays two key characteristics—extreme improbability and functional specification—that consistently evoke a sense of, and justify an inference to, intelligent design. Renowned Cambridge physicist and Nobel laureate, Brian Josephson, has expressed his confidence in intelligent design as the optimal explanation for the conditions that enable evolution at “about 80%.”[iii] The esteemed late professor of quantum physics at Yale, Henry Margenau, stated that “there is a mind which is responsible for the laws of nature and the existence of nature and the whole universe. And this is consistent with everything we know.”[iv]

Intriguingly, even physicists who maintain a materialistic perspective have acknowledged the implications of fine-tuning as suggestive of intelligent design. Atheist physicist George Greenstein admitted that despite his materialistic inclinations, “the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that, suddenly, without intending to, we have stumbled upon scientific proof for the existence of a supreme being? Was it a God who providentially stepped in and crafted the cosmos for our benefit?”[v] Richard Dawkins, one of the world’s most influential atheists, acknowledged the persuasive nature of the fine-tuning argument during his discussion with Francis Collins on the Premier Unbelievable podcast. Although not endorsing the fine-tuning argument himself, Dawkins admits that it presents an intriguing case.

Stephen C. Meyer, in his book ‘Return of the God Hypothesis’, has argued for the theistic implications of fine-tuning in a slightly different manner. His argument can be summarised as follows:

  • Major Premise: Based on our knowledge of intelligently designed objects, if an intelligent agent acted to design the universe, we might expect the universe to exhibit (a) discernible functional outcomes (such as living organisms) dependent on (b) finely tuned or highly improbable conditions, parameters, or configurations of matter.
  • Minor Premise: We observe (b) highly improbable conditions, parameters, and configurations of matter in the fine-tuning of the laws and constants of physics and the initial conditions of the universe. These finely tuned parameters (a) make life (a discernible functional outcome) possible.
  • Conclusion: We have reason to believe that an intelligent agent acted to design the universe.

If we consider intelligent design as a possible explanation for this phenomenon, it naturally points to the existence of an intelligent force beyond the universe. Such a force would need the power to establish the universe’s fine-tuning parameters and set its initial conditions at the moment of its creation. It seems clear that no being originating within the universe—having come into existence after it began—could have influenced the fine-tuning of the physical laws and constants that are crucial for the universe’s existence and development. Therefore, an intelligence within the universe, such as an extraterrestrial entity (alien), is unlikely to explain the origin of this cosmic fine-tuning.

The fine-tuning of the universe suggests the presence of an intelligent force that transcends the material cosmos. Theistic views, which depict God as existing independently of the universe in a timeless, eternal domain, align with this idea. Theism can provide a causally adequate account for the universe’s origin in time, its fine-tuning from the onset, and the emergence of specific information essential for the genesis of the first living organisms.

Let’s be honest, the idea that the universe’s fine-tuning hints at the deliberate intentions of a higher intelligence or consciousness can be tough to sell. This idea is often dismissed outright because it challenges the materialistic and naturalistic assumptions deeply ingrained in our culture. However, I’d argue that such reactions are driven more by personal biases than by solid scientific reasoning. Part of the hesitation might stem from an unease with anything that feels remotely connected to religion. And that’s exactly why we need to approach this topic with intellectual honesty and an open mind, keeping our focus squarely on the pursuit of truth. The case for intelligent design isn’t just philosophical—it’s built on a compelling scientific foundation. It’s an idea worth considering, not rejecting out of hand. Reflecting on his discovery, Sir Fred Hoyle stated, “a common-sense interpretation of the facts suggests that a super-intellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.”[vi]

Some time ago I discussed the topic of fine-tuning with a naturalistic atheist, suggesting that such precision seemed more in line with a theistic perspective of a transcendent intelligence than with strict naturalistic assumptions. He did not take my suggestion kindly and became angry and rude in his response. As the conversation intensified, he unexpectedly admitted, perhaps more out of frustration than intent, that he would rather entertain any other explanation than the notion that the cosmic fine-tuning could have originated from a deliberate, intelligent choice. His resistance was not based on logical reasoning but rather on a deep-seated anti-theistic stance.

The cosmological fine-tuning is not what we would expect from a random, aimless process. This is not the universe of “blind, pitiless indifference” that Richard Dawkins claimed it to be. Our experiences and observations suggest that such precise calibration typically originates from intelligent agency. Given that philosophical naturalism rejects any pre-universe intelligent agent, its adherents would logically expect a universe where phenomena are exclusively explained by fundamental physical laws, without the need for fine-tuning. Yet, these laws themselves do not account for either the fine-tuning of the initial conditions of the universe or the contingent features of the physical laws themselves (such as the fine-tuning of their constants) necessary for sustaining a life-permitting universe.

To be clear, science is not beholden to the constraints of naturalism. Naturalism, as a philosophical worldview, does not hold exclusive dominion over the advancement of scientific knowledge. Instead, science illuminates the path towards the metaphysical paradigm that most elegantly aligns with our provisional understanding. Thus far, our observations lead us to understand that systems showing such fine-tuning are usually the result of intelligence. Naturalism, denying any intelligence predating the universe, would seem unable to account for an entity capable of influencing the observed fine-tuning.

Naturalistic Explanations for the Fine-Tuning

“He who knows only his own side of the case knows little of that.”
— John Stuart Mill

We’ve all done it—listened to someone’s argument, only to realise later that we dismissed it too quickly or reduced it to an oversimplified version just to poke holes in it. It’s a very human habit, this urge to jump to conclusions or set up a straw man for an easy win. But think about it: a good detective doesn’t rush to judgment, and a skilled chess player doesn’t make impulsive moves. Why should we treat our ideas and conclusions any differently?

One of the most effective ways to avoid hasty thinking is to test our conclusions against alternative perspectives. Let them clash, compare notes, and see which one holds up under scrutiny. Ideas grow sharper when they’re challenged.

So, we need to be cautious about our conclusions regarding the cosmic mystery known as the fine-tuning phenomenon. Before wrapping ourselves in the cosy blanket of a theistic interpretation, it’s our duty to explore every nook and cranny of naturalistic explanations. This part of the book delves into which perspective, theistic or naturalistic, stands up to the rigours of analysis and provides the most likely explanation for the marvel of our finely-tuned universe.

Physicist Paul Davies has marvelled, “the really amazing thing is not that life on earth is balanced on a knife-edge, but that the entire universe is balanced on a knife-edge, and would be total chaos if any of the natural ‘constants’ were off even slightly.”[vii] Stephen Hawking, in relation to the fine-tuning of cosmological constants, observed, “the remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”[viii] Slight variations in the strength of any of these constants or their ratios would preclude the possibility of life. Martin Rees, an emeritus professor of cosmology and astrophysics at the University of Cambridge, aptly encapsulated the situation: “Nature does exhibit remarkable coincidences.”[ix]

Wait, what… Coincidences? Given the staggeringly slim odds for a universe to be stable enough to support life, dismissing it as mere “coincidence” seems intellectually untenable. There must be an explanation, which has prompted many to undertake the challenge of providing alternative explanations. These alternative perspectives merit further discussion.

The Weak Anthropic Principle

In 1974, physicist Brandon Carter offered a naturalistic explanation for the universe’s fine-tuning, known as the “weak anthropic principle” (WAP). This principle suggests that it shouldn’t surprise us to find ourselves in a universe fine-tuned for life, as only a fine-tuned universe could produce conscious observers like us. While this explanation acknowledges the fine-tuning, it downplays the question of why these precise constants exist in the first place.

Consider the following analogy to help explain the WAP: imagine you are a fish living in a small pond filled with water. You might wonder why the pond is filled with water, since that is the very substance that enables you and other fish to survive. According to the fish version of the principle, which we could call the weak ichthyic principle, you shouldn’t be surprised to find that the pond is filled with water, because if it were not, you wouldn’t be there to observe it in the first place. Given that you exist, the water must be there to support you, so its presence is nothing remarkable.

However, while it is true that we shouldn’t be surprised to find ourselves in a universe suited for life (since we are alive), is it not strange that the conditions necessary for life are so exceedingly improbable? For instance, consider the scenario of a blindfolded man who miraculously survives an execution by a firing squad of one hundred expert marksmen. The fact that he is alive is indeed consistent with the marksmen’s failure to hit him, but it does not account for why they missed in the first place. The prisoner ought to be astonished by his survival, given the marksmen’s exceptional skills and the minuscule probability of all of them missing if they intended to kill him. Evidently, the WAP commits a logical error by conflating the statement of a necessary condition for an event’s occurrence (in this case, our existence) with the elimination of the need for a causal explanation of the conditions enabling the event.
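The firing-squad intuition is easy to quantify. Suppose, purely for illustration, that each expert marksman independently misses with probability 0.01; the chance of all one hundred missing is then vanishingly small:

```python
import math

# Hypothetical figures: each of 100 expert marksmen independently misses
# with probability 0.01. Survival is then astronomically unlikely.
p_miss = 0.01
marksmen = 100

# 0.01**100 underflows an ordinary float, so compute the base-10 exponent.
log10_p_all_miss = marksmen * math.log10(p_miss)
print(f"P(all {marksmen} miss) = 10^{log10_p_all_miss:.0f}")  # 10^-200
```

The prisoner’s survival is consistent with the data, yet it still cries out for explanation, and that is precisely the gap the WAP leaves open.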

Furthermore, supporters of the WAP seem to overlook the fundamental question. The issue is not about why we observe a universe compatible with our existence, but rather what initially caused the universe to be fine-tuned.

The Strong Anthropic Principle

The strong anthropic principle (SAP) is an extension and a more assertive version of the WAP. While the WAP suggests that we shouldn’t be surprised to find ourselves in a universe that supports life, the SAP takes it a step further by suggesting that the universe must have properties that enable the development of intelligent life.

Astrophysicist John D. Barrow and mathematical physicist Frank J. Tipler introduced the SAP in their 1986 book, “The Anthropic Cosmological Principle.” The principle essentially argues that the presence of intelligent life is a fundamental aspect of the universe, and that the universe’s basic properties are designed in such a way that they lead to the development of intelligent observers, like humans.

One popular version of the SAP states, “The universe must have properties that eventually permit the emergence of observers within it.” This implies that the existence of intelligent life is an inevitable consequence of these cosmic properties, not merely a chance result of them.

In a sense, I agree with the SAP—it’s just stating the obvious. It restates the observation of fine-tuning and its implications for the emergence of life, but it does not itself explain the underlying reasons for that fine-tuning.

Some proponents, however, take a step beyond the SAP, grounding their explanation of fine-tuning in an interpretation of a strange phenomenon in quantum physics. The famous double-slit experiment demonstrates the strange behaviour of particles when they are observed. When particles, such as photons or electrons, pass through a barrier with two slits, they create an interference pattern on a screen behind the barrier, as if they are behaving like waves. However, when the particles are observed or measured as they pass through the slits, the interference pattern disappears, and the particles behave like individual particles instead of waves.

Applying the principles of this quantum oddity to the SAP suggests that conscious observation plays a pivotal role in determining the behaviour of particles and the very fabric of reality itself.

This calls to mind the Participatory Anthropic Principle (PAP), proposed by physicist John Archibald Wheeler, whom I greatly admire. It suggests that human observations are necessary to bring the entire universe into existence, merging ideas of consciousness with quantum mechanics (QM). While the SAP suggests that the universe must inherently have properties that enable life to develop, the PAP takes it a step further, proposing that conscious life actively and retroactively influences the conditions that allow for its own existence.

Now, it’s important to note that the PAP takes a stronger position on the role of conscious observation than many physicists would support based solely on QM. In quantum physics, “observation” generally refers to any interaction that leads to the collapse of a particle’s wave function, not necessarily involving a conscious observer. The role of consciousness in QM is still unclear and highly debated.

Either way, these interpretations endeavour to explain not only the existence of conditions necessary for observation, but also the underlying cause or design that allows the observer to significantly impact the experiment’s outcome. Proponents of PAP argue that, akin to an electron’s specific location being contingent on observation, the universe itself might depend on an observer for its existence. Thus, this extension of the strong anthropic principle affirms that:

  1. The universe must have those properties which allow life to develop within it at some stage in its history.
  2. There exists one possible universe ‘designed’ with the goal of generating and sustaining ‘observers’.
  3. And observers are necessary to bring the universe into being.

There are a few issues with this proposal. Firstly, there’s what can be described as the “grandfather paradox.” Imagine you travel back in time to when your grandfather was young, before he had children, and, for whatever reason, your actions stop him from starting a family. If your grandfather never had children, one of your parents—and by extension, you—would never exist. But if you were never born, how could you have travelled back in time to intervene in the first place?

This idea of circular causality is strikingly similar to a problem in the PAP. If the existence of conscious observers (the effect) is contingent on a finely tuned universe (the cause), yet the universe’s very nature is said to depend on those same observers, we find ourselves entwined in a causal loop reminiscent of the paradox.

How can our observation of the universe explain its fine-tuning, if the fine-tuning occurred billions of years before we were here? Even if we accept the possibility of two entities continually causing each other in an eternal cycle, this doesn’t clarify why such a looping system exists in the first place.

Furthermore, this line of reasoning suggests consciousness exists in a dualistic relationship with material reality, hinting at a non-materialistic view of existence. This challenges the naturalistic framework.

Even when we look at the strange world of quantum mechanics, where observers seem to play a role, the usual cause-and-effect order still stands. When an observation causes a quantum wave function to collapse, the cause (the act of measurement) precedes the effect (the collapse of the wave function).

Therefore, to claim that consciousness triggers the existence of a finely-tuned universe implies the existence of a conscious mind that predates our spacetime—an idea quite different from the emergence of human consciousness within the universe.

The Multiverse — String Theory and Inflationary Models

Moving on from ideas like the WAP and SAP, some scientists have proposed a bold and intriguing alternative: the multiverse. This hypothesis flips the script on the so-called “fine-tuning” of the universe, transforming it from an improbable fluke into something we might actually expect in an infinite cosmic lottery.

Here’s the gist: to sidestep the theistic implications of a finely-tuned universe, some theorists propose the existence of not just one, but an almost infinite number of universes. If there are nearly infinite universes, it’s not so surprising that at least one of them, like ours, happens to have just the right conditions for life. It’s like rolling dice over and over again; eventually, you’re bound to hit a lucky roll. In this view, our universe isn’t special because of a higher intelligence—it’s just one of many outcomes in a vast multiverse.

Proponents of the multiverse theory often describe our universe as having luckily won a cosmic lottery. They compare the universe-generating process to a slot machine, where each ‘spin’ produces a new universe. While most of these universes do not support life, some, like ours, do.

Two major cosmological models have been proposed to explain the potential origin of new universes. The first model, proposed by Andrei Linde, Alan Guth, and Paul Steinhardt, is based on inflationary cosmology (we talked about this earlier in the book). The second model is rooted in string theory. Both models were initially created to address specific challenges in physics, but they were later adapted to offer multiverse explanations for the fine-tuning observed in our universe.

Inflationary Multiverse Model

Let’s first clarify the inflationary cosmological model. Right after the Big Bang—in the first 10^-36 to 10^-32 seconds—the universe didn’t just expand slowly. It blew up like a balloon on steroids. This was a lightning-fast growth spurt that lasted for an incredibly brief moment before things settled into a more gradual pace of expansion.

Initially, scientists developed this inflation model to solve some big head-scratchers in the original Big Bang theory. But as science often goes, one idea leads to another. One prominent version is known as “eternal chaotic inflation,” proposed by Andrei Linde. This model takes the inflationary framework to another level by introducing a dynamic and stochastic element to the inflationary field, characterised by vacuum energy. In this eternal chaotic inflation scenario, this field acts as a catalyst for the expansion of spacetime, within which our observable universe and potentially numerous others have emerged.

In the context of this model, the inflation field is not uniform but varies across different regions of spacetime. This variation allows for the spontaneous nucleation of lower-energy bubble universes within the higher-energy inflationary field. These bubble universes, including ours, are thought to be causally disconnected due to their rapid and divergent expansion rates. As a result, this model predicts a multiverse, with bubble universes nestled within an ever-expanding inflationary backdrop.

Supporters of this model argue that if there are countless universes out there, then everything that could possibly happen will happen somewhere. Even the most unlikely scenarios become inevitable when you have infinite chances to roll the cosmic dice. This ties into something called the anthropic principle: basically, our universe seems perfectly fine-tuned for life—not because we’re special, but because in a multiverse with endless possibilities, at least one universe was bound to hit the jackpot for life-friendly conditions. And lucky us—we’re living in it.
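The “everything happens somewhere” logic can be made precise. With N independent bubble universes, each life-permitting with some tiny probability p (the value below is purely illustrative, not a measured quantity), the chance of at least one success is 1 - (1 - p)^N, which only approaches certainty once N rivals 1/p:

```python
import math

# Toy anthropic-lottery calculation. p stands in for the odds that a random
# bubble universe is life-permitting; the value is illustrative, not measured.
p = 1e-120

for exponent in (100, 120, 140):
    n_universes = 10.0 ** exponent
    # P(at least one) = 1 - (1 - p)**N ~= 1 - exp(-p*N); expm1 keeps precision.
    p_at_least_one = -math.expm1(-p * n_universes)
    print(f"N = 10^{exponent}: P(at least one life-permitting) ~ {p_at_least_one:.3g}")

# N = 10^100 -> ~1e-20; N = 10^120 -> ~0.632; N = 10^140 -> ~1.
# The multiverse explanation works only if the ensemble is vast enough to swamp p.
```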

The String Theory Model

String theory is a complex and expansive framework that offers an alternative explanation for the fine-tuning of the laws and constants of physics. It’s a theory so intricate that even physicists admit it can feel overwhelming. But at its core lies a simple and elegant idea: the building blocks of the universe aren’t tiny particles, but unimaginably small, vibrating strings of energy.

Forget the image of tiny, indivisible particles like electrons, photons and quarks zipping around. String theory says that the fundamental building blocks of everything aren’t actually point-like particles, but incredibly tiny, one-dimensional strings of energy. These strings can vibrate in different patterns in many more dimensions than we can perceive, forming both “open” and “closed” strings. All elementary particles, according to string theory, are manifestations of these differently vibrating strings.

But there’s a twist: for string theory to make sense mathematically, our universe needs more than the familiar three dimensions of space and one of time. It requires six or seven extra spatial dimensions—hidden from view because they’re “compactified,” curled up into tiny topological structures smaller than about 10^-35 metres, the scale physicists call the Planck length. This is the scale at which quantum gravitational phenomena are expected to occur.
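The Planck length itself is not an arbitrary figure; it falls straight out of the fundamental constants via l_P = √(ħG/c³). A quick check:

```python
import math

# Planck length l_P = sqrt(hbar * G / c^3), the scale below which the
# compactified extra dimensions are supposed to be curled up.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0       # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.3e} m")  # ~1.616e-35 m
```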

String theorists envision that within these minuscule structures, energy strings vibrate in the six or seven extra spatial dimensions. The variations in these vibrations give rise to the particle-like phenomena we observe in our familiar three dimensions of space. At its core, string theory is a quantum-scale, particle physics-based theory that aims to unify all fundamental forces, including gravity.

One outcome of this theory is the proposed existence of “gravitons,” which are understood as massless, closed strings that transmit gravitational forces across long distances at the speed of light. Different vibrational states of strings, not just gravitons, are believed to be responsible for the various fundamental particles, including those that carry the other three fundamental forces of physics (electromagnetic, weak, and strong forces).

Initially, string theory only described force-carrying particles called bosons (like photons or gluons). But it didn’t account for matter—the stuff that makes up you, me, and everything else. To fix this, physicists introduced supersymmetry, a principle that pairs every force-carrying particle (boson) with a matter particle (fermion), and vice versa. This addition not only allowed string theory to describe both matter and forces but also reduced the number of required dimensions from 26 to 10 (nine spatial dimensions plus time).

Here’s where things get even more mind-bending. When physicists worked through string theory’s equations, they didn’t find just one, unique solution reflecting the physics of our universe. Instead, they revealed numerous solutions, each representing a different possible physical reality. Initially, physicists viewed this surplus of solutions as an embarrassment, a glaring flaw in the model. But some string theorists, with an innovative twist, turned this perceived vice into a virtue. They suggested that each of the vast number of possible ways to compactify the extra dimensions leads to a different vacuum state, or “solution,” of the string theory equations, and thus to a different universe with its own set of physical laws and constants. The shape of the folded spaces associated with each solution determines the laws of physics in the observable spatial dimensions, with the number of flux lines determining the constants of physics.

In essence, some scientists argue that string theory, with its additional dimensions and vibrating strings, combined with the principles of supersymmetry and the hypothetical graviton, leads to the idea of a multiverse. They propose a mechanism capable of generating an astonishing number of possible universes, ranging from 10^500 to 10^1,000, each corresponding to one of these different solutions. This suggests that the specific tuning of the laws and constants in our universe isn’t just a coincidence but rather a probable outcome.

The proposed mechanism begins with a high-energy compactification of space, representing a universe with a quantum gravitational field. As this field’s energy decayed, it gave rise to new universes with different physical laws and constants. This ongoing process of energy decay would sequentially morph one universe into another. Through this process, the vast landscape of potential universes was explored, making the fine-tuning parameters of our life-friendly universe an inevitable result of a random exploration process.

One God vs Many Universes

Let’s take a moment to think about this: Do cosmological models like inflationary cosmology or string theory really explain the fine-tuning of the universe’s laws, constants, and initial conditions? Are they better at addressing cosmic fine-tuning than the idea of intelligent design?

It’s important to remember that these theories are still hot topics for debate and research in theoretical physics. Many physicists see the multiverse hypothesis more as speculative metaphysics than a solid scientific theory. They argue that because we can’t observe or measure other universes or a divine creator, choosing between these two ideas often comes down to personal preference. They claim there aren’t strong enough reasons to favour one hypothesis over the other.

I see it differently. Both scientific and metaphysical hypotheses can be evaluated by comparing their explanatory power against their competitors. In this context, we can weigh the pros and cons of the intelligent design hypothesis against the multiverse concept. In my view, there are good reasons to consider intelligent design as a more convincing explanation than the multiverse theory.

To begin with, from an intuitive standpoint, it seems more reasonable to consider the simpler explanation. When faced with multiple hypotheses, the one that makes the fewest assumptions is generally preferable. As Oxford philosopher Richard Swinburne has argued:

“It is the height of irrationality to postulate an infinite number of universes never causally connected with each other, merely to avoid the hypothesis of theism. Given that… a theory is simpler the fewer entities it postulated, it is far simpler to postulate God than an infinite number of universes, each different from each other.”[x]

Secondly, it’s crucial to understand that neither inflationary cosmology nor string theory fully tackles the fine-tuning conundrum. To address both types of fine-tuning, a multiverse solution requires the acceptance of two different universe-generating mechanisms. While inflationary cosmology could theoretically account for the fine-tuning of the universe’s initial conditions, it falls short of explaining the origin of the fine-tuning of the laws and constants of physics. As I understand it, this is because the inflation field operates consistently with the same laws of physics across its expansive space. As it spawns new bubble universes, these offshoots retain the same laws and constants, with only the configurations of mass-energy being novel. String theory, on the other hand, could potentially clarify the fine-tuning of the laws and constants of physics, but in most models it fails to generate multiple sets of initial conditions corresponding to each choice of physical laws.

This implies that to conceive a multiverse theory capable of addressing both types of fine-tuning, physicists must speculate on two distinct universe-generating mechanisms working in tandem, one rooted in string theory and the other in inflationary cosmology. This has led many theoretical physicists to adopt a hybrid multiverse model called the “inflationary string landscape model.” While this approach could theoretically explain the fine-tuning phenomena, it introduces what philosophers call a “bloated ontology,” postulating a vast number of purely speculative and abstract entities for which we lack direct evidence.

The inflationary string landscape model combines complex assumptions about a multitude of hypothetical entities and unobservable processes. String theory has yet to make a testable prediction that experiment can verify. Its reliance on unseen extra dimensions and the lack of concrete evidence make the model particularly hard to credit.

In contrast, a theistic design hypothesis offers a simpler alternative—a transcendent mind is behind it all. Unlike the inflationary string multiverse theory, which requires an extravagant array of abstract theoretical entities, this approach avoids conceptual bloat.

Still, while the idea of intelligent design may feel more intuitive, it isn’t a definitive answer. Instead, it provides a reasonable foundation for further exploration. After all, if the universe was fine-tuned by an ultimate mind, that only raises more probing questions—chief among them, how and why?

Here’s the point: it’s more reasonable to lean toward the idea of intelligent design by a “Supermind” when we consider what we already know. Our extensive experience with intelligent agents crafting intricate and purpose-driven systems—like Swiss watches, gourmet recipes, integrated circuits, or literary works—supports this notion. Fine-tuning a physical system to achieve a specific, propitious end is precisely what intelligent agents are known to do, so hypothesising a “Supermind” to explain the fine-tuning of the universe is a natural extension of our experiential knowledge of intelligent agents’ causal capacities. On the other hand, the mechanisms proposed by various multiverse theories lack a comparable basis in our experiential knowledge. We have no experiential reference for universe-generating processes as described by these theories.

Additionally, to account for the fine-tuning observed in our universe, multiverse theories derived from inflationary cosmology and string theory suggest mechanisms that themselves require fine-tuning. Essentially, even if a multiverse could potentially justify the fine-tuning of our universe, it would still need an underlying mechanism to create these multiple universes. This mechanism would also need its own explanation for fine-tuning, thus pushing the problem further up the causal chain. So, it remains unclear whether multiverse theories can adequately address the issue of fine-tuning without invoking some prior form of fine-tuning.

At a minimum, string theory requires the delicate fine-tuning of initial conditions, as evidenced by the scarcity of the highest-energy solutions (approximately 1 part in 10^500) within the array of possible solutions or compactifications. Similarly, inflationary cosmology demands more fine-tuning than it was designed to explain. Theoretical physicists Sean Carroll and Heywood Tam have shown that the fine-tuning associated with chosen inflationary models is roughly 1 part in 10^66,000,000, further compounding the problem it was intended to solve.

It should be noted that scientists are sharply divided over multiverse theories. Several eminent physicists, including Sir Roger Penrose, Avi Loeb, and Paul Steinhardt, have dismissed multiverse inflationary cosmology. Penrose criticises it for its fine-tuning problems. Loeb challenges the theory’s lack of falsifiability, arguing that it cannot be empirically tested or verified, thereby questioning its scientific validity. Steinhardt, originally a proponent of the inflationary model, now disputes its predictability and testability, concerned that its adaptability to any observation renders it irrefutable and therefore unscientific.

Furthermore, string theory predicts “supersymmetry” as a vital component for unifying the fundamental forces of physics. However, extensive experiments at the Large Hadron Collider have yet to find any supersymmetric particles. Coupled with other failed predictions and the embarrassment of string theory’s vast number of solutions, scepticism has been growing among many leading physicists. As Nobel Prize-winning physicist Gerard ’t Hooft once remarked:

“I would not even be prepared to call string theory a ‘theory,’ rather a ‘model,’ or not even that: just a hunch. After all, a theory should come with instructions on how… to identify the things one wishes to describe, in our case, the elementary particles, and one should, at least in principle, be able to formulate the rules for calculating the properties of these particles, and how to make new predictions for them. Imagine that I give you a chair, while explaining that the legs are missing, and that the seat, back, and armrests will be delivered soon. Whatever I gave you, can I still call it a chair?”[xi]

To me, leaning on the concept of multiple universes to dodge any reasonable explanation resembling a God-like cause can sometimes feel like a form of metaphysical special pleading. Renowned theoretical physicist John Polkinghorne, a colleague of Stephen Hawking and the former president of Queens’ College, Cambridge, is celebrated for his distinguished scholarship and brilliance in his field. Having been at the vanguard of high-energy physics for over three decades, he staunchly maintains in his book, The Quantum World, that the intricate and intelligible nature of our universe is not adequately explained by random processes of chance. With reference to the multiverse proposition, he argues:

“Let us recognise these speculations for what they are. They are not physics, but, in the strictest sense, metaphysics. There is no purely scientific reason to believe in an infinite ensemble of universes… A possible explanation of equal intellectual respectability – and to my mind, greater elegance – would be that this one world is the way it is because it is the creation of the will of a Creator who proposes that it should be so.”

In the context of discussing the inherent factors within this universe, particularly with reference to quantum theory, Dr. Polkinghorne, during a seminar at Cambridge, wittily remarked, “there is no free lunch. Somebody has to pay, and only God has the resources to put in what was needed to get what we’ve got.”

Godly Intrusion

Why is the multiverse often considered the best explanation for cosmological fine-tuning, despite its many drawbacks? The key might be found in a statement by theoretical physicist Bernard Carr: “to the hard-line physicist, the multiverse may not be entirely respectable, but it is at least preferable to invoking a Creator.”[xii]

Many scientists, committed to a naturalistic worldview, dismiss the idea of a Creator as a plausible explanation. It’s their entrenched worldview, not scientific reasoning, that has confined their thinking. Naturalism (or materialism) has become a straitjacket for science, hindering scientists from following or even recognising promising leads.

Consider the irony of the multiverse argument. To sidestep the consideration of a cause that may be associated with God, some have suggested the existence of other universes—entities that we can’t see, touch, examine, or scientifically validate. Interestingly, these are the very reasons often given for dismissing the existence of God.

Physicists have come up with several theories to explain the fine-tuning of the universe without involving a higher intelligence. However, these proposals either fail to account for fine-tuning (as with the weak and strong anthropic principles) or they resort to explaining it by surreptitiously invoking other sources or prior unexplained fine-tuning. Yet, the fine-tuning in the universe has precisely those traits—extreme improbability and functional specificity—that instinctively and consistently lead us to infer the presence of intelligent design based on our uniform and repeated experiences. With this in mind, it seems reasonable to at least consider intelligent design as a worthwhile explanation for the fine-tuning of the laws and constants of physics and the initial conditions of the universe.


[i] Hoyle, F. ‘The Nature of the Universe’ (1960).

[ii] Penrose, R. ‘The Emperor’s New Mind’, 341-344.

[iii] Josephson, B. Interview by Robert Lawrence Kuhn for the PBS series ‘Closer to Truth’.

[iv] Margenau, H. Interview at Yale University, March 2, 1986.

[v] Greenstein, G. ‘The Symbiotic Universe’, 27.

[vi] Hoyle, F. ‘The Universe’.

[vii] ‘The Anthropic Principle’, Horizon, Season 23, Episode 17, BBC, May 18, 1987.

[viii] Hawking, S. ‘A Brief History of Time’, 26.

[ix] Rees, M. ‘Just Six Numbers’, 22.

[x] Swinburne, R. ‘Science and Religion in Dialogue’ (2010), 230.

[xi] ’t Hooft, G. ‘In Search of the Ultimate Building Blocks’, 163-164.

[xii] Carr, B. ‘Introduction and Overview’, 16.
