Winning the Universal Lottery: God, the Multiverse and Fine Tuning


“Astronomy leads us to a unique event, a universe which was created out of nothing, one with the very delicate balance needed to provide exactly the right conditions required to permit life, and one which has an underlying (one might say ‘supernatural’) plan.”
— Arno Penzias, Physics Nobel Prize winner

A Life-Permitting Universe

In the world of astrophysics, Sir Fred Hoyle is a name that stands out. Hoyle, who started his career as a firm atheist, believed there was no evidence of God in the universe. He argued that “religion is but a desperate attempt to find an escape from the truly dreadful situation in which we find ourselves… No wonder then that many people feel the need for some belief that gives them a sense of security, and no wonder that they become very angry with people like me who think that this is illusory.”[i] His atheism significantly influenced his scientific perspective, predisposing him to dismiss the notion that the universe had a beginning.

(This is an excerpt from “Does the Universe Paint God Out of the Picture?” by Luke Baxendale. It is part four of the book.)

However, Hoyle’s atheism was shaken by a groundbreaking discovery. He identified a set of parameters, now known as the “fine-tuning” parameters of the universe, which revealed that numerous properties of our universe fall within exceptionally narrow and improbable ranges. These properties are essential for the chemistry that supports complex life, indeed, for any conceivable form of life. Physicists have since labelled the fortunate values of these factors “anthropic coincidences” and the convergence of these coincidences the “anthropic fine-tuning” of the universe.

Since the 1950s, discovery after discovery has added to the kaleidoscopic picture of an increasingly complex and finely balanced universe. It has become apparent that the existence of life in the universe relies on a highly improbable dance of forces, features, and a delicate equilibrium among them. Our “Goldilocks universe” (not too hot, not too cold, but just right) seems to be characterised by fundamental forces of physics with just the right strengths, contingent properties with the perfect characteristics, and an initial distribution of matter and energy configured precisely to support life. Even the slightest difference in these properties would have rendered complex chemistry and life impossible. The fine-tuning of these properties has bewildered physicists not only because of its extreme improbability, but also because there appears to be no underlying physical reason or necessity for it in the fundamental laws of physics or mathematics.

For instance, carbon-based life is the sole known form of life, and carbon possesses unique qualities that render it ideal for complex chemistry and life. Throughout his career, Hoyle contemplated the factors that needed to be perfectly calibrated for carbon to be readily produced within stars. These factors include the strengths of the strong nuclear and electromagnetic forces, the ratios between fundamental forces, the precise kinetic energy of beryllium and helium, the strength of gravitational forces within stars, and the excitation energy of carbon. Hoyle concluded that these factors required exquisite tuning and coordination within remarkably narrow tolerances to facilitate the synthesis of substantial amounts of carbon inside stars.

Astounded by these “cosmic coincidences” and numerous others that physicists have uncovered since the 1950s, Hoyle became convinced that an intelligent force must have orchestrated the intricate balance of forces and factors in nature, rendering the universe life-permitting. Nevertheless, the fine-tuning parameters Hoyle discovered represent only a fraction of the parameters necessary to ensure a universe that could allow for life.

While some examples of fine-tuning are subject to dispute and debates surrounding probability calculations, numerous well-established instances of fine-tuning are widely accepted by most scientists. These examples highlight the exceedingly narrow probabilities of finely tuned constants necessary for the existence of life:

  • Gravitational constant: 1 part in 10^34
  • Electromagnetic force versus the force of gravity: 1 part in 10^37
  • Cosmological constant: 1 part in 10^90
  • The mass density of the universe: 1 part in 10^59
  • The expansion rate of the universe: 1 part in 10^55

A conservative estimate might suggest around 20 to 30 such constants and parameters are commonly considered when discussing the fine-tuning of the universe, though this number can vary based on the breadth of factors included in the analysis.

To get a sense of how extreme these odds are, imagine firing a bullet toward the far side of the observable universe, around 20 billion light-years away, and hitting a one-inch target dead centre. That’s the level of precision we’re talking about. And these examples are just the beginning. Following Hoyle, the celebrated mathematical physicist Sir Roger Penrose took an even closer look at the universe’s remarkable fine-tuning, focusing on the initial distribution of mass-energy, what’s often called the “initial entropy” fine-tuning. He uncovered a level of precision in the universe’s delicate balance that is even more astonishing.

Initial-Entropy Fine-Tuning

To start, let’s talk about entropy. You might vaguely remember hearing about it back in high school physics. Put simply, entropy is the rather sobering concept that nothing lasts forever. Everything we create eventually breaks. Everyone we cherish will one day pass away. Any semblance of order or stability we manage to build is destined to unravel over time. On a cosmic scale, the entire universe moves relentlessly toward a state of ultimate disorder. To describe and quantify this universal tendency toward decay and chaos, physicists use the term “entropy.” And yes, that’s not exactly uplifting!

Entropy is often described as a measure of disorder, and it stems from the second law of thermodynamics, which is one of the most unyielding principles in nature. This law states that the total entropy of the universe always increases over time. In other words, the universe is wired to favour messiness and disarray. Order, meanwhile, is fragile and fleeting. For instance, making a beautiful vase by hand might take weeks of meticulous effort, yet a single careless kick of a football can shatter it in an instant. Similarly, the second law dictates that no machine can ever be perfectly efficient; every system wastes some energy during its processes. Ultimately, any structure that arises in the universe, from a star blazing in space to a living organism metabolising food, exists only to further dissipate energy into the cosmos.

The story of entropy began around 200 years ago with French engineer Sadi Carnot, who, in 1824, explored the limits of heat-engine efficiency and discovered that some energy is always lost as heat during energy conversion. Later, German physicist Rudolf Clausius formalised these ideas, laying the groundwork for the second law of thermodynamics. In 1865, Clausius coined the term “entropy,” from the Greek entropē (meaning “transformation”), to describe energy’s natural tendency to degrade and become less available for work. He also noted that in any spontaneous process, entropy never decreases. Austrian physicist Ludwig Boltzmann then linked entropy to the microscopic behaviour of particles. He showed that entropy isn’t just about wasted energy; it’s about the vast number of ways a system’s components (its “microstates”) can arrange themselves to produce the same overall appearance (its “macrostate”). Boltzmann revealed that disorder is statistically favoured because there are far more disordered arrangements than ordered ones. His work bridged thermodynamics and statistical mechanics, highlighting how entropy connects probabilities, energy distribution, and the natural drift toward disorder.

In simple terms, entropy measures the number of ways the particles in a system, like atoms, can be arranged. A system with low entropy is highly ordered, while a system with higher entropy is more disordered.
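
For readers who want the formula behind this picture, Boltzmann’s relation is standard textbook physics (quoted here for context, not drawn from the sources cited in this chapter):

\[
S = k_B \ln W
\]

Here W counts the microstates compatible with a given macrostate, and k_B ≈ 1.38 × 10^-23 J/K is Boltzmann’s constant. The second law then says that for an isolated system the total entropy never decreases (ΔS ≥ 0); disordered macrostates dominate simply because vastly more microstates realise them.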

So, how does this relate to the universe’s fine-tuning? For the universe to form structured systems like galaxies and solar systems, it had to start in a state of relatively low entropy. This means that, initially, the mass and energy were distributed in a very specific and uniform way.

Consider black holes: they represent high-entropy states. Within a black hole, the conventional concepts of space and matter break down. This entropy doesn’t imply chaos in the usual sense; instead, it reflects the vast number of ways matter and energy can be organised at the event horizon, the black hole’s boundary.

In contrast, our universe reflects a state of lower entropy (relatively speaking). This is evident in the formation of structured, organised entities like galaxies, solar systems, and stars. These cosmic structures formed through the pull of gravity, organising matter into complex patterns that seem to defy entropy’s increase. However, this organisation on a cosmic scale is consistent with the overall increase in entropy, according to the laws of thermodynamics.

The early state of our universe, especially its mass and energy distribution, was characterised by low entropy. This crucial condition set the stage for the development of large-scale cosmic structures like galaxies over time. In a universe with high entropy, matter would likely either be too evenly dispersed or end up trapped within black holes, thus hindering the formation of galaxies and stars. Therefore, the presence of organised cosmic structures in our universe is a clear indication of its low-entropy origins.

Sir Roger Penrose wanted to determine the probability of our universe having the low-entropy, highly ordered arrangement of matter observed today. He understood that by answering this question, he could gauge the fine-tuning of the initial arrangement of matter and energy at the beginning of the universe. Penrose concluded that the formation of a universe like ours required an astoundingly improbable low-entropy set of initial conditions. He used principles from thermodynamics, general relativity, and cosmology to analyse the initial conditions of the universe, considering the gravitational degrees of freedom related to the distribution of matter and energy at the beginning of the universe. By comparing the phase-space volume corresponding to the observed low-entropy state to the total phase-space volume of all possible configurations, Penrose could estimate the probability of our universe starting in the highly ordered, low-entropy state that it did. Considering the vast range of potential entropy values for the early universe, he calculated that the likelihood of a universe possessing initial conditions conducive to life is 1 in 10^(10^123).[ii] That is 10 raised to the power of 10^123, a 1 followed by 10^123 zeros. That’s a big number.

For comparison, physicists estimate there are about 10^80 elementary particles in the entire universe, which is practically nothing compared to Penrose’s figure of 10^(10^123). Even if all the matter in the universe were converted into paper, there would still not be enough to print the number of zeros required to write this probability out. In other words, Penrose’s calculation suggests that, among the nearly infinite potential configurations of mass and energy at the universe’s beginning, only a minute fraction would lead to a universe resembling ours, where life is possible.
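
For the mathematically curious, here is a rough sketch of how a number like this arises; it is a simplified, order-of-magnitude reconstruction of the kind of phase-space estimate Penrose made, not a quotation of his working:

\[
\frac{V_{\text{smooth}}}{V_{\text{max}}} \sim \frac{e^{S_{\text{early}}/k_B}}{e^{S_{\text{max}}/k_B}} \approx e^{-10^{123}} \approx 10^{-10^{123}}
\]

where S_max ~ 10^123 k_B is roughly the Bekenstein-Hawking entropy the observable universe’s matter would have if it were all collapsed into black holes, and S_early is the far smaller entropy of the actual early universe. Written out in decimal, the denominator would need about 10^123 zeros; with only about 10^80 particles available, even one zero per particle falls short by a factor of around 10^43.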

Theistic Proposal for Fine-Tuning

The Stanford Encyclopedia of Philosophy notes that “the apparent probability of all the necessary conditions sufficient to allow just the formation of planets (let alone life) coming together by chance is exceedingly minuscule.” So how do we make sense of these extraordinary “coincidences”, these royal flushes turning up hand after hand? One option is obvious, if controversial: maybe the system isn’t just coincidental. Maybe its conditions were set up, or at least steered, by some kind of purposeful intelligence. That’s the core of the “intelligent design” argument in the fine-tuning debate.

On the other hand, could a purely naturalistic explanation suffice to account for this fine-tuning? Is the fine-tuning, as Richard Dawkins has framed the issue, “just what we would expect” if “at bottom there was no purpose, no design… nothing but blind pitiless indifference?” In a similar vein, can we identify a coherent series of explanations for the fine-tuning of the laws and constants of physics, as well as the initial conditions of the universe, “which ultimately reaches to the fundamental laws of nature and stops,” that the theoretical physicist Sean Carroll says naturalism requires?

So, which view is more likely? Does the universe’s improbable arrangement of properties hint at an intelligent force behind it all, or can the fine-tuning be entirely explained by a series of interconnected self-explaining natural phenomena, with no guiding intelligence required?

As for me, I lean toward the first. I don’t have a clear picture of what that looks like, but when I step back and consider the astonishing precision of the universe, it’s hard not to feel the weight of that intuition, that there’s something deliberate at play.

This argument rests on two central observations: our universe is both immensely improbable and functionally specific. In our everyday experience, whenever we encounter something that combines these two traits, it instinctively points to the involvement of a designing intelligence.

What’s particularly intriguing is that these finely balanced variables of our universe are characterised as being:

  • Contingent (they could have been different, e.g. the mass of a proton or the expansion rate of the universe could have been quite different from what they actually are);
  • Extraordinarily improbable, balanced within functionally infinitesimal tolerances;
  • Independently specifiable (they correspond precisely to the conditions necessary for life).

When we encounter something in everyday life that’s contingent, improbable, and specific, we tend to infer that there’s an intelligence behind it. Design theorists call this combination of traits the “design filter.” And while applying it to the universe is a bold move, it’s hard not to see the parallels.

Let me put this in down-to-earth terms with a simple analogy: baking a cake. To get a good cake, you need specific ingredients—flour, sugar, eggs, baking powder—in the right amounts, mixed in the right way, baked at the right temperature for just the right amount of time. Too much or too little of anything, and the whole thing fails: flat, burnt, or inedible.

That kind of precision is exactly what fine-tuning in the universe looks like. Each step in the recipe corresponds to specific conditions necessary for the desired outcome. Swap the flour for too much sugar, or crank the oven too hot, and the cake collapses. Likewise, tweak the mass of a proton or the strength of gravity even slightly, and a life‑permitting universe collapses before it starts. The process of baking a good cake can be seen as a ‘recipe filter.’ In our experience, such precise outcomes from a recipe generally arise from the deliberate actions of a baker, who carefully measures and mixes ingredients to achieve a specific result.

This is the basic intuition behind what mathematician William Dembski argues: when we encounter physical systems or structures that show a highly improbable combination of factors, conditions, or arrangements of matter, and that embody a significant “set of functional requirements,” they invariably originate from a mindful source rather than from undirected material processes. This is consistent with our uniform experience.

The universe contains hundreds, if not thousands, of “dials” (constants of nature) that could adopt a wide array of alternative settings (values). Yet each dial is calibrated precisely to allow the emergence of life. The apparently miraculous assignment of numerical values to these fundamental constants supports the conclusion that the universe hasn’t just stumbled into its current structure; it has been deliberately set up.

For instance, imagine buying a lottery ticket and winning, not once, but week after week, for the rest of your life. At some point, you’d stop calling it “luck” and start suspecting the system was rigged. Likewise, the extraordinary fine-tuning of the universe, far more improbable than a continuous lottery streak, suggests the presence of an ultimate “fine-tuner”, whom many theists refer to as “God.”

I’m leaning on simple analogies here, but that’s because at bottom this is a simple argument. And “simple” isn’t necessarily a weakness. The reasoning is logical and intuitive. The universe’s fine-tuning displays two key characteristics, extreme improbability and functional specification, that consistently evoke a sense of, and justify an inference to, an intelligent cause. The renowned Cambridge physicist and Nobel laureate Brian Josephson has put his confidence in “intelligent design” as the best explanation for the conditions that enable evolution at “about 80%.”[iii] The esteemed late professor of physics at Yale, Henry Margenau, stated that “there is a mind which is responsible for the laws of nature and the existence of nature and the whole universe. And this is consistent with everything we know.”[iv]

Intriguingly, even physicists who maintain a materialistic perspective have acknowledged the implications of fine-tuning as suggestive of some intelligent source. Atheist physicist George Greenstein admitted that despite his materialistic inclinations, “the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that, suddenly, without intending to, we have stumbled upon scientific proof for the existence of a supreme being? Was it a God who providentially stepped in and crafted the cosmos for our benefit?”[v] Richard Dawkins, one of the world’s most influential atheists, acknowledged the persuasive nature of the fine-tuning argument during his discussion with Francis Collins on the Premier Unbelievable podcast. He does not endorse the fine-tuning argument himself but admits that it presents an intriguing case.

Stephen C. Meyer, in his book ‘Return of the God Hypothesis’, has argued for the theistic implications of fine-tuning in this way:

  • Major Premise: Based on our knowledge of intelligently designed objects, if an intelligent agent acted to design the universe, we might expect the universe to exhibit (a) discernible functional outcomes (such as living organisms) dependent on (b) finely tuned or highly improbable conditions, parameters, or configurations of matter.
  • Minor Premise: We observe (b) highly improbable conditions, parameters, and configurations of matter in the fine-tuning of the laws and constants of physics and the initial conditions of the universe. These finely tuned parameters (a) make life (a discernible functional outcome) possible.
  • Conclusion: We have reason to believe that an intelligent agent acted to design the universe.

If we consider an intelligent cause as a possible explanation for this phenomenon, it naturally points to the existence of an intelligent force beyond the universe. Such an intelligence would need the power to set the fine‑tuning parameters and fix the universe’s initial conditions at the very moment of creation. This makes more sense than suggesting that a being originating within the universe, having come into existence after it began, could have influenced the fine-tuning of the physical laws and constants that govern it. That rules out explanations like advanced extraterrestrials, and leaves us with the presence of an intelligent force that transcends the material cosmos. Theistic views, which depict God as existing independently of the universe in a timeless, eternal domain, align with this idea. Theism can provide a causally adequate account for the universe’s origin in time, its fine-tuning from the onset, and the emergence of specific information essential for the genesis of the first living organisms.

Pointing to fine‑tuning as evidence of a higher intelligence is a tough sell. It rubs against the grain of the materialistic, naturalistic assumptions baked into our culture. And more often than not, the quick dismissals come less from the strength of the evidence and more from discomfort with where it leads, especially if it sounds too close to religion. That’s why this calls for a bit of open‑mindedness. The idea of an intelligent source isn’t smoke and mirrors; it’s got a real scientific backbone. It deserves to be considered, not shrugged off out of habit or bias. As astrophysicist Sir Fred Hoyle once put it, “a common-sense interpretation of the facts suggests that a super-intellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.”[vi]

Some time ago I discussed the topic of fine-tuning with a naturalistic atheist, suggesting that such precision seemed more in line with a theistic perspective of a “transcendent intelligence” or “grounding consciousness” (for want of better terms), than with strict naturalistic assumptions. He did not take my suggestion kindly and became angry and somewhat rude in his response. As the conversation intensified, he unexpectedly admitted, perhaps more out of frustration than intent, that he would rather entertain any other explanation than to consider that cosmic fine-tuning could have originated from a deliberate, mindful source. His resistance was based on a deep-seated anti-theistic stance.

Yet the cosmological fine-tuning is not what we would expect from a random, aimless process. This is not the universe of “blind, pitiless indifference” that Richard Dawkins claimed it to be. In our experience, when something is this precisely calibrated, it usually points back to intelligent agency—some kind of purpose at work. Given that philosophical naturalism rejects any pre-universe intelligent agent, its adherents would logically expect a universe exclusively explained by self-referential physical laws, without the need for fine-tuning. Yet these laws do not explain their own contingent features, such as why their constants are so precisely set in the narrow range required for a universe capable of permitting life.

To be clear, science isn’t bound to naturalism. Naturalism is a philosophy, not a scientific rule, and it shouldn’t monopolise how we interpret evidence. Science should follow the data to the metaphysical view that best explains it. So far, our observations lead us to understand that systems showing such fine-tuning are usually the result of intelligence. Naturalism, denying any intelligence predating the universe, would seem unable to account for an entity capable of influencing/explaining the observed fine-tuning.

Naturalistic Explanations for the Fine-Tuning

“He who knows only his own side of the case knows little of that.”
— John Stuart Mill

We’ve all done it—listened to someone’s argument only to brush it off too quickly, or worse, shrink it down into a flimsy caricature just so we can knock it over. It’s a human problem: the rush to judgment, the easy win of a straw man, the comfort of feeling certain without doing the hard work of thinking through the “other side.” But a good detective doesn’t jump at the first suspect, and a skilled chess player doesn’t lunge at the board with impulsive moves.

The smarter move, the harder move, is to let alternative perspectives collide, to test our conclusions against rivals and see what survives the clash. That’s how arguments sharpen, and that’s how ideas earn their strength.

So, before wrapping ourselves in the cosy blanket of a theistic interpretation of fine-tuning, it’s our duty to explore every nook and cranny of naturalistic explanations. This part of the book delves into which perspective, theistic or naturalistic, stands up to the rigours of analysis and provides the most likely explanation for the marvel of our finely-tuned universe.

Physicist Paul Davies has marvelled, “the really amazing thing is not that life on earth is balanced on a knife-edge, but that the entire universe is balanced on a knife-edge, and would be total chaos if any of the natural ‘constants’ were off even slightly.”[vii] Stephen Hawking, in relation to the fine-tuning of cosmological constants, observed, “the remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”[viii] Slight variations in the strength of any of these constants or their ratios would preclude the possibility of life. Martin Rees, an emeritus professor of cosmology and astrophysics at the University of Cambridge, aptly encapsulated the situation: “Nature does exhibit remarkable coincidences.”[ix]

But coincidences? Really? Given the mind‑boggling odds against a universe stable enough to allow for life, waving it away as “just coincidence” feels intellectually lazy. There has to be an explanation, and that very challenge has pushed thinkers to put forward a range of naturalistic alternatives—ideas that are worth taking a closer look at.

The Weak Anthropic Principle

In 1974, physicist Brandon Carter introduced what he called the “weak anthropic principle” (WAP). In short, it says we shouldn’t be surprised to find ourselves in a universe fine-tuned for life, as only a fine-tuned universe could produce conscious observers like us. While this explanation acknowledges the fine-tuning, it downplays the question of why these precise constants exist in the first place.

Think of it this way: Imagine you’re a fish in a small pond, surrounded by the very water that keeps you alive. One day, you start wondering, why is there water here at all? According to a fishy version of the principle (call it the “weak ichthyic principle”), the answer would be simple: you shouldn’t be surprised there’s water, because if there weren’t, you wouldn’t exist to think about it. Your very existence is taken to “explain” why the water is there.

Of course, it’s true that we shouldn’t be shocked to find ourselves in a universe that supports life, after all, here we are, alive. But isn’t it strange that the conditions necessary for life are so exceedingly improbable? For instance, consider the scenario of a blindfolded man who miraculously survives an execution by a firing squad of one hundred expert marksmen. The fact that he is alive is indeed consistent with the marksmen’s failure to hit him, but it does not account for why they missed in the first place. The prisoner ought to be astonished by his survival, given the marksmen’s exceptional skills and the minuscule probability of all of them missing if they intended to kill him.
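
To see why the prisoner should still be astonished, put some purely illustrative numbers on the analogy (the figures below are my own, chosen only to make the point):

\[
P(\text{all 100 miss}) = p_{\text{miss}}^{100} = (0.01)^{100} = 10^{-200}
\]

assuming each expert marksman independently misses only one shot in a hundred. Survival is consistent with that outcome, but consistency is not explanation; odds that small demand a reason, such as the marksmen intending to miss.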

This is where the weak anthropic principle falters. It mistakes a necessary condition, that we only ever observe a life-permitting universe, for an explanation of why such a universe exists at all. Saying “we exist, therefore the conditions must allow it” is just a tautology dressed up as insight. And in doing so, its advocates quietly sidestep the real question, which is not “Why do we see a universe compatible with life?” but “What caused the universe to be fine-tuned in the first place?”

The Strong Anthropic Principle

The Strong Anthropic Principle (SAP) is a bolder version of the Weak Anthropic Principle (WAP). Where the WAP just says, “Of course we find ourselves in a universe that allows life, otherwise we wouldn’t be here to notice,” the SAP ups the stakes. It doesn’t just say the universe happens to allow life, but that it must be structured to produce intelligent observers like us.

Astrophysicist John D. Barrow and mathematical physicist Frank J. Tipler first laid this out in their 1986 book The Anthropic Cosmological Principle. Their argument is that the universe’s fundamental properties aren’t randomly dialled in but are somehow constrained or determined by the requirement that conscious beings must eventually emerge. In this sense, asking “Why is the universe fine-tuned to allow for life?” is like asking “Why does 2 + 2 equal 4?” The answer isn’t that it’s a lucky break; it’s that it couldn’t be otherwise. 

In a sense, I agree with the SAP—it’s just stating the obvious. It acknowledges the observation of fine-tuning and its implications for the emergence of life but does not itself explain the underlying reasons for this fine-tuning.

The step beyond SAP draws from quantum mechanics (QM), specifically John Wheeler’s revolutionary concept of a “participatory universe.” Wheeler was one of the key scientists who advanced QM and helped revolutionise our understanding of the field. Wheeler, who coined the phrase “it from bit,” argued that information and measurement are fundamental to reality’s very fabric. His “Participatory Anthropic Principle” (PAP) suggests that the universe requires observers not just to understand it, but to actually complete it—making observation a retroactive, creative act that helps determine what has “always” existed.

Wheeler’s famous delayed-choice experiments demonstrated that the decision of how to measure a particle can seemingly influence its past behaviour, suggesting that the boundary between past and future, between “what happened” and “what will happen,” may be more fluid than classical physics assumes. Over his life, Wheeler went back and forth on his interpretation, but generally implied that conscious observers don’t just discover reality, they participate in its creation through their choices of what questions to ask and what measurements to make.

Now, it’s important to note that the PAP takes a stronger position on the role of conscious observation than many physicists today would happily support. In quantum physics, “observation” generally refers to any physical interaction that causes decoherence (the environment “destroying” quantum superpositions), rather than necessarily requiring conscious awareness. It’s fair to say that the role of consciousness in QM remains unclear. While decoherence can help explain quantum effects without explicitly invoking consciousness, this doesn’t fully explain away why conscious awareness seems to play such a distinctive role: why our attention to a system, or the questions we choose to ask, appears to influence what we discover from a quantum state. Perhaps consciousness isn’t merely unnecessary but rather represents a different, more fundamental mode of engagement with reality. I’ll leave this debate for another day.

Either way, these interpretations endeavour to explain not only the existence of conditions necessary for observation, but also the underlying cause or design that allows the observer to significantly impact the experiment’s outcome. Proponents of PAP argue that, akin to an electron’s specific location being contingent on observation, the universe itself might depend on an observer for its existence. Thus, this extension of the strong anthropic principle goes something like this:

  1. The universe must have those properties which allow life to develop within it at some stage in its history.
  2. There exists one possible universe ‘designed’ with the goal of generating and sustaining ‘observers’.
  3. And observers are necessary to bring the universe into being.

There are a few issues with this proposal. The first is what’s often called the “grandfather paradox.” Picture it: you travel back in time to when your grandfather was still young, before he ever had kids. For some reason, your actions prevent him from starting a family. If he never had children, your parent would never be born, and neither would you. But if you never existed, how could you have travelled back in time to interfere in the first place? The loop eats itself.

This kind of circular causality shows up again in the PAP. If the existence of conscious observers (the effect) depends on a finely tuned universe (the cause), but at the same time the universe’s fine-tuning (the effect) is explained by the existence of observers (the cause), then we’re stuck in a circular cause-and-effect relationship.

How can our observation of the universe explain its fine-tuning, if the fine-tuning occurred billions of years before we were here? Even if we accept the possibility of two entities continually causing each other in an eternal cycle, this doesn’t clarify why such a looping system exists in the first place.

Furthermore, the PAP reasoning suggests consciousness exists in a dualistic relationship with material reality, hinting at a non-materialistic view of existence. This challenges the naturalistic framework.

Even when we look at the strange world of QM, where observers can seem to play a role, the usual cause-and-effect order still seems to hold. For example, when an observation causes a quantum wave function to collapse, the cause (the detection event) precedes the effect (the collapse of the wave function). Even in the famous delayed-choice experiment, I don’t think the results show that our measurement actually changes the particle’s past. I suspect the “delayed choice” simply determines what information we extract from a quantum state that was already indefinite until measured. What looks like a “retroactive” effect is probably our classical intuition misleading us. The weirdness lies in how we describe it, not in time itself running backwards. So, to claim that consciousness triggers the existence of a finely tuned universe implies the existence of some sort of consciousness that predates our spacetime.

The Multiverse — String Theory and Inflationary Models

Another way of looking at our finely tuned universe is through the multiverse hypothesis. It flips the story, turning what looks like an improbable fluke into something you might actually expect in a sort of endless cosmic lottery.

The idea is simple enough: Instead of a single universe whose fine‑tuning hints at theistic implications, some theorists suggest there isn’t just one universe, but an almost infinite number of them. In a multiverse like that, it’s not so surprising that at least one of them, like ours, happens to have just the right conditions for life. It’s like rolling dice over and over again; eventually, you’re bound to hit a lucky roll. Our universe isn’t special because of a “higher intelligence”, it’s just one of many outcomes in a vast multiverse. Advocates often describe us as the winners of a cosmic jackpot. They compare the universe-generating process to a slot machine, where each ‘spin’ produces a new universe. While most of these universes do not support life, some, like ours, do.

Physicists have even sketched out two major models for the potential origin of new universes. The first model, proposed by Andrei Linde, Alan Guth, and Paul Steinhardt, is based on inflationary cosmology (we talked about this earlier in the book). The second model is rooted in string theory. Both models were initially created to address specific challenges in physics, but they were later adapted to offer multiverse explanations for the fine-tuning observed in our universe.

Inflationary Multiverse Model

The story of the eternal chaotic inflation model begins with Alan Guth’s 1981 proposal that right after the Big Bang (within the unimaginably tiny window of 10⁻³⁶ to 10⁻³² seconds), a brief episode of accelerated expansion occurred that could solve the horizon, flatness, and monopole problems associated with the original Big Bang theory. His original “old inflation” scenario, however, struggled to reheat smoothly after bubble formation. Within a couple of years, refinements such as “new inflation” and then “chaotic inflation” emerged. By allowing a scalar field to slowly roll down a potential from high initial values, these models provided smoother exits from inflation.

A decisive step came in 1986, when Andrei Linde showed that large‑scale quantum fluctuations of the inflaton could drive an infinite process of self‑reproduction. While some regions rolled down the potential and reheated, others were kicked to higher field values, ensuring that globally the inflating volume never died out. As Linde put it, the universe can be an “eternally existing, self‑reproducing inflationary” spacetime comprising innumerable bubble‑like regions, each with its own post‑inflationary history.

In simpler terms, the inflaton is like a ball rolling downhill; as it rolls, inflation ends in that patch and its energy turns into a hot Big Bang state. Quantum “jitters” add random nudges that can sometimes push the ball a bit uphill in some Hubble‑sized regions. If, over one Hubble time, the typical quantum nudge is bigger than the steady downhill step, those regions keep inflating and their volume grows faster than inflation can end elsewhere. With suitable potentials, this makes the total amount of inflating space keep increasing forever to the future, even though the process is not past‑eternal.
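
In slightly more formal terms, the “quantum nudge beats the downhill step” condition is usually written as follows (a standard, model-dependent rule of thumb added here for illustration, not a claim from any source cited in this chapter):

\[
\delta\phi_{\text{quantum}} \simeq \frac{H}{2\pi} \;\gtrsim\; \delta\phi_{\text{classical}} \simeq \frac{|\dot{\phi}|}{H}
\]

where H is the Hubble rate during inflation and \(\dot{\phi}\) is the inflaton’s classical rate of change, so \(|\dot{\phi}|/H\) is the downhill roll over one Hubble time. When the fluctuation term dominates, enough regions are kicked back uphill that the inflating volume keeps growing without end.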

Where inflation does end, that region reheats and evolves like a standard hot Big Bang universe—a “pocket” or “bubble.” These pockets are separated by still‑inflating space that stretches so quickly that light from one can never reach another, so they cannot communicate or influence each other. Altogether this gives a multiverse‑like picture: many causally disconnected regions produced by the same self‑reproducing inflationary process.

This can all get rather complicated. In terms of the fine-tuning, the core argument is that if there are countless universes out there, then everything that could possibly happen will happen somewhere. Even the most unlikely scenarios become inevitable when you have infinite chances to roll the cosmic dice. This idea dovetails with the anthropic principle: the fact that our universe seems finely tuned to allow for life is not surprising in a multiverse. Out of all the possibilities, it’s only in a life-friendly universe like ours that we would find ourselves asking these questions.

The String Theory Model

String theory is a complex concept that offers an alternative explanation for the fine-tuning of the laws and constants of physics, though it wasn’t originally intended for this. In the late 1960s it was a curious model of the strong force; only later did physicists realise its mathematics naturally includes a quantum version of gravity. That pivot, along with breakthroughs in the 1980s and 1990s, turned string theory into a possible candidate for a unified picture of nature.

It’s a difficult theory to wrap your head around, but at its heart is a simple idea with mind-stretching consequences: the building blocks of the universe aren’t tiny point-like particles, but unimaginably small, vibrating strings of energy. These strings can vibrate in different patterns in many more dimensions than we can perceive, forming both “open” and “closed” strings. The way they vibrate determines what we observe as particles. A particular vibrational pattern looks to us like an electron, another like a photon, and so on. Different vibrational states of strings are believed to be responsible for the various fundamental particles, including those that carry all four fundamental forces of physics: electromagnetic, weak, strong, and gravitational forces.

But there’s a twist: for string theory to make sense mathematically, our universe needs more than the familiar three dimensions of space and one of time. It requires six or seven extra spatial dimensions, hidden from view because they’re “compactified,” curled up into tiny topological structures smaller than about 10^-35 of a metre, the scale physicists call the Planck length. This is the scale at which quantum gravitational phenomena are expected to occur. String theorists envision that within these minuscule structures, energy strings vibrate in the six or seven extra spatial dimensions. The variations in these vibrations give rise to the particle-like phenomena we observe in our familiar three dimensions of space.
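
For reference, the Planck length mentioned above is defined from the fundamental constants (a standard definition, quoted here for context):

\[
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m}
\]

where ħ is the reduced Planck constant, G is Newton’s gravitational constant, and c is the speed of light.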

One outcome of this theory is the proposed existence of “gravitons,” which are understood as massless, closed strings that transmit gravitational forces across long distances at the speed of light. At its core, string theory is a quantum-scale, particle physics-based theory that aims to unify all fundamental forces, including gravity.

Initially, string theory only described force-carrying particles called bosons (like photons or gluons). But it didn’t account for matter, i.e. the stuff that makes up you, me, and everything else. To fix this, physicists introduced supersymmetry, a principle that pairs every force-carrying particle (boson) with a matter particle (fermion), and vice versa. This addition not only allowed string theory to describe both matter and forces but also reduced the number of required dimensions from 26 to 10 (nine spatial dimensions plus time).

Here’s where things get even more mind-bending. When physicists worked through string theory’s equations, they didn’t find just one unique solution reflecting the physics of our universe. Instead, they revealed countless solutions, each representing a different possible physical reality. Initially, physicists viewed this surplus of solutions as an embarrassment, a glaring flaw in the model. But some string theorists, with an innovative twist, turned this perceived vice into a virtue. They suggested that each of the vast number of possible ways to compactify these extra dimensions leads to a different vacuum state, or “solution,” of the string theory equations, resulting in a different set of physical laws and constants. Each solution represents a different universe, each with unique physical laws and constants. The shape of the folded spaces associated with each solution determines the laws of physics in the observable spatial dimensions, with the number of flux lines determining the constants of physics.

In essence, string theory, with its vibrating strings in additional dimensions and principles like supersymmetry, predicts a vast “landscape” of possible configurations—an astonishing 10^500 to 10^1,000 different ways the extra dimensions can be curled up, each giving rise to its own set of physical laws and constants. Cosmologists propose that during eternal inflation, random quantum fluctuations cause space to stop inflating in separate regions, forming distinct “bubbles.” When each bubble settles, it selects one configuration from string theory’s enormous landscape, becoming a self-contained universe with its own unique physics. Because these bubbles form independently within the still-inflating background space, this mechanism could supposedly generate a true multiverse where our observable cosmos represents just one bubble among a huge range of possible worlds.

One God vs Many Universes

Let’s take a moment to think about this: Do cosmological models like inflationary cosmology or string theory really explain the fine-tuning of the universe’s laws, constants, and initial conditions? Are these theories any better at accounting for fine-tuning than the possibility of a purposeful, mindful source behind it all?

It’s important to remember that these theories are still hot topics for debate and research in theoretical physics. Many physicists see the multiverse hypothesis more as speculative metaphysics than as a solid scientific theory. They argue that because we can’t observe or measure other universes or a divine creator, choosing between these two ideas often comes down to personal preference. They claim there aren’t strong enough reasons to favour one hypothesis over the other.

But I’m not convinced. Hypotheses, whether scientific or metaphysical, can and should be compared by their explanatory power. To me, when we set these ideas side by side, the hypothesis of purposeful consciousness offers a cleaner and more convincing account than its speculative counterpart.

Let’s start with the obvious: when competing hypotheses try to account for the same evidence, we should favour the one that makes the fewest assumptions. Simpler explanations are usually more helpful starting points. As Oxford philosopher Richard Swinburne has argued:

“It is the height of irrationality to postulate an infinite number of universes never causally connected with each other, merely to avoid the hypothesis of theism. Given that… a theory is simpler the fewer entities it postulated, it is far simpler to postulate God than an infinite number of universes, each different from each other.”[x]

Secondly, it’s crucial to understand that neither inflationary cosmology nor string theory fully tackles the fine-tuning conundrum. To address two types of fine-tuning, a multiverse solution requires the acceptance of two different universe-generating mechanisms. While inflationary cosmology could theoretically account for the fine-tuning of the universe’s initial conditions, it falls short of explaining the origin of the fine-tuning of the laws and constants of physics. As I understand it, this is because the inflaton field operates consistently with the same laws of physics across its expansive space. As it spawns new bubble universes, these offshoots retain the same laws and constants, with only the configurations of mass-energy being novel.

On the other hand, string theory could potentially clarify the fine-tuning of the laws and constants of physics, but in most models, it fails to generate multiple sets of initial conditions corresponding to each choice of physical laws. This implies that to conceive a multiverse theory capable of addressing both types of fine-tuning, physicists must speculate on two distinct types of universe-generating mechanisms working in tandem, one rooted in string theory and the other in inflationary cosmology. This has led many theoretical physicists to adopt a hybrid multiverse model called the “inflationary string landscape model.” While this approach could theoretically explain the fine-tuning phenomena, it introduces what philosophers call a “bloated ontology,” postulating a vast number of purely speculative and abstract entities for which we lack direct evidence.

The inflationary string landscape model combines complex assumptions about a multitude of hypothetical entities, abstract assumptions, and unobservable processes. String theory has yet to make any testable predictions that can be verified by experiment. The reliance on unseen extra dimensions and the lack of concrete evidence makes this model particularly challenging.

By contrast, the idea of a creative intelligence feels like a simpler, more intuitive starting point, even if it’s hardly a final answer. At best, it offers a workable foundation for further thought. After all, if the universe was fine-tuned by an ultimate mind, that only raises more probing questions, like how and why?

Here’s the point: the more reasonable move is to favour the explanation that best fits what we already know. Our everyday experience shows us what intelligent agents are capable of: they design intricate, goal-directed systems. Think of Swiss watches, gourmet recipes, integrated circuits, or novels. Fine‑tuning a system toward a purpose is exactly what intelligence does. So positing a “Supermind” (for lack of a better term) behind the fine‑tuning of the universe isn’t a wild leap; it’s a natural extension of what we already understand about intelligent causation. On the other hand, the mechanisms proposed by various multiverse theories lack a comparable basis in our experiential knowledge. We have no experiential reference for universe-generating processes as described by these theories.

Additionally, to account for the fine-tuning observed in our universe, multiverse theories derived from inflationary cosmology and string theory suggest mechanisms that themselves require fine-tuning. Essentially, even if a multiverse could potentially justify the fine-tuning of our universe, it would still need an underlying mechanism to create these multiple universes. This mechanism would also need its own explanation for fine-tuning, thus pushing the problem further up the causal chain. So, it remains unclear whether multiverse theories can really address the issue of fine-tuning without invoking some prior form of fine-tuning.

At a minimum, string theory requires the delicate fine-tuning of initial conditions, as evidenced by the scarcity of the highest energy solutions (approximately 1 part in 10^500) within the array of possible solutions or compactifications of universes. Similarly, inflationary cosmology demands more fine-tuning than it was designed to explain. Theoretical physicists Sean Carroll and Heywood Tam have shown that the fine-tuning associated with chosen inflationary models is roughly 1 part in 10^66,000,000, further complicating the problem it intended to solve.

It should also be noted that scientists are sharply divided over multiverse theories. Several eminent physicists, including Sir Roger Penrose, Avi Loeb, and Paul Steinhardt, have dismissed multiverse inflationary cosmology. Penrose criticises it for its fine-tuning problems. Loeb challenges the theory’s lack of falsifiability, arguing that it cannot be empirically tested or verified, thereby questioning its scientific validity. Steinhardt, originally a proponent of the inflationary model, now disputes its predictability and testability, concerned that its adaptability to various observations makes it unfalsifiable and thus unscientific.

Furthermore, string theory predicts “supersymmetry” as a vital component for unifying the fundamental forces of physics. However, extensive experiments at the Large Hadron Collider have yet to find these supersymmetric particles. Coupled with other failed predictions and the embarrassment of an infinite number of string theory solutions, scepticism about string theory has been growing among many leading physicists. As Nobel Prize-winning physicist Gerard ‘t Hooft once remarked:

“I would not even be prepared to call string theory a ‘theory,’ rather a ‘model,’ or not even that: just a hunch. After all, a theory should come with instructions on how… to identify the things one wishes to describe, in our case, the elementary particles, and one should, at least in principle, be able to formulate the rules for calculating the properties of these particles, and how to make new predictions for them. Imagine that I give you a chair, while explaining that the legs are missing, and that the seat, back, and armrests will be delivered soon. Whatever I gave you, can I still call it a chair?”[xi]

To me, leaning on the concept of multiple universes to dodge any reasonable explanation resembling a God-like cause feels a bit like metaphysical special pleading. Theoretical physicist John Polkinghorne, a colleague of Stephen Hawking and the former president of Queens’ College, Cambridge, is widely respected for his distinguished scholarship and for decades of work in high-energy physics. In his book The Quantum World, he argues that the intricate and intelligible nature of our universe is not adequately explained by random processes of chance. With reference to the multiverse proposition, he argues:

“Let us recognise these speculations for what they are. They are not physics, but, in the strictest sense, metaphysics. There is no purely scientific reason to believe in an infinite ensemble of universes… A possible explanation of equal intellectual respectability – and to my mind, greater elegance – would be that this one world is the way it is because it is the creation of the will of a Creator who proposes that it should be so.”

In the context of discussing the inherent factors within this universe, particularly with reference to quantum theory, Dr. Polkinghorne, during a seminar at Cambridge, wittily remarked, “there is no free lunch. Somebody has to pay, and only God has the resources to put in what was needed to get what we’ve got.”

Godly Intrusion

Why is the multiverse often considered the best explanation for cosmological fine-tuning, despite its many drawbacks? The key might be found in a statement by theoretical physicist Bernard Carr: “to the hard-line physicist, the multiverse may not be entirely respectable, but it is at least preferable to invoking a Creator.”[xii]

Now that’s telling. For many, the idea of a Creator is ruled out from the start, not because of scientific reasoning, but because of an entrenched commitment to a naturalistic worldview. Naturalism (or materialism) has become a straitjacket for science, hindering scientists from following or even recognising promising leads.

Consider the irony of the multiverse argument. To sidestep the consideration of a cause that may be associated with God, some propose—well—other universes. Universes you and I can’t see, can’t touch, can’t test, can’t measure or validate scientifically. And yet, aren’t those the exact same reasons people laugh God out of the room?

Physicists have come up with several theories to explain the fine-tuning of the universe without involving a higher intelligence. However, these proposals either fail to account for fine-tuning (as with the weak and strong anthropic principles) or they resort to explaining it by surreptitiously invoking other sources or prior unexplained fine-tuning. Yet, the fine-tuning in the universe has precisely those traits—extreme improbability and functional specificity—that instinctively and consistently lead us to infer the presence of a purposeful intelligence based on our uniform and repeated experiences. With this in mind, it seems reasonable to at least consider this as a worthwhile explanation for the fine-tuning of the laws and constants of physics and the initial conditions of the universe.


[i] Hoyle, F., ‘The Nature of the Universe’ (1960).

[ii] Penrose, R., ‘The Emperor’s New Mind’, 341-344.

[iii] Josephson, B., interview by Robert Lawrence Kuhn for the PBS series ‘Closer to Truth’.

[iv] Margenau, H., interview at Yale University, March 2, 1986.

[v] Greenstein, G., ‘The Symbiotic Universe’, 27.

[vi] Hoyle, F., ‘The Universe’.

[vii] “The Anthropic Principle,” Horizon, BBC, Season 23, Episode 17, May 18, 1987.

[viii] Hawking, S., ‘A Brief History of Time’, 26.

[ix] Rees, M., ‘Just Six Numbers’, 22.

[x] Swinburne, R., ‘Science and Religion in Dialogue’ (2010), 230.

[xi] ’t Hooft, G., ‘In Search of the Ultimate Building Blocks’, 163-164.

[xii] Carr, B., “Introduction and Overview”, 16.
