Winning the Universal Lottery: God, the Multiverse and Fine Tuning

“Astronomy leads us to a unique event, a universe which was created out of nothing, one with the very delicate balance needed to provide exactly the right conditions required to permit life, and one which has an underlying (one might say ‘supernatural’) plan.”
— Arno Penzias, Physics Nobel Prize winner

A Life-Permitting Universe

In the world of astrophysics, Sir Fred Hoyle is a name that stands out. Early in his career he was a committed atheist, convinced that the universe offered no evidence for God. He wrote that “religion is but a desperate attempt to find an escape from the truly dreadful situation in which we find ourselves… No wonder then that many people feel the need for some belief that gives them a sense of security, and no wonder that they become very angry with people like me who think that this is illusory.”[i] His atheism significantly influenced his scientific outlook, predisposing him to dismiss the notion that the universe had a beginning.

(This is an excerpt from “Does the Universe Paint God Out of the Picture?” by Luke Baxendale; it is part four of the book.)

But while studying how elements form inside stars, Hoyle stumbled upon something that would shake his worldview to its core. In particular, he encountered what physicists now call “fine-tuning,” the striking fact that the laws, constants, and starting conditions of the universe sit in a surprisingly restricted life-permitting window. If those basic parameters drift even slightly, the universe loses the ability to build long-lived stars, stable chemistry, and the complex structures life depends on.

For Hoyle, carbon brought the point into sharp focus. He saw that producing carbon, the backbone of life’s chemistry, depends on an extraordinary alignment of factors: the strengths of the fundamental forces, the masses of key particles, and the energy levels inside stellar furnaces all have to fall within a breathtakingly narrow range. Change any one of these values by the tiniest fraction, and carbon atoms simply couldn’t form. No carbon meant no complex chemistry. No complex chemistry meant no life.

Carbon was just the beginning. Since the 1950s, physicists have uncovered dozens of these “cosmic coincidences,” each one revealing our universe as far more improbable than anyone imagined. The Goldilocks principle (not too hot, not too cold, but just right) applies not just to planetary conditions but to the fundamental fabric of reality itself. Gravity can’t be too strong or too weak. The electromagnetic force must balance perfectly against nuclear forces. The initial expansion rate of the universe had to fall within an impossibly narrow range. Even the distribution of matter and energy at the Big Bang required exquisite calibration.

To grasp how extreme these requirements are, imagine firing a bullet toward the far side of the observable universe, roughly 46 billion light-years away, and hitting a one-inch target dead centre. That’s approximately the level of precision needed for just one of these cosmic parameters. Now consider that the universe depends on multiple such parameters, each independently calibrated to similar accuracy.

The numbers tell the story. While some examples of fine-tuning are subject to dispute and debates surrounding probability calculations, several well-established instances of fine-tuning are widely accepted by most scientists:

  • Gravitational constant: 1 part in 10³⁴
  • Electromagnetic force versus the force of gravity: 1 part in 10³⁷
  • Cosmological constant: 1 part in 10⁹⁰
  • The mass density of the universe: 1 part in 10⁵⁹
  • The expansion rate of the universe: 1 part in 10⁵⁵

Physicists estimate that somewhere between 20 and 30 such constants require this kind of precision, though the exact number depends on which factors you include. What makes these numbers even more puzzling is that physics offers no underlying reason why they must have these specific values. The fundamental laws of nature don’t require the gravitational constant to be what it is. Mathematics doesn’t dictate the strength of electromagnetism. These values appear arbitrary from a theoretical standpoint, yet they converge with laser-like precision on the exact combination needed for life.
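
As a purely illustrative back-of-the-envelope exercise, here is a short Python sketch of how those tolerances would compound if, and this is a simplifying assumption rather than settled physics, the parameters were treated as independent, so that their probabilities multiply (which is the same as adding their exponents). The figures are simply the estimates quoted above.

    import math

    # Rough illustration only: treats the quoted tolerances as independent
    # probabilities, which is a simplifying assumption, not settled physics.
    tolerances = {
        "gravitational constant": 1e-34,
        "electromagnetism vs gravity": 1e-37,
        "cosmological constant": 1e-90,
        "mass density of the universe": 1e-59,
        "expansion rate of the universe": 1e-55,
    }

    # Multiplying independent probabilities corresponds to adding their exponents.
    combined_exponent = sum(math.log10(p) for p in tolerances.values())
    print(f"Combined odds: roughly 1 in 10^{abs(combined_exponent):.0f}")
    # Prints: Combined odds: roughly 1 in 10^275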

Physicist Paul Davies marvelled, “the really amazing thing is not that life on earth is balanced on a knife-edge, but that the entire universe is balanced on a knife-edge, and would be total chaos if any of the natural ‘constants’ were off even slightly.”[ii] Hawking, commenting on the apparent sensitivity of cosmological parameters, likewise observed, “the remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”[iii]

This realisation transformed Hoyle. The man who once dismissed religion as desperate escapism became convinced that some intelligent force had deliberately set these cosmic dials. His change of heart was so profound that he later wrote, “a common-sense interpretation of the facts suggests that a super-intellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.”[iv] 

Hoyle’s discovery opened a door that other scientists couldn’t help but walk through. If the universe’s fundamental constants showed such improbable precision, what about the initial conditions at its birth? The celebrated mathematician Sir Roger Penrose decided to investigate exactly this question. He examined the initial entropy of the universe, essentially asking how matter and energy were arranged at the very first instant. What he calculated was a degree of precision that made even Hoyle’s findings look crude by comparison.

Initial-Entropy Fine-Tuning

To understand Penrose’s work, let’s talk about entropy. You might vaguely remember hearing about it back in high school physics. Put simply, entropy is the rather sobering concept that nothing lasts forever. Everything we create eventually breaks. Everyone we cherish will one day pass away. Any semblance of order or stability we manage to build is destined to unravel over time. On a cosmic scale, the entire universe moves relentlessly toward a state of ultimate disorder.

Physics captures this tendency in the second law of thermodynamics, one of nature’s most stubborn rules. It states that the total entropy of the universe increases over time. In other words, the universe tends toward greater disorder, while order remains fragile and temporary. One consequence is that no machine can be perfectly efficient: every real process dissipates energy, reducing the energy available for future work. Ultimately, any structure that arises in the universe, from a star blazing in space to a living organism metabolising food, persists only by dispersing energy into the wider cosmos.

The story of entropy began around 200 years ago with French engineer Sadi Carnot, who, in 1824, explored the limits of heat engine efficiency and discovered that some energy is always lost as heat during energy conversion. Later, German physicist Rudolf Clausius formalised these ideas, laying the groundwork for the second law of thermodynamics. In 1865, Clausius coined the term “entropy,” from the Greek entropē (meaning “transformation”), to describe energy’s natural tendency to degrade and become less available for work. He also noted that in any spontaneous process, entropy never decreases. Austrian physicist Ludwig Boltzmann then linked entropy to the microscopic behaviour of particles. He showed that entropy isn’t just about wasted energy, it’s about the vast number of ways a system’s components (its “microstates”) can arrange themselves to produce the same overall appearance (its “macrostate”). Boltzmann revealed that disorder is statistically favoured because there are far more disordered arrangements than ordered ones.
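
The two results sketched above can be written down compactly; both are standard textbook formulas, restated here only to make the prose concrete. Carnot’s limit says that no heat engine working between a hot reservoir at temperature T_hot and a cold one at T_cold can beat the efficiency

    η = 1 − T_cold / T_hot

and Boltzmann’s insight ties the entropy S of a macrostate to the number of microstates W that can produce it:

    S = k_B · ln W

where k_B is Boltzmann’s constant. Because a disordered macrostate is compatible with astronomically more microstates than an ordered one, high entropy is simply the overwhelmingly more probable condition.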

So far, so grim, and so simple. But now the universe starts behaving like a riddle. If entropy tends to increase, if the cosmic arrow points from order toward disorder, how did the universe ever produce the intricate structures we see: galaxies, stars, planets, chemistry? For such structure to emerge, the early universe had to begin in an exceptionally special state, with mass and energy arranged in a remarkably smooth, precise way.

Consider black holes: they represent high-entropy states. Within a black hole, the conventional concepts of space and matter break down. This entropy doesn’t imply chaos in the usual sense; instead, it represents the vast number of ways matter and energy can be organised at the black hole’s event horizon, its outer boundary. If the universe had started with high entropy, matter would likely either be too evenly dispersed or end up trapped within black holes, thus hindering the formation of galaxies and stars.

And that’s the crux: our universe appears to have begun with extraordinarily low entropy, especially in its gravitational degrees of freedom. That low-entropy beginning does not cancel the second law. It makes the second law possible as a meaningful arrow of time. It also sets up Penrose’s central question: why was the universe’s initial state so improbably special in the first place?

As a mathematician, Penrose sought to determine the probability of our universe having this precise arrangement. He understood that by answering this question, he could gauge the fine-tuning of the initial arrangement of matter and energy at the beginning of the universe. Using principles from thermodynamics, general relativity, and cosmology, he analysed the gravitational degrees of freedom related to how matter and energy were distributed. He compared the phase space volume corresponding to the observed state against all possible configurations. Considering the range of potential entropy values for the early universe, he calculated that the likelihood of a universe possessing initial conditions conducive to life is 1 in 10^(10^123).[v] That is 10 raised to the power of 10¹²³: a 1 followed by 10¹²³ zeros, where the exponent alone is a 124-digit number. That’s a big number.

For comparison, physicists estimate there are about 10⁸⁰ elementary particles in the entire observable universe, which is practically nothing compared to Penrose’s figure. Even if all matter in the universe were converted into paper, this would still be insufficient to print the number of zeros required to express this probability as a percentage. In other words, among the unimaginably vast number of potential configurations at the universe’s beginning, only a minute fraction would lead to a cosmos resembling ours where life is possible.
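
A tiny sketch (in Python, purely illustrative) makes the paper-and-ink point vivid: writing the probability out in full would require 10¹²³ zeros, while the observable universe supplies only about 10⁸⁰ particles to write them on.

    # Comparing magnitudes via exponents, since the numbers themselves are
    # far too large to write down, let alone store.
    zeros_needed_exponent = 123   # writing out 10^(10^123) takes 10^123 zeros
    particles_exponent = 80       # roughly 10^80 particles in the observable universe

    shortfall = zeros_needed_exponent - particles_exponent
    print(f"Zeros needed exceed available particles by a factor of about 10^{shortfall}")
    # Prints: Zeros needed exceed available particles by a factor of about 10^43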

Theistic Proposal for Fine-Tuning

How do we make sense of these extraordinary “coincidences”, these royal flushes turning up hand after hand?

The obvious answer, though controversial, is that maybe they aren’t coincidences at all. Perhaps the relevant conditions were arranged, or at least nudged, by some form of purposeful intelligence. That’s the core of the “intelligent design” argument in the fine-tuning debate.

On the other hand, could a purely naturalistic explanation suffice? Is the fine-tuning, as Richard Dawkins has framed the issue, “just what we would expect” if “at bottom there was no purpose, no design… nothing but blind pitiless indifference?” Can we identify what physicist Sean Carroll calls a coherent series of explanations “which ultimately reaches to the fundamental laws of nature and stops,” requiring no guiding intelligence at all?

The question deserves a careful look at what we’re actually dealing with. The fine-tuning of our universe isn’t just unusual, it displays three specific characteristics that make it particularly significant.

  1. First, the fundamental constants and initial conditions are contingent, meaning they could have been radically different. The mass of a proton or the expansion rate of the universe could have taken vastly different values from what they actually are.
  2. Second, these values are also extraordinarily improbable, balanced to within functionally infinitesimal tolerances.
  3. Third, they’re independently specifiable, corresponding precisely to the conditions necessary for life.

That trio, contingency plus improbability plus functional specificity, is the kind of pattern that routinely prompts us to infer an intelligent source in ordinary life. A Shakespeare sonnet could theoretically emerge from random keystrokes, but the odds are so vanishingly small and the outcome so precisely functional that we never seriously entertain that possibility. Philosophers sometimes call this recognition pattern the “design filter”.
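
A quick, admittedly crude calculation shows how fast those odds vanish; the keyboard size and sonnet length below are assumptions chosen only to fix ideas.

    import math

    # Back-of-the-envelope odds of producing one specific sonnet by random typing.
    keys = 30       # simplified keyboard: letters, space, a little punctuation (assumed)
    length = 600    # a Shakespearean sonnet runs to roughly 600 characters (assumed)

    exponent = length * math.log10(keys)
    print(f"Odds of hitting one specific sonnet: about 1 in 10^{exponent:.0f}")
    # Prints: Odds of hitting one specific sonnet: about 1 in 10^886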

Think about something as ordinary as baking a cake. You need specific ingredients in precise amounts: flour, sugar, eggs, baking powder, mixed in the right way, baked at the right temperature for just the right time. Too much sugar and it’s cloyingly inedible. Too hot an oven and it burns. Too little baking powder and it stays flat. The margin for error is thin, and the successful outcome depends on getting dozens of variables exactly right. We never imagine that a perfect cake assembled itself through random processes. In our experience, such precise outcomes from a recipe generally arise from the deliberate actions of a baker, who carefully measures and mixes ingredients to achieve a specific result.

The fine-tuning discussion claims something like that sensitivity, but at a far more extreme scale. The universe’s structure depends on dozens of fundamental constants and parameters that could theoretically adopt vastly different values. Yet each one sits in the infinitesimally narrow range necessary for a universe capable of supporting life. Tweak the strong nuclear force slightly and atomic nuclei can’t form. Adjust the cosmological constant minutely and the universe either collapses immediately or expands too rapidly for galaxies to coalesce. Alter the ratio of the electromagnetic force to gravity and stars can’t sustain nuclear fusion. The list goes on.

Mathematician William Dembski has formalised this underlying intuition: when physical systems exhibit a highly improbable combination of factors embodying significant functional requirements, they invariably originate from mindful sources rather than undirected material processes. This accords with our uniform experience across every domain we can test.

That straightforward, intuitive structure is not a weakness. We’re recognising a pattern that shows up consistently everywhere else: extreme improbability combined with functional precision reliably points to intelligent agency.

The logical structure builds naturally from here. If an intelligent agent acted to design the universe, we would expect to observe (a) discernible functional outcomes, such as living organisms, dependent on (b) finely tuned or highly improbable conditions, parameters, or configurations of matter. And that’s precisely what we do see in the laws and constants of physics and the initial conditions of the universe. The expectation matches the observation, giving us reason to believe that an intelligent agent might have acted to structure the universe.

And we’re not talking about just any intelligence. Whatever “force” could account for this fine-tuning would need the power to set these parameters and fix the universe’s initial conditions at the very moment of creation. That rules out explanations like advanced extraterrestrials who arose within the universe after it began. On this line of reasoning, the explanation would point beyond the material cosmos, to an intelligence not confined by spacetime in the usual way. A theistic account fits that requirement reasonably well: it posits an “ultimate mind” capable of originating a universe in time and establishing its life-permitting structure from the outset.

This isn’t speculation from the fringes. Cambridge physicist and Nobel laureate Brian Josephson has put his confidence in “intelligent design” as the optimal explanation for the conditions that enable evolution at “about 80%.”[vi] The late Henry Margenau, professor of quantum physics at Yale, stated flatly that “there is a mind which is responsible for the laws of nature and the existence of nature and the whole universe. And this is consistent with everything we know.”[vii]

Even physicist George Greenstein, an outspoken atheist, admitted that despite his materialistic inclinations, “the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that, suddenly, without intending to, we have stumbled upon scientific proof for the existence of a supreme being? Was it a God who providentially stepped in and crafted the cosmos for our benefit?”[viii] Richard Dawkins himself, during a discussion with Francis Collins on the Premier Unbelievable podcast, acknowledged that fine-tuning presents an intriguing case, though he doesn’t endorse the argument himself.

The idea of an intelligent source has genuine scientific grounding that shouldn’t be dismissed out of hand. Philosophical naturalism, which rejects any pre-universe intelligent agent, would logically predict a universe explained exclusively by self-referential physical laws, with no need for fine-tuning. Yet we don’t find that universe. The degree of cosmological fine-tuning is difficult to square with random, undirected processes alone. In ordinary experience, when we encounter extreme improbability coupled with functional specificity, we reliably infer intelligent agency—purpose at work. Every other time we encounter this pattern, anywhere else in our experience, that’s the inference we draw. The question is whether we’re willing to follow that same reasoning when it comes to the universe itself.

Naturalistic Explanations for the Fine-Tuning

“He who knows only his own side of the case knows little of that.”
— John Stuart Mill

We all tend to dismiss opposing views too quickly. Sometimes we reduce them to weak caricatures just to make them easier to defeat. It’s tempting to reach for certainty without doing the hard work of genuinely engaging with alternatives. But good detectives don’t arrest the first suspect they meet, and skilled chess players don’t make hasty moves. The wiser path is to let competing ideas clash, test your conclusions against serious rivals, and see what emerges intact.

That’s the spirit we need as we turn from theistic interpretations of fine-tuning to examine naturalistic alternatives. Which perspective makes the most sense of the evidence we actually have, and which fits most naturally with the results of modern science?

As Martin Rees, emeritus professor of cosmology and astrophysics at Cambridge, puts it: “Nature does exhibit remarkable coincidences.”[ix] But can we really stop there? Calling it coincidence feels like intellectual surrender. The apparent improbabilities are so extreme that many thinkers have gone looking for deeper, non-theistic mechanisms—ways the right parameters might arise without invoking a mind-like cause. Let’s consider them carefully.

The Weak Anthropic Principle

In 1974, physicist Brandon Carter introduced what he called the “weak anthropic principle” (WAP). In short, it says we shouldn’t be surprised to find ourselves in a universe fine-tuned for life, as only a fine-tuned universe could produce conscious observers like us. While this explanation acknowledges the fine-tuning, it downplays the question of why these precise constants exist in the first place.

Think of it this way: Imagine you’re a fish in a small pond, surrounded by the very water that keeps you alive. One day, you start wondering, why is there water here at all? According to a fishy version of the principle (call it the “weak ichthyic principle”), the answer would be simple: you shouldn’t be surprised there’s water, because if there weren’t, you wouldn’t exist to think about it. Your very existence “proves” the water must be there.

Of course, it’s true that we shouldn’t be shocked to find ourselves in a universe that supports life, after all, here we are, alive. But isn’t it strange that the conditions necessary for life are so exceedingly improbable? For instance, consider the scenario of a blindfolded man who miraculously survives an execution by a firing squad of one hundred expert marksmen. The fact that he is alive is indeed consistent with the marksmen’s failure to hit him, but it does not account for why they missed in the first place. The prisoner ought to be astonished by his survival, given the marksmen’s exceptional skills and the minuscule probability of all of them missing if they intended to kill him.
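
To put a rough number on that intuition, suppose, purely for illustration, that each expert marksman independently missed with a probability of 1 in 100. The chance of all one hundred missing would then be

    (1/100)¹⁰⁰ = 10⁻²⁰⁰

which is exactly why the prisoner is entitled to look for an explanation, perhaps that the squad intended to miss, rather than shrugging and saying “well, here I am.”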

This is where the weak anthropic principle falters. It mistakes a necessary condition, that we only ever observe a life-permitting universe, for an explanation of why such a universe exists at all. Saying “we exist, therefore the conditions must allow it” is just a tautology dressed up as insight. And in doing so, its advocates quietly sidestep the real question, which is not “Why do we see a universe compatible with life?” but “What caused the universe to be fine-tuned in the first place?”

The Strong Anthropic Principle

The Strong Anthropic Principle (SAP) is a bolder version of the Weak Anthropic Principle. Where the WAP simply observes that we find ourselves in a universe that allows life (otherwise we wouldn’t be here to notice), the SAP goes further. It claims the universe must be structured to produce intelligent observers like us.

Astrophysicist John D. Barrow and mathematical physicist Frank J. Tipler first articulated this position in their 1986 book The Anthropic Cosmological Principle. Their argument is that the universe’s fundamental properties are not randomly dialled in but are somehow constrained or determined by the requirement that conscious beings must eventually emerge. In this sense, asking “Why is the universe fine-tuned to allow for life?” is like asking “Why does 2 + 2 equal 4?” The answer isn’t that it’s a lucky break; it’s that it couldn’t be otherwise.

But again, this merely restates the observation without providing an explanation. Yes, the universe exhibits fine-tuning, and yes, that fine-tuning allows for the emergence of life. But what accounts for this arrangement in the first place? The SAP elevates the correlation between cosmic conditions and conscious observers into a kind of necessity without telling us why such necessity exists.

Some advocates have pushed anthropic reasoning further by appealing to quantum mechanics (QM), and a key name here is physicist John Archibald Wheeler. Wheeler helped shape twentieth-century physics and, late in his career, argued for an “observer-participatory” picture in which information is not just something we learn about the world but something deeply tied to how physical reality shows up for us. In his well-known slogan “it from bit,” Wheeler proposed that what we call physical “things” derive their ultimate significance from answers to yes–no questions registered by measuring devices, so that, in that sense, reality is inseparable from acts of measurement.

It’s worth noting that Wheeler’s language sometimes sounds as if mind is central, but he did not consistently defend a consciousness-causes-collapse view, and later he explicitly leaned toward ‘registered’ phenomena rather than human awareness. In contemporary quantum physics, ‘observation’ is typically treated as physical measurement and environmental interaction (we call this “decoherence”), not as a special act of consciousness. I think it’s fair to say that the role of consciousness in QM remains unclear. Still, Wheeler’s vision was provocative, and his experiments pushed the boundaries of how we think about measurement.

Wheeler’s delayed‑choice idea is designed to make this dependence on measurement feel unavoidable. The basic message is that quantum theory does not let you tell a single, fully classical story about “what the particle was doing” that is independent of the measurement you later choose to perform. For Wheeler, that is a clue that the questions we put to nature, and the particular measurements that register answers, are not just passive reporting tools but are part of how physical reality presents itself as a definite history.

That is the stepping stone to the Participatory Anthropic Principle. In this stronger, explicitly anthropic form, the claim is not merely that the universe must be compatible with observers (the usual anthropic thought), but that observers are in some sense necessary for the universe to be “brought into being” as a universe of definite, registered facts. The argument goes something like this:

  1. The universe must have properties that allow life to develop at some stage in its history.
  2. There exists one possible universe designed with the goal of generating and sustaining observers.
  3. And those observers are necessary to bring the universe into being.

However, two clarifications are worth making before we lean too hard on the Participatory Anthropic Principle. One is a logical worry about causal loops; the other is a more modest reading of what quantum “delayed choice” actually shows.

Start with the loop, or what we might call the “grandfather paradox”: Imagine you travel back in time to when your grandfather was still young, before he ever had children. Your actions somehow prevent him from starting a family. If he never had children, your parent would never be born, and neither would you. But if you never existed, how could you have travelled back in time to interfere in the first place? The loop consumes itself.

A similar circularity shows up in participatory-anthropic talk. If the existence of conscious observers depends on a finely tuned universe, but at the same time the universe’s fine-tuning is explained by the existence of those observers, we’re caught in a circular relationship where cause and effect chase each other’s tails. How can our observation of the universe explain its fine-tuning if the fine-tuning occurred billions of years before we existed? Even if we accept the possibility of two entities continually causing each other in an eternal cycle, this doesn’t clarify why such a looping system exists in the first place.

The second issue is what quantum measurement is actually doing in the argument. When an observation collapses a quantum wave function, the cause (the detection of the photon) still precedes the effect (the collapse of the wave function). In Wheeler’s delayed-choice experiment, the results do not show that our measurement reaches back and changes the particle’s past. The “delayed choice” simply determines what information we extract from a quantum state that was already indefinite until measured. What looks like a retroactive effect is likely our classical intuition misleading us. The weirdness lies in how we describe quantum phenomena, not in time itself running backwards.

And that matters for where the participatory argument ends up. If someone insists that consciousness itself is what brings about the relevant “registration,” then pushing the Participatory Anthropic Principle all the way back to the existence of the early universe starts to suggest some form of observerhood that is not just a late product of cosmic evolution. Whether intended or not, that begins to look like a dualistic or at least non-materialist picture, in which “mind” (or mind-like observerhood) is not merely an arrangement of matter inside spacetime but something more fundamental than spacetime.

The Multiverse — String Theory and Inflationary Models

Another way of looking at our finely tuned universe is through the multiverse hypothesis, which flips the entire conversation on its head. What looks like an improbable cosmic fluke becomes something you’d actually expect if you’re willing to expand your thinking beyond a single universe.

The logic is straightforward: Instead of one universe whose fine-tuning points toward design, imagine an almost infinite number of universes. In that scenario, it’s hardly surprising that at least one of them happened to land on the right conditions required for life. Our universe would be that one. Think of it like a cosmic slot machine, where each spin produces a new universe with different physical constants. Most universes come up empty, incapable of supporting even basic chemistry. But keep spinning long enough, and eventually you’ll hit the jackpot: a universe where stars can form, atoms can bond, and life is possible. Our universe isn’t special because of divine intention; we’re just the lucky winners who happen to exist.

Physicists have taken this idea seriously enough to propose actual mechanisms for how it might work. Andrei Linde, Alan Guth, and Paul Steinhardt built one model from inflationary cosmology, the same rapid expansion we explored earlier. String theorists developed another. Neither theory was originally designed to explain fine-tuning; both emerged from attempts to solve unrelated problems in physics. Yet they’ve given the multiverse hypothesis something it desperately needed: mathematical credibility and a foundation in established physics rather than philosophical speculation alone.

Inflationary Multiverse Model

This story begins with Alan Guth’s 1981 proposal: within the unimaginably tiny window of 10⁻³⁶ to 10⁻³² seconds after the Big Bang, a brief episode of accelerated expansion occurred that could solve the horizon, flatness, and monopole problems plaguing the original Big Bang theory. His “old inflation” scenario struggled with the details of how inflation ended, but refinements quickly emerged. “New inflation” and then “chaotic inflation” allowed a scalar field (the inflaton) to slowly roll down a potential from high initial values, providing smoother transitions into the hot Big Bang universe we observe today. In these early models, inflation simply ended everywhere, leaving a single expanding universe.

A decisive step came in 1986 when Andrei Linde theorised that large‑scale quantum fluctuations of the inflaton could drive an infinite process of self‑reproduction. While some regions of the inflaton field rolled down the potential and reheated, others were randomly kicked to higher values by quantum jitters. This ensured that globally, the inflating volume never stopped growing. The universe became what Linde called an “eternally existing, self-reproducing inflationary” spacetime, one containing innumerable bubble-like regions, each with its own cosmic history.

To help understand the mechanism, imagine the inflaton as a ball rolling downhill. As it rolls, inflation ends in that patch and its energy converts into the hot matter and radiation of a standard Big Bang. But QM adds random nudges that can occasionally push the ball slightly uphill in some regions. If the typical quantum nudge over one Hubble time exceeds the steady downhill drift, those regions keep inflating and their volume grows faster than inflation ends elsewhere. With the right potential shape, the total amount of inflating space increases forever into the future, though the process still likely had a beginning in the finite past.
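
For readers who want the comparison written out, the standard slow-roll bookkeeping runs as follows (quoted here only to make the “nudge versus drift” picture concrete). In one Hubble time the field drifts downhill by roughly

    Δφ_classical ≈ V′ / (3H²)

while the typical quantum kick over the same interval is

    δφ_quantum ≈ H / (2π)

Self-reproduction takes over wherever the quantum kick exceeds the classical drift, that is, wherever the potential is flat enough (V′ small) and the expansion rate H large enough for random jumps to outpace the downhill roll.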

Wherever inflation does end, that region reheats and evolves into a standard hot Big Bang universe (a “pocket” or “bubble”). These pockets are separated by still-inflating space that stretches so rapidly that light from one bubble can never reach another. They remain forever isolated, unable to communicate or influence each other. The result is a multiverse: countless separate regions, each potentially with different physical properties, all born from the same self-reproducing inflationary process.

Still with me?

This framework offers a response to fine-tuning puzzles. If inflation produces many universes (or “bubble” regions) with different properties, then our universe’s fine-tuning may not point to special design at all. Instead, it could be a selection effect. We find ourselves in this particular bubble simply because its conditions permit observers to arise.

The String Theory Model

Another multiverse proposal grows out of string theory, an ambitious framework that’s also notoriously hard to picture.

The story begins with a happy accident. In the late 1960s, physicists developed string theory to explain the strong force binding atomic nuclei together. Then they noticed something unexpected: the mathematics naturally included gravity at the quantum scale, something that had eluded Einstein and every physicist since. That discovery transformed string theory into physics’ most ambitious attempt at a unified picture of nature.

To understand string theory you need to change the way you think about reality. Instead of imagining particles as tiny points, string theory proposes they might be minuscule vibrating strings of energy. Think of a guitar string: pluck it one way and you hear an A, pluck it differently and you get a C. These fundamental strings would work similarly, except each vibration pattern produces a different particle. One pattern manifests as an electron, another as a photon, and so on. The strings could be open (with two endpoints) or closed loops, and together their various vibrations would account for every particle we know, including those carrying all four fundamental forces.

The mathematics, however, demands something counterintuitive. Our universe can’t just have the familiar three spatial dimensions plus time. String theory requires six or seven extra spatial dimensions hidden from our perception. The reason we don’t experience these extra dimensions is because they’re compactified, curled up so incredibly small that they’d measure less than 10⁻³⁵ metres. That’s the Planck length, a scale so tiny that a single atom is to a human as a human is to the entire observable universe, and then some. At this scale, space itself would behave quantum mechanically, and within these minuscule folded dimensions, strings would vibrate in patterns that create the particles we observe in everyday space.

This framework predicts that gravitons should exist as massless, closed strings transmitting gravity across cosmic distances at light speed. It would be the first quantum description of gravity that actually works mathematically.

But early versions had an obvious flaw. They only described force-carrying particles like photons and gluons, not matter itself. No electrons, no quarks, nothing to build atoms from. Physicists addressed this by introducing supersymmetry, a principle that pairs every force-carrying particle with a matter particle and vice versa. This addition not only allowed the theory to describe both forces and matter, but it also reduced the required dimensions from a ridiculous 26 down to 10.

By the mid-1990s, physicists faced a puzzle: they’d discovered five different consistent versions of superstring theory, each mathematically valid but seemingly distinct. Then Edward Witten showed they were actually different perspectives on a single underlying framework he called M-theory, which required 11 dimensions. This unification strengthened confidence that string theory might genuinely describe reality.

Then physicists hit an unexpected problem. Working through the equations, they didn’t find one unique solution describing our universe. They found countless solutions, each representing a different possible reality. At first, this looked like a disaster. What good is a theory of everything if it predicts nothing specific?

Some theorists, however, turned this vice into a virtue by invoking the anthropic principle. They argued that each way to compactify those extra dimensions creates a different “vacuum state” with its own unique physical laws and constants. The precise geometry of the folded space would determine which laws apply in the observable dimensions. The number of flux lines threading through those tiny structures would set the values of constants like the strength of gravity or the mass of an electron. If vastly many configurations exist, we shouldn’t be surprised to find ourselves in one of the rare configurations that permits observers to exist.

This realisation revealed what physicists call the string landscape: somewhere between 10⁵⁰⁰ and 10¹⁰⁰⁰ possible configurations, each generating different physics.
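
Where does a number like 10⁵⁰⁰ come from? The usual back-of-the-envelope reasoning, offered here as a rough illustration rather than a derivation, is combinatorial: a typical compactified geometry has on the order of several hundred independent cycles through which flux can thread, and each cycle can carry one of roughly ten discrete flux values. Ten choices on each of five hundred cycles gives about

    10⁵⁰⁰ distinct configurations

each corresponding to a different effective set of laws and constants.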

Cosmologists have connected this to eternal inflation. In that framework, quantum fluctuations randomly cause inflation to stop in separate regions, forming distinct “bubbles” of space. When each bubble settles down, it would select one configuration from the string landscape, locking in a particular set of physical laws and constants. Because new bubbles would constantly form in the still-inflating background, this mechanism could generate a true multiverse where our observable cosmos represents just one bubble among a huge number of possible worlds.

One God vs Many Universes

Let’s take a moment to think about this. Do cosmological models like inflationary cosmology or string theory really explain the fine-tuning of the universe’s laws, constants, and initial conditions, or do they function mainly as elaborate ways to avoid the possibility of a purposeful, mindful source behind it all? I am not claiming the latter is proven, but I do think it is the more compelling explanation once we compare the options by explanatory power.

It is worth noting at the outset that these theories remain hot topics for debate and research in theoretical physics. Many physicists see the multiverse hypothesis as closer to speculative metaphysics than solid science, and they point out that we cannot directly observe or measure either other universes or a divine creator. From that angle, it can seem as if choosing between multiverse and theism comes down to personal preference, because neither is straightforwardly testable in the usual way.

Even so, that “they are equally speculative” move misses something important. Hypotheses, whether scientific or metaphysical, can and should be compared by their explanatory power, and that includes at least four questions: Which proposal makes fewer assumptions? Which one actually addresses the full range of what needs explaining? Which one avoids simply relocating fine-tuning to a deeper mechanism? And which one stays closest to the normal evidential and predictive ideals of physics?

Start with simplicity. When competing hypotheses aim to account for the same phenomenon, it makes sense to favour the one that makes the fewest assumptions, at least as a starting point. As Oxford philosopher Richard Swinburne has argued:

“It is the height of irrationality to postulate an infinite number of universes never causally connected with each other, merely to avoid the hypothesis of theism. Given that… a theory is simpler the fewer entities it postulated, it is far simpler to postulate God than an infinite number of universes, each different from each other.”[x]

Simplicity, however, is not the only issue. There is also a basic explanatory mismatch in the way multiverse discussions are often presented, because “fine-tuning” is doing double duty.

On the one hand, there is fine-tuning of initial conditions: the highly particular starting state of our universe. On the other hand, there is fine-tuning of the laws and constants of physics themselves: the structure that governs what any universe can be like.

Once you separate those two targets, an awkward point becomes harder to ignore. Inflationary cosmology could, in principle, speak to the calibration of initial conditions by generating many “tries,” but it does not explain the origin of the finely tuned laws and constants of physics. The inflaton field operates under the same laws across its expansive space, so even if it spawns new bubble universes, those offshoots retain the same laws and constants, with novelty appearing mainly in the configurations of mass-energy.

String theory, by contrast, is often presented as a way of explaining why laws and constants take the values they do. Yet in most models, it does not also generate multiple sets of initial conditions corresponding to each choice of physical laws.

So, if you want a multiverse account that addresses both categories of fine-tuning, you need two different universe-generating mechanisms working in tandem. One would be rooted in inflationary cosmology, another in a string-theoretic landscape. This is part of what motivates hybrid proposals such as the “inflationary string landscape model.” But the price is what philosophers call a “bloated ontology,” namely a vast inventory of speculative entities and unobservable processes introduced to do explanatory work.

Moreover, the concern sharpens when we ask whether the multiverse machinery itself avoids fine-tuning, or merely postpones it. Even if a multiverse could potentially justify the fine-tuning of our universe, surely it still needs an underlying mechanism to generate many universes, and that mechanism also demands explanation. If the “generator” requires delicate set-up, then the problem is not solved so much as moved further up the causal chain.

Furthermore, scientists are sharply divided over multiverse theories. Several eminent physicists, including Sir Roger Penrose, Avi Loeb, and Paul Steinhardt, have dismissed multiverse inflationary cosmology. Penrose criticises it for its own fine-tuning problems, Loeb challenges its lack of falsifiability, and Steinhardt now disputes its predictability and testability, concerned that its adaptability to almost any observation makes it effectively irrefutable, and therefore unscientific.

Related doubts surround string theory as well. Supersymmetry is a vital component of many string-based unification programmes, and yet extensive experiments at the Large Hadron Collider have not found the predicted supersymmetric particles. Coupled with other unmet expectations and the embarrassment of an effectively unbounded number of solutions, this has fed growing scepticism about string theory among many leading physicists. As Nobel Prize-winning physicist Gerard ’t Hooft once remarked:

“I would not even be prepared to call string theory a ‘theory,’ rather a ‘model,’ or not even that: just a hunch. After all, a theory should come with instructions on how… to identify the things one wishes to describe, in our case, the elementary particles, and one should, at least in principle, be able to formulate the rules for calculating the properties of these particles, and how to make new predictions for them. Imagine that I give you a chair, while explaining that the legs are missing, and that the seat, back, and armrests will be delivered soon. Whatever I gave you, can I still call it a chair?”[xi]

So my point is not that multiverse proposals are foolish, or that their proponents are insincere, but that they can start to resemble metaphysics that is insulated from evidential pressure. If we are going to allow speculative moves, then we should still ask which speculative move best explains what we are trying to explain, with the fewest auxiliary assumptions, and with the least tendency to export the fine-tuning problem to the machinery behind the scenes.

By contrast, the idea of a creative, purposeful source offers a simpler and more intuitive starting point. Even if it raises further questions about how and why, it fits an aspect of reality we already understand from experience. Our everyday experience shows us what intelligent agents are capable of. They design intricate, goal-directed systems like Swiss watches, gourmet recipes, integrated circuits, and novels. Calibrating a system toward a specific purpose is exactly what we as intelligent people do. Positing a “Supermind” (for lack of a better term) behind the universe’s precise adjustment isn’t a wild leap; it’s a natural extension of what we already understand about intelligent causation. The mechanisms proposed by multiverse theories, by contrast, lack any comparable basis in our experiential knowledge. We have no reference point for universe-generating processes as these theories describe them.

In that spirit, John Polkinghorne, a colleague of Hawking and former president of Queens’ College, Cambridge, argues in his book The Quantum World that the intricate and intelligible nature of our universe is not adequately explained by random processes of chance. With reference to the multiverse proposition, he writes:

“Let us recognise these speculations for what they are. They are not physics, but, in the strictest sense, metaphysics. There is no purely scientific reason to believe in an infinite ensemble of universes… A possible explanation of equal intellectual respectability – and to my mind, greater elegance – would be that this one world is the way it is because it is the creation of the will of a Creator who proposes that it should be so.”

And, in the context of discussing the inherent factors within this universe, particularly with reference to quantum theory, Dr Polkinghorne once remarked during a seminar at Cambridge, “there is no free lunch. Somebody has to pay, and only God has the resources to put in what was needed to get what we’ve got.”

Godly Intrusion

Why is the multiverse often considered the best explanation for cosmological fine-tuning, despite its many drawbacks? The key might be found in a statement by theoretical physicist Bernard Carr: “to the hard-line physicist, the multiverse may not be entirely respectable, but it is at least preferable to invoking a Creator.”[xii]

That preference isn’t scientific—it’s philosophical. For many physicists, a Creator is eliminated before the evidence is even examined, not through careful reasoning but through an unwavering commitment to naturalism. This worldview has become a blind spot, causing scientists to dismiss promising explanations simply because they point beyond the material.

Consider the irony. To avoid considering a cause that may be associated with God, some propose countless unseen universes. Universes you and I can’t see, can’t touch, can’t test, can’t measure or validate scientifically. And yet, aren’t those the exact same reasons people laugh God out of the room?

We’ve seen how alternative explanations fall short. The weak anthropic principle merely restates the problem. The strong anthropic principle smuggles in the very design it claims to avoid. Multiverse theories either fail to remove fine-tuning or simply relocate it to a deeper, unexplained level. Each dodge creates new mysteries while ultimately solving none.

Meanwhile, fine-tuning in the universe has the distinctive marks that consistently indicate purposeful intelligence: extreme improbability and functional specificity. This pattern recognition isn’t arbitrary—it’s how we distinguish design from chance in forensics, archaeology, and information theory. When the same signatures appear in cosmic parameters, dismissing the design inference requires a willingness to apply different standards to the universe than we apply to everything within it.


[i] Hoyle, F., The Nature of the Universe (1960).

[ii] “The Anthropic Principle,” Horizon, BBC, Season 23, Episode 17, 18 May 1987.

[iii] Hawking, S., A Brief History of Time, 26.

[iv] Hoyle, F., “The Universe: Past and Present Reflections,” Annual Review of Astronomy and Astrophysics 20 (1982): 16.

[v] Penrose, R., The Emperor’s New Mind, 341–344.

[vi] Josephson, B., interview by Robert Lawrence Kuhn for the PBS series Closer to Truth.

[vii] Margenau, H., interview at Yale University, 2 March 1986.

[viii] Greenstein, G., The Symbiotic Universe, 27.

[ix] Rees, M., Just Six Numbers, 22.

[x] Swinburne, R., Science and Religion in Dialogue (2010), 230.

[xi] ’t Hooft, G., In Search of the Ultimate Building Blocks, 163–164.

[xii] Carr, B., “Introduction and Overview,” 16.
