How did life first emerge?
There must have been a time in the past when life appeared out of lifeless, inert matter, but how? Without the aid of some grand mind, is that even possible? Scientists commonly call the pre-biological phase of evolution ‘chemical evolution’. This is the period in which the very first living things came into being, and this monumental dawning of life occurred through the unguided interplay of natural forces acting on matter over long spans of time, or so we are told.
The theory of biochemical evolution (Abiogenesis)
If we take a naturalistic view, which assumes that everything arises from natural properties and causes without the intervention of any greater mind, then the theory of chemical evolution is really a theory of spontaneous generation. The claim is that life first emerged through purely natural processes, without any intelligent intervention or creative act. No vital force played any part, meaning that God is unnecessary as an explanation; life is merely a complicated property of matter. So we are told that non-living molecules, by themselves, through nothing more than the laws of physics and chemistry and the random distribution of molecules and chemical reactions, came together to form the building blocks of life and eventually life itself. This idea is known as abiogenesis.
The modern theory of chemical evolution envisions that the atmosphere of the early Earth contained a mixture of gases (but no free oxygen), creating a reducing atmosphere hospitable to organic molecules. Not only this, but the surface of the young Earth had cooled enough to allow various organic molecules to survive. There were also various energy sources on Earth, such as lightning, geothermal heat, shock waves and ultraviolet light from the sun, which drove reactions in the atmosphere and ocean to form a wide variety of organic molecules. For example, in the upper zones of this primitive atmosphere, ultraviolet light would irradiate the reducing gases to form amino acids and other compounds. At lower altitudes, the same organic compounds would be formed by reactions driven by the energy of electrical storms and thunder shockwaves. This all meant that nature had a way of causing useful chemical reactions (synthesis) to occur on Earth, producing the basic building blocks of life. Eventually, through a series of natural processes, nature happened to accumulate the organic material created by these reactions into a thick, concentrated prebiotic soup somewhere on Earth.
In this prebiotic soup, the conditions were supposedly now right for the development of protocells. Protocells were not ‘true’ cells, but coherent systems with a retaining membrane and sufficient functional capacity to survive for an interim period. Over time, their internal complexity increased until the additional characteristics of living cells emerged. Then, when the nucleic acids – life’s hereditary molecules – became sufficiently developed, DNA arose and took control of the processes. Finally, life itself gained its critical first foothold.
If only it were that simple.
Time to test
That is the idea; now it’s time to test it. We can do this by simulating the conditions of the early Earth in a laboratory, and then assume our results will be similar to what actually occurred on the prebiotic Earth. Of course, it’s no easy task to accurately reproduce the conditions of the prebiotic Earth, so there will always be a degree of uncertainty.
In 1974 Stanley Miller gave an account of “the first laboratory synthesis of organic compounds under primitive Earth conditions.” Miller had devised a laboratory experiment that sent an electrical discharge through a chemical mixture; as long as oxygen was excluded from the mixture, the products included some amino acids and other organic compounds, simulating how the building blocks of life may have formed on the early Earth. In 1983, all five nucleic acid bases found in DNA and RNA were reported to have formed in a single simulated primitive-atmosphere experiment. Since then the experiment has been reproduced using different energy sources that may have been present in prebiotic times:
- Heat experiments (also known as thermal synthesis) replace the spark apparatus used by Miller with a furnace to simulate the heat energy produced by volcanic activity;
- Ultraviolet experiments replace the spark apparatus with ultraviolet sources to expose the “primitive” gases to short-wavelength ultraviolet radiation. This is to simulate the effect of solar ultraviolet radiation which would have been present on the primitive Earth;
- Shock wave experiments subject the gases to a high temperature for a fraction of a second followed by rapid cooling, to evaluate the effect of shock waves caused by thunder.
Each of these experiments produced amino acids. Although the energy sources differ in their effects, some being more useful in creating organic compounds than others, in general these experiments support the widely held belief that a variety of energy sources may have been responsible for the buildup of essential biological precursor chemicals in the primitive oceans. These laboratory results have been the basis for much optimism concerning the early stages of chemical evolution, assuring scientists that the primitive ocean was full of organic compounds which nature could filter, accumulate and concentrate into a prebiotic soup, from which the origin of the first living things could be reconstructed.
This leads us to the second step: nature constructing the first living thing from the organic material in the prebiotic soup. Here we should not expect any meaningful results within laboratory time; millions of years of simulation might be required for any detectable progress. Somehow we must speed up the process so that we can compress what happens over a long time span into manageable laboratory time, yet without distorting what actually happened.
There is a technique for simulating the extended time factor which allows us to single out individual reaction processes in our simulated ‘prebiotic soup’ and follow their progress. It consists of carefully selecting and purifying the chemicals conceived to have been the probable precursors of life in the ‘prebiotic soup’; after isolating, purifying, and concentrating these chemicals, the next stage can be simulated by reacting the amino acids together to form polymers. After a similar round of isolating, purifying, and concentrating these polypeptides, a third experiment can simulate the next stage to see what is produced. By following this procedure it is hoped that, given the right experimental conditions and prebiotic simulation techniques, a living entity will eventually be produced.
Unfortunately for the firm believers of chemical evolution, this second step has faced tremendous difficulties and obstacles, which we shall touch on later.
A critique of chemical evolution
Any good theory should withstand the fires of criticism and be better for it. So it would be an injustice to leave the theory of chemical evolution out. Several kinds of difficulty have persisted for the chemical evolution theory of life’s origin.
1. The prebiotic soup
The assumption of a prebiotic soup is a vital component of chemical evolution; after all, it’s where it all starts. But the fact of the matter is that in the atmosphere and in the ocean, dilution processes would dominate, making the concentration of essential ingredients far too sparse for chemical evolution rates to be significant.
Additionally, while energy is seen as the means by which molecules can be organized into more complex arrangements, energy alone may not be sufficient to increase the complexity of a system. In fact, it is likely that the energy effects on the early Earth would have been very much like the proverbial bull in a china shop. This predominantly destructive feature of unbridled solar and other forms of energy is a serious difficulty for the chemical soup theory of life’s origin, mainly because one energy source destroys what another source produces. Since these energy sources are generally more effective in destruction than in synthesis, this makes the destruction of complexity more likely than the emergence of complexity. Experiments have not so far taken this into consideration but have just isolated and controlled each energy source. This is unrealistic.
For example, all prebiotic heat, electrical discharge, and ultraviolet experiments use traps. Traps function as a removal process that shields the products from destruction by the very energy source which produced them. The problem is that while one energy source is used to make organic molecules, that same energy source readily destroys them. Scientists can arrange this protection in the laboratory, but how would nature do it? It is not clear at all.
So based on the evidence, it would seem that (1) many destructive interactions would have vastly diminished, if not altogether consumed, essential precursor chemicals so that chemical evolution rates would have been negligible on the primitive Earth. (2) The supposed ‘soup’ would have been too dilute for direct polymerization to occur. Even local ponds for concentrating soup ingredients would have met with the same problem. (3) Furthermore, no geological evidence indicates that an organic soup, even a small organic pond, ever existed on this planet.
We are left with the myth of the prebiotic soup.
2. The early earth’s atmosphere
The theory of chemical evolution rests on the assumption that there was no oxygen present on the early Earth, since oxygen rapidly decomposes the organic compounds necessary for chemical evolution. If even a trace of molecular oxygen were present, organic molecules could not form at all, because as soon as conditions become oxidizing, organic synthesis effectively ‘turns off’. This is why all the simulation experiments exclude oxygen: none of the essential molecules of life, e.g. amino acids, could ever have formed naturally under oxidizing conditions, and if by some chance they had, they would have decomposed quickly.
With oxygen, chemical evolution would be impossible.
The fear of chemical evolutionists is that there is an increasing amount of evidence that earth’s primitive atmosphere did contain free oxygen. This is partly due to the evidence of oxidizing minerals, calculations suggesting the presence of oxygen produced by the breaking down of chemical compounds in water, as well as other clues. Although no precise conclusions can be made concerning the levels of oxygen in earth’s early atmosphere, these results are quite intriguing. The primitive atmosphere experiments should be re-assessed in the light of evidence that the early earth and its atmosphere were probably less friendly (a less ‘reducing’ atmosphere), or even destructive (oxidizing).
The other assumption about the early Earth’s atmosphere is that it included methane and an excess of hydrogen. Only by including methane in the simulation experiments could significant amounts of amino acids be produced. But there is solid geochemical evidence indicating that the primitive atmosphere did not contain a significant amount of methane. The evidence implies that when life first appeared, the Earth did not have a hydrogen-rich methane-ammonia atmosphere. The experiments therefore do not mimic the early Earth’s atmosphere, and so do not provide an adequate explanation of how life may have arisen.
3. Interference in the experiments
Miller’s classic experiment, along with those that followed, all attempted to demonstrate how life may have spontaneously arisen on Earth, but in fact these experiments have been interfered with, controlled and guided by intelligent people; hardly spontaneous at all.
For example, experiments that simulated chemical evolution using ultraviolet light as an energy source have excluded long-wavelength UV because it is destructive. However, the primitive Earth would have been irradiated by the full solar spectrum. We can’t just pick and choose short-wavelength UV because that’s ‘preferred’ and claim the results as evidence of how a process proceeded under different circumstances. There is no known natural filter that would justify the use of selected wavelengths. Likewise, we find other issues using heat, electrical discharge and shock waves as an energy source for simulation experiments. We don’t find local high temperature (>150 degrees Celsius) regions on earth except for geologically brief periods of time. Volcanoes, fumaroles, steam spouts, etc. have been recognized as heat energy sources, but they are generally too far apart geographically, and do not last over significant enough times for effective synthesis to occur. Electrical discharge experiments have attempted to simulate lightning on the early earth, but actual lightning is much too hot for effective synthesis, immediately destroying any products.
Practically all simulated ocean experiments reported in scientific literature have been based on the assumption that if two or three chemicals react when isolated from the soup mixture, they will also react in the same way in the presence of the diverse chemicals in the soup. This assumption is generally false. It is false because it overlooks the synergism of multiple reactions, the ‘Concerto Effect’. A mixture has a characteristic behaviour of its own; it is not the simple sum of the individual components. Soup mixture reactions do not equal the sum of the individual isolated reactions as there are destructive interactions in the soup. This is not taken into consideration in the experiments.
These are but a few examples, but in summary, the Miller-Urey experiment and those that followed fell far short of replicating a mindless chemical evolutionary process or the relevant conditions likely to have held sway on the early Earth: from the reducing atmosphere used in the experiments, to the need for just the right amount of energy, to the careful isolation of the tender chemicals from unfavourable interfering cross-reactions, to the protective environment in which the reactions took place; the list goes on. The investigator has played a highly significant but illegitimate role in experimental success. For each of the experimental techniques, the investigator has established the experimental constraints, imposing intelligent influence upon a supposedly ‘prebiotic Earth’. Without human intervention, the experiments don’t work at all.
The need for intelligent supervision of the development of the conditions required for life to emerge in experiments is ironic considering the naturalistic stance of many in this scientific field.
4. Nature cannot successfully assemble the chemicals required for life
Assuming the early Earth was hospitable to organic molecules (which is unlikely), and assuming there was a prebiotic soup of life’s basic building blocks (also unlikely), you then have the next stage of evolving and assembling the organic material into the very first life. This is the process of moving from chemistry to biology. But without a biologically derived entity acting upon them, molecules have never been shown to “evolve” toward life. Never. The fact of the matter is that molecules don’t care about life. They don’t deliberately move towards life. While organisms exploit chemistry for their own ends, chemicals have never been seen to assemble themselves into an organism. No researcher has ever seen molecules assemble into anything even remotely resembling a living cell without pre-existing life. Scientists have no data to support molecular “evolution” leading to life. The synthetic organic chemist James Tour has mentioned a few examples of how clueless we really are on this:
- Molecules that make up living systems always show what is called ‘homochirality’. Each class of compound (the carbohydrates, the amino acids, the lipids and so on) requires its own methodology to control its specific regiochemistry and stereochemistry. Given that experts have repeatedly failed to replicate anything even remotely close to this, how does a mindless prebiotic soup manage it?
- To build life you need to select the right chemicals, in the right order, for the right part of the structure. But in a prebiotic soup there are no chemical selectors to choose the molecule types to go forward with. For argument’s sake, say there were some selectors in a prebiotic soup. They would generally need to be more complex than the molecules that they were selecting (and what about the selectors of these selectors, and so on?), so it seems unlikely that a mindless soup could possibly create an effective selector in a prebiotic system.
- When building molecular systems, constant redesigns are needed which take the synthesis back to step one, because it is often impossible to remove a distinctive part of a molecule once it has been added. So if a prebiotic and mindless reaction makes one small mistake, the synthesis has to go back to step one – but that could mean sending it back millions of years, and there is nothing stopping it from making the same mistake again, since it has no memory to prevent the repetition. And even that assumes the process will start again, because chemistry is indifferent to moving toward life.
- A prebiotic system has no ability to purify its structures, which is a necessary part of building life. Separations across the different compounds have to be performed repeatedly, or else the impurities siphon resources away from the preferred chemicals. How does a prebiotic soup achieve this? I am not confident it can.
- The chemical origin of life is very demanding, with its multi-step sequences, each specifically timed and requiring its own reaction conditions. The parameters of temperature, pressure, solvency, light, pH, and atmospheric gases have to be carefully controlled in order to build complex molecular structures. Each chemical step also needs activators and blockers to perform all the required activation steps in time and in order. Nor should we forget that molecular characterization at each step is essential: if the chemist doesn’t know the molecular structure, the process often fails. On top of that, each organic reaction needs to be carefully controlled in isolation to prevent decomposition of the product, which is incredibly difficult. To be honest, with all our scientific research we don’t really know how many chemical steps are needed to make all the chemicals that compose a simple cell, and considering the complexity needed to build the higher-order structures within a cell, the number of steps must be enormous. So to claim that one compound just spilt in from one pool, then another pool dumped in another compound, and then it all just worked out near some volcano is difficult to take seriously. How does nature mindlessly bring sufficient material through a complex multistep synthesis? Your guess is as good as mine.
- Nobody knows how a cell is produced from the massive combinatorial complexity of its molecular components. Nobody has ever synthetically mimicked it either. We are a long way off. Peter Tompa of the University of Brussels and George Rose from Johns Hopkins University calculated that if you consider all protein-protein interactome combinations in just a single cell, the result is an estimated 10^79,000,000,000 combinations (that’s the number one followed by 79 billion zeros). So the claim that “RNA (or DNA) emerged directly from these reactive chemicals, nudged along by dynamic forces” is just nonsense, because a complex pathway of reactions would be needed, along with all the steps of purification and then assembly, polymerization, and sequencing. Reducing all that vast and complex process down to “nudged along by dynamic forces” is like saying that an entire submarine can be explained by throwing a few nuts and bolts here and there.
- When all other explanations fail, some call upon the great saviour: time. Maybe hundreds of millions of years solve their mysteries? Nope. In chemical synthesis, time is often the enemy, especially when making kinetic products that constitute the requisite organic chemicals of life. This is because synthetic reactions do not know how to stop their current course of progression, or why to stop. This means the prebiotic system will continue to make derivatives, constantly going off course. Time really works against obtaining desired chemicals.
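To give a feel for the combinatorial scale these points gesture at, here is a small back-of-the-envelope sketch of my own (not the Tompa-Rose interactome calculation itself): the number of possible linear sequences grows exponentially with length, so even modest biomolecules sit inside search spaces that dwarf the number of atoms in the observable universe (roughly 10^80).

```python
import math

def sequence_space(alphabet_size: int, length: int) -> int:
    """Number of distinct linear sequences of a given length."""
    return alphabet_size ** length

# A modest protein of 100 residues drawn from the 20 standard amino acids:
protein_states = sequence_space(20, 100)
print(f"~10^{math.log10(protein_states):.0f} possible 100-residue proteins")

# A short RNA strand of 250 bases drawn from 4 nucleotides:
rna_states = sequence_space(4, 250)
print(f"~10^{math.log10(rna_states):.0f} possible 250-base RNA strands")
```

Even these single-molecule figures (around 10^130 and 10^150 respectively) are vanishingly small next to the 10^79,000,000,000 interactome combinations cited above, which concern how whole ensembles of such molecules interact.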
If all of this tells you anything, it is that mindless synthesis and structural blindness just aren’t going to do the job. The naturalistic view is clueless about how to assemble the chemicals in the assumed prebiotic soup; in fact, it would seem that the naturalistic view is inadequate as an explanation.
5. The thermodynamics of living systems
Separate from the four main issues noted above, an unavoidable issue with chemical evolution is the thermodynamics of living systems.
Let’s start off with a basic understanding of thermodynamics. Although the total energy in an isolated system remains the same over time, the second law of thermodynamics tells us that such a system tends away from order towards disorder, from complexity to simplicity, since energy becomes more uniformly distributed and states of higher energy tend to move to those of lower energy. The measure of this disorder is called entropy. For instance, rocks roll downhill, since lower altitude corresponds to lower gravitational energy.
Because of entropy, the natural tendency of things is irreversible decay, with an ever greater tendency toward increasing disorder. How does all of this relate to chemical evolution? Well, since the important macromolecules of living systems, such as DNA and protein, are more energy-rich than their precursors (amino acids, heterocyclic bases, phosphates, and sugars), classical thermodynamics would predict that such macromolecules will not form spontaneously. It goes against the natural order of the world.
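The prediction can be stated with the standard textbook criteria for spontaneity (these are general thermodynamic relations, not calculations specific to this essay):

```latex
% Second law, for the universe as a whole:
\Delta S_{\text{universe}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \geq 0

% Spontaneity criterion at constant temperature T and pressure,
% in terms of the Gibbs free energy change:
\Delta G = \Delta H - T\,\Delta S < 0 \quad \text{(spontaneous)}

% Condensation of amino acids into peptides in water is endergonic:
\Delta G > 0 \quad \Rightarrow \quad \text{not spontaneous without energy coupling}
```

Commonly cited figures put the free-energy cost of forming a single peptide bond in aqueous solution at roughly +10 to +20 kJ/mol, which is why living cells couple polymerization to energy carriers such as ATP rather than relying on the reaction running by itself.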
However, there are some physical processes that can help to beat the odds. Sometimes systems can, although with seemingly low probability, move naturally from states of higher entropy to states of lower entropy. How does this happen? Making something more orderly requires what is called negative entropy, which is uncommon, because decay into disorder is much more probable than the development of order in a random system. So if entropy is the amount of disorder, negative entropy means something has less disorder, or more order. And this is possible.
According to the second law of thermodynamics, when we look at a closed system, overall disorder (entropy) will always increase, as any order (negative entropy) must be balanced out by still more disorder (positive entropy). So negative entropy can only occur in a small portion of a closed system. For example, if you iron your shirt, you are using energy and therefore becoming more disordered, while the shirt becomes less disordered, undergoing negative entropy. But since energy use isn’t totally efficient, the system as a whole (you and the shirt together) experiences positive entropy.
So is negative entropy really capable of producing life? Could it be that there was a temporary region of order with concentrated amounts of energy that brought together the first living system against the disruptive force of entropy?
Well, firstly, one big issue is that on Earth negative entropy is highly unlikely to last long enough for the creation of the first living system, since it is a rare and temporary state. Another issue is that while greater levels of energy may help increase order, energy does not necessarily have the ability to correctly organize all the molecules needed to create the first living system. Only recently has it been appreciated that the distinguishing feature of living systems is complexity rather than mere order, and that the essential ingredients of a replicating system, such as the nucleic acids of DNA, are all information-bearing molecules.

In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms, but they carry very little information. Crystals fail to qualify as living because, while they are orderly, they lack specified complexity; mixtures of random polymers fail to qualify because they lack specificity. Living organisms are distinguished by their specified complexity: they carry information, instructions, code. And they carry a lot of it. The sequence matters. Only by specifying the sequence letter by letter (ranging from hundreds of thousands to even billions of instructions) could we tell a chemist what to make. Our instructions would occupy not a few sentences, but books or even bookshelves full. So naturalism holds that ‘nature’ must find some way of selecting the appropriate, vast chemical composition out of a random soup mixture. The task of arranging the selected monomers in the proper sequence for biological function is the ‘coding work’, and realistically, no amount of energy flow is adequate to sort and select all the right ingredients to make a living system. After all, how would you expect the natural world to mindlessly figure that out?
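The claim that the sequence itself is the cargo can be put in numbers using Shannon’s simple upper bound: a sequence of length n over an alphabet of k symbols can carry at most n × log2(k) bits. A quick illustrative sketch (the genome length below is an order-of-magnitude figure for a small bacterium, not a measurement):

```python
import math

def max_information_bits(alphabet_size: int, length: int) -> float:
    """Shannon upper bound: log2 of the number of possible sequences."""
    return length * math.log2(alphabet_size)

# Each DNA base (A, C, G, T) carries at most log2(4) = 2 bits.
genome_length = 4_000_000  # rough order of magnitude for a small bacterial genome
bits = max_information_bits(4, genome_length)
print(f"{bits:,.0f} bits, or roughly {bits / 8 / 1e6:.0f} MB of raw sequence data")
```

The point in the text is that this quantity depends on which specific sequence is present, not merely on how orderly the arrangement is; a perfectly repetitive crystal scores near zero on this measure no matter how large it grows.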
The problem with complexity and the origin of life, from a thermodynamic point of view, is that thermodynamics tends to work against the necessary configurational work of coding, not to mention the work of sorting and selecting. What’s more, thermal entropy has nothing to do with information, and no amount of energy flow through the system, or of negative entropy, can produce even a small amount of information. You can’t get gold out of copper, apples out of oranges, or information out of negative thermal entropy. There does not seem to be any physical basis for the widespread assumption that an open system is a sufficient explanation for the complexity and coding necessary for life.
Edward Steele and his thirty-two co-authors, spread over eleven countries, in ‘Progress in Biophysics and Molecular Biology’ concluded the following: “The transformation of an ensemble of appropriately chosen biological monomers (e.g., amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of super astronomical proportions, an event that could not have happened within the time frame of the Earth except, we believe, as a miracle. All laboratory experiments attempting to simulate such an event have so far led to dismal failure.”
One tipping point between chemistry before and biology after is the feature of self-replication. Now, design theorist Eric H. Anderson has highlighted that discovering the pathway to a self-replicating entity is one of the central challenges facing origin of life research.
Abiogenesis begins with undirected chemical reactions leading, by chance, to a simple self-replicating molecule of some kind. Why must this occur by chance in a naturalistic worldview? Because only once the first self-replicating system appeared could natural selection come into play. Self-replication is the key that unleashes the process of natural selection, and hence biological evolution. Once you have a self-replicating system or molecule, it is then imagined that random mutations and natural selection can step in to help the tender molecule acquire additional traits, eventually leading to the first living organism. The story of materialistic abiogenesis thus makes self-replication the first ability, the initial characteristic of the ancestor of all life. Yet the first self-replicating system, rather than being a simple kick-start at the beginning of the long road of evolution, lies at the end of an extremely complicated, sophisticated, and specified engineering process, which according to abiogenesis must have been the outcome of chance without even natural selection on its side.
True self-replication is a crazily difficult task. Consider a ‘simple’ cell: in order to self-replicate, a cell goes through a process of in-house engineering. It first expands and then constructs a copy of itself (including its DNA) inside itself, using its own cell membrane to form the protective environment for construction, and then divides by drawing the cell membrane inward between the original and the copy, eventually sealing off the gap. A new cell wall and membrane separate the two halves and release the new completed copy into the larger environment. This approach allows the cell to self-replicate properly while avoiding disastrous interfering cross-reactions with other chemicals and molecules in the environment. It also keeps the cellular components from drifting apart and being lost in the watery environment.
To illustrate this, it would be like seeing a car along the highway expanding its own frame to encompass a space the size of two cars, constructing and assembling the new internal components in that protected space, and then building a metal wall between the identical sections in order to release the completed, running copy of the car onto the road beside it. This all sounds a bit sci-fi, but it demonstrates how remarkable self-replication really is. Not to mention that a truly autonomous self-replicating entity must also be able to locate, acquire and make use of raw materials for the construction of new parts, and generate its own power from materials available in the environment. And for successful long-term replication over more than just a few generations, it would be critical to have numerous feedback and quality-control mechanisms, error-correction capabilities and the like. Our car becomes a factory-making factory. This is only a partial outline of what would be involved in building a truly self-replicating system, and it puts into perspective what creating a truly self-replicating machine in the macro-world would involve.
The notion that mindless natural forces could somehow assemble a self-replicating machine with its many abilities from a solitary molecule feels rather far-fetched; in fact, absurd. What sort of super-molecule would be required? Neither DNA nor RNA, after all, comes even close to having the full suite of capacities needed to self-replicate all by itself. Each needs the other, and each needs the cell. But what really is the cell? It’s a work of nanotechnology beyond anything scientists ever expected. The geneticist Michael Denton described it as “an object of unparalleled complexity and adaptive design.” Denton goes further to say that if we could witness all the cell’s components working together, “what we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth. However, it would be a factory which would have one capacity not equaled in any one of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours. To witness such an act… would be an awe-inspiring spectacle.” When we observe that a cell can self-replicate, we can begin to ask what technologies, what capacities, would be required for that to be possible. It’s a long list, and one which casts doubt on all materialistic explanations for life’s origin.
Richard Dawkins claimed in ‘The Selfish Gene’ that “a molecule that makes copies of itself is not as difficult to imagine as it seems at first, and it only had to arise once.” Yes, just imagine. The idea of the spontaneous generation of such a macro self-replicating machine requires a tonne of imagination and a whole lot of luck. I think Eric H. Anderson summarised the point well:
“The accumulated evidence, taken together, strongly suggests that self-replication lies at the end of a very complicated, deeply integrated, highly sophisticated, thoughtfully planned, carefully controlled engineering process… The abiogenesis paradigm, with its placement of self-replication as the first stage of development, is fundamentally flawed at a conceptual level. It is opposed to both the evidence and our real-world experience and needs to be discarded.”
Let’s face the facts: the origin of life, from a naturalistic view, remains a mystery. Those who think otherwise have been misinformed on the matter, or have assumed wrongly. It feels as though the scientific community has been playing Chinese whispers, where one scientific magazine or article says life began spontaneously, while no one can really explain how we arrive at that conclusion, what evidence proves it, and what answers the challenges to it. The conclusion is clear: chemical evolution is in crisis. We have no plausible justification for believing life had a random beginning; in fact, the evidence points away from such a view.
With naturalistic explanations of spontaneous generation (abiogenesis) proving inadequate, we must look elsewhere for the right explanation. The formation of the original cell cannot plausibly be explained by any undirected process; we know this. In any other context, the identification of a nanotechnology vessel capable of energy production, information processing, and the other identified requirements would immediately be recognised as a product of design by any reasonable criteria. In particular, cellular structures and operations demonstrate unmistakable evidence of foresight, coordination, and goal-directedness, which are telltale signs of intelligent activity.
So, maybe the mystery of life’s origin can be solved after all. Maybe the answer is staring at us right in the face.
This essay references material from ‘The Mystery of Life’s Origin: The Continuing Controversy’