Although I am not really interested in evolution debates generally, I recently came across a piece written by John Lennox that seemed to be so fresh and interesting that my mind ticked over with its implications until late that night. Lennox is a mathematician at Oxford, and a prolific author and debater.
Few responsible scientists would deny evolution wholesale. I think that God probably created in a way that we can – and should – investigate scientifically; it is just that Darwinian evolution as proposed by Dawkins et al. wasn’t it. I don’t doubt evolution for theological reasons, but simply because it is incomplete as a scientific theory. Some scientistic fundamentalists like Richard Dawkins do make grandiose claims about its explanatory power, but most serious secular scientists recognise the severe shortcomings of the theory.
Over the years, several arguments against purely Darwinian evolution have been advanced. Michael Behe’s argument from irreducible complexity has endured much scrutiny, but has not been refuted as far as I can tell. However, since it requires in-depth knowledge of molecular biology not available to most people, it isn’t always an effective tool for layperson discussions. I think Alvin Plantinga’s Evolutionary Argument Against Naturalism (EAAN) does provide his “undefeated defeater”, but after critiques and rejoinders the logic becomes so advanced that most people cannot follow it, and so it isn’t practical either.
Enter the semiotic argument against naturalism. By applying powerful (and easily understood) results from modern mathematics and information theory to natural systems, one can, it seems to me, make a sound argument for the logical impossibility of a naturalistic origin of the DNA molecule. Lennox’s thorough treatment in Chapters 8-11 of God’s Undertaker is highly recommended; what follows here is a brief and heavily simplified summary of his argument.
The argument can be written as follows:
1. Biological systems are information systems.
2. The information contained in a DNA molecule is algorithmically incompressible.
3. Such information-producing algorithms aren’t present in nature.
4. Algorithms that produce incompressible pieces of information have to themselves be more complex, or receive a more complex input of information, than that which they produce, and therefore do not produce new information.
5. Therefore the algorithm of evolution by natural selection (or any other unguided process) cannot produce any new information, including that contained in the DNA molecule.
1) Biological systems are information systems
What lies at the heart of every living thing is not a fire, warm breath, nor a “spark of life”. It is information, words, instructions… Think of billions of discrete digital characters… If you want to understand life, think about digital technology. Richard Dawkins.
The concept of information is central both to genetics and evolutionary theory. John Maynard Smith
The problem of the origin of life is clearly basically equivalent to the problem of the origin of biological information. Bernd-Olaf Küppers
In Shannon’s information theory (which concerns the transmission of information over a channel) only the symbols themselves matter, so a distinction is drawn between syntactic and semantic information. EVOY OL UI represents syntactic information: 10 symbols, counting the spaces. But the symbols don’t mean anything; they are not semantic. I LOVE YOU contains exactly the same symbols, but now represents semantic information, in a way that no other arrangement of those letters would convey in the English language.
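To make the point concrete, here is a small sketch of my own (not part of Lennox’s argument): because the two strings are anagrams of one another, Shannon’s symbol-counting measure cannot tell them apart.

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Entropy (bits per symbol) of the character distribution of s."""
    counts = Counter(s)
    total = len(s)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# The two strings use exactly the same ten symbols, so their Shannon
# entropies are identical (about 2.92 bits per symbol) -- the measure
# is blind to the fact that only one of them means anything.
print(shannon_entropy("EVOY OL UI"))
print(shannon_entropy("I LOVE YOU"))
```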
The DNA molecule contains about 7 billion bits of semantic information, coded with a 4-letter alphabet: A, G, C and T (adenine, guanine, cytosine and thymine). Their order in the genome, together with the RNA that transcribes it, codes for the production of various proteins – like instructions in a computer program or recipes in a recipe book. Except that this recipe book would fill a whole library. Clearly, biological systems are information systems of a high order.
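A toy sketch of my own (hugely simplified – a real codon table has 64 entries and the cellular machinery is far more involved) gives a flavour of how that “program” is read: the letters are grouped into three-letter codons, each specifying an amino acid of a protein.

```python
# Toy translation of a short DNA fragment into amino acids.
# Only a handful of codons are included here; a real codon table has 64 entries.
CODON_TABLE = {
    "ATG": "Met",  # start codon
    "TTT": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "TAA": "STOP",
}

def translate(dna: str):
    """Read the sequence three letters at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    protein = []
    for codon in codons:
        amino = CODON_TABLE.get(codon, "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```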
2) The information contained in a DNA molecule is algorithmically incompressible.
Assume that you come across two strings 3.5 billion letters long: one says ILOVEYOUILOVEYOUILOVEYOU… and the other is a string of random letters typed by a monkey at a keyboard. Are both equally complex? Not at all: the first string is simply ILOVEYOU repeated many times, and can be generated by an algorithm like
Write ILOVEYOU.
Repeat 437.5 million times.
Stop.
The second string cannot be compressed into a short algorithm like the one above. Lennox remarks that algorithmic compressibility is a good measure of randomness: the less a string can be compressed, the more random it is.
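A quick way to get a feel for this – my own illustration, using an off-the-shelf compressor as a crude stand-in for algorithmic compressibility – is to compress both kinds of string:

```python
import random
import string
import zlib

# A highly structured string and a "monkey at the keyboard" string of equal length.
structured = "ILOVEYOU" * 100_000                      # 800,000 characters
random.seed(0)
monkey = "".join(random.choices(string.ascii_uppercase, k=800_000))

for label, text in [("structured", structured), ("random", monkey)]:
    compressed = zlib.compress(text.encode(), level=9)
    print(f"{label:>10}: {len(text)} chars -> {len(compressed)} bytes compressed")

# The structured string collapses to a tiny fraction of its size, because a
# short description ("ILOVEYOU repeated 100,000 times") captures it completely.
# The random string stays near the ~4.7 bits per character that 26 equally
# likely letters require: there is no description much shorter than the string itself.
```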
Now suppose you have another string 3.5 billion letters long, but this time the letters are those of the books in a library. It is as complex as the monkey’s string in the sense that it is incompressible, but it exhibits specified complexity. That is, the information is semantic and intelligible to us, because we have learned the English language independently of the string and can interpret it. Both random strings and strings of specified complexity are therefore algorithmically incompressible, but the latter convey semantic information whilst the former don’t. An analogy of ink spilled on a page vs. a message written on a page can also be used: both geometries are equally unlikely, but the one conveys information whilst the other doesn’t. Suppose now that you have a 10-word sentence. There are 10! = 3 628 800 orders in which its words could be arranged, but only one arrangement conveys the intended meaning. In the same way, there are some 10^320 sequence alternatives by which the genome could code even the simplest biologically significant amino-acid chains, and only a few of them work.
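Here is a small sketch of my own that ties these two ideas together: a 10-word sentence and a shuffled version of it resist compression about equally, yet only one ordering is meaningful English – and the compressor cannot see that.

```python
import random
import zlib
from math import factorial

sentence = "the quick brown fox jumps over one lazy sleeping dog"
words = sentence.split()
print(f"{len(words)} words can be ordered in {factorial(len(words)):,} ways")  # 3,628,800

random.seed(1)
shuffled = " ".join(random.sample(words, len(words)))

# Both orderings compress about equally badly -- by that crude measure they are
# equally "complex" -- yet only the first is grammatical English. That extra
# property is the "specified" part of specified complexity.
for label, text in [("ordered ", sentence), ("shuffled", shuffled)]:
    print(label, len(zlib.compress(text.encode())), "bytes:", text)
```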
Notice here that the argument isn’t about likelihood and improbability; it is about the information content of the genome, and the fact that such information is algorithmically incompressible.
3) Such information-producing algorithms aren’t present in nature
Can specific randomness be the guaranteed product of a mechanical, law-like process, like a primordial soup left to the mercy of familiar laws of physics and chemistry? No it couldn’t. No known law of nature could achieve this… We conclude that biologically relevant macromolecules simultaneously possess two vital properties: randomness and extreme specificity. A chaotic process could possibly achieve the former property but would have a negligible probability of achieving the latter. Paul Davies, The Fifth Miracle.
Pre-biological evolution is a contradiction in terms. Theodosius Dobzhansky.
Notice that careful scientists use the word ‘negligible’ to describe the probability of this situation arising accidentally or randomly. However, this is not a fine-tuning argument relying on small probabilities; no-one seriously believes that semantic information like this arises randomly. The question, for all scientists, is: by what mechanism did it arise?
One possible objection might be that the DNA molecule is somehow determined by its physics and chemistry – a kind of “self-assembly”, as is sometimes seen in nature – so that its sequence isn’t as unlikely as we might think. However, this possibility has been roundly dismissed:
Whatever may be the origin of a DNA configuration, it can function as a code only if its order is not due to the forces of potential energy. It must be as physically indeterminate as the sequence of words is on a printed page. Michael Polanyi
… the message [printed on the page] is not derivable from the physics and chemistry of paper and ink. John Lennox
Attempts to relate the idea of order with biological organization or specificity must be regarded as a play on words which cannot stand careful scrutiny. Informational macromolecules can code genetic messages and therefore carry information because the sequence of bases or residues is affected very little, if at all, by physiochemical factors. Hubert Yockey, Information Theory and Biology
4) Algorithms that produce incompressible information have to themselves be longer, or receive a more complex input of information, than that which they produce, and therefore do not produce new information.
A machine does not create any new information, but it performs a very valuable transformation of known information. Léon Brillouin
The brilliant Kurt Gödel, described by many as the best logician since Aristotle, laid the groundwork for this premise. Gödel was an Austrian mathematician and philosopher who later worked at Princeton’s Institute for Advanced Study, where he became a close friend of Albert Einstein. He became famous for two incompleteness theorems, devised at the age of 24, which undermined the programme of Bertrand Russell and Alfred North Whitehead’s magisterial Principia Mathematica and showed that mathematical knowledge can never be made complete – results with such far-reaching and profound implications that the scientific world is still reeling 85 years later:
Kurt Gödel showed that it is impossible to establish the internal logical consistency of a very large class of deductive systems – elementary arithmetic, for example – unless one adopts principles of reasoning so complex that their internal consistency is as open to doubt as that of the systems themselves. Gordana Dodig-Crnkovic
Russell, who had spent decades trying to place mathematical knowledge on indubitable foundations, was forced to concede defeat:
I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than anywhere…But after some twenty years of arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable.
Although Gödel didn’t develop this further, he foresaw the implications of his work:
More generally, Gödel believes that mechanism in biology is a prejudice of our time which will be disproved. In this case, one disproval, in Gödel’s opinion, will consist in a mathematical theorem to the effect that the formation within geological times of a human body by the laws of physics (or any other laws of a similar nature), starting from a random distribution of the elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components. Hao Wang, Nature’s Imagination – The Frontiers of Scientific Vision.
The mathematician Gregory Chaitin did build on Gödel’s work, and found that one cannot, in general, prove that a given sequence is incompressible – that is, that no program shorter than the sequence itself could generate it (‘Computational complexity and Gödel’s incompleteness theorem’, ACM SIGACT News, no. 9, April 1971, pp. 11-12).
Chaitin’s arguments are based on the Turing machine. This is an abstract mathematical construct named after its inventor, the brilliant mathematician Alan Turing, who worked at Bletchley Park in the UK during the Second World War and led the team that cracked the famous Enigma code. The upshot of Chaitin’s work is to make possible the idea that no Turing machine can generate information that does not either belong to its input or its own informational structure. Why is this important? Because according to the Church-Turing thesis, any computational device whatsoever (past, present or future) can be simulated by a Turing machine. On that basis, any result obtained for Turing machines can be at once translated into the digital world. John Lennox, God’s Undertaker
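For readers who have never met the construct, here is a minimal sketch of my own (not from Lennox): a Turing machine is nothing more than a tape of symbols, a read/write head, and a finite table of rules. The toy machine below simply inverts a string of bits and then halts.

```python
# A minimal Turing machine simulator: a tape, a head, and a finite rule table.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules: {(state, symbol): (new_symbol, move, new_state)}, move in {-1, 0, +1}."""
    tape = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rule table for the bit-inverting machine.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(invert, "1101001"))  # -> 0010110
```

By the Church-Turing thesis quoted above, anything a digital computer can do can in principle be expressed as such a rule table, which is why results about Turing machines carry over to computation in general.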
Bernd-Olaf Küppers took this even further:
In sequences that carry semantic information the information is clearly coded irreducibly in the sense that it is not further compressible. Therefore there do not exist any algorithms that generate meaningful sequences where those algorithms are shorter than the sequences they generate. Bernd-Olaf Küppers
Küppers’ conjecture seems to me to be the weakest part of the argument and warrants further investigation. He admits that it is a conjecture since, by Chaitin’s result, it is impossible to prove, for a given sequence, that there is no shorter algorithm that could generate it. It is only a conjecture by mathematical standards, but as a scientific hypothesis it passes with flying colours: no counterexample has ever been devised. Moreover, it seems self-evident to me personally, having coded many algorithms during my career. The point of writing them is to solve problems you couldn’t solve yourself – to give you new information once you interpret the results. But algorithms only ever do what you tell them to do. Ask any first-time programmer: this can be very frustrating. Without highly directed, thoughtful, intelligent input (it is much like poetry in that respect), all you do is make mistakes faster. GIGO – “Garbage In, Garbage Out” – is a programming mantra that has even become cliché.
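A loose illustration of my own (not Küppers’): a pseudo-random number generator produces output that looks as incompressible as the monkey’s string, yet every byte of it is already implicit in a short program plus a small seed – algorithmically, nothing new has been created.

```python
import random
import zlib

# A long, random-looking output stream...
rng = random.Random(42)
output = bytes(rng.getrandbits(8) for _ in range(1_000_000))
print(len(zlib.compress(output)))   # about 1,000,000 bytes: the stream looks incompressible

# ...yet the whole megabyte is completely determined by these few lines of code
# plus one small seed, so it can be reproduced exactly. Its algorithmic
# information content is no greater than that of the program and seed.
rng2 = random.Random(42)
assert output == bytes(rng2.getrandbits(8) for _ in range(1_000_000))
```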
5) Therefore the algorithm of evolution by natural selection (or any other unguided process) cannot produce any new information, including that contained in the DNA molecule.
As Lennox says, “much interesting and difficult work remains to be done in this area.” Nevertheless, the implications of such a conclusion are staggering. If this semantic information could not be produced naturalistically, it had to come from outside the system. Notwithstanding the known intertextual references, there may be even more to John’s opening statement that “in the beginning was the Word” than we think.
This is not a god-of-the-gaps argument in the classical sense, although it does turn on a gap. The nature of the gap is important: generally, god-of-the-gaps arguments are criticized because they are based on an absence of knowledge – “we don’t know how it happened, so God must have done it.” As scientific knowledge progresses, we find a naturalistic explanation and the gap vanishes, along with the god. The gap in this case, however, is due to a presence of knowledge. Up to now, naturalistic biogenesis has itself been an evolution-of-the-gaps argument: “We don’t know how, but evolution must have done it!” But that doesn’t work anymore; it is not that we don’t know how the information could arise naturalistically, it is that we are proving that it couldn’t.
Herman blogs at standard-deviations.com