Part 2 — Dr. William Dembski
Dr. William Dembski:
Thank you Genie, and thank you American Museum for inviting me to this event.
Can't hear. Can't hear.
I said, thank you Eugenie, is that better?
Try the other mike. The mike that is sort of bent over there. Oh, that's a light. [laughter]
Is it working? Is that better? That better?
Would you just be a couple of inches shorter? [pause] Now I'm starting your time.
Good. Evolutionary biology teaches that all biological complexity is the result of material mechanisms. These include principally the Darwinian mechanism of natural selection and random variation, but also include other mechanisms: symbiosis, gene transfer, genetic drift, the action of regulatory genes in development, self-organizational processes, and so on. These mechanisms are just that: mindless material mechanisms that do what they do irrespective of intelligence. To be sure, mechanisms can be programmed by an intelligence, but any such intelligent programming of evolutionary mechanisms is not properly part of evolutionary biology. Intelligent design, by contrast, teaches that biological complexity is not exclusively the result of material mechanisms but also requires intelligence, where the intelligence in question is not reducible to such mechanisms. The central issue, therefore, is not the relatedness of all organisms, what is commonly called common descent. Indeed, intelligent design is perfectly compatible with common descent. Rather, the central issue is how biological complexity emerged and whether intelligence played a pivotal role in its emergence.
Suppose, therefore, for the sake of argument that an intelligence, one irreducible to material mechanisms, actually did play a decisive role in the emergence of life's complexity and diversity. How could we know it? To answer this question, let's run a thought experiment. Imagine that Alice is sending Bob encrypted messages over a communication channel and that Eve is listening in. For simplicity, let's assume that the signals are bit strings. How could Eve know that Alice is not merely sending Bob random coin flips, but meaningful messages?
To answer this question, Eve will require two things. First, the bit strings sent across the communication channel need to be reasonably long. In other words, they need to be complex. If not, chance can readily account for them. Just as there is no way to reconstruct a piece of music given just one note, there is no way to preclude chance for a bit string that consists of only a few bits. For instance, there are only eight strings consisting of three bits, and chance readily accounts for any of them.
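The point about complexity can be sketched numerically, assuming fair coin flips (a toy illustration of my own; the code and numbers are not part of the talk itself):

```python
from itertools import product

# Toy illustration: why short bit strings cannot support a design
# inference. With only 3 bits there are 2**3 = 8 possible strings,
# so any one of them has probability 1/8 under fair coin flips --
# chance readily accounts for it.
short_strings = ["".join(bits) for bits in product("01", repeat=3)]
print(short_strings)           # all 8 three-bit strings
print(1 / len(short_strings))  # probability of any single one: 0.125

# A 100-bit string, by contrast, has probability 2**-100 under chance,
# which is where complexity (improbability) starts to bite.
print(2 ** -100)
```

Only when the improbability is this severe does the first of Eve's two requirements kick in.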
There's a second requirement for Eve to know that Alice is not sending Bob random gibberish. Eve needs to observe a suitable pattern in the signal Alice sends Bob. Even if the signal is complex, it may exhibit no pattern characteristic of intelligence. Flip a coin enough times and you'll observe a complex sequence of coin flips but that sequence will exhibit no pattern characteristic of intelligence. For cryptanalysts like Eve, observing a pattern suitable for identifying intelligence amounts to finding a cryptographic key that deciphers the message. Patterns suitable for identifying intelligence I call specifications.
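What Eve's search for a deciphering key might look like can be sketched with a toy single-byte XOR cipher (the cipher, key, and pattern check below are my own hypothetical illustration, not anything described in the talk):

```python
# Hypothetical setup: Alice encrypts a message with a single-byte XOR key.
plaintext = b"attack at dawn"
key = 0x5A
ciphertext = bytes(b ^ key for b in plaintext)

def looks_like_english(data: bytes) -> bool:
    # crude "specification": every byte is a lowercase ASCII letter or space
    return all(b == 32 or 97 <= b <= 122 for b in data)

# Eve brute-forces all 256 single-byte keys and keeps those whose
# output matches the pattern -- finding such a key is what turns
# apparent gibberish into an identifiable message.
matches = [k for k in range(256)
           if looks_like_english(bytes(b ^ k for b in ciphertext))]
print(matches)  # the true key 0x5A (= 90) is among the candidates
```

The pattern check here stands in for the cryptographic key of the talk: a pattern that, once found, deciphers the signal.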
In sum, Eve requires both complexity and specification to infer intelligence in the signals Alice is sending Bob. This combination of complexity and specification, or specified complexity as I call it, is the basis for design inferences across numerous special sciences, including archaeology, cryptography, forensics, and SETI, the Search for Extra-Terrestrial Intelligence. I detail this in my book, The Design Inference, a peer-reviewed statistical monograph that appeared with Cambridge University Press in 1998.
So, what's all the fuss about specified complexity? The actual term specified complexity did not originate with me. It first occurs in the origin-of-life literature, where Leslie Orgel used it to describe what he regards as the essence of life. And that was thirty years ago. More recently, in 1999, surveying the state of origin-of-life research, Paul Davies remarked, quote, "Living organisms are mysterious not for their complexity per se but for their tightly specified complexity," close quote. Orgel and Davies used specified complexity loosely. In my own research, I formalized it as a statistical criterion for identifying the effects of intelligence. For identifying the effects of animal, human, and extraterrestrial intelligence, the criterion works just fine. Yet when anyone attempts to apply the criterion to biological systems, all hell breaks loose. Let's consider why. Evolutionary biologists claim to have demonstrated that design is superfluous for understanding biological complexity. The only way to actually demonstrate this, however, is to exhibit material mechanisms that account for the various forms of biological complexity out there. Now, if for every instance of biological complexity some mechanism could readily be produced that accounts for it, intelligent design would drop out of scientific discussion. Occam's razor, by proscribing superfluous causes, would in this instance finish off intelligent design quite nicely. But that hasn't happened. Why not?
The reason is that there are plenty of complex biological systems for which no biologist has a clue how they emerged. I'm not talking about hand-waving just-so stories. Biologists have plenty of those. I'm talking about detailed, testable accounts of how such systems could have emerged. To see what's at stake, consider how biologists propose to explain the emergence of the bacterial flagellum, a molecular machine that has become the mascot of the intelligent design movement. Howard Berg at Harvard called the bacterial flagellum "the most efficient machine in the universe." The flagellum is a nano-engineered outboard rotary motor on the backs of certain bacteria. It spins at tens of thousands of RPM, can change direction in a quarter turn, and propels the bacterium through its watery environment. According to evolutionary biology, it had to emerge via some material mechanism. Fine. But how? The usual story is that the flagellum is composed of parts that previously were targeted for different uses and that natural selection then co-opted to form a flagellum. This seems reasonable until we try to fill in the details. The only well-documented examples that we have of successful co-optation come from human engineering. For instance, an electrical engineer might co-opt components from a microwave oven, a radio, and the screen from a computer to form a working television. But in that case we have an intelligent agent who knows all about electrical gadgets and about televisions in particular. Natural selection, by contrast, doesn't know a thing about bacterial flagella. So how is natural selection going to take extant protein parts and co-opt them to form a flagellum?
The problem is that natural selection can only select for pre-existing function. It can, for instance, select for larger finch beaks when the available nuts are harder to open. Here the finch beak is already in place and natural selection merely enhances its present functionality. Natural selection might even adapt a pre-existing structure to a new function. For example, it might start with finch beaks adapted to opening nuts and end with beaks adapted to eating insects. But for co-optation to result in a structure like the bacterial flagellum, we are not talking about enhancing the function of an existing structure or re-assigning an existing structure to a different function, but re-assigning multiple structures previously targeted for different functions to a novel structure exhibiting a novel function. The bacterial flagellum requires about 50 proteins for its assembly and structure. All these proteins are necessary in the sense that lacking any of them a working flagellum does not result.
The only way for natural selection to form such a structure by co-optation then is for natural selection gradually to enfold existing protein parts into evolving structures whose functions co-evolved with the structures. We might, for instance, imagine a five-part mousetrap consisting of a platform, spring, hammer, holding bar, and catch evolving as follows. It starts off as a doorstop, thus merely consisting of the platform. Then it evolves into a tie clip by attaching the spring and hammer to the platform, and finally becomes a full mousetrap by also including the holding bar and catch. Ken Miller finds such scenarios not only completely plausible, but also deeply relevant to biology. In fact, he regularly sports a modified mousetrap cum tie clip.
Intelligent design proponents, by contrast, regard such scenarios as rubbish. Here's why. First, in such scenarios the hand of human design and intention meddles everywhere. Evolutionary biologists assure us that eventually they will discover just how evolutionary processes can take the right and needed steps without the meddling hand of design. But all such assurances presuppose that intelligence is dispensable in explaining biological complexity. The only evidence we have of successful co-optation, however, comes from engineering, and confirms that intelligence is indispensable in explaining complex structures like the mousetrap and, by implication, the bacterial flagellum. Intelligence is known to have the causal power to produce such structures; we're still waiting for the promised material mechanisms.
The other reason design theorists are less than impressed with co-optation concerns an inherent limitation of the Darwinian mechanism. The whole point of the Darwinian selection mechanism is that you can get from anywhere in configuration space to anywhere else, provided you can take small steps. How small? Small enough that they are reasonably probable. But what guarantees do you have that a sequence of baby steps connects any two points in configuration space?
Richard Dawkins compares the emergence of biological complexity to climbing a mountain, Mount Improbable, as he calls it. According to him, Mount Improbable always has a gradual serpentine path leading to the top that can be traversed in baby steps. But that's hardly an empirical claim. Indeed, the claim is entirely gratuitous. It might be a fact about nature that Mount Improbable is sheer on all sides and that getting to the top from the bottom via baby steps is effectively impossible. A gap like that would reside in nature herself and not in our knowledge of nature. It would not, in other words, constitute a god of the gaps. The problem is worse yet. For the Darwinian selection mechanism to connect point A to point B in configuration space, it is not enough that there merely exists a sequence of baby steps connecting the two. In addition, each baby step needs in some sense to be successful. In biological terms, each step requires an increase in fitness as measured in terms of survival and reproduction. Natural selection, after all, is the motive force behind each baby step, and selection only selects what is advantageous to the organism. Thus, for the Darwinian mechanism to connect two organisms, there must be a sequence of successful baby steps connecting the two.
Again, it is not enough merely to presuppose this; it must be demonstrated. For instance, it is not enough to point out that some genes for the bacterial flagellum are the same as those for a Type III secretory system, a type of pump, and then hand-wave that one was co-opted from the other. Anybody can arrange complex systems in a series. But such series do nothing to establish whether the end evolved in Darwinian fashion from the beginning unless the probability of each step can be quantified, the probability of each step turns out to be reasonably large, and each step constitutes an advantage to the organism. Convinced that the Darwinian mechanism must be capable of doing such evolutionary design work, evolutionary biologists rarely ask whether such a sequence of successful baby steps even exists. Much less do they attempt to quantify the probabilities involved.
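The demand to quantify each step can be illustrated with a back-of-the-envelope product of per-step probabilities, assuming hypothetical numbers (both the per-step probability and the step count below are made up purely for illustration, not taken from any calculation in the talk):

```python
from math import prod

# Hypothetical figures: if a pathway requires a sequence of independent
# "baby steps", the probability of traversing the whole path is the
# product of the per-step probabilities -- modest per-step odds compound.
step_probability = 0.1   # assumed chance that any one step succeeds
steps = 50               # assumed number of required steps

path_probability = prod([step_probability] * steps)
print(path_probability)  # about 1e-50
```

The arithmetic shows why the argument turns on quantifying each step: a chain of individually plausible steps can still be, as a whole, astronomically improbable.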
In chapter 5 of my most recent book, "No Free Lunch", I attempt to do just that. There I lay out techniques for assessing the probabilistic hurdles that the Darwinian mechanism faces in trying to account for complex biological structures like the bacterial flagellum. The probabilities that I calculate, and I try to be conservative, are horrendous, and render natural selection entirely implausible as a mechanism for generating the flagellum and structures like it. If I'm right, and the probabilities really are horrendous, then the bacterial flagellum exhibits specified complexity. Furthermore, if specified complexity is a reliable empirical marker of intelligent agency, then systems like the bacterial flagellum bespeak intelligent design and are not solely the effects of material mechanisms.
It's here that critics of intelligent design raise the "argument from ignorance" objection. For something to exhibit specified complexity entails that no known material mechanism operating in known ways is able to account for it. That leaves unknown material mechanisms; it also leaves known material mechanisms operating in unknown ways. Isn't arguing for design on the basis of specified complexity, therefore, merely an argument from ignorance?
Two comments on this objection. First, the great promise of the Darwinian and other naturalistic accounts of evolution was precisely to show how known material mechanisms operating in known ways could produce all of biological complexity. So, at the very least, specified complexity is showing that problems claimed to be solved by naturalistic means have not been solved. Second, the argument from ignorance could in principle be raised for any design inference that employs specified complexity, including those where humans are implicated in constructing artifacts. An unknown material mechanism might explain the origin of the Mona Lisa in the Louvre, or the Louvre itself, or Stonehenge, or how two students wrote exactly the same essay. But no one is looking for such mechanisms. It would be madness even to try. Intelligent design caused these objects to exist, and we know that because of their specified complexity. Specified complexity, by being defined relative to known material mechanisms operating in known ways, might always be defeated by showing that some relevant mechanism was omitted. That's always a possibility, though, as with the plagiarism example and many other cases, we don't take it seriously. As William James put it, "there are live possibilities and there are bare possibilities." There are many design inferences that can be doubted only by invoking a bare possibility. Such bare possibilities, if realized, would defeat specified complexity not by rendering the concept incoherent, but by dissolving it.
In fact, that is how Darwinists, complexity theorists, and anyone intent on defeating specified complexity as a marker of intelligence usually attempt it: namely, by showing that it dissolves once we have a better understanding of the underlying material mechanisms that render the object in question reasonably probable. By contrast, design theorists argue that specified complexity in biology is real, and that any attempt to palliate the complexities or improbabilities by invoking as-yet-unknown mechanisms, or known mechanisms operating in unknown ways, is destined to fail. This can, in some cases, be argued convincingly, as with Michael Behe's irreducibly complex biochemical machines, and with biological structures whose geometry allows complete freedom in possible arrangements of parts.
Consider, for instance, a configuration space comprising all possible character sequences from a fixed alphabet. Such spaces model not only written text but also polymers like DNA, RNA, and proteins. Configuration spaces like this are perfectly homogeneous, with one character string geometrically interchangeable with the next. Geometry, therefore, precludes any underlying mechanisms from distinguishing or preferring some character strings over others. Not material mechanisms, but external semantic information, in the case of written texts, or functional information, in the case of polymers, is needed to generate specified complexity in these instances. To argue that this semantic or functional information reduces to material mechanisms is like arguing that Scrabble pieces have inherent in them preferential ways they like to be sequenced. They don't. Michael Polanyi offered such arguments for biological design in the 1960s.
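The homogeneity of such configuration spaces can be made concrete with a small sketch, assuming a protein-like alphabet of 20 characters (the specific alphabet size and sequence length are my own illustrative choices, not figures from the talk):

```python
# A configuration space of all length-n strings over a k-letter alphabet.
# Every string sits on the same geometric footing, so under a uniform
# chance hypothesis each sequence has the identical probability 1/k**n.
alphabet_size = 20   # e.g. the 20 standard amino acids of proteins
length = 100         # a short protein-like sequence

num_configurations = alphabet_size ** length
print(num_configurations)      # 20**100, about 1.27e130 configurations
print(1 / num_configurations)  # uniform probability of any one sequence
```

Because the space assigns every sequence the same probability, nothing in the geometry can single out the functional sequences; that is the sense in which the preference must come from outside the mechanism.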
In summary, evolutionary biology contends that material mechanisms are capable of accounting for all of biological complexity, yet for biological systems that exhibit specified complexity, these mechanisms provide no explanation of how they were produced. Moreover, in contexts with a causal history that is independently verifiable, specified complexity is reliably correlated with intelligence. At a minimum, biology should therefore allow the possibility of design in cases of biological specified complexity. But that's not the case. Evolutionary biology allows for only one line of criticism: namely, to show that a complex specified biological structure could not have evolved via any material mechanism. In other words, so long as some unknown material mechanism might have evolved the structure in question, intelligent design is proscribed. This renders evolutionary theory immune to disconfirmation in principle, because the universe of unknown material mechanisms can never be exhausted. Furthermore, the evolutionist has no burden of evidence. Instead, the burden of evidence is shifted entirely to the evolution skeptic, and what is required of the evolution skeptic? The skeptic must prove nothing less than a universal negative.
That is not how science is supposed to work. Science is supposed to pursue the full range of possible explanations. Evolutionary biology, by limiting itself to material mechanisms, has settled in advance which biological explanations are true, apart from any consideration of empirical evidence. This is armchair philosophy. Intelligent design may not be correct, but the only way we could discover that is by admitting design as a real possibility, not ruling it out a priori. Darwin himself would have agreed. In the Origin of Species he wrote, "a fair result can be obtained only by fully stating and balancing the facts and arguments on both sides of each question." Thank you. [applause]