
Part 5 — Dr. William Dembski, Audience Q&A

ES:
Ken! You're done. [laughter/applause] [long pause] Okay, I will, we have a little bit of time for questions from the audience. Uh, not a whole lot of time because we do want to let you people go before midnight. Um. I don't, I guess I can probably see hands being raised. Um, not too bad, um, this man. Oh! Hello?

Audience Member 1:
Hi, uh, as I teach science and, and most people in the room do, science involves gathering data after a question has been asked and a hypothesis offered as an explanation. Philosophy, on the other hand, involves offering answers based upon logical thinking as opposed to empirical evidence. What I'd like to know is how you could explain ID as science and not as philosophy.

ES:
Do I need to repeat the question or did you all hear it? He has a good strong voice. I guess that's for, uh, Bill Dembski?

AM1:
Yes.

WD:
Um, [sigh]. You know, yes, I'm trained as a philosopher and there are philosophical implications to what I do, but, uh, these questions of design detection, I mean they arose out of my mathematical work trying to understand the nature of randomness. I mean that's, that's where it came from, because I found that one can only really make sense out of randomness in relation to certain types of patterns. This was, this was a problem actually in understanding the nature of randomness, that, uh, randomness, when people were writing, for instance, random number generators, they would be good only so long as one did not find a pattern which these, uh, which these, uh, random number generators matched. As long as they violated all of the set of statistical patterns, they were fine. And then, this was a constant pattern that, uh, sorry to overuse that word, but, that that random number generators would, would go by the board once a pattern was discovered in them. So, I was trying to make sense out of the types of patterns that we use to defeat randomness, and from there that connected with a certain structure of inference for detecting design that I found in a whole bunch of different, different areas. So I mean, my work in The Design Inference, I mean it appeared in Cambridge Studies in Probability, Induction and Decision Theory. Yes, Brian Skyrms is a philosopher who is the head of, uh, he edits that series, but he's also a member of the National Academy of Sciences. And so, you know, I don't think there are any neat and clean distinctions between philosophy and science, I mean, you know, uh, natural philosophy, that's what science used to be, you know, so the very use of the word science is only about a hundred fifty years old. Uh, so, you know, I, I see the work as relevant, and if you go to my, um, there's a website, a scholarly society that deals with Intelligent Design, dub-dub-dub dot iscid dot org. ISCID; International Society for Complexity, Information and Design.
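Dembski's point about random number generators is that a generator counts as random only until some statistical pattern catches it. A minimal sketch of that idea, using a chi-squared frequency test; the deliberately biased generator below is hypothetical, purely for illustration:

```python
import random

def chi_squared_uniform(samples, bins=10):
    """Chi-squared statistic of integer samples against a uniform distribution."""
    counts = [0] * bins
    for s in samples:
        counts[s] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

def biased_rng(n, bins=10):
    """A deliberately bad generator: outcome 0 appears about twice as often."""
    return [0 if random.random() < 0.2 else random.randrange(1, bins)
            for _ in range(n)]

good = [random.randrange(10) for _ in range(10000)]
bad = biased_rng(10000)

# With 9 degrees of freedom, a statistic above ~21.7 rejects uniformity
# at the 1% level; the biased generator lands far above that line.
print(chi_squared_uniform(good), chi_squared_uniform(bad))
```

Once the test exposes the pattern, the generator "goes by the board," exactly as described above.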
We, uh, actually are doing some, uh, I think some interesting research there. One thing is in computer modeling, to try to get a sense of just how effective is, uh, is the Darwinian mechanism when you try to represent it computationally, with a program called MESA: Monotonic Evolutionary Simulation Algorithm, which tries to make sense of what happens when you start coupling variables and how does that impede the, the progress of the natural selection mechanism in, in finding its way through a search space. So, anyway, uh...
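The coupling idea mentioned here can be sketched in miniature. This is not MESA's actual code, just an illustrative toy: a strict hill climber on two fitness functions, one where each bit scores independently and one where bits only score in complete blocks, which removes the incremental gradient the climber needs:

```python
import random

def hill_climb(fitness, n_bits=40, steps=2000, seed=0):
    """Strict one-bit-flip hill climber: accept a flip only if fitness rises."""
    rng = random.Random(seed)
    genome = [rng.randrange(2) for _ in range(n_bits)]
    for _ in range(steps):
        i = rng.randrange(n_bits)
        candidate = genome[:]
        candidate[i] ^= 1
        if fitness(candidate) > fitness(genome):
            genome = candidate
    return fitness(genome)

def independent(genome):
    """Each bit contributes on its own: the easy, uncoupled case."""
    return sum(genome)

def coupled(genome, block=4):
    """Bits score only inside complete all-ones blocks: crude variable coupling."""
    return sum(block for i in range(0, len(genome), block)
               if all(genome[i:i + block]))

# The uncoupled fitness climbs straight to the maximum; the coupled one
# stalls, because flips inside an incomplete block change nothing.
print(hill_climb(independent), hill_climb(coupled))
```

Whether this toy says anything about biological evolution is exactly the point under debate; it only illustrates what "coupling variables impedes the search" means computationally.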

ES:
Thank you. Thank you. Is there an interest in a one-minute follow-up from either of you, or are you happy? [pause] Okay. Next question. Um. In the aisle.

AM2:
[inaudible]

WD:
Well, let me answer you in two parts. One, if you throw enough money at researchers, you'll be getting research, right? So I think, uh, I think the, you know, the, the research you're citing, I don't mean to dismiss it, I think there's a lot of good stuff being done, but certainly the moneys, the research funds, are on the evolutionary side. We don't have very much funding, we're not getting funding from NSF and NIH, so it's mainly, mainly private at this point. And I would say yes, we have our work cut out for us. In 1997 we met at a conference, but there was a conference later that year which was a private gathering, titled "A Consultation on Intelligent Design," where the idea was to try to jump-start this as a research program. We weren't there at the time. So, you know, I, I agree, we've got our work cut out for us, but, uh, we're making some slow, slow progress. You know, I think, uh, we're still at the point, I mean, I think that my, my work in No Free Lunch and, um, The Design Inference was trying to lay some theoretical foundations. And, uh, you know. But I, I do see there's, there's some good work being done, and I can, I can list some for you. We are getting some stuff into the peer-reviewed literature; it's not, it's not a whole lot, you know. So yeah, we've got our work cut out.

KM:
Genie?

ES:
Yes, one minute.

KM:
Do you want me to respond?

ES:
Yes, one-minute response.

KM:
I'll be shorter than that. Um, I, I, I have to say that I don't quite share Bill's optimism that the, the, uh... I, I agree you've got your work cut out for you, but I don't share your optimism that things are happening. And an example of that, actually, was when I pointed to the entire history of life, the fossil record, and I just asked you "When were the design events?" and all I got was a shrug and the bacterial flagellum, and: "...the others we have to test!" Surely, no matter how much or how little funding you have, sooner or later you'd be able to put a few more arrows on that graph and say here's a design event, there's a design event.

WD:
I think we can, I think we can put some more arrows in there. I think there's some good work happening at the level of individual, uh, enzymes, looking at the design there, so, there's work being done, but, uh, it's not nearly as fast as I would like to see. You know.

RP:
Um...

WD:
Go ahead.

ES:
Okay.

RP:
Apropos of this sort of question, your explanatory filter and the, the CSI criterion is supposed to be a reliable, uh, detector of design. You've admitted that it can come up with false, uh, false negatives, uh, things will fall through the filter. Um, um, is there something, though, where you've tested it to be reliable? Whenever we have an instrument, we usually go through and we, um, check it out, see how many times it works, how many times it fails. Uh, so a reliability claim is something that we would expect to be tested. And looking in your No Free Lunch book, there isn't any indication that you've actually done, done tests on that. Can you tell us, um, how many times it works, sort of, on, on average. Is it ninety-five percent of the time, a hundred percent of the time, twenty percent of the time? Have you done some tests on this?

WD:
Well, the fact is that I've got plenty of critics, you included, who...

RP:
But this is something where you should test it.

WD:
Wh-, wh-, what are the counterexamples? I mean you've, you've been following the design movement, give me a counterexample.

RP:
But the question has to do with whether you've given us any. You... you state that it's an, an inductive generalization. That's part of the reason we should expect this. But I haven't seen even a single case that you've offered where we've seen just how well it works. It's not a matter of responding to counterexamples; it's a matter of testing the instrument.

WD:
We, we, we use these inferences across a whole range of special sciences. I would say that, in fact that's what makes a lot of these design inferences work.

RP:
So they work a hundred percent of the time, or ninety-five percent of the time?

WD:
Well, the, the, the accuracy depends on how well we've, we've grasped the relevant probability distributions, and also the level of improbability that we set to eliminate chance and infer design. And so, you know, I would say that we are, we are perfectly reliable when we're at a universal probability bound. You know,

RP:
Perfect? So you've tested and you get perfect reliability, that's pretty amazing.

WD:
No, no, no. Why are you saying that that's pretty amazing? I mean, this isn't coming up in a vacuum, right? I mean, Émile Borel, a very well known mathematician in the 1930s, was putting forward a universal probability bound. Uh, cryptographers use these notions all the time to assess the reliability of cryptographic systems. They usually take improbabilities at the level of ten to the minus ninety as secure, you know, against cryptographic attacks across the whole universe. If you had the entire universe as a non-quantum computational system, then a system at that level is reliable against cryptographic attacks. You know, so it's, it's not, you know...
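The universal probability bound Dembski appeals to is, in his published work, built from three cosmological estimates whose product is ten to the one hundred fifty. The arithmetic, and its relation to the ten-to-the-minus-ninety cryptographic threshold mentioned here, is simple enough to check:

```python
# Sketch of the arithmetic behind Dembski's universal probability bound,
# using the three estimates he gives (the numbers are his, not measurements):
particles = 10 ** 80       # elementary particles in the observable universe
transitions = 10 ** 45     # maximum state transitions per particle per second
seconds = 10 ** 25         # generous upper bound on cosmological time

max_events = particles * transitions * seconds   # 10^150 possible events
universal_bound = 1 / max_events                 # i.e. 10^-150

# The cryptographic threshold of ten to the minus ninety is far less
# stringent than this bound.
print(universal_bound < 10 ** -90)   # True
```

Whether clearing such a bound licenses a design inference is the disputed step; the computation itself only fixes what the bound is.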

RP:
This is an a priori argument, not an empirical argument.

WD:
It's not, it's not a, it's not an a priori... well, it, uh, it certainly depends on how much, what are the limits, in the case of cryptography, what are the limits of matter and the physics of matter, in terms of what's, er, how much computation can you get out of it. Now, there's a law, Moore's Law, and we all know that, uh, computational power has gone up dramatically, I mean, the, your desktop, or rather your, your notebook computer has more power than the supercomputer of fifteen years ago. But that's not going to, you know, Moore's law says that computational power increases, doubles every eighteen months, but it's not going to do that indefinitely. You're going to reach limits of matter, you know, of what computation can accomplish, and so those limits tell us, in the case of cryptography, what, uh, what, what sort of degree of security a cryptographic system has when it's, when...
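The Moore's-law arithmetic in this exchange is easy to verify: one doubling every eighteen months over fifteen years is ten doublings, roughly a thousandfold, consistent with the notebook-versus-supercomputer comparison. A quick sketch:

```python
def moore_factor(years, doubling_months=18):
    """Computational power multiplier after `years` under Moore's-law doubling."""
    return 2 ** (years * 12 / doubling_months)

# Fifteen years at one doubling per eighteen months is ten doublings:
print(moore_factor(15))   # 1024.0
```

The physical-limits point is that this exponential cannot be extrapolated indefinitely, which is what grounds the cryptographic security estimates being discussed.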

RP:
Sure, a priori argument, so at this point we don't have an empirical test for this.

WD:
But when you say a priori, that is not accurate. Th-, there is an a priori element in the mathematics, but there's also, it depends on what the physics is telling us about ...

RP:
So you have done an empirical test?

WD:
Can I finish the thought?

RP:
Sorry.

WD:
There's also a limit to what computation can, how fast computation can go based on physics. Now that's empirical, so you can't just say [inaudible]

RP:
So the answer is, you have not yet done the test. [laughter]

WD:
The... uh, th, th, the NSA has done the tests. I mean these arguments appear across the board in a lot of different areas. I don't, you know, don't try to put words in my mouth.

ES:
Okay, let's, let's end there. Um, I'm afraid we've run out of time for questions. Uh, Mike, could you come up and get, uh, get started. Thank you very much. [applause]