Let's look into these questions in some more detail, starting with the origin of information in the first cell. Why would Dembski be interested in the origin of information in the first cell, one may wonder? Simple: because science has shown that information can in fact increase in a cell under purely natural processes of regularity and chance. Unable to eliminate chance and regularity, the design inference remains quite powerless. But all hope should not be abandoned; one can always move the origin of 'information' to an earlier time in history, such as the 'first cell', or if that does not work, to the origin of the universe. This concept is known as 'front loading' and merely acknowledges that, given a particular initial condition, natural processes are sufficient to explain its evolution. In other words, Dembski's move to front loading has made Intelligent Design even more vacuous.

So what about the concept of information? Much has been written on how Intelligent Design defines, and in the eyes of some redefines and muddles, the concept of information. So what is information in ID-speak? It's the negative base-2 logarithm of the probability p: if an event has a probability p, then Dembski defines the information of such an event to be -log2(p). Nothing wrong with that, other than that ID activists confuse the concept of information à la Dembski with how the term is used in science.

So what's the problem with Dembski's definition? First of all, how is the probability p calculated? Is it the probability of the event happening under the assumption of a uniform distribution function? Or is it the probability of the event happening under the assumption of a particular chance-and-regularity pathway? Irregardless of which one of these definitions is used, there are some major problems.

Let's take the first definition. Under this definition, a low probability gives no reason to presume that chance and regularity processes cannot be responsible for the event, for the same reason that it gives no positive reason to presume 'design'. The second definition is more interesting because it shows the vacuity of the design inference. Once a particular natural pathway has been shown, the probability of the event becomes close to 1 and thus the amount of 'information' in the event drops to zero. In other words, by defining information in this manner, Dembski has all but guaranteed that natural processes cannot generate information. In addition, one may argue that if a designer was involved, then the probability of the event would also be close to 1 and thus the information would also be close to zero. In other words, information as proposed by Dembski is a meaningless concept.

So how does real science deal with the concept of information? A good example is found in the work of Tom Schneider, who uses 'Shannon information' to show how natural processes can increase the amount of information in the genome. In other words, there is, at least in principle, no reason to reject natural processes as being responsible for information in the first cell. Of course, there is also, in principle, no reason to reject that the first cell was designed. It all comes down to the evidence, and to a comparison of the hypotheses generated by science versus the hypotheses generated by Intelligent Design. Fair enough; after all, lacking any such hypotheses from either side, one may at most conclude that 'we don't know'. And there is nothing wrong with such a position, at least from a science perspective.
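To make the numbers concrete, here is a minimal Python sketch of how Dembski's measure behaves; the probabilities are made up for illustration, and note that the collapse to zero bits applies whether the high-probability pathway is natural or designed:

```python
import math

def self_information(p: float) -> float:
    """Dembski-style 'information' of an event with probability p, in bits."""
    return -math.log2(p)

print(self_information(0.5))    # 1.0 bit: a fair coin flip
print(self_information(1e-6))   # ~19.9 bits: a one-in-a-million event

# The problem noted above: once a pathway (natural or designed) with
# probability near 1 is identified, the 'information' collapses to ~0.
print(self_information(0.999))  # ~0.0014 bits
```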
From an Intelligent Design perspective, our ignorance should be counted as evidence in favor of something called 'design'. The question is: why? Because of 'specification'. So what is specification? Well, according to Dembski, specification in biology is merely 'function'. But wait a minute: function is exactly what one would expect to arise under the processes of chance and regularity (Darwin's theory of evolution), so again, Intelligent Design cannot claim that specification somehow resolves our ignorance to one side or the other.

Which means that we return to hypotheses and the very important question: what hypotheses does Intelligent Design propose to explain a particular system or event it claims to have been designed? Remember, we have already determined that 'design' itself is not sufficient, as natural processes such as variation and selection can lead to complex specified information. In the past, people have in fact asked Dembski exactly this question, and his response is quite helpful in establishing the scientific vacuity of Intelligent Design. Rafe Gutman described a plausible scenario as to how science explains the complement system and asked Dembski to provide an explanation based on Intelligent Design. Dembski responded as follows:

"What are the other vexing questions facing biologists that we are led to believe have already been solved? How about the origin of the information in the first cell? How about the origin of molecular machines? What about Haldane's dilemma?" (Link)

In other words, not only will ID remain scientifically vacuous but also content-free, as ID cannot match the 'pathetic level of detail in telling mechanistic stories' (aka hypotheses).

Let's see what else we can say about the original question, "How about the origin of molecular machines?" Again, we can first establish that Intelligent Design provides neither any detailed hypotheses nor a scientifically relevant foundation. So how well does science do? Before I answer this question, let me point out that science is in the comfortable position that from the start it is competing with 'we don't know'. The same of course applies to ID, but as I have shown, ID cannot even really compete with the null hypothesis.

So how well does science do in explaining the origin of molecular machines? An often-quoted example of 'design' is the bacterial flagellum. Nick Matzke has presented quite a detailed overview of the origin and evolution of the bacterial flagellum. But Matzke not only presented testable hypotheses, he also made some predictions which have recently been shown to be supported by additional data. So a valid question seems to be: what has Intelligent Design contributed to our understanding of the bacterial flagellum? The answer is a 'shocking' nothing, nada... Don't take my word for it; check out the content-free, and science-free, website Uncommon Descent.

"As for your example, I'm not going to take the bait. You're asking me to play a game: 'Provide as much detail in terms of possible causal mechanisms for your ID position as I do for my Darwinian position.' ID is not a mechanistic theory, and it's not ID's task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering."

— Dembski
41 Comments
steve s · 23 May 2006
By all means, do check out Uncommonly Dense. But know in advance, you aren't going to find knowledgeable posts of the kind you find here, nothing resembling, "No genes were lost in the making of this whale", or "Jellyfish lack true Hox genes!". What you will find, however, is lots of comedy, and lots of Jesus.
secondclass · 23 May 2006
Actually, -log2(P) is the standard definition for self-information, so Dembski isn't cheating by using this definition. The problems occur when, as Pim notes, he glosses over his premises for calculating P, or when he defines bizarre spin-offs, like CSI or added information. Dembski is either utterly incompetent or deliberately obfuscative with regards to information theory.
Jason · 23 May 2006
Dude, you used irregardless. That's not a word.
Glen Davidson · 23 May 2006
Yeah, you know it's supposed to be "irregardlessly".
Glen D
http://tinyurl.com/b8ykm
Chris Hyland · 23 May 2006
Not being all too knowledgeable in information theory, I was more interested in how these ideas apply to biology. So I read up to the part where the calculation of the probability of the evolution of the flagellum included a calculation of the probability of all the proteins forming from random combinations of amino acids. Then I put the book down and backed slowly towards the door.
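For a sense of why that calculation is a deal-breaker, here is a toy Python version of it: a hypothetical 100-residue protein assembled from uniform random draws of the 20 amino acids, which is precisely the irrelevant model being criticized.

```python
import math

# Toy version of the calculation Chris describes: the chance of assembling one
# specific 100-residue protein by drawing amino acids uniformly at random.
p_random_assembly = (1 / 20) ** 100
print(p_random_assembly)                 # ~7.9e-131
print(-math.log2(p_random_assembly))     # ~432 bits of Dembski-style 'information'
```

The astronomically small number is an artifact of the model, not of biology: evolution does not sample sequences uniformly at random in a single trial.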
Vyoma · 23 May 2006
Ironically, by doing what he's doing, Dembski demonstrates that ID itself represents the sum of what he thinks mutations must ultimately add to: a complete loss of information. Talk about self-fulfilling prophecy.
Shalini · 23 May 2006
As expected, I was asked to leave Uncommon Descent by D_mbski himself after I made the following remark:
Dear Bill,
You'll get your prize when you start doing some science.
On a side note:
Yup, I asked for it.
Shalini · 23 May 2006
As expected, I was asked to leave Uncommonly Dense by D_mbski himself.
(snicker)
Thanks Bill, that just shows what a great champion for 'academic freedom' you are.
(double snicker)
Henry J · 23 May 2006
Re "I think Random Chance Chemists would have a heck of a time trying to identify 92 natural elements "
Not to mention 24+ unnatural elements.
Henry
Bored Huge Krill · 23 May 2006
The whole argument presented by Dembski about "where did the information come from" is actually much worse than the characterization here. The point is not merely that "natural processes can increase the amount of information in the genome". Rather, any stochastic process necessarily generates information. No other qualifiers necessary. For anybody actually familiar with information theory (and one presumes Dembski's target audience must be people who aren't) his argument is simply baffling; it just makes no sense whatsoever.
Shannon explains this rather well in his original 1948 paper, "A Mathematical Theory of Communication" (Google for it and read it - it's one of the most important papers of the 20th century). I can't say enough about how insightful it is. For those not familiar, this paper introduced the concept of information theory, particularly in the context of communication systems, and its formulations and arguments are still very regularly referenced today. You can't go to a conference on, for example, wireless communication systems (my field) without hearing people talk about the "Shannon bound" at length. In brief, Shannon was able to set bounds on the performance of a forward error correction system (such as those relied upon by CDs, DVDs, cellphones, communication satellites and so on) decades before the technology existed to implement such systems. All of this relies fundamentally on Shannon's formulations for information. It's not often a paper comes along that far ahead of its time.
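For anyone who hasn't run into the "Shannon bound", here is a toy illustration of the capacity formula C = B * log2(1 + SNR) for an additive white Gaussian noise channel; the link numbers are made up for illustration:

```python
import math

def awgn_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of an AWGN channel: C = B * log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Hypothetical link: 1 MHz of bandwidth at 20 dB SNR (i.e. linear SNR = 100).
print(awgn_capacity_bps(1e6, 100.0))  # ~6.66e6 bits/s: no code can reliably exceed this
```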
Dembski prefers to use Kolmogorov-Chaitin complexity measures, rather than Shannon's information measure. However, the two are essentially different ways of measuring the same thing; Shannon's formulation is typically used by communication-heads like me, whereas Kolmogorov-Chaitin is typically used by computer science-heads, because they happen to be more useful analytical tools in each case. Nonetheless, they're essentially isomorphic.
One other important thing: Shannon defines "information entropy" in his paper, and creationists often try to link this to the second law of thermodynamics and use it to imply that information only ever decreases. A typical rebuttal is to suggest that information entropy and thermodynamic entropy are unconnected, citing the apocryphal story that Shannon picked the term entropy because "nobody understands what it means". This assertion is clearly false; not only are the two forms of entropy connected, but Shannon was well aware of it at the time (his paper points this out, and references a contemporaneous book on statistical mechanics).
The thing is, the creationist argument is actually backwards; if you apply the second law of thermodynamics to Shannon's entropy measure (and you can - Shannon's entropy is linearly related to statistical thermodynamic entropy), you discover the following:
In a closed system, the total amount of information can never decrease
This is counterintuitive, but makes complete sense once you acquire a feel for what information really is. Sadly, IDiots like Dembski appear impervious to understanding the most basic tenets of the theory in which they claim to be "experts". What a farce.
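A small Python sketch of both claims, using invented source distributions: a maximally random source maximizes Shannon entropy, and the measure converts linearly into thermodynamic units.

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform (maximally random) 4-symbol source carries the most information...
print(shannon_entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits/symbol
# ...while a heavily biased source carries much less.
print(shannon_entropy_bits([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits/symbol

# The linear link to thermodynamics mentioned above: one bit corresponds to
# k_B * ln(2) of thermodynamic entropy.
BOLTZMANN = 1.380649e-23  # J/K
print(BOLTZMANN * math.log(2))  # ~9.57e-24 J/K per bit
```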
snaxalotl · 24 May 2006
As secondclass notes, -log2(p) is Shannon's self-information. Usually you talk about 8 (say) equiprobable states needing 3 (= log2 8) bits of Shannon information to specify one of them. If you talk about a chance of 1/8 versus a 7/8 chance of being in an alternative state, log2(1/8) = -3, so -log2(1/8) = 3 = the information needed to specify the state with probability 1/8.
Uniform distributions and probabilities given certain pathways have nothing to do with it - generally, IF there's a probability of p, then that outcome represents -log2(p) information, just like one symbol from an equiprobable set of eight represents 3 bits - so whatever is unconventional in Dembski's use of 'information', it is not his use of -log2(p).
Dembski's major errors are (1) he prattles on about the probability of things as if their probability is known or well defined, when neither is true, and (2) he, like other designists, tries to conflate information with intended information or meaning, which is an easy intuition pump because it is a common use of the word information; in Dembski's case this conflation is 'innocently' achieved by using the word specification to mean function, which implies design to most people. If he were less adept at covering his ass for a scientific audience, and pitched more at just an audience of believers in the manner of Hovind, he probably would have used the term 'designed information'. If your audience accepts your sleight that information means intended information, then they have accepted your premise of intelligent design.
snaxalotl · 24 May 2006
Excellent post from Bored Huge Krill. Just to be explicit about something he is saying: a random string contains more information than the sort of string we perceive as non-random (which is more obvious if you are looking at KC rather than Shannon information). Where this is counterintuitive is where we slip into our intuition (which ID is always trying to pump) that information is intended information. For example, most intelligent communications are quite redundant in information terms and "look designed". However, this is just the most familiar everyday situation - if you look at a compressed media file, where intended information is conveyed at near maximum efficiency, it looks random, which is a prerequisite for transmitting the most information in a given bandwidth (see the compression sketch below).
As far as conservation of information goes, this is another case of ID slipping into "intended conveyed meaning" without too many people noticing. If I generate a meaningful fact, like x=7, I can derive a whole pile of additional facts, like 2x=14, which at face value involve more information (more bits to be transmitted), but with proper compression those derived facts really don't require more information (i.e. they could be derived after transmission). Another helpful example might be that if I need two coordinates to specify something, I still only need the equivalent of two coordinates even when I radically change the coordinate system. Anyhoo, what is really not increased is meaning, and the sleight is that it's very easy to draw attention to a lot of situations where this can be reformulated as conserved information, but in general conserved information is a distraction, not a law.
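Both of snaxalotl's points are easy to see with an off-the-shelf compressor standing in, crudely, for Kolmogorov complexity; the strings below are invented for illustration.

```python
import os
import zlib

# A highly regular string (the kind that 'looks designed') compresses away...
redundant = b"ABCD" * 2500            # 10,000 bytes of repeating pattern
# ...while random bytes are already near-maximal information and barely budge.
random_data = os.urandom(10000)

print(len(zlib.compress(redundant)))    # a few dozen bytes
print(len(zlib.compress(random_data)))  # ~10,000 bytes (often slightly more)
```

zlib of course only gives an upper bound on Kolmogorov complexity, but the asymmetry is exactly the one described above: the random string carries more information, not less.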
PvM · 24 May 2006
Another issue just occurred to me: Dembski defines three possible explanations (chance, regularity, and design), but why should we accept this? Why should design not be reducible to chance and regularity, and why should ID activists be allowed to simply assert that it is not?

In fact, I argue that intelligence and chance-plus-regularity are equivalent.

Taner Edis showed how Gödel's theorem, when extended to include randomness, basically shows that intelligence and algorithms which include randomness are 'equivalent'.

By defining three distinct possibilities, Dembski has assumed a conclusion he has yet to support, namely that intelligence is irreducible to chance and regularity.
Keith Douglas · 24 May 2006
PvM: I've been saying that for some time. It amounts to begging the question against a materialist understanding of intelligence, which is sort of what is at issue.
secondclass · 24 May 2006
Obfuscation is ID's lease on life, and Dembski is the king of confusion. By overloading existing terms, offering new terms where none are needed, and adding unnecessary clutter to his arguments, Dembski creates the appearance of scientific controversy. If he were to state his arguments clearly and concisely, he would have no followers.
For example, I submit that Dembski's explanatory filter can be reduced to a single sentence:
Large, unexplained regularities are of supernatural origin.
This sentence is obviously indefensible, and ID proponents would undoubtedly accuse me of mischaracterizing Dembski's position. So I'll parse the sentence to show that this is, in fact, an accurate rendering of Dembski's EF:
Specificity is regularity -- no more, no less. Dembski's examples of specified events most often exhibit regularity in the form of redundant patterns, e.g. a pattern of rocks on the ground that matches a constellation, or a conceptual pattern that matches a physical pattern. Dembski also holds that compressible strings are specified, making the equivalence of regularity and specificity even more obvious.
The "large" condition rules out chance. Small regularities, like flipping 5 heads in a row, can occur by chance, but large regularities, like flipping 500 heads in a row, cannot. A deterministic element is always present in the production of large regularities.
The "unexplained" condition says that this deterministic element is not explained by known natural laws. According to Dembski, this condition rules out necessity, but of course it doesn't take into account unknown or little-understood natural laws. Dembski tells us that we should disregard such possibilities.
So there you have the explanatory filter laid bare -- easily understood, and clearly invalid.
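To put numbers on the "large" condition, a toy coin-flip calculation (mine, not Dembski's):

```python
import math

def p_heads_run(n: int) -> float:
    """Probability of n consecutive heads from a fair coin."""
    return 0.5 ** n

print(p_heads_run(5))                # 0.03125: happens by chance all the time
print(p_heads_run(500))              # ~3e-151: chance is effectively ruled out
print(-math.log2(p_heads_run(500)))  # 500.0 bits, in Dembski's currency
```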
Steviepinhead · 24 May 2006
I think PvM's series of Emperor's Clothes exposés on ID would be even funnier if they were numbered:
Vacuity of ID MCLXIV.
That kind of thing...
stevaroni · 25 May 2006
There is, of course, the irony that Roman numerals are a very inefficient way of conveying this particular specified information.
Henry J · 26 May 2006
Re "roman numerals are a very inefficient way"
Would 1164 be better? :)
Henry
Aureola Nominee, FCD · 26 May 2006
Would 010010001100 be any better?
Aureola Nominee, FCD · 28 May 2006
MCLXIV = 1164
MCXLVI = 1146
What's the word for dyslexia with numbers? ;)
Henry J · 28 May 2006
Ah well, like that old saying - there's 10 kinds of people in the world. Those who understand binary, and those who don't.
Henry