Does CSI enable us to detect Design? A reply to William Dembski

Posted 7 April 2013 by Joe Felsenstein

In response to our post and comments on "Stephen Meyer needs your help", William Dembski has replied at Evolution News and Views. He is upset that we were "attempting to disparage" Meyer's book without having seen it. (More on that below.) In that post I made my suggestion of content for Meyer's book: I suggested that Meyer acknowledge in his book that Dembski's Design Inference using Complex Specified Information (CSI) had failed, because the theorem that Dembski needed did not exist. Dembski disagrees:
Felsenstein's request for clarification could just as well have been addressed to me, so let me respond, making clear why criticisms by Felsenstein, Shallit, et al. don't hold water.
If Dembski has refuted these criticisms, that is worth careful attention; you would need to understand why Shallit and I were wrong. Were we? Even if we were right, has Dembski supplanted his earlier arguments with newer ones that do a better job of arguing against the effectiveness of natural selection? As the argument needs more than a few lines, I will place most of it below the fold. There I will argue that he has not refuted them. Let's see ....

CSI

Dembski's argument depends on Complex Specified Information. In my 2007 article I accepted the validity of CSI (though many other critics of Dembski have argued that it is meaningless or unusable). In effect, it uses a scale -- in my case I made this the ultimate scale, fitness. In Dembski's original formulation, in his books The Design Inference: Eliminating Chance through Small Probabilities (1998) and No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002), CSI is present if the population is far enough out on the fitness scale that there would be fewer than 1 individual in 10^150 there in the original population. Seeing a population that fit would be astronomically unlikely if the process of evolution were random mutation -- say, monkeys typing out genome sequences on 4-letter typewriters. And yet it is obvious that real organisms have CSI: you could type trillions of random genomes trillions of times, and never make a fish that could swim or a bird that could fly.

But what about natural selection? Is it unable to get the genome to contain Complex Specified Information? For the observation of CSI to imply that a process like Design is needed, we have to be able to rule out that natural selection could get the population to have CSI.

The Conservation Law

That is what Dembski's Law of Conservation of Complex Specified Information (LCCSI) was supposed to do. It assumes that we are in a space of genomes, and models evolution as a 1-1 transformation in that space.
Dembski then argues that the genome cannot come to have CSI unless it starts out having it -- it cannot get into the extreme top tail of the distribution of possible fitnesses unless it started there. If any theorem of this sort were valid, this would be a Big Problem for evolutionary biology. But is Dembski's theorem valid? There are two problems.

Dembski sketches a proof in No Free Lunch: for the case where the evolutionary process is deterministic, he argues that after the 1-1 evolutionary process has operated, the strength of the specification is the same afterwards as it was before. He does this by defining a new specification and showing that it is just as strong as the one we started with. [Actually, I erred here (and in my 2007 paper). Dembski does not restrict deterministic evolutionary causes to be 1-1 transforms. He allows many-to-one transforms as well. But the remainder of my critique still works for those; see below, at the end of this post, for the details of the correction.] The method is simple: in place of "in the top 10^-150 of the original fitness distribution" he substitutes the specification "when transformed backwards through the 1-1 transformation, in the top 10^-150 of the original fitness distribution." Thus after the evolutionary process operates, we just go backwards through the 1-1 mapping, the population finds itself back where it started, and it is thus in a region that is just as strongly specified.

That argument would work fine but for two problems: the specification has been changed in mid-argument, and it is very easy to come up with models of natural selection acting in populations that move the population to regions of higher fitness. If this goes on long enough at enough sites, the population comes to be in the top 10^-150 of the original distribution of fitnesses. So no theorem like the LCCSI seems possible, once we require that the specification stay the same throughout the process.
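A toy model (my own, purely illustrative, and not part of Dembski's framework) makes the point concrete: take a genome of 500 four-letter sites, let fitness be the number of sites in an arbitrarily designated favored state, and let selection reject any mutation that lowers fitness. Selection routinely reaches a genome whose probability under pure random typing is 4^-500 ≈ 10^-301, far inside the 10^-150 tail:

```python
import math
import random

random.seed(42)

L = 500                                            # number of four-letter sites
target = [random.randrange(4) for _ in range(L)]   # the favored state at each site
genome = [random.randrange(4) for _ in range(L)]   # a random starting genome

def fitness(g):
    # fitness = number of sites in the favored state
    return sum(1 for a, b in zip(g, target) if a == b)

steps = 0
current = fitness(genome)
while current < L:
    steps += 1
    i = random.randrange(L)            # mutate one random site to a random letter
    old = genome[i]
    genome[i] = random.randrange(4)
    new = fitness(genome)
    if new < current:
        genome[i] = old                # selection rejects the deleterious change
    else:
        current = new

# Probability of typing the final genome at random: 4^-L, i.e. about 10^-301
log10_random = -L * math.log10(4)
print(steps, round(log10_random))
```

Nothing here is exotic: a few thousand accepted-or-rejected mutations carry the population into a region that random typing would essentially never visit.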
Dembski has nowhere argued that Elsberry and Shallit were wrong about the technical mistake, and he has nowhere argued that I was wrong about the problem of changing the specification. So does that mean that he now concedes that he was mistaken? He doesn't seem to do that either. Instead he points to new and different arguments of his.

Dembski's revised LCCSI argument

In 2005, in his paper Specification: The Pattern That Signifies Intelligence, Dembski put forward a new version of his measure of CSI. Admittedly, I did not deal with this revision in my 2007 paper, so let me comment on it now. His formula includes the probability P(T|H) that a target region T is reached given a "chance hypothesis" H. In section 8 of the 2005 paper, Dembski makes it clear that when Design is to be detected, the "chance hypothesis" should be one that includes all natural biological processes, including not only mutation but also natural selection. So detection of Design from some adaptation using this new formula works like this:
  1. Work out the probability that this good an adaptation could arise by natural processes including mutation and natural selection.
  2. If this is small enough (in the new case, less than 10^-120), then we declare Design to have been detected.
  3. From that we can conclude that the adaptation could not have arisen by mutation and natural selection.
I think that the reader will see the problem: the new form of specified complexity cannot simply be determined by how improbable the adaptation would be in genomes produced by monkeys with four-letter typewriters. Now a calculation must also be made of the probability that such a mutational process together with natural selection could produce the adaptation. Simply showing that one is in the top 10^-120 of all the fitnesses in the original pool of genotypes is not enough to declare CSI. Given that, the declaration that Specified Complexity is observed in nature is not obvious (it was obvious under the previous definition of CSI). To compute Dembski's quantity we need to determine whether natural processes could produce the observed adaptations -- which is the very thing we were trying to decide.

The Search For A Search arguments

Dembski points out in his reply that his CSI argument
has since been reconceptualized and significantly expanded in its scope and power through my subsequent joint work with Baylor engineer Robert Marks.
He states that Shallit and I think
that having seen my earlier work on conservation of information, they need only deal with it (meanwhile misrepresenting it) and can ignore anything I subsequently say or write on the topic.
He declares that
Felsenstein betrays a thoroughgoing ignorance of this literature.
Actually, my ignorance is not quite as thoroughgoing as that. I have commented on the Dembski/Marks papers, and done so at Panda's Thumb, in two postings in August 2009 (here and here). I commend them to Dembski. Let me make the point again that I made there. I am skeptical of the scientific usefulness of the measures that Dembski and Marks introduce in these SFS papers, but for the CSI/Design argument that question is mostly irrelevant -- the issue is whether these papers provide us with a method for detection of Design, one that rules out that the adaptations could be produced by natural selection. Very explicitly, they do not. For the whole point of these papers is to measure whether the fitness surface (the association of fitnesses with genotypes) is sufficiently smooth that natural selection is able to move uphill and effectively produce the adaptation. If Dembski and Marks see such a fitness surface, they argue that it would be extremely unlikely in a universe where fitnesses are randomly associated with genotypes. Therefore, they argue, a Designer must have chosen that fitness surface -- chosen it to be one in which natural selection works.

I disagree. I think that ordinary physics, with its weakness of long-range interactions, predicts smoother-than-random fitness surfaces. But whether I am right about that or not, Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve. In their scheme, ordinary mutation and natural selection can bring about the adaptation. Far from reformulating the Design Inference, they have pushed it back to the formation of the universe.

Conclusion

Attempting to disparage? Was the post about Meyer's forthcoming book unfair criticism of it, without any of us having read it? Neither I nor any of the commenters claim to have read the book. Perhaps Meyer will prove to have dealt with all the points we raised.
Perhaps he will have brilliantly made, or brilliantly refuted, the arguments we made. But if he ignores important points we raised, then he cannot argue that no one pointed them out to him. And if
we can expect Meyer's 2013 book Darwin's Doubt to show full cognizance of the conservation of information as it exists currently
(as defined by Dembski) then Meyer's book will not contain any valid argument that Complex Specified Information can be used to detect the intervention of a Designer in the evolutionary process.

_______________________________________________________________

Correction (10 April 2013): commenter diogeneslamp0 asked where in NFL Dembski said that the evolutionary change is modeled by a 1-1 transform. On closer examination: nowhere; I was wrong about that. He allows many-to-one transforms as well. However, it is still true that his argument is, as I said in the 2007 paper and above, that the Before and After states must satisfy equivalently strong specifications, so that both have CSI or both don't have CSI. And it is still true that these specifications are not required by him to be the same: the one before is still constructed from the one afterwards using the transform. And it is still true that if you require the specifications evaluated before and after to be the same, then there are lots of examples where a conservation law would not work.

359 Comments

Joe Felsenstein · 7 April 2013

Folks, I am going to patrol (or "patroll") this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.

liddle.elizabeth · 7 April 2013

Bravo, Joe!

Very succinct, very clear. You may be right that in his Specification paper he intended people to compute a probability for evolutionary processes. It didn't occur to me that he meant that, as it would have undermined his entire argument. I still think that at that stage he was under the illusion that the NFL theorems meant that he didn't need to worry about evolutionary processes - they wouldn't increase the probability above blind search.

Either way, it makes no sense.

And, in any case, as you say, he has now moved design back to the origin of the universe. For which he doesn't have a pdf, so he can't say whether the universe we observe was inevitable or infinitesimally probable. And as he equates Information with probability under the null of random search (with a fancy -log transform) then he has no way of computing how much Information a Designer would have had to put there.

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 7 April 2013

The real motive behind Dembski's CSI criterion is and was to confuse biology with technology, thereby to claim design for the former without even addressing all of the decidedly undesign-like structure and function of life.

Why would we even have biology as a science if Dembski's assumptions and presumptions were correct? Life could just be studied as engineering and styling, while in fact IDists almost don't bother to study life at all. Dembski's "conclusions" are only denial of thoroughgoing aspects of life, like its slavish derivation from ancestors, at least in most plants and animals.

Glen Davidson

DS · 7 April 2013

It is obvious that Dembski is just trying desperately to come up with something that he thinks evolution cannot accomplish. But he has no idea how mutation and natural selection work; he has no idea what they are capable of. All he can do is misrepresent the science and beg the question until he has everyone confused enough to believe he might be right. It takes a special kind of dedication to pay enough attention to charlatans and posers to be able to call them on their shenanigans. Thanks for your diligence Joe.

And of course, even if someone could somehow prove that there was something that our current conception of mutation and natural selection could not accomplish that is actually observed, it would not in the slightest provide any kind of evidence for any kind of god. That would just be wishful thinking, or in this case, non thinking. It's a solution in search of a problem.

Mike Elzinga · 7 April 2013

Joe Felsenstein is correct about long range forces contributing to the smoothing of a fitness landscape. There are also other reasons for the smoothing that have to do with the exploration process being seen as a “sampling” of that terrain. I mentioned this in a thread by Elizabeth Liddle over at The Skeptical Zone.

The field of signal and image processing has a nice mathematical description that explains the process of smoothing. It comes under the heading of “dithering” or, equivalently, under the concept of “convolution.”

Look at convolution first. If we are sampling a signal feature in either time or space, the width of the sampling window will place a limit on how much detail we will see in the feature we are sampling. If the window is narrow – either in time or in spatial extent – we will sample sharp features with sharp, distinct edges. If the window is wide, the features will have rounded and smoothed edges. Everything inside a sampling window is averaged; then the window is moved and another sample takes place, and so on. Here is a simple demonstration.

Take a piece of paper and make several sampling slots 1, 2, 3, 4, and 5 digits wide. Place a slot over the following set of numbers, average the numbers that appear within the slot window, and plot the results below the set of numbers. The set is: {0,0,0,0,0,0,10,10,0,0,0,0,0,0}. With each slot width, you are plotting a running average that smoothes the “spike” represented by 10’s in the set. The wider the slot, the smoother the result.
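The paper-slot demonstration above can be reproduced in a few lines (a sketch; `running_average` is a name I chose for the helper):

```python
# Mike's slot demonstration: a running average over slots of width 1..5
# smooths the "spike" of 10's in the middle of the sequence.

data = [0, 0, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0]

def running_average(xs, width):
    # Average the numbers visible through a slot of the given width,
    # sliding the slot one position at a time.
    return [sum(xs[i:i + width]) / width for i in range(len(xs) - width + 1)]

for w in range(1, 6):
    print(w, running_average(data, w))
```

Running it shows exactly what the text predicts: at width 1 the spike is sharp; by width 5 the peak has been flattened from 10 down to 4 and spread over neighboring positions.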

From the “dithering” perspective, we are looking at the features of the signal in the Fourier-transform domain, and the idea involved here is called the “shift theorem.” Let’s use a spatially distributed signal. If the signal S(x) has a Fourier transform F{S(x)}, then the spatially shifted signal S(x - a) has a Fourier transform e^{ika} F{S(x)}, where k is a spatial frequency (e.g., in lines per mm). In other words, the Fourier transform of the spatially shifted signal is the Fourier transform of the unshifted signal multiplied by a phase factor; and here is the trick: jiggling the sampling point back and forth randomly over the signal will produce larger phase shifts for the higher spatial frequencies for a given shift a. This “washes out” (they tend to phase-cancel) the higher spatial frequencies, leaving only the lower frequencies. When we do the inverse Fourier transform of the dithered result, we get an image with smoothed edges, and all the fine details (higher spatial frequencies) are gone.

So both perspectives give the same result. The “dithering” that appears in the sampling of the fitness landscape of a phenotype is provided by the random variations going on at the genetic level. If the landscape is also changing, the “dithering” folds in those changes as well. Atoms and molecules interact. The more complex the system, the more that slight variations in the interactions smooth things out.
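A numerical sketch of the dithering argument (my own illustrative parameters, using NumPy): averaging a spiky signal over many small random shifts leaves the zero-frequency (average) component untouched while the high spatial frequencies largely phase-cancel, just as the shift theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
signal = np.zeros(n)
signal[30:33] = 10.0                      # a sharp three-sample spike

# "Jiggle" the sampling point: average the signal over 2000 small random shifts.
shifts = rng.integers(-4, 5, size=2000)
dithered = np.mean([np.roll(signal, int(s)) for s in shifts], axis=0)

spec_before = np.abs(np.fft.fft(signal))
spec_after = np.abs(np.fft.fft(dithered))

print(spec_before[0], spec_after[0])             # DC component: unchanged
print(spec_before[n // 2], spec_after[n // 2])   # highest frequency: attenuated
```

The averaging is a convolution with the shift distribution, so each frequency component is multiplied by a phase-averaged factor whose magnitude shrinks with k: the two perspectives in the comment really are the same calculation.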

Ray Martinez · 7 April 2013

Joe Felsenstein: "Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve."

---The quote pasted above (and others in the OP) assumes and implies Dembski and Marks accept the existence of natural (non-supernatural/Intelligent) causation operating in nature.

---Like Darwin and all original Darwinists, Joe Felsenstein and his colleagues completely reject supernatural or Intelligent causation operating in nature. The preceding fact means Darwinism accepts causation mutual exclusivity.

---Joe Felsenstein's acceptance of causation mutual exclusivity should allow him to dispense with the claims of Dembski and Marks based solely on their acceptance of the existence of natural causation.

Joe Felsenstein · 7 April 2013

Ray Martinez said: Joe Felsenstein: "Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve." ---The quote pasted above (and others in the OP) assumes and implies Dembski and Marks accept the existence of natural (non-supernatural/Intelligent) causation operating in nature. ---Like Darwin and all original Darwinists, Joe Felsenstein and his colleagues completely reject supernatural or Intelligent causation operating in nature. The preceding fact means Darwinism accepts causation mutual exclusivity. ---Joe Felsenstein's acceptance of causation mutual exclusivity should allow him to dispense with the claims of Dembski and Marks based solely on their acceptance of the existence of natural causation.
That is absurd. Mutual exclusivity? Ridiculous. Whatever I think of Dembski and Marks's arguments, or YEC arguments for that matter, I know that whatever supernatural interventions in the real world they accept, they also know that ordinary gravity continues to operate. All those folks, and even Ray himself, accept that there is natural causation. Let's get back to Dembski's actual arguments, not this absurd parody.

apokryltaros · 7 April 2013

A better question to ask Dembski would be "What have you done with your CSI calculations?"

Joe Felsenstein · 7 April 2013

I'm going to be a contrarian on that one. It matters little whether CSI can be done in practice easily enough to be useable. We can define and calculate SI in simple population-genetic models of evolution. If Dembski had a conservation law for CSI for those models, that would be a Big Problem for evolutionary theory. So my question is whether he has that (he doesn't).

apokryltaros · 7 April 2013

Well, that Dembski never actually had a CSI conservation law to begin with perfectly explains why he's never ever accomplished anything with his alleged CSI calculations.

apokryltaros · 7 April 2013

Glen Davidson said: The real motive behind Dembski's CSI criterion is and was to confuse biology with technology, thereby to claim design for the former without even addressing all of the decidedly undesign-like structure and function of life.
Similar to Michael Behe's dithering on about "Irreducible Complexity"?

Joe Felsenstein · 7 April 2013

Could we take a break from repetitively agreeing with each other how evil the other side is? I sometimes think the Internet's purpose is to enable people to "vent" after a hard week at work.

Now about the science ...

For example, have I misinterpreted Dembski? Missed an important argument or a major reply he has made? Missed a major critique of him (I did cite a bunch of them in my 2007 article)?

SensuousCurmudgeon · 7 April 2013

I've always thought these declarations of improbability were worthless, because the unspoken assumption is that the whole thing just appeared, fully assembled. That certainly is improbable, but it doesn't happen like that, so all the philosophy in the world about the wonderment of it all is based on nonsense.

We all know that everything in biology is the result of a very long chain of events, and each tiny step along the way is perfectly natural. If the result survives it reproduces. The next step in the chain doesn't have to start from the beginning, because it already has the previously accumulated steps. So at any place an observer steps in to marvel at the result, he's looking at end of a chain of natural events, with no intervening miracles, so the totality of the entire chain is therefore natural -- albeit unpredictable in the beginning.
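The chain-of-small-retained-steps point is easy to put in numbers. Here is a toy sketch in the style of Dawkins' Weasel program (my own illustrative parameters; nothing here is from Dembski): each generation keeps the best of 100 mutated copies, so accumulated steps reach a target whose single-draw probability is about 10^-40.

```python
import math
import random

random.seed(1)

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "methinks it is like a weasel"

def score(s):
    # number of positions already matching the target
    return sum(1 for a, b in zip(s, TARGET) if a == b)

def mutate(s, rate=0.05):
    # copy the string with a 5% per-letter chance of a random substitution
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET and generations < 5000:
    generations += 1
    # keep the fittest of 100 mutated copies (never worse than the parent)
    current = max([mutate(current) for _ in range(100)] + [current], key=score)

# Probability of drawing the target whole in one shot: 27^-28, about 10^-40
single_shot_log10 = -len(TARGET) * math.log10(len(ALPHABET))
print(generations, round(single_shot_log10, 1))
```

A few hundred generations of retained partial successes suffice, while a single fully-assembled draw is hopeless; that is exactly the difference the comment is pointing at.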

SensuousCurmudgeon · 7 April 2013

Joe Felsenstein said: Could we take a break from repetitively agreeing with each other how evil the other side is?
Sorry, Joe. I was composing my comment and I didn't see yours.

dphorning · 7 April 2013

So, I believe Dembski's "search for a search" papers all focus on evolutionary algorithms, rather than actual evolution in the field or any of the extensive examples of directed evolution. It's easier to argue that the "default" fitness landscape is a random one when the landscapes in question are all arbitrary man-made constructs. But has he tried to argue that for a real fitness landscape of a protein, or a genome? Even Axe's islands of function (or I think he's also called them gemstones in a desert) view of protein sequence space (which, I doubt it's necessary to point out, doesn't exactly rest on strong evidence) is a huge deviation from the random landscape, as there are still pretty strong local correlations between sequence and function.

I'm also curious as to what would count as "active information" addition in a real-world example. Imagine putting a population of bacteria in a simple gradient of antibiotic, from low enough to have no effect to high enough to be 100% lethal, or a similar temperature gradient. Presumably you're manipulating the fitness landscape pretty extensively by doing that, favoring some genomes, disfavoring others, etc. But the act of making a gradient seems pretty low-information relative to the global effects on the landscape; it doesn't really match up with something like the Weasel program (Dembski's favorite), where the experimenter is fine-tuning each incremental step toward a pre-determined sequence. Is any evolution that happens in these experimental set-ups attributable to "active information" the scientist injected into the system?

Matt Young · 7 April 2013

Missed an important argument or a major reply he has made?

I haven't kept up on Dembski lately, but I once noted that his 500-bit "limit" could easily be circumvented by duplication and mutation -- 1 unit that has <500 bits splits into 2 units with >500 bits, and one of them mutates. Now by his standard we have a single unit with >500 bits. Let's call it Combobulated Complexity. The Law of Conservation of Combobulated Complexity says that there is no Law of Conservation of Combobulated Complexity.
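Matt's bookkeeping can be put in toy numbers (mine, purely illustrative; bits counted as -log2 of the random-typing probability of the sequence, which is how the 500-bit criterion counts them):

```python
import math

sites = 150                              # four-letter sites in the original unit
bits_per_site = math.log2(4)             # 2 bits per site under random typing
original = sites * bits_per_site         # 300 bits: below the 500-bit cutoff

# Duplicate the unit: naive bit accounting doubles the count, and a
# subsequent mutation makes the copy a distinct sequence.
duplicated = 2 * original                # 600 bits: above the cutoff

print(original, duplicated)
```

So a perfectly ordinary duplication-plus-mutation event walks right across the "limit" without any designer supplying the extra bits.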

Mike Elzinga · 7 April 2013

Matt Young said: I haven't kept up on Dembski lately, but I once noted that his 500-bit "limit" could easily be circumvented by duplication and mutation -- 1 unit that has <500 bits splits into 2 units with >500 bits, and one of them mutates. Now by his standard we have a single unit with >500 bits. Let's call it Combobulated Complexity. The Law of Conservation of Combobulated Complexity says that there is no Law of Conservation of Combobulated Complexity.
I posted this somewhere over on The Skeptical Zone - I think; I don’t remember where I posted it. I borrowed the exact form of calculation from UD, where it is purported to be exactly the way Dembski does it.

Suppose for example we find a rock weighing approximately 60 grams that is a mixture of polycrystals of mostly SiO2 and some polycrystals of other compounds as well (about 3 atoms per molecule on average). This allows an estimate of approximately 10^27 molecules in the rock, with approximately 10^18 molecules per crystal on average. Let N = 10^27, the number of molecules. Let P = 10^9, the number of crystals. There are P! permutations of all the crystals in the sample.

Each crystal has an orientation in 3-dimensional space, so we choose three perpendicular axes about which rotations can be made. There are 360 degrees, 60 minutes, 60 seconds per complete rotation about each axis; therefore each crystal can have 1296000^3 orientations in 3-dimensional space. Since there are P crystals, there are 1296000^(3P) ways to orient all the crystals. The number of permutations of the individual atoms is conservatively (3N)!. There is also the number of possible orientations of the original rock when it was found in the heath, and this is again 1296000^3. Therefore the number of possible arrangements and orientations of crystals and atoms and rock is

Ω = (3N)! × P! × 1296000^(3(P + 1)).

The amount of information in this particular rock is thus log2 Ω, a number that far, far exceeds 500. Therefore we can conclude without hesitation that this particular rock was designed; and since this is an arbitrary rock picked up in an arbitrary location, we can say that any rock is designed after we examine it and carefully specify its structure.

But we haven’t even dealt with function yet. Suppose this rock was found with bird droppings on it. The rock therefore had the function of preventing the droppings from directly hitting the ground. It could also serve as shelter for insects and worms. It also can divert water, or divert the path of a growing plant root. In fact there is literally no limit to the functions that a rock can perform. So we can take that extremely large number of functions and raise it to a power equal to the number of rocks in the universe and conclude that there is specified functional complexity in rocks as well as specified complexity in each and every rock.

But rocks can also be organized into unified larger rocks and planets and moons; so there is specified organizational complexity in rocks as well. They don’t even have to be melded together; they can be disjoint clusters of rocks that prevent erosion or divert a river, or provide shelter. The organizational complexity of rocks is enormous. Therefore ALL rocks are intelligently designed.
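For what it's worth, Mike's Ω really is astronomically large. Plugging his numbers in (with Stirling's approximation standing in for the impossible-to-evaluate factorials):

```python
import math

N = 10**27                               # molecules in the rock
P = 10**9                                # crystals
ORIENT = 1296000                         # 360 * 60 * 60 positions per axis

def log2_factorial(n):
    # Stirling's approximation ln n! ~ n ln n - n, converted to base 2;
    # ample accuracy at these sizes.
    return (n * math.log(n) - n) / math.log(2)

# log2 of Omega = (3N)! * P! * ORIENT^(3(P + 1))
log2_omega = (log2_factorial(3 * N)
              + log2_factorial(P)
              + 3 * (P + 1) * math.log2(ORIENT))

print(f"{log2_omega:.3g} bits")          # on the order of 10^29 bits
```

The (3N)! term dominates everything else: the rock "contains" roughly 10^29 bits of specified complexity, which rather overshoots the 500-bit threshold.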

Chris Lawson · 7 April 2013

Joe,

I appreciate your desire to make this about the science, but the history of the ID movement shows that there is little science behind their work (and never has been), and that the key ID people are perpetually making things up, misapplying theorems they don't understand ("written in jello" comes to mind), answering criticisms of their work with distractions and subject-changings that are really just more tedious versions of the Gish Gallop, and always promising to answer their critics with their next book (while refusing to address any of the errors in their published books).

I'd love to share a scientific discussion on this topic, but I just don't think it's possible beyond pointing out fallacies on the DI side.

Chris Lawson · 7 April 2013

Mike:

Love the rock calculation.

Joe Felsenstein · 7 April 2013

Chris Lawson said: Joe, I appreciate your desire to make this about the science, but the history of the ID movement shows that there is little science behind their work (and never has been), and that the key ID people are perpetually making things up, misapplying theorems they don't understand ("written in jello" comes to mind), answering criticisms of their work with distractions and subject-changings that are really just more tedious versions of the Gish Gallop, and always promising to answer their critics with their next book (while refusing to address any of the errors in their published books). I'd love to share a scientific discussion on this topic, but I just don't think it's possible beyond pointing out fallacies on the DI side.
Well, the object of this discussion is to consider what answers (if any) there are to their scientific arguments. There will be people confused by those, thinking that maybe they have some decisive refutation of natural selection. There will also be people who, while supporting evolutionary biology, are not sure how to explain the scientific points to others. Then again, maybe they have some dramatic refutation of the last 150 years of evolutionary biology. I haven't seen it yet, if they do. Consoling ourselves about how iniquitous their motives are is less useful for the present discussion.

liddle.elizabeth · 8 April 2013

As far as I can see, what Dembski has done in his recent ENV postings is to finally draw lay attention to what was essentially a huge concession in his papers with Marks (and even, conceivably, in Specification, although I didn't read it that way at the time): that his NFL argument doesn't work as an argument against evolution.

So his blustering about how naughty Joe was not to have read his later work boils down to a clear concession that "Joe might have been right about my earlier argument, but now he needs to deal with my new one." Which of course Joe has now done.

And which, as far as I can see, boils down to the ontological argument for God. You certainly can't base a probability argument for God on a probability distribution you don't actually have, limited as we are to one exemplar of universes.

TomS · 8 April 2013

Whenever one of these probability arguments arises, I wonder whether anyone has done a similar estimate for their alternative.

What is the probability that designer(s) which are capable of doing more than natural causes, and have no known limitations on what they might do, would do such-and-such?

My estimate is that that, because the number of possible design outcomes is greater than the number of natural outcomes, the probability of design is less than the probability of natural causes.

Rolf · 8 April 2013

Isn’t Dembski a little late on the stage with his brainchild, his redefinition of CSI? It may have had some appeal for creationists in 1998 when he published The Design Inference, but science has come a long way since then, has it not? It seems to me that there is much more to evolution than RM and NS to be taken into account.

But that is another topic.

Joe Felsenstein · 8 April 2013

liddle.elizabeth said: As far as I can see, what Dembski has done in his recent ENV postings is to finally draw lay attention to what was essentially a huge concession in his papers with Marks (and even, conceivably, in Specification, although I didn't read it that way at the time): that his NFL argument doesn't work as an argument against evolution. So his blustering about how naughty Joe was not to have read his later work boils down to a clear concession that "Joe might have been right about my earlier argument, but now he needs to deal with my new one." Which of course Joe has now done. And which, as far as I can see, boils down to the ontological argument for God. You certainly can't base a probability argument for God on a probability distribution you don't actually have, limited as we are to one exemplar of universes.
I didn't just now finally deal with the Search For a Search argument , I dealt with it some in my 2007 article, where I ended up saying:
We live in a universe whose physics might be special, or might be designed — I wouldn't know about that. But Dembski's argument is not about other possible universes — it is about whether natural selection can work to create the adaptations that we see in the forms of life we observe here, in our own universe, on our own planet. And if our universe seems predisposed to smooth fitness functions, that is a big problem for Dembski's argument.
(which, I should clarify, meant for Dembski's argument about the ineffectiveness of natural selection.) I also wrote about it in two 2009 PT postings (here and here) that I linked to in my post above. By the way, Mark Perakh also had an article attacking the SFS here in 2007. Anyway, the issue of ontological arguments for God seems to be beside the point. Dembski was asserting that natural selection doesn't work. Now Dembski and Marks say, well, even if it does work there has to be a God in the picture at the beginning, though not necessarily later. So what has happened to Dembski's argument for the ineffectiveness of natural selection? It is not necessary to get involved in the ontology-and-God debate to see that Dembski's LCCSI argument isn't around anymore, at least not if the SFS has replaced his earlier argument. And if the LCCSI argument is still around, then it needs some defending.

Joe Felsenstein · 8 April 2013

TomS said: Whenever one of these probability arguments arises, I wonder whether anyone has done a similar estimate for their alternative. What is the probability that designer(s) which are capable of doing more than natural causes, and have no known limitations on what they might do, would do such-and-such? My estimate is that, because the number of possible design outcomes is greater than the number of natural outcomes, the probability of design is less than the probability of natural causes.
I think they would answer, in effect, that we can't predict what the Designer intended, that the Designer's powers are infinite and his [it is a "he", you know] knowledge is infinite, and therefore, whatever happened, that was what should have been predicted. Alas, the prediction is after the fact -- the target is drawn on the side of the barn after the arrow hits it, and it is drawn right around the arrow.

Joe Felsenstein · 8 April 2013

Rolf said: Isn’t Dembski a little late on the stage with his brainchild, his redefinition of CSI? It may have had some appeal for creationists in 1998 when he published The Design Inference, but science has come a long way since then, has it not? It seems to me that there is much more to evolution than RM and NS to be taken into account. But that is another topic.
I'd acquit Dembski on this charge. If you want to make the argument that the newer phenomena that have been discovered in genetics and genomics since 1998 rescue the assertion that natural selection and random mutation bring about the adaptations that we see, I think you have conceded a huge point. Are you really agreeing that the processes we knew about before 1998 can't do the job, that Dembski's CSI critique worked and refuted the effectiveness of natural selection and random mutation? Are you conceding that invoking newer exotica is necessary to save the Modern Synthesis from Dembski's critique? Sure, a full explanation of everything must invoke all phenomena that we know about (and some discovered after 2013, probably). But the fact that RM+NS leads to improved adaptation, in ordinary models of evolution, is not refuted by Dembski's conservation law argument, nor by his No Free Lunch argument, nor by his and Marks's Search For a Search argument. And his assertions were that these arguments showed that RM+NS would not lead to adaptation. They don't. I don't think we have to "give away the farm". We have to deal with the issue of whether the LCCSI refutes the effectiveness of RM+NS, and we don't need to fall back on post-1998 phenomena to do that.

Paul Burnett · 8 April 2013

DS said: ...even if someone could somehow prove that there was something that our current conception of mutation and natural selection could not accomplish that is actually observed, it would not in the slightest provide any kind of evidence for any kind of god.
...and even if it did, it would not in the slightest provide any kind of evidence for that god being the creator god of Genesis. Meyer's previous book tried to prove that the "signature in the cell" was the "signature" of the creator god of Genesis - and failed miserably.

Joe Felsenstein · 8 April 2013

Mike Elzinga said: Joe Felsenstein is correct about long range forces contributing to the smoothing of a fitness landscape. ... [most of Mike's explanation snipped] The “dithering” that appears in the sampling of the fitness landscape of a phenotype is provided by the random variations going on at the genetic level. If the landscape is also changing, the “dithering” folds in those changes as well. Atoms and molecules interact. The more complex the system, the more that slight variations in the interactions smooth things out.
Well, yes and no. All this will happen, but I think there is a bigger effect from even simpler physics. In a fitness surface that has random associations of possible fitnesses with genotypes, the surface is infinitely rough (a "white noise" fitness surface). That means that any nucleotide substitution moves you to a nearby DNA sequence that has a fitness drawn, in effect, randomly from all possibilities. In other words, one mutation has the same effect as mutating every site in the genome simultaneously. Now, mutation doesn't work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional. So the fitness surface is much smoother than the "white noise" surface. Why? I would point to the lack of totally tight interaction between, say, my enzymes that produce earwax and the photosensitive pigments in my eye. Why don't they interact strongly? For two reasons:
  1. Physics -- they are not functioning at the same place and time, in the same cells. Weakness of long-range interactions.
  2. Evolution -- if they start to get more interactive, evolution tends to move away from that. We stay in regions of the genome space that do not result in totally locked-in tightly-interacting organisms. This sounds teleological, but computer simulations and theory done by people such as Lee Altenberg have verified that individual selection can in fact do this.
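The contrast between a "white noise" surface and a smoother one can be made concrete with a toy model. The landscape definitions below are illustrative assumptions, not anyone's biological model: on the white-noise surface every genotype draws an independent random fitness, while on an additive surface each site contributes a small independent effect, so one mutation moves fitness only slightly.

```python
import random

random.seed(1)
L = 100  # genome length; alleles are 0/1 at each site (toy assumption)

def mutate_one(g):
    """Flip one randomly chosen site of genotype g (a tuple of 0/1)."""
    i = random.randrange(L)
    return g[:i] + (1 - g[i],) + g[i + 1:]

# "White noise" landscape: every genotype gets an independent random fitness.
wn_cache = {}
def white_noise_fitness(g):
    if g not in wn_cache:
        wn_cache[g] = random.random()
    return wn_cache[g]

# Smooth (additive) landscape: each site contributes at most 1/L to fitness.
site_effect = [random.random() / L for _ in range(L)]
def additive_fitness(g):
    return sum(e for allele, e in zip(g, site_effect) if allele)

g = tuple(random.randrange(2) for _ in range(L))
m = mutate_one(g)

# On the additive surface one mutation changes fitness by at most 1/L;
# on the white-noise surface it is as if the whole genome were redrawn.
print(abs(additive_fitness(m) - additive_fitness(g)) <= 1 / L)  # True
```

On the white-noise surface, `white_noise_fitness(m)` is a fresh uniform draw unrelated to `white_noise_fitness(g)`, which is exactly the "one mutation has the same effect as mutating every site" behavior described above.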

Paul Burnett · 8 April 2013

Joe Felsenstein said: Now, mutation doesn't work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional.
Non-functional, or differently functional - or do you mean neutral? (I would have said "more or less functional, but mostly neutral.") You haven't bought into the creationists' "All mutations are harmful," have you? :)

Joe Felsenstein · 8 April 2013

Paul Burnett said:
Joe Felsenstein said: Now, mutation doesn't work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional.
Non-functional, or differently functional - or do you mean neutral? (I would have said "more or less functional, but mostly neutral.") You haven't bought into the creationists' "All mutations are harmful," have you? :)
I stand corrected, mostly. However, even if you just consider coding sequences, we all carry some mutant alleles (most of which are not brand-new mutations). Most of those have some tiny effect, so "a little nonfunctional" might be right for them.

glipsnort · 8 April 2013

Joe Felsenstein said: By the way, Mark Perakh also had an article attacking the SFS here in 2007.
Note: your link to Mark Perakh's article leads only back to this article. (Another note: I find it disorienting to see "SFS" refer to an argument by Dembski, since for me "SFS" refers either to "Site Frequency Spectrum" or to my initials.) Steve Schaffner

Joe Felsenstein · 8 April 2013

glipsnort said:
Joe Felsenstein said: By the way, Mark Perakh also had an article attacking the SFS here in 2007.
Note: your link to Mark Perakh's article leads only back to this article. (Another note: I find it disorienting to see "SFS" refer to an argument by Dembski, since for me "SFS" refers either to "Site Frequency Spectrum" or to my initials.) Steve Schaffner
Oops. Here is the correct link. I leave the "SFS" as it stands as a tribute to you.

TomS · 8 April 2013

Joe Felsenstein said:
TomS said: Whenever one of these probability arguments arises, I wonder whether anyone has done a similar estimate for their alternative. What is the probability that designer(s) which are capable of doing more than natural causes, and have no known limitations on what they might do, would do such-and-such? My estimate is that, because the number of possible design outcomes is greater than the number of natural outcomes, the probability of design is less than the probability of natural causes.
I think they would answer, in effect, that we can't predict what the Designer intended, that the Designer's powers are infinite and his [it is a "he", you know] knowledge is infinite, and therefore, whatever happened, that was what should have been predicted. Alas, the prediction is after the fact -- the target is drawn on the side of the barn after the arrow hits it, and it is drawn right around the arrow.
I would love to hear that kind of response from an ID advocate. By the way, I don't need to ascribe infinite powers and knowledge to the designer(s) for my inquiry, only that they are significantly greater than natural causes. I am willing to concede (in order to avoid red herrings) to the advocates their claims of distinguishing designer(s) from god(s). I would imagine that their response would more likely be some attempt at changing the subject. Oh, and I think that I was careless, and should have said that the probability of such-and-such as a result of design is less than the probability as a result of natural causes.

David vun Kannon · 8 April 2013

I'll say two things:

1 - I think Dembski and Marks' SfS papers were attempts to put some building blocks in place for later work, not the final work itself. In this sense we should not expect them to answer all questions and criticisms posed to his earlier work.

2 - Dembski's argument seems to be that if evolution can't work in all possible universes (fitness spaces) it can't work in a subset of them, or if it does, we don't live in that subset. He doesn't actually argue the last part, since that would force him to deal with real biology and physics instead of pure math.

Beyond white noise, 'islands', or 'gemstones' of fitness, work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.

DS · 8 April 2013

David vun Kannon said: I'll say two things: 1 - I think Dembski and Marks' SfS papers were attempts to put some building blocks in place for later work, not the final work itself. In this sense we should expect them to answer all questions and criticisms posed to his earlier work. 2 - Dembski's argument seems to be that if evolution can't work in all possible universes (fitness spaces) it can't work in a subset of them, or if it does, we don't live in that subset. He doesn't actually argue the last part, since that would force him to deal with real biology and physics instead of pure math. Beyond white noise, 'islands', or 'gemstones' of fitness, work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.
That's the point. Nowhere does Dembski deal with actual biology. Nowhere does he demonstrate that he actually knows how mutations work or how natural selection works. If he wants to do math, why not start with classic population genetics models? Why not show how they are wrong? Why not deal with the evidence that actually does exist? Maybe it's because he knows nothing about biology. This is like an engineer claiming that bumble bees can't fly. The guy didn't actually study bees, so he has no idea how they do what they do. All he knows is that, with his limited imagination, he couldn't build one. That doesn't mean that mutation and natural selection could not. Some day he will have to deal with the evidence that mutation and natural selection actually did produce the diversity of life, regardless of the limitations of his imagination. Until then, he'll just go on insulting and denigrating every real biologist who actually did study the biology and wondering why nobody likes him.

Paul Burnett · 8 April 2013

TomS said: I am willing to concede (in order to avoid red herrings) to the advocates their claims of distinguishing designer(s) from god(s).
How detailed is intelligent design creationism "theory" in distinguishing between the "designer" and the creator who implemented the design? Did the creator subcontract out the design to a separate entity/entities? Or are the "intelligent designer" and the creator the same entity? Or did some other entity hire both a design subcontractor and a construction subcontractor. (Those of us who deal with construction projects know how badly that can work out - see http://pages.uoregon.edu/ftepfer/SchlFacilities/TireSwingTable.html )

TomS · 8 April 2013

Paul Burnett said:
TomS said: I am willing to concede (in order to avoid red herrings) to the advocates their claims of distinguishing designer(s) from god(s).
How detailed is intelligent design creationism "theory" in distinguishing between the "designer" and the creator who implemented the design? Did the creator subcontract out the design to a separate entity/entities? Or are the "intelligent designer" and the creator the same entity? Or did some other entity hire both a design subcontractor and a construction subcontractor. (Those of us who deal with construction projects know how badly that can work out - see http://pages.uoregon.edu/ftepfer/SchlFacilities/TireSwingTable.html )
There are so many things wrong with ID. One that doesn't get enough attention is the difference between a design and the implementation of the design. Of course, the advocates of ID don't tell us what they mean by "design", so they can always respond with "that's not what we mean by 'design'." But in any common-sense notion of "intelligent design", there are plenty of non-existent things that are intelligently designed: centaurs, flying carpets, Penrose triangles. This would lead one to say that intelligent design does not, by itself, account for the existence of something. My guess is that there was a time, before the industrial revolution, when designers were artisans, people who made things, and that thus philosophers didn't recognize the distinction between design and manufacture (or creation).

Tomato Addict · 8 April 2013

It seems to me that the ability of a mutation/select genetic search to find a useful increase in fitness is just a function of the local gradient and the number of population/generations given to the search. A gradient greater-than-or-equal-to G will be found with probability greater-than-or-equal-to P in less-than-or-equal-to N generation/events. From this we might define gradients which are too weak to drive evolution, and where gradients are strong enough to be declared "Irreducibly Inevitable".

It seems to me that even if the fitness landscape is perfectly flat, any amount of local competition for resources must introduce a local gradient. This would raise fitness for any discovery of under-utilized resources, driving the search into new areas of the fitness landscape, even if it is not ideal fitness from the original perspective. In this way, the search can eventually escape any local maximum, unless the gradient is actually discontinuous (not smooth?).
Even in the discontinuous case, there is still a probability of "jumping the gap" through an unlikely mutation.

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 8 April 2013

10^-150 was Dembski's upb... What does it have to do with what is described here? (Sorry about my ignorance...)

Mike Elzinga · 8 April 2013

Joe Felsenstein said: Well, yes and no. All this will happen, but I think there is a bigger effect from even simpler physics. In a fitness surface that has random associations of possible fitnesses with genotypes, the surface is infinitely rough (a "white noise" fitness surface). That means that any nucleotide substitution moves you to a nearby DNA sequence that has a fitness drawn, in effect, randomly from all possibilities. In other words, one mutation has the same effect as mutating every site in the genome simultaneously. Now, mutation doesn't work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional. So the fitness surface is much smoother than the "white noise" surface. Why? I would point to the lack of totally tight interaction between, say, my enzymes that produce earwax and the photosensitive pigments in my eye. Why don't they interact strongly?
Well yes; fitness landscapes are multidimensional; and so are the steps through that landscape. There are many paths from multidimensional point A to multidimensional point B; so one would expect steps along a given path could change the probabilities of progress along another path. That doesn’t negate the ideas behind the sampling picture used in signal and image processing. “Signals” and “images” aren’t restricted to 1, 2, or 3 dimensions. In fact, sampling theory in signal and image processing can also fold in changes to the signal by the sampling process itself. The math becomes more complicated because the process now becomes nonlinear. In the modeling, a “peak” is a configuration that is presumed to be a stable representation of a phenotype in a given environment. The sampling of points in that space is smoothed by the “dithering” in the sampling process; but on top of that we stop the program when we reach a local peak. “Dithering” contributes to making the peaks smooth. So does the physics and chemistry. The sampling ideas can also be applied to soft matter; and underlying all this is the second law of thermodynamics that allows for convergence to binding sites as energy is released in the process of binding. Square wells in physics are only approximations to deep wells in physics. Those kinds of binding energies pertain to only the hard parts of living organisms; and they are put in place atom by atom. On closer inspection, all wells have smoothed features. This is particularly true in soft matter systems where the potential energy wells are things like Van der Waals potentials. Because kinetic energies are comparable to binding energies, wells are “sampled;” and this contributes to the smoothing of the wells along with long range effects and nonlinear effects due to charge redistribution when atoms and molecules are in close proximity. Tunneling to adjacent configurations also contributes to smoothing transitions to molecules with different properties.
Tunneling is much more probable when electrons or bonded molecules are near the tops of the wells in which they are confined; in other words, in soft matter. Kinetic energies in the oscillations of molecules distorts wells, pulls down barriers, and makes “jumping” to different configurations much more likely. The bottom line is that you cannot have living, adapting organisms with all tightly bound systems; they have to be soft. And soft means smooth potential wells; there is no way around it. The “white noise” you speak of is the “dithering” of the steps in the fitness landscape due to the variability – due to physics and chemistry – of the genotype “exploring” this landscape.

Rolf · 8 April 2013

We have to deal with the issue of whether the LCCSI refutes the effectiveness of RM+NS, and we don’t need to fall back on post-1998 phenomena to do that.
All right, I am just trying to understand as best I can. Just this and I will shut up: If I may suggest a probable scenario using Dog genetics as a model: Might not any species under corresponding, but natural conditions, say by geographical separation leading to relaxed selection pressure, display changes in behavior and phylogeny not dependent on mutations? Providing fertile ground for existing, presently neutral, or new mutations to come into play? Or would that be irrelevant wrt LCCSI vs. effectiveness of RM+NS?

Joe Felsenstein · 8 April 2013

David vun Kannon said: ... 2 - Dembski's argument seems to be that if evolution can't work in all possible universes (fitness spaces) it can't work in a subset of them, or if it does, we don't live in that subset. He doesn't actually argue the last part, since that would force him to deal with real biology and physics instead of pure math. Beyond white noise, 'islands', or 'gemstones' of fitness, work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.
I think that the argument about "white noise" fitness landscapes is that evolution won't work in average ones. But it definitely can work in smoother fitness landscapes. The word "all" is important here. There is no proof that it cannot work in any adaptive landscape -- it has a hard time in almost all of them but not all. The ones it doesn't have a hard time in are the small fraction of them that are smoother than the "white noise" ones. GAs, and similar genetic models, do well if there are some paths upward, but as you say, do not require that all paths be upward. An implication of even the smoothest fitness landscapes is that half or more of the steps you could make lead downward.

Joe Felsenstein · 8 April 2013

Tomato Addict said: It seems to me that the ability of a mutation/select genetic search to find a useful increase in fitness is just a function of the local gradient and the number of population/generations given to the search. A gradient greater-than-or-equal-to G will be found with probability greater-than-or-equal-to P in less-than-or-equal-to N generation/events. From this we might define gradients which are too weak to drive evolution, and where gradients are strong enough to be declared "Irreducibly Inevitable". It seems to me that even if the fitness landscape is perfectly flat, any amount of local competition for resources must introduce a local gradient. This would raise fitness for any discovery of under-utilized resources, driving the search into new areas of the fitness landscape, even if it is not ideal fitness from the original perspective. In this way, the search can eventually escape any local maximum, unless the gradient is actually discontinuous (not smooth?). Even in the discontinuous case, there is still a probability of "jumping the gap" through an unlikely mutation.
If the landscape is almost perfectly flat then whether or not you find a higher point wouldn't matter very much. The whole point of the argument is whether the organism can become well adapted -- in a situation where there are lots of ways it could be much worse. However, your point about wandering into new regions of the genome space is well-taken. Sewall Wright, in a famous 1932 paper that introduced the metaphor of the adaptive landscape (or fitness surface, or whatever) made the same point.

harold · 8 April 2013

If he wants to do math, why not start with classic population genetics models? Why not show how they are wrong? Why not deal with the evidence that actually does exist?
Because I came from a non-traumatizing and tolerant but austere and somewhat evangelical background, when I discovered organized creationism circa 1999, my first response was to reach out, Biologos style perhaps (even though I am not religious), and explain things like this. (I'm a far cry from a population geneticist, but I had to take a population genetics class for scheduling reasons in university, and I credit it with really helping my understanding of biological evolution.) I understand that the focus of this thread is to give Dembski the fairest possible reading, and show where he is wrong, and I respect that. However, even if, or indeed, especially if, one wishes to "overturn" a field of inquiry, the first step is to learn what is already known. Dembski's failure to demonstrate familiarity with the general field of mathematical treatments of biology is worth noting.

Joe Felsenstein · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said: 10^-150 was Dembski's upb... What does it have to do with what is described here? (Sory about my ignorance...)
It is the Universal Probability Bound. It is intended to be a conservative calculation of how many individual events could have occurred in the Universe since it started. If there are 10^80 particles, each of which could have changed state at most 10^70 times, you get that there have been at most 10^150 events ever, anywhere. So if you see a favorable outcome for which the probability of one that is that favorable, or more favorable, is less than 10^-150, you are justified in saying that it was not a random happenstance, that the dealer might have stacked the deck. For some unknown reason Dembski wrote in his recent reply to me that the UPB was
(a perennial sticking point for Shallit and Felsenstein)
when actually I've never complained about it. He cites it as 1 in 10^120 in the 2005 paper, based on a less-conservative but still conservative calculation. It doesn't matter much what it is -- the point is that real organisms have adaptations that would occur far less often than that, if all that ever happened was mutation, with no natural selection. There I agree with Dembski. The problem comes after that when we try to rule out the role of natural selection.
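The exponent arithmetic behind the bound can be checked directly; the particle and state-change counts below are the ones in the comment, taken at face value, and Python's arbitrary-precision integers make the check exact.

```python
import math

particles = 10 ** 80       # rough particle count in the observable universe
state_changes = 10 ** 70   # max state changes per particle (per the comment)

# Every event that could ever have occurred, anywhere:
max_events = particles * state_changes
print(max_events == 10 ** 150)        # True: at most 10^150 events, ever

# The "500 bits" threshold mentioned elsewhere in the thread is the same
# order of magnitude, since 2^500 is about 10^150.5:
print(f"{math.log10(2 ** 500):.1f}")  # 150.5
```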

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 8 April 2013

Thanks.

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 8 April 2013

So... (just to check if I understood), the number Dembski used in the NFL theorem (about the LCCSI) comes from this upb?

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 8 April 2013

*NFL book (not NFL theorem)

Joe Felsenstein · 8 April 2013

harold said:
[DS] If he wants to do math, why not start with classic population genetics models? Why not show how they are wrong? Why not deal with the evidence that actually does exist?
Because I came from a non-traumatizing and tolerant but austere and somewhat evangelical background, when I discovered organized creationism circa 1999, my first response was to reach out, Biologos style perhaps (even though I am not religious), and explain things like this. (I'm a far cry from a population geneticist, but I had to take a population genetics class for scheduling reasons in university, and I credit it with really helping my understanding of biological evolution.) I understand that the focus of this thread is to give Dembski the fairest possible reading, and show where he is wrong, and I respect that. However, even if, or indeed, especially if, one wishes to "overturn" a field of inquiry, the first step is to learn what is already known. Dembski's failure to demonstrate familiarity with the general field of mathematical treatments of biology is worth noting.
In arguing the ineffectiveness of natural selection -- and that is what the LCCSI tried to argue -- he was going against 100 years of theoretical population genetics, against Sewall Wright, against JBS Haldane, even against R.A. Fisher whose tail probability formulation he used. And against Motoo Kimura, Geoff Watterson, Warren Ewens, Richard Lewontin, James F. Crow, John Maynard Smith, Oscar Kempthorne, Clark Cockerham, Sam Karlin, Tomoko Ohta, and many more. I knew quite a few of these folks (all but Fisher, and I only got to see Haldane once). You had to get up early in the morning to outthink any of them. So Dembski's arguments were either great bravery ... or chutzpah. That means that it should not be too hard to find the flaw in his arguments.

Matt Young · 8 April 2013

Suppose for example we find a rock weighing approximately 60 grams, that is a mixture of polycrystals of mostly SiO2 and some polycrystals of other compounds as well (about 3 atoms per molecule on average). ...

Dembski tells us that a regular geometric pattern such as a snowflake (or a crystal) is not CSI because it is formed "simply in virtue of the properties of water," so he would rule out Mr. Elzinga's argument. We discuss the snowflake in Why Intelligent Design Fails. In response to another question, the 500-bit limit is the same as 10^-150.

Tomato Addict · 8 April 2013

It is the Universal Probability Bound. It is intended to be a conservative calculation of how many individual events could have occurred in the Universe since it started....
But (correct me if I am wrong) this is intended to apply to individual events, not sequences of events. Dembski's usage of the UPB seems to be a quote-mine of Emile Borel's intended statement. I work with likelihoods smaller than 10^-150 on a regular basis, and it has no particular meaning in itself. I have difficulty understanding how CSI could be anything more than a very-broken likelihood ratio test. Further, I do not see how Dembski can have this much knowledge of information theory and still fail to admit that CSI is a very-broken likelihood ratio test. CSI amounts to (and perhaps I first heard this here, long ago?) a Bayesian hypothesis with a prior probability for Design equal to 1.0. ** ** For the non-statisticians, that implies CSI is assuming Design and ignoring any evidence to the contrary.
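The "prior probability for Design equal to 1.0" point can be illustrated in a few lines of Bayes' rule. The likelihood values below are arbitrary placeholders, chosen only to show that once the prior on Design is 1, the data cannot move the conclusion.

```python
def posterior_design(prior_design, lik_design, lik_natural):
    """Bayes' rule for two exhaustive hypotheses: Design vs. natural causes."""
    num = prior_design * lik_design
    den = num + (1 - prior_design) * lik_natural
    return num / den

# With any prior strictly between 0 and 1, the likelihoods matter
# (here the data are far more probable under natural causes):
print(posterior_design(0.5, 1e-10, 1e-3) < 0.001)  # True

# With a prior of 1.0 the data are irrelevant: the posterior is 1
# no matter how strongly the likelihoods favor natural causes.
print(posterior_design(1.0, 1e-10, 1e-3))  # 1.0
```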

Mike Elzinga · 8 April 2013

Matt Young said:

Suppose for example we find a rock weighing approximately 60 grams, that is a mixture of polycrystals of mostly SiO2 and some polycrystals of other compounds as well (about 3 atoms per molecule on average). ...

Dembski tells us that a regular geometric pattern such as a snowflake (or a crystal) is not CSI because it is formed "simply in virtue of the properties of water," so he would rule out Mr. Elzinga's argument. We discuss the snowflake in Why Intelligent Design Fails.
Indeed that is what they claim over at UD. But then they go right ahead and “prove” a stone artifact was designed using exactly the same kind of calculations I parodied in the example using the rock. (I think the character over at UD was kairosfocus, who used an example of an ancient artifact dug up from some archeological site. He marked off several points around the circumference of a ribbed stone and made his calculation come out “proving” design.) That “demarcation” between objects with “natural” properties and those to which they assert the calculations apply is completely arbitrary. They will not say where along the spectrum of complexity in condensed matter that “information” and “intelligent design” has to take over and do the job that physics and chemistry can no longer do. They can’t even provide a binding energy threshold. So I claim that, without that demarcation, rocks are fair game. :-)

diogeneslamp0 · 8 April 2013

David vun Kannon said: ...work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.
Marvelous-- do you have a reference for that? It puts the axe to Axe.

DS · 8 April 2013

Joe Felsenstein said: In arguing the ineffectiveness of natural selection -- and that is what the LCCSI tried to argue -- he was going against 100 years of theoretical population genetics, against Sewall Wright, against JBS Haldane, even against R.A. Fisher whose tail probability formulation he used. And against Motoo Kimura, Geoff Watterson, Warren Ewens, Richard Lewontin, James F. Crow, John Maynard Smith, Oscar Kempthorne, Clark Cockerham, Sam Karlin, Tomoko Ohta. and many more. I knew quite a few of these folks (all but Fisher, and I only got to see Haldane once). You had to get up early in the morning to outthink any of them. So Dembski's arguments were either great bravery ... or chutzpah. That means that it should not be too hard to find the flaw in his arguments.
This was my point. He hasn't even considered any of the previous work in the field. As if all he has to do is wave his hands and every other real scientist will magically disappear. I would chalk it up to ignorance, but then again, he is the one who deliberately assumed that nobody else in the history of science ever really knew what they were doing, so I guess arrogance is also part of it. Now if you want to figure out what mutation and natural selection can and cannot do, you should really consider things such as mutation rate, population size, selection coefficient, degree of dominance, etc. You know, all of the things that the classic population genetics models incorporate. If you are unable or unwilling to do this, then making up fake crap about "CSI" and refusing to calculate it isn't going to convince anybody. And neither is not publishing in any reputable journal.
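For readers who want to see the kind of classic model being referred to, here is the standard deterministic one-locus haploid selection recursion from introductory population genetics (not Dembski's framework; the parameter values are illustrative): a beneficial mutant with a 1% advantage, once established in the population, sweeps to near-fixation in a few thousand generations rather than requiring astronomically improbable luck.

```python
# Textbook deterministic haploid selection: allele A has relative fitness
# 1 + s, allele a has fitness 1. Values of s and the starting frequency
# are illustrative assumptions.
s = 0.01      # selection coefficient: a 1% fitness advantage
p = 1e-6      # starting frequency of A (a rare, already-established mutant)

generations = 0
while p < 0.99:
    p = p * (1 + s) / (1 + p * s)   # allele-frequency change in one generation
    generations += 1

print(generations)  # about 1850 generations from rare to nearly fixed
```

The deterministic recursion ignores genetic drift, so it overstates the certainty of fixation for a brand-new mutant; the point is only that selection, unlike pure mutation, moves frequencies exponentially fast.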

diogeneslamp0 · 8 April 2013

Dr. Dembski? Hello...?

It would be nice-- but completely out of character-- if Dr. Dembski were to come here and face his critics and try to point to a flaw in the thorough debunkings by Shallit, Elsberry and Felsenstein etc. etc.

But, this is the same cowardly Dembski who ran from Dover with his tail between his legs, rather than face cross-examination-- his own Vise Strategy in reverse.

Instead he will cringe and cower at Evolution News and Views, knowing that no criticisms or corrections, no comments, no questions, no doubts are ever permitted there, whence the cowardly Dembski may preach to his cultists at Uncommonly Dense.

Joe Felsenstein · 8 April 2013

Kindly cool it with the namecalling. Back to the science.

https://www.google.com/accounts/o8/id?id=AItOawnfNIVpzAqOmPHwqlUX9yJkEnoKWH_jrh8 · 8 April 2013

The paper "The Search for a Search" is especially troubling. For a couple of months, there has been an erratum to it (see A new erratum for Dembski's and Marks's The Search for a Search). But my main problem is that the model Dembski and Marks are using just doesn't work for what they are calling an assisted search (see Dembski's and Marks's "The Search for a Search": Trying to work out one of their examples).

(Yes, that's my blog which I'm shamelessly pushing, no, I don't know why I'm a masked panda)

Richard B. Hoppe · 8 April 2013

I second the request for a reference.
diogeneslamp0 said:
David vun Kannon said: ...work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.
Marvelous-- do you have a reference for that? It puts the axe to Axe.

diogeneslamp0 · 8 April 2013

As I see it, there are at least five different ways of computing Specified Complexity. Three are "guaranteed success" methods, guaranteed to assign a high CSI to organisms, biological structures, etc., as well as to objects that we all agree were intelligently designed, like artworks, Mt. Rushmore, etc. Two are "guaranteed failure" methods, guaranteed to return zero or low CSI for products of biological evolution, and also for the output of genetic algorithms, evolutionary programming, artificial life, Avida, Schneider's ev program, etc. IDologues absolutely need the "guaranteed failure" methods so that they can claim that evolutionary algorithms and in silico simulations of RM+NS do NOT generate CSI. So, CSI is *supposed* to tell us something about the *process* by which something is formed. However, when it comes to actually *applying* the method for a design inference, the very first question that Dembski or any IDologue asks is: tell me exactly how it was formed. They need to know how it was formed in order for their method to tell you how it was formed. Here is the actual method of Dembski's CSI.

Step 1. In order for me to tell you how a structure was formed, first you tell me how it was formed.

Step 2. If we know it was formed by artificial means (Mt. Rushmore, a 747), then IDologues apply one of the three "guaranteed success" methods.

Step 3. If we know it was formed by RM+NS (e.g. evolution in a test tube, novel enzymatic functions, etc.), or an in silico simulation of RM+NS (genetic algorithms, Avida, Schneider's ev, etc.), or by any non-random natural process (snowflake, crystallization, weather pattern, etc.), then IDologues apply one of the two "guaranteed failure" methods.

Step 4. If it is a biological structure that has been around for millions of years (e.g. big human brain, genetic code, etc.), do nothing until *after* step 5.

Step 5. By applying Step 2 (guaranteed success) to all known artificially designed objects, and Step 3 (guaranteed failure) to all structures known to be produced by evolution or evolutionary simulations, we claim that it has been proven that only intelligence can create CSI and that the design inference via CSI produces no false positives. This general principle, "only intelligence creates CSI", is guaranteed to be "proven", because in order to compute the CSI of all examples, you must first tell me whether it was intelligently designed, and I use different methods for different cases.

Step 6. If it is a biological structure that has been around for millions of years (e.g. big human brain, genetic code, etc.), you may now safely apply one of the "guaranteed success" methods, yielding a high CSI.

Step 7. Invoking the alleged general principle that "only intelligence creates CSI", which we "proved" in step 5, we have now "proven" that the malaria parasite, the schistosoma parasite, etc. are all intelligently designed.

A Brief Sketch of the Five Methods for Computing CSI. What they all have in common is that they all claim to require that intelligently designed objects be both "specified" and "complex" at the same time, but the definition of "specified" varies greatly, and the definition of "complexity" varies as well.

Creationist Complexity: As a Logarithm of the Tornado Probability. First note that "creationist complexity" is not real complexity. It is certainly not the same as Kolmogorov complexity. Dembski says it is; he's a lying little shit. In his recent cowardly anti-Felsenstein post Dembski lies with sweaty desperation:
Dembski, 2013: "One way [for Dembski to prove why criticisms by Felsenstein, Shallit, et al. don't hold water] would be for me to review my work on complex specified information (CSI), show why the concept is in fact coherent... indicate how this concept has since been strengthened by being formulated as a precise information measure... [and] show how CSI as a criterion for detecting design is conceptually equivalent to information in the dual senses of Shannon and Kolmogorov, and finally characterize conservation of information within a standard information-theoretic framework. Much of this I have done in a paper titled "Specification: The Pattern That Signifies Intelligence" (2005) and in the final chapters of The Design of Life (2008)." ["Darwinists Waste No Time in Criticizing Darwin's Doubt", William Dembski, ENV, April 4, 2013.]
What a lying little shit-- Dembski showed no such thing, and in fact Shannon information and Kolmogorov complexity are different from each other and totally unrelated to Dembski's CSI. The most common method for computing "creationist complexity", the one applicable to genetic and protein sequences, is to compute the probability that the sequence would be formed by totally random recombination of all its parts. I call this THE TORNADO PROBABILITY, inspired of course by Wickramasinghe's analogy that the appearance of the first living cell was as probable as a tornado in a junkyard assembling a 747. This is very simple: first, define K = how many KINDS of parts there are (K = 4 for genetic sequences, because there are four nucleotides; K = 20 for proteins, because there are 20 amino acids; K = 26 for English words, because there are 26 letters, etc.). Next, define the number of parts (length of string) as L. Then the probability of formation by totally random scramble, drawing from an infinite pool of parts, is
Tornado probability = 1/K^L

Then the creationist complexity is just

Creationist complexity = -log(base 2)[p] = -log(base 2)[1/K^L] = L * log(base 2)[K]
Which for various simple problems becomes
Creationist complexity for DNA = L * 2
Creationist complexity for protein sequences = L * 4.322
Creationist complexity for English words = L * 4.70
With this common framework:

The Three "Guaranteed Success" Methods for Computing CSI

"Guaranteed Success" Method 1. This method requires first that, by the above definition,
Tornado Probability < 1 / 10^120
Creationist Complexity > -log(base2)[1 / 10^120]

But -log(base2)[1 / 10^120] = -log(base2)[10] * log(base10)[1 / 10^120], which means

Creationist Complexity > 3.322 * 120 = ~399 bits
But it ALSO requires, on top of Creationist Complexity > ~399 bits, that the object be "specified": everything is either "specified" or "NOT specified." Here "specified" is a BINARY quantity, either YES or NO, not a number (this rule changes after 2005). Something is "specified" if either: 1. It is biological; OR 2. It matches an independently given pattern. (What's an "independently given pattern"? That's explained later.) It might seem odd that Dembski gives a special rule for biological structures, but he does. His reason is that, obviously, his religion teaches that all life is intelligently designed, so he gives a special exception for biological structures. He's CHEATING, but he rationalizes his obvious cheating by saying that all biological structures have function, and anything with function must match an "independently given pattern" because all functions are independently given. Of course this is bullshit. Take a look at some enzyme functions, which took scientists many decades to deduce; these from the Enzyme Commission numbering scheme: DNA glycosylase; phosphoric monoester hydrolases; hydrolases acting on acid anhydrides; hydrolases acting on organic halides; hydrolases acting on sulfur-sulfur bonds. I could go on at great length about the ligases and lyases-- you get the point-- enzyme functions are not "independently given patterns" because no scientist could even GUESS at these functions until they were deduced after decades of painful experimentation. But Dembski declares by fiat that all biological functions are "independently given patterns" and thus all biological structures are SPECIFIED. Thus the only real question is: how big is the sequence? Following from above:
Dembski's "Guaranteed Success" Method 1:

Any DNA sequence with length > 200 bp (200 = 400/2) has creationist complexity > 400.
Any protein sequence with length > 92.55 amino acids (92.55 = 400/4.322) has creationist complexity > 400.

Both are SPECIFIED (because they are biological) and COMPLEX (because they are long).
This is the method laid out by Dembski in No Free Lunch and The Design Inference. The 2005 and later methods are different; I will write on them when I get a chance.
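The arithmetic above can be checked with a few lines of Python. The function names below are my own labels for the commenter's quantities, a sketch only:

```python
import math

def tornado_probability(length, kinds):
    """Chance of hitting one specific string of `length` parts by a
    uniform random draw from `kinds` kinds of parts: 1/K^L."""
    return 1 / kinds ** length

def creationist_complexity(length, kinds):
    """-log2 of the tornado probability, i.e. L * log2(K) bits."""
    return length * math.log2(kinds)

# Per-symbol bit costs quoted above:
print(creationist_complexity(1, 4))             # DNA: 2.0 bits per base
print(round(creationist_complexity(1, 20), 3))  # protein: 4.322 bits per residue
print(round(creationist_complexity(1, 26), 2))  # English: 4.7 bits per letter

# A probability bound of 1/10^120 corresponds to about 399 bits ...
threshold = -math.log2(1 / 10 ** 120)
print(round(threshold))                         # 399

# ... which (rounding the threshold up to 400 bits, as the comment does)
# a DNA sequence exceeds at 200 bp and a protein at about 92.55 residues:
print(400 / math.log2(4))                       # 200.0
print(round(400 / math.log2(20), 2))            # 92.55
```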

diogeneslamp0 · 8 April 2013

Errata on my last comment:
Tornado probability = 1/K^-L Then the creationist complexity is just Creationist complexity = -log(base 2)[p] = -log(base 2)[1/K^-L]
Should of course be:
Tornado probability = 1/K^L Then the creationist complexity is just Creationist complexity = -log(base 2)[p] = -log(base 2)[1/K^L]

Joe Felsenstein · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnfNIVpzAqOmPHwqlUX9yJkEnoKWH_jrh8 said: (Yes, that's my blog which I'm shamelessly pushing, no, I don't know why I'm a masked panda)
Anonymous logins here are all Masked Pandas. Next time you post here with an anonymous login, I urge you to sign the post with your name. We can figure it out in this case because of the blog link -- you are the DiEb who runs that blog. You should sign any comments with either "DiEb" or with your real name, perhaps making that signature also a link to your blog. This will help us keep from confusing you with a bunch of other Masked Pandas that we have.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?

Ray Martinez · 8 April 2013

Joe Felsenstein said:
Ray Martinez said: Joe Felsenstein: "Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve." ---The quote pasted above (and others in the OP) assumes and implies Dembski and Marks accept the existence of natural (non-supernatural/Intelligent) causation operating in nature. ---Like Darwin and all original Darwinists, Joe Felsenstein and his colleagues completely reject supernatural or Intelligent causation operating in nature. The preceding fact means Darwinism accepts causation mutual exclusivity. ---Joe Felsenstein's acceptance of causation mutual exclusivity should allow him to dispense with the claims of Dembski and Marks based solely on their acceptance of the existence of natural causation.
That is absurd. Mutual exclusivity? Ridiculous. Whatever I think of Dembski and Marks's arguments, or YEC arguments for that matter, I know that whatever supernatural interventions in the real world they accept, they also know that ordinary gravity continues to operate. All those folks, and even Ray himself, accept that there is natural causation. Let's get back to Dembski's actual arguments, not this absurd parody.
Since Joe Felsenstein and his colleagues accept causation mutual exclusivity the same cannot be “Ridiculous.” And his assumption that Newton and I (a Paleyan IDist) along with DI-IDists and YECs accept gravity as non-created and undesigned is a grave error. But our context is biology and diversity, not physics and gravity. It's quite clear that Joe wants nothing to do with my main point concerning causation mutual exclusivity. It appears he does not want to risk awakening any "sleeping giants," lest they come to be like him, his colleagues, and myself (we accept causation mutual exclusivity). By dismissing my main point, Joe does not realize that he's showing weakness and fear regarding the existence of natural (non-supernatural/Intelligent) causation. In this context he is silently using DI-IDists and YECs as evidence supporting the existence of natural causation when he should be standing on the (alleged) evidence alone.

KlausH · 8 April 2013

Rolf said:
We have to deal with the issue of whether the LCCSI refutes the effectiveness of RM+NS, and we don’t need to fall back on post-1998 phenomena to do that.
All right, I am just trying to understand as best I can. Just this and I will shut up: If I may suggest a probable scenario using Dog genetics as a model:
Muppet dogs?

Joe Felsenstein · 8 April 2013

Ray Martinez said: ... It's quite clear that Joe wants nothing to do with my main point concerning causation mutual exclusivity. It appears he does not want to risk awakening any "sleeping giants," lest they come to be like him, his colleagues, and myself (we accept causation mutual exclusivity).
This stupid nonsense will be discussed, if at all, on the Bathroom Wall. I will send all further comments by Ray on this there, and all replies to them.

phhht · 8 April 2013

This comment has been moved to The Bathroom Wall. (I said what I meant and I meant what I said, folks. JF)

TomS · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
I will not be so demanding as to ask for evidence. Only after getting a description of what it would be like for "Intelligent Design" to happen would I bring up the question of evidence. Please describe "Intelligent Design" producing anything -- CSI or anything else. What sorts of things does ID do, when and where? What methods and materials does it use? What sorts of things does ID not do? On the other hand, what evidence is there that CSI needs any explanation for its appearance? Doesn't it only exist "in the eye of the beholder"?

patrickmay.myopenid.com · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.

Joe Felsenstein · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
Dembski's definition of CSI is what we are discussing here, and it does not involve "a biologically functional subsystem". It involves a scale, such as fitness, and how far out the individual (or the species) is on that scale, i.e., how well adapted it is. If you want to discuss your "ie" definition, that is off topic here.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
Dembski's definition of CSI is what we are discussing here, and it does not involve "a biologically functional subsystem". It involves a scale, such as fitness, and how far out the individual (or the species) is on that scale, i.e., how well adapted it is. If you want to discuss your "ie" definition, that is off topic here.
No, Joe. It involves biologically functional subsystems. Read page 148 of "No Free Lunch": "Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems." CSI has nothing to do with what you said. What you said is nowhere in ID literature.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.
Thanks Patrick. However that doesn't do anything. For one it starts with replicators. For another it doesn't produce CSI by Dembski's definition.

Joe Felsenstein · 8 April 2013

phhht said: What does "causation mutual exclusivity" mean? Please give a definition for your use of the term.

This comment has been moved to The Bathroom Wall. (I said what I meant and I meant what I said, folks. JF)

(Sorry, I am bungling editing these. Anyway this one went to the BW)

Joe Felsenstein · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: CSI has nothing to do with what you said. What you said is nowhere in ID literature.
"[Dembski] Arno Wouters cashes it out globally in terms of the viability of whole organisms." There it is, right in the quote you gave. Dembski's theorems have nothing in them that talks of anything but where you are on that scale, and fitness (or a component of fitness like viability) is a valid way of "cashing out" the specification. If we're going to discuss whether Dembski's theorems are correct, we've got to stay on the scale and not go off into other matters.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

In "No Free Lunch" Dembski makes it clear that CSI with respect to biology means biological function. And that means if you are starting with biological organisms then you are starting with the very thing that you need to explain.

However, even given biological organisms, having more offspring does not mean it can create biologically functional subsystems. There isn't any connection between natural selection and the construction of functional subsystems.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

Viability is not the same as fitness. Even a weak organism is viable.

But go ahead Joe, attack your strawman, if it makes you feel good.

Mike Elzinga · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: CSI has nothing to do with what you said. What you said is nowhere in ID literature.
Are you suggesting that kairosfocus’s FSCO/I over at UD has superseded Dembski’s CSI? I borrowed that to prove that all rocks are intelligently designed. I have asked many of Dembski’s, Abel’s, and kairosfocus’s followers where the cutoff is for using this, and I have never received an answer. Where along the spectrum of complexity in condensed matter do physics and chemistry leave off and “intelligence” and “information” start pushing atoms and molecules around? So why not use it on rocks? The answer falls right out automatically. Designed; no thinking required. Kairosfocus used it on ancient stone artifacts and “proved” they were designed. I just copied his methods.

Joe Felsenstein · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Viability is not the same as fitness. Even a weak organism is viable. But go ahead Joe, attack your strawman, if it makes you feel good.
Gee, thanks. We're going to continue to discuss Dembski's (and Marks's) theorems.

Paul Burnett · 8 April 2013

Joe Felsenstein said: So Dembski's arguments were either great bravery ... or chutzpah.
More like a classic example of the Dunning–Kruger effect. Dembski may be a smart guy in his field(s), but he doesn't know jack about population genetics.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: CSI has nothing to do with what you said. What you said is nowhere in ID literature.
Are you suggesting that kairosfocus’s FSCO/I over at UD has superseded Dembski’s CSI? I borrowed that to prove that all rocks are intelligently designed. I have asked many of Dembski’s, Abel’s, and kairosfocus’s followers where the cutoff is for using this, and I have never received an answer. Where along the spectrum of complexity in condensed matter do physics and chemistry leave off and “intelligence” and “information” start pushing atoms and molecules around? So why not use it on rocks? The answer falls right out automatically. Designed; no thinking required. Kairosfocus used it on ancient stone artifacts and “proved” they were designed. I just copied his methods.
Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Viability is not the same as fitness. Even a weak organism is viable. But go ahead Joe, attack your strawman, if it makes you feel good.
Gee, thanks. We're going to continue to discuss Dembski's (and Marks's) theorems.
Whatever Joe. You sure as hell ain't discussing CSI. And you sure as hell don't have any positive evidence for natural selection producing CSI. So carry on. I am sure you are proud of what you think you are doing.

Mike Elzinga · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.
According to the calculations I borrowed from the ID/creationist community they most certainly do! Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism who can’t tell us where the cutoff is?

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 8 April 2013

Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.
According to the calculations I borrowed from the ID/creationist community they most certainly do! Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism who can’t tell us where the cutoff is?
No Mike. Just because you can totally butcher a concept, that doesn't make it right. Obviously forensic scientists and archaeologists can tell where the cutoff is. Or are you totally oblivious to the fact that we can determine design from non-design and many fields depend upon it? And please demonstrate condensed matter can exist without information.

Mike Elzinga · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: And please demonstrate condensed matter can exist without information.
Would you believe in rocks? How about water? Gold? Planets? Stars? If you have never seen anything like that, then I can’t demonstrate it for you. But you still haven’t answered the question; “Where along the spectrum of condensed matter does it become appropriate to use the calculations of the ID/creationists to 'prove' design?"

Scott F · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.
According to the calculations I borrowed from the ID/creationist community they most certainly do! Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism who can’t tell us where the cutoff is?
No Mike. Just because you can totally butcher a concept, that doesn't make it right. Obviously forensic scientists and archaeologists can tell where the cutoff is. Or are you totally oblivious to the fact that we can determine design from non-design and many fields depend upon it? And please demonstrate condensed matter can exist without information.
Mike's math seemed pretty convincing, and I have yet to see him get an equation wrong, so I'm leaning toward believing him. Can you show me mathematically where Mike is wrong? Simply saying "No Mike" doesn't sound convincing. Please be specific about where Mike "butchered" the concept. You say it is "obvious". I'm afraid that I don't see that it's "obvious" at all. Most rocks look quite "designed" to me. But forensic scientists and archaeologists don't use Dembski's formulation of "CSI". In fact, they don't use any formulation of CSI or ID. They rely on scientific knowledge of who, what, when, where, why, and how in order to conclude design. CSI and ID don't answer those questions. They appear to simply assume CSI a priori, and then create post hoc rationalizations. Or, perhaps they do. Maybe you could explain it to me? (Sorry, Joe, if this is off topic. I don't have the math chutzpah to critique Dembski's positions.)

Scott F · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: And please demonstrate condensed matter can exist without information.
"Condensed matter". Do you mean like, where steam condenses into water? Does that require "information"? If so, did the steam contain "information" before it condensed into water? If I understand the general hypothesis of "conservation of information", then "information" can be neither created nor destroyed. During a phase transition (such as gas to liquid, or liquid to solid) what happens to the "information" in the system? You seem pretty certain that you understand this, and I was hoping you could explain it to me. Thanks.

Ray Martinez · 8 April 2013

Paul Burnett said:
Joe Felsenstein said: So Dembski's arguments were either great bravery ... or chutzpah.
More like a classic example of the Dunning–Kruger effect. Dembski may be a smart guy in his field(s), but he doesn't know jack about population genetics.
Then why is Felsenstein giving him so much microphone?

diogeneslamp0 · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
Dembski's definition of CSI is what we are discussing here, and it does not involve "a biologically functional subsystem". It involves a scale, such as fitness, and how far out the individual (or the species) is on that scale, i.e., how well adapted it is. If you want to discuss your "ie" definition, that is off topic here.
No, Joe. It involves biologically functional subsystems. Read page 148 of "No Free Lunch": "Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems." CSI has nothing to do with what you said. What you said is nowhere in ID literature.
UHHH... except in the quote you just copied and pasted, UDite! Do you even read what you copy and paste? Here's a hint: before you tell Prof. Felsenstein to "read page 148 of No Free Lunch", you should first read page 148 of No Free Lunch yourself.

prongs · 8 April 2013

Scott F said: Most rocks look quite "designed" to me.
I assert that my exquisite Quartz crystal is a better candidate for design than Paley's watch. No creationist has ever challenged me, much less proven my crystal is not designed. C'mon Ray, Steve, FL, IBIG - what's wrong with you guys? Prove to me my Quartz is NOT designed.

Ray Martinez · 8 April 2013

Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.
According to the calculations I borrowed from the ID/creationist community they most certainly do! Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism who can’t tell us where the cutoff is?
I find that hard to believe, Mike. Isn't that what this entire discussion, in essence, is all about---where the cutoff or threshold is? You, of course, say there is none, but Dembski and Marks say there is. (And be advised: I know for a fact that "The Design Inference" is completely false.)

prongs · 8 April 2013

Scott F said: But forensic scientists and archaeologists don't use Dembski's formulation of "CSI". In fact, they don't use any formulation of CSI or ID. They rely on scientific knowledge of who, what, when, where, why, and how in order to conclude design.
True indeed. The only design scientists and archaeologists recognize is human design because that's the only kind of intelligent design we know (excluding bower birds, chimps using tools, etc.). And guess what? Some stone artifacts are so 'primitive' that archaeologists argue whether they are 'natural' or 'hominid'. Go figure.

Mike Elzinga · 8 April 2013

Scott F said: "Condensed matter".
The Wikipedia article is a little dated, but it gives the general idea. More generally, condensed matter consists of collections of matter that interact strongly. It consists of the solid state, liquids, plasmas, soft matter, atomic nuclei, quark/gluon plasmas as well as the study of transitions into these various states. It spans complexities all the way from simple solids and liquids, made up of only one type of atom or molecule, to living systems. It also studies emergent properties as condensing systems become more and more complex.

Ray Martinez · 8 April 2013

prongs said:
Scott F said: Most rocks look quite "designed" to me.
I assert that my exquisite Quartz crystal is a better candidate for design than Paley's watch. No creationist has even challenged me, much less proven my crystal is not designed. C'mon Ray, Steve, FL, IBIG - what's wrong with you guys? Prove to me my Quartz is NOT designed.
Nearly all evo scholars accept Paley's opening paragraph as correctly framing the debate. No one can say a stone is designed when compared or contrasted against the watch (which represents individual organisms or species).

diogeneslamp0 · 8 April 2013

Creationist pWQie writes: But go ahead Joe, attack your strawman, if it makes you feel good.
Felsenstein understands Dembski's math far better than almost anyone at Uncommon Descent; indeed, NO Intelligent Design advocate, including Dembski, understands Dembski's math better than evolutionists like Felsenstein, Shallit, Elsberry-- even Wein the undergraduate understood Dembski's math better than Dembski himself did. The undergrad made Dembski cry. NO ONE at Uncommonly Dense understands how to compute CSI, except possibly VJ Torley and Gpuccio-- this is proven by the infamous MathGRRL thread at UD where MathGRRL asked the Udites how to compute CSI for very simple cases and none of those egomaniacs at UD could calculate squat (except VJ Torley and Gpuccio). Remember the MathGrrl thread, UDite? We do. We remember. At comment #282 in the first MathGrrl thread IDer VJ Torley concludes that natural processes like gene duplication produce huge amounts of CSI, as defined by Dembski in his 2005 paper. At comment #309 he admits Dembski's definition of CSI needs to be revised. By comment #334 the IDer, the only IDer who could actually compute CSI, admits that CSI as defined by Dembski in his 2005 paper cannot indicate design.
VJ Torley writes: First, I agree that a high degree of CSI, as originally defined by Professor Dembski in his 2005 paper “Specification: The Pattern that Specifies Intelligence” is not sufficient by itself to warrant a design inference. [VJ Torley at Uncommon Descent]
Here's MathGRRL thread part two where the question was asked again. Same result: no ID creationist could answer it. And these are not two isolated cases. Consider the infamous thread of Joe "Security Clearance" Gallien, blogmeister of Intelligent Reasoning, who at UD is treated as a great intellectual. (He insists he is a scientist but can't show us his publications or patents because his credentials are classified top secret, national security, see.) At this miserable thread, "Security Clearance" Joe was asked the simplest possible ID question: Is the sequence below,
100011101001011100010111010101
intelligently designed or not? Simple question, no? He responds as all IDers do: he insists that, in order for him to tell us the process by which it was made, Step #1 is that we must tell HIM the process by which it was made.
Security Clearance Joe sez: In order to tell if blipey's string- 100011101001011100010111010101- is designed or not I would need to know where he got it from. [Source]
No matter how simple or how complex such sequences are, no IDologues can ever compute their CSI, because in order for them to tell us the process by which a sequence was made, Step #1 is that we must tell THEM the process by which it was made, as is proven in the two MathGrrl threads and the Joe Gallien thread. Admit it: No ID creationist anywhere (except possibly VJ Torley and Gpuccio) has a clue how to compute CSI. Most of the people who DO know how to compute CSI are evolutionists: Schneider, Felsenstein, Elsberry, Shallit, and me. Dembski can't apply his own math! If you ID creationists want to learn how to compute CSI, you ID creationists must ask an evolutionist, as the MathGRRL thread proved. You can't even ask Dembski-- he doesn't know.
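For scale, even the naive "chance hypothesis" computation that the thread keeps asking the IDers to do fits in a few lines. A sketch (my own illustration, not anyone's published method): treat each digit of blipey's 30-character string as an independent fair coin flip and take the surprisal in bits.

```python
import math

s = "100011101001011100010111010101"   # blipey's 30-digit string from the Joe Gallien thread

# Naive chance hypothesis: each digit is an independent fair coin flip.
p_chance = 2.0 ** -len(s)              # probability of this exact string: 2^-30
surprisal_bits = -math.log2(p_chance)  # 30 bits of "information"
```

Thirty bits is astronomically short of the roughly 500 bits that Dembski's 10^-150 bound corresponds to, so even on the most naive reading the design question cannot be settled for a string this short.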

Scott F · 8 April 2013

Mike Elzinga said:
Scott F said: "Condensed matter".
The Wikipedia article is a little dated, but it gives the general idea. More generally, condensed matter consists of collections of matter that interact strongly. It consists of the solid state, liquids, plasmas, soft matter, atomic nuclei, quark/gluon plasmas as well as the study of transitions into these various states. It spans complexities all the way from simple solids and liquids, made up of only one type of atom or molecule, to living systems. It also studies emergent properties as condensing systems become more and more complex.
That's okay. I'm a little "dated" myself. But, you really set me back there. I thought "condensed matter" was more "ordered" or "orderly", in some sense (something that could sit on a lab bench), and the ordering produces different emergent properties at different scales. A "plasma" seems to me to be about as "uncondensed" or as "unordered" as one could imagine, with the only orderedness consisting of variations in density, pressure, and temperature. Then what isn't "condensed matter"? Light? Neutrinos perhaps?

Chris Lawson · 8 April 2013

Joe Felsenstein said: For some unknown reason Dembski wrote in his recent reply to me that the UPB was
(a perennial sticking point for Shallit and Felsenstein)
when actually I've never complained about it. He cites it as 1 in 10^120 in the 2005 paper, based on a less-conservative but still conservative calculation.
Joe, I think you *should* have a problem with Dembski's UPB. First of all, there are not 10^80 particles in the universe, there are 10^80 baryons in the observable universe. Since (i) there is more to the universe than we can observe by an unknown factor, (ii) baryons are made up of subparticles (3 of them), (iii) baryons only make up about 4% of the mass of the observable universe, and (iv) free quarks are effectively impossible to count, especially as virtual particles are constantly popping into and out of existence, this means that Dembski has massively underestimated the number of particles in the universe.

Counting events is no better. What Dembski is essentially saying is that an event is defined as a change in state of a baryon (i.e., protons and neutrons and their anti-particles). But plenty of things can happen that I would call an event that do not involve any baryons changing state. Since we're especially interested in evolution here, let's consider mutations. As far as I'm aware (I'm happy to be corrected by someone with a better understanding of particle physics and chemistry), *none* of the common mechanisms of mutation involve any change in baryon state. IOW, Dembski underestimates the number of possible events by a googlish amount.

And then there's a problem with the assumption that a change of state is a binary probability. Even if there could only be 10^150 events in the history of the universe (which I strongly dispute), how do we decide how probable a given event is? One could argue that if you buy a lottery ticket, it's either a winning ticket or a losing ticket, therefore there is a 50% chance of winning...but that only works by counting all possible events as equally likely. In reality, each event has its own probability space which needs to be considered. And then there's the problem that the UPB, however one chooses to estimate it, only really defines the upper bound of a pre-specified outcome.
As Tomato Addict has already pointed out, we can arbitrarily create any event with an improbability that exceeds any UPB for the universe. If the UPB is 1 in 10^n, then any random string of more than n/log10(2) bits...which comes to about 3.32n bits...will exceed the proposed probability boundary. That is, P(random string of 499 bits) < 10^-150, and for any UPB calculation Dembski cares to make, we can always find a less likely string just by running a random number generator for the required number of bits. I can break Dembski's UPB for the entire history of the observable universe in a few microseconds on my laptop, in under 3 minutes rolling a pair of dice, and, if I do it the slow way, in about 33 minutes tossing a coin every 4 seconds. Essentially Dembski's argument is: P(evolution) < UPB, therefore evolution is not possible. However, he cannot estimate P(evolution), cannot estimate the UPB of the universe, and does not understand that P(observed event) < UPB does not mean that the observed event could not have happened naturally...or that such an event could even be overwhelmingly likely somewhere in the universe.
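That arithmetic - a random string long enough that its probability falls below 10^-150 - takes only a couple of lines to check. A Python sketch; the only input taken from the discussion is Dembski's exponent of 150, everything else is straightforward arithmetic:

```python
import math
import random

def random_bits(n):
    """One specific random bit string of length n; its probability
    under uniform random generation is 2^-n."""
    return ''.join(random.choice('01') for _ in range(n))

# For a string whose probability is below 10^-150 we need
# 2^-n < 10^-150, i.e. n > 150 / log10(2) ~ 498.3, so 499 bits.
bits_needed = math.ceil(150 / math.log10(2))

s = random_bits(bits_needed)
log10_prob = -len(s) * math.log10(2)   # log10 of P(this exact string)
```

Any run of this produces a string whose chance probability is below the universal probability bound, which is the point being made: improbability alone is cheap.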

Chris Lawson · 8 April 2013

Ray Martinez said: Then why is Felsenstein giving him so much microphone?
Joe isn't giving him a lot of microphone, the DI and his "university" are. Joe is responding to public comments.

Mike Elzinga · 8 April 2013

Scott F said: That's okay. I'm a little "dated" myself. But, you really set me back there. I thought "condensed matter" was more "ordered" or "orderly", in some sense (something that could sit on a lab bench), and the ordering produces different emergent properties at different scales. A "plasma" seems to me to be about as "uncondensed" or as "unordered" as one could imagine, with the only orderedness consisting of variations in density, pressure, and temperature. Then what isn't "condensed matter"? Light? Neutrinos perhaps?
Not a problem; and thanks for linking to that reference. Phillip Anderson coined the term somewhere in the 1960s, as I remember. The field was expanding so rapidly that “solid state” was no longer appropriate. Phil Anderson has also contributed a great deal of insight into the current state of particle physics and the understanding of quark/gluon interactions. Condensed matter theory and experiment are now being applied on cosmological scales. But the 1960s seem like centuries ago in research advancement terms. Strongly interacting matter spans a whole lot of subfields now; and the theories and experimental methods are finding use across the entire spectrum. It may turn out that “condensed matter” will be obsolete some day; but I think that it is the “condensed” part that pulls all these seemingly diverse fields together (the pun is irresistible). Matter that is clustered in close proximity takes on rapidly emerging characteristics that provide a virtually infinite landscape to explore. No lack of experimental and theoretical opportunity here.

diogeneslamp0 · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.
Thanks Patrick. However that doesn't do anything. For one it starts with replicators. For another it doesn't produce CSI by Dembski's definition.
How the fuck would a creationist know whether or not anything produces CSI by Dembski's definition? Creationists don't understand Dembski's math. It's like asking a penguin's opinion on how to do long division. When ID creationists are asked to create CSI, for example in the MathGrrl thread and at Joe G's blog also, almost none of them can do it without asking an evolutionist.
However that doesn't do anything... it starts with replicators.
So the fuck what? Dembski's alleged "proofs" of the Law of Conservation of Information DO NOT CONTAIN IN THE MATH ANY SUSPENSION OR EXCEPTION FOR REPLICATORS, NOR ANY REFERENCE TO REPLICATORS AT ALL. It's not in Dembski's equations. Where are the "replicators" in the math by which he alleges to prove the "Law" of Conservation of Information? Show it or shut up. Our creationist friend pWQie insinuates (but does not state) that the presence of "replicators" temporarily suspends the "Law" of Conservation of CSI; apparently he prays that the many counter-examples involving "replicators" cannot disprove the LCCSI. But there's no exception in the math for replicators. The whole point of the law is to state that only intelligent agents can increase CSI. Consequently, ANY counter-example, even just one, of any natural process increasing CSI DISPROVES the Law of Conservation of Information, whether it has "replicators" or not. If any process that is not intelligent should increase CSI, replicators or no replicators, then the "Law" of Conservation of Information is no "Law" at all but is dead. Which we knew already, mathematically.
1. Felsenstein pointed to one major flaw in the "Law" of Conservation of Information; so it is dead.
2. Elsberry and Shallit pointed to other flaws in the "Law" of Conservation of Information; so it is doubly dead.
3. I've found a bunch of other flaws in Dembski's 2005 paper, and Gpuccio has as well.
4. It has been proven and admitted by IDers that many natural processes increase CSI by astronomical amounts, e.g. gene duplication and many other processes. So the "Law" of Conservation of Information is quadruply dead.
If ANY of 1, 2, 3, or 4 above were true, then the Law of Conservation of Information is DISPROVEN. In fact, all 4 are true. It's dead, dead, dead, DEAD. Moreover, we know this is the whole point of Meyer's book Darwin's Doubt.
Meyer insists that no natural process could have increased information during the Cambrian Explosion, which was full of replicators. We know life on Earth predated the Cambrian by ~3 billion years. The Earth is full of stromatolite fossils, Grypania fossils from ~2 billion years ago, and complex animals in the Ediacaran (including several bilaterians) that are the precursors to the Cambrian animals. So the specific problem addressed by Meyer involves huge numbers of diverse, complex replicators BEFORE the start of the Cambrian. If there were in fact an exception to the Law of Conservation of CSI-- an exception which says "If you have replicators, natural processes can make CSI increase"-- then the alleged increase in "information" during the Cambrian Explosion could be explained entirely by natural processes (RM+NS) and the presence of replicators. The whole point of Meyer's assertion is that RM+NS can never increase "information", whether you have replicators (like the complex Ediacaran bilaterian animals) or not. So pWQie's "replicator" exception is debunked because:
1. It's not in Dembski's math that alleges to "prove" the LCCSI.
2. We know the LCCSI is full of math fallacies.
3. We know many observed counter-examples that disprove the LCCSI in practice (evolution of novel proteins and enzymes, in silico simulations of evolution such as Schneider's ev).
4. Meyer's Cambrian Explosion argument assumes natural processes cannot produce new CSI even in the presence of the many diverse, complex Ediacaran animals that are precursors to the Cambrian biota.

diogeneslamp0 · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI.
How the fuck would a creationist know how much CSI is in anything? In this thread Uncommon Descent's greatest intellectual, Joe "Security Clearance" Gallien, is asked to compute the CSI in a string of 30 digits and he can't even do that! If you creationist idiots can't handle a string of 30 digits, how are you gonna handle the human genome? When creationists want to know how to compute CSI, they have to ask an evolutionist. Mike Elzinga presented math to show that rocks have CSI. What math have you presented, creationist? It's math or GTFO. If you know what CSI is, then copy and paste the equation. Go ahead-- copy and paste the equation. Oh wait-- YOU CAN'T. Creationists can't compute squat.

bigdakine · 8 April 2013

David vun Kannon said: I'll say two things: 1 - I think Dembski and Marks' SfS papers were attempts to put some building blocks in place for later work, not the final work itself. In this sense we should not expect them to answer all questions and criticisms posed to his earlier work. 2 - Dembski's argument seems to be that if evolution can't work in all possible universes (fitness spaces), it can't work in a subset of them, or if it does, we don't live in that subset. He doesn't actually argue the last part, since that would force him to deal with real biology and physics instead of pure math. Beyond white noise, 'islands', or 'gemstones' of fitness, work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.
Somebody should throw a party so Dembski can meet the Anthropic argument.

Scott F · 8 April 2013

Mike Elzinga said:
Scott F said: That's okay. I'm a little "dated" myself. But, you really set me back there. I thought "condensed matter" was more "ordered" or "orderly", in some sense (something that could sit on a lab bench), and the ordering produces different emergent properties at different scales. A "plasma" seems to me to be about as "uncondensed" or as "unordered" as one could imagine, with the only orderedness consisting of variations in density, pressure, and temperature. Then what isn't "condensed matter"? Light? Neutrinos perhaps?
Not a problem; and thanks for linking to that reference. Phillip Anderson coined the term somewhere in the 1960s, as I remember. The field was expanding so rapidly that “solid state” was no longer appropriate. Phil Anderson has also contributed a great deal of insight into the current state of particle physics and the understanding of quark/gluon interactions. Condensed matter theory and experiment are now being applied on cosmological scales. But the 1960s seem like centuries ago in research advancement terms. Strongly interacting matter spans a whole lot of subfields now; and the theories and experimental methods are finding use across the entire spectrum. It may turn out that “condensed matter” will be obsolete some day; but I think that it is the “condensed” part that pulls all these seemingly diverse fields together (the pun is irresistible). Matter that is clustered in close proximity takes on rapidly emerging characteristics that provide a virtually infinite landscape to explore. No lack of experimental and theoretical opportunity here.
Hmm... Okay, but why that specific adjective? From what you're describing, it sounds like it pretty much covers "all" matter, with the possible exception of free electrons or neutrinos in deep space.

diogeneslamp0 · 8 April 2013

Hey pWQie, since you creationists know how to detect design, can you tell me which of the following two strings is intelligently designed? The other is a random scramble.
1. “chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi” 2. “ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru”
Please show all work by which you computed the CSI of the above strings to identify the Intelligently Designed one.

bigdakine · 8 April 2013

Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said: 10^-150 was Dembski's UPB... What does it have to do with what is described here? (Sorry about my ignorance...)
It is the Universal Probability Bound. It is intended to be a conservative calculation of how many individual events could have occurred in the Universe since it started. If there are 10^80 particles each of which could have changed state at most 10^70 times, you get that there have been at most 10^150 events ever, anywhere. So if you see a favorable outcome for which the probability of one that is that favorable, or more favorable, is less than 10^-150 you are justified in saying that it was not a random happenstance, that the dealer might have stacked the deck. For some unknown reason Dembski wrote in his recent reply to me that the UPB was
(a perennial sticking point for Shallit and Felsenstein)
when actually I've never complained about it. He cites it as 1 in 10^120 in the 2005 paper, based on a less-conservative but still conservative calculation. It doesn't matter much what it is -- the point is that real organisms have adaptations that would occur far less often than that, if all that ever happened was mutation, with no natural selection. There I agree with Dembski. The problem comes after that when we try to rule out the role of natural selection.
If you accept 10^80 as a good guesstimate for the number of particles, I submit that the max number of events per particle would be the age of the universe divided by the Planck time scale of 10^-43 secs. That yields approx. 10^60 events per particle, so 10^150 is not a bad, if liberal, upper bound.
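The back-of-envelope event count can be sketched in a few lines. The inputs below are my own rough order-of-magnitude figures (~10^80 particles, universe age in seconds, Planck time), not anyone's exact published values:

```python
import math

# Assumed order-of-magnitude inputs:
particles = 1e80                 # ~baryons in the observable universe
age_of_universe_s = 4.3e17       # ~13.7 billion years, in seconds
planck_time_s = 5.4e-44          # shortest physically meaningful interval

events_per_particle = age_of_universe_s / planck_time_s
total_events = particles * events_per_particle

exp_per_particle = math.log10(events_per_particle)   # ~60.9, i.e. ~10^61
exp_total = math.log10(total_events)                 # ~140.9, i.e. ~10^141
```

So on these assumptions the total comes out around 10^141 events, comfortably under the 10^150 bound, which is what makes 10^150 a generous upper limit.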

bigdakine · 8 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.
According to the calculations I borrowed from the ID/creationist community they most certainly do! Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism that can’t tell us where the cutoff is?
No Mike. Just because you can totally butcher a concept, that doesn't make it right. Obviously forensic scientists and archaeologists can tell where the cutoff is. Or are you totally oblivious to the fact that we can determine design from non-design, and that many fields depend upon it? And please demonstrate that condensed matter can exist without information.
Of course they can tell where the cutoff is, because they are familiar with the actions of known designers. ID, particularly CSI, claims to be able to detect design by unknown intelligences. Big difference. Condensed matter gets all the *information* it needs from the laws of physics. That is Mike's point. You need to tell us when, how, and from where the additional information required for biological organisms comes.

Joe Felsenstein · 8 April 2013

Chris Lawson said:
Joe Felsenstein said: For some unknown reason Dembski wrote in his recent reply to me that the UPB was
(a perennial sticking point for Shallit and Felsenstein)
when actually I've never complained about it. He cites it as 1 in 10^120 in the 2005 paper, based on a less-conservative but still conservative calculation.
Joe, I think you *should* have a problem with Dembski's UPB. First of all, there are not 10^80 particles in the universe, there are 10^80 baryons in the observable universe. Since (i) there is more to the universe than we can observe by an unknown factor, (ii) baryons are made up of subparticles (3 of them), (iii) baryons only make up about 4% of the mass of the observable universe, and (iv) free quarks are effectively impossible to count, especially as virtual particles are constantly popping into and out of existence, this means that Dembski has massively underestimated the number of particles in the universe. ... [Rest snipped: more UPBing]
I don't think it is terribly important whether we use 10^150 or 10^120 or whatever. I am sure I can show that even 10^300 would still leave us with no chance that monkeys with typewriters, typing that many genomes, would ever come up with a fish that could swim or a bird that could fly, and both able to reproduce themselves. The interesting issue is whether natural selection can move us along the fitness scale far enough to get into the top 10^-300 of the original distribution of genotypes. It can, because Dembski's conservation law doesn't stop it.

Joe Felsenstein · 8 April 2013

Mike Elzinga said: Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism that can’t tell us where the cutoff is?
Mike, I must not understand your argument (out of laziness, I am sure). In the arguments I was making we had a scale, and I used fitness as that scale. CSI was defined as being in the top 10^-150 of the original population distribution on that scale. What plays the role of the scale in your rock examples?

Mike Elzinga · 8 April 2013

Scott F said: Hmm... Okay, but why that specific adjective? From what you're describing, it sounds like it pretty much covers "all" matter, with the possible exception of free electrons or neutrinos in deep space.
Well, it’s turning out that much of the work once referred to as “solid state” physics expanded into questions about how matter transitions into that state. There was already a well-established field of fluid mechanics, but questions about how fluids behaved started getting into the issues of how loosely bound molecules produce the properties of fluids. Fluid interactions raise questions about the transition from a vapor state to the solid state; what actually goes on in the bonding process? What about the transition in the other direction; how do solids come apart when heated; how do we account for the enthalpies of melting and vaporization? It now turns out that we can actually predict what these will be for many organic compounds.

But what about loosely bound systems that are now referred to as soft matter? These are things like polymers and other complicated chains of molecules that can have structure but deform easily. Their binding energies are comparable to the kinetic energies in their thermal motion. This gets us into the physics of organic compounds and the molecules involved in life. The advances in supercomputer simulations have allowed us to check out our understanding of matter under these conditions.

But now we can also begin to simulate matter condensing to form galaxies and stars. There was already a very well-developed study of matter under extreme conditions in the simulations of nuclear explosions and other kinds of extreme phenomena. Much of that work was, and still is, classified. Simulations are done on star formation, on exploding stars, on neutron stars, on black holes, and all sorts of exotic condensations of matter. These simulations, along with experimental data from real events, spiral in on a more complete understanding of how our universe behaves. No “intelligence” is required to build up these structures. So all matter interacts, at all levels, and has ever since the Big Bang.
There is a lot of interest in being able to simulate events going as far back in time as we can with the physics we already know. Many of these simulations are carried out on supercomputers and parallel processors using knowledge acquired from the study of matter more closely clustered.

The big, unspoken misconception that persists among the ID/creationists is that the second law of thermodynamics says that everything is coming apart. They learned that back in the early 1970s from Henry Morris if not earlier; and they have carried those misconceptions ever since. That is why they think “information” and “intelligence” are required to produce not just complexity, but complexity with complex behaviors. But it is exactly wrong; matter tends to condense and bind together, and in doing so sheds energy. It happens at all levels. It is a trade-off between kinetic energy and binding energies. The second law is required for this to occur; otherwise it is either totally elastic scattering or no interactions whatsoever, and nothing gets made from that.

We know from the study of condensed matter that complex properties emerge and are driven by thermal motions that are comparable with the potential energies of interaction among the constituents of these systems. ID/creationists are stuck in the past with almost no knowledge of chemistry and physics. They read for quote mining but not for understanding.

Scott F · 8 April 2013

Mike Elzinga said: Fluid interactions raise questions about the transition from a vapor state to the solid state; what actually goes on in the bonding process? What about the transition in the other direction; how do solids come apart when heated; how do we account for the enthalpies of melting and vaporization? It now turns out that we can actually predict what these will be for many organic compounds.
Really? Like, from first principles we can successfully predict bulk properties of, not just elemental but organic compounds as well? Cool! I hadn't realized that we were that far along, computationally. Are we to the point of actually designing materials from scratch with the desired bulk properties, rather than (say) just extrapolating from experimentation? Is that (for example) where they're coming up with these "high" temperature superconducting compounds? So, we really can design rocks?? :-) (Joe, sorry again if this is getting off topic. I'm sure we won't be long here.)

Mike Elzinga · 8 April 2013

Joe Felsenstein said:
Mike Elzinga said: Why can’t they be use on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism that can’t tell us where the cutoff is?
Mike, I must not understand your argument (out of laziness, I am sure). In the arguments I was making we had a scale, and I used fitness as that scale. CSI was defined as being in the top 10^-150 of the original population distribution on that scale. What plays the role of the scale in your rock examples?
Joe, the scale is the atoms and molecules scattered in a “primordial soup.” How probable was it - according to the ID conception of matter tending to come all apart, and all that Morris, Gish, Abel, and Sewell tornado-in-a-junkyard stuff - that they came together in just this specified structure? Is there any point on the scale at which ID advocates admit that matter condenses according to the laws of physics and chemistry, but beyond which “intelligence” and “information” suddenly take over and start pushing atoms and molecules into place from a “primordial soup”?

I was just “innocently” imitating what I have been observing over at UD. I copied from an example done on a stone artifact and from calculations being done on polymer chains and DNA. Nobody told me I couldn’t do this; and when one looks closely at a polycrystalline rock, it turns out that there is considerable detail and complexity that can be exactly specified. Complex and Specified; and if I take the log base 2, according to the recipe, I get Information.

So the question I am trying to get an answer for is the one I have been asking: “Where along the spectrum of complexity in condensed matter is it appropriate to use these calculations to ‘prove’ design?” With no clear demarcation, how do we rule out rocks? With no demarcation, how do we rule out organic compounds? I can specify these just as completely as a rock. What specific criteria do folks like Dembski and others over at UD use in deciding when to use the calculation to demonstrate design? It appears that they have already decided on design before they do the calculation; and that they can make the calculation come out in favor of design. Well, I can do that too; I did it with rocks, and I can do it with lots of other forms of condensed matter. All one needs to do is look carefully at the complexity and enumerate the details. So where is the cutoff? How do ID/creationists decide? I have not yet received an answer.
I am imitating what I see, so am I misrepresenting the calculation?
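The recipe described here - enumerate the exactly-specifiable details of a configuration, treat each as one of k equiprobable states, and take minus log base 2 of the chance probability - can be mechanized in a few lines. The grain counts below are purely illustrative, not anyone's actual rock calculation:

```python
import math

def chance_probability(n_parts, states_per_part):
    """Probability of one exact configuration if each of n_parts
    independently took one of states_per_part equiprobable states."""
    return float(states_per_part) ** -n_parts

def csi_bits(n_parts, states_per_part):
    """The 'information' of the recipe: -log2 of that probability."""
    return n_parts * math.log2(states_per_part)

# Illustrative only: a polycrystalline rock 'specified' grain by grain.
# 1000 grains, each in one of 20 distinguishable states:
bits = csi_bits(1000, 20)   # ~4322 bits; chance probability ~10^-1301
```

On these made-up numbers the rock sails past any 10^-150 bound, which is the point: the recipe produces "design" for anything whose details you bother to enumerate, unless some principled cutoff is supplied.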

Mike Elzinga · 9 April 2013

Scott F said: Really? Like, from first principles we can successfully predict bulk properties of, not just elemental but organic compounds as well? Cool! I hadn't realized that we were that far along, computationally.
Enthalpies, heat capacities, entropies; yes. There are tables of calculated values for various organic compounds that one can find on the internet. I located several and downloaded them, but I don’t remember the site. It’s drifting off topic, but I could look again tomorrow. In some of my research on infrared detecting CCD imagers, I was able to model from first principles the exact temperature dependence of the spectral response of these devices. That meant getting deep into the physics details using knowledge of condensed matter. The Air Force swooped in and classified all my work; and as far as I know from indirect evidence, it is still classified after nearly 30 years. Really pissed me off at the time; but when I learned the reasons, I reluctantly agreed with the decision. Some of our science would be much farther ahead if it were not for classification. Still irks me when I think about it.

TomS · 9 April 2013

Mike Elzinga said: What specific criteria do folks like Dembski and others over at UD use in deciding when to use the calculation to demonstrate design? It appears that they have already decided on design before they do the calculation; and that they can make the calculation come out in favor of design. Well, I can do that too; I did it with rocks, and I can do it with lots of other forms of condensed matter. All one needs to do is look carefully at the complexity and enumerate the details. So where is the cutoff? How do ID/creationists decide? I have not yet received an answer. I am imitating what I see, so am I misrepresenting the calculation?
OTOH, I do a calculation with design, rather than natural causes, as the cause, and I get an answer of an even smaller (maybe even zero) probability. Could it be that there is something wrong with this kind of calculation: that it doesn't show that design isn't ruled out (in all cases), nor that natural causes aren't ruled out in the case of rocks, nor that natural causes aren't ruled out in the case of biological molecules? Maybe this kind of calculation will present a very small probability no matter what causes and no matter what consequences? Could it be that there are unspecified grounds which determine whether the ID advocates will do the calculation? Maybe they will do the calculation only in the cases when they want to rule out natural causes?

Rolf · 9 April 2013

KlausH said:
Rolf said:
We have to deal with the issue of whether the LCCSI refutes the effectiveness of RM+NS, and we don’t need to fall back on post-1998 phenomena to do that.
All right, I am just trying to understand as best I can. Just this and I will shut up: If I may suggest a probable scenario using Dog genetics as a model:
Muppet dogs?
After I wrote I've been thinking it over and am considering the viewpoint that wrt a fitness landscape the how and why of mutations are irrelevant.

Chris Lawson · 9 April 2013

Joe Felsenstein said: I don't think it is terribly important whether we use 10^150 or 10^120 or whatever. I am sure I can show that even 10^300 would still leave us with no chance that monkeys with typewriters, typing that many genomes, would ever come up with a fish that could swim or a bird that could fly, and both able to reproduce themselves.
I agree, Joe, and that was my point. The very concept of UPB is sterile. UPB cannot be estimated. The probability of specific evolutionary events cannot be estimated. The belief that P(EVENT) < 1/UPB disproves the natural possibility of (EVENT) is a variation of the Prosecutor's Fallacy. Dembski's method fails at every step of the process...which is precisely why he doesn't use much precision when he calculates UPB and can swing from estimates of 10^120 to 10^150 so blithely. It's because the actual numbers are irrelevant to the argument, despite Dembski's insistence on it being a rigorous mathematical approach.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

diogeneslamp0 said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
Dembski's definition of CSI is what we are discussing here, and it does not involve "a biologically functional subsystem". It involves a scale, such as fitness, and how far out the individual (or the species) is on that scale, i.e., how well adapted it is. If you want to discuss your "ie" definition, that is off topic here.
No, Joe. It involves biologically functional subsystems. Read page 148 of "No Free Lunch": "Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems." CSI has nothing to do with what you said. What you said is nowhere in ID literature.
UHHH... except in the quote you just copied and pasted, UDite! Do you even read what you copy and paste? Here's a hint: before you tell Prof. Felsenstein to "read page 148 of No Free Lunch", you should first read page 148 of No Free Lunch yourself.
I read page 148. Do you have a point?

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: And please demonstrate condensed matter can exist without information.
Would you believe in rocks? How about water? Gold? Planets? Stars? If you have never seen anything like that, then I can’t demonstrate it for you. But you still haven’t answered the question; “Where along the spectrum of condensed matter does it become appropriate to use the calculations of the ID/creationists to 'prove' design?"
Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

Scott F said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Rocks do not exhibit CSI. And obviously artifacts cannot be accounted for via physics and chemistry.
According to the calculations I borrowed from the ID/creationist community they most certainly do! Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism that can’t tell us where the cutoff is?
No Mike. Just because you can totally butcher a concept, that doesn't make it right. Obviously forensic scientists and archaeologists can tell where the cutoff is. Or are you totally oblivious to the fact that we can determine design from non-design and many fields depend upon it? And please demonstrate condensed matter can exist without information.
Mike's math seemed pretty convincing, and I have yet to see him get an equation wrong, so I'm leaning toward believing him. Can you show me mathematically where Mike is wrong? Simply saying "No Mike" doesn't sound convincing. Please be specific where Mike "butchered" the concept. You say it is "obvious". I'm afraid that I don't see that it's "obvious" at all. Most rocks look quite "designed" to me. But forensic scientists and archaeologists don't use Dembski's formulation of "CSI". In fact, they don't use any formulation of CSI or ID. They rely on scientific knowledge of who, what, when, where, why, and how in order to conclude design. CSI and ID don't answer those questions. They appear to simply assume CSI a priori, and then create post hoc rationalizations. Or, perhaps they do. Maybe you could explain it to me? (Sorry, Joe, if this is off topic. I don't have the math chutzpah to critique Dembski's positions.)
That's just plain stupid. No one has to know "who, what, when, where, why, and how" before determining design. All of that comes from examining the design and evidence. Did we know "who, what, when, where, why, and how" before we said Stonehenge was designed? No. It's as if evos are proud to be ignorant...

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

diogeneslamp0 said: Hey pWQie, since you creationists know how to detect design, can you tell me which of the following two strings is intelligently designed? The other is a random scramble.
1. “chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi” 2. “ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru”
Please show all work by which you computed the CSI of the above string to identify the Intelligently Designed one.
Context- There isn't any way that nature produced either of those strings. BTW archaeologists and forensic scientists also claim to be able to detect design. Do you think they would pass your "test"?
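For what it's worth, diogeneslamp0's challenge can be sharpened: the two strings are exact anagrams of each other (the scramble preserves every letter count), so any measure that depends only on symbol frequencies, such as first-order Shannon entropy, cannot tell them apart. A quick sketch (my own, not from the thread):

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """First-order Shannon entropy in bits per symbol."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

s1 = "chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi"
s2 = "ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru"

assert Counter(s1) == Counter(s2)  # the scramble preserves letter frequencies
# The two entropies agree (up to floating-point rounding):
print(shannon_entropy(s1), shannon_entropy(s2))
```

Any statistic blind to arrangement gives the designed string and the scramble the same score, which is why the challenge asks for the actual CSI calculation, with work shown.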

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

diogeneslamp0 said:
Creationist pWQie writes: But go ahead Joe, attack your strawman, if it makes you feel good.
Felsenstein understands Dembski's math far better than almost anyone at Uncommon Descent; indeed, NO Intelligent Design advocate, including Dembski, understands Dembski's math better than evolutionists like Felsenstein, Shallit, Elsberry-- even Wein the undergraduate understood Dembski's math better than Dembski himself did. The undergrad made Dembski cry. NO ONE at Uncommonly Dense understands how to compute CSI, except possibly VJ Torley and Gpuccio-- this is proven by the infamous MathGRRL thread at UD where MathGRRL asked the Udites how to compute CSI for very simple cases and none of those egomaniacs at UD could calculate squat (except VJ Torley and Gpuccio). Remember the MathGrrl thread, UDite? We do. We remember. At comment #282 in the first MathGrrl thread IDer VJ Torley concludes that natural processes like gene duplication produce huge amounts of CSI, as defined by Dembski in his 2005 paper. At comment #309 he admits Dembski's definition of CSI needs to be revised. By comment #334 the IDer, the only IDer who could actually compute CSI, admits that CSI as defined by Dembski in his 2005 paper cannot indicate design.
VJ Torley writes: First, I agree that a high degree of CSI, as originally defined by Professor Dembski in his 2005 paper “Specification: The Pattern that Specifies Intelligence” is not sufficient by itself to warrant a design inference. [VJ Torley at Uncommon Descent]
Here's MathGRRL thread part two where the question was asked again. Same result: no ID creationist could answer it. And these are not two isolated cases. Consider the infamous thread of Joe "Security Clearance" Gallien, blogmeister of Intelligent Reasoning, who at UD is treated as a great intellectual. (He insists he is a scientist but can't show us his publications nor patents because his credentials are classified top secret, national security, see.) At this miserable thread, "Security Clearance" Joe was asked the simplest possible ID question: Is the sequence below,
100011101001011100010111010101
intelligently designed or not? Simple question, no? He responds as all IDers do: he insists that, in order for him to tell us the process by which it was made, Step #1 is that we must tell HIM the process by which it was made.
Security Clearance Joe sez: In order to tell if blipey's string- 100011101001011100010111010101- is designed or not I would need to know where he got it from. [Source]
No matter how simple or how complex such sequences are, no IDologues can ever compute their CSI, because in order for them to tell us the process by which it was made, Step #1 is that we must tell THEM the process by which it was made, as is proven in the two MathGrrl threads and the Joe Gallien thread. Admit it: No ID creationist anywhere (except possibly VJ Torley and Gpuccio) has a clue how to compute CSI. Most of the people who DO know how to compute CSI are evolutionists: Schneider, Felsenstein, Elsberry, Shallit, and me. Dembski can't apply his own math! If you ID creationists want to learn how to compute CSI, you ID creationists must ask an evolutionist, as the MathGRRL thread proved. You can't even ask Dembski-- he doesn't know.
MathGrrl is an asshole. He was shown how to calculate CSI. And what evidence do you have that says gene duplication is a blind watchmaker process? And blipey's string- CONTEXT matters. But seeing that you are scientifically illiterate you wouldn't know anything about that.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

diogeneslamp0 is lying. I followed its links. I read:

In order to tell if blipey's string- 100011101001011100010111010101- is designed or not I would need to know where he got it from.

For example, did it just pop into his bitty little head, was it found on the wall of a cave, was it on a piece of paper or what?

There wasn't any question about HOW the string was made. All the questions pertained to CONTEXT.

You people are just sick...

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

Chris Lawson said:
Joe Felsenstein said: I don't think it is terribly important whether we use 10^150 or 10^120 or whatever. I am sure I can show that even 10^300 would still leave us with no chance that monkeys with typewriters, typing that many genomes, would ever come up with a fish that could swim or a bird that could fly, and both able to reproduce themselves.
I agree, Joe, and that was my point. The very concept of UPB is sterile. UPB cannot be estimated. The probability of specific evolutionary events cannot be estimated. The belief that P(EVENT) < 1/UPB disproves the natural possibility of (EVENT) is a variation of the Prosecutor's Fallacy. Dembski's method fails at every step of the process...which is precisely why he doesn't use much precision when he calculates UPB and can swing from estimates of 10^120 to 10^150 so blithely. It's because the actual numbers are irrelevant to the argument, despite Dembski's insistence on it being a rigorous mathematical approach.
Chris, probability arguments are useless seeing that your position can't even demonstrate a feasibility. You sure as hell cannot show that natural selection can produce CSI. So what do you guys have besides strawmen of ID to attack?

patrickmay.myopenid.com · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.
Thanks Patrick. However that doesn't do anything. For one it starts with replicators. For another it doesn't produce CSI by Dembski's definition.
Proof by bald assertion is not particularly compelling. If you read the discussion at the link I posted, you will see that the measurement reflects exactly what Dembski wrote and that the evolutionary mechanisms modeled by several participants do, in fact, produce CSI by his definition. "Starting with replicators" is a red herring (or goalpost moving, take your pick).

diogeneslamp0 · 9 April 2013

patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.
Thanks Patrick. However that doesn't do anything. For one it starts with replicators. For another it doesn't produce CSI by Dembski's definition.
Proof by bald assertion is not particularly compelling. If you read the discussion at the link I posted, you will see that the measurement reflects exactly what Dembski wrote and that the evolutionary mechanisms modeled by several participants do, in fact, produce CSI by his definition. "Starting with replicators" is a red herring (or goalpost moving, take your pick).
Exactly. There's nothing in Dembski's alleged MATHEMATICAL proof of the LCCSI that suspends the LCCSI in the presence of replicators. That means that if even ONE counter-example is found of natural processes increasing CSI-- in fact there are many-- then Dembski's math is dead wrong. Which we knew already, because Elsberry, Shallit, Felsenstein, the undergrad Wein and others pointed out the fallacies in Dembski's math. There is no LCCSI. Moreover, Stephen Meyer is again claiming that information cannot increase by natural processes-- invoking the LCCSI-- during the Cambrian Explosion, but "replicators" existed for billions of years before the start of the Cambrian, and the Ediacaran period was full of complex animals including bilaterians that were the precursors to the Cambrian biota. Meyer's whole argument presupposes that there is NO suspension of the LCCSI in the presence of replicators, so our creationist friend pWQie is contradicting his authority Meyer.

diogeneslamp0 · 9 April 2013

Chris, probability arguments are useless seeing that your position can't even demonstrate a feasibility.
Gish gallop. Felsenstein, Shallit and others disproved Dembski's LCCSI, which means the creationists have NO argument. pWQie can't admit that the LCCSI is disproven, so he tries to change the subject. Give it up, the LCCSI is disproven and Dembski is exposed. Hey pWQie, why not copy and paste the equation for CSI right here, right now? Creationists can't copy and paste the equation because Intelligent Design is a fraud. Not an alternate model-- a fraud.

patrickmay.myopenid.com · 9 April 2013

bigdakine said:
Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said: 10^-150 was Dembski's UPB... What does it have to do with what is described here? (Sorry about my ignorance...)
It is the Universal Probability Bound. It is intended to be a conservative calculation of how many individual events could have occurred in the Universe since it started. If there are 10^80 particles each of which could have changed state at most 10^70 times, you get that there have been at most 10^150 events ever, anywhere. So if you see a favorable outcome for which the probability of one that is that favorable, or more favorable, is less than 10^-150 you are justified in saying that it was not a random happenstance, that the dealer might have stacked the deck. For some unknown reason Dembski wrote in his recent reply to me that the UPB was
(a perennial sticking point for Shallit and Felsenstein)
when actually I've never complained about it. He cites it as 1 in 10^120 in the 2005 paper, based on a less-conservative but still conservative calculation. It doesn't matter much what it is -- the point is that real organisms have adaptations that would occur far less often than that, if all that ever happened was mutation, with no natural selection. There I agree with Dembski. The problem comes after that when we try to rule out the role of natural selection.
If you accept 10^80 as a good guesstimate for the number of particles, I submit that the max number of events would be the age of the universe divided by the Planck time of 10^-43 seconds. That yields approx. 10^60 events per particle, so 10^150 is not a bad, if liberal, upper bound.
Even if Dembski's UPB weren't fatally flawed for other reasons, that calculation doesn't take into account the vast number of possible interactions among those particles.
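Both estimates in this exchange are just exponent bookkeeping. A sketch using the figures from the comments (approximate values, for illustration only):

```python
from math import log10

# Dembski's UPB as Joe Felsenstein describes it:
particles = 1e80          # particles in the observable universe
state_changes = 1e70      # maximum state changes per particle
upb_events = particles * state_changes            # 10^150 events, ever

# bigdakine's variant: age of universe / Planck time, per particle
age_universe_s = 4.3e17   # ~13.8 billion years, in seconds
planck_time_s = 1e-43
events_per_particle = age_universe_s / planck_time_s   # ~4.3e60
total_events = particles * events_per_particle         # ~4.3e140

print(log10(upb_events), log10(total_events))
```

Either way the bound lands within about ten orders of magnitude of 10^150, which is why, as Felsenstein says above, the exact figure matters little to the argument.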

Dave Lovell · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
diogeneslamp0 said: Hey pWQie, since you creationists know how to detect design, can you tell me which of the following two strings is intelligently designed? The other is a random scramble. 1. “chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi” 2. “ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru” Please show all work by which you computed the CSI of the above string to identify the Intelligently Designed one.
Context- There isn't any way that nature produced either of those strings.
At the word games again BOZO? If the use of a designed encoding method to specify a system precludes an assessment of whether or not it was designed, how can you ever give an example of something that is not designed?

patrickmay.myopenid.com · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: MathGrrl is an asshole. He was shown how to calculate CSI.
Then it shouldn't be a problem for you to provide a link to the where that was done. vjtorley tried but got the "wrong" answer and so later recanted, claiming that it was unreasonable to expect to be able to calculate it. Support your claim or retract it.

ogremk5 · 9 April 2013

One thing always bothered me about that UPB of 500 bits. It depends on what you are calculating.

If you use DNA, then the UPB is 250 nucleotides, because each nucleotide needs 2 bits to represent it.

Let's take a DNA strand that is 252 nucleotides long. That's above the UPB. But when we translate that into a protein, the resulting chain is 84 amino acids.

There are 20 possible amino acids, so each amino acid requires 5 bits. 84*5 is 420 bits of information... less than the UPB.

It seems to me that is a fundamental flaw in Dembski's work. If you measure the DNA, you could get something that must be designed. But if that DNA sequence is translated, then you won't be able to determine design on the protein.

BTW: I'm using the exact method that JoeG and Gordon Mullings use for calculating CSI.
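ogremk5's arithmetic checks out under his stated conventions (2 bits per nucleotide; 5 bits, i.e. ceil(log2 20), per amino acid); with the exact value log2(20) ≈ 4.32 bits the protein side falls even further below the bound. A sketch of the 252-nucleotide example from the comment:

```python
from math import ceil, log2

UPB_BITS = 500  # Dembski's 500-bit bound (log2(10^150) is about 498)

dna_len = 252                           # nucleotides
dna_bits = dna_len * 2                  # 4 bases -> 2 bits each: 504 bits

aa_len = dna_len // 3                   # translated protein: 84 amino acids
aa_bits_ceil = aa_len * ceil(log2(20))  # ogremk5's convention: 84 * 5 = 420
aa_bits_exact = aa_len * log2(20)       # information-theoretic: ~363 bits

# The same molecule, measured before and after translation:
print(dna_bits > UPB_BITS, aa_bits_ceil > UPB_BITS)  # True False
```

The DNA reading clears the bound while the protein reading of the very same sequence does not, which is exactly the inconsistency ogremk5 is pointing at.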

DS · 9 April 2013

Dave called it first. Joe B. is obsessed with Joe F. You knew he would show up here sooner or later.

As for the science. Dembski hasn't considered the many different ways in which selection can operate. Therefore, his conclusions are biologically unfounded. For example, where in his equations does he consider directional selection, disruptive selection, or stabilizing selection? What about sexual selection, frequency dependent selection or fluctuating selection? How about rate of recombination, hitchhiking or meiotic drive?

Everybody knows that it's impossible for a bumblebee to fly. All you have to do is ignore all of the biology and you can come up with a "theory" to prove it.

Robin · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.
Utter nonsense. No archaeologists, geologists, or forensic scientists know anything about design (as portrayed by the folks at UD), nor would they know if something was designed if it came up and bit them. They can (and do) know human-specified activity. And in point of fact, they only know this HSA in very narrow and specific contexts; e.g., within the scope of historic understanding. The point is, however, that there is no archaeologist, geologist, or forensic scientist who even thinks about, much less cares about, "CSI". They don't look for specification or "design" or anything like that. They look for human specific activities.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.
Thanks Patrick. However that doesn't do anything. For one it starts with replicators. For another it doesn't produce CSI by Dembski's definition.
Proof by bald assertion is not particularly compelling. If you read the discussion at the link I posted, you will see that the measurement reflects exactly what Dembski wrote and that the evolutionary mechanisms modeled by several participants do, in fact, produce CSI by his definition. "Starting with replicators" is a red herring (or goalpost moving, take your pick).
The bald assertion is saying that it is an example of natural selection producing CSI. And starting with replicators is the whole deal. Anyone who reads "No Free Lunch" knows that. And Elizabeth claims to have read it. Has Elizabeth written to Dembski and told him of her discovery? I am sure he would love the laugh.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

Robin said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.
Utter nonsense. No archaeologists, geologists, or forensic scientists know anything about design (as portrayed by the folks at UD), nor would they know if something was designed if it came up and bit them. They can (and do) know human-specified activity. And in point of fact, they only know this HSA in very narrow and specific contexts; e.g., within the scope of historic understanding. The point is, however, that there is no archaeologist, geologist, or forensic scientist who even thinks about, much less cares about, "CSI". They don't look for specification or "design" or anything like that. They look for human specific activities.
What a jerk. We were discussing ROCKS and how we can determine design from non-design.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: MathGrrl is an asshole. He was shown how to calculate CSI.
Then it shouldn't be a problem for you to provide a link to the where that was done. vjtorley tried but got the "wrong" answer and so later recanted, claiming that it was unreasonable to expect to be able to calculate it. Support your claim or retract it.
Shannon told us how to calculate the number of bits. So wrt nucleotides each has 2 bits of information carrying capacity. How many nucleotides does it take to make a functional protein? Pick one and do the math.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

ogremk5 said: One thing always bothered me about that UPB of 500 bits. It depends on what you are calculating. If you use DNA, then the UPB is 250 nucleotides, because each nucleotide needs 2 bits to represent it. Let's take a DNA strand that is 252 nucleotides long. That's above the UPB. But when we translate that into a protein, the resulting chain is 84 amino acids. There are 20 possible amino acids, so each amino acid requires 5 bits. 84*5 is 420 bits of information... less than the UPB. It seems to me that is a fundamental flaw in Dembski's work. If you measure the DNA, you could get something that must be designed. But if that DNA sequence is translated, then you won't be able to determine design on the protein. BTW: I'm using the exact method that JoeG and Gordon Mullings use for calculating CSI.
There are 64 coding codons. Most amino acids have more than one coding codon. There may be only twenty amino acids but there are more than twenty different tRNAs carrying/ ferrying the amino acids.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

patrickmay.myopenid.com said:
bigdakine said:
Joe Felsenstein said:
https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said: 10^-150 was Dembski's UPB... What does it have to do with what is described here? (Sorry about my ignorance...)
It is the Universal Probability Bound. It is intended to be a conservative calculation of how many individual events could have occurred in the Universe since it started. If there are 10^80 particles each of which could have changed state at most 10^70 times, you get that there have been at most 10^150 events ever, anywhere. So if you see a favorable outcome for which the probability of one that is that favorable, or more favorable, is less than 10^-150 you are justified in saying that it was not a random happenstance, that the dealer might have stacked the deck. For some unknown reason Dembski wrote in his recent reply to me that the UPB was
(a perennial sticking point for Shallit and Felsenstein)
when actually I've never complained about it. He cites it as 1 in 10^120 in the 2005 paper, based on a less-conservative but still conservative calculation. It doesn't matter much what it is -- the point is that real organisms have adaptations that would occur far less often than that, if all that ever happened was mutation, with no natural selection. There I agree with Dembski. The problem comes after that when we try to rule out the role of natural selection.
If you accept 10^80 as a good guesstimate for the number of particles, I submit that the max number of events would be the age of the universe divided by the Planck time of 10^-43 seconds. That yields approx. 10^60 events per particle, so 10^150 is not a bad, if liberal, upper bound.
Even if Dembski's UPB weren't fatally flawed for other reasons, that calculation doesn't take into account the vast number of possible interactions among those particles.
Even if Intelligent Design is completely wrong, YOU still don't have any evidence to support unguided/ blind watchmaker evolution. So get bent...

DS · 9 April 2013

Joe Felsenstein said: Folks, I am going to patrol (or "patroll") this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.
Time for a dump to the bathroom wall. I guess you are going to have to check the ISP for Joe every time he changes names. Too bad it can't be done automatically, it would sure save a lot of grief.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

DS said:
Joe Felsenstein said: Folks, I am going to patrol (or "patroll") this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.
Time for a dump to the bathroom wall. I guess you are going to have to check the ISP for Joe every time he changes names. Too bad it can't be done automatically, it would sure save a lot of grief.
The OP needs to be dumped. And it would save everyone some grief if you would stop erecting and attacking strawmen. But you won't because that is all you have...

patrickmay.myopenid.com · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: MathGrrl is an asshole. He was shown how to calculate CSI.
Then it shouldn't be a problem for you to provide a link to the where that was done. vjtorley tried but got the "wrong" answer and so later recanted, claiming that it was unreasonable to expect to be able to calculate it. Support your claim or retract it.
Shannon told us how to calculate the number of bits. So wrt nucleotides each has 2 bits of information carrying capacity. How many nucleotides does it take to make a functional protein? Pick one and do the math.
You claimed that Mathgrrl was shown how to calculate CSI. Provide a link to where that was done, aside from vjtorley who recanted as already noted, or retract your claim.

ogremk5 · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
ogremk5 said: One thing always bothered me about that UPB of 500 bits. It depends on what you are calculating. If you use DNA, then the UPB is 250 nucleotides, because each nucleotide needs 2 bits to represent it. Let's take a DNA strand that is 252 nucleotides long. That's above the UPB. But when we translate that into a protein, the resulting chain is 84 amino acids. There are 20 possible amino acids, so each amino acid requires 5 bits. 84*5 is 420 bits of information... less than the UPB. It seems to me that this is a fundamental flaw in Dembski's work. If you measure the DNA, you could get something that must be designed. But if that DNA sequence is translated, then you won't be able to determine design on the protein. BTW: I'm using the exact method that JoeG and Gordon Mullings use for calculating CSI.
There are 64 coding codons. Most amino acids have more than one coding codon. There may be only twenty amino acids but there are more than twenty different tRNAs carrying/ ferrying the amino acids.
That's very true, but the ID discussion doesn't talk about how DNA was translated into a protein. Instead, they specifically talk about whether the protein (or whatever) could have formed from random chance. That doesn't use the codon chart at all. They completely ignore the fact that no modern protein appeared due to random alignment of AAs. Which is part of the point. In reply to Robin... if an IDist was a forensic specialist, a crime scene conversation might go something like this. ID Forensic Specialist: "Yes ma'am, we've confirmed that your husband was murdered. All indicators point to this being a non-random, designed event. Unfortunately, the principles of ID specifically avoid trying to find the designer. Have a nice day."
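ogremk5's arithmetic above can be checked with a short script. This is a minimal sketch in Python, assuming the bookkeeping used in the comment: 2 bits per nucleotide (log2 of 4), and log2(20) ≈ 4.32 rounded up to 5 bits per amino acid.

```python
import math

UPB_BITS = 500  # Dembski's universal probability bound, in bits

def dna_bits(n_nucleotides):
    # 4 possible nucleotides -> log2(4) = 2 bits of carrying capacity each
    return n_nucleotides * math.log2(4)

def protein_bits(n_residues):
    # the comment rounds log2(20) ~ 4.32 up to 5 bits per amino acid
    return n_residues * 5

# the same sequence, measured two ways
dna = dna_bits(252)         # a 252-nucleotide coding strand
protein = protein_bits(84)  # its 84-residue translation

print(dna, protein)                        # 504.0 420
print(dna > UPB_BITS, protein > UPB_BITS)  # True False
```

Same molecule, two encoding conventions, opposite verdicts relative to the 500-bit bound: that is the flaw ogremk5 is pointing at.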

Robin · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Robin said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.
Utter nonsense. No archaeologists, geologists, or forensic scientists know anything about design (as portrayed by the folks at UD), nor would they know if something was designed if it came up and bit them. They can (and do) know human-specified activity. And in point of fact, they only know this HSA in very narrow and specific contexts; e.g., within the scope of historic understanding. The point is, however, that there is no archaeologist, geologist, or forensic scientist who even thinks about, much less cares about, "CSI". They don't look for specification or "design" or anything like that. They look for human specific activities.
What a jerk. We were discussing ROCKS and how we can determine design from not.
Sorry Joe, but that doesn't rebut my response. IDists may well claim they can detect "design" and Mike's example of the problem of determining the CSI of rocks certainly reveals the problem IDists face with that claim, but that doesn't change the fact that your claim above that archaeologists, geologists, and forensic scientists "know design from non-design" is just plain absurd. They don't know any such thing. They know human activity and the contextual historic association.

TomS · 9 April 2013

Robin said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.
Utter nonsense. No archaeologists, geologists, or forensic scientists know anything about design (as portrayed by the folks at UD), nor would they know if something was designed if it came up and bit them. They can (and do) know human-specified activity. And in point of fact, they only know this HSA in very narrow and specific contexts; e.g., within the scope of historic understanding. The point is, however, that there is no archaeologist, geologist, or forensic scientist who even thinks about, much less cares about, "CSI". They don't look for specification or "design" or anything like that. They look for human specific activities.
Moreover, no one ever offers "intelligent design" as an account for (much less explanation of) any event, phenomenon, or feature. The closest that one might think of is that there is a coroner's verdict of "natural causes": I don't know enough about the law to say whether there could be a verdict of "not natural causes"; but I'd bet that "intelligent design" would never be considered. There is, of course, "homicide", but that designates "who" and "why". Oh, by the way, not even advocates of ID offer ID as an account for something.

David vun Kannon · 9 April 2013

My design detector has detected a strong resemblance between several 'masked panda' posts and the infamous Internet Tough Guy Joe Gallien himself. Insults, short sentence fragments, 'evidence', misspellings, multiple posts - it's all there.

DS · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
DS said:
Joe Felsenstein said: Folks, I am going to patrol (or "patroll") this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.
Time for a dump to the bathroom wall. I guess you are going to have to check the ISP for Joe every time he changes names. Too bad it can't be done automatically; it would sure save a lot of grief.
The OP needs to be dumped. And it would save everyone some grief if you would stop erecting and attacking strawmen. But you won't because that is all you have...
I will thank you to leave my erections out of this.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

ogremk5 said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
ogremk5 said: One thing always bothered me about that UPB of 500 bits. It depends on what you are calculating. If you use DNA, then the UPB is 250 nucleotides, because each nucleotide needs 2 bits to represent it. Let's take a DNA strand that is 252 nucleotides long. That's above the UPB. But when we translate that into a protein, the resulting chain is 84 amino acids. There are 20 possible amino acids, so each amino acid requires 5 bits. 84*5 is 420 bits of information... less than the UPB. It seems to me that this is a fundamental flaw in Dembski's work. If you measure the DNA, you could get something that must be designed. But if that DNA sequence is translated, then you won't be able to determine design on the protein. BTW: I'm using the exact method that JoeG and Gordon Mullings use for calculating CSI.
There are 64 coding codons. Most amino acids have more than one coding codon. There may be only twenty amino acids but there are more than twenty different tRNAs carrying/ ferrying the amino acids.
That's very true, but the ID discussion doesn't talk about how DNA was translated into a protein. Instead, they specifically talk about whether the protein (or whatever) could have formed from random chance. That doesn't use the codon chart at all. They completely ignore the fact that no modern protein appeared due to random alignment of AAs. Which is part of the point. In reply to Robin... if an IDist was a forensic specialist, a crime scene conversation might go something like this. ID Forensic Specialist: "Yes ma'am, we've confirmed that your husband was murdered. All indicators point to this being a non-random, designed event. Unfortunately, the principles of ID specifically avoid trying to find the designer. Have a nice day."
What a moron. Nothing in ID prevents anyone from looking into who was the designer. And transcription and translation are evidence for Intelligent Design. The genetic code is arbitrary, meaning it is not determined by any law.

DS · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
ogremk5 said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
ogremk5 said: One thing always bothered me about that UPB of 500 bits. It depends on what you are calculating. If you use DNA, then the UPB is 250 nucleotides, because each nucleotide needs 2 bits to represent it. Let's take a DNA strand that is 252 nucleotides long. That's above the UPB. But when we translate that into a protein, the resulting chain is 84 amino acids. There are 20 possible amino acids, so each amino acid requires 5 bits. 84*5 is 420 bits of information... less than the UPB. It seems to me that this is a fundamental flaw in Dembski's work. If you measure the DNA, you could get something that must be designed. But if that DNA sequence is translated, then you won't be able to determine design on the protein. BTW: I'm using the exact method that JoeG and Gordon Mullings use for calculating CSI.
There are 64 coding codons. Most amino acids have more than one coding codon. There may be only twenty amino acids but there are more than twenty different tRNAs carrying/ ferrying the amino acids.
That's very true, but the ID discussion doesn't talk about how DNA was translated into a protein. Instead, they specifically talk about whether the protein (or whatever) could have formed from random chance. That doesn't use the codon chart at all. They completely ignore the fact that no modern protein appeared due to random alignment of AAs. Which is part of the point. In reply to Robin... if an IDist was a forensic specialist, a crime scene conversation might go something like this. ID Forensic Specialist: "Yes ma'am, we've confirmed that your husband was murdered. All indicators point to this being a non-random, designed event. Unfortunately, the principles of ID specifically avoid trying to find the designer. Have a nice day."
What a moron. Nothing in ID prevents anyone from looking into who was the designer. And transcription and translation are evidence for Intelligent Design. The genetic code is arbitrary, meaning it is not determined by any law.
That's funny, so is CSI.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: MathGrrl is an asshole. He was shown how to calculate CSI.
Then it shouldn't be a problem for you to provide a link to where that was done. vjtorley tried but got the "wrong" answer and so later recanted, claiming that it was unreasonable to expect to be able to calculate it. Support your claim or retract it.
Shannon told us how to calculate the number of bits. So wrt nucleotides each has 2 bits of information carrying capacity. How many nucleotides does it take to make a functional protein? Pick one and do the math.
You claimed that Mathgrrl was shown how to calculate CSI. Provide a link to where that was done, aside from vjtorley who recanted as already noted, or retract your claim.
It was done over on UD- in the threads you participated in. Your denials of the facts mean nothing.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

Robin said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Robin said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.
Utter nonsense. No archaeologists, geologists, or forensic scientists know anything about design (as portrayed by the folks at UD), nor would they know if something was designed if it came up and bit them. They can (and do) know human-specified activity. And in point of fact, they only know this HSA in very narrow and specific contexts; e.g., within the scope of historic understanding. The point is, however, that there is no archaeologist, geologist, or forensic scientist who even thinks about, much less cares about, "CSI". They don't look for specification or "design" or anything like that. They look for human specific activities.
What a jerk. We were discussing ROCKS and how we can determine design from not.
Sorry Joe, but that doesn't rebut my response. IDists may well claim they can detect "design" and Mike's example of the problem of determining the CSI of rocks certainly reveals the problem IDists face with that claim, but that doesn't change the fact that your claim above that archaeologists, geologists, and forensic scientists "know design from non-design" is just plain absurd. They don't know any such thing. They know human activity and the contextual historic association.
CSI is the wrong tool to use on rocks. And no, they don't know if it was human activity or not. And they do determine design from not.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

TomS said:
Robin said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Nice non-sequitur- and as I said archaeologists, geologists and forensic scientists know design from non-design. And I certainly wouldn't use CSI to detect if an object was designed. Perhaps you would but then again you would use a chain-saw to open a can of tuna.
Utter nonsense. No archaeologists, geologists, or forensic scientists know anything about design (as portrayed by the folks at UD), nor would they know if something was designed if it came up and bit them. They can (and do) know human-specified activity. And in point of fact, they only know this HSA in very narrow and specific contexts; e.g., within the scope of historic understanding. The point is, however, that there is no archaeologist, geologist, or forensic scientist who even thinks about, much less cares about, "CSI". They don't look for specification or "design" or anything like that. They look for human specific activities.
Moreover, no one ever offers "intelligent design" as an account for (much less explanation of) any event, phenomenon, or feature. The closest that one might think of is that there is a coroner's verdict of "natural causes": I don't know enough about the law to say whether there could be a verdict of "not natural causes"; but I'd bet that "intelligent design" would never be considered. There is, of course, "homicide", but that designates "who" and "why". Oh, by the way, not even advocates of ID offer ID as an account for something.
Intelligent design is offered to explain artifacts and crimes.

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ · 9 April 2013

DS said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
DS said:
Joe Felsenstein said: Folks, I am going to patrol (or "patroll") this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.
Time for a dump to the bathroom wall. I guess you are going to have to check the ISP for Joe every time he changes names. Too bad it can't be done automatically; it would sure save a lot of grief.
The OP needs to be dumped. And it would save everyone some grief if you would stop erecting and attacking strawmen. But you won't because that is all you have...
I will thank you to leave my erections out of this.
Your "erection" can't even be measured. Methinks you have an "innie"

patrickmay.myopenid.com · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: MathGrrl is an asshole. He was shown how to calculate CSI.
Then it shouldn't be a problem for you to provide a link to where that was done. vjtorley tried but got the "wrong" answer and so later recanted, claiming that it was unreasonable to expect to be able to calculate it. Support your claim or retract it.
Shannon told us how to calculate the number of bits. So wrt nucleotides each has 2 bits of information carrying capacity. How many nucleotides does it take to make a functional protein? Pick one and do the math.
You claimed that Mathgrrl was shown how to calculate CSI. Provide a link to where that was done, aside from vjtorley who recanted as already noted, or retract your claim.
It was done over on UD- in the threads you participated in. Your denials of the facts mean nothing.
And yet you still can't provide a link. It's almost as though you are lying. Either support your claim or retract it.

diogeneslamp0 · 9 April 2013

pWQie is angry that the inability of creationists to compute CSI has been exposed! Let's compare what pWQie says to the truth.
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: diogeneslamp0 is lying. I followed its links [to Joe Gallien]. I read: In order to tell if blipey's string- 100011101001011100010111010101- is designed or not I would need to know where he got it from. For example, did it just pop into his bitty little head, was it found on the wall of a cave, was it on a piece of paper or what? There wasn't any question about HOW the string was made. All the questions pertained to CONTEXT. You people are just sick...
Joe "Security Clearance" Gallien asks the question
did it just pop into his bitty little head
pWQie says Joe Gallien says
There wasn't any question about HOW the string was made.
Joe "Security Clearance" Gallien asks the question
did it just pop into his bitty little head
pWQie says Joe Gallien says
All the questions pertained to CONTEXT.
So pWQie is lying. But suppose that Joe Gallien ALSO asked questions about context. So what? In Dembski's "proof" of the LCCSI, there's no mention of context. It's not in the equations. Dembski (and indeed ALL Intelligent Design proponents) say that their methods can infer design based on a PATTERN with no knowledge of where the PATTERN came from. If we give you a PATTERN, and you ask "what's the context?", then Intelligent Design is a fraud.
Dembski wrote, 2005: And this brings us to the other objection, namely, that we must know something about a designer’s nature, purposes, propensities, causal powers, and methods of implementing design before we can legitimately determine whether an object is designed... By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause. ...Which approach is correct? I submit the latter, which, happily, is also consistent with employing specified complexity to infer design. ...what if the designer actually responsible for the object brought it about by means unfathomable to us...? This [problem]... points up that what leads us to infer design is not knowledge of designers and their capabilities but knowledge of the patterns exhibited by designed objects (a point that specified complexity captures precisely).
Got that, pWQie? If we give you a pattern and you cannot infer design from it, but instead ask for more information beyond the pattern itself, then Intelligent Design is a fraud.
Dembski wrote, 2005: ...Our ability to recognize design must therefore arise independently of induction and therefore independently of any independent knowledge requirement about the capacities of designers. In fact, it arises directly from the patterns in the world that signal intelligence, to wit, from specifications. [William Dembski, "Specification: The Pattern that Signifies Intelligence", 2005]
Dembski and all ID proponents sell their method as recognizing Intelligent Design just from a pattern and ONLY a pattern. But if you give them a pattern, they ask you how it was created. Moreover, it's a fact that Dembski has about 5 or 6 different ways to compute CSI. To compute CSI, the first question he asks is: by what process was this pattern made? If we know the pattern was made by a person (artwork, technology, etc.), then Dembski uses one of his three "guaranteed success" methods of computing CSI. If we know the pattern was made by a natural process (crystallization, magnetization, etc.), by observed examples of the evolution of new complexity (new proteins, new enzyme functions), or by genetic algorithms, artificial life, or in silico simulations of RM+NS, then Dembski uses one of his three "guaranteed failure" methods of computing CSI. In this way, the hoax of "No False Positives for the Design Inference" is perpetrated. But Intelligent Design was sold as a product claiming to tell us about the process by which things were made. In practice, to compute CSI, the first step is that Dembski demands we tell him how it was made. That is selling a product under false pretenses, which is fraud.

TomS · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Intelligent design is offered to explain artifacts and crimes
Give an example. To the contrary, imagine a scenario in which something presents a puzzle, and how inappropriate "intelligent design" is as an explanation. "Why is there that smile on the Mona Lisa?" "Because it was intelligently designed." "Why is there an infield fly rule in baseball?" "Because it is intelligently designed." "How can airplanes defy gravity and fly?" "By being intelligently designed." Here are some things which are intelligently designed: centaurs, flying carpets, Penrose triangles. Note that being intelligently designed cannot account for anything about them, not even their existence. How, then, can intelligent design account for anything about anything?

David vun Kannon · 9 April 2013

Richard B. Hoppe said: I second the request for a reference.
diogeneslamp0 said:
David vun Kannon said: ...work on deceptive fitness functions shows that GAs can solve problems even when most of the local fitness slopes point away from the global optimum.
Marvelous-- do you have a reference for that? It puts the axe to Axe.
The basic idea of a deceptive fitness function is simple. Axe's illustration shows a spike towering over a rough plain of noise. To make it deceptive, put the spike at the bottom of a bowl, with the surface sloping away from it in every direction. The global optimum is at the center, but the local optimum found by hill climbing is a point on the rim. Another example: take a hypercube and a population with 4-bit genotypes. The fitness (phenotype) of each possible bit pattern goes up with the number of 0's in the pattern, except that 1111 has a higher fitness than all other patterns. A fitness function can be more or less deceptive, depending on the number of bits involved and the steepness of the slope away from the global optimum. Deceptive fitness functions were introduced by David Goldberg, one of the early researchers in the GA field. His excellent 2002 monograph, The Design of Innovation, divides the challenges to a GA into noise, a non-stationary fitness function over time, and deception. Most of this book is about finding the edges of GA performance; it is truly a book Dembski wishes he had written. You can find many references by Googling the phrase, but this book is an in-depth treatment. A reminder: Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
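The 4-bit example above is small enough to enumerate exhaustively. Here is a toy sketch (my own Python, not Goldberg's code): fitness rewards zeros everywhere except for the isolated spike at 1111, and a steepest-ascent one-bit-flip hill climber is deceived toward 0000 from 11 of the 16 starting points; only 1111 itself and its four immediate neighbors find the spike.

```python
from itertools import product

def fitness(bits):
    # deceptive landscape: more zeros always looks better locally,
    # but the isolated spike at 1111 is the true global optimum
    if bits == (1, 1, 1, 1):
        return 5
    return bits.count(0)

def hill_climb(start):
    # steepest-ascent hill climbing over one-bit-flip neighbors
    current = start
    while True:
        neighbors = [tuple(b ^ (i == j) for j, b in enumerate(current))
                     for i in range(4)]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(current):
            return current
        current = best

endpoints = [hill_climb(s) for s in product((0, 1), repeat=4)]
print(endpoints.count((1, 1, 1, 1)))  # 5: only 1111 and its 4 neighbors find the spike
print(endpoints.count((0, 0, 0, 0)))  # 11: every other start is deceived to 0000
```

Making the genotype longer, or the slope toward all-zeros steeper, shrinks the spike's basin of attraction further, which is exactly the "more or less deceptive" knob described above.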

Robin · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
Robin said: Sorry Joe, but that doesn't rebut my response. IDists may well claim they can detect "design" and Mike's example of the problem of determining the CSI of rocks certainly reveals the problem IDists face with that claim, but that doesn't change the fact that your claim above that archaeologists, geologists, and forensic scientists "know design from non-design" is just plain absurd. They don't know any such thing. They know human activity and the contextual historic association.
CSI is the wrong tool to use on rocks.
How do you know? Where is that written? Such a claim doesn't even make sense given what Dembski wrote in his Search for a Search papers with Marks. I'd love to see an explanation for that assertion. Oh, and Joe... according to Dembski, CSI isn't a tool; it's a measure. You might want to actually read some of the documents that detail the subject you are attempting to defend.
And no, they don't know if it was human activity of not.
Of course they do. That's what their jobs are limited to: Archaeology, or archeology (from Greek ἀρχαιολογία, archaiologia – ἀρχαῖος, arkhaios, "ancient"; and -λογία, -logia, "-logy"), is the study of human activity in the past, primarily through the recovery and analysis of the material culture and environmental data that they have left behind, which includes artifacts, architecture, biofacts and cultural landscapes (the archaeological record). Forensic science: the application of scientific knowledge and methodology to legal problems and criminal investigations. (And yes Joe, criminal investigations are limited to human activity.) As for geologists, they study earth materials and the history thereof. Nothing about design in their work: http://geology.com/articles/what-is-geology.shtml http://en.wikipedia.org/wiki/Geologist
And they do determine design from not.
False as shown.

Robin · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: Intelligent design is offered to explain artifacts and crimes
Still false.

patrickmay.myopenid.com · 9 April 2013

diogeneslamp0 said:
Dembski wrote, 2005: By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.
The bolded part is the strongest claim made by Dembski. It would be fascinating if it were true, but he has never shown that to be the case.

diogeneslamp0 · 9 April 2013

David vun Kannon said: A reminder, Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
No, I did not know that. Do you have a link or reference to Dembski's failure?

DS · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
DS said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
DS said:
Joe Felsenstein said: Folks, I am going to patrol (or "patroll") this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.
Time for a dump to the bathroom wall. I guess you are going to have to check the ISP for Joe every time he changes names. Too bad it can't be done automatically; it would sure save a lot of grief.
The OP needs to be dumped. And it would save everyone some grief if you would stop erecting and attacking strawmen. But you won't because that is all you have...
I will thank you to leave my erections out of this.
Your "erection" can't even be measured. Methinks you have an "innie"
Thanks for addressing the science. The bathroom wall awaits, followed by banishment. Enjoy.

diogeneslamp0 · 9 April 2013

patrickmay.myopenid.com said:
diogeneslamp0 said:
Dembski wrote, 2005: By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.
The bolded part is the strongest claim made by Dembski. It would be fascinating if it were true, but he has never shown that to be the case.
More clearly than the above, Dembski stated
ABSTRACT: Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? [“Specification: The Pattern That Signifies Intelligence”. By William A. Dembski. August 15, 2005, version 1.22.]
Dembski answers yes. This is what Dembski calls the "fundamental question of Intelligent Design." However, if we ask our creationist friends pWQie or Joe "Security Clearance" Gallien the simplest possible questions, such as: Which of the following sequences are intelligently designed?
1. “chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi” 2. “ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru”
The first thing they do is ask us to tell them how it was formed. They don't use CSI to identify which string is intelligently designed, because they can't compute CSI and it wouldn't give the right answer anyway.
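For what it's worth, the two strings are an exact letter-for-letter rearrangement of each other, so any statistic that looks only at character composition, Shannon entropy included, is identical for both. A quick check (my own Python illustration, not anyone's CSI procedure):

```python
from collections import Counter
from math import log2

s1 = "chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi"
s2 = "ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru"

def shannon_entropy(s):
    # average bits per character under the string's own letter frequencies;
    # counts are sorted so the floating-point summation order is deterministic
    n = len(s)
    return -sum(c / n * log2(c / n) for c in sorted(Counter(s).values()))

print(sorted(s1) == sorted(s2))                    # True: same multiset of letters
print(shannon_entropy(s1) == shannon_entropy(s2))  # True: identical entropy
```

Whatever distinguishes the meaningful string from its scramble, it is nothing a composition-only information measure can see; telling them apart requires outside knowledge, which is precisely what the "pattern alone" claim forbids.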

diogeneslamp0 · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
patrickmay.myopenid.com said:
https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said: So what is the evidence that natural selection can produce CSI, ie a biologically functional subsystem?
There's a thread at The Skeptical Zone that demonstrates how simple evolutionary mechanisms can generate CSI by Dembski's own definition.
Thanks Patrick. However, that doesn't do anything. For one, it starts with replicators. For another, it doesn't produce CSI by Dembski's definition.
Proof by bald assertion is not particularly compelling. If you read the discussion at the link I posted, you will see that the measurement reflects exactly what Dembski wrote and that the evolutionary mechanisms modeled by several participants do, in fact, produce CSI by his definition. "Starting with replicators" is a red herring (or goalpost moving, take your pick).
The bald assertion is saying that it is an example of natural selection producing CSI. And starting with replicators is the whole deal.
Bullshit. Dembski's Law of Conservation of CSI is disproven. As I said before: there is nothing in Dembski's alleged MATHEMATICAL proof of the LCCSI that suspends the LCCSI in the presence of replicators. That means that if even ONE counter-example is found of natural processes increasing CSI (in fact there are many), then Dembski's math is dead wrong. Which we knew already, because Elsberry, Shallit, Felsenstein, the undergrad Wein and others pointed out the fallacies in Dembski's math. There is no LCCSI. Moreover, Stephen Meyer is again claiming that information cannot increase by natural processes (invoking the LCCSI) during the Cambrian Explosion, but "replicators" existed for billions of years before the start of the Cambrian, and the Ediacaran era was full of complex animals, including bilaterians that were the precursors to the Cambrian biota. Meyer's whole argument presupposes that there is NO suspension of the LCCSI in the presence of replicators, so our creationist friend pWQie is contradicting his own authority, Meyer. Our creationist friend pWQie (who may be Atheistoclast, but who I suspect is Joe "Security Clearance" Gallien) says "starting with replicators is the whole deal", ignoring all the assertions of ID: Stephen Meyer knows the Cambrian Explosion "started with replicators", and there is no mention of replicators in Dembski's long-disproven "Law" of Conservation of CSI. The LCCSI is dead, dead, dead.

diogeneslamp0 · 9 April 2013

https://www.google.com/accounts/o8/id?id=AItOawnwsK3wlP8NWGKEz-zi8I95cJLbCdN-pWQ said:
diogeneslamp0 said: Hey pWQie, since you creationists know how to detect design, can you tell me which of the following two strings is intelligent designed? The other is a random scramble.
1. “chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi” 2. “ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru”
Please show all work by which you computed the CSI of the above string to identify the Intelligently Designed one.
Context- There isn't any way that nature produced either of those strings.
Your method just flagged RANDOM NOISE as INTELLIGENTLY DESIGNED, you stupid fuck! So the "Design Inference" DOES produce false positives! A big part of the ID fraud is the claim that CSI never produces a false positive. However, one of the sequences above was randomly scrambled. Our creationist friend pWQie (who is probably Joe "Security Clearance" Gallien but might be Holocaust-denying Joe Bozorgmehr) uses the Design Inference and concludes that RANDOM NOISE is INTELLIGENTLY DESIGNED! For the record, the strings were
1. “chimuaruyoniitekonamiuedakorutoruganoyobonaireomoiriposuhanohiodahi” 2. “ueomuitearukonamidagakoborenaiyoniomoidasuharunohihitoripochinoyoru”
Idiot! You just proved your bullshit methods flag random noise as a meaningful sequence! If random noise triggers a design inference, why should we care if DNA triggers a design inference? If the human genome triggers a "design inference" by your bullshit methods, it could be just random noise! If DNA is full of "CSI" by your bullshit methods, it could ALL be random noise! How the fuck can you idiots tell the difference with this moronic hokey pokey?

ogremk5 · 9 April 2013

I'll note that several design proponents believe that NOTHING in the universe is not designed, and as such, even those random strings of noise were designed.

Of course, if everything is designed, then how can we possibly tell that something isn't?

diogeneslamp0 · 9 April 2013

I wanna know who our creationist friend pWQie is. Is he: 1. Holocaust denyin', sellin' weapons to the Iranians Joe Bozorgmehr? or 2. Joe "Security Clearance" Gallien, who writes things like:
Joe "Security Clearance" Gallien writes: Quote [from critic]: My personal opinion is that you are a poser and not a scientist. Your personal opinion is meaningless to reality. And one does not have to be a scientist to conduct science. Quote: What is your position that requires having done science? Electronic engineer and research scientist. Quote: What experiments have you done? Many dealing with ion trap mobilty spectrometry & mass spectrometry. Many more dealing with electronic circuitry and electricity. I can't get specific as it deals with security. If you can get a security clearance I could show you what I do. Then there is astronomy. On any given night I can have 3 telescopes pointing skyward. 2 4,5" aps with a 910mm FL(one automated and one manual) as well as a 10" ap with an 1125mm FL. And that is just the tip of the ole iceberg. That doesn't count the experiments I conduct in my basement. Some labs would be jealous of the equipment I house & use there. For example I now know that ticks are more attracted to watermelon rinds then they are to orange peels or orange slices. I also know that dragonflies play. [Source]

David vun Kannon · 9 April 2013

diogeneslamp0 said:
David vun Kannon said: A reminder, Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
No, I did not know that. Do you have a link or reference to Dembski's failure?
http://www.iscid.org/mesa/ and our masked panda is definitely JoeG

diogeneslamp0 · 9 April 2013

Joe and Mike, I would like to comment on your disagreement about whether or not a rock has CSI. My point is that Dembski has multiple definitions of CSI, and you two are disagreeing about which method to use.
Mike Elzinga said:
Joe Felsenstein said:
Mike Elzinga said: Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them?
Mike, I must not understand your argument (out of laziness, I am sure). In the arguments I was making we had a scale, and I used fitness as that scale. CSI was defined as being in the top 10^-150 of the original population distribution on that scale. What plays the role of the scale in your rock examples?
Joe, the scale is the atoms and molecules scattered in a “primordial soup.” How probable was it - according to the ID conception of matter tending to come all apart and all that Morris, Gish, Abel, and Sewell tornado-in-a-junkyard stuff - that they came together in just this specified structure? Is there any scale where ID advocates admit that matter condenses according to the laws of physics and chemistry but then a sudden cutoff where “intelligence” and “information” take over and start pushing atoms and molecules into place from a “primordial soup?” I was just “innocently” imitating what I have been observing over at UD. I copied from an example done on a stone artifact and on calculations being done on polymer chains and DNA. Nobody told me I couldn’t do this; and when one looks closely at a polycrystalline rock, it turns out that there is considerable detail and complexity that can be exactly specified. Complex and Specified; and if I take the log base 2, according to the recipe, I get Information. ...What specific criteria do folks like Dembski and others over at UD use in deciding when to use the calculation to demonstrate design? It appears that they have already decided on design before they do the calculation; and that they can make the calculation come out in favor of design. Well, I can do that too; I did it with rocks, and I can do it with lots of other forms of condensed matter... I am imitating what I see, so am I misrepresenting the calculation?
Mike is right by the pre-2005 definition of CSI. Where Joe talks about a probability < 10^-150 of a system being in an original [random] population, Joe's reasoning is closer to (but not the same as) Dembski's 2005 reasoning.

By Dembski's definition in No Free Lunch and The Design Inference, a rock does have CSI, and the calculation is straightforward. The only criteria are: 1. It's complex, which in practice means it has many parts. 2. It matches an independently given pattern. Rocks certainly match independently given patterns. If isopods (roly-poly bugs) live on the underside, the rock matches the pattern "shelter for bugs." If it's oblong it matches the pattern "oblong." If it's jagged it matches the pattern "jagged."

After 2005 Dembski introduced two more definitions, but by at least one of those methods, rocks still have CSI. In his 2005 paper, Dembski gave TWO new methods for computing CSI. One is the "semiotic string" method, which I call Dembski's "Guaranteed Success" Method #3: you count the number of words W in a verbal description of the object, then compute a factor of 250,000^W. (The number 250,000 is chosen because it is supposedly the number of words in an English dictionary.) You then multiply this by what I've called the "Tornado Probability", the probability of assembly by random rearrangement of parts. If the sequence length is L and the number of kinds of parts is K (e.g. K = 4 for DNA, K = 20 for proteins, etc.), then the Tornado Probability is 1/K^L. You multiply these together and take -log base 2.
CSI = -log2[(250,000)^W / K^L] = L * log2(K) - 17.93 * W
Dembski's "Guaranteed Success" Method #3 infers design if the result is greater than 1. For Mike's rock, a verbal description might be "shelter for isopods", so W = 3; then, whatever he counts as K and L,
CSI = -log2[(250,000)^3 / K^L] = L * log2(K) - 53.8
I don't know what K or L are for Mike's rock, but L is clearly huge, so Mike's rock is clearly intelligently designed by Dembski's "Guaranteed Success" Method #3. Some more common examples:
For English text (K = 26): SC = L * 4.70 - W * 17.93
For DNA sequences (K = 4): SC = L * 2 - W * 17.93
For protein (amino acid) sequences (K = 20): SC = L * 4.32 - W * 17.93
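[Ed.: the "semiotic string" recipe above is easy to put into code. The sketch below is an illustration of the description in this comment, not code from Dembski's paper; the function name and constants are taken from the comment.]

```python
import math

# Sketch of the "semiotic string" CSI recipe ("Guaranteed Success"
# Method #3) as described above. W is the number of words in the
# verbal description, K the number of kinds of parts, L the sequence
# length; 250,000 is the assumed dictionary size.
DICTIONARY_SIZE = 250_000

def semiotic_csi(L, K, W):
    """CSI = -log2(250,000^W / K^L) = L*log2(K) - W*log2(250,000)."""
    return L * math.log2(K) - W * math.log2(DICTIONARY_SIZE)

# The per-symbol constants quoted above fall out directly:
# log2(250,000) ~ 17.93, log2(26) ~ 4.70, log2(4) = 2, log2(20) ~ 4.32.
```

Since L * log2(K) grows without bound while the penalty is a fixed 17.93 bits per word of description, any sufficiently long object with a short description "infers design", which is exactly the point being made about rocks.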
I will describe the OTHER method (Dembski's "Guaranteed Success" Method #2) from Dembski's 2005 paper in my next comment.

diogeneslamp0 · 9 April 2013

David vun Kannon said:
diogeneslamp0 said:
David vun Kannon said: A reminder, Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
No, I did not know that. Do you have a link or reference to Dembski's failure?
http://www.iscid.org/mesa/ and our masked panda is definitely JoeG
Thanks a bunch, David!

diogeneslamp0 · 9 April 2013

Dembski's "Guaranteed Success" Method #2

This is the other method discussed in Dembski's 2005 paper. The Tornado Probability is the same, but the semiotic string factor is replaced with another number that represents the simplicity of the pattern the sequence matches. A match to a simpler pattern gives a lower number; a more complex pattern gives a higher number. This is counter-intuitive; evolutionists think it's an error, and even the IDer Gpuccio agrees.

Dembski chooses a pattern and judges its simplicity by the Kolmogorov criterion: specifically, given a pattern S of length L, how many sequences of length L have equal or lesser Kolmogorov complexity? Call that number N. This is a mere matter of counting N, ASSUMING that you can compute the Kolmogorov complexity of sequences (you can't, but suppose you can rank them), but for real-world problems this rapidly becomes extremely difficult. So Dembski never used "Guaranteed Success" Method #2 for any real-world problem. In his 2005 paper he considered only the simplest possible thought experiment, involving a uniform bit-string pattern like 1111111. There are only two strings as simple as or simpler than that: 1111111 and 0000000. So here N = 2. Given N, you then compute the CSI with the usual Tornado Probability.
CSI = -log2[N / K^L] = L * log2(K) - log2(N)
Dembski's "Guaranteed Success" Method #2 infers design if the result is greater than 1.

FP runs the Numbers: CSI of a Grain of Salt

Now I don't know how to apply "Guaranteed Success" Method #2 to Mike's rock. However, I can apply it easily to a grain of salt, NaCl. Let us encode sodium as 1 and chlorine as 0. Then a crystal of table salt would be a long alternating string like 1010101010, which can be compressed into the program "print "10" 5 times". It seems to me that the number of strings that are as simple as or simpler than this by the Kolmogorov criterion is FOUR. Correct me if I'm wrong.
Count the strings as simple as or simpler than "1010101010":
1. 1010101010 (program: print "10" 5 times)
2. 0101010101 (program: print "01" 5 times)
3. 1111111111 (program: print "1" 10 times)
4. 0000000000 (program: print "0" 10 times)
Am I right about that? If anyone else can think of more strings as simple as or simpler than 1010101010, please let me know. From the above I conclude N = 4 for table salt. This website estimates that if a grain of salt weighs 5.85x10^-5 grams, then there are "1.2x10^18 atoms, half of which are sodium atoms" in it. That means there are 6x10^17 instances of NaCl, or, encoded as a bit string, 6x10^17 repetitions of "10" in the sequence. For a grain of salt, the length of the sequence is L = 1.2x10^18, the number of kinds of parts (Na or Cl) is K = 2, and above we estimated the simplicity of the pattern as N = 4, so:
CSI = -log2[N / K^L] = L * log2(K) - log2(N) = (1.2x10^18) * log2(2) - log2(4) = (1.2x10^18) - 2
Thus the CSI in a grain of salt is approximately 1.2x10^18. This is far, far, far higher than Dembski's UPB so Dembski's "Guaranteed Success" Method #2 concludes all grains of salt are intelligently designed.
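[Ed.: the grain-of-salt arithmetic above can be checked mechanically. This is a sketch of the method as described in the comment, not code from Dembski's paper; note that log2(4) = 2 bits, a rounding-level quibble at this scale.]

```python
import math

# "Guaranteed Success" Method #2 as described above: L = sequence
# length, K = kinds of parts, N = number of strings as simple as or
# simpler than the pattern (by the Kolmogorov criterion).
def kolmogorov_csi(L, K, N):
    """CSI = -log2(N / K^L) = L*log2(K) - log2(N)."""
    return L * math.log2(K) - math.log2(N)

# The grain of salt: 1.2e18 atoms encoded one bit each (Na=1, Cl=0).
salt_csi = kolmogorov_csi(L=1.2e18, K=2, N=4)
UPB_BITS = 500  # Dembski's universal probability bound (~10^-150) in bits
designed = salt_csi > UPB_BITS  # Method #2 "detects design" in table salt
```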

diogeneslamp0 · 9 April 2013

David vun Kannon said:
diogeneslamp0 said:
David vun Kannon said: A reminder, Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
No, I did not know that. Do you have a link or reference to Dembski's failure?
http://www.iscid.org/mesa/ and our masked panda is definitely JoeG
David, I took a look at that link. Under the "Results" tab:
RESULTS results will be posted here in the future.
At the bottom of the page:
©2002 International Society for Complexity, Information and Design
Eleven years ago. Yeah, that's our Dembski.

Joe Felsenstein · 9 April 2013

Sorry to have been away, folks. I had other preoccupations, such as writing a letter of recommendation, and sleeping (it was night here).

I am not going to send all posts by Masked Panda -pWQ to the Bathroom Wall. I agree that he is most likely "JoeG", and we all know what he's like; but he isn't banned here. However, when he is involved, things tend to get heated and the thread forks off in all directions. So I'm going to put a stop to the most off-topic ones. No more discussion here (by him or anyone) of how archaeologists detect Design. All that will henceforth go to the Bathroom Wall.

I'd be happy to engage -pWQ in a careful, slow, focused discussion of whether Dembski's LCCSI theorem is correct, and whether the change of specification that occurs prevents the theorem from being used to argue that Design has been detected. For context, let me note that the discussion is about evolution after the Origin of Life, since the origin of life has been a sticking point before. The issue is whether Dembski's theorem can be used to detect Design in post-origin-of-life evolution.

I would urge -pWQ to post sparingly, using restraint and understatement. If Dembski's LCCSI theorem works, he should be able to convince us of that.

Note -- when I send a fork of this thread off to the Bathroom Wall, all parts of it go, including the miscreant and the replies to the miscreant.

diogeneslamp0 · 9 April 2013

Joe,

can you explain the idea behind Dembski's one-to-one mapping? I don't quite get what it's about-- I guess you mean he says there's a mapping of the population before evolution to the population after evolution. You can use jargon.

Do you have a page number in No Free Lunch (which I have scanned) that has the transformation?

David vun Kannon · 9 April 2013

diogeneslamp0 said:
David vun Kannon said:
diogeneslamp0 said:
David vun Kannon said: A reminder, Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
No, I did not know that. Do you have a link or reference to Dembski's failure?
http://www.iscid.org/mesa/ and our masked panda is definitely JoeG
David, I took a look at that link. Under the "Results" tab:
RESULTS results will be posted here in the future.
At the bottom of the page:
�2002 International Society for Complexity, Information and Design
Eleven years ago. Yeah, that's our Dembski.
Yup. IIRC, MESA is just a GA built to solve OneMax (count the number of ones in the genotype), with the fitness jittered by noise. It is not really a hard problem until the noise component starts to swamp the signal, and even then a large enough population can make progress. (As opposed to a population that records fitnesses accurately but suffers large amounts of mutation.) OneMax is a toy problem, but it was solved for gigabit-sized genomes, as described here: http://www.slideshare.net/deg511/a-billion-bits-or-bust That is within an order of magnitude of the human genome, and that was five years ago. We stand on the threshold of evolving very large, complicated things!
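[Ed.: for readers who have never seen one, here is a minimal noisy-OneMax GA of the kind David describes. It is a toy sketch, not MESA; all parameter names and settings are invented for illustration.]

```python
import random

# Toy GA on OneMax (count the 1s) where every fitness evaluation is
# jittered by Gaussian noise, in the spirit of the MESA experiment
# described above. Parameters are illustrative, not MESA's.
def noisy_onemax_ga(length=50, pop_size=40, generations=200,
                    noise_sd=2.0, mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    def noisy(g):                      # the fitness selection "sees"
        return sum(g) + rng.gauss(0.0, noise_sd)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection on the NOISY fitness...
        parents = [max(rng.sample(pop, 2), key=noisy)
                   for _ in range(pop_size)]
        # ...then per-bit mutation.
        pop = [[b ^ (rng.random() < mutation_rate) for b in p]
               for p in parents]
    return max(sum(g) for g in pop)    # best TRUE fitness at the end
```

Despite selecting only on the jittered fitness, the population climbs well above its starting point, consistent with David's recollection that MESA showed the opposite of what it was built to show.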

Joe Felsenstein · 9 April 2013

diogeneslamp0 said: Joe, can you explain the idea behind Dembski's one-to-one mapping? I don't quite get what it's about-- I guess you mean he says there's a mapping of the population before evolution to the population after evolution. You can use jargon. Do you have a page number in No Free Lunch (which I have scanned) that has the transformation?
I am embarrassed to say that my copy of NFL is at home and I can't get to it for some hours. But anyway it was in the chapter where he talks about the conservation law. He models evolution (the nonrandom part of it anyway) as a 1-1 transformation. I'll get back to you with the page number. (Until I can, can someone else supply the page number?) It can be argued that natural selection on a fitness surface (adaptive landscape) is many-to-one. For example if we consider the population to be one haploid asexual organism, and have one-step mutation to neighbors, and very strong selection, we might say that, very roughly, evolution looks around and picks the most fit neighbor and moves there. That can mean that two genotypes have the same successor, which is a many-to-one transformation. But even without quibbling about that, Dembski's argument has big problems, so I have not worried about that.
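[Ed.: Joe's point that strong selection makes the evolution map many-to-one is easy to illustrate. The toy model below (count-the-1s fitness on 3-bit genotypes) is an illustration of his argument, not anything from No Free Lunch.]

```python
from itertools import product

def fitness(g):
    return sum(g)  # toy fitness: count the 1s

def neighbours(g):
    # all genotypes one mutational step away
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

def successor(g):
    """Strong-selection step: jump to the fittest of self + neighbours
    (ties broken lexicographically so the map is deterministic)."""
    return max([g] + neighbours(g), key=lambda h: (fitness(h), h))

# Count how many genotypes map onto each successor state.
image_counts = {}
for g in product((0, 1), repeat=3):
    s = successor(g)
    image_counts[s] = image_counts.get(s, 0) + 1
many_to_one = any(count > 1 for count in image_counts.values())
```

Here (0,1,1), (1,0,1), (1,1,0), and (1,1,1) all map to (1,1,1), so the transformation is many-to-one, not the 1-1 mapping Dembski's setup assumes.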

DS · 9 April 2013

David vun Kannon said:
diogeneslamp0 said:
David vun Kannon said: A reminder, Dembski wrote the MESA GA system in order to show that GAs could not defeat noise. It showed the opposite.
No, I did not know that. Do you have a link or reference to Dembski's failure?
http://www.iscid.org/mesa/ and our masked panda is definitely JoeG
He may not be banned, but if he has used different names to post on this blog he should be.

Mike Elzinga · 9 April 2013

There is another angle to this discussion that Joe raised; namely the possibility that natural selection can increase CSI, no matter how Dembski wants to define it.

As I have mentioned a number of times, living organisms fall into the category of soft-matter systems; they exist in a narrow temperature range in which the kinetic energies are comparable to the binding energies of their constituents.

This allows for reconfigurations to occur due to thermal perturbations as well as other perturbations coming in from the environment. Therefore there will be variability at the atomic/molecular level. That variability is manifested in gross features that “feel” the effects of the surrounding environment; hence the structures are susceptible to selection.

But just to drive the point home with the rock example, let’s move the temperature range up to where these rocks become soft matter. What kinds of environment can change CSI?

Well, if the molecules that will condense into a rock structure are in a rapidly cooling environment where molecules become locked in place more quickly than they can migrate into more orderly positions, then we will see the formation of small, irregularly shaped polycrystals with all those random orientations that contributed to a very large CSI in the calculation.

However, if the temperature is declining slowly and/or cycling up and down around the “freezing” point of the molecules, there is more time for molecules to migrate into larger crystals, so there would be fewer of them. If the temperature changes are just right, these molecules could eventually grow into a single, large crystal. Here is an environment that lowers the CSI.

Now step back and look at it from the point of view of natural selection. One environment produces higher CSI, the other lower CSI. Can a rock with low CSI become the grist for a rock with higher CSI? Absolutely; just have the environment suddenly increase in temperature while providing a turbulent flow of material to mix things up, and then cool relatively quickly. Voila; the rock evolved into something with higher CSI. No “intelligence” required.

ID/creationists also want to further “qualify” the application of these calculations to systems with “Functionally Complex Specified Organization.” Apparently the idea is to limit the application to the molecules of life. They want to enumerate function and organization as well.

So let’s take a look at the nervous system of some animal such as a mammal. It is very complex. It has organized behavior. It performs complex functions. One can sit down and enumerate until one’s brains fall out; and sure enough, one can produce a large enough number so that taking log base 2 will far exceed 500.

But note what happens when we change the temperature of the system just a little bit. Raise the temperature just a few degrees Celsius and the whole system goes chaotic. Lower the temperature a few degrees and the atom and molecule mobility drops as they lock into more restricted positions. The entire system ceases to function.

Organization and function are temperature dependent; and temperature dependence is a purely physics/chemistry phenomenon. Where is the “intelligence” or “information” that is pushing atoms and molecules around?

This “intelligence” or “information” can only work within a narrow energy window; in other words, at energies on the order of kT. Does this mean that “information” and “intelligence” are driven out by moving a system outside a very narrow energy range?

I’m still looking for the cutoff point where physics and chemistry no longer apply and “intelligence” and “information” have to step in to do the job physics and chemistry can no longer do.

The temperature range can be where living organisms live, or it can be where rocks become soft matter.

Why is “design” required in soft matter systems – such as living systems - and not in rocks when they are near their melting temperatures? Where is the crossover point?

diogeneslamp0 · 9 April 2013

Suppose pWQie really is Security Clearance Joe G. If so, that would make him pretty dishonest, because then he defended his own blog while pretending not to be the owner of that blog.
pWQie [Joe Gallien?] wrote diogeneslamp0 is lying. I followed its links. I read: In order to tell if blipey’s string- 100011101001011100010111010101- is designed or not I would need to know where he got it from. For example, did it just pop into his bitty little head, was it found on the wall of a cave, was it on a piece of paper or what? There wasn’t any question about HOW the string was made. All the questions pertained to CONTEXT. You people are just sick…
The links he followed are to his own blog, and when he wrote "I read" above, he was describing and defending words he himself wrote, without admitting he was their author. I also have to note that his Gollum style of talking really creeps me out, e.g.
diogeneslamp0 is lying. I followed its links.
"I followed its links"!? Just yesterday I watched "The Hobbit" and the riddle scene scared the bejeebers out of me.
Is it tasty? Is it juicy? It answers our questions or we eats it, my precious. We eats it!

David vun Kannon · 9 April 2013

Mike Elzinga said: There is another angle to this discussion that Joe raised; namely the possibility that natural selection can increase CSI, no matter how Dembski wants to define it.
I'm tempted to say that natural selection can increase CSI for exactly half of all possible definitions! Another way forward in this conversation is to ask whether CSI is properly a property of an individual or of a population. It seems to me that "change in allele frequencies", "evolution is what populations do", etc. would indicate that CSI is, if it is relevant at all, a population-level quality. Do all of Dembski's definitions still work (as well as they ever did) if the object under study is a population rather than an individual?

Ray Martinez · 9 April 2013

Joe Felsenstein said: ....we might say that, very roughly, evolution looks around and picks the most fit neighbor and moves there....
I thought the watchmaker is blind?

Ray Martinez · 9 April 2013

This comment has been moved to The Bathroom Wall. (I meant it. JF)

diogeneslamp0 · 9 April 2013

BW for Ray. Nice knowin ya... not really.

prongs · 9 April 2013

Ray Martinez said:
prongs said:
Scott F said: Most rocks look quite "designed" to me.
I assert that my exquisite Quartz crystal is a better candidate for design than Paley's watch. No creationist has even challenged me, much less proven my crystal is not designed. C'mon Ray, Steve, FL, IBIG - what's wrong with you guys? Prove to me my Quartz is NOT designed.
Nearly all evo scholars accept Paley's opening paragraph as correctly framing the debate. No one can say a stone is designed when compared or contrasted against the watch (which represents individual organisms or species).
I am no "evo scholar" and do not accept Paley's erroneous assertion. I can, and do, say my Quartz crystal has "design." Indeed, in Paley's day no human could make a Quartz crystal, yet they could make pocket watches. You say pocket watches are obvious designs of human engineering, therefore "designed." I say that my Quartz crystal exhibits such exquisite design that no human could mimic it until the 20th Century. So which has more "design"? Your pocket watch or my crystal? Shall we perform a calculation according to Dembski as taught recently by Mike Elzinga? Why yes, of course we should. Disclaimer - My use of formulae according to Dembski in no way constitutes approval tacit or otherwise, of his opinions, positions, theology, or any other statements. I merely make use of his method for the discussion at hand.
Thus spake Elzinga: This allows an estimate of approximately 10^27 molecules in the rock with approximately 10^18 molecules per crystal on average. Let N = 10^27, the number of molecules. Let P = 10^9, the number of crystals. There are P! permutations of all the crystals in the sample. Each crystal has an orientation in 3-dimensional space; so we choose three perpendicular axes about which rotations can be made. There are 360 degrees, 60 minutes, 60 seconds per complete rotation about each axis; therefore each crystal can have 1,296,000^3 orientations in 3-dimensional space. Since there are P crystals, there are 1,296,000^(3P) ways to orient all the crystals. The number of permutations of the individual atoms is conservatively (3N)!. There is also the number of possible orientations of the original rock when it was noted in the heath; and this is again 1,296,000^3. Therefore, the number of possible arrangements and orientations of crystals and atoms and rock is Ω = (3N)! x P! x 1,296,000^(3(P + 1)). The amount of information in this particular rock is thus log2(Ω), a number that far, far exceeds 500. Therefore we can conclude without hesitation that this particular rock was designed; and since this is an arbitrary rock picked up in an arbitrary location, we can say that any rock is designed after we examine it and carefully specify its structure.
My crystal is about 3 pounds 6 ounces, or about 1532 grams. So my N per crystal is bigger. At 60 g per mole, I get about 1.5 x 10^25 molecules: N = 1.5 x 10^25. I will change P, the number of crystals, to 1. Although my specific Quartz crystal has an uncountable multitude of attached microscopic crystals, and possibly internal left-hand and right-hand twin regions, I will ignore them all, as this is a Gedanken experiment. So my P = 1, but all else remains the same, with N modified appropriately. Therefore, the number of possible arrangements and orientations of crystal and molecules is Ω = (3N)! x 1! x 1,296,000^(3(1 + 1)) = (3N)! x 1,296,000^6. The amount of information in this particular crystal is log2(Ω), a number that far, far exceeds 500. Therefore we can conclude without hesitation that this particular crystal was designed; and since this is an arbitrary crystal collected in a specific location by yours truly, we can say that any crystal is designed after we examine it and carefully specify its structure. So Ray, if you can show me the error of my calculations, or otherwise provide an alternative calculation according to Dembski or Paley, please do so. Until then, you have failed to prove to me my Quartz crystal has no "design." (For cryin' out loud, you can just look at it and tell, can't you? I can.)
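[Ed.: anyone who wants to verify that these log2 values really do "far, far exceed 500" can do so without forming the astronomically large factorials, via the log-gamma function. A sketch using prongs' numbers; the variable names are the editor's.]

```python
import math

def log2_factorial(n):
    # log2(n!) computed via the log-gamma function, since n! itself
    # is far too large to compute directly for n ~ 10^25.
    return math.lgamma(n + 1) / math.log(2)

N = 1.5e25   # molecules in the quartz crystal (prongs' estimate)
P = 1        # treated as a single crystal
log2_omega = (log2_factorial(3 * N)                  # (3N)! term
              + log2_factorial(P)                    # P! term (zero here)
              + 3 * (P + 1) * math.log2(1_296_000))  # orientation factor
# log2_omega is on the order of 10^27 bits, absurdly beyond the 500-bit bound.
```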

Joe Felsenstein · 9 April 2013

Ray Martinez said:
Joe Felsenstein said: ....we might say that, very roughly, evolution looks around and picks the most fit neighbor and moves there....
I thought the watchmaker is blind?
No, I think that in that analogy the watchmaker is blind to the structure of the watch, and just tinkers randomly, but the watchmaker does have the ability to tell whether the timing of the watch is or is not improved. i.e., natural selection.

harold · 9 April 2013

You then multiply this by what I’ve called the “Tornado Probability”, the probability of assembly by random rearrangement of parts. If the sequence length = L and the number of kinds of parts = K (e.g. K=4 for DNA, K=20 for proteins, etc.), then the Tornado Probability is 1/K^L
Speaking of very large and very small numbers, here's an interesting thought. How many cell divisions do you think there are in the biosphere, per unit of time (say, second)? It's beyond comprehension. The number of mutations that occur per second is beyond comprehension.

Mike Elzinga · 9 April 2013

It’s hard to miss the irony in AiG’s little lesson on hydrothermal vents.

JimNorth · 9 April 2013

diogeneslamp0 said: Count the strings as simple or simpler than "1010101010" 1. 1010101010 (program "print "10" 5 times".) 2. 0101010101 (program "print "10" 5 times".) 3. 1111111111 (program "print "1" 10 times".) 4. 0000000000 (program "print "0" 10 times".)
Would (program "print "1111111111" 1 times") be simpler? It's been a long, long time since I've programmed anything in BASIC... and nothing since then.

Dave Luckett · 9 April 2013

Joe Felsenstein said: No, I think that in that analogy the watchmaker is blind to the structure of the watch, and just tinkers randomly, but the watchmaker does have the ability to tell whether the timing of the watch is or is not improved. i.e., natural selection.
Further, that the blind watchmaker works simultaneously on a very large number of different watches, and he busts and throws away any number of them...

Joe Felsenstein · 9 April 2013

Dave Luckett said:
Joe Felsenstein said: No, I think that in that analogy the watchmaker is blind to the structure of the watch, and just tinkers randomly, but the watchmaker does have the ability to tell whether the timing of the watch is or is not improved. i.e., natural selection.
Further, that the blind watchmaker works simultaneously on a very large number of different watches, and he busts and throws away any number of them...
In a population genetic model, yes. If there is non-infinitely-strong selection, the watchmaker would then also enrich for better watches by reproducing them, but not instantly replacing every member of the population by the best watch. In a simplified model with just one watch which always replaces the watch when a better one is found, these subtleties are not present.

Joe Felsenstein · 10 April 2013

David vun Kannon said:
Mike Elzinga said: There is another angle to this discussion that Joe raised; namely the possibility that natural selection can increase CSI, no matter how Dembski wants to define it.
I'm tempted to say that natural selection can increase CSI for exactly half of all possible definitions! Another way forward in this conversation is to ask whether CSI is properly a property of an individual or of a population. It seems to me that "change in allele frequencies", "evolution is what populations do", etc. would indicate that CSI is, if it is relevant at all, a population-level quality. Do all of Dembski's definitions still work (as well as they ever did) if the object under study is a population rather than an individual?
It is pretty unclear. The evolving population is represented as occupying a state in the space. Evolution in one generation is a mapping from that state to another. There is no real discussion of population properties in NFL other than that. I think Dembski's machinery will work for a population (represented that way) whether it consists of a single individual or of many.

Joe Felsenstein · 10 April 2013

Mike Elzinga said:
Joe Felsenstein said:
Mike Elzinga said: Why can’t they be used on rocks? At what point along the spectrum of complexity in condensed matter does it become appropriate to use them? What makes the vital difference? Seriously; are you just another follower of ID/creationism who can’t tell us where the cutoff is?
... What plays the role of the scale in your rock examples?
Joe, the scale is the atoms and molecules scattered in a “primordial soup.” How probable was it - according to the ID conception of matter tending to come all apart and all that Morris, Gish, Abel, and Sewell tornado-in-a-junkyard stuff - that they came together in just this specified structure? Is there any scale where ID advocates admit that matter condenses according to the laws of physics and chemistry, but then a sudden cutoff where “intelligence” and “information” take over and start pushing atoms and molecules into place from a “primordial soup”?

I was just “innocently” imitating what I have been observing over at UD. I copied from an example done on a stone artifact and on calculations being done on polymer chains and DNA. Nobody told me I couldn’t do this; and when one looks closely at a polycrystalline rock, it turns out that there is considerable detail and complexity that can be exactly specified. Complex and Specified; and if I take the log base 2, according to the recipe, I get Information.

So the question I am trying to get an answer for is the one I have been asking: “Where along the spectrum of complexity in condensed matter is it appropriate to use these calculations to ‘prove’ design?” With no clear demarcation, how do we rule out rocks? With no demarcation, how do we rule out organic compounds? I can specify these just as completely as a rock. What specific criteria do folks like Dembski and others over at UD use in deciding when to use the calculation to demonstrate design?

It appears that they have already decided on design before they do the calculation, and that they can make the calculation come out in favor of design. Well, I can do that too; I did it with rocks, and I can do it with lots of other forms of condensed matter. All one needs to do is look carefully at the complexity and enumerate the details. So where is the cutoff? How do ID/creationists decide? I have not yet received an answer.
I am imitating what I see, so am I misrepresenting the calculation?
I do not understand what specification you have. If I toss a coin 1000 times and record the sequence, that is complex but not specified, and Dembski would not accord it CSI. It seems to me you may be doing that and then saying that the complexity gives the molecules and atoms CSI if it is complex enough. I don't think it does.

Mike Elzinga · 10 April 2013

Joe Felsenstein said: I do not understand what specification you have. If I toss a coin 1000 times and record the sequence, that is complex but not specified, and Dembski would not accord it CSI. It seems to me you may be doing that and then saying that the complexity gives the molecules and atoms CSI if it is complex enough. I don't think it does.
Well, you are putting your finger on some problems I deliberately imitated from the examples I saw over at UD. I had hoped that someone would raise these issues for discussion.

I get the impression from watching the examples I see from those over at UD that specification can be just as arbitrary as the decision to assert that something is or is not designed. For example, quantizing the orientations of crystals in degrees, minutes, and seconds is not the only way to slice up orientations. There is no “natural unit of quantization” of orientation unless one is dealing with things like spin orientations in magnetic fields at the quantum level. But some of the folks over at UD pick an arbitrary unit of quantization when deciding how finely they want to specify something.

Another common mistake I see ID/creationists making - which I deliberately copied, by the way - is to count each permutation of identical atoms as different. That would not have changed the conclusion in the case of a polycrystalline rock example, because I could have arbitrarily changed the quantization of orientations or zeroed in on other fine details such as roughness. These are things we learn to watch out for when enumerating energy states in statistical mechanics, for example.

CSI could be a useful means of specifying detail, provided that there always remains some natural unit of quantization of the parameters used in the specification. That would also mean that any emergent properties that occurred in the process of evolution also came with their own natural units of quantization. And those natural units of quantization would have to be objective, so that anyone doing the calculation of CSI would use the same features and units. There is a little bit of that when looking at things like polymer chains or DNA. But a further problem remains in the assumption that such features pop into existence from repeated uniform, random samplings of “ideal gases” of atoms and molecules.
In reality, not all emergent properties are equally probable; so I don’t see how CSI is any real measure of evolution. It seems to me that CSI is limited to enumerating objective, quantized features only. I don’t see any connection between CSI and things like fitness, or probability of occurrence, or design, or “intelligence.”

ogremk5 · 10 April 2013

I do not understand what specification you have. If I toss a coin 1000 times and record the sequence, that is complex but not specified, and Dembski would not accord it CSI. It seems to me you may be doing that and then saying that the complexity gives the molecules and atoms CSI if it is complex enough. I don't think it does.
I think you could make the 1000 random coin tosses specified though. Remember that specification has something to do with meaning. So if you performed a function where you took the bits of a sentence and then added the bits generated from the coin toss, then you would be adding specificity. It would still appear to be random (that's the point of cryptography after all), but it would be highly specified in that a person with the same sequence of coin tosses could return meaning to the sentence.

Joe Felsenstein · 10 April 2013

I think you could make the 1000 random coin tosses specified though. Remember that specification has something to do with meaning. So if you performed a function where you took the bits of a sentence and then added the bits generated from the coin toss, then you would be adding specificity. It would still appear to be random (that's the point of cryptography after all), but it would be highly specified in that a person with the same sequence of coin tosses could return meaning to the sentence.
What you describe is a one-time-pad cipher. You could send someone the final string, and also the sequence of random tosses, and they could recover the bits that get you the English sentence (by sending the sequence of tosses you render the cryptosystem ineffective, but never mind). But I don't see where that is leading. You sent a sequence of 2000 bits. It is complex and specified. I was wondering whether Mike Elzinga's crystals are specified. The thing about CSI is that the specification is supposed to be known in advance, which I don't think is true of the rock example. In your 2000-bit example we know in advance that the second 1000 tosses encrypt the first 1000, and we have some specification for the final unencrypted result "is recognizably a sentence in English".
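The one-time-pad construction under discussion is easy to make concrete. This is an illustrative sketch only: the message, the use of 8-bit ASCII encoding, and the pad length are my assumptions, with XOR playing the role of adding bits mod 2. Anyone holding the same sequence of "coin tosses" recovers the sentence exactly; anyone without it sees only apparently random bits.

```python
import random

random.seed(0)

message = "ATTACK AT DAWN"  # hypothetical plaintext
msg_bits = [b for ch in message for b in map(int, format(ord(ch), "08b"))]

# The "coin tosses": a random pad the same length as the message bits.
pad = [random.randint(0, 1) for _ in msg_bits]

# Encrypt: XOR each message bit with a coin toss (addition mod 2).
cipher = [m ^ p for m, p in zip(msg_bits, pad)]

# Decrypt: XOR again with the same pad to recover the plaintext bits.
recovered_bits = [c ^ p for c, p in zip(cipher, pad)]
chunks = [recovered_bits[i:i + 8] for i in range(0, len(recovered_bits), 8)]
recovered = "".join(chr(int("".join(map(str, b)), 2)) for b in chunks)

print(recovered)  # ATTACK AT DAWN
```

As Joe notes, sending both the ciphertext and the pad renders the cryptosystem ineffective, but it does show how the full 2000-bit string can satisfy a specification ("decrypts to recognizable English") even though either half alone looks like noise.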

ogremk5 · 10 April 2013

Joe Felsenstein said:
I think you could make the 1000 random coin tosses specified though. Remember that specification has something to do with meaning. So if you performed a function where you took the bits of a sentence and then added the bits generated from the coin toss, then you would be adding specificity. It would still appear to be random (that's the point of cryptography after all), but it would be highly specified in that a person with the same sequence of coin tosses could return meaning to the sentence.
What you describe is a one-time-pad cipher. You could send someone the final string, and also the sequence of random tosses, and they could recover the bits that get you the English sentence (by sending the sequence of tosses you render the cryptosystem ineffective, but never mind). But I don't see where that is leading. You sent a sequence of 2000 bits. It is complex and specified. I was wondering whether Mike Elzinga's crystals are specified. The thing about CSI is that the specification is supposed to be known in advance, which I don't think is true of the rock example. In your 2000-bit example we know in advance that the second 1000 tosses encrypt the first 1000, and we have some specification for the final unencrypted result "is recognizably a sentence in English".
Fair enough. The issue, in my mind, is that specification is being used as 'meaning'. Let's say that the sentence I encrypted wasn't English, but Swahili. Even after decrypting it, could you know that it was actually specified and not random letters (provided that you don't know anything about Swahili)? Same thing with the rock. For example, there are several minerals with the formula Al2SiO5. Kyanite is a blue, fibrous mineral while Andalusite can be fibrous, but it can also be massive and is pink or green in cross section. So why can't the specification of the mineral be an example of CSI? I agree with you that the specification must be known in advance, which is why the whole thing is basically meaningless.

diogeneslamp0 · 10 April 2013

Joe Felsenstein said: I was wondering whether Mike Elzinga's crystals are specified. The thing about CSI is that the specification is supposed to be known in advance, which I don't think is true of the rock example.
But this is a big problem with Dembski's pre-2005 definitions of CSI. Something has CSI if 1. It has creationist complexity, which basically means, it has lots of parts; and 2. It matches an independently given pattern. I fail to understand why Mike's rock does not have CSI, because I think "shelter for bugs" is an independently given pattern. I can go to the pet store and buy a house for bugs. It has function; its function is to house bugs. If a rock also has the function of being a house for bugs, why is that not an independently given pattern? Remember, pre-2005 Dembski said that biological structures are ALL specified because they ALL have function. Indeed, our creationist friend pWQie, who is apparently Security Clearance Joe G, quoted that very passage from Dembski. So if function = specification for biological structures, pre-2005, why should a rock with the function "house for bugs" not be specified, pre-2005? Consider the following passage from a creationist textbook, on how glaciers show intelligent design because they have a clear function.
“God designed the glaciers to store water in the coldest months when water is not usually scarce and to release it in warmer months when streams and reservoirs are low.” [Science 4 for Christian Students, Bob Jones Univ. Press (1990), p.196]
These creationists clearly believe that glaciers have a function and purpose, thus they are intelligently designed. Dembski, pre-2005, says function IS specification. Why are glaciers not specified?

What if the rock is a natural bridge, say, in Utah? That's a rock that's irreducibly complex (take out a chunk and it collapses), it has a function (you can walk across the streams that eroded it), and it matches an independently given pattern (it's a bridge). Depending on how you define "part" one might argue that a natural bridge does not have enough parts to have CSI.

Of course, there is a strategic question of how much we should emphasize the weaknesses of "specification" pre-2005, because Dembski (while not admitting he was wrong) now accuses us of ignoring his inferior post-2005 shit. But the fact is that the UDites at Uncommon Descent ALL employ Dembski's pre-2005 definition of specification to say, e.g., that biological things are specified, Mt. Rushmore is specified, therefore biological things were designed like Mt. Rushmore.

When pro-ID people use Dembski's pre-2005 definition of specification, as they do every single day at UD, no IDer ever calls them ignorant. No IDer would call the creationist textbook Science 4 for Christian Students (quoted above) "ignorant" because it says glaciers have a clear purpose and function. But when anti-ID people use Dembski's pre-2005 definition of specification, we're called ignorant.

Mike Elzinga · 10 April 2013

This is getting at something I and other physicists have been trying to say about this “targeting” stuff for decades. There is no “target” in what falls out of the condensation of matter.

The notion of “target” – hence; specification – lies in the setup of a computer program to mimic nature by using a process that takes place in nature – or in a made-up universe.

Most, if not all, genetic algorithms can be converted into a minimization of potential energy just by flipping the sign of the “fitness peak,” which then turns the peak into a potential well.

When we posit a fitness peak or a potential well, we are considering a specified case of a general rule that we see in nature; and then we use a posited search strategy to locate that peak or well.

The specified characteristics of an organism, or the remaining energy in a system, or some other objectively measurable characteristic used in specifying the peak or potential well are simply the “targets” that represent the peak or well. They are stand-in specifications only in so far as the characteristics being used do in fact occur at the peak or well. Nature determines what those will be in reality; but in simulations, we can make whatever rules and correlates we like.

Those specifications don’t usually have anything to do with the mathematical shape of the peak or well. We could use the mathematical shapes of peaks and wells if we knew how to represent them; then the problem would be reduced to a simple root-finding algorithm. But without knowledge of the shape of the peak or well, Monte Carlo type searches and genetic algorithms are far more useful.

We don’t know ahead of time what the shape of the potential wells will be when given chunks of matter cluster, either by gravity, electromagnetism, gluons, or whatever other force by which matter interacts. We don’t know ahead of time what the cluster will look like. All that depends on what properties emerge as matter clusters.

One way to do this is to model the physics by putting in the masses, the forces, the initial positions and velocities, and then stepping through the simulation to learn what falls out.

Another way is to keep randomly repositioning the particles, checking the potential energy, retaining the configuration that gives the current minimum, making random adjustments to that system, and repeating until the potential is as low as it can get. Once we have that, we can make a profile of the potential energy well of the final configuration.

This latter process is how nature does it; and we can refine the simulation by checking the positions and momenta of all the particles after an initial iteration and throwing that data back into the simulation as a first estimate of particle trajectories. What starts out as a random simulation gradually builds into a developing pattern.
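The "keep randomly repositioning, retain the configuration with the current minimum" procedure described above is essentially a Monte Carlo minimization. Here is a minimal sketch with an invented toy potential (harmonic pairwise terms with a preferred spacing of 1 unit between particles on a line); nothing physical is claimed for it, and the particle count and step sizes are arbitrary illustrative choices.

```python
import random

random.seed(42)

N = 5  # number of particles on a line (toy choice)

def potential(xs):
    """Invented pairwise potential: each pair of particles prefers a separation of 1."""
    e = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            e += (abs(xs[i] - xs[j]) - 1.0) ** 2
    return e

start = [random.uniform(0.0, 10.0) for _ in range(N)]
start_e = potential(start)

best, best_e = start, start_e
for _ in range(20000):
    trial = [x + random.gauss(0.0, 0.1) for x in best]  # random repositioning
    e = potential(trial)
    if e < best_e:  # retain the configuration with the current minimum
        best, best_e = trial, e

print(round(start_e, 2), "->", round(best_e, 2))
```

Flipping the sign of this potential, as Mike suggests, turns the well into a fitness peak and the minimization into exactly the kind of search a genetic algorithm performs.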

What I am illustrating with some physical potential wells is what genetic algorithms do, but with the signs of the wells reversed to make them peaks.

Specifying a “target” is not putting in the answer. The reason ID/creationists keep accusing scientists of this is because ID/creationists believe it is all “spontaneous molecular chaos” down there, and that it is all tornados-in-a-junkyard activity with inert objects that never interact at any level. All they are doing is engaging in a gritty denial of all of physics and chemistry and replacing it with old, medieval vitalism and teleology masquerading as “information” and “intelligent guidance.”

I still want to see an ID/creationist who can do a high school level calculation that scales up the charge-to-mass ratios of protons and electrons to kilogram-sized masses separated by distances on the order of meters, and then calculate the energy of interaction in joules and megatons of TNT. I don’t believe Dembski or any of the gurus of ID can do it or understand what it means.

harold · 10 April 2013

Specifying a “target” is not putting in the answer.
Even well-meaning humans often make the mistake of assuming that evolution is a process that seeks a specific target. In fact it's the opposite. It's a process that arrives at adaptations through random generation of variability, followed by selection acting on the variability. The target is not consciously "known" in advance. Even the use of the term "target" is suboptimal. I guess I was too cryptic when I made this point above...
You then multiply this by what I’ve called the “Tornado Probability”, the probability of assembly by random rearrangement of parts. If the sequence length = L and the number of kinds of parts = K (e.g. K=4 for DNA, K=20 for proteins, etc.), then the tornado probability is 1/K^L.
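The tornado probability 1/K^L quoted above is a one-line calculation. The 100-residue protein length here is just an illustrative choice of mine, picked to show how quickly the number dwarfs even Dembski's 1-in-10^150 threshold for longer sequences.

```python
from math import log10

def tornado_probability(K, L):
    """Probability of hitting one exact length-L sequence in a single random draw
    from an alphabet of K kinds of parts: 1/K^L."""
    return K ** -L

# A 100-residue protein over the 20 amino acids (illustrative numbers):
p = tornado_probability(20, 100)
print(f"1 in 10^{-log10(p):.0f}")  # 1 in 10^130
```

This is of course the probability of pure random assembly, not of evolution; the whole point of the surrounding discussion is that selection does not work by single random draws.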
Speaking of very large and very small numbers, here’s an interesting thought. How many cell divisions do you think there are in the biosphere per unit of time (say, a second)? It’s beyond comprehension. The number of mutations that occur per second is beyond comprehension.
Creationists love to talk about the denominator. It's absolutely true that only a small percentage of mutations affect phenotype at all, and of those, only a small percentage are adaptive. That's the denominator. The numerator, which they love to ignore, is the sheer unimaginable number of germ cell divisions, and thus germline mutations, generated in the biosphere every minute of every day. Evolution is strongly constrained, but within constraints, there is near-infinite capacity for variation. Given the way genetics works at the molecular level, biological evolution is the expected outcome. The onus is on denialists to explain why it shouldn't happen.

All Dembski has ever really done has been to make up functions with very arbitrary constants and very ambivalent variables, and then say "I declare that if I declare the solution to this function to be such and such, then something or other was 'designed', and if I declare it designed, it could not have evolved". It's amusing to plug numbers into the functions and show that things that aren't designed look designed and so on, but the whole premise of Dembski's exercise is flawed to begin with. Even if his calculations made all rocks look undesigned and all forms of modern life look designed, it would still just mean that his arbitrary formulas over-detect "design". We have evidence that modern life evolved from earlier life. Making up arbitrary functions with arbitrary constants and ambivalent variables doesn't change that.

Mike Elzinga · 10 April 2013

ogremk5 said: Same thing with the rock. For example, there are several minerals with the formula Al2SiO5. Kyanite is a blue, fibrous mineral while Andalusite can be fibrous, but it can also be massive and is pink or green in cross section. So why can't the specification of the mineral be an example of CSI? I agree with you that the specification must be known in advance, which is why the whole thing is basically meaningless.
If one asks the question, “Can nature produce THIS specified example of a rock from a tornado-in-a-junkyard strategy?”, the answer is “Not likely.” The ID response to that answer is “Aha; evolution needs ‘intelligence’ and ‘information’; otherwise it can’t produce anything!” It’s the bogus Lottery Winner Fallacy all over again. I think Dembski is just attempting to find more devious ways to hide and word-game the fallacy; he has never shown any interest in the science itself. The appropriate answer to “Can nature produce THIS specified example of a rock?” is, “Yes; here it is! Are you interested in how it did that?” For the last 50 years or so, the only answer we have received from the ID/creationists is, “No!”

https://me.yahoo.com/a/FGBzQUZtsemthPseqYzKYHG1950GVQdQElnix0p2OwOZDtFTmQ--#98415 · 10 April 2013

This comment has been moved to The Bathroom Wall. (This is Atheistoclast again. JF)

ogremk5 · 10 April 2013

https://me.yahoo.com/a/FGBzQUZtsemthPseqYzKYHG1950GVQdQElnix0p2OwOZDtFTmQ--#98415 said:
But what about natural selection? Is it unable to get the genome to contain Complex Specified Information?
Flawed question, since any answer to it is unfalsifiable without any demonstrable evidence. The onus is on Felsenstein to show that natural selection can create anything resembling CSI as defined by Dembski and others. So far, he has offered nada.
That's been done. By any definition of information or complexity, evolution can increase it. I would encourage you to look up this paper: http://faculty.washington.edu/wjs18/Newgenes.pdf from about 10 years ago that describes multiple methods by which new genes appear. It's the onus of ID proponents to show that A) CSI is a useful concept that describes real-world situations, B) it is a consistent system, C) it actually measures or explains something, and D) it exists. Your statement merely assumes that CSI exists, that it applies, and that Dembski has described it correctly. None of which I, for one, am willing to admit without some kind of peer-reviewed support. (Further discussion of this Atheistoclast subthread will take place on the Bathroom Wall. JF)

petrushka · 10 April 2013

This comment has been moved to The Bathroom Wall.(The discussion on this Atheistoclast subthread will take place there. JF)

Joe Felsenstein · 10 April 2013

diogeneslamp0:

Sorry to be so slow getting back to you about the 1-1 transformation. I have just added a Correction to the post, and you are greatly to be thanked for bringing up the matter.

On a closer read of NFL (maybe almost as close as your read of it), no, there is nowhere where Dembski requires the transformation f that represents evolution to be a 1-1 transform. He explicitly allows it to be many-to-one as well. This is on pages 152-154 (of the 2007 paperback printing of the first edition of NFL). Oopsies.

What effect does that have on my argument? Little, as it happens. Dembski actually works in the reverse time order from my argument above. If a species has a genome E1 that ends up satisfying a specification (which he calls T1), then he argues that the inverse image of the specification, T0, has the same probability P as does T1. The inverse image is all events E that map to ones in T1 using the transformation f. (Later on he discusses it on page 160, where he uses the numbering T1 and T2 instead of T0 and T1.)

So he is arguing that if a population has CSI after, it must have had it before. I said it the other way 'round: if it had it before, it has it afterwards. His presentation is more logical on that point.

The result is still that he is arguing that the specifications are equally strong, but not at all requiring that they be the same specification. So if we totally accept his argument up to that point, we are still left with no conservation law that holds when the specifications before and after are the same. And, as I said in the paper and this post, it is easy to find counterexamples when the specifications are the same.
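The inverse-image point can be made concrete with a toy state space. Under any deterministic map f, the event "f(X) lands in T1" is, by definition, the same event as "X lies in T0 = f^-1(T1)", so the two probabilities agree automatically, with no conservation theorem needed; the catch, as argued above, is that T0 need not resemble T1 as a specification at all. The 12-state space and the random many-to-one map below are invented purely for illustration.

```python
from fractions import Fraction
import random

random.seed(3)

# Toy state space and an arbitrary many-to-one "evolution" map f (hypothetical).
states = list(range(12))
f = {s: random.choice(states) for s in states}

T1 = {0, 1, 2}                              # the specification "after"
T0 = {s for s in states if f[s] in T1}      # its inverse image f^-1(T1)

# Under a uniform distribution on starting states, P(X in T0) equals
# P(f(X) in T1) exactly, because they are the same event by construction.
p_T0 = Fraction(len(T0), len(states))
p_after = Fraction(sum(1 for s in states if f[s] in T1), len(states))
print(p_T0, p_T0 == p_after)
```

Nothing forces T0 to be the same set as T1, which is exactly why equal probability "before and after" does not by itself yield a conservation law for a fixed specification.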

By the way, Erik Tellgren (in an article available on-line at Talk Reason here) has an extremely detailed examination of whether Dembski's Conservation Law is proven. He does not address the issue of whether the specification has to be the same before as after if one is to use the Law to show that natural selection cannot improve adaptation.

Many thanks for making me re-examine this.

Mike Elzinga · 10 April 2013

Just a comment on Dembski’s writing style in his recent “Conservation of Information Made Simple” over on ENV. Dembski follows the same tactic that I see in just about every ID/creationist advancing an “argument,” including the staff over at AiG.

That tactic sets forth an assertion about what science cannot explain and proposes that the speaker is about to explain to you why that is so. But then he begins to hop all over the map, injecting all kinds of extraneous information that he implies will clarify what he is about to tell you. In the process he makes numerous other assertions for which he provides no evidence. And in making those evidence-free assertions, new “issues” are brought up that he implies he will explain. In the process, he throws in a whole bunch of links to purported explanations and “proofs” of his assertions. Many of those links simply go to the same evidence-free assertions in another paper by the author.

By the time he has spiraled deeply into this nested set of assertions and goals, the reader is sitting there wondering when the author is ever going to explain what he said he would explain. Larding up ideas with diversions and trips up alleyways and down the drain pipes is a version of the Gish Gallop that forces the reader to keep track of so many evidence-free assertions that it becomes difficult to focus on the original evidence-free assertion. But it is Dembski’s Easter egg hunt example that points at the issue that he buries under so many more paragraphs.

Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution.

The bottom line answer to Dembski comes from nature; it is NOT a “blind search” with a uniform sampling distribution over essentially “infinite” sample spaces. Clustering matter “knows” what is nearby and when it is “getting warmer.” Dembski, et al., have yet to learn what the concept of a field is all about. I am still looking for an ID/creationist who can do a high school level physics/chemistry calculation.

Ray Martinez · 10 April 2013

This comment has been moved to The Bathroom Wall. (Ray's stupid either-or argument happens there. Someone explain to him that mutation can be random even though selection isn't. JF)

DS · 10 April 2013

This comment has been moved to The Bathroom Wall (along with all the other replies to Ray's either-or argument. JF)

Henry J · 10 April 2013

On whether rocks with embedded crystals are inferred to be "designed" because they happen to serve a function for somebody or some critters, one question to ask is whether that function actually depends on the particular arrangement of those embedded crystals, or for that matter does it even depend on what they're made of (at least as long as it isn't something toxic to the critters residing under it).

Henry

diogeneslamp0 · 11 April 2013

Joe, thanks much for the clarification. One thing that seemed dubious to me was:
Joe Felsenstein said: diogeneslamp0: Dembski actually works in the reverse time order from my argument above. If a species has a genome E1 that ends up satisfying a specification (which he calls T1), then he argues that the inverse image of the specification, T0, has the same probability P as does T1.
Is it really

1. "the inverse image of the specification, T0, has the same probability P as does T1"

or did you perhaps mean

2. "the specification, T0, of the inverse image has the same probability P as does T1"?

If it's 1, that seems impossible -- how could the initial state of a system [inverse image] have the same probability as the final state? E.g., think of gas escaping a canister -- the final state has many more probabilistic options -- or conversely, imagine marbles rolling down a funnel: many probabilistic options become one. But if it's 2, how do we assign probabilities to specifications? A specification is a pattern -- in his pre-2005 work I don't see how he could assign probabilities to patterns. I need to chaw on this and think it over and re-read that passage in NFL, and I'll come back here after chawing.

diogeneslamp0 · 11 April 2013

Henry J,
Henry J said: On whether rocks with embedded crystals are inferred to be "designed" because they happen to serve a function for somebody or some critters, one question to ask is whether that function actually depends on the particular arrangement of those embedded crystals
Yeah, that'd be a common sense question, but that doesn't factor into Dembski's pre-2005 definitions, as far as I know. A specification is an independently given pattern. You might like to add something to it-- "the pattern should require that the structure needs at least N parts"-- but that's not what he said, SFAIK.

diogeneslamp0 · 11 April 2013

Mike, I read that 2012 Dembski piece "Conservation of Information Made Simple". It's horrible, but I don't think you're accurately describing why it's horrible.
Mike Elzinga said: [snip] But it is Dembski’s Easter egg hunt example that points at the issue that he buries under so many more paragraphs.

Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution.

The bottom line answer to Dembski comes from nature; it is NOT a “blind search” with a uniform sampling distribution over essentially “infinite” sample spaces. Clustering matter “knows” what is nearby and when it is “getting warmer.” Dembski, et al., have yet to learn what the concept of a field is all about.
No, I don't think that's Dembski's point. He knows that evolutionary search is much faster than blind search. Rather, his point is his allegation that the fact that evolutionary search is much faster than blind search is due to magic. That's the conclusion, but to get there his logic is:

1. All "searches" (he uses this word to appear to mean either fitness function, or target, or both) are equally probable.
2. If all "searches" were equally probable, evolutionary search SHOULD BE as slow as blind search (invoking the NFL theorem).
3. It's not; evolution is faster than blind search, and this is improbable.
4. Since it's improbable, information was added; i.e. information = -log2[probability].
5. Information can only be created by "Intelligence".
6. A Magic Man done it (to quote Ricky Gervais).

I hope we can all see that the fallacies here are in steps 1, 4, and 5. In particular, 5 is pure circular logic: he is assuming what he needs to prove. I'll rebut these fallacies in REVERSE order.

Fallacy in Step 5: Information can only be created by "Intelligence." Obviously, this is rank circular logic. He is trying to prove "the Law of Conservation of Information", which says that only intelligence can create information, and in order to prove it, he assumes what he needs to prove: only intelligence can create information. Dembski has no definition of "intelligence" and it does not appear in any of the math in his papers; hence, he cannot claim to have proven that Intelligence can even create ANY information -- much less has he proven that it is the ONLY possible source of information.

Fallacy in Step 4: Again we see the old canard that creationist information is the -log of a probability. No, logarithms of probabilities are not necessarily information, and anyway, Dembski's probabilities are wrong, computed from a mathematical idealization, as we will see in a moment.

Fallacy in Step 1: Dembski has no evidence that, in Nature, all "searches" are equally probable. This he just assumes. No.
In nature, and especially in ALL genomes of ALL species, smooth fitness functions are much more probable than rough fitness functions.

To make things very simple, consider a toy model. Imagine a very simple space which can have four fitness functions: S, T, U, V. Suppose that S is smooth and T, U, V are infinitely rough. Dembski ASSUMES that all "searches" have equal probability, which really means all FITNESS FUNCTIONS have equal probability. (He rarely says "fitness function" and often says "search," and in many cases he says "target" when his statement ought to say "fitness function." This happens so often that it amounts to more deception from Dembski -- watch out for how he says "search" rather than "fitness function.") This means that in our toy example with four FITNESS FUNCTIONS (not "searches"), each should have probability 1/4.

Now we all know that ALL species have fitness functions that are smooth, not infinitely rough. Any species whose fitness function under genetic mutation was infinitely rough would go extinct in an eyeblink -- one mutation would have the same effect as scrambling the whole genome randomly. In real nature, the probability of S (our smooth fitness function) is 100%, and the probabilities of T, U, V (our rough fitness functions) are 0%. Dembski says this is improbable, therefore information, therefore A Magic Man Done It.

We estimate the probabilities of S, T, U, V from nature; Dembski estimates them from an intelligently designed idealization that exists nowhere in nature. The irony here is that a flat probability distribution, in which S, T, U, and V all have probability 1/4, is *ALWAYS* intelligently designed. It only occurs in mathematical idealizations, not in nature. What nature actually produces is a very non-flat probability distribution, in which the probability of S (our smooth fitness function) is 100% and the probabilities of T, U, V (our rough fitness functions) are 0%.
Dembski says the flat probability distribution (over all possible fitness functions) is "natural" and the non-flat distribution is "intelligently designed." We say the non-flat probability distribution (over all possible fitness functions) is "natural" and the flat distribution is "intelligently designed." Dembski's argument is thus analogous to the old Natural Theology argument that infinite disorder is far more probable than order. Since improbable, A Magic Man Done It. Here, "order" means smooth fitness functions.
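The smooth-versus-rough contrast above is easy to check numerically. Below is a minimal Python sketch (the parameters and function names are mine, purely illustrative): a hill climber given a smooth fitness function (partial credit toward a target) finds the target quickly, while the same climber given an all-or-nothing "infinitely rough" function gets no gradient at all and is reduced to blind wandering.

```python
import random
random.seed(1)

L = 20                            # genome length (bit string); 2**20 genotypes
TARGET = [1] * L

def smooth_fitness(g):
    # Partial credit toward the target: one mutation changes fitness by 1.
    return sum(a == b for a, b in zip(g, TARGET))

def rough_fitness(g):
    # All-or-nothing "needle in a haystack": no gradient at all.
    return 1 if g == TARGET else 0

def search(fitness, steps=5000):
    g = [random.randint(0, 1) for _ in range(L)]
    for t in range(steps):
        if g == TARGET:
            return t              # number of steps it took to find the target
        m = g[:]
        m[random.randrange(L)] ^= 1           # a single point mutation
        if fitness(m) >= fitness(g):
            g = m                 # selection keeps mutants that are no worse
    return None                   # target never found within the budget

result_smooth = search(smooth_fitness)
result_rough = search(rough_fitness)
print(result_smooth)              # typically at most a few hundred steps
print(result_rough)               # almost always None: blind wandering among 2**20
```

The same acceptance rule drives both runs; only the shape of the fitness function differs, which is the whole point.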

diogeneslamp0 · 11 April 2013

Mike, Joe, I want to return to Dembski’s 2012 “Conservation of Information Made Simple” over on ENV, which I find is filled with remarkable falsehoods-- lies, I think we should say, but correct me if I'm wrong. He is back to lying about the No Free Lunch theorems, which IDers always lie about. The boldface below I believe to be lies.
William Dembski said: Anybody familiar with the No Free Lunch (NFL) theorems will immediately see that conservation of information is very much in the same spirit. The upshot of the NFL theorems is that no evolutionary search outperforms blind search once the information inherent in fitness (i.e., the fitness landscape) is factored out. NFL is a great equalizer. It says that all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets that might be searched. If NFL tends toward egalitarianism by arguing that no search is, in itself, better than blind search when the target is left unspecified, conservation of information tends toward elitism by making as its starting point that some searches are indeed better than others (especially blind search) at locating particular targets... Some searches do better, indeed much better, than blind search, and when they do, it is because they are making use of target-specific information.
Here are my objections.

1. "all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets". No. The NFL theorems address averaging over all possible FITNESS FUNCTIONS, not over all possible targets. This is a crucial point, and I regard this as deception on Dembski's part. Without averaging over all possible FITNESS FUNCTIONS, including the mostly infinitely rough fitness functions, some searches WILL do better than others, and evolutionary search will be much faster than blind search.

2. "when they do, it is because they [searches] are making use of target-specific information." Outright lying. Evolutionary search outperforms blind search because the fitness function is smooth. This is part of Dembski's years-long crusade to attack Schneider for his ev program, which generates information by a GA. Dembski has for years claimed that Schneider with his ev, and other scientists like the authors of Avida, Tierra, etc., cheated and "smuggled" information into their GAs. But it is the smoothness of the fitness function that mostly makes evolutionary algorithms much faster than blind search, and Dembski wants to distract our attention away from the 600-pound gorilla in the room.

3. "the information inherent in fitness". Again, this is Dembski begging the question. To Dembski, information is -log2[probability]. But he is ASSUMING that he knows the probability of each fitness function -- that the probabilities of infinitely rough fitness functions [many] are equal to the probabilities of smooth fitness functions [few]. He doesn't know the probability, so he doesn't know the information.

If I have misunderstood the NFL theorems and Dembski's representation of them, please correct me.
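Objection 1 can be illustrated numerically. The following Python sketch (my own toy setup, not Dembski's) averages a hill climber and a blind search, with identical evaluation budgets, over many RANDOM fitness functions on a tiny genotype space. Random functions are overwhelmingly rough, and averaged over them the hill climber gains essentially nothing: the NFL result in miniature. (The climber here can revisit points, so this is an approximate illustration, not the theorem itself.)

```python
import itertools
import random
import statistics

random.seed(2)

N = 4                          # bit-string length; 2**4 = 16 genotypes
GENOMES = list(itertools.product([0, 1], repeat=N))
EVALS = 8                      # identical evaluation budget for both searches

def blind_search(fitness):
    # Evaluate EVALS genotypes sampled without replacement; keep the best.
    return max(fitness[g] for g in random.sample(GENOMES, EVALS))

def hill_climb(fitness):
    g = random.choice(GENOMES)
    best = fitness[g]
    for _ in range(EVALS - 1):
        i = random.randrange(N)
        m = tuple(b ^ (1 if j == i else 0) for j, b in enumerate(g))
        if fitness[m] >= fitness[g]:
            g = m              # accept the one-bit mutant if it is no worse
        best = max(best, fitness[m])
    return best

# Average both searches over many random (i.e., rough) fitness functions,
# the kind that dominate the NFL average over ALL fitness functions.
trials = 2000
b = statistics.mean(blind_search({g: random.random() for g in GENOMES})
                    for _ in range(trials))
h = statistics.mean(hill_climb({g: random.random() for g in GENOMES})
                    for _ in range(trials))
print(round(b, 2), round(h, 2))    # nearly equal: no free lunch on average
```

Swap the random fitness dictionaries for a single smooth function and the hill climber pulls far ahead, which is exactly why the averaging over all fitness functions matters.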

Joe Felsenstein · 11 April 2013

diogeneslamp0 said: Is it really 1. "the inverse image of the specification, T0, has the same probability P as does T1" or did you perhaps mean 2. "the specification, T0, of the inverse image has the same probability P as does T1." If it's 1, that seems impossible-- how could the initial state of a system [inverse image] have the same probability as the final state? e.g. think of gas escaping a canister-- the final state has many more probabilistic options-- or conversely, imagine marbles rolling down a funnel: many probabilistic options become 1. But if it's 2, how do we assign probabilities to specifications? A specification is a pattern-- in his pre-2005 work I don't see how he could assign probabilities to patterns. I need to chaw on this and think it over and re-read that passage in NFL, and I'll come back here after chawing.
In Dembski's argument, a specification is represented by a subset of the genotypes (states of his process). That's because the statement that is a specification designates a subset as Specified. So T1 is a subset. The inverse image of a subset (not of a single state) is the union of all the inverse images of its states. That is, it's the set of states that map into the subset. So it too is a subset, T0. The subsets can have probabilities -- they're the sum of the probabilities of their members. So it really is #1.
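That bookkeeping is easy to verify with a toy computation (my own illustrative code, not Dembski's): for a bijective map f on a small state space, the probability of the inverse image T0 under the initial distribution equals the probability of ending up in the specified subset T1, because "start in T0" and "end in T1" are the same event.

```python
import random
random.seed(3)

STATES = list(range(8))

# A 1-1 (bijective) map standing in for one deterministic step of evolution.
perm = STATES[:]
random.shuffle(perm)
f = dict(zip(STATES, perm))

# An arbitrary probability distribution over initial states.
weights = [random.random() for _ in STATES]
total = sum(weights)
p = {s: w / total for s, w in zip(STATES, weights)}

T1 = {0, 3, 5}                                 # the Specified subset
T0 = {s for s in STATES if f[s] in T1}         # inverse image of T1 under f

# P(T0) under the initial distribution equals P(ending in T1).
p_T0 = sum(p[s] for s in T0)
p_end_in_T1 = sum(p[s] for s in STATES if f[s] in T1)
assert abs(p_T0 - p_end_in_T1) < 1e-12
print(p_T0)
```

Because f is 1-1, T0 has the same number of states as T1, but its probability under the initial distribution is whatever the distribution says it is; nothing about the construction requires equiprobable states.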

Joe Felsenstein · 11 April 2013

diogeneslamp0 said: ... Here are my objections. 1. "all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets". No. NFL theorems address the averaging over all possible FITNESS FUNCTIONS, not over all possible targets. This is a crucial point and I regard this as deception on Dembski's part. Without averaging over all possible FITNESS FUNCTIONS, including mostly infinitely rough fitness functions, some searches WILL do better than others, and evolutionary search will be much faster than blind search.
To Dembski, a "search" is a fitness function. So he would argue that he is innocent of deception.
2. "when they do, it is because they [searches] are making use of target-specific information." Outright lying. Evolutionary search outperforms blind search because the fitness function is smooth. This is part of Dembski's years-long crusade to attack Schneider for his ev program, which generates information by a GA. Dembski has for years claimed that Schneider with his ev, and other scientists like the authors of Avida, Tierra, etc., cheated and "smuggled" information into their GAs. But it is the smoothness of the fitness function that mostly makes evolutionary algorithms much faster than blind search, and Dembski wants to distract our attention away from the 600-pound gorilla in the room.
Again, in their critiques of GAs, Dembski and his allies always regard the "hotter ... colder" feedback, which comes from the fitness function, as information that is smuggled in. You or I might say that it is just the structure of (say) physics that causes it, but to them that's smuggled information.
3. "the information inherent in fitness". Again, this is Dembski begging the question. To Dembski, information is -log2[probability]. But he is ASSUMING that he knows the probabilities of each fitness function, that the probability of infinitely rough fitness functions [many] are equal to the probability of smooth fitness functions [few]. He doesn't know the probability, so he doesn't know the information.
He would, I suspect, argue that equiprobability is some sort of prior when we are in a state of ignorance. I would argue that the prior should come from what we know about the physics. Maybe his argument amounts to assuming that our universe is chosen out of all possible universes, and that front-loads all the information. I have argued (in my 2007 article and in a PT post about Dembski's SFS argument here) that if our universe is one in which natural selection works, then that is what is of importance to us, and if the cosmologists are arguing about possible other universes, most biologists are happy not to get involved in that.
If I have misunderstood the NFL theorems and Dembski's representation of them, please correct me.
You understand the NFL theorems correctly. In the Search For a Search papers, Dembski and Marks do not explicitly invoke those theorems. In effect they just argue the rarity of the searches that are smooth, take the -log of their probability, and call it the Active Information.
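For concreteness, Dembski and Marks define Active Information as I_plus = log2(q/p), where p is the success probability of blind search and q that of the alternative (e.g., evolutionary) search for the same target and budget. A minimal sketch, with made-up illustrative numbers (the formula is theirs; the scenario is mine):

```python
import math

def active_information(p_blind, q_alt):
    # Dembski & Marks: I_plus = log2(q / p), the difference between the
    # "endogenous" information -log2(p) and the "exogenous" -log2(q).
    return math.log2(q_alt / p_blind)

# Toy numbers: a target occupying 1 of 2**20 genotypes, blind search with a
# budget of 1000 samples, versus a hill climber on a smooth landscape that
# finds the target, say, half the time.
p = 1000 / 2**20
q = 0.5
print(active_information(p, q))    # about 9 bits
```

The arithmetic is trivial; the contested step is taking -log of an assumed uniform probability over searches in the first place.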

diogeneslamp0 · 11 April 2013

Joe Felsenstein said:
diogeneslamp0 said: Is it really 1. "the inverse image of the specification, T0, has the same probability P as does T1" or did you perhaps mean 2. "the specification, T0, of the inverse image has the same probability P as does T1." If it's 1, that seems impossible-- how could the initial state of a system [inverse image] have the same probability as the final state? e.g. think of gas escaping a canister-- the final state has many more probabilistic options-- or conversely, imagine marbles rolling down a funnel: many probabilistic options become 1. But if it's 2, how do we assign probabilities to specifications? A specification is a pattern-- in his pre-2005 work I don't see how he could assign probabilities to patterns. I need to chaw on this and think it over and re-read that passage in NFL, and I'll come back here after chawing.
In Dembski's argument, a specification is represented by a subset of the genotypes (states of his process). That's because the statement that is a specification designates a subset as Specified. So T1 is a subset. The inverse image of a subset (not of a single state) is the union of all the inverse images of its states. That is, it's the set of states that map into the subset. So it too is a subset, T0. The subsets can have probabilities -- they're the sum of the probabilities of their members. So it really is #1.
Joe, thank you, that is very clear. I will chaw on it some more and come back here. Thanks also for sending the trolls to the BW.

Mike Elzinga · 11 April 2013

diogeneslamp0 said: No, I don't think that's Dembski's point. He knows that evolutionary search is much faster than blind search. Rather, his point is his allegation that the fact that evolutionary search is much faster than blind search, is due to magic.
I am simply pointing out that ID/creationists ignore the real science and embrace their own misconceptions.

Dembski and all ID/creationists “know” it is all “spontaneous molecular chaos” down there. He and all ID/creationists “know” that the second law of thermodynamics says that everything comes all apart. He and all ID/creationists “know” that evolution conflicts with the laws of thermodynamics. He and all ID/creationists “know” that scientists are “just digging in” and asserting, despite all the above, that evolution happens anyway, no matter how improbable.

Why are these huge improbabilities so significant to creationists? It is because of their fundamental misconceptions about the second law and about the properties of matter. All those screeds about CSI, “spontaneous molecular chaos,” and conflicts with the second law are back-handed accusations that scientists are so stupid that they don’t even understand their own science; so therefore scientists are effectively asserting “magic,” as you say. It is a loud accusation, and a lie, accusing scientists of exactly what they, the ID/creationists, are themselves doing. In other words, it is projection; howling, sneering, repeated projection. The world is so unfair and mean to them.

They have learned from Phillip Johnson to cast “reasonable doubt” on science without ever declaring clearly what their alternative is. Their alternative remains an elusive word-game of smoke-and-mirrors played with a sneering smugness that absolves them of any responsibility for their evidence-free accusations. This pastiche is then handed off to their political operatives to be injected into the public school curriculum. That is ID/creationism’s socio-political history. No research, no self-checking, no introspection, and no effort is required; EVER. Just sit in one’s plush office and keep churning it out while being well paid.
Dembski and the rest of those gurus want and love the attention they get; they don’t have to work for it or answer for it. It’s just rope-a-dope.

Joe Felsenstein · 11 April 2013

Things being somewhat quiet here lately, now is a good time to comment on the thread that developed over at Uncommon Descent (here). It's by "kairosfocus" with lots of comments by "JoeG". Their comments about me show their typical degree of charity ;-):
[JoeG]: It is obvious that Joe F doesn’t even understand the concept of CSI and he thinks that natural selection chooses. ... [snip a passage quoted from my 2007 article] That is a major WTF? It boggles the mind that someone so clueless about a concept would actually try to refute it. And the scary part is that some people think that he did a good job.
[kairosfocus]: In short, this is a willful, continuing misrepresentation on JoeF’s part. He has signally failed in duties of care to accuracy, truth, and fairness in serious discussion.
I am so charmed it is hard to keep my composure. Anyway, let me (in the following comment) make a few remarks about their criticisms of my 2007 article.

diogeneslamp0 · 11 April 2013

Joe Felsenstein said: Things being somewhat quiet here lately, now is a good time to comment on the thread that developed over at Uncommon Descent (here). It's by "kairosfocus" with lots of comments by "JoeG". Their comments about me show their typical degree of charity ;-):
[JoeG]: It is obvious that Joe F doesn’t even understand the concept of CSI and he thinks that natural selection chooses. ... [snip a passage quoted from my 2007 article] That is a major WTF? It boggles the mind that someone so clueless about a concept would actually try to refute it. And the scary part is that some people think that he did a good job.
[kairosfocus]: In short, this is a willful, continuing misrepresentation on JoeF’s part. He has signally failed in duties of care to accuracy, truth, and fairness in serious discussion.
I am so charmed it is hard to keep my composure. Anyway, let me (in the following comment) make a few remarks about their criticisms of my 2007 article.
I was banned from UD, like a lot of people, and no one there now knows any math or understands Dembski's math, except arguably VJ Torley and maybe Gpuccio. You'll recall that none of the trolls attacking this thread made even ONE mathematical argument.

Mike Elzinga · 11 April 2013

diogeneslamp0 said: I was banned from UD, like a lot of people, and no one there now knows any math or understands Dembski's math, except arguably VJ Torley and maybe Gpuccio. You'll recall that none of the trolls attacking this thread made even ONE mathematical argument.
We also noted that Granville Sewell (PhD) can’t even get units right when plugging “X-entropy” for entropy into an equation. And, one of their own, Sal Cordova, was attacked viciously – he may have even been banned - when he borrowed some of our critiques of Sewell over on The Skeptical Zone and disagreed with Sewell on his own thread over at UD. If you type in “thermodynamics” in the search box over at UD, you can pull up some of the worst “mathematical arguments” you will ever see. Nobody should ever expect a response from Joe G other than feces hurling; he comprehends nothing and just hurls on some automatic impulse. And that kairosfocus character uses nothing but huge barrages of copy/paste junk and scolding to shut people up. Barry Arrington just bans people for not reciting some catechism he sets out as a purity test. It is probably not useful to try to refute vapor thought by wrangling with the vapor thinkers themselves. Better to just study it and use it for trying to educate those who really want to learn.

Joe Felsenstein · 11 April 2013

Two things they complain about are the ID cartoon I used as illustration in my 2007 article, and my failure to see that Dembski's CSI discussion was really about islands of function and macroevolution as opposed to microevolution: The cartoon The Visigoths are Coming was from the Access Research Network (it is by Chuck Asay, and T-shirts and mugs are available from ARN). It is really a marvellous cartoon and alludes to many of the arguments that ID types use. The reason I used it was to show that Dembski's argument, which is represented in the cartoon by the battering ram labeled "Information Content of DNA", is widely considered to be a major and devastating argument against "Darwinism". Dennis Wagner, who runs ARN, was kind enough to give permission to run the cartoon with my article in Reports of the National Center for Science Education. In the online version of that article NCSE simply linked to the image on the ARN site. kairosfocus and JoeG are outraged that my article
[kairosfocus]: tries to present a tee shirt/editorial cartoon as substantially representing the design theory case.
Of course my article does try (fairly successfully, in my biased opinion) to explain William Dembski's argument. But they misunderstand what my article was trying to explain, and what it was trying to refute. For example, they keep dragging the discussion over to "islands of function" and to Michael Behe's arguments. kairosfocus even manages to wrestle it over to his concept of "FSCO/I".

My article was about William Dembski's CSI conservation law argument, about his No Free Lunch argument, and about his front-loading (in effect, Search For a Search) arguments. Dembski's Conservation Law theorem does not mention islands of function, or body plans, or irreducible complexity. It involves genotypes and an evolutionary process (modeled as a mapping). That's all. Nevertheless, with lots of effort KF and JG think that they have shown that what Dembski was talking about was islands of function, body plans, and irreducible complexity. They do not even attempt any argument that I was wrong about the Conservation Law theorem, about the issue of changing the specification, and how that makes the Conservation Law unable to show that the observation of CSI implies design (rather than natural selection).

I offered to do a slow, careful, step-by-step discussion here with JoeG (when he was showing up in this thread as "A Masked Panda ... -pWQ") on the Conservation Law and what it implied about evolutionary change. At which point JoeG vanished in a puff of smoke. Pity ...

diogeneslamp0 · 11 April 2013

Mike Elzinga said:
diogeneslamp0 said: I was banned from UD, like a lot of people, and no one there now knows any math or understands Dembski's math, except arguably VJ Torley and maybe Gpuccio. You'll recall that none of the trolls attacking this thread made even ONE mathematical argument.
We also noted that Granville Sewell (PhD) can’t even get units right when plugging “X-entropy” for entropy into an equation. And, one of their own, Sal Cordova, was attacked viciously – he may have even been banned - when he borrowed some of our critiques of Sewell over on The Skeptical Zone and disagreed with Sewell on his own thread over at UD. If you type in “thermodynamics” in the search box over at UD, you can pull up some of the worst “mathematical arguments” you will ever see. Nobody should ever expect a response from Joe G other than feces hurling; he comprehends nothing and just hurls on some automatic impulse. And that kairosfocus character uses nothing but huge barrages of copy/paste junk and scolding to shut people up. Barry Arrington just bans people for not reciting some catechism he sets out as a purity test. It is probably not useful to try to refute vapor thought by wrangling with the vapor thinkers themselves. Better to just study it and use it for trying to educate those who really want to learn.
But every single one of them BELIEVES himself to be among the great scientists of the world, and good at math, although almost none of them can give a mathematical argument on any topic, and most never cite anything in the scientific literature. My mental image of the UDite is that of an IT professional who is hated by his co-workers because he controls the administrator password in his office -- and when his co-workers forget their passwords or need help, they loathe, loathe having to ask the arrogant UDite for assistance, because he lords his power over them. Like that Saturday Night Live sketch about the asshole computer administrator who orders his co-workers away from their own keyboards with a tone of contemptuous disgust: "Mooove!" A UDite is usually an IT professional who considers himself a scientist because he has a spool of ethernet cable. A dog could be an IT professional if it knew the administrator password.

Joe Felsenstein · 11 April 2013

OK guys, fun's fun. Let's cool things down.

DS · 11 April 2013

Joe Felsenstein said: Two things they complain about are the ID cartoon I used as illustration in my 2007 article, and my failure to see that Dembski's CSI discussion was really about islands of function and macroevolution as opposed to microevolution: The cartoon The Visigoths are Coming was from the Access Research Network (it is by Chuck Asay, and T-shirts and mugs are available from ARN). It is really a marvellous cartoon and alludes to many of the arguments that ID types use. The reason I used it was to show that Dembski's argument, which is represented in the cartoon by the battering ram labeled "Information Content of DNA", is widely considered to be a major and devastating argument against "Darwinism". Dennis Wagner, who runs ARN, was kind enough to give permission to run the cartoon with my article in Reports of the National Center for Science Education. In the online version of that article NCSE simply linked to the image on the ARN site. kairosfocus and JoeG are outraged that my article
[kairosfocus]: tries to present a tee shirt/editorial cartoon as substantially representing the design theory case.
Of course my article does try (fairly successfully, in my biased opinion) to explain William Dembski's argument. But they misunderstand what my article was trying to explain, and what it was trying to refute. For example, they keep dragging the discussion over to "islands of function" and to Michael Behe's arguments. kairosfocus even manages to wrestle it over to his concept of "FSCO/I".

My article was about William Dembski's CSI conservation law argument, about his No Free Lunch argument, and about his front-loading (in effect, Search For a Search) arguments. Dembski's Conservation Law theorem does not mention islands of function, or body plans, or irreducible complexity. It involves genotypes and an evolutionary process (modeled as a mapping). That's all. Nevertheless, with lots of effort KF and JG think that they have shown that what Dembski was talking about was islands of function, body plans, and irreducible complexity. They do not even attempt any argument that I was wrong about the Conservation Law theorem, about the issue of changing the specification, and how that makes the Conservation Law unable to show that the observation of CSI implies design (rather than natural selection).

I offered to do a slow, careful, step-by-step discussion here with JoeG (when he was showing up in this thread as "A Masked Panda ... -pWQ") on the Conservation Law and what it implied about evolutionary change. At which point JoeG vanished in a puff of smoke. Pity ...
The cartoon business is just crybaby stuff and can safely be ignored. The macroevolution stuff is just goalpost shifting. If random mutation and natural selection can indeed increase information (and they obviously can), then there is absolutely nothing at all preventing them from producing new body plans and entirely new lineages of organisms. Indeed, there is ample evidence that they have done so repeatedly. Ignoring all of the evidence and trying to make a theoretical argument is ridiculous. Just keep repeating to yourself, "bumble bees cannot fly", "bumble bees cannot fly". And even if they can, they still can't turn into armadillos, so there!

Henry J · 11 April 2013

Wasn't that "bumble bees can't fly" thing caused by a faulty assumption that their wings don't change shape during flight?

diogeneslamp0 · 11 April 2013

This is typical UDite behavior. Dembski claimed to have a mathematical proof of conservation of CSI.

His "proof" had no variables for "replicators" as Joe G would say, no variables for origin of life, no variables for genetic code, no variables for old phyla or new phyla, no variables for the Cambrian Explosion, no variables for islands of function. Function is incorporated only indirectly, via probabilities-- function is not itself a variable per se.

Any counter-example of natural processes increasing CSI means the LCCSI is dead, dead, dead. They want to evade that with "replicators", origin of life, genetic code, old phyla or new phyla, Cambrian Explosion, islands of function, blah blah blah.

All of those evasions are irrelevant. The LCCSI has no variable for them, so their status does not suspend the LCCSI.

Again: even ONE counter-example of natural processes increasing CSI means the LCCSI is dead, dead, dead. We have many, so it's dead.

Mike Elzinga · 11 April 2013

There is a related concept that gets mangled by ID/creationists; and that is the concept of Shannon information.

The formula is fairly simple.

I_Shannon = - ∑ p_i log p_i,

where the sum runs over i = 1 to Ω, the total number of “states,” and p_i is the probability of state i.

This expression has the property that it becomes a maximum when all states are equally probable; i.e., p_i = 1/Ω for all i. In this case, the Shannon information becomes just log Ω.

It really doesn’t make any difference which logarithm base one uses, but if it is base 2, the answer comes out in the number of bits. It’s just a minor point.
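Both properties -- the maximum at the uniform distribution, and base 2 giving bits -- take only a few lines to check numerically (the example distributions below are illustrative, not from anyone's paper):

```python
import math

def shannon_info(p, base=2):
    # I = -sum_i p_i log(p_i); base 2 gives the answer in bits.
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

omega = 8
uniform = [1 / omega] * omega                             # all states equally probable
skewed = [0.5, 0.2, 0.1, 0.05, 0.05, 0.05, 0.03, 0.02]    # same Omega, non-uniform

print(shannon_info(uniform))      # log2(8) = 3.0 bits: the maximum
print(shannon_info(skewed))       # strictly less than 3 bits
```

Any departure from the uniform distribution lowers the value, which is exactly the special-case relationship to the CSI-style log-of-sample-size calculation discussed below.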

There is an implicit and subtle connection with the CSI calculation because the CSI calculation takes all the ways something can happen and takes the log of it. That is the same as taking minus log of the reciprocal of the number of ways; i.e., minus log of the probability that a particular event occurred out of the sample space represented by that total number of ways.

Note that the underlying assumption in the CSI calculation is that all probabilities are equal. It is the special case of the Shannon information where all probabilities are the reciprocal of the size of the sample space, so the Shannon information simply becomes the log of the size of the sample space, or the log of the number of ways an event can occur. Tornado-in-a-junkyard sampling.

In that example with the rock, it means that every specified configuration of the rock was equally probable. It was built from a “primordial soup” in which the sampling was uniform.

Tomato Addict · 11 April 2013

@Joe Felsenstein: My knowledge of information theory comes from the overlap with statistical inference, but I'm really certain that UPB doesn't show up in any of my theory books. From my perspective CSI is simply a Likelihood Ratio test gone wrong, and UPB is something Dembski quote-mined from Emile Borel, without any useful application. Am I missing some insight from Information Theory where a concept like the UPB might actually be useful?

I ask because it seems like much of this discussion hinges on the usefulness of UPB, and like the UPB itself that usefulness is very near to zero.
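For reference, Dembski's Universal Probability Bound is just the product of three cosmic estimates; a few lines reproduce the 10^150 figure (the three numbers are Dembski's published estimates; the code is merely illustrative):

```python
# Dembski's Universal Probability Bound: the product of his three estimates.
particles = 1e80      # elementary particles in the observable universe
rate = 1e45           # maximum physical state changes per second (Planck scale)
seconds = 1e25        # generous upper bound on the number of seconds available
upb = particles * rate * seconds
print(upb)            # 10**150: Dembski rejects "chance" below 1 in 10**150
```

As a back-of-envelope count of possible physical events it is unobjectionable; the contested move is using it as a rejection threshold in what amounts to a likelihood-ratio test with the alternative hypothesis left unspecified.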

TomS · 11 April 2013

Henry J said: Wasn't that "bumble bees can't fly" thing caused by a faulty assumption that their wings don't change shape during flight?
The story, as related by the Wikipedia article on bumblebees (under Misconceptions: Flight), is not at all clear. But it doesn't seem that anyone was seriously suggesting that bumble bees could not fly.

diogeneslamp0 · 11 April 2013

Mike Elzinga said: There is a related concept that gets mangled by ID/creationists; and that is the concept of Shannon information. The formula is fairly simple. I_Shannon = - ∑ p_i log p_i, where the sum runs over i = 1 to Ω, the total number of “states,” and p_i is the probability of state i. This expression has the property that it becomes a maximum when all states are equally probable; i.e., p_i = 1/Ω for all i. In this case, the Shannon information becomes just log Ω.
Mike, that equation is for Shannon uncertainty, not information. Uncertainty is proportional to entropy in statistical thermodynamics. The Shannon equation for mutual information is more relevant, as it relates how much knowing variable X reduces our uncertainty in Y. So mutual information is a reduction in uncertainty. There's a good wikipedia page for mutual information.
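The distinction can be made concrete with a small computation (the joint distribution is a toy example of my own choosing): mutual information computed directly from the joint distribution agrees with H(Y) minus the conditional uncertainty H(Y|X), i.e., the reduction in uncertainty about Y from knowing X.

```python
import math

def H(p):
    # Shannon uncertainty of a distribution, in bits.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Toy joint distribution p(x, y) over two correlated binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px = {x: sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in joint.items() if b == y) for y in (0, 1)}

# Mutual information: I(X;Y) = sum p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
mi = sum(v * math.log2(v / (px[x] * py[y])) for (x, y), v in joint.items())

# Equivalently, the reduction in uncertainty about Y from knowing X.
HY = H(py.values())
HY_given_X = sum(px[x] * H([joint[(x, y)] / px[x] for y in (0, 1)])
                 for x in (0, 1))
print(mi)                                  # positive: X tells us about Y
assert abs(mi - (HY - HY_given_X)) < 1e-9
```

Make the variables independent (all four cells 0.25) and the mutual information drops to zero, even though the uncertainty H(Y) itself is then at its maximum.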

Mike Elzinga · 11 April 2013

TomS said:
Henry J said: Wasn't that "bumble bees can't fly" thing caused by a faulty assumption that their wings don't change shape during flight?
The story, as related by the Wikipedia article on bumblebees (under Misconceptions: Flight), is not at all clear. But it doesn't seem that anyone was seriously suggesting that bumble bees could not fly.
Apparently the important point was that the bumble bee didn’t know he couldn’t fly, so he did.

SWT · 11 April 2013

TomS said:
Henry J said: Wasn't that "bumble bees can't fly" thing caused by a faulty assumption that their wings don't change shape during flight?
The story, as related by the Wikipedia article on bumblebees (under Misconceptions: Flight), is not at all clear. But it doesn't seem that anyone was seriously suggesting that bumble bees could not fly.
The version I'd heard decades ago was that the engineers who did the analysis applied the equations for fixed-wing aircraft to the bumblebee. They applied an obviously inappropriate equation to the situation, leading to an obviously incorrect conclusion. Remind me again why this came up in a discussion of Dembski's CSI arguments?

diogeneslamp0 · 11 April 2013

TomS said:
Henry J said: Wasn't that "bumble bees can't fly" thing caused by a faulty assumption that their wings don't change shape during flight?
The story, as related by the Wikipedia article on bumblebees (under Misconceptions: Flight), is not at all clear. But it doesn't seem that anyone was seriously suggesting that bumble bees could not fly.
OT, but noted expert entomologist, Kent "scientists can never prove insects are alive" Hovind has clarified the issue.
Chiropractor Larry Tyler: To show the flaws of mechanistic (mainstream) science - mechanistic science has proven that bumble bees can't fly
Kent Hovind: Wings aren't big enough. Body is too heavy
Tyler: Wings aren't big enough. The body is too heavy. Problem is they forgot to tell the bumble bee
Hovind: They fly just fine
Tyler: They fly very good. Faster than I can run. Okay, so you know, bumble bees disprove, if you will, mechanistic science as an explanation for life
Hovind: Sure, sure.
Source: The Bible and Health (2001) @ 1:36:10. [http://kent-hovind.com/quotes/scienceii.htm]
Well there you go. Science is finished.

Mike Elzinga · 11 April 2013

diogeneslamp0 said:
Mike Elzinga said: There is a related concept that gets mangled by ID/creationists; and that is the concept of Shannon information. The formula is fairly simple: I_Shannon = −∑ p_i log p_i, where the sum runs over i = 1 to Ω, the total number of “states,” and p_i is the probability of state i. This expression has the property that it becomes a maximum when all states are equally probable; i.e., p_i = 1/Ω for all i. In this case, the Shannon information becomes just log Ω.
Mike, that equation is for Shannon uncertainty, not information. Uncertainty is proportional to entropy in statistical thermodynamics. The Shannon equation for mutual information is more relevant, as it relates how much knowing variable X reduces our uncertainty in Y. So mutual information is a reduction in uncertainty. There's a good wikipedia page for mutual information.
It’s been called a lot of things; e.g., uncertainty, information, and Shannon “entropy.” The names are relatively unimportant; what matters is what the formulas do for us. ID/creationists often conflate this with entropy in thermodynamics because it has the same mathematical expression. There is even a story that John von Neumann told Claude Shannon to call it “entropy” because nobody knew what entropy was. Since I know a little about the humor some physicists and mathematicians use, I suspect it was probably a sardonic wisecrack.

But beyond the mathematical expression, there is no connection. The entropy of isolated systems goes to a maximum because matter interacts with matter to redistribute energy states with equal probability.

There is also nothing to be gained by claiming that the entropy is a measure of our lack of “information” about the state of a thermodynamic system. “Information” about what? What state the system is in? At maximum entropy you know the states are equally probable; who cares which specific microstate? And we already know about Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions, and can calculate partition functions for many relatively simple cases.

It is the relationship of entropy to the other thermodynamic variables that is most important; for example, 1/T = ∂S/∂E, where E is the total energy of the system. What does taking the partial of Shannon “entropy” with respect to energy get us?

Things like temperature, energy, and entropy in thermodynamics and statistical mechanics are dynamically interrelated because matter interacts with matter. Letters, digits, and marbles do NOT. So such mathematical formulas may label “states,” but they don’t say anything about how states change or why. The Shannon “entropy” is the same for all permutations of a given alphanumeric string, for example. But if the distribution of characters changes, then the Shannon “entropy” changes. But what caused the change in the distribution?
Is there a dynamic connection between the distribution and something else? Unless you have some theoretical system of connections among variables, something with interaction “energies,” then there is no theoretical framework to make assertions about what will happen or what is probable from simply calculating CSI or Shannon “information.” ID/creationists know absolutely nothing about thermodynamics and statistical mechanics.
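[The permutation point above is easy to check directly. A short sketch of my own, computing the Shannon “entropy” of a string's character distribution; the strings are arbitrary examples:]

```python
import math
from collections import Counter

def string_entropy(s):
    """Shannon 'entropy' (in bits) of a string's character distribution."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# All permutations of a string share the same character distribution,
# hence the same Shannon "entropy":
print(string_entropy("AACGTT"))  # same value for...
print(string_entropy("TAGTCA"))  # ...any rearrangement of the same letters

# Only changing the *distribution* of characters changes the entropy;
# a one-symbol string has no uncertainty at all (entropy zero):
print(string_entropy("AAAAAA"))
```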

W. H. Heydt · 11 April 2013

Henry J said: Wasn't that "bumble bees can't fly" thing caused by a faulty assumption that their wings don't change shape during flight?
As I understand it, the analysis was done using "fixed wing" aerodynamics. Since those doing the analysis were quite well aware that (a) bumblebees are NOT "fixed wing aircraft" and (b) bumblebees DO fly, they were under no illusions that the analysis was correct. Like the well-known story of King Canute at the sea shore, oft repeated with an incorrect conclusion, the actual point of the story is the opposite of its popular retelling.

Mike Elzinga · 11 April 2013

SWT said: Remind me again why this came up in a discussion of Dembski's CSI arguments?
Oops! I forgot I had two windows open and pasted it in the wrong thread.

Mike Elzinga · 11 April 2013

Mike Elzinga said:
SWT said: Remind me again why this came up in a discussion of Dembski's CSI arguments?
Oops! I forgot I had two windows open and pasted it in the wrong thread.
Ok; so now I'm confused also. Never mind.

Joe Felsenstein · 11 April 2013

Could we let the bumblebees fly (or walk) away?

Mike Elzinga · 11 April 2013

Joe Felsenstein said: Could we let the bumblebees fly (or walk) away?
When I replied to the bumble bee thing, I think I had two windows open and thought I was replying on another thread. Sorry.

Joe Felsenstein · 11 April 2013

Mike Elzinga said: When I replied to the bumble bee thing, I think I had two windows open and thought I was replying on another thread. Sorry.
Bumblebees tend to appear when you have windows open ...

Mike Elzinga · 11 April 2013

By the way – perhaps I should say this more explicitly – in an isolated system, with matter interacting with matter, entropy goes to a maximum as all energy microstates become equally probable.

If Dembski is dealing with strings of characters with given Shannon “entropy,” all permutations of that string have the same Shannon “entropy.” If Dembski is asserting that there is nothing that can change the distribution, he is implying that there is no interaction among the “states” that can do that. That could lead to his “Law of Conservation of Information.” But that is of course true for inert things that don’t interact with each other or with the “outside environment.”

If he is talking about conservation of CSI, then the Shannon expression is maximized because all “states” are equally probable. Again; with no interactions with an outside world, there is no change in CSI.

In a real, isolated thermodynamic system, all energy microstates are equally probable because we know that matter interacts with matter. What happens if the interactions among microstates went to zero? If that were the case, the system would be in whatever microstate it was in at the time it was isolated. But we wouldn’t know what that was, because the system is isolated; we can no longer probe it.

However, if we knew the system’s entropy at the time it was isolated, we know it will remain in the same microstate as long as those now-isolated microstates did not interact among themselves. So the entropy remains constant but doesn’t go to maximum.

(In fact, by definition, the entropy is zero because the system is in a single, unknown microstate. So zero entropy doesn’t mean we know which microstate either. Such systems are extremely difficult – if not impossible - to produce.)

If we take a given string of characters and calculate the Shannon “entropy,” it remains constant for all permutations of that string. If all characters are equally probable, the Shannon “entropy” is at its maximum; and it is now also called the CSI. That doesn’t change either. The characters do not interact among themselves or with an “environment.”

So here we are right back to Dembski’s Law of Conservation of Complex Specified Information. What is it based on?

It is based on THE Fundamental Misconception of All ID/creationists, namely that atoms and molecules (alphanumeric characters and marbles) are inert things that do not interact with each other or with their environments.

When Elizabeth Liddle or Joe Felsenstein routinely demonstrate that distributions of characters can change so that the CSI increases, what are they showing? CSI is the maximum of the Shannon “entropy.” If a process can change the distribution of characters in a character string and increase CSI, what does that mean?

It means that the already-maximized Shannon “entropy” is now greater. What other ways can entropy increase in the real world governed by physics? There are a number of ways; let’s look at two.

One way is that entropy scales with volume when we increase the number of molecules. If a system gets an infusion of molecules at constant temperature, there are more internal degrees of freedom, and hence more energy microstates. The total internal energy also scales with molecules (volume) so that ∂S/∂E remains constant (i.e., T is constant as we stipulated).

(Note that this is an example suggesting that increases in CSI come from the environment.)

Another way is with an insulated system – such as ice in an ice chest – where the internal degrees of freedom are restricted by molecules tightly bonded together. Break the bonds by adding salt, for example. This increases the number of internal degrees of freedom; hence the entropy. Because of the insulation, total energy stays constant. So 1/T = ∂S/∂E increases, therefore T decreases.

Genetic algorithms work because they implicitly contain the laws of physics. “Fitness” peaks are nothing more than potential wells. Falling into potential wells and staying there is a manifestation of the second law of thermodynamics; evolution requires the second law. Thus, processes put into a program that successively converge on the peak (well) are the equivalent of dissipating energy so that the program converges and stays there.

The ID/creationist’s use of CSI totally ignores physical law. There are no couplings between the characters in character strings and other phenomena in the physical world, and there are no interactions among the characters within the strings.

There is nothing in ID/creationist “theory” that corresponds to coupled variables such as 1/T = ∂S/∂E. ID/creationist CSI is conserved because ID/creationists don’t let it change. They threw out chemistry and physics long ago, back in the 1970s.

Mike Elzinga · 11 April 2013

A further point:

If what we mean by a character string also allows emergent properties, CSI increases. However, we have to find a way to enumerate those properties in some objective, quantized manner.

The problem with most examples we see from ID/creationists is that they don’t believe in emergent properties. In fact, they sneer at the notion.

Mike Elzinga · 12 April 2013

Ok, I hesitate to mention this because it gets into an entirely different area; and I don’t want to pull the thread off topic. I made this parenthetical remark:

(In fact, by definition, the entropy is zero because the system is in a single, unknown microstate. So zero entropy doesn’t mean we know which microstate either. Such systems are extremely difficult – if not impossible - to produce.)

Just for a thought experiment – but not for discussion – what kind of particle would not interact with others of its kind but would interact with particles of another kind when in contact with those other particles by being connected to a larger system? The universe of particles is pretty intertwined. These are issues that come up with Dark Matter and Black Holes. It gets into areas dealing with the entropy of the universe. (Joe: You can kick this off to the Bathroom Wall if you wish. I won't be going over to that "cesspool" to discuss it however.)

Joe Felsenstein · 12 April 2013

Mike Elzinga said: Ok, I hesitate to mention this because it gets into an entirely different area; and I don’t want to pull the thread off topic. ... Just for a thought experiment – but not for discussion – what kind of particle would not interact with others of its kind but would interact with particles of another kind when in contact with those other particles by being connected to a larger system? (Joe: You can kick this off to the Bathroom Wall if you wish. I won't be going over to that "cesspool" to discuss it however.)
It is better suited to a Granville Sewell thread, here or at The Skeptical Zone. You and the guy you are arguing with, that Elzinga guy, might take it there.

Joe Felsenstein · 12 April 2013

Mike Elzinga said: ,,, If Dembski is dealing with strings of characters with given Shannon “entropy,” all permutations of that string have the same Shannon “entropy.” If Dembski is asserting that there is nothing that can change the distribution, he is implying that there is no interaction among the “states” that can do that. That could lead to his “Law of Conservation of Information.” But that is of course true for inert things that don’t interact with each other or with the “outside environment.” If he is talking about conservation of CSI, then the Shannon expression is maximized because all “states” are equally probable. Again; with no interactions with an outside world, there is no change in CSI. ... So here we are right back to Dembski’s Law of Conservation of Complex Specified Information. What is it based on? It is based on THE Fundamental Misconception of All ID/creationists, namely that atoms and molecules (alphanumeric characters and marbles) are inert things that do not interact with each other or with their environments. When Elizabeth Liddle or Joe Felsenstein routinely demonstrate that distributions of characters can change so that the CSI increases, what are they showing? CSI is the maximum of the Shannon “entropy.” If a process can change the distribution of characters in a character string and increase CSI, what does that mean?
Whoa there. Dembski's Conservation Law is not based on any entropy calculation. One indication is that he does not spend time on entropy in his argument. Instead he has his Specification argument. In No Free Lunch, he sketches a proof of it. The key to the proof is that he uses the function f that represents evolution to define the Specification in the previous generation. That guarantees that the specification (which is functioning as a measure of how special the genotype is) will always have the same specialness (measured by the tail probability in the original generation) from generation to generation. In effect he fooled himself into thinking that something was being conserved, by always counting the same set (his composition of P and f brings us back to the same set). (The other two arguments he presented later, his 2005 argument and his Search For a Search argument, do not have a conservation law.)
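[The "counting the same set" point can be seen in a toy sketch of my own; the map f and the target set T1 are arbitrary choices. For any deterministic f, the event "X0 lands in the preimage of T1" just is the event "f(X0) lands in T1", so the two probabilities agree under any distribution whatsoever — nothing is conserved beyond counting one event twice.]

```python
import random

# A toy state space and an arbitrary deterministic "evolution" map f.
states = list(range(16))
def f(x):
    return (3 * x + 1) % 16

def preimage(f, domain, target):
    """f^(-1)(target): all states that f sends into target."""
    return {x for x in domain if f(x) in target}

T1 = {0, 5}                   # the "specification" after one step
T0 = preimage(f, states, T1)  # the rear-projected "specification"

# Sample X0 from some distribution (uniform here) and compare:
rng = random.Random(0)
draws = [rng.choice(states) for _ in range(5000)]
p_T0 = sum(x in T0 for x in draws) / len(draws)
p_T1 = sum(f(x) in T1 for x in draws) / len(draws)
print(p_T0 == p_T1)  # True: the two events coincide pointwise
```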

Joe Felsenstein · 12 April 2013

(OK, now I am going off topic too, but just briefly)
Mike Elzinga said: ... What other ways can entropy increase in the real world governed by physics? There are a number of ways; let’s look at two. One way is that entropy scales with volume when we increase the number of molecules. ... (Note that this is an example suggesting that increases in CSI come from the environment.) Another way is with an insulated system – such as ice in an ice chest – where the internal degrees of freedom are restricted by molecules tightly bonded together.
My take on why living systems can change to states that are less probable is this:

1. The earth, which contains the biosphere, is (nearly) closed but it is not isolated. Energy arrives from the sun (let's for the moment ignore other sources of energy).

2. The energy hangs around a bit, being incorporated into the offspring of current life forms. Ultimately it is radiated off, at a longer wavelength than when it arrived.

3. The reproduction of organisms that this enables is necessary to make the equations of population genetics work.

Years ago I was talking to a guy who had trained in thermodynamics, and I presented to him the simplest one-locus haploid model of genetic drift (the Wright-Fisher model). We have an urn with k white balls and N−k black balls. We draw N times, with replacement, to make up a new urn, each draw representing the reproduction of one haploid individual. The result is changes of gene frequency, ultimately taking the urn to be all white or all black (in fact, the probability of ending up all white is the same as the initial proportion of white balls).

So I asked him, how does this fit with the Second Law of Thermodynamics? He thought a bit and said, surprised, that it violated it. But it doesn't, because we failed to notice that the whole thing involved reproduction. Each draw was associated with a flow of energy from the sun which made the reproduction possible (and with a possible increase in the number of molecules incorporated into the population of living organisms). Counting the whole sun-earth system, there is no violation of the Second Law. To drive a system like this into a more improbable state you need lots of reproduction and lots of energy flow from the sun. The living system is nonequilibrium.

(Quibble: you can get genetic drift with no reproduction if you just have random death, but that involves the population losing molecules and losing energy to the outside.)
Genetic algorithms work because they contain implicitly the laws of physics. “Fitness” peaks are nothing more than potential wells.
I disagree. You and I have wrestled with this before. I remember (thanks to Google) that in November 2010 we discussed it and I thought you then backed away from that assertion. Look here to see our exchange. For example, if one genotype is able to eat more kinds of prey than another, and ends up increasing in frequency in the population, I don't see how that is falling into a potential well. Each of them is a big complicated system with lots of molecules bouncing into (and out of) potential wells.
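[The Wright-Fisher urn described above can be simulated in a few lines. This is my own illustrative sketch, not from the comment; the population size, starting count, and replicate count are arbitrary. It checks the stated property that the fixation probability of white is close to its initial frequency k/N.]

```python
import random

def wright_fisher(k, N, rng):
    """One realization of the Wright-Fisher urn: start with k white
    balls out of N, resample N with replacement each generation,
    and report whether white reaches fixation (all N balls white)."""
    while 0 < k < N:
        p = k / N  # current frequency of white
        k = sum(1 for _ in range(N) if rng.random() < p)
    return k == N

rng = random.Random(42)
N, k0, reps = 20, 5, 2000
fixed = sum(wright_fisher(k0, N, rng) for _ in range(reps))
# The fraction of runs fixing white should be close to the initial
# frequency k0/N = 0.25:
print(fixed / reps)
```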

Carl Drews · 12 April 2013

diogeneslamp0 said: Consider the following passage from a creationist textbook, on how glaciers show intelligent design because they have a clear function.
“God designed the glaciers to store water in the coldest months when water is not usually scarce and to release it in warmer months when streams and reservoirs are low.” [Science 4 for Christian Students, Bob Jones Univ. Press (1990), p.196]
These creationists clearly believe that glaciers have a function and purpose, thus they are intelligently designed. Dembski, pre-2005, says function IS specification. Why are glaciers not specified?
Well, whaddya know? Theistic Glaciology! Nobody claims that God manually shovels snow into the accumulation zone of the high cirques way up in the mountains, compacts the snow by hand into ice halfway down, and personally melts it out at the bottom. Of course not - the "design" of a glacier is implemented "naturally" according to the laws of chemistry and physics. By these laws, ice deforms under pressure and flows as a fluid slowly down the valley. Glaciers carve out U-shaped valleys, insulate themselves with a layer of snow, and gradually melt during the warm season. God does not have to push every glacier downhill personally. Studying the behaviour of glaciers is an honourable and worthy pursuit. If Bob Jones University accepts Theistic Glaciology, then why not biological evolution? Methinks they have not thought through their position very well.

TomS · 12 April 2013

Two things that I wonder about:

Why the talk about conservation of CSI when clearly they think that CSI can spontaneously decrease? (And if there were a conservation law, wouldn't there also be a corresponding symmetry?)

Don't the same arguments against evolution apply with at least as much force against reproduction?

diogeneslamp0 · 12 April 2013

TomS said: Two things that I wonder about: Why the talk about conservation of CSI when clearly they think that CSI can spontaneously decrease?
Words mean whatever Dembski wants them to mean. In arguing with Dembski, we have to deal with his definitions in order to point out where he contradicts himself. By Dembski's definition, "Conservation Law" means stuff goes down but can't go up, EXCEPT if there's intelligence. Even the negative entropy of an isolated system would not have a "conservation law" by that definition, because 2LOT makes no exception for intelligence.
(And if there were a conservation law, wouldn't there also be a corresponding symmetry?) Don't the same arguments against evolution apply with at least as much force against reproduction?
Yeah yeah.

TomS · 12 April 2013

diogeneslamp0 said: By Dembski's definition, "Conservation Law" means stuff goes down but can't go up, EXCEPT if there's intelligence. Even the negative entropy of an isolated system would not have a "conservation law" by that definition, because 2LOT makes no exception for intelligence.
Which brings up another one of my pet peeves. Of course the 2LOT makes no exception for intelligence, because then some clever engineer would be able to intelligently design (and intelligently build) a perpetual motion machine. IMHO, the whole ID project is flawed from the get-go, and to discuss it at length as if there were some even marginally rational basis to it is to grant it an appearance of respectability which it doesn't deserve. The discussion is interesting, to be sure. ID/creationism touches on so many subjects that are interesting on their own, even if ID is not wrong-in-an-interesting way ("not even wrong").

Tenncrain · 12 April 2013

Carl Drews said: Well, whaddya know? Theistic Glaciology! [snip] If Bob Jones University accepts Theistic Glaciology, then why not biological evolution? Methinks they have not thought through their position very well.
Well, on the other hand, seems I recall Bob Jones University had mixed support for Intelligent Falling...

diogeneslamp0 · 12 April 2013

Joe Felsenstein said:
diogeneslamp0 said: Is it really 1. "the inverse image of the specification, T0, has the same probability P as does T1" or did you perhaps mean 2. "the specification, T0, of the inverse image has the same probability P as does T1." If it's 1, that seems impossible-- how could the initial state of a system [inverse image] have the same probability as the final state? e.g. think of gas escaping a canister-- the final state has many more probabilistic options-- or conversely, imagine marbles rolling down a funnel: many probabilistic options become 1. But if it's 2, how do we assign probabilities to specifications? A specification is a pattern-- in his pre-2005 work I don't see how he could assign probabilities to patterns. I need to chaw on this and think it over and re-read that passage in NFL, and I'll come back here after chawing.
In Dembski's argument, a specification is represented by a subset of the genotypes (states of his process). That's because the statement that is a specification designates a subset as Specified. So T1 is a subset. The inverse image of a subset (not of a single state) is the union of all the inverse images of its states. That is, it's the set of states that map into the subset. So it too is a subset, T0. The subsets can have probabilities -- they're the sum of the probabilities of their members. So it really is #1.
As I said, I have chawed on this argument of Dembski, and I'm confused-- this idea of a transformation f that runs evolution backwards in time, say, 20 million years. Let me give a concrete example. Let us consider the specification at the present, T1 = "fusiform animal." Now many aquatic animals are fusiform. 1. Mako shark 2. Swordfish 3. Dolphin 4. Harbor seal 5. Sea snake 6. Ichthyosaur 7. Plesiosaur 8. Mosasaur 9. Many others that can be imagined, haven't been seen; all possible animals that are "streamlined". etc. Now, Dembski's transformation f runs evolution backward in time, so rewind evolution for the above let's say 50 million years. I will add a prime ' to the numbers above. But Dembski would say that all 1' to 9' satisfy his specification T0, which means "will evolve into a fusiform shape tens of millions of years into the future." 1'. Elasmobranch [e.g. Cladoselache] 2'. Actinopterygiian [e.g Andreolepis] 3'. Cetartiodactyl [e.g. Pakicetus] 4'. Caniformia [e.g. Puijila Darwini] 5'. Squamate [e.g. Najash] 6'. Utatsusaurus 7'. Claudiosaurus 8'. Dallasaurus 9'. ??? (Note in passing: the fact that I can make that list shows how many transitional fossils we've got now.) With the exception of Cladoselache and Andreolepis, categories 3' to 8' are known for sure to be MUCH LESS fusiform or streamlined if you rewind the tape of evolution some 50 million years. There are three major problems with this. 1. As pointed out by Elsberry and Shallit, the specification T0 depends on the transformation f, but Dembski himself required that the specification be defined independently of the transformation. Thus the specification T0 is what Dembski would call a "fabrication", a specification that is NOT independently given, but artificially ginned up after looking at the data by reading off the data. Dembski broke his own rules. 2. As pointed out by Joe Felsenstein, Dembski changed his specification from T0 before to T1 after the transformation f. 
So it's not the same specification, thus there is no evidence of "specified complexity" nor of "specified anything" being conserved. 3. I will add to the above: I don't think at all that the probability of specification T0 equals the probability of specification T1. In fact, in our example of convergent evolution above, the pre-evolved structures 1'-9' cover enormously more morphological space than do the post-evolved structures 1-9. Thus, for convergent evolution: the probability of specification T0 >> probability of specification T1. One can easily imagine cases of DIVERGENT evolution (e.g. lobe-finned fishes into amphibians, reptiles, birds, land mammals, whales etc.) where the reverse would be true. For divergent evolution: the probability of specification T0 << probability of specification T1. Thus, regarding the discussion between Joe and me about whether or not Dembski's transformation f is a 1-to-1 transformation, I think Dembski's bad math implies that f MUST BE a 1-to-1 transformation, but his math does not model reality. In reality, evolution can form one-to-many transformations (divergent evolution) or many-to-one transformations (convergent evolution). For convergent evolution, the probability of specification T0 >> the probability of specification T1, which means that the information (= −log2[probability]) has increased, so evolution can increase CSI according to Dembski's math.
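[The convergent-evolution inequality can be illustrated with a toy many-to-one map. This is my own sketch; the state space and map are arbitrary, and "specification probability" is taken under a uniform reference measure, as in the comment's usage.]

```python
import math

# Toy genotype space of 100 states; f is many-to-one ("convergent"):
# ten distinct ancestral states all map to each descendant form.
omega = range(100)
def f(x):
    return x // 10

T1 = {0}                                # descendant specification
T0 = {x for x in omega if f(x) in T1}   # its preimage: 10 states

# Specification probabilities under the uniform reference measure:
p_T1 = len(T1) / 100   # 0.01
p_T0 = len(T0) / 100   # 0.10

# Dembski-style "information" = -log2(probability):
print(-math.log2(p_T0))  # roughly 3.3 bits before the step
print(-math.log2(p_T1))  # roughly 6.6 bits after the step
# The forward (convergent) step raised the specified information.
```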

Joe Felsenstein · 12 April 2013

diogeneslamp0 said: ... As I said, I have chawed on this argument of Dembski, and I'm confused-- this idea of a transformation f that runs evolution backwards in time, say, 20 million years.
Well, actually f is the transformation that runs evolution forward in time. Dembski does not use f^(-1) at all (I know, I said he did in my 2007 article but was in error on that, as I have explained here). When he argued for the Conservation Law, he started with the genotype after selection, which satisfied a specification T1 which had probability P(T1). Then he finds the inverse image of the set T1, calls that T0, and wants to see if that satisfies an equally strong specification. It does, but it is not the same specification, and it must be stated in terms of f.
Let me give a concrete example. ... [Interesting example of streamlinedness omitted] With the exception of Cladoselache and Andreolepis, categories 3' to 8' are known for sure to be MUCH LESS fusiform or streamlined if you rewind the tape of evolution some 50 million years. There are three major problems with this. 1. As pointed out by Elsberry and Shallit, the specification T0 depends on the transformation f, but Dembski himself required that the specification be defined independently of the transformation. Thus the specification T0 is what Dembski would call a "fabrication", a specification that is NOT independently given, but artificially ginned up after looking at the data by reading off the data. Dembski broke his own rules.
Yes, as Elsberry and Shallit pointed out.
2. As pointed out by Joe Felsenstein, Dembski changed his specification from T0 before to T1 after the transformation f. So it's not the same specification, thus there is no evidence of "specified complexity" nor of "specified anything" being conserved.
Yes, as I pointed out.
3. I will add to the above: I don't think at all that the probability of specification T0 equals the probability of specification T1. In fact, in our example of convergent evolution above, the pre-evolved structures 1'-9' cover enormously more morphological space than do the post-evolved structures 1-9. Thus, for convergent evolution: the probability of specification T0 >> probability of specification T1. One can easily imagine cases of DIVERGENT evolution (e.g. lobe-finned fishes into amphibians, reptiles, birds, land mammals, whales etc.) where the reverse would be true. For divergent evolution: the probability of specification T0 << probability of specification T1. Thus, regarding the discussion between Joe and me about whether or not Dembski's transformation f is a 1-to-1 transformation, I think Dembski's bad math implies that f MUST BE a 1-to-1 transformation, but his math does not model reality. In reality, evolution can form one-to-many transformations (divergent evolution) or many-to-one transformations (convergent evolution). For convergent evolution, the probability of specification T0 >> the probability of specification T1, which means that the information (= −log2[probability]) has increased, so evolution can increase CSI according to Dembski's math.
Dembski's math does work if you don't worry about the Elsberry-Shallit issue (somehow regarding it as not important to satisfy that condition). But of course we really want to keep the specification the same before and after, And it is when you try to keep the specification the same before and after that it really falls apart. The streamlining issue is an empirical example, but there are many more one can make. Just about any example you construct, in fact.

Mike Elzinga · 12 April 2013

Joe Felsenstein said: Whoa there. Dembski's Conservation Law is not based on any entropy calculation.
I am trying to deal with only the mathematical expressions I see being used by the people over at UD, who probably don’t understand what Dembski means anyway. In fact, every time I read anything of Dembski’s, I don’t get the impression that Dembski knows what he means either. And I am not trying to be snarky; I think he is confused.

I don’t work very hard at trying to understand the writings of ID/creationists because I almost always catch a blizzard of misconceptions and misrepresentations in the first few sentences and paragraphs of their writings. Most of the time I don’t have to go beyond the abstract to know something is wrong. “Complex Specified Information” is not a term I or anyone I know would use in specifying anything.

I am simply using the fact that the Shannon “entropy” reduces to −log(1/Ω) = log Ω = “CSI.” This is what I am seeing among the ID/creationists over at UD. They claim to know all about CSI, and that is how they use it. When it comes down to exactly what probabilities they are calculating, things get extremely murky and arbitrary; not only murky and arbitrary, but not even quantized. Just exactly what are they specifying? Can the “specification” be quantized and enumerated in an objective manner? Most of the time it just comes down to assertions.

In the case of character strings – they like character strings – it is possible to calculate a Shannon “entropy,” and all that gives us is some measure of the probability distribution of characters. If all characters are represented with equal probability, then the Shannon “entropy” is maximized for that string and, according to the UD people, it becomes the way they calculate CSI. I don’t think any of them know that CSI is a special case of Shannon “entropy.” I don’t know if Dembski approves, but as far as I know, he has never stepped in to correct them. So my discussion above works backward from their usage: I didn’t invent the terms, I don’t like the terms, and I wouldn’t use them.
The use of character strings to represent “states” is not out of line. The use of the Shannon “entropy” of those strings is not out of line. There can be a correspondence between character strings and the enumeration of microstates in a thermodynamic system. We do that kind of correspondence whenever we make such a representation of states in a diagram. Now, when one starts making assertions about the “CSI” in those character strings, they can now be checked about their understanding of how the microstates of real systems change. If they say “Conservation of CSI,” then the corresponding conclusion in real thermodynamic systems is “isolated system.” If they say “CSI cannot be generated by a genetic algorithm,” then the corresponding conclusion in real thermodynamic systems is “no interactions among matter and the environment.”
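The reduction Mike leans on above – that for Ω equally probable outcomes the Shannon "entropy" collapses to log Ω (2 bits per character for a 4-letter alphabet) – is easy to check numerically. A toy sketch (the function name is mine, purely illustrative):

```python
import math
from collections import Counter

def shannon_entropy_bits(s):
    """Shannon entropy (bits per character) of the empirical
    character distribution of string s."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A string using all 4 DNA letters equally often reaches the
# maximum, log2(4) = 2 bits per character...
uniform = "ACGT" * 25
assert abs(shannon_entropy_bits(uniform) - 2.0) < 1e-12

# ...while a biased string falls strictly below that maximum.
biased = "AAAAAAAAAAAAAAAAACGT"
assert shannon_entropy_bits(biased) < 2.0
```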

So I asked him, how does this fit with the Second Law of Thermodynamics? He thought a bit and said, surprised, that it violated it. But it doesn’t, because we failed to notice that the whole thing involved reproduction.

There; you hit on the key point. What is involved in reproduction? It is chemistry and physics to the core. Atoms and molecules are brought in from the environment, and they end up in new organisms. Entropy increases.

I disagree. You and I have wrestled with this before.

Yes; and I don’t think we disagree. However, at that point, we didn’t get into the concept of what the entropy and energy content of an organism is; and this is related to another misconception by the ID/creationists that “more advanced organisms have lower entropy.” Here is what you said; and I don’t disagree.

Well, one has to be cautious as the organism is a much higher-level entity than the individual molecules, as you know. Consider two individuals with different genotypes. Each has many molecules interacting and bouncing around (and out of) potential wells. But there may be no easy correspondence between the molecular energy calculations and the fitness. One individual might be of higher fitness because it was smaller than the other, and that might be just because there were more small food items around than larger ones. I don’t think that will be easy to predict from the molecular energy potentials.

Given other things like temperature and composition being equal, entropy scales with number of molecules and/or volume. An amoeba has less entropy than a mouse, but that doesn’t make it more advanced. It also has less energy content because total energy scales with volume (number of molecules) at the same temperature.

Molecules and other complex structures are put in place atom-by-atom, molecule-by-molecule. Changes in the arrangements of the constituents of these structures involve energy exchanges with the environment. They exist in a heat bath that maintains the temperature (average kinetic energy per degree of freedom) in a range where adjustments can happen. If you lower the temperature too much, everything “freezes” (sinks deeper into mutual potential wells), and nothing moves. If you raise the temperature too much, everything starts coming apart. There is no particular correspondence between entropy and total energy content and fitness. Fitness is much more closely related to the variability within a structure and whether or not some portion of that variability happens to overlap the environmental conditions in which the consequences of that variability (phenotype if you like) are under less environmental stress (lower in a well).

There has also been some discussion about whether or not living organisms are in a higher or lower state of entropy than nonliving matter. As I already said, the entropy of an organism has more to do with size, given temperature and composition being equal. Here it is better to discuss this in terms of molar entropy; i.e., entropy per mole, or entropy per same number of molecules. And here the conclusion is pretty clear. Soft matter has higher entropy per mole precisely because it is more loosely bound than solid materials; it has more internal degrees of freedom per mole. Said another way, it has a higher heat capacity per mole.
Liquids are too loosely bound, which tends to diminish internal degrees of freedom because coupled motions are reduced. Gases have even fewer degrees of freedom, those being limited to translations and rotations. Solids have fewer internal degrees of freedom because atoms or molecules are locked into position and can only oscillate in position or shove around electrons depending on how tightly bound the electrons are. So kilogram for kilogram, soft matter tends to have higher heat capacities, thus higher entropy, than equal quantities of gases, liquids, or solids. But that doesn’t mean that the entropy goes down once the soft matter comes all apart. It could go either way depending on details and environment. It is complicated in the case of living organisms because when they die, other organisms use them for food. One can argue specific cases and come to differing conclusions depending on the case. However, the broad, overall conclusion is that the total entropy of the universe increases. Atoms and molecules participate in bonding, and that requires the release of energy (falling into potential wells and staying there); entropy increases. Taking matter apart requires the input of energy, some of which gets wasted; entropy increases.

I have written many modeling programs and data analysis programs over the course of my career, including genetic algorithms. In many of those cases I can borrow some of my own code and just flip the signs and reinterpret the meaning of the program. At the core, they use basic physics ideas. Synthetic aperture algorithms, for example, mimic least-time propagation, a fundamental concept in physics rooted in fundamental laws. What pops out is an emergent image that isn’t targeted and not even known to be there.

Joe Felsenstein · 12 April 2013

Let me just comment on the first part of Mike's comment.
Mike Elzinga said:
Joe Felsenstein said: Whoa there. Dembski's Conservation Law is not based on any entropy calculation.
I am trying to deal with only the mathematical expressions I see being used by the people over at UD – who probably don’t understand what Dembski means anyway. ... “Complex Specified Information” is not a term I or anyone I know would use in specifying anything. I am simply using the fact that the Shannon “entropy” reduces to −log(1/Ω) = log Ω = “CSI”. This is what I am seeing among the ID/creationists over at UD. They claim to know all about CSI, and that is how they use it.
Not really. Only if the specification is being the best single one of the strings is the SI that value. I claim to know what (at least in some simple cases) is meant by CSI. By the way, the general notion of complex specified information is due to Leslie Orgel, who did know about things like physical chemistry. He trained with a Ph.D. from Oxford in chemistry, started out as a theoretical organic chemist, and even wrote a 1961 book An Introduction to Transition-Metal Chemistry. The Ligand Field Theory. So in him you're up against a heavy hitter! In simple model systems SI is just the -log2 of the tail probability of being at, or greater than, the value of some phenotype. I think fitness is the most relevant one to think about. You call it CSI when it is > 500. That value is chosen so that a pure mutational process can't produce anything that extreme in the amount of time available since the universe started, even if you devote all the resources in the universe to the population. I'm going to outrage everyone by saying that this is a perfectly sensible concept, even if not terribly practical to evaluate in real life. And if Dembski has proven that you can't get this by natural selection, that is hugely important. But he has not proven that.
When it comes down to exactly what probabilities they are calculating, things begin to get extremely murky and arbitrary; not only murky and arbitrary, but not even quantized. Just exactly what are they specifying? Can the “specification” be quantized and enumerated in an objective manner? Most of the time it just comes down to assertions. In the case of character strings – they like character strings – it is possible to calculate a Shannon “entropy;” and all that gives us is some measure of the probability distribution of characters. If all characters are represented with equal probability, then the Shannon “entropy” is maximized for that string and, according to the UD people, it becomes the way they calculate CSI. I don’t think any of them know that CSI is a special case of Shannon “entropy.”
A model of equiprobable strings is used by them, but CSI is not the same as Shannon entropy, because of the use of the scale.
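Joe's definition a couple of paragraphs up – SI as −log2 of a tail probability on a fitness scale, with CSI declared when the value exceeds 500 bits – fits in a few lines of code. This is a toy sketch of the definition as stated in the comment, not anyone's published procedure; the function names are mine:

```python
import math

CSI_THRESHOLD_BITS = 500  # roughly 10^150, Dembski's universal bound

def specified_information_bits(tail_probability):
    """SI = -log2 of the probability, under the null (pure-mutation)
    distribution, of a phenotype at least this extreme."""
    return -math.log2(tail_probability)

def has_csi(tail_probability):
    return specified_information_bits(tail_probability) > CSI_THRESHOLD_BITS

# A tail probability of 50% is exactly 1 bit of SI.
assert abs(specified_information_bits(0.5) - 1.0) < 1e-12
# 10^-160 lies beyond the 500-bit bound...
assert has_csi(1e-160)
# ...while 10^-40, though tiny, does not.
assert not has_csi(1e-40)
```

The threshold works out because 2^500 ≈ 3 × 10^150, so crossing 500 bits is essentially crossing the 10^150 bound.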

Mike Elzinga · 12 April 2013

Joe Felsenstein said: By the way, the general notion of complex specified information is due to Leslie Orgel, who did know about things like physical chemistry.
I am not arguing with Orgel. If that is where the term CSI originated and that is what he was referring to, that is not the way I am seeing it used by the people at UD. And they claim to speak for Dembski. There are lots of ways to “measure success” in simulations and programs of the “Monte Carlo” genre. All of them make use of concepts that have roots in real physical laws or rules in an imaginary universe. The implication is always that the underlying rules apply. What emerge from the algorithms are not the rules, but the consequences of the rules. They are not foreseen; they are not targeted. The word “target” may be a bit unfortunate; but whatever one calls it, it is a representative stand-in for having met the condition that stops the program. There have been a couple of authors who have tried to reinterpret statistical mechanics in the language of “information.” I’ve read large portions of at least one of those textbooks, and parts of another. What I see happening conceptually with this approach is that they contribute to confusion, sloppy use of terms, and a loss of insight about underlying physical interactions and rules. I have worked in multidisciplinary fields most of my life, and I am well aware of how these confusions arise and how the same terms are used in different fields for entirely different concepts. Somewhere between those attempts at different perspectives – which are legitimate – and the uses to which these attempts are put in the ID community, physics and chemistry are thrown away. It is not surprising, because that has been ID/creationism’s history. I apologize if I have been derailing or adding confusion to what you are trying to accomplish. This is interesting, but I am also neglecting items on my schedule. I’ll shut up for now. (Retirement is far busier than I expected; I had the illusion that I would always get to do what I wanted to do.)

SWT · 12 April 2013

Mike Elzinga said: There has also been some discussion about whether or not living organisms are in a higher or lower state of entropy than nonliving matter. As I already said, the entropy of an organism has more to do with size, given temperature and composition being equal.
I think this one is actually pretty clear. If you were to take a living organism and isolate it from its surroundings, it would transition spontaneously to a non-living object with the same total energy and mass. Since you've isolated it, you know that its total entropy must have increased as it made the transition from a living organism to a dead one. From that, I would conclude that organism had a lower entropy when it was alive.

Mike Elzinga · 12 April 2013

SWT said: I think this one is actually pretty clear. If you were to take a living organism and isolate it from its surroundings, it would transition spontaneously to a non-living object with the same total energy and mass. Since you've isolated it, you know that its total entropy must have increased as it made the transition from a living organism to a dead one. From that, I would conclude that organism had a lower entropy when it was alive.
If it froze to death, its entropy would be less. Consider a frozen wooly mammoth or ”Iceman.” :-) (I’m not trying to quibble; I’m procrastinating. Sore muscles and joints from trying to keep up with big spring cleaning and repair jobs that are getting ahead of me. No longer able to leap tall buildings in a single bound. I think I know where my entropy is going.)

Henry J · 12 April 2013

SWT: I think this one is actually pretty clear. If you were to take a living organism and isolate it from its surroundings, it would transition spontaneously to a non-living object with the same total energy and mass. Since you’ve isolated it, you know that its total entropy must have increased as it made the transition from a living organism to a dead one. From that, I would conclude that organism had a lower entropy when it was alive.

Makes sense. While alive, an organism presumably stores energy (whether fuel, energized molecules, or whatever) in holding areas that make it convenient to use. With nothing left to replenish those storage areas, the remaining energy would simply spread out. Henry

Henry J · 12 April 2013

If it froze to death, its entropy would be less. Consider a frozen wooly mammoth or ”Iceman.” :-)

Ah, but if it's isolated from the environment, wouldn't it retain the same average internal temperature? ;)

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 13 April 2013

Joe Felsenstein said: Let me just comment on the first part of Mike's comment.
But didn't Dembski say that having a function was the specification? And about this: "...Dembski makes it clear when Design is detected, the “chance hypothesis” should be one that includes all natural biological processes, including not only mutation but also natural selection." Did he try to calculate some probability including natural selection?

Joe Felsenstein · 13 April 2013

[Edited to clarify who said what]
https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said:
Joe Felsenstein said: Let me just comment on the first part of Mike's comment.
But didn't Dembski say that having a function was the specification? And about this: "...Dembski makes it clear when Design is detected, the “chance hypothesis” should be one that includes all natural biological processes, including not only mutation but also natural selection." Did he try to calculate some probability including natural selection?
Dembski had various possible things that could be the specification. The specification has to designate a set of genotypes, and complex specified information is then assessed by calculating -log2 of the fraction of individuals (in some null distribution) that are in the set. I was using a fitness scale. At one point Dembski (in NFL) cited with approval using viability, which is a component of fitness. I think that if the objective is to see whether natural selection can explain adaptation, fitness is the most relevant specification. The only place I know of where Dembski tries to assess the probability that something could be brought about by natural selection is his argument about the bacterial flagellum. He gives no general methodology for other cases. Moreover, in the original 2002 argument in No Free Lunch he does not include in the assessment of CSI any probability calculation of whether the adaptation could be brought about by natural selection. CSI is to be assessed by the probability of seeing that degree of adaptation in the null distribution. I think he felt that his Conservation Law was then enough to show that if you started without CSI you could not get CSI, even with natural selection. Unfortunately that argument doesn't work. In the 2005 redefinition of CSI (which is what your second question is about) he does include a probability that the specification could be achieved by natural selection (and mutation), but does tell you how to computer that. In effect you have to to do all the hard work with that definition of CSI, without a conservation law to help you.
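Assessing SI against a null distribution, as described above, amounts to a tail-fraction calculation. A minimal empirical sketch (the Gaussian fitness model and function names here are my own illustration, not anything from Dembski or Joe):

```python
import math
import random

def si_bits_empirical(fitness_value, null_sample):
    """-log2 of the fraction of a null-distribution sample whose
    fitness is at least fitness_value (the 'specified' set)."""
    tail = sum(1 for f in null_sample if f >= fitness_value)
    return -math.log2(tail / len(null_sample))

random.seed(0)
# Toy null model: fitnesses of genomes produced by pure random mutation.
null = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# A phenotype at the null mean carries about 1 bit of SI
# (roughly half the sample is at least that fit)...
assert si_bits_empirical(0.0, null) < 1.5
# ...and SI grows as the phenotype moves out into the tail.
assert si_bits_empirical(3.0, null) > si_bits_empirical(0.0, null)
```

Real populations sit so far out on the fitness scale that no feasible sample of the null distribution contains them, which is why the 500-bit cases have to be argued analytically rather than by sampling.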

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 13 April 2013

Thanks.

(The quote should be "Only if the specification is being the best single one of the strings is the SI that value" and not "Let me just comment on the first part of Mike’s comment.". Sorry.)

air · 14 April 2013

In the 2005 redefinition of CSI (which is what your second question is about) he does include a probability that the specification could be achieved by natural selection (and mutation), but does tell you how to computer that.
Joe - did you mean "but does not tell you how to compute that?

Joe Felsenstein · 14 April 2013

air said: ... Joe - did you mean "but does not tell you how to compute that?
Yess, yuo figuerd outt taht I mak meny tipographicla erorrs.

TomS · 15 April 2013

Joe Felsenstein said: In the 2005 redefinition of CSI (which is what your second question is about) he does include a probability that the specification could be achieved by natural selection (and mutation), but does [not tell you how to compute that]. In effect you have to to do all the hard work with that definition of CSI, without a conservation law to help you.
(Edited per discussion above.) I raise the possibility that it is not a fatal error to a theory if it proves impossible to compute a variable central to the theory. Perhaps it would suffice if one could compare two values of the variable without knowing their absolute values, or even just know that the variable has an unambiguous value. I suspect that CSI does not come up to even such a lesser standard.

Joe Felsenstein · 15 April 2013

TomS said:
Joe Felsenstein said: In the 2005 redefinition of CSI (which is what your second question is about) he does include a probability that the specification could be achieved by natural selection (and mutation), but does [not tell you how to compute that]. In effect you have to to do all the hard work with that definition of CSI, without a conservation law to help you.
(Edited per discussion above.) I raise the possibility that it is not a fatal error to a theory if it proves impossible to compute a variable central to the theory. Perhaps it would suffice if one could compare two values of the variable without knowing their absolute values, or even just know that the variable has an unambiguous value. I suspect that CSI does not come up to even such a lesser standard.
The new definition of CSI requires you to provide a probability that the adaptation (or one better than it) could be achieved by mutation together with natural selection. Then it compares this with the Universal Probability Bound and, in effect, only declares that CSI is present if the probability is less than the UPB. So this definition makes you do all the hard work. The previous definition did not include any probability that NS+RM could explain the adaptation -- the Conservation Law was supposed to prevent that. (Of course, it turns out that it does not work). Yes, if you can bound this probability below (and that way show that it is larger than the UPB) or if you could bound it above (and that way show that it is less than the UPB) then you could draw the conclusion that CSI is not present or is present. It is nearly impossible to obtain such bounds. The nice thing about the previous definition of CSI is that, in effect, you need only show that random mutation (without any natural selection) would have a trivially small probability of producing the adaptation. That is very much easier.
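The structural difference Joe describes between the two definitions can be made explicit in a few lines. This is a schematic sketch of the logic, not anyone's published procedure; the function names and example numbers are mine:

```python
UPB = 1e-150  # Dembski's universal probability bound

def csi_2002(p_under_pure_mutation):
    """Original (No Free Lunch, 2002) test: only the pure-mutation
    null probability enters; the (failed) Conservation Law was
    supposed to extend the verdict to selection for free."""
    return p_under_pure_mutation < UPB

def csi_2005(p_under_all_natural_processes):
    """2005 redefinition: the probability must already account for
    mutation AND natural selection -- all the hard work up front,
    with no recipe given for computing it."""
    return p_under_all_natural_processes < UPB

# A genome trivially improbable under pure mutation triggers the 2002 test...
assert csi_2002(1e-300)
# ...but if selection makes the same outcome reasonably likely,
# the 2005 test is not triggered.
assert not csi_2005(0.99)
```

The comparisons are identical; everything hinges on which probability you are required to feed in, and only the pure-mutation one is tractable.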

stevaroni · 18 April 2013

I had an interesting moment last night while playing poker with four friends and only now realize how close we came to destroying the universe.

We were in a goofy mood, and there was one particular item of swag we all wanted, so we decided to play an "elimination round" to see who got it.

We decided we'd each draw a hand, the player with the lowest hand would be eliminated, and so forth till we had a winner (this is what happens when engineers gamble - one high card draw would be too simple).

Now, realistically, barring intervening disaster, someone was going to win, and at the time I naively thought the odds were 1 in 5.

But... if you do the math, you realize that the odds of us getting the exact hands we each drew were actually about 300 million to one!

The odds of the winner winning with the exact series of hands he drew were about 3×10^42 to one!

And yet, against seemingly insurmountably impossible odds approximating the grains of sand on all the worlds beaches, Chris won with those very hands!

Incredible!

Only now that I think of the odds do I realize the opportunity we missed.

Given those infinitesimal odds, if we had only played twice, and had Chris won both rounds, we would have achieved CSI.

Possibly a wormhole might have opened, possibly we could have been swept backwards in time, possibly the Second Coming, who can know? But regardless, we would have done something exceeding odds of 1 in 10^150, thus achieving a result that could only come from intelligent intervention, and Wild Bill Dembski would have been vindicated.

diogeneslamp0 · 18 April 2013

If you want an event with a probability of 1 in 10^150, any random combination of ~88 playing cards will do (assuming you draw them with replacement, or from a very large population of playing cards.)

(1/52)^88 =~ (1/10^150)
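The arithmetic checks out: n with-replacement draws from 52 cards have probability 52^-n, which drops below 10^-150 once n exceeds 150/log10(52) ≈ 87.4:

```python
import math

# Smallest n with 52**-n < 10**-150, i.e. n > 150 / log10(52).
n = math.ceil(150 / math.log10(52))
assert n == 88

# 88 draws cross the bound; 87 do not quite.
assert 52.0 ** -88 < 1e-150
assert 52.0 ** -87 > 1e-150
```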

Aaron Marshall · 18 April 2013

I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues. I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case.

In particular I thought his argument as to the difference between Intelligent Design and Scientific Creationism shows that they are two very different projects. Scientific creationism has prior religious commitments (namely that there exists a supernatural agent who creates and orders the world & that the biblical account of creation recorded in Genesis is scientifically accurate) whereas ID does not. ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns known to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed. The question of who designed it or what designed it is not part of the ID project. The fact of the matter is that it could be some kind of natural teleology as Thomas Nagel has suggested.

So can you critique what you think is wrong about these statements and the approach of ID in trying to detect design in nature? Can you also explain to me why there is such negative, seemingly personal, animosity toward ID? If this is just objective science then why don't those in the lab do the work to show that ID is not true and then just dismiss it? I'm sure that those reading this will have much to say about what I have written but I hope to keep this dialogue cordial as I am very much interested in hearing why you think ID is so "silly." Thank you.

phhht · 18 April 2013

Aaron Marshall said: ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns knows to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed.
The trouble is that nobody, most emphatically including Dembski, can do that. Nobody can build a design detector. Nobody can say how to distinguish the designed from the non-designed. And ID is a direct but intentionally obscured variant of creationism. You need to do a LOT more reading about the issues. Then come back and ask your questions, if you still have them.

stevaroni · 18 April 2013

Aaron Marshall said: I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues.
Yawn. Hi new concern troll who has never been here before but somehow uses the exact same language as our last new concern troll who has mysteriously vanished after three days. Welcome to the zoo.

I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case.

Yeah. Sure. Why not. Let's get the talking points out of the way.

In particular I thought his argument as to the difference between Intelligent Design and Scientific Creationism shows that they are two very different projects. Scientific creationism (blah, blah, snip)

Yeah. We get it. Intelligent design proponents absolutely insist that it's not creationism, despite mountains of evidence that it's the same guys, using the same financing to push the same ideas. Whatever. Here's some "respectful dialogue". It doesn't matter if ID is creationism. It doesn't matter if ID is not creationism. Both ID and creationism would be perfectly legitimate research endeavors, worthy of respect from the scientific community, if only they actually produced any evidence. That's all it would take. That's the buy-in. But in 2000 years neither branch of the "poof" tree has produced the tiniest little sliver of actual, objective evidence that anything outside of basic biology is going on. Never.

In the meantime science has bolstered its case with literally millions of neatly organized fossils showing the progression of species through intermediate forms back into deep time. Science has figured out plate tectonics and nuclear decay, which provided an age and migration model that neatly matched the known fossil evidence. Science has figured out genetics, which provided an independent family tree that exactly matches the extant evolutionary tree. And lastly, science has unambiguously demonstrated evolution in the lab and in the wild.

Meanwhile, intelligent design has produced nothing but mass-market books and lawsuits for school districts. The entirety of their evidence used to rest on some particularly speedy bacteria that Michael Behe liked to talk about and some mathematical ideas Bill Dembski trumpets, but can't define in a conclusive manner that can actually be cross-checked. I say "used to" because Behe's bacterial puzzle was solved years ago, right after he publicized it and made an ass of himself on the stand in the Dover case. Dembski didn't get to the ass-making phase because he ran away from said trial without testifying.

And that is my respectful wrap-up of Intelligent Design Theory. You don't want to see my disrespectful take. Really.
There's just no "there" there. Never has been. All they have is popular books and lawyers. Oh, and concern trolls.

DS · 18 April 2013

Aaron Marshall said: I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues. I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case. In particular I thought his argument as to the difference between Intelligent Design and Scientific Creationism shows that they are two very different projects. Scientific creationism has prior religious commitments (namely that there exists a supernatural agent who creates and orders the world & that the biblical account of creation recorded in Genesis is scientifically accurate) whereas ID does not. ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns known to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed. The question of who designed it or what designed it is not part of the ID project. The fact of the matter is that it could be some kind of natural teleology as Thomas Nagel has suggested. So can you critique what you think is wrong about these statements and the approach of ID in trying to detect design in nature? Can you also explain to me why there is such negative, seemingly personal, animosity toward ID? If this is just objective science then why don't those in the lab do the work to show that ID is not true and then just dismiss it? I'm sure that those reading this will have much to say about what I have written but I hope to keep this dialogue cordial as I am very much interested in hearing why you think ID is so "silly." Thank you.
Welcome Aaron. Why don't you start by reading what the authors of this thread have to say about Dembski and his ideas? Then ask yourself this, why is absolutely none of the Dembski stuff published in a respectable scientific journal? There is a negative animosity towards ID because it is pseudoscientific nonsense perpetrated by charlatans and liars who have a religious agenda and no scientific credentials. It is a dishonest and disingenuous attempt to get creationism taught as science in public schools. It has no scientific merit whatsoever and serves only as a ruse to question valid science. But other than that it's just fine and dandy. If you disagree then just explain how to calculate CSI and compare the CSI of various organisms. This is something that Dembski has never once managed to do. Now ask yourself, if any real scientist proposed such a nebulous and untestable concept, what should the reaction be?

Aaron Marshall · 18 April 2013

phhht said:
Aaron Marshall said: ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns knows to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed.
The trouble is that nobody, most emphatically including Dembski, can do that. Nobody can build a design detector. Nobody can say how to distinguish the designed from the non-designed. And ID is a direct but intentionally obscured variant of creationism. You need to do a LOT more reading about the issues. Then come back and ask your questions, if you still have them.
I hope that we can challenge ideas here and not each other, so it is in that spirit that I respond. You are telling me that you cannot distinguish the designed from the undesigned? I find that impossible to believe. Was your car designed? I can certainly tell things are designed. Now maybe it is harder with some things than with others, but why should that be a reason to just throw up our arms and say it can't be done? It seems like Dembski does have a "design detector" in mind in his Explanatory Filter. He starts with the phenomenon that we want to determine if it was designed and asks if that thing is contingent - did it have to happen? If it did then it is necessary and the case is closed. If it is contingent then we ask if the thing is complex. If the answer is no then he attributes it to chance and the case is closed. If it is complex then we move to the question of whether it is specified. If it isn't specified then again he attributes it to chance. If it is specified, complex and contingent then we determine that it was designed. Now we have said nothing about who or what designed it; we just noted that it has all the hallmarks of design. I would contend those are the same thought processes you go through to determine that your car was designed instead of being necessary or just coming about by chance. As to your last point, in what way is ID a "direct but intentionally obscured variant of creationism?" Nothing that I have said, or from what I have read in Dembski's book, mentions anything about God or Christianity as the Designer. All he has done is looked at certain natural systems and phenomena and determined that they were designed. My question to you would be what would it take for you to think that something in nature was designed? Are you removing that option from the table before you even look at the evidence?
That doesn't seem like your doing science and going where the evidence leads but instead you have already decided beforehand where the evidence is allowed to go. I hope you will respond and we can keep the dialogue going.
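The filter described in the comment above amounts to a three-stage decision procedure, which can be sketched in a few lines of toy code. The boolean inputs here are illustrative assumptions: Dembski provides no operational procedure for deciding contingency, complexity, or specification for a real phenomenon, and that gap is exactly what the critics in this thread are pointing at.

```python
# Toy sketch of the Explanatory Filter as described in the comment above.
# The three boolean inputs are assumptions; deciding them reliably for a
# real phenomenon is the unresolved step.

def explanatory_filter(contingent: bool, complex_: bool, specified: bool) -> str:
    """Return the filter's verdict for an event with the given properties."""
    if not contingent:
        return "necessity"  # the event had to happen; case closed
    if not complex_:
        return "chance"     # simple contingent events are attributed to chance
    if not specified:
        return "chance"     # complex but unspecified: still chance
    return "design"         # contingent, complex, and specified

print(explanatory_filter(True, True, True))   # design
print(explanatory_filter(True, True, False))  # chance
```

Note that "design" is reached purely by elimination: the filter never tests a positive hypothesis about a designer, which is why its output depends entirely on how the two undefined predicates are judged.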

DS · 18 April 2013

Aaron Marshall said:
phhht said:
Aaron Marshall said: ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns known to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed.
The trouble is that nobody, most emphatically including Dembski, can do that. Nobody can build a design detector. Nobody can say how to distinguish the designed from the non-designed. And ID is a direct but intentionally obscured variant of creationism. You need to do a LOT more reading about the issues. Then come back and ask your questions, if you still have them.
I hope that we can challenge ideas here and not each other, so it is in that spirit that I respond. You are telling me that you cannot distinguish the designed from the undesigned? I find that impossible to believe. Was your car designed? I can certainly tell things are designed. Now maybe it is harder with some things than with others, but why should that be a reason to just throw up our hands and say it can't be done?

It seems like Dembski does have a "design detector" in mind in his Explanatory Filter. He starts with the phenomenon that we want to determine if it was designed and asks if that thing is contingent: did it have to happen? If it did, then it is necessary and the case is closed. If it is contingent, then we ask if the thing is complex. If the answer is no, then he attributes it to chance and the case is closed. If it is complex, then we move to the question of whether it is specified. If it isn't specified, then again he attributes it to chance. If it is specified, complex, and contingent, then we determine that it was designed. Now we have said nothing about who or what designed it; we just noted that it has all the hallmarks of design. I would contend those are the same thought processes you go through to determine that your car was designed instead of being necessary or just coming about by chance.

As to your last point, in what way is ID a "direct but intentionally obscured variant of creationism"? Nothing that I have said, or from what I have read in Dembski's book, mentions anything about God or Christianity as the Designer. All he has done is look at certain natural systems and phenomena and determine that they were designed. My question to you would be: what would it take for you to think that something in nature was designed? Are you removing that option from the table before you even look at the evidence?
That doesn't seem like you're doing science and going where the evidence leads; instead you have already decided beforehand where the evidence is allowed to go. I hope you will respond and we can keep the dialogue going.
It is only possible to detect design reliably if you know the identity of the designer, its motives, its methods, and its limitations. This is something that ID is not prepared to stipulate. Therefore, design detection is extremely susceptible to false positives and false negatives. It is virtually impossible to determine if it has any validity at all. That is why it is scientifically vacuous. Life does not have the appearance of design on close examination. It shows all of the hallmarks of evolution, complete with historical contingency and many constraints. It shows no evidence of any foresight or planning. It contains not intelligent design but stupid design, incompetent design, plagiarized design and downright stupid design. In short, it is exactly what one would predict as the product of random mutation and natural selection. There is no need for any alternative hypothesis. The appearance of design is an illusion that disappears on closer inspection.

Mike Elzinga · 18 April 2013

Aaron Marshall said: In particular I thought his argument as to the difference between Intelligent Design and Scientific Creationism shows that they are two very different projects. Scientific creationism has prior religious commitments (namely that there exists a supernatural agent who creates and orders the world & that the biblical account of creation recorded in Genesis is scientifically accurate) whereas ID does not.
Intelligent Design morphed out of "Scientific" Creationism in order to get around the 1987 US Supreme Court decision in Edwards v. Aguillard. In fact, if you go over to the National Center for Science Education website and click on the Legal Cases link, you will find transcripts and court decisions in considerable detail. ID advocates, including Dembski, may try to distance themselves from "Scientific" Creationism, but it is easy to find the direct connections. Just read Dembski's keynote address at BIOLA in 2002 and you will see very clearly that Dembski conceives of ID as "the essential good liberating the human spirit from the suffocating ideologies of reductionism and materialism."

ID/creationism is a sectarian socio/political movement, started officially by Henry Morris and Duane Gish in 1970 with their founding of the Institute for Creation "Research." The Discovery Institute's "Center for (the Renewal of) Science and Culture" has a famous "Wedge Document" that lays out five-year and twenty-year goals that "seek nothing less than the overthrow of materialism and its cultural legacies."

If the court cases, Dembski's keynote address, the Wedge Document, and all the documented history of this movement don't convince you of the sectarian socio/political and culture-war nature of ID/creationism, then there is an entire set of memes, misconceptions, and misrepresentations of science that runs through all of the works of Dembski et al. that links them unmistakably. There is not one attempt on the part of Dembski, and other leaders of this movement, that doesn't reveal those misconceptions; and those misconceptions go right back to Henry Morris and Duane Gish. Dembski's ideas are genetically related to those of Morris and Gish. Judging from your canned language, I suspect you already know the connections.

phhht · 18 April 2013

Aaron Marshall said:
phhht said:
Aaron Marshall said: ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns known to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed.
The trouble is that nobody, most emphatically including Dembski, can do that. Nobody can build a design detector. Nobody can say how to distinguish the designed from the non-designed. And ID is a direct but intentionally obscured variant of creationism.
You are telling me that you cannot distinguish the designed from the undesigned?
No, I'm telling you that NOBODY CAN BUILD A DESIGN DETECTOR.
I can certainly tell things are designed.
Excellent. How?
...what would it take for you to think that something in nature was designed?
A working design detector. It should be able to measure the relative designedness of a rock and a pocket watch.

Aaron Marshall · 18 April 2013

"Both ID and creationism would be perfectly legitimate research endeavors, worthy of respect from the scientific community if only they actually produced any evidence."

So what would you consider "evidence"?

"And lastly, science has unambiguously demonstrated evolution in the lab and in the wild."

Can you point me to where science has unambiguously demonstrated macroevolution in the lab or in the wild?

"The entirety of their evidence used to rest on some particularly speedy bacteria that Michael Behe liked to talk about and some mathematical ideas Bill Dembski trumpets, but can't define in a conclusive manner that can actually be cross-checked. I say "used to" because Behe's bacterial puzzle was solved years ago, right after he publicized it and made an ass of himself on the stand in the Dover case."

Where was Behe's bacterial puzzle solved? Can you point me to that evidence?

Here is what is seemingly odd to me. Why is there so much anger and derision coming from you regarding this particular scientific endeavor? It seems like if this were truly an objective scientific matter, then you would evaluate the arguments, run the tests, produce the results, and see where the chips fall. Do you even allow for the possibility of design in nature, or do you rule that out a priori? If you rule out design in the first place, then of course it is no mystery why you don't find it; but if it is a live option for you, then why is this much more than just an objective scientific disagreement for you?

Are you saying that natural physical processes can account for everything in nature? Can you tell me how those physical processes can account for cognition, reason and objective morality (values) or do you deny that those things exist? How do physical processes account for the laws of logic or mathematics or aesthetics? Can you point me to the evidence that shows these things have been accounted for by strictly physical processes?

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 18 April 2013

And once again the fact that it knows a whole lot more about this pseudoscience than initially indicated comes out.

It never responds honestly, just repeats the same worthless tripe every time, as if somehow the PRATTs have become meaningful questions.

Glen Davidson

phhht · 18 April 2013

Aaron Marshall said: Why is there so much anger and derision coming from you regarding this particular scientific endeavor?
Because so many crypto-christian loonies come around here who sound just exactly like you, Aaron. That's why.
Are you saying that natural physical processes can account for everything in nature?
I'm saying that there is not the slightest, tiniest, flimsiest shred of evidence that anything other than natural physical processes are involved. Nor is there any reason to suppose such involvement - apart from god-of-the-gaps arguments like yours, you transparent phony.

DS · 18 April 2013

Aaron Marshall said: Can you point me to where science has unambiguously demonstrated macroevolution in the lab or in the wild?
If you really are serious about wanting to examine the evidence for evolution, here is a link that presents 29 evidences for macroevolution, complete with scientific references: http://www.talkorigins.org/faqs/comdesc/ My favorite is section four on genetics, but we can start wherever you want.

Aaron Marshall · 18 April 2013

phhht said:
Aaron Marshall said: Are you saying that natural physical processes can account for everything in nature?
I'm saying that there is not the slightest, tiniest, flimsiest shred of evidence that anything other than natural physical processes are involved. Nor is there any reason to suppose such involvement - apart from god-of-the-gaps arguments like yours, you transparent phony.
I am asking legitimate questions and you are calling me names? Why am I a phony, because I want to hear your arguments? I told everyone that I just read Dembski's book and I thought the arguments were good, and so I wanted to find a place where I could discuss this. Is this not the place? So again, without calling me names, would you respond to my questions regarding cognition, reason, and objective morality? Do you think those things exist? It would seem that all three are evidence for something other than natural physical processes being involved. I am also interested to know how anything that I have been saying has anything to do with a god-of-the-gaps argument. I think ID could show that intelligence has designed things in nature and that still doesn't point to God, certainly not to the God of Christianity. Thomas Nagel thinks that physicalism is false, but he is also an atheist; so even if something is designed, what follows from that is wholly apart from the scientific question of whether the thing is designed in the first place, wouldn't you agree? Lastly, you say that so many "crypto-Christian loonies" come here and sound like me and that is why you are mad. If you have the truth, why don't you just spell it out to me and not attack me personally, and then hope that I am won over by the evidence? Is there a better place where I could dialogue with naturalistic evolutionists?

Aaron Marshall · 18 April 2013

DS said:
Aaron Marshall said: Can you point me to where science has unambiguously demonstrated macroevolution in the lab or in the wild?
If you really are serious about wanting to examine the evidence for evolution, here is a link that presents 29 evidences for macroevolution, complete with scientific references: http://www.talkorigins.org/faqs/comdesc/ My favorite is section four on genetics, but we can start wherever you want.
Thank you. I will look over this tonight. Then can I ask you questions about it?

Keelyn · 18 April 2013

stevaroni said:
Aaron Marshall said: I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues.
Yawn. Hi new concern troll who has never been here before but somehow uses the exact same language as our last new concern troll who has mysteriously vanished after three days. Welcome to the zoo.

I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case.

Yeah. Sure. Why not. Let's get the talking points out of the way.

In particular I thought his argument as to the difference between Intelligent Design and Scientific Creationism shows that they are two very different projects. Scientific creationism (blah, blah, snip)

Yeah. We get it. Intelligent design proponents absolutely insist that it's not creationism, despite mountains of evidence that it's the same guys, using the same financing, to push the same ideas. Whatever. Here's some "respectful dialogue". It doesn't matter if ID is creationism. It doesn't matter if ID is not creationism. Both ID and creationism would be perfectly legitimate research endeavors, worthy of respect from the scientific community, if only they actually produced any evidence. That's all it would take. That's the buy-in. But in 2000 years neither branch of the "poof" tree has produced the tiniest little sliver of actual, objective evidence that anything outside of basic biology is going on. Never.

In the meantime, science has bolstered its case with literally millions of neatly organized fossils showing the progression of species through intermediate forms back into deep time. Science has figured out plate tectonics and nuclear decay, which provided an age and migration model that neatly matched the known fossil evidence. Science has figured out genetics, which provided an independent family tree that exactly matches the extant evolutionary tree. And lastly, science has unambiguously demonstrated evolution in the lab and in the wild.

Meanwhile, intelligent design has produced nothing but mass-market books and lawsuits for school districts. The entirety of their evidence used to rest on some particularly speedy bacteria that Michael Behe liked to talk about and some mathematical ideas Bill Dembski trumpets, but can't define in a conclusive manner that can actually be cross-checked. I say "used to" because Behe's bacterial puzzle was solved years ago, right after he publicized it and made an ass of himself on the stand in the Dover case. Dembski didn't get to the ass-making phase because he ran away from said trial without testifying. And that is my respectful wrap-up of Intelligent Design Theory. You don't want to see my disrespectful take. Really.
There's just no "there" there. Never has been. All they have is popular books and lawyers. Oh, and concern trolls.
Yes, Stevaroni – I agree 100% with everything you said. I hope we don't need to go through Aaron Marshall's rehashing of ID BS talking points, BS that has been soundly refuted ad nauseam, by the way.

Aaron Marshall · 18 April 2013

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad said: And once again the fact that it knows a whole lot more about this pseudoscience than initially indicated comes out. It never responds honestly, just repeats the same worthless tripe every time, as if somehow the PRATTs have become meaningful questions. Glen Davidson
I just read Dembski's book and Nagel's and I want to dialogue about them. I never claimed anything but that. I just said I was new to this board and to discussing ID stuff. I would certainly reply honestly if you asked me a question. I wouldn't attack you and call you names.

phhht · 18 April 2013

Aaron Marshall said:
Are you saying that natural physical processes can account for everything in nature?
I'm saying that there is not the slightest, tiniest, flimsiest shred of evidence that anything other than natural physical processes are involved. Nor is there any reason to suppose such involvement - apart from god-of-the-gaps arguments like yours, you transparent phony.
I am asking legitimate questions.
No you're not. You're a crypto-christian loony posing as a simple, naive seeker of enlightenment. And around here, you're about number 1,214.

DS · 18 April 2013

Aaron Marshall said:
DS said:
Aaron Marshall said: Can you point me to where science has unambiguously demonstrated macroevolution in the lab or in the wild?
If you really are serious about wanting to examine the evidence for evolution, here is a link that presents 29 evidences for macroevolution, complete with scientific references: http://www.talkorigins.org/faqs/comdesc/ My favorite is section four on genetics, but we can start wherever you want.
Thank you. I will look over this tonight. Then can I ask you questions about it?
Absolutely. But you should realize that the site already includes answers to all creationist arguments. Unless you can provide a better explanation for all of the evidence, you must conclude that macroevolution has indeed occurred.

Aaron Marshall · 18 April 2013

Life does not have the appearance of design on close examination. It shows all of the hallmarks of evolution, complete with historical contingency and many constraints. It shows no evidence of any foresight or planning. It contains not intelligent design but stupid design, incompetent design, plagiarized design and downright stupid design. In short, it is exactly what one would predict as the product of random mutation and natural selection. There is no need for any alternative hypothesis. The appearance of design is an illusion that disappears on closer inspection.

You just said that life contains "not intelligent design but stupid design, incompetent design, plagiarized design and downright stupid design." So you admit that there is design? That is all ID is saying, from my reading. They certainly aren't saying that there has to be perfect design or anything like that. In fact, it would be impossible to tell the things that you are saying unless you knew who the designer was and why they/it/he were designing the thing the way that they were, before you could know whether the design was "stupid." Isn't that a wholly separate question that is really of no consequence? If all ID shows is that something was designed, then that is what it is trying to do. It doesn't have to get into value judgements about the quality of the design. So if you throw out the idea that a designer would have to design something "perfectly," then what other evidence is there that the "illusion of design" isn't really just actual design?

phhht · 18 April 2013

Aaron Marshall said: You just said that life contains, "not intelligent design but stupid design, incompetent design, plagiarized design and downright stupid design." So you admit that there is design?
It is intentional, provocative misunderstandings like this one which mark the crypto-christian partisan.

stevaroni · 18 April 2013

Aaron Marshall said: "Both ID and creationism would be perfectly legitimate research endeavors, worth of respect from the scientific community if only they actually produced any evidence." So what would you consider "evidence"?
I think most people can agree on what "evidence" is. An artifact or phenomenon that can be physically examined and measured. An event which was unambiguously recorded in sufficient detail and with reasonable objectivity that it can be studied. A force of nature that shows a repeatable, quantifiable effect. The key concept is that evidence is an objective thing that can be examined, measured, and quantified. And of such stuff, in 2000 years, ID/creationism research has produced exactly.... well, none.
Can you point me to where science has unambiguously demonstrated macroevolution in the lab or in the wild?
Ah... macroevolution, you say. Let's try something first: give me a mechanistic dividing line between "macro" evolution and "micro" evolution, because outside of the creationist world such a distinction doesn't exist, any more than "micromath" and "macromath" exist or "microwalking" is a distinct thing from "macrowalking". There is just math, walking... and... wait for it... plain vanilla evolution. The very fact that you use the term "macroevolution" betrays your status as a concern troll. Please note that for next time. Still, if you insist on proof of evolution (a subject you could easily Google yourself, if you were serious), start with Lenski's long-term evolution experiment. Google it.

Where was Behe's bacterial puzzle solved? Can you point me to that evidence?

Actually, Behe himself invalidated his own work during the Dover trial, where he was employed as an expert witness. Placed on the stand and under oath, he was forced to work through his improbability calculations using real numbers. When Behe did the actual math, he was forced to admit that a specific mutation that was supposedly so improbable it might as well be impossible was, given the number of cells present just in the courtroom, probably happening there every 50 minutes or so.

He was also forced to recant his position that the evolution of the immune system was totally unexplained and undocumented. He came to this epiphany after being asked to address a stack of dozens of papers, publications, and textbooks that did just that in excruciating detail. Some would call this "gotcha" lawyering, but then again, Behe was a tenured professor of biology and this was supposedly his area of expertise. His money quote in regard to the giant pile of evidence: "this is heavy."

Bill Dembski was scheduled to testify at the same trial. But duly noting the effect that cross-examination under penalty of perjury has on bullshit (and the subsequent effect an honesty meltdown has on your career as an author of popular ID books), Bill chose the moral high ground, recused himself as an expert witness, and refused to testify. Still, he kept the school district's money. Nice.
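The arithmetic behind that courtroom point is worth spelling out: an event that is astronomically improbable for any one cell can still be routine across a large population. The numbers below are purely illustrative assumptions for the sketch, not the figures actually used at the Dover trial.

```python
# Expected occurrences of a rare per-cell event across a population.
# All numbers here are illustrative assumptions, not trial figures.

def expected_occurrences(per_cell_probability: float, n_cells: float) -> float:
    """Expected count of independent per-cell events across n_cells."""
    return per_cell_probability * n_cells

# A 1-in-10^10 mutation, across 10^12 bacterial cells, is expected
# roughly 100 times per generation: "impossible" odds per cell,
# commonplace for the population.
print(expected_occurrences(1e-10, 1e12))
```

The design flaw in the "too improbable" argument is precisely this: the per-event probability is meaningless until multiplied by the enormous number of trials nature actually runs.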

Here is what is seemingly odd to me. Why is there so much anger and derision coming from you regarding this particular scientific endeavor?

Ummm... because you seem to have come here under false pretenses?
Can you tell me how those physical processes can account for cognition, reason and objective morality (values) or do you deny that those things exist? How do physical processes account for the laws of logic or mathematics or aesthetics? Can you point me to the evidence that shows these things have been accounted for by strictly physical processes?
Not yet, and certainly not completely. On the other hand, there's a lot of really good research going on, and people are turning up really cool stuff on brain evolution all the time. (Regardless of your position on evolution or creationism, read "Brain Bugs" by Dean Buonomano, a fascinating book about the weird way our brains work and the weird glitches we all have that you never notice till someone points them out.) Meanwhile, in 2000 years, while science has doubled the average lifespan and put cameras on Mars, the number of phenomena positively attributed to supernatural or external intelligences is still... oh yeah, exactly zero. You tell me which way an honest man bets.

DS · 18 April 2013

Aaron Marshall said: Life does not have the appearance of design on close examination. It shows all of the hallmarks of evolution, complete with historical contingency and many constraints. It shows no evidence of any foresight or planning. It contains not intelligent design but stupid design, incompetent design, plagiarized design and downright stupid design. In short, it is exactly what one would predict as the product of random mutation and natural selection. There is no need for any alternative hypothesis. The appearance of design is an illusion that disappears on closer inspection. You just said that life contains, "not intelligent design but stupid design, incompetent design, plagiarized design and downright stupid design." So you admit that there is design? That is all ID is saying from my reading. They certainly aren't saying that there has to be perfect design or anything like that. In fact it would be impossible to tell the things that you are saying unless you knew who the designer was and why they/it/he were designing the thing the way that they were before you could know whether the design was "stupid." Isn't that a wholly separate question that is really of no consequence? If all ID shows is that something was designed then that is what it is trying to do. It doesn't have to get into value judgements about the quality of the design. So if you throw out that idea that a designer would have to design something "perfectly" then what other evidence is there that the "illusion of design" isn't really just actual design?
Life has the appearance of having evolved,

Joe Felsenstein · 18 April 2013

Aaron Marshall said: I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues. I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case. ... So can you critique what you think is wrong about these statements and the approach of ID in trying to detect design in nature? Can you also explain to me why there is such negative, seemingly personal, animosity toward ID? If this is just objective science then why don't those in the lab do the work to show that ID is not true and then just dismiss it? I'm sure that those reading this will have much to say about what I have written but I hope to keep this dialogue cordial as I am very much interested in hearing why you think ID is so "silly." Thank you.
Aaron, let me take you at your word. If you think William Dembski's arguments are convincing, perhaps you can answer a few questions.

* Does Dembski's 2002 law of conservation of CSI work? Does it show that CSI cannot be put into a genome by natural selection? If so, how? How do you deal with the problem that the specification has to remain the same, to judge whether high fitness can be achieved by evolutionary processes, when in Dembski's argument he can only argue for conservation by changing the specification between generations?
* If his CSI conservation does not work, how does his Design Detection work? If you can't use it to detect Design, what argument does he have?
* Do you agree that his (and Robert Marks's) Search For a Search argument does not show that natural selection does not work to put CSI into a genome?
* What about Dembski's use of the No Free Lunch theorem? It has been lethally criticized (in multiple extensive treatments) for building in the unrealistic assumption that fitness surfaces are "white noise" surfaces that are unlike actual biology.

If you're serious, you'll try an answer to at least one of these. There are plenty of links in the original post to follow to find out the details of the argument against Dembski's CSI/Design argument. If you're not serious, you'll say that we are losing our tempers (have you noticed that we get associated with amoral racist Nazis by the other side?). If you're not serious you'll spend a lot of time saying that if it looks designed it is designed. If you're not serious you'll ask us to provide every detail of what evolution did. I see some signs of not-serious. So prove me wrong and seriously tell me what your reaction is to the points I made above. I'm the author of the original post here, so I hope you will give my questions some attention. I hope for a cordial exchange.
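For readers keeping score on the CSI questions: the threshold in play is Dembski's universal probability bound of 1 in 10^150 (described in the original post), which converts to an information threshold of roughly 500 bits via the standard formula that an event of probability p carries -log2(p) bits. A quick sketch of that conversion:

```python
import math

# Convert Dembski's universal probability bound (1 in 10^150) into bits.

def information_bits(probability: float) -> float:
    """Information content, in bits, of an event with the given probability."""
    return -math.log2(probability)

# 150 * log2(10) is about 498.3, hence the commonly quoted "500 bits" of
# specified information needed before Dembski's framework declares CSI.
print(information_bits(1e-150))
```

The debate in this thread is not over this arithmetic, which is uncontroversial, but over whether natural selection can accumulate that many bits of fitness-relevant information, which Dembski's conservation law was supposed to rule out.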

TomS · 19 April 2013

Aaron Marshall said: Are you saying that natural physical processes can account for everything in nature? Can you tell me how those physical processes can account for cognition, reason and objective morality (values) or do you deny that those things exist? How do physical processes account for the laws of logic or mathematics or aesthetics? Can you point me to the evidence that shows these things have been accounted for by strictly physical processes?
Let's assume that there are lots of things which "natural physical processes" cannot account for. Do you have an example of an alternative which does account for something? That account ought to tell us what happened, when and where, and what properties of the agent(s) result in such-and-such being the way it is (rather than otherwise). As far as evidence goes, I will not insist on evidence for your alternative. But it might be interesting to see what sort of thing might count as evidence, whether for or against it.

DS · 19 April 2013

Joe Felsenstein said:
Aaron Marshall said: I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues. I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case. ... So can you critique what you think is wrong about these statements and the approach of ID in trying to detect design in nature? Can you also explain to me why there is such negative, seemingly personal, animosity toward ID? If this is just objective science then why don't those in the lab do the work to show that ID is not true and then just dismiss it? I'm sure that those reading this will have much to say about what I have written but I hope to keep this dialogue cordial as I am very much interested in hearing why you think ID is so "silly." Thank you.
Aaron, let me take you at your word. If you think William Dembski's arguments are convincing, perhaps you can answer a few questions.

* Does Dembski's 2002 law of conservation of CSI work? Does it show that CSI cannot be put into a genome by natural selection? If so, how? How do you deal with the problem that the specification has to remain the same, to judge whether high fitness can be achieved by evolutionary processes, when in Dembski's argument he can only argue for conservation by changing the specification between generations?
* If his CSI conservation does not work, how does his Design Detection work? If you can't use it to detect Design, what argument does he have?
* Do you agree that his (and Robert Marks's) Search For a Search argument does not show that natural selection does not work to put CSI into a genome?
* What about Dembski's use of the No Free Lunch theorem? It has been lethally criticized (in multiple extensive treatments) for building in the unrealistic assumption that fitness surfaces are "white noise" surfaces that are unlike actual biology.

If you're serious, you'll try an answer to at least one of these. There are plenty of links in the original post to follow to find out the details of the argument against Dembski's CSI/Design argument. If you're not serious, you'll say that we are losing our tempers (have you noticed that we get associated with amoral racist Nazis by the other side?). If you're not serious you'll spend a lot of time saying that if it looks designed it is designed. If you're not serious you'll ask us to provide every detail of what evolution did. I see some signs of not-serious. So prove me wrong and seriously tell me what your reaction is to the points I made above. I'm the author of the original post here, so I hope you will give my questions some attention. I hope for a cordial exchange.
Thanks for stepping in, Joe. I agree, if Aaron really wanted to discuss Dembski and his ideas, he would have responded to the original points in this thread. In fact, I recommended that he do so before he started bringing up other topics. There might be a Gish gallop coming on. If you want to move a discussion of macroevolution, or anything else, to the bathroom wall I am more than willing to oblige.

Aaron Marshall · 19 April 2013

Joe Felsenstein said:
Aaron Marshall said: I am fairly new to these types of discussions but I am very interested in engaging in dialogue on these issues. I have just finished reading Dembski's book "The Design Revolution" and to my untrained mind it seems like a solid case for making Intelligent Design a rigorous scientific discipline (which he claims is already happening). I would like to hear from people who wish to dialogue respectfully why this should not be the case. ... So can you critique what you think is wrong about these statements and the approach of ID in trying to detect design in nature? Can you also explain to me why there is such negative, seemingly personal, animosity toward ID? If this is just objective science then why don't those in the lab do the work to show that ID is not true and then just dismiss it? I'm sure that those reading this will have much to say about what I have written but I hope to keep this dialogue cordial as I am very much interested in hearing why you think ID is so "silly." Thank you.
Aaron, let me take you at your word. If you think William Dembski's arguments are convincing, perhaps you can answer a few questions.

* Does Dembski's 2002 law of conservation of CSI work? Does it show that CSI cannot be put into a genome by natural selection? If so, how? How do you deal with the problem that the specification has to remain the same, to judge whether high fitness can be achieved by evolutionary processes, when in Dembski's argument he can only argue for conservation by changing the specification between generations?

* If his CSI conservation does not work, how does his Design Detection work? If you can't use it to detect Design, what argument does he have?

* Do you agree that his (and Robert Marks's) Search For a Search argument does not show that natural selection does not work to put CSI into a genome?

* What about Dembski's use of the No Free Lunch theorem? It has been lethally criticized (in multiple extensive treatments) for building in the unrealistic assumption that fitness surfaces are "white noise" surfaces that are unlike actual biology.

If you're serious, you'll try an answer to at least one of these. There are plenty of links in the original post to follow to find out the details of the argument against Dembski's CSI/Design argument. If you're not serious, you'll say that we are losing our tempers (have you noticed that we get associated with amoral racist Nazis by the other side?). If you're not serious you'll spend a lot of time saying that if it looks designed it is designed. If you're not serious you'll ask us to provide every detail of what evolution did. I see some signs of not-serious. So prove me wrong and seriously tell me what your reaction is to the points I made above. I'm the author of the original post here, so I hope you will give my questions some attention. I hope for a cordial exchange.
Joe - I hope I have made myself clear about who I am and what I was hoping to accomplish with my questions. I have read Dembski's book and Nagel's and they seemed persuasive to me. So now I was hoping to dialogue with people who disagreed and understand where the conflict lies. I am using the language that Dembski uses because that was what I read that I found persuasive. The questions you bring up are so completely over my head at this point that I don't even know where to start. I want to answer your questions and have a back and forth but I don't know enough to say anything intelligent about your questions except to repeat what I have read in his books (which everyone here says is a bunch of bunk). Does Dembski's 2002 law of conservation of CSI work? Does it show that CSI cannot be put into a genome by natural selection? If so, how? How do you deal with the problem that the specification has to remain the same, to judge whether high fitness can be achieved by evolutionary processes, when in Dembski's argument he can only argue for conservation by changing the specification between generations? What do you mean by that last statement? I certainly don't know the answers to your questions. I am willing to read the posts and try to understand your objections to Dembski's work. I do want to hear both sides of the argument. I have read your post a couple of times but I am so far out of my league trying to catch up and understand what you are saying. I am not a scientist. What do you mean that Dembski has to change the specification between generations? I reread the chapter where he deals with this and he says that the "Law of Conservation of Information tells us that when specified complexity is given over to natural causes, it either remains unchanged (in which case information is strictly conserved) or disintegrates (in which case information diminishes)."
So my understanding of what he is saying is that natural causes (whether by chance or necessity or a combination) can at best preserve specified complexity or otherwise degrade it, but they can't generate this specified complexity. He doesn't say that natural causes in tandem with intelligence can't generate specified complexity. So he says that in reproduction, organisms transmit their specified complexity to the next generation, and you think that this is done through the mechanism of natural selection, but how could that process add information to the new organism? I don't want to just parrot back his arguments because I'm sure you have heard them before, but I am saying that it seems persuasive. I don't see in there where he talks about changing the specification between generations. So that is why I ask what you mean by that. From what I have read in The Design Revolution it seems that he shows that natural processes could not account for the CSI in the genome. Now I don't know their Search for a Search argument but you seem to make a big deal about the fact that by their own argument design could only come in at the beginning. I see him agreeing that maybe that is a possibility. If design came in at only the beginning (in fact isn't that what Nagel is essentially arguing?) there would still be design there, and then the question would be how could strictly physical processes front-load evolution with design? I will leave these comments here for now and wait for your response on these points, not because I don't wish to ask you about the other points in future posts. Does the bottom line come down to the assumptions that we are making before we start to look at the data? Or do you think that is just a cop-out? I say this because you said, "If you're not serious you'll spend a lot of time saying that if it looks designed it is designed. If you're not serious you'll ask us to provide every detail of what evolution did."
Are you saying that is what Dembski's arguments amount to? I agree that you could never "prove" every detail of what evolution did, but isn't that exactly the gist of these questions: that if I see evidence of Intelligent Design then I have to give an account of who the designer is and these things about him/it? What if that cannot be done (and I am not saying that, but for the sake of argument)? Would that prove that there is no design in nature? Isn't asking an ID proponent to answer every question of how something was designed before you will recognize that it is designed following down the same path that you are chiding me for going down? Now, to be fair, you didn't make that argument and you may not agree with it, but it has come up in multiple posts towards me, so I just wondered what you thought of that? Thanks.

Henry J · 19 April 2013

So in other words, what Aaron needs to do is get some actual textbooks on the subject and read those.

Aaron Marshall · 19 April 2013

phhht said:
Aaron Marshall said:
phhht said:
Aaron Marshall said: ID simply begins with the data that scientists observe in the lab and determines if this data exhibits patterns known to signal intelligent causes and thus comes to the conclusion that the thing in consideration was in fact designed.
The trouble is that nobody, most emphatically including Dembski, can do that. Nobody can build a design detector. Nobody can say how to distinguish the designed from the non-designed. And ID is a direct but intentionally obscured variant of creationism.
You are telling me that you cannot distinguish the designed from the undesigned?
No, I'm telling you that NOBODY CAN BUILD A DESIGN DETECTOR.
I can certainly tell things are designed.
Excellent. How?
...what would it take for you to think that something in nature was designed?
A working design detector. It should be able to measure the relative designedness of a rock and a pocket watch.
So you are saying that there is no way to tell if things are designed or not? Do you think your car was designed? That we can detect that some things were designed seems just common sense to me. Now maybe you're right that when we get down into the nuts and bolts of how you detect these things, it gets really complicated, but why should that stop us from pursuing the goal just because it is hard? So why wouldn't Dembski's explanatory filter work as a design detector? Take your rock example. We ask the question whether it was contingent and the answer seems obviously so (it didn't have to be here). So then we ask if it is complex? In this sense, is it improbable that we have the rock absent just chance? Seemingly not, and thus I would say that chance could have produced it and thus it was not designed. You could disagree on that step and say it is complex and thus move on to the last step where we ask if it is specified, indicating an independently given pattern (not just randomness), and that would clearly not be the case with the rock. So the rock would have two places to be kicked out of the "designed" category. Now run the same test with the watch. It is surely contingent and complex and it exhibits an independently given pattern, or in other words it has low specificational complexity. Now these are all Dembski's terms that I just lifted from his book, but that seems like a legitimate way to determine design. What do you mean that the design detector would have to measure the relative designedness of an object? What if it can't say more than what I have said, that it was designed or it wasn't designed? Isn't that enough? Otherwise aren't you asking more of ID than you ask of evolutionary theory?
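The three-step filter described above has a simple decision-tree structure. A minimal sketch (illustrative only, not from Dembski's book) makes the critics' objection concrete: the verdict hangs entirely on yes/no judgments that the filter itself gives no objective way to compute.

```python
def explanatory_filter(is_contingent, is_complex, is_specified):
    """Dembski's explanatory filter as a decision chain.

    Each argument is a yes/no judgment supplied by the observer; the
    filter provides no procedure for computing any of them, which is
    precisely the critics' complaint.
    """
    if not is_contingent:
        return "necessity"   # a law-like regularity explains it
    if not is_complex:
        return "chance"      # probable enough to arise by chance
    if not is_specified:
        return "chance"      # complex, but matches no independent pattern
    return "design"

# The rock and watch examples from the comment above:
print(explanatory_filter(True, False, False))  # rock  -> chance
print(explanatory_filter(True, True, True))    # watch -> design
```

Everything interesting is hidden in how the three booleans get set, and that step is left to the observer's opinion.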

Aaron Marshall · 19 April 2013

TomS said:
Aaron Marshall said: Are you saying that natural physical processes can account for everything in nature? Can you tell me how those physical processes can account for cognition, reason and objective morality (values) or do you deny that those things exist? How do physical processes account for the laws of logic or mathematics or aesthetics? Can you point me to the evidence that shows these things have been accounted for by strictly physical processes?
Let's assume that there are lots of things which "natural physical processes" cannot account for. Do you have an example of an alternative which does account for something? That account ought to tell us what happened, when and where, and what properties of the agent(s) result in such-and-such being the way it is (rather than otherwise). As far as evidence, I will not insist on evidence for your alternative. But it might be interesting to see what sort of thing might count as evidence, whether for or against it.
But isn't that the point to begin with? If there are things that natural processes can't account for, then by showing that, ID has done considerable work. Now you would have to go back to the drawing board to come up with a theory that would account for things that could not have come about by natural processes. Why would I then also have to give a description of what that thing would be and how they accomplished it in order for the underlying fact that the thing was designed to be true? That is demanding of ID what you reprimand critics of naturalistic evolution for demanding of that theory. There are lots of things that are unknown even if naturalistic evolution is true and, just like Joe said, that certainly doesn't make it not true. In the same sense, if there were/are lots of things about design theory that were/are unknown, that alone doesn't make it false. I would contend that evidence for design would be the same things that Dembski and Nagel talk about in their books: if there can be shown certain things exist that could not have come about by strictly physical processes then that would be evidence that they were designed.

Aaron Marshall · 19 April 2013

Henry J said: So in other words, what Aaron needs to do is get some actual textbooks on the subject and read those.
Which ones do you recommend?

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 19 April 2013

Not the first reason why IDiocy should even be considered.

Just the dishonest talking points of the wooists.

Like we couldn't foresee that. Or recognize who the troll is.

Glen Davidson

phhht · 19 April 2013

phhht said:
Aaron Marshall said: You are telling me that you cannot distinguish the designed from the undesigned?
No, I'm telling you that NOBODY CAN BUILD A DESIGN DETECTOR.
I can certainly tell things are designed.
Excellent. How?
...what would it take for you to think that something in nature was designed?
A working design detector. It should be able to measure the relative designedness of a rock and a pocket watch.
Aaron Marshall said: So you are saying that there is no way to tell if things are designed or not?
I'll tell you what I've told you over and over: there is no objective empirical method to detect design.
That we can detect that some things were designed seems just common sense to me.
Excellent. Tell us how to do it.
So why wouldn't Dembski's explanatory filter work as a design detector?
One reason it wouldn't work is that design is not an objective property of reality, but is instead an illusion.
Take your rock example. We ask the question whether it was contingent and the answer seems obviously so (it didn't have to be here). So then we ask if it is complex? In this sense, is it improbable that we have the rock absent just chance? Seemingly not, and thus I would say that chance could have produced it and thus it was not designed.
In other words, that's your unsupported opinion. You have no objective test to determine whether it is designed or not.
You could disagree on that step and say it is complex and thus move on to the last step where we ask if it is specified, indicating an independently given pattern (not just randomness) and that would clearly not be the case with the rock. So the rock would have two places to be kicked out of the "designed" category.
But you cannot measure "complexity." All you can do is to assert that something looks complex to you. Or not. You have no objective measure of "specifiedness." Nobody can say how to measure or detect either of those purported properties.
Now run the same test with the watch.
Watch, rock, Abelian sandpile simulation, genome: THERE IS NO OBJECTIVE, TESTABLE, RELIABLE WAY TO DETECT DESIGN, much less measure its magnitude.
What do you mean that the design detector would have to measure the relative designedness of an object? What if it can't say more than what I have said, that it was designed or it wasn't designed? Isn't that enough?
It's plenty for me - if it works objectively, empirically, and does not depend on the opinions of observers.

Henry J · 19 April 2013

To figure out how to distinguish between designed and non-designed, you must look at borderline cases, not extremes where the answer is already known, like cars.

phhht · 19 April 2013

Aaron Marshall said: If there are things that natural processes can't account for, then by showing that, ID has done considerable work.
Nonsense. Reality is mostly composed of things that no one can explain. So what? The inexplicable does not entail the supernatural.
Now you would have to go back to the drawing board to come up with a theory that would account for things that could not have come about by natural processes.
As far as anybody knows, THERE ARE NO THINGS THAT COULD NOT HAVE NATURAL CAUSES. There is only the known and the unknown, and the existence of the unknown DOES NOT IMPLY THAT GODS DID IT.
Why would I then also have to give a description of what that thing would be and how they accomplished it in order for the underlying fact that the thing was designed to be true?
Because there is no other way of demonstrating the truth.
There are lots of things that are unknown even if naturalistic evolution is true and just like Joe said, that certainly doesn't make it not true. In the same sense if there were/are lots of things about design theory that were/are unknown that alone doesn't make it false.
Correct. But to show that your purported design is true, you must have objective, empirical evidence for its existence. There is tons of such evidence for ToE. There is not one solitary bit of such evidence for design.
I would contend that evidence for design would be the same things that Dembski and Nagel talk about in their books: if there can be shown certain things exist that could not have come about by strictly physical processes then that would be evidence that they were designed.
Yes, IF. The trouble is that nobody CAN show the existence of ANYTHING that has come about by non-physical processes. And NO: EVEN IF anyone could show such a thing, it would mean nothing for design unless it provided objective, empirical evidence for design. So Aaron, why don't you drop your threadbare mask and your pathetic design disguise and tell the truth. You believe in design because you think it constitutes evidence for the existence of gods. You're a crypto-christian. But there are no gods, Aaron. Just like design, there is not the slightest bit of unambiguous empirical evidence for their existence. Just like design, they are illusions.

SWT · 19 April 2013

Aaron Marshall said:
Henry J said: So in other words, what Aaron needs to do is get some actual textbooks on the subject and read those.
Which ones do you recommend?
Aaron, I'll give you something I hope you'll find thought-provoking that's less than book length -- an article about evolutionary development of hardware. As you read it, keep in mind that the program for the 10x10 FPGA used in the experiment reported in the article probably had at least 2^500 possibilities* and ponder what the implications of this experiment are for the power of random mutation + natural selection.** We can chat here later on. I have to get back to grading design projects ...

____

Footnotes:

* For those of you wondering how I came up with this number: I did a little reading on FPGAs, and it seems common that each cell in the FPGA takes an input from four neighboring cells, and uses a lookup table to establish the output from the cell. Thus, each lookup table maps 16 possible inputs to 2 possible outputs. This means I need 5 bits to encode the logic of one cell's lookup table, or 500 bits to encode the lookup tables for all 100 cells. I think there's more you can do with the programming, so 500 bits might be a lowball estimate.

** Alert readers might notice that Dembski's probability bound is pretty close to 500 bits.
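The footnote's arithmetic can be checked in a couple of lines. The 5-bits-per-cell figure is the footnote's own conservative estimate (a full 16-entry lookup table would need 16 bits per cell, so the true configuration space is larger still):

```python
# Back-of-the-envelope size of the FPGA configuration space,
# using the footnote's conservative figures.
cells = 10 * 10              # the 10x10 FPGA from the article
bits_per_cell = 5            # the footnote's conservative encoding estimate
total_bits = cells * bits_per_cell
configurations = 2 ** total_bits

print(total_bits)                 # 500 bits, right at Dembski's bound
print(len(str(configurations)))   # 151 decimal digits, i.e. about 3 x 10^150
```

A search space of roughly 10^150 configurations is exactly the scale Dembski claims blind processes cannot traverse, yet the experiment's mutation-plus-selection loop found a working configuration anyway.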

TomS · 19 April 2013

Aaron Marshall said:
TomS said:
Aaron Marshall said: Are you saying that natural physical processes can account for everything in nature? Can you tell me how those physical processes can account for cognition, reason and objective morality (values) or do you deny that those things exist? How do physical processes account for the laws of logic or mathematics or aesthetics? Can you point me to the evidence that shows these things have been accounted for by strictly physical processes?
Let's assume that there are lots of things which "natural physical processes" cannot account for. Do you have an example of an alternative which does account for something? That account ought to tell us what happened, when and where, and what properties of the agent(s) result in such-and-such being the way it is (rather than otherwise). As far as evidence, I will not insist on evidence for your alternative. But it might be interesting to see what sort of thing might count as evidence, whether for or against it.
But isn't that the point to begin with? If there are things that natural processes can't account for, then by showing that, ID has done considerable work. Now you would have to go back to the drawing board to come up with a theory that would account for things that could not have come about by natural processes. Why would I then also have to give a description of what that thing would be and how they accomplished it in order for the underlying fact that the thing was designed to be true? That is demanding of ID what you reprimand critics of naturalistic evolution for demanding of that theory. There are lots of things that are unknown even if naturalistic evolution is true and, just like Joe said, that certainly doesn't make it not true. In the same sense, if there were/are lots of things about design theory that were/are unknown, that alone doesn't make it false. I would contend that evidence for design would be the same things that Dembski and Nagel talk about in their books: if there can be shown certain things exist that could not have come about by strictly physical processes then that would be evidence that they were designed.
If it can be shown certain things exist that could not have come about by strictly physical processes then that would be evidence that ... they could not come about by strictly physical processes. No more, no less. As long as nobody has even given a description of what "intelligent design" is, it is idle to say what would count as evidence for it. We can make guesses, based on ordinary understanding of what design accounts for, but it is well known that any attempts to do that are met with the response that "that is not what I mean by intelligent design". For example, we know that things like centaurs, flying carpets, and Penrose triangles are intelligently designed. Yet they do not exist. After all, there is a difference between design and manufacture (and, indeed, between design and creation). That means that intelligent design alone does not account for the existence of something. But, of course, "that is not what I mean by intelligent design". The advocates of "intelligent design" make a point of not telling us what happened, when and where. They do not tell us what materials and methods are used by ID. They do not tell us about the limitations on the "intelligent designers" that lead to things turning out the way they are, rather than something else. They do not tell us the motives of the "intelligent designers". "Why do the intelligent designers give eyes to predators and eyes to their prey?" "Why give flagella to bacteria and an immune system to their hosts?" "Why design humans so similar to chimps and other apes - are we supposed to act like apes?" "Why design the world of life with the appearance of having an evolutionary history of many millions of years - and design human intelligence so that it comes to that conclusion?"

diogeneslamp0 · 19 April 2013

Aaron,

this thread is about Dembski's claim that he has proven a so-called Law of Conservation of CSI (LCCSI).

Before you showed up there was the OP and at least 8 pages of comments, including links and references, pointing out MULTIPLE mathematical fallacies in Dembski's LCCSI. These were pretty specific, and in ten years neither Dembski nor any other ID proponent has ever responded to these multiple criticisms from real scientists, except to say that the authors are ignorant (without showing how they are), or that the criticisms are straw men (without explaining ANYTHING in the content of the criticisms that is a misunderstanding of his "proof"), or that the criticisms are out of date (without explaining what change restored the validity of his LCCSI).

We recommend you re-read the OP, and then re-read say the first 8 pages of comments, esp. the more mathematical comments from Joe, Mike Elzinga, myself, etc. At one point I compute the CSI in an ordinary grain of salt-- it's huge, but salt crystals are not intelligently designed. Go back and read that and think about the consequences of that for Dembski's competence and integrity.

So his Law of Conservation of CSI is dead, dead, dead.

Now we ask you, Aaron: what conclusions must you draw from this?

1. Dembski has no integrity. We know from painful experience he will lie about ANYTHING. He claims he has "proven" an LCCSI while ignoring its OBVIOUS, OBVIOUS logical fallacies. That's lying, not a difference of opinion.

We have seen Dembski lying about many, many topics; we could provide you with links upon request. Do you want such a list?

We criticize Dembski pretty harshly because all of us have seen him lying over and over again, and his audaciousness irks us.

2. Since the LCCSI is dead, that means natural processes like evolution can increase CSI. So it doesn't matter how much "CSI" is in living things-- no quantity of CSI is evidence that any gene, any protein, any molecule was intelligently designed.

I repeat: a few pages back, in a comment, I computed the CSI in an ordinary grain of salt-- it's huge, but salt crystals are not intelligently designed. What do you think that means for inferring intelligent design?
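A toy version of that salt calculation might look like the following. The numbers here are illustrative order-of-magnitude choices, not the exact figures from the earlier comment:

```python
# Toy version of the grain-of-salt calculation.  Treat each lattice
# site in a small grain as a binary choice (Na+ or Cl-): one specific
# perfect alternating arrangement out of 2**N possibilities carries
# "specified information" of -log2(2**-N) = N bits in Dembski's sense.
sites = 10 ** 18          # rough order of magnitude of ions in a small grain
csi_bits = sites          # N bits for one arrangement out of 2**N

print(csi_bits > 500)     # True: this dwarfs Dembski's 500-bit bound,
                          # yet crystallization is ordinary chemistry
```

By the CSI bookkeeping, an undesigned crystal blows past the design threshold, which is the point: the quantity does not discriminate design from ordinary physics.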

Lastly, you have noted that some name-calling has been directed at you. I'm sorry about that, but the ID movement has NOTHING BUT name-calling. Just for today's output, you can read today's post from David Klinghoffer at the Discovery Institute website. Klinghoffer is a non-scientist employed by the Discovery Institute for the sole purpose of making ad hominem attacks.

In today's post, Klinghoffer comes up with "some midget of a Darwinian". What will the geniuses of ID call us next-- Pygmies? Negroes? Ad hominem is all they've got.

stevaroni · 19 April 2013

Aaron Marshall said: * Does Dembski's 2002 law of conservation of CSI work? Does it show that CSI cannot be put into a genome by natural selection? If so, how? How do you deal with the problem that the specification has to remain the same, to judge whether high fitness can be achieved by evolutionary processes, when in Dembski's argument he can only argue for conservation by changing the specification between generations?
Who knows? First of all, exactly what is CSI? You may have just read Dembski's book, but the rest of us have been watching him for a decade. In all this time he has never defined CSI as anything except an arbitrary upper bound on a probability calculation. If I'm wrong about this then by all means, please tell me.

Now, tell me how to measure CSI. Again, it's been a decade and we don't have an answer. Then tell me how to even detect CSI in the first place. All we have here is "I can look at a car and see it's designed". Great. Now tell me how to apply that to an isolated strand of DNA where we know it could have been damaged on extraction, we know it might be incomplete or contain extraneous material, we know it may have mutated loci and not accurately represent the actual functional germ line, and we even know that we might be reading the damn thing backwards.

CSI is mostly philosophy, not math. It's mostly "a complicated animal obviously needs a complicated blueprint". Great. Maybe it does. Who knows? But that's hardly rigorous math. It's not exactly F=MA we're talking about here.

Then there's the issue of whether CSI needs to be "conserved". I sure don't know how to tell. Tell me how to take a poorly-defined, de-novo property which cannot be measured, run it billions of times through a transform which materially changes it on every cycle, and then determine if any of this mystery quantity was lost without knowing anything about the initial conditions. And that's CSI and "conservation of information" in a nutshell.

What makes the whole thing particularly frustrating is that there doesn't seem to be any need for all the magic-wand waving in the first place. The propagation of information in a lossy transmission medium is actually a very well understood science, because reliably and efficiently storing and moving information around with minimal bandwidth is enormously important to the world economy and huge amounts of money ride on making it work.
The seminal work in this field was done by a man named Claude Shannon in the 1930s and early 1940s to support transmission and recovery of digital information in the face of imperfect transmission systems. There is a huge body of work in the field that's been field (and battle) tested and known to be correct. It's a big enough field that it has its own technical terms, its own math, and its own conferences full of experts who are paid an enormous amount of money to get it right. There are, incidentally, several people who regularly post on this blog and work in this field and really understand it. I design digital communication systems for a living, and deal with this every day, and still, when I make a mistake here someone will correct my equations within hours (I'm lookin' at you, Elzinga). And much of this knowledge base applies directly to DNA, which is, after all, a digital storage medium plagued with bandwidth limits and imperfect transmission. Much of this knowledge base neatly overlays Dembski's work. And yet Dembski totally ignores it and makes up his own terms which he steadfastly refuses to define, and uses his own math which nobody else is able to duplicate. So does "conservation of CSI" work? Who knows? We don't even know what it is.
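Unlike CSI, the Shannon quantity described above is well defined and computable in a few lines. This is the standard textbook definition of entropy, not anything of Dembski's:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy of a sequence, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-letter alphabet carries the maximum 2 bits per base:
print(shannon_entropy("ACGT"))        # 2.0
# A highly repetitive sequence carries less:
print(shannon_entropy("AAAAAAAT"))    # well under 2 bits
```

Anyone can compute this, anyone can check it, and two people will always get the same answer for the same sequence, which is precisely what no one has ever managed with CSI.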

Mike Elzinga · 19 April 2013

Aaron Marshall said: I would contend that evidence for design would be the same things that Dembski and Nagel talk about in their books: if there can be shown certain things exist that could not have come about by strictly phycisal processes then that would be evidence that they were designed.
And I would contend that you have not obtained from Dembski ANY workable definition of ID that is calculable and objective. There is no consistent definition of CSI among ID/creationists; not even among Dembski's writings. Furthermore, there has never been an attempt on the part of any ID/creationist to actually calculate anything and demonstrate that their numerology can pick out design from non-design. They simply cannot do it, no matter how much they continue to bluff and bluster. And we don't ever learn from any of these people what the purpose of "endogenous information," "exogenous information," and their difference, "active information," adds to our understanding of anything. It is just larded-on jargon attempting to make trivial ideas look impressive. But it doesn't matter what Dembski, et al. want to call their calculations; they are irrelevant to anything that actually goes on in nature. If they want to see the Virgin Mary on a piece of toast and calculate whatever "information" they think applies, the bottom line is that they are simply using pseudo-math to make what they claim to see appear to be "scientific" and "objective."

DS · 19 April 2013

Aaron Marshall said:
TomS said:
Aaron Marshall said: Are you saying that natural physical processes can account for everything in nature? Can you tell me how those physical processes can account for cognition, reason and objective morality (values) or do you deny that those things exist? How do physical processes account for the laws of logic or mathematics or aesthetics? Can you point me to the evidence that shows these things have been accounted for by strictly physical processes?
Let's assume that there are lots of things which "natural physical processes" cannot account for. Do you have an example of an alternative which does account for something? That account ought to tell us what happened, when and where, and what properties of the agent(s) result in such-and-such being the way it is (rather than otherwise). As far as evidence, I will not insist on evidence for your alternative. But it might be interesting to see what sort of thing might count as evidence, whether for or against it.
But isn't that the point to begin with? If there are things that natural processes can't account for, then by showing that, ID has done considerable work. Now you would have to go back to the drawing board to come up with a theory that would account for things that could not have come about by natural processes. Why would I then also have to give a description of what that thing would be and how they accomplished it in order for the underlying fact that the thing was designed to be true? That is asking of ID what you reprimand critics of naturalistic evolution for requiring of that theory. There are lots of things that are unknown even if naturalistic evolution is true and, just like Joe said, that certainly doesn't make it not true. In the same sense, if there were/are lots of things about design theory that were/are unknown, that alone doesn't make it false. I would contend that evidence for design would be the same things that Dembski and Nagel talk about in their books: if it can be shown that certain things exist that could not have come about by strictly physical processes, then that would be evidence that they were designed.
There are no living things that cannot be accounted for by evolution. NONE. No structures, no genes, no species, nothing. Dembski has not demonstrated that anything cannot be produced by natural processes, let alone living things. The burden of proof is entirely on you to provide a more explanatory and predictive explanation for what is observed in nature. In so doing you will have to account for ALL of the evidence, including all of the evidence for evolution. This appears to be something that you are either incapable of, or simply unwilling to do. Until you can do this, all of your bluff and bluster ring false.

Prem Isaac · 19 April 2013

Is there any feature of living things which cannot be accounted for by evolution? Whether or not an ID proponent is able to provide something better is irrelevant to the above question. The recent book by Thomas Nagel, titled "Mind And Cosmos", sets out Nagel's view that the laws of physics and chemistry alone cannot explain such features of human existence. Structures, genes, and species aren't the only entities whose origins require explanation. What do you say about things like Consciousness, the ability to Learn (Cognition), or the existence of Values (Moral values, for example)? How does evolution account for these things? What about the rationality of our own thought process? If undirected natural selection along with a supply of beneficial mutations is the only mechanism by which things are to be explained, how exactly does one justify the rationality of human thought, i.e. why think that our propositions about anything in the world happen to be true in an objective, i.e. mind-independent, way?

Evidence does not stand by itself but is interpreted via a theory which precedes it, be it Naturalism, or belief in a Designing Intelligence. So the debate cannot be merely about evidence but has to include a discussion and critique of the underlying beliefs of either side. At the outset, there is no a priori reason for rejecting the existence of a Designer - no slam dunk demonstration can be given to show there isn't one. Why think that Naturalism is true?

phhht · 19 April 2013

Prem Isaac said: Is there any feature of living things which cannot be accounted for by evolution?
Sure, lots of them. So what?
What do you say about things like Consciousness, the ability to Learn (Cognition), or the existence of Values (Moral values, for example)? How does evolution account for these things?
Perhaps the ToE cannot account for those things. So what?
What about the rationality of our own thought process? If undirected natural selection along with a supply of beneficial mutations is the only mechanism by which things are to be explained, how exactly does one justify the rationality of human thought, i.e. why think that our propositions about anything in the world happen to be true in an objective, i.e. mind-independent way?
Perhaps the ToE cannot account for those things. So what?
...there is no a priori reason for rejecting the existence of a Designer - no slam dunk demonstration can be given to show there isn't one. Why think that Naturalism is true?
Because there is not the slightest, slimmest reason to believe otherwise - apart from god-of-the-gaps arguments like yours.

DS · 19 April 2013

Prem Isaac said: Is there any feature of living things which cannot be accounted for by evolution? Whether or not an ID proponent is able to provide something better is irrelevant to the above question. The recent book by Thomas Nagel, titled "Mind And Cosmos", sets out Nagel's view that the laws of physics and chemistry alone cannot explain such features of human existence. Structures, genes, and species aren't the only entities whose origins require explanation. What do you say about things like Consciousness, the ability to Learn (Cognition), or the existence of Values (Moral values, for example)? How does evolution account for these things? What about the rationality of our own thought process? If undirected natural selection along with a supply of beneficial mutations is the only mechanism by which things are to be explained, how exactly does one justify the rationality of human thought, i.e. why think that our propositions about anything in the world happen to be true in an objective, i.e. mind-independent, way? Evidence does not stand by itself but is interpreted via a theory which precedes it, be it Naturalism, or belief in a Designing Intelligence. So the debate cannot be merely about evidence but has to include a discussion and critique of the underlying beliefs of either side. At the outset, there is no a priori reason for rejecting the existence of a Designer - no slam dunk demonstration can be given to show there isn't one. Why think that Naturalism is true?
Sorry no. Naturalism is not a theory. And evolution is a theory with immense predictive and explanatory power. Why would you want to replace it with something that explains exactly nothing? Why do you assume that consciousness and rational thought does not have any adaptive value? Why do you assume that it is not selected on? Why do you assume that it could not evolve? Why do you ignore all of the things that evolution has explained and grasp at straws in a desperate attempt to find something that is a little more difficult to explain? Why do you conflate methodological naturalism with philosophical naturalism? Do you deny that methodological naturalism has been wildly successful? Why would you want to deny this? Why are you so desperate to deny the reality of evolution?

Joe Felsenstein · 19 April 2013

Aaron Marshall said:
Joe Felsenstein said: ... So prove me wrong and seriously tell me what your reaction is to the points I made above. I'm the author of the original post here, so I hope you will give my questions some attention. I hope for a cordial exchange.
... I certainly don't know the answers to your questions. I am willing to read the posts and try to understand your objections to Dembski's work. I do want to hear both sides of the argument. I have read your post a couple of times but I am so far out of my league trying to catch up and understand what you are saying. I am not a Scientist. What do you mean that Dembski has to change the specification between generations? I reread the chapter where he deals with this and he says that the "Law of Conservation of Information tells us that when specified complexity is given over to natural causes, it either remains unchanged (in which case information is strictly conserved) or disintegrates (in which case information diminishes)". So my understanding of what he is saying is that natural causes (whether by chance or necessity or a combination) can at best preserve specified complexity or otherwise degrade it, but they can't generate this specified complexity. He doesn't say that natural causes in tandem with intelligence can't generate specified complexity. So he says that in reproduction, organisms transmit their specified complexity to the next generation, and you think that this is done through the mechanism of natural selection, but how could that process add information to the new organism? I don't want to just parrot back his arguments because I'm sure you have heard them before, but I am saying that it seems persuasive. I don't see in there where he talks about changing the specification between generations. So that is why I ask what you mean by that.
In his 2002 explanation in No Free Lunch on pages 153-154 (of the paperback printing of his first edition) he has two specifications T1 and T0, and argues that they have the same probability. They are different sets of genotypes, and hence different specifications. Since CSI is judged by whether this probability is less than the Universal Probability Bound, that shows they either both have CSI or both don't. However, it allows the two T's to be different specifications. If you want to ask whether natural selection can improve adaptation (as judged by where the organism is on a fitness scale before and after selection), then you have to use the same scale afterwards as you did before.
From what I have read in The Design Revolution it seems that he shows that natural processes could not account for the CSI in the genome. Now I don't know their Search for Search argument, but you seem to make a big deal about the fact that, by their own argument, design could only come in at the beginning. I see him agreeing that maybe that is a possibility.
Accepting all of the SFS argument, the design at the beginning is not in the organism, it is in the adaptive landscape's shape. After natural selection and other evolutionary forces act, it is now in the genome. Design has "come in" ... to the genome. They are arguing that the Designer has structured nature, in effect, so that natural selection actually works and the genome is further out on the fitness scale afterwards than it was before. I think that this structuring of nature could very well be just the ordinary properties of physics, but in any case the SFS argument does not show that CSI must be present in the genome beforehand if it is present in the genome afterwards.
Does the bottom line come down to the assumptions that we are making before we start to look at the data? Or do you think that is just a cop-out? I say this because you said, "If you're not serious you'll spend a lot of time saying that if it looks designed it is designed. If you're not serious you'll ask us to provide every detail of what evolution did." Are you saying that is what Dembski's arguments amount to? I agree that you could never "prove" every detail of what evolution did, but isn't that exactly the gist of these questions regarding that if I see evidence of Intelligent Design then I have to give an account of who the designer is and these things about him/it? What if that cannot be done (and I am not saying that, but for the sake of argument)? Would that prove that there is no design in nature? Isn't asking an ID proponent to answer every question of how something was designed before you will recognize that it is designed following down the same path that you are chiding me for going down? Now, to be fair, you didn't make that argument and you may not agree with it, but that has been in multiple posts towards me so I just wondered what you thought of that? Thanks.
I mentioned these "non-serious" assertions because you had used them, not William Dembski. I'd be impressed if someone could just show me an argument that William Dembski's Design Inference does in fact show that when we see CSI, we then know that natural selection could not have put it into the genome.
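For what it's worth, the universal-probability-bound test that the Design Inference rests on is mechanical once a chance hypothesis is fixed. A hypothetical sketch, assuming pure random typing over a 4-letter alphabet as the null model (the function name and the example numbers are mine, not Dembski's published math):

```python
import math

UPB_BITS = 500  # Dembski's universal probability bound: 10^-150, about 2^-500

def exhibits_csi(target_size, genome_length, alphabet=4):
    """Under a uniform random-typing null model, does hitting the specified
    set (target_size genotypes out of alphabet**genome_length possibilities)
    fall below the universal probability bound?
    Computed in log space to avoid floating-point underflow."""
    log2_p = math.log2(target_size) - genome_length * math.log2(alphabet)
    return log2_p < -UPB_BITS

# A specified set of a million genotypes in a 1000-base genome:
# P = 1e6 / 4^1000, astronomically below the bound.
print(exhibits_csi(10**6, 1000))  # True
# A 100-base genome: 4^100 is about 2^200, above the 2^-500 bound.
print(exhibits_csi(10**6, 100))   # False
```

The check is easy; the disputed step is elsewhere, namely whether selection can be ruled out as the process that put the population into the specified set.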

stevaroni · 19 April 2013

Prem Isaac said: Is there any feature of living things which cannot be accounted for by evolution?
Eh. Who knows? There might be some feature of some organism that cannot be accounted for by natural causes, but one has never been documented, so at this point the onus is on those who claim such a feature exists. Why? Because it's 2013 and the track record for demonstrated supernatural phenomena is currently zero. In the entire history of man, after 5000 or so years of searching, there has never been a phenomenon that has been moved from the "nature did it" column to the "supernatural" column, though many thousands of things have moved the other way. Mother nature, simply put, has earned her position of default explanation when we get to "don't know how", because that explanation has always been correct and nothing outside of nature has ever been shown to be involved. That's a batting average of 1.000.

Prem Isaac · 19 April 2013

Prem Isaac said: Is there any feature of living things which cannot be accounted for by evolution?
stevaroni said: Eh. Who knows? There might be some feature of some organism that cannot be accounted for by natural causes, but one has never been documented, so at this point the onus is on those who claim such a feature exists. Why? Because it's 2013 and the track record for demonstrated supernatural phenomena is currently zero. In the entire history of man, after 5000 or so years of searching, there has never been a phenomenon that has been moved from the "nature did it" column to the "supernatural" column, though many thousands of things have moved the other way. Mother nature, simply put, has earned her position of default explanation when we get to "don't know how", because that explanation has always been correct and nothing outside of nature has ever been shown to be involved. That's a batting average of 1.000.
Stevaroni, thanks for your response. In my post, I was being specific: I mentioned 3 things: Consciousness, Cognition(the learning process) and Values(like Moral Values). Are you asserting these things do NOT exist? or are you asserting that these things can be explained by Naturalism/Evolution? And, even if "nature did it", where did nature come from, and why does it work to produce human life? Leaving God or the supernatural out of the picture, what is the explanation for these 3 things?

stevaroni · 19 April 2013

What's with all these hit and run concern trolls every time there's a Dembski thread?

They all follow the same pattern. They drop in, repeat the exact same CSI talking points and then vanish after 5 posts.

And it's always the exact same uninformed noob questions and concern troll position.

Is there any way to figure out if they're coming from the same ISP?

I can't help but notice that they follow a familiar pattern, swarming a couple of times a year, often around "Byers season".

I'm beginning to think that there's a class once a semester at Baylor that uses Dembski's books and one of the assignments is to go onto this evolution blog and ask 5 "probing" questions of the heathens when we disparage Wild Bill.

phhht · 19 April 2013

Prem Isaac said: Leaving God or the supernatural out of the picture, what is the explanation for these 3 things?
Suppose we NEVER have an explanation for those things. SO FUCKING WHAT? THAT WILL NOT MEAN THAT GODDIDIT!

Prem Isaac · 19 April 2013

phhht said: What about the rationality of our own thought process? If undirected natural selection along with a supply of beneficial mutations is the only mechanism by which things are to be explained, how exactly does one justify the rationality of human thought, i.e. why think that our propositions about anything in the world happen to be true in an objective, i.e. mind-independent way?
Perhaps the ToE cannot account for those things. So what?
...there is no a priori reason for rejecting the existence of a Designer - no slam dunk demonstration can be given to show there isn't one. Why think that Naturalism is true?
Because there is not the slightest, slimmest reason to believe otherwise - apart from god-of-the-gaps arguments like yours. Well I haven't mentioned God yet. I just said Designer, since ID proponents do NOT claim that the methods they employ will ever reveal the identity of the Designer. The difference between a God-of-the-gaps argument and the inference to a Designer is simple: In a God-of-the-gaps argument, the notion of "God" is used to fill in gaps in knowledge - there is no real demonstration made to show that it has to be God who is responsible. So I am with you there - however, the Inference to a Designer is not such an argument. Rather it is arrived at by a process of elimination: ID proponents are saying that there are only 3 options: Randomness (Chance), Necessity (something deterministic about the laws of physics/chemistry) and Design (intentional product of a mind). So if Randomness and Necessity are not able to explain features of the world, the only other option is Design. This is not God-of-the-Gaps.

DS · 19 April 2013

Prem Isaac said: Stevaroni, thanks for your response. In my post, I was being specific: I mentioned 3 things: Consciousness, Cognition(the learning process) and Values(like Moral Values). Are you asserting these things do NOT exist? or are you asserting that these things can be explained by Naturalism/Evolution? And, even if "nature did it", where did nature come from, and why does it work to produce human life? Leaving God or the supernatural out of the picture, what is the explanation for these 3 things?
Evolution can account for these things. Nature came from the natural world. Humans evolved because that is what happened; nothing and nobody worked to produce it. Leaving god and the supernatural out of these things leaves you with completely natural explanations, just like for gravity, lightning, and every other natural phenomenon. Deal with it.

stevaroni · 19 April 2013

Prem Isaac said: Stevaroni, thanks for your response. In my post, I was being specific: I mentioned 3 things: Consciousness, Cognition(the learning process) and Values(like Moral Values). Are you asserting these things do NOT exist? or are you asserting that these things can be explained by Naturalism/Evolution? And, even if "nature did it", where did nature come from, and why does it work to produce human life? Leaving God or the supernatural out of the picture, what is the explanation for these 3 things?
I am asserting that nobody knows, because we don't fully understand the phenomenon yet. I am also asserting that there is no reason to believe anything but nature is involved. I make this second assertion based on the fact that nothing but nature has ever been found to be at work anywhere we've explored, and given that the supernatural has been involved exactly zero times in everything that is understood, there's no reason at all to believe it is powering the increasingly shrinking pool of things we don't understand yet. Again, the supernatural was the default explanation for almost everything throughout most of human history, but upon examination every supposedly "supernatural" phenomenon was moved to the "Mother nature" column once we measured it enough. No phenomenon, event, or object has ever gone from "natural" to "supernatural". Ever. The supernatural has never put a single point on the board. Aside from the fervent wishes of a few religious denominations, there's really no reason to expect that this is going to change anytime soon.

phhht · 19 April 2013

Prem Isaac said: ID proponents are saying, that there are only 3 options: Randomness(Chance), Necessity(something deterministic about the laws of physics/chemistry) and Design(intentional product of a mind). So if Randomness and Necessity are not able to explain features of the world, the only other option is Design. This is not God-of-the-Gaps.
There is a fourth option: WE JUST DO NOT KNOW. To claim that the "only other option" is design is to commit a god-of-the-gaps fallacy. The unknown does not entail a designer.

stevaroni · 19 April 2013

Prem Isaac said: I mentioned 3 things: Consciousness, Cognition(the learning process) and Values(like Moral Values).
And by the way, of the three, the last one, "Moral Values", is by far the weakest talking point. Every society has its set of "obvious" moral values. They are all different. There is no "one" set of morals - they vary widely, often in ways that directly offend other groups. The Spartans killed handicapped babies. The tribes of Israel slew the children of vanquished enemies to eliminate their bloodlines. The colonial Americans owned slaves. In certain places in New Guinea, if you killed a man in battle, it was considered your duty to eat him lest his spirit be lost, take his wife into your household, and, if necessary, impregnate her till she bore a son so she would have a male heir to care for her in her old age. Do any of those strike you as the "right" thing to do? Moral values appear to be partly preprogrammed instinct, partly simple constructs of society, that keep things flowing and keep the group together. They seem, really, no more mysterious than traffic laws.

Prem Isaac · 19 April 2013

phhht said:
Prem Isaac said: ID proponents are saying, that there are only 3 options: Randomness(Chance), Necessity(something deterministic about the laws of physics/chemistry) and Design(intentional product of a mind). So if Randomness and Necessity are not able to explain features of the world, the only other option is Design. This is not God-of-the-Gaps.
There is a fourth option: WE JUST DO NOT KNOW. To claim that the "only other option" is design is to commit a god-of-the-gaps fallacy. The unknown does not entail a designer.

stevaroni · 19 April 2013

phhht said:
Prem Isaac said: ID proponents are saying, that there are only 3 options: Randomness(Chance), Necessity(something deterministic about the laws of physics/chemistry) and Design(intentional product of a mind). So if Randomness and Necessity are not able to explain features of the world, the only other option is Design. This is not God-of-the-Gaps.
There is a fourth option: WE JUST DO NOT KNOW. To claim that the “only other option” is design is to commit a god-of-the-gaps fallacy.
The difference, of course, is that of these four options, three of them (Randomness, Necessity, and Don't Know yet) have a track record of being true. The fourth option (let's be honest, God), has simply never been shown to be at work. Anywhere. Ever. In the entire recorded history of man. The far side of the moon was totally unknown until 1967. There was plenty of reason to believe we'd find all sorts of unknown things there, but that doesn't mean it was realistic to believe we'd find God's summer house.

Joe Felsenstein · 19 April 2013

Um, I am the "owner" of this thread. Could we please take the naturalism god-of-the-gaps stuff somewhere else? It has nothing much to do with Dembski's CSI arguments. I know you all had a hard week at work and need to "vent", but ...

(Oh yes, and in case anyone wonders whether I have noticed the Winston Ewert reply at ENV to my post, I did notice it about 3 days after it was posted, and am working on a reply.)

phhht · 19 April 2013

I'll follow the trolls wherever you put us.
Joe Felsenstein said: Um, I am the "owner" of this thread. Could we please take the naturalism god-of-the-gaps stuff somewhere else? It has nothing much to do with Dembski's CSI arguments. I know you all had a hard week at work and need to "vent", but ... (Oh yes, and in case anyone wonders whether I have noticed the Winston Ewert reply at ENV to my post, I did notice it about 3 days after it was posted, and am working on a reply.)

stevaroni · 19 April 2013

Actually, Joe, I suspect that our new trolls have clocked their 5 extra-credit posts and will soon bother us no more.

You can't Gish Gallop on a blog like this, and being constantly reminded that your emperor has no clothes while you try to convince readers he's just upgraded his ermine-edged robes rapidly loses its fun for these types.

I suspect that "Aaron Marshall" and "Prem Isaac" will soon enough disappear.

diogeneslamp0 · 19 April 2013

Stevaroni is right about the concern trolls.
stevaroni said: What's with all these hit and run concern trolls every time there's a Dembski thread? They all follow the same pattern. They drop in, repeat the exact same CSI talking points and then vanish after 5 posts. And it's always the exact same uninformed noob questions and concern troll position. ...I'm beginning to think that there's a class once a semester at Baylor that uses Dembski's books and one of the assignments is to go onto this evolution blog and ask 5 "probing" questions of the heathens when we disparage Wild Bill.
This is exactly correct. This is from Dembski's Teaching page.
William Dembski wrote: AP410 This is the undergrad course. You have three things to do: (1) take the final exam (worth 40% of your grade); (2) write a 3,000-word essay on the theological significance of intelligent design (worth 40% of your grade); (3) provide at least 10 posts defending ID that you’ve made on “hostile” websites, the posts totalling 2,000 words, along with the URLs (i.e., web links) to each post (worth 20% of your grade). AP510 This is the masters course. You have four things to do: (1) take the final exam (worth 30% of your grade); (2) write a 1,500- to 2,000-word critical review of Francis Collins’s The Language of God -- for instructions, see below (20% of your grade); (3) write a 3,000-word essay on the theological significance of intelligent design (worth 30% of your grade); (4) provide at least 10 posts defending ID that you’ve made on “hostile” websites, the posts totalling 3,000 words, along with the URLs (i.e., web links) to each post (worth 20% of your grade). [Dembski's "Teaching page"; Dead link: http://www.designinference.com/teaching/teaching.htm; Wayback machine Archive Aug. 5, 2012]
The link's dead but you can still get it from the Wayback machine. So I agree that some of these concern trolls are Dembski's students who are REQUIRED to lie on the internet. Like Aaron Marshall lying about being an ordinary guy who just "happens" to read one of Dembski's shit books. Aaron Marshall's posts were particularly creepy. When he uses Dembski's bizarre idioms, like when he writes "live options" -- the kind of pompous newspeak only Dembski would use -- it creeps me out. What the fuck is the difference between a "live option" and a just plain "option"? I've never heard anyone but Dembski express himself with this philosophical-gobbledygook jargon. Little Stepford creationists incapable of independent thought. We have to have a policy: all concern trolls are required to identify themselves as Dembski's students. If they want to lie, let's force them to lie and get it on the record.

diogeneslamp0 · 19 April 2013

To recap: Dembski's students are required to do 10 posts totalling 3,000 words. However, it's not clear they all need to be on the same blog. But we can keep count of how many posts the concern trolls make, so that gives an upper limit on how much of their phoniness we have to put up with.

Joe Felsenstein · 19 April 2013

We don't know that Dembski has this policy these days; or what courses he is teaching.

The issue of whether these folks are deliberately trolling can be tested by asking Aaron Marshall to slowly, carefully discuss one issue (as I have asked him for CSI/Design). If someone is willing to do that, then it is less likely that they are acting as trolls. Particularly if they stay on-topic.

I'm happy to have that discussion (though I am mostly busy this weekend).

As for where I am going to move the naturalism/God discussion, maybe you folks should put it on the Mathematics thread, as that seems not to be being policed. Anyway if it doesn't go away by itself I will move it to the BW, and you wouldn't want that, would you? I refuse to do the move to another active thread myself. Troll-chasers should have to exert some extra effort.

DS · 20 April 2013

It seems likely that the trolls are Dembski students. Funny, then, that they seem desperate to discuss anything but ID. Not one of them was able to address the issues raised at the start of this thread. Each of them tried desperately to change the subject. One even demanded evidence, promised to look at the evidence, asked if he could ask questions about the evidence, then took a powder. Yeah, Dembski should be really proud of these christless soldiers.

If any of the trolls want to discuss macroevolution or naturalism, I will be more than happy to meet them on the bathroom wall. That's where they belong. Can we get extra credit Joe?

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 20 April 2013

Aaron Marshall said: Joe - I hope I have made myself clear about who I am and what I was hoping to accomplish with my questions. I have read Dembski's book and Nagel's and they seemed persuasive to me.
Why? Do you understand epistemology, do you understand the assumptions that both Nagel and Dembski bring to their work? And do you understand why few who really care about science and truth are impressed by either one? If not, you have much to learn.
So now I was hoping to dialogue with people who disagreed and understand where the conflict lies.
Why don't you want to go learn the bases for critiquing pseudoscience, rather than to a forum with its considerable limits on ability to teach?
I am using the language that Dembski uses because that was what I read that I found persuasive.
That is a partisan reason, not an intellectually honest one.
I want to answer your questions and have a back and forth but I don't know enough to say anything intelligent about your questions except to repeat what I have read in his books (which everyone here says is a bunch of bunk).
That's why you can't start with pseudoscience as your basis for thought, or even for discussion. How are we to discuss with you what you don't begin to know?
I reread the chapter where he deals with this and he says that the "Law of Conservation of Information tells us that when specified complexity is given over to natural causes, it either remains unchanged (in which case information is strictly conserved) or disintegrates (in which case information diminishes)". So my understanding of what he is saying is that natural causes (whether by chance or necessity or a combination) can at best preserve specified complexity or otherwise degrade it, but they can't generate this specified complexity.
It's easy to see why he wants this to be true, impossible to see that he has any data for this, or for his "law."
He doesn't say that natural causes in tandem with intelligence can't generate specified complexity.
Why are you pretending that intelligence isn't a "natural cause"? Just because Dembski pretends that it isn't?
So he says that in reproduction, organisms transmit their specified complexity to the next generation and you think that this is done through the mechanism of natural selection but how could that process add information to the new organism?
By mutation and natural selection. As in Lenski's experiments.
I don't want to just parrot back his arguments because I'm sure you have heard them before, but I am saying that it seems persuasive.
But his claims don't begin with reliable scientific information. Can you see how that's a problem in making claims about science?
From what I have read in The Design Revolution it seems that he shows that natural processes could not account for the CSI in the genome.
It would seem that he does not. More importantly, he doesn't in the slightest do what real science does, make predictions based on theory and test them by the evidence. Evolution predicts the patterns of inheritance and change that we see, design does not. It's that simple, really, and if anyone wants then to claim that evolutionary mechanisms aren't adequate, then at least that person has to explain what evolution explains. Dembski doesn't even try, leaving a gaping wound where once there was explanation and completion. You simply can't ask people to give up explanation due to some made-up "law of conservation of information," or what-not.
If design came in only at the beginning (in fact, isn't that what Nagel is essentially arguing?)
No. You read two flakes, and you don't even get the secular one right. Nagel's arguing teleology, but not "design," and not necessarily only at the beginning. He can make up useless "causes" as well as any theist can.
there would still be design there and then the question would be how could strictly physical processes front load evolution with design?
They can't, no way would the information transmit accurately.
Does the bottom line come down to the assumptions that we are making before we start to look at the data?
Yes, you have to begin with "assumptions" that have proven out over time. Appeal to magic never has.
I agree that you could never "prove" every detail of what evolution did, but isn't that exactly the gist of these questions: that if I see evidence of Intelligent Design, then I have to give an account of who the designer is and these things about him/it?
Of course it isn't the same. Evolution explains the patterns found in life, hence it has enormous evidentiary value. You have no evidence for a designer. Evolution matches up cause and effect to say that 'these evolutionary causes predict these patterns via their limits' and then finds the effects of those causes in the patterns. You just say that life was designed, not by a designer having certain proclivities, like real designers (humans) do, which could then be matched up with actual effects in purported "design."
What if that cannot be done (and I am not saying that, but for the sake of argument)? Would that prove that there is no design in nature?
No, there's always that chance. You need to provide evidence that there is.
Isn't asking for an ID proponent to answer every question of how something was designed before you will recognize that it is designed following down the same path that you are chiding me for going down?
No, because that's a false characterization of the matter. We're asking for you to answer some questions, to actually have some sort of prediction of design effects that we can find, rather than trying to fit your designer into supposed gaps. Glen Davidson

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 20 April 2013

Aaron Marshall said: But isn't that the point to begin with? If there are things that natural processes can't account for, then by showing that, ID has done considerable work. Now you would have to go back to the drawing board to come up with a theory that would account for things that could not have come about by natural processes.
And how could you come up with a theory that would account for things that could not have come about by natural processes? We don't know anything but "natural processes," if that is the sort of language you're using (I don't like the "naturalist" language, but I'll put up with it). You need at least a plausible cause for any meaningful theory, at least in classical physics, and we have no "non-natural" plausible cause at all.
Why would I then also have to give a description of what that thing would be and how they accomplished it in order for the underlying fact that the thing was designed to be true?
Because that is needed in order for the purported "cause" to be matched up with any "effects" at all.
That is asking of ID what you reprimand naturalistic evolutionary critics of requiring of that theory.
No, you know nothing about these matters. We have known causes (such as reproduction) for many confirmatory evolution effects, you have no known cause, nothing that can explain specific effects. That's what's required for knowledge.
There are lots of things that are unknown even if naturalistic evolution is true and just like Joe said, that certainly doesn't make it not true. In the same sense if there were/are lots of things about design theory that were/are unknown that alone doesn't make it false.
All things about "design theory" as touted by Dembski et al. are unknown. Paley's ideas can be construed as a model where cause and effect are reasonably observable, and it fails. That's why today's IDist waves his hand, conjuring up unknown causes that supposedly cause effects. But even if it were true we could not then know it to be true, for we could not match up cause and effect in such a case.
I would contend that evidence for design would be the same things that Dembski and Nagel talk about in their books: if it can be shown that certain things exist that could not have come about by strictly physical processes, then that would be evidence that they were designed.
You mischaracterize Nagel's position, but, worse, you implicitly mischaracterize intelligence when you act as though it were "non-physical." We have no problem with intelligent causation, as it's as "natural" as any evolutionary mechanism is, hence any resort to intelligence has to be to physical processes, unless and until you manage to give us evidence for "non-physical intelligence." Indeed, if intelligence made life, then we need the evidence for its evolution. Glen Davidson

diogeneslamp0 · 20 April 2013

I don't think there's much point challenging the troll Aaron Marshall. Dembski's students are required to make 10 posts and he made 12, so he fulfilled Dembski's course requirements.

Troll "Prem Isaac" made 4 posts so we might see him again.

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 20 April 2013

diogeneslamp0 said: I don't think there's much point challenging the troll Aaron Marshall. Dembski's students are required to make 10 posts and he made 12, so he fulfilled Dembski's course requirements. Troll "Prem Isaac" made 4 posts so we might see him again.
But, aren't they doggedly pursuing truth without regard to presuppositions? Oh, right, that's a vanishingly rare trait in anyone who has only read one side and wants to "discuss" these matters on a forum, rather than learning what epistemology is, and how it is followed in science, including evolutionary science. Much better to ask "questions" that already assume the truth of the presumptions of creationism/ID. Glen Davidson

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 20 April 2013

Aaron Marshall said: So you are saying that there is no way to tell if things are designed or not? Do you think your car was designed? That we can detect that some things were designed seems just common sense to me.
Obviously, because we know what was produced technologically and what was produced by reproduction, or crystallization, or whatever "natural process" has been discovered. What we don't do is detect some universal category "designed" that applies to anything and everything "intelligent" (another problem word in the universal sense), since we just know of humans as "designers" and nothing else, unless some relatively trivial animal examples were included.
Now maybe you're right: when we get down into the nuts and bolts of how you detect these things, it gets really complicated, but why should that stop us from pursuing the goal just because it is hard?
It's complicated for you, that is, because you want to use an illegitimate "technique" that doesn't actually exist to detect design, one that conflates techne with physis, to use Aristotle's words for, roughly, the designed versus the natural. We depend upon empiricism, with a lot of known aspects of human capabilities to guide us.
So why wouldn't Dembski's explanatory filter work as a design detector?
That is exactly what doesn't work. For the obvious, it claims that life is designed, when there is no evidence of any designer of life, and life has all of the characteristics expected of unguided evolution. But beyond that, it doesn't really distinguish between a great crystal of Ettringite vs. one that was grown in a laboratory, or a similarly-shaped piece of metal. You already have to know that the Ettringite grows "naturally," that the process can (presumably) be more or less copied in a lab, and that metal pieces don't form "naturally" in six-sided columns. But then so what? You're counting on the fact that you already know all of the data relevant to any "design" question about the pieces "in question" (not in question at all, actually), and Dembski's "filter" doesn't tell you anything. Not surprising, of course, since it wasn't developed to detect design, only to redefine life as designed, thus to bypass all of the actual evidence that life evolved and was not designed.
Take your rock example. We ask the question whether it was contingent, and the answer seems obviously so (it didn't have to be here). So then we ask if it is complex: in this sense, is it improbable that we have the rock absent just chance? Seemingly not, and thus I would say that chance could have produced it and thus it was not designed.
"Chance" as you use it is simply a scholastic notion, and doesn't really mean anything in science. And you're simply using what we know empirically about rocks to weed it out, not using honest issues of CSI or whatever. Back to crystals: how can you tell if one was made "by design" or "naturally"? Actually, it's often done by noting how pure it is and whether it has the defects common in natural crystals. Complexity tells you nothing; in fact the natural crystal would generally be considered more complex due to its imperfections, and presumably one could also make lab crystals that are less pure and with more defects. So what do you gain from Dembski's "filter"? Nothing.
You could disagree on that step and say it is complex and thus move on to the last step where we ask if it is specified, indicating an independently given pattern (not just randomness) and that would clearly not be the case with the rock.
How do you know that? Could there have been an Apollo conspiracy, with rocks made by humans to be unlike those of earth, and to fit the conception of what moon rocks would be like? Well, the conspiracy seems extremely improbable, and the rocks real, but rocks could have been produced, probably, detectable only by the lack of deep history. Oh hey, that's the same test, using different criteria, that indicates that life is "natural" and not "designed" as well.
So the rock would have two places to be kicked out of the "designed" category.
Which is too bad if it were designed. Oh, we can detect it, almost certainly, by looking for the contingencies of history, much as we detect the contingencies of evolution to state that life evolved.
Now run the same test with the watch. It is surely contingent and complex and it exhibits an independently given pattern, or in other words it has low specificational complexity.
And? Does it have the evidence of having evolved, and the mechanisms of reproduction? Do humans make watches? Sure. Do aliens? We don't know. Do gods? You'd not really think so, although presumably they could if they existed (unlikely), but again we don't actually know at all.
Now these are all Dembski's terms that I just lifted from his book, but that seems like a legitimate way to determine design.
Why, because it relies upon our ability to predetermine design before any "complexity test" telling us really nothing, oh, except for life, at which point the fact that we can readily see the huge differences between it and technology is something that he simply ignores?
What if it can't say more than what I have said, that it was designed or it wasn't designed?
But it can't do that at all, unless you already know what the entity purportedly responsible for said design can do, and what it might wish to do (need to do, whatever). That's the problem.
Isn't that enough?
No it isn't enough, but you have nothing that can tell you just that. What if I said that we could tell if life evolved, without any sort of known process that would be responsible for its effects? After all, it could be as loose and often indeterminable as technological evolution would be without the written record, or it could be as rigidly determinate of patterns as biological mechanisms are. Oh, that's right, creationists/IDists don't care about those huge differences, either.
Otherwise aren't you asking more of ID than you ask of evolutionary theory?
No, because evolutionary theory doesn't use bogus tests that can't even detect the huge differences between technology and life. Glen Davidson
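Setting the dispute aside for a moment, the three-step filter walked through above (contingency, then complexity, then specification) can be written out as a decision procedure. This is an editorial sketch, not code from Dembski; the three boolean inputs are placeholders for predicates that Dembski never operationally defines, which is precisely the critics' complaint here:

```python
# A schematic of the three-node "explanatory filter" as described above.
# The three inputs are placeholder judgments; nothing in the filter itself
# tells you how to evaluate them for a given object.
def explanatory_filter(is_contingent, is_complex, is_specified):
    """Return the filter's verdict for an object with the given attributes."""
    if not is_contingent:
        return "necessity"   # attribute it to law
    if not is_complex:
        return "chance"      # probable enough to arise by chance
    if not is_specified:
        return "chance"      # complex but matching no independent pattern
    return "design"

print(explanatory_filter(True, False, False))  # the moon rock example: "chance"
print(explanatory_filter(True, True, True))    # the watch example: "design"
```

Everything contentious is hidden in how the three inputs get set for a real object, which the filter itself cannot determine.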

https://me.yahoo.com/a/JxVN0eQFqtmgoY7wC1cZM44ET_iAanxHQmLgYgX_Zhn8#57cad · 20 April 2013

Prem Isaac said: Well I haven't mentioned God yet. I just said Designer, since ID proponents do NOT claim that the methods they employ will ever reveal the identity of the Designer.
Too bad, because you won't be able to credit anything to a being without knowing what sort of cause it actually is, which means that you have to know a lot about it, and from that could likely come up with a kind of ID, even if just a fossil or some such thing.
ID proponents are saying, that there are only 3 options: Randomness(Chance), Necessity(something deterministic about the laws of physics/chemistry) and Design(intentional product of a mind).
Fake categories, derived from scholasticism (the first two, anyhow), not from epistemology or evidence. Especially bad is the claim that "Design" is something other than Chance or Necessity (if we choose to use such poor terminology), when all of the evidence is that it is more or less "necessity" in your poor terms, something that is essentially determined by genetics and development. At the least it would have to be a mix of necessity and chance in such terms. Intelligence evolved and is limited by a number of factors.
So if Randomness and Necessity are not able to explain features of the world, the only other option is Design. This is not God-of-the-Gaps.
No, it's choosing the premises in order to come to the desired conclusion. Glen Davidson

Mike Elzinga · 21 April 2013

A discussion started by Elizabeth Liddle over on The Skeptical Zone prompted me to look more closely at that factor of 10^120 in Dembski’s CSI Specification paper, Χ = -log2[10^120 · φ_S(T) · P(T|H)]. That factor comes from a reference given by Dembski on page 23 to a paper by Seth Lloyd in Physical Review Letters. Lloyd calculates that the universe can have performed no more than 10^120 elementary logical operations on 10^90 bits during the course of its evolution. On page 7, Lloyd says the following:

What is the universe computing? In the current matter-dominated universe most of the known energy is locked up in the mass of baryons. If one chooses to regard the universe as performing a computation, most of the elementary operations in that computation consists of protons, neutrons (and their constituent quarks and gluons), electrons and photons moving from place to place and interacting with each other according to the basic laws of physics. In other words, to the extent that most of the universe is performing a computation, it is ‘computing’ its own dynamical evolution. Only a small fraction of the universe is performing conventional digital computations.

It is important to note that during this process, matter has condensed into galaxies, stars, planets, and life on at least one planet. Lloyd is using the standard big-bang model and some adjustments that take into consideration the uncertainties in that model. He doesn’t include dark matter or dark energy. It’s a pretty straightforward estimate.

The result of Dembski’s use of this, in setting the threshold for his CSI, is that to compute the probabilities of a specified subset of that universe – e.g., a protein molecule – that subset must contain at least as much information and require at least as many elementary logical operations as the entire universe. Let me repeat that: a small subset, already included in Lloyd’s calculation, is required to have at least as much information and require at least as many logical operations as the entire set itself. Therefore the universe is required to have more logical operations than the universe requires.

Aside from the fact that Dembski doesn’t provide any justification for declaring something designed, other than just hand-waving, he and his followers are saying a finite subset must be as complex as the entire finite set that contains it. What kind of logic is that? And Dembski rambles on for 41 pages while Lloyd gets right to the point in 17.
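To see the threshold arithmetic concretely, here is an editorial sketch (not Dembski's code) of the 2005 formula with Lloyd's 10^120 bound plugged in; the probabilities used below are illustrative, not taken from any of the papers:

```python
import math

# Editorial sketch of the measure quoted above:
#   Χ = -log2(10^120 * φ_S(T) * P(T|H)),
# with design inferred when Χ comes out positive (past the universal bound).
def csi_chi(p_t_given_h, phi_s=1.0, replications=10**120):
    """Specified complexity in bits; positive means past the universal bound."""
    return -math.log2(replications * phi_s * p_t_given_h)

# A uniform-draw probability for a hypothetical 100-residue protein, (1/20)^100:
print(csi_chi((1 / 20) ** 100))   # positive: past the threshold
print(csi_chi(1e-6))              # hugely negative: nowhere near it
```

The point the rest of this thread makes is that everything hinges on what P(T|H) is taken to be, not on the bound.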

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 21 April 2013

But if he didn't include the dark matter and dark energy, then 10^120 isn't the number of operations of the entire universe...

Bhakti Niskama Shanta · 22 April 2013

Does Current Biology have the Misfortune of Owning an Unreliable Clock? http://scienceandscientist.org/Darwin/2013/04/20/does-current-biology-have-the-misfortune-of-owning-an-unreliable-clock/

Joe Felsenstein · 22 April 2013

Bhakti Niskama Shanta said: Does Current Biology have the Misfortune of Owning an Unreliable Clock? http://scienceandscientist.org/Darwin/2013/04/20/does-current-biology-have-the-misfortune-of-owning-an-unreliable-clock/
Current biology at least does not have to deal with deriving its theory from vedantic theology. This self-citation is to a "paper" by Bhakti Niskama Shanta on why complications in mitochondrial inheritance vindicate vedantic texts. The "Conclusion" section is an absolute classic. I also like the part about how
evidence is forcing many biologists to conclude that, if Darwin had known some of what has been discovered since the publishing of his theory, he probably wouldn’t have believed in his own theory of evolution.
Also, the paper has nothing to do with Complex Specified Information or Dembski's argument.

DS · 22 April 2013

Bhakti Niskama Shanta said: Does Current Biology have the Misfortune of Owning an Unreliable Clock? http://scienceandscientist.org/Darwin/2013/04/20/does-current-biology-have-the-misfortune-of-owning-an-unreliable-clock/
Typical gobbledygook. It's just the old "if you can't explain everything to my satisfaction then I don't have to believe anything you say, even though I have no evidence of my own" routine. Molecular clocks are extremely useful but not simple to calibrate. That doesn't mean they are worthless. Time for a dump to the Bathroom Wall.

Joe Felsenstein · 22 April 2013

I see that Bhakti Niskama Shanta posted a series of identical comments all at the same time, one to basically every ongoing thread.

That is not responsible behavior (and not good advertising for his views). If he shows up on any of my threads again he goes straight to the Bathroom Wall.

Mike Elzinga · 25 April 2013

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said: But if he didn't include the dark matter and dark energy, than 10^120 it's not the operations of the entire universe...
ID advocates choose different "upper probability bounds" depending on how different advocates calculate the number of operations required to make a universe. The discussion is still going on over at TSZ also, where I added the following comment.

The CSI calculation of Dembski boils down to a complete obfuscation of a very simple notion from statistics. That kairosfocus character over at UD simply obfuscates even further. As anyone can learn from probability and statistics, if an event has a probability p, and the number of trials attempting to get that event is N, then the mean number of successes in achieving that event is Np. Dembski’s calculation of Χ – after many paragraphs of rationalizations and side tracks – boils down to

Χ = -log2(Np) = log2(1/p) – log2(N),

with N = 10^120 · φ_S(T) and p = P(T|H). The 10^120 was taken from Seth Lloyd’s paper in Physical Review Letters, and is Lloyd’s estimate of the number of logical operations it took to make our universe. Including the φ_S allows for the possibility of multiple universes involved in the number of trials, with 10^120 logical operations per universe. Assuming only one universe, the calculation comes down to

Χ = log2(1/p) – log2(10^120) ≈ log2(1/p) – 400.

So 1/p is the number of trials required to get one instance of the specified event, and taking the logarithm gives the amount of “information” supposedly contained in that sample space, and log base 2 gives that “information” in bits. Note that it assumes uniform, random sampling. I am guessing that this would be what Dembski and Marks call “endogenous information.” The 400 (500 in some of Dembski’s calculations) would be the “exogenous information,” and their difference would be the “active information.” As Elizabeth and Joe Felsenstein point out over on The Skeptical Zone, the problem is coming up with a distribution for P(T|H).
In addition, Seth Lloyd’s calculation is based on the fact that the events being questioned by ID advocates – such as the origins of life and evolution – are already included in the 400 (or 500 in some of Dembski’s calculations). If Dembski wants to use Lloyd’s number in his CSI and apply it to events in the universe, it therefore makes no sense to assert that the “endogenous information” they contain is greater than the “endogenous information” in the universe. Said more directly, the N trials to make the universe already produced the event in question; therefore the number of trials required to produce that particular event has to be less than the number of trials to produce all the events in the universe. So here again we see the circularity contained in the assumption that such events do have more such information. One can very easily enumerate permutations and combinations of things and get numbers far larger than all the operations in the universe; just as I did with my calculation of the “CSI” of a rock. It all depends on how one chooses to describe it. The ID descriptions are generally chosen to make events such as the origins of life look impossible.
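Elzinga's rearrangement is just algebra on logarithms, which a few lines of Python confirm; the 500-bit event below is an arbitrary illustration, not any specific biological case:

```python
import math

# Elzinga's identity: Χ = -log2(N*p) = log2(1/p) - log2(N).
N = 10**120   # Lloyd's bound on logical operations (one universe, φ_S = 1)
p = 2.0**-500 # an event "specified" to 500 bits, purely for illustration

chi_direct   = -math.log2(N * p)
chi_expanded = math.log2(1 / p) - math.log2(N)
print(chi_direct, chi_expanded)   # the same number either way, about 101 bits
```

The expanded form makes the structure plain: the bound contributes a fixed log2(10^120), roughly 399 bits, and everything else rides on p.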

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 25 April 2013

Thanks, I get it now.

And, by the way: "Note that it assumes uniform, random sampling" - this sentence says everything. It's the tornado probability again...

diogeneslamp0 · 25 April 2013

Mike Elzinga said:
https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 said: But if he didn't include the dark matter and dark energy, than 10^120 it's not the operations of the entire universe...
ID advocates choose different "upper probability bounds" depending on how different advocates calculate the number of operations required to make a universe. [...] As Elizabeth and Joe Felsenstein point out over on The Skeptical Zone, the problem is coming up with a distribution for P(T|H).
Mike, didn't you lose the φ_S in your last equation for Χ? I also think it's important to point out that, while Dembski at the beginning of his 2005 paper says that P(T|H) is the actual probability given Darwinian evolution, by the end of the paper he just throws that out and sets P(T|H) equal to what I call the "tornado probability." This is a point that Joe F has also missed: Dembski swaps out the real P, which he can't compute, for a fake P, the tornado probability. That is, if the genetic sequence has letters of four types and length L, then he sets p = (1/4)^L. This of course is far, far, far, astronomically far away from the probability of Darwinian evolution. Near the end of the paper, repeating his shit calculation for the bacterial flagellum, he just counts up L = the number of amino acids in the whole flagellum, and since there are 20 kinds of amino acid, he sets p = (1/20)^L. It's sleight of hand, and it has nothing to do with evolution. It's essential to emphasize (as Joe F has not) that Dembski officially defines CSI as based on THE TORNADO PROBABILITY, which he can compute, in place of the probability of Darwinian evolution, which he cannot. That's officially part of Dembski's definition of CSI as of 2005.
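For scale, here is a hedged sketch of what the "tornado probability" yields in bits; the sequence lengths are purely illustrative, not Dembski's exact flagellum numbers:

```python
import math

# The "tornado probability": treat a sequence as L independent uniform draws
# from an alphabet, ignoring selection entirely.
def tornado_bits(alphabet_size, length):
    """-log2 of (1/alphabet_size)^length, the 'tornado' improbability in bits."""
    return length * math.log2(alphabet_size)

print(tornado_bits(4, 1000))   # a 1000-base DNA stretch: 2000 bits
print(tornado_bits(20, 300))   # a 300-residue protein: ~1297 bits
```

Either number blows past a 400- or 500-bit bound, which is the whole trick: the improbability comes from the substituted probability model, not from anything about evolution.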

Mike Elzinga · 25 April 2013

diogeneslamp0 said: Mike, didn't you lose the Phi_S in your last equation for X? [...] It's essential to emphasize (as Joe F has not) that Dembski officially defines CSI as based on THE TORNADO PROBABILITY, which he can compute, in place of the probability of Darwinian evolution, which he cannot.
I put φ_S equal to 1, for one universe. Dembski apparently wanted to fold in some other means of “minimal” specification to multiply up those probabilities he only calculates using those “tornado probabilities” that don’t apply to evolutionary processes.

That is also what those characters over at UD do. They like strings of characters because all they have to do is raise the number of ASCII characters to a power equal to the length of the string to get a probability as low as they want. That isn’t how molecules of any kind behave in the real world.

That is why I picked the rock example. The “CSI specification” was at least more realistic in specifying the exact configuration of the rock and all its crystals. Of course, I cheated just as they cheat over at UD. I took all permutations of identical atoms as different, which gives a nice over-count. But even if I didn’t do that, I still had enough in the permutations and orientations of the crystals. But that is the key, isn’t it: declare the “impossible” winner and “prove” it by calculating all possible rearrangements of a string of characters that gives a very large number.

This also goes to another question I keep asking the ID/creationists but never get a response to; namely, “Where along the chain of complexity in atom/molecular assemblies do the laws of physics and chemistry stop working, so that ‘intelligence’ has to step in and do the job that physics and chemistry can no longer do?” I did it with rocks. Why can’t I declare rocks the winners in the evolutionary lottery? Why do protein molecules count while rocks don’t? If I lower the temperature of a protein molecule until it becomes as rigid as a rock, can we then rule out intelligent design in protein molecules? If I lower the temperature of lead below 7.2 K, where it becomes a superconductor, can we now declare lead to be intelligently designed? Trying to make up other “probability killers” like “functionality” isn’t going to work either.
There are lots of things going on in all sorts of condensed matter systems that can be specified with much more “information” than a string of letters enumerating the positions of molecules in a protein. It is appalling just how doggedly dishonest ID has become in recent years. People without even a high school level of understanding of science declaring they can “mathematically prove” that protein molecules and the stuff of life is “intelligently designed.” I don’t think these people realize just how ridiculous their attempts have become.
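Elzinga's point that naive enumeration dwarfs any universal bound is easy to check. This sketch is my illustration (using the deliberate over-count he mentions, treating identical atoms as distinct) for roughly a mole of atoms:

```python
import math

# Counting permutations of ~Avogadro's number of atoms, as in the rock example.
# log2(n!) computed via the log-gamma function to avoid astronomical integers.
def log2_factorial(n):
    """log2(n!) = lgamma(n+1) / ln(2)."""
    return math.lgamma(n + 1) / math.log(2)

bits = log2_factorial(6.022e23)   # permutations of a mole of atoms
print(bits)                       # on the order of 10^25 bits, vs. a ~400-bit bound
```

As he says, the description is doing all the work: pick a fine-grained enough description of anything, even a rock, and the "improbability" exceeds any bound you like.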

Joe Felsenstein · 25 April 2013

First would you guys please learn to edit out less relevant parts of quotes?. These comments are getting unnecessarily long.

Diogenes, which part of Dembski (2005) is where he reverts to the Tornado probability? The flagellum calculation?

Dembski's collaborator Winston Ewert has attacked my arguments in a post at Evolution News and Views, arguing that Dembski never was using the Tornado probability, that even in his 2002 book he incorporated the P(T|H) term. As I had based my 2007 article on the interpretation that CSI was calculated from the Tornado probability and by using Dembski's Conservation Law argument, this would seriously invalidate those parts of my argument. Right now I am reading through the 2002 book trying to determine whether Ewert is correct about this.

In 2005 he definitely has the P(T|H) term. The question is, did Dembski always have it? Determining whether it really made a reappearance in the 2005 paper is then of some importance.

If P(T|H) is bigger than, say, 10^(-6), then that means that whether the UPB is 10^(-150) or 10^(-120) or 10^(-300) really doesn't matter, as natural selection can do the job.

Mike Elzinga · 25 April 2013

Here are two key quotes from Ewert’s reply to Joe Felsenstein.

In all discussion in the book regarding the Law of Conservation of Information, Dembski is using Shannon information, which is defined as the negative logarithm of the probability. In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection.

Biological life clearly needs to be explained. Conservation of information does not exclude the possibility of a Darwinian process as the explanation. However, it does pose a challenge to Darwinian evolution as being incomplete. Darwinian evolution does not satisfactorily explain the information in the genome. It depends on an explanation that does not yet exist. Until such an explanation exists and is tested, Darwinian evolution does not explain biological life.

I_Shannon = −∑ p_i log2(p_i), where the index i runs from 1 to Ω, the number of states. When all states are equally probable, with every p_i = 1/Ω, this becomes I_Shannon = log2(Ω). If done properly, this kind of “information” is proportional to entropy, because flipping bits takes energy. Given that fact, Shannon “information” is not conserved, because entropy is not conserved. It is not possible to compute CSI and make all the claims that ID/creationists make while totally disregarding basic physics and chemistry. As long as they cannot do even simple high school level calculations in physics and chemistry, everything they “calculate” is bogus; endless wrangling over words doesn’t save them.

Mike Elzinga · 25 April 2013

Mike Elzinga said: I_Shannon = −∑ p_i log2(p_i), where the index i runs from 1 to Ω, the number of states.
Just a reminder for those encountering this for the first time; whenever you take numbers, multiply by the probability of their occurrence, and then sum, you get the average. So the formula for Shannon “information” (uncertainty, entropy; it gets called a lot of things by different users) is just the negative of the average of the logarithms of the probabilities. Note also that the probabilities all have to add up to 1.
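[Editor's note] The averaging point above is easy to see in code. A minimal sketch (the function name is my own, not from any source discussed here):

```python
import math

def shannon_information(probs):
    """Shannon 'information' (entropy) in bits: the average of
    -log2(p) over the states, weighted by each state's probability p."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform case: with Omega equally likely states the sum collapses to log2(Omega)
omega = 8
print(shannon_information([1.0 / omega] * omega))  # 3.0, i.e. log2(8)

# Non-uniform case: the average drops below log2(Omega)
print(shannon_information([0.5, 0.25, 0.125, 0.125]))  # 1.75
```

Note that the uniform distribution maximizes this average, which is why I_Shannon = log2(Ω) is the special equiprobable case of the general formula.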

prongs · 25 April 2013

Mike Elzinga said: I don’t think these people realize just how ridiculous their attempts have become.
Trouble is neither do politicians, who will pander to any group they think will give them votes and photo ops.

https://www.google.com/accounts/o8/id?id=AItOawn_Ilhh_JlVBezsrOdLJgQdsyRGHgoWvW8 · 26 April 2013

"In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection." But I think Dembski used Michael Behe's wrong idea that the flagellum couldn't evolve by natural selection. Am I right?

diogeneslamp0 · 26 April 2013

Mike Elzinga said:
diogeneslamp0 said: Mike, didn't you lose the Phi_S in your last equation for X?
I put φS equal to 1, for one universe. Dembski apparently wanted to fold in some other means of “minimal” specification to multiply up those probabilities he only calculates using those “tornado probabilities” that don’t apply to evolutionary processes.
No, this is incorrect. φS is virtually never 1; in fact it's usually a huge number, and it's never equal to the number of universes. In my two long previous comments, I gave detailed instructions on how to compute φS, although I didn't use that notation. Nobody paid any attention then.

In his 2005 paper, Dembski gave two methods for computing φS, which I called "Guaranteed Success Methods #2 and #3". My description of Dembski's 2005 "Semiotic String method", based on counting the number of words in a verbal description of a system, and which I call "Guaranteed Success Method #3", is here. My description of Dembski's 2005 "Simplicity of Description method", based on counting the number of bit strings that are as simple or simpler (by Kolmogorov complexity) than the observed string, and which I call "Guaranteed Success Method #2", is here. These methods are ridiculous, but you can't just set φS = 1.

ERRATA: In my two comments above, I made an error: I left out the factor of 10^120 which must be multiplied with the probabilities inside the square brackets [] of the logarithms. This factor of 10^120 of course represents Dembski's so-called Universal Probability Bound. There is a simple way to fix the equations I wrote before: because log2[10^120] ≈ 400, wherever I said before that log2[stuff] should be more than 1, I should have said that log2[stuff] should be more than 400. This corrects for my leaving out the UPB of 10^120.

In addition, Dembski's "semiotic string" method assumes the number of words in the English language is 100,000. I used 250,000, which is more accurate, but for consistency I should use Dembski's value. I will re-post both of my comments on Dembski's 2005 methods with fixes to the math.
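[Editor's note] The "~400" correction factor in the errata can be checked directly. A quick sketch, using only the 10^120 figure stated above:

```python
import math

# Dembski's Universal Probability Bound factor of 10^120, as discussed above
upb_factor = 10 ** 120

# In bits, multiplying inside the logarithm by 10^120 shifts the
# specified-complexity threshold by log2(10^120) = 120 * log2(10)
shift_bits = math.log2(upb_factor)
print(shift_bits)  # ≈ 398.63, i.e. roughly the "more than 400" figure cited
```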

Mike Elzinga · 26 April 2013

diogeneslamp0 said:
Mike Elzinga said:
diogeneslamp0 said: Mike, didn't you lose the Phi_S in your last equation for X?
I put φS equal to 1, for one universe. Dembski apparently wanted to fold in some other means of “minimal” specification to multiply up those probabilities he only calculates using those “tornado probabilities” that don’t apply to evolutionary processes.
No, this is incorrect. φS is virtually never 1, in fact it's a huge number usually, and it's never equal to the number of universes.
You are apparently talking about Dembski’s M·N·φS. I don’t much care how Dembski chooses to obfuscate the number of trials. All he did was lard up a simple notion from probability: that the mean number of successes from N trials on an event with probability p is just Np.

Dembski, Abel, and other “math whizzes” in the ID movement bamboozle using all sorts of “philosophy” and rationalizations to obscure what ultimately turns out to be simple math done wrong or applied inappropriately. At bottom, whether he pretends to use Shannon “information” - or whatever he wants to call it - to “prove” Conservation of Information, he is wrong when he wants to apply such notions to the real universe, where matter interacts with matter.

That kairosfocus character and others over at UD use Dembski’s CSI in exactly the same way I parodied it with a rock. They are counting arbitrary labels and specified characteristics that don’t interact among themselves. ASCII characters don’t interact among themselves; therefore the probability of selecting from 20 amino acids to make a molecule with 500 positions in a chain is not the same as 20^-500. That is a completely nonsensical calculation.

Whether the φ refers to some arbitrary description of the event - such as the number of English words needed to specify it - or whether it folds in the number of monkeys involved in making the trials, or any other thing that Dembski can think of to make the number of trials bigger and therefore “swamp the probability,” is completely irrelevant. All Dembski has done by breaking down N into a bunch of factors is to allow the inclusion of a bunch of excuses to make the number of trials bigger so that no events in the building of the molecules of life can have a probability large enough to overcome it.

Mike Elzinga · 26 April 2013

Mike Elzinga said: All Dembski has done by breaking down N into a bunch of factors is to allow the inclusion of a bunch of excuses to make the number of trials bigger so that no events in the building of the molecules of life can have a probability large enough to overcome it.
In fact, the entire CSI obfuscation can be boiled down to the product Np. Take away the logarithms and the labels called “information,” because those completely obscure what is being discussed. If Dembski, or anyone else in the ID movement, wants to tell us that there is a maximum number of trials possible in the universe to produce a given event, and he grabs a number like 10^120, then the minimum probability required for Np = 1 is p = 10^-120. So Dembski has to tell us why p for the occurrence of a given event in the universe is less than that. Either he has to tell us that the probability has to be jacked up by the intrusion of intelligence in order for the event to occur, or that the number of trials to produce that event is not as large as he claims. He is not about to admit the latter. What he has done instead is lard up and obfuscate this simple calculation with hundreds of pages of “philosophy.”
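[Editor's note] The Np reduction above can be sketched numerically. The numbers here are illustrative only (10^120 is Dembski's claimed bound; the alternative probability is a made-up example, not a biological calculation):

```python
# Expected number of successes in N independent trials, each with probability p,
# is simply N*p. Dembski's framework amounts to picking N = 10^120 and asking
# whether N*p falls below 1.

def expected_successes(n_trials, p_success):
    return n_trials * p_success

N = 1e120  # Dembski's claimed cap on trials in the universe's history

# Borderline case: p = 10^-120 gives N*p ≈ 1, one expected success
print(expected_successes(N, 1e-120))

# A probability raised even modestly (say, by selection) swamps the bound
print(expected_successes(N, 1e-100))  # ≈ 1e20 expected successes
```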

Joe Felsenstein · 26 April 2013

So what the Dembski (2005) formulation amounts to, after one has exponentiated the formula and multiplied through, is this:

1. Figure out a probability so small that the corresponding event would be expected to occur less than once, anywhere in the universe, in the whole history of the universe.

2. Ask the evolutionary biologist to compute the probability of an adaptation that good or better arising.

3. Compare it to the small probability.

4. If it is less, declare Design to have been detected.

And that's it. No calculation of any kind of "information" necessary. Also notice who gets to do all the heavy lifting -- and it's not the "Design theorist".
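[Editor's note] Joe's four steps reduce to a trivial comparison, which a sketch makes plain. The function name and threshold default here are illustrative; the bound is Dembski's 10^-120 figure discussed above, and the probability input is the part left entirely to the evolutionary biologist:

```python
def dembski_design_inference(p_evolution, universal_bound=1e-120):
    """The 2005 criterion stripped of its logarithms: declare Design
    whenever the (biologist-supplied) probability of the adaptation
    arising falls below a universe-scale probability bound."""
    return p_evolution < universal_bound

# All the work is in estimating p_evolution; the 'Design theorist' only compares.
print(dembski_design_inference(1e-150))  # True: 'Design detected'
print(dembski_design_inference(1e-6))    # False: selection can do the job
```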

diogeneslamp0 · 26 April 2013

Joe Felsenstein said: Dembski's collaborator Winston Ewert has attacked my arguments in a post at Evolution News and Views, arguing that Dembski never was using the Tornado probability, that even in his 2002 book he incorporated the P(T|H) term. As I had based my 2007 article on the interpretation that CSI was calculated from the Tornado probability and by using Dembski's Conservation Law argument, this would seriously invalidate those parts of my argument... In 2005 he definitely has the P(T|H) term. The question is, did Dembski always have it? Determining whether it really made a reappearance in the 2005 paper is then of some importance.
In Winston Ewert's post at Evolution News and Views he is of course lying about Dembski's probability calculations, and also misrepresenting the point you made.

Ewert's first argument is a misrepresentation of your post: Ewert repeatedly claims that Joe F claimed in his post that Dembski did not do probability calculations prior to his 2005 paper. This is obviously false. Joe F has stated and consistently maintained that both Dembski's pre-2005 and post-2005 work included probabilities, but that they were defined differently. Joe F said that pre-2005, Dembski defined probability = tornado probability, which he could compute but which is irrelevant, while from 2005 onward he defined the probability as including Darwinian mechanisms, which is relevant but which Dembski can't compute. Winston Ewert misrepresents this by claiming that Joe F said there were no probabilities in Dembski's pre-2005 work, e.g. No Free Lunch. Joe F never said it. Ewert thinks he can refute Joe F by simply listing citations from No Free Lunch where Dembski computed his infantile tornado probabilities. NO. It's not that easy.

That said, I assert that Joe F is wrong in claiming that Dembski in 2005 and thereafter asserts that the probability of Darwinian mechanisms is the necessary quantity. That's not exactly right. Rather, post-2005 Dembski wants to have his cake and eat it too: he continues to compute tornado probabilities, but he lies and says they're probabilities of Darwinian mechanisms.

Ewert's second argument is a lie: he says Dembski computed the probability of Natural Selection. Bullshit. Many IDers lie about the plain meaning of Dembski's math, including Dembski himself.
Winston Ewert wrote: …On page 72 of No Free Lunch, Dembski again presents the Generic Chance Elimination Argument. Step 7 instructs the subject to calculate the probability of the event under all relevant chance hypotheses. ...In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection. ...The probability in the form of Shannon information existed in No Free Lunch. The change in the 2005 paper... was not a change in the way that probability was handled. The design inference in all its forms has always involved calculating the probability of relevant chance hypotheses including natural selection. This is not a new feature of a revamped design inference, but a critical component of the design inference since its inception. [Information, Past and Present. Winston Ewert. ENV. April 15, 2013.]
What a lying piece of shit! The text in boldface is manifestly false. Dembski only computes tornado probabilities, then later he lies about it and says he computed the probability of natural selection. Which I'm going to demonstrate with some quotes below.
Winston Ewert wrote: ...The argument of specified complexity was never intended to show that natural selection has a low probability of success.
Bullshit. Dembski has repeatedly claimed he computed that natural selection has a low probability of success. Dembski himself lied about his own math at least two or three times, saying that his infantile tornado probabilities were the probability of evolution by Darwinian mechanisms-- first at a forum at the AMNH in April 2002, transcript here.
William Dembski said: Convinced that the Darwinian mechanism must be capable of doing such evolutionary design work, evolutionary biologists rarely ask whether such a sequence of successful baby steps even exists. Much less do they attempt to quantify the probabilities involved? I attempt, in chapter 5 of my most recent book, "No Free Lunch", to do that, to lay out the probabilities, there I lay out techniques for assessing the probabilistic hurdles that the Darwinian mechanism faces in trying to account for complex biological structures like the bacterial flagellum. The probabilities that I calculate, and I try to be conservative, are horrendous, and render the natural selection entirely implausible as a mechanism for generating the flagellum and structures like it. [Dembski at the ID forum at the AMNH, April 23, 2002, transcript here]
He's lying. He computed no such thing in No Free Lunch. Dembski did not misspeak; compare the above to his written notes for that forum; it's deliberate misrepresentation. But a month later, May 2002, Richard Wein forced Dembski to admit that his computation in No Free Lunch is based on random recombination of parts, which we call "tornado probability." Dembski admitted this during his knock-down drag-out internet fight, only after Wein managed to squeeze the admission out of him.
William Dembski wrote: Next, all the biological community has to mitigate the otherwise vast improbabilities for the formation of such systems is co-optation via natural selection gradually enfolding parts as functions co-evolve. Anything other than this is going to involve saltation and therefore calculating a probability of the emergence of a multipart system by random combination. But, as Wein rightly notes, "the probability of appearance by random combination is so minuscule that this is unsatisfying as a scientific explanation." Wein therefore does not dispute my calculation of appearance by random combination, but the relevance of that calculation to systems like the flagellum. And why does he think it irrelevant? Because co-optation is supposed to be able to do it. Now, why should we believe it? What I offer in chapter 5 of NFL are reasons not to believe it. I tighten Michael Behe's notion of irreducible complexity... I submit that there is no live possibility here but only the illusion of possibility. [William Dembski, "Obsessively Criticized But Scarcely Refuted: A Response To Richard Wein", May 2002]
In the above, Wein gets Dembski to admit that what he computed was "my calculation of appearance by random combination", but Dembski says we should trust it because evolution can't produce irreducibly complex structures. Two years later, Dembski goes back to lying, reverses his admission to Wein about his "calculation of appearance by random combination", and goes back to calling it the probability of natural selection.
William Dembski wrote: Yet even with the most generous allowance of legitimate advantages, the probabilities computed for the Darwinian mechanism to evolve irreducibly complex biochemical systems...always end up being exceedingly small.29 [Footnote 29: See, for instance, Dembski, No Free Lunch, sec. 5.10.] [William Dembski, "Irreducible Complexity Revisited", January 2004, p.29-30]
Again, outright lying. His citation #29 is to HIMSELF, to the oft-cited bullshit "tornado probability" he computed in Section 5.10 of No Free Lunch. Remember this, Joe: Dembski and his followers often misrepresent the calculation he did in NFL: it's the tornado probability, but Dembski passes it off as the probability of natural selection. Richard Wein thoroughly demolished Dembski's computation of tornado probabilities (they don't call it that; they call it probability of random combination) in a detailed article here.
Joe Felsenstein said: Diogenes, which part of Dembski (2005) is where he reverts to the Tornado probability? The flagellum calculation?
In several places, but it's subtle. Note that Dembski in multiple places (p. 18, 25) explicitly asserts that P(T|H) is the probability including Darwinian mechanisms, but his computations do the opposite. Every time that it gets down to computing numbers, Dembski always switches to the tornado probability, often in sneaky ways. You can start by looking at footnote 33 on page 25. There he cites his 2004 paper, which, as we saw above, lies about what Dembski calculated and calls it the probability of Darwinian mechanisms. His 2004 paper, as we've seen, in turn cites section 5.10 of No Free Lunch. So if you follow his footnotes, it all points back to the tornado probability in No Free Lunch, about which they all bullshit.
William Dembski wrote: As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T. Moreover, given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately φS(T) = 10^20... It follows that –log2[10^120 • φS(T) • P(T|H)] > 1 if and only if P(T|H) < (1/2)×10^-140, where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure... Is P(T|H) in fact less than (1/2)×10^-140, thus making T a specification? The precise calculation of P(T|H) has yet to be done. But some methods for decomposing this probability into a product of more manageable probabilities as well as some initial estimates for these probabilities are now in place.33 [Footnote 33: See my [Dembski's 2004] article “Irreducible Complexity Revisited” at www.designinference.com (last accessed June 17, 2005). See also section 5.10 of my book No Free Lunch.] These preliminary indicators point to T’s specified complexity being greater than 1 and to T in fact constituting a specification... [“Specification: The Pattern That Signifies Intelligence.” William A. Dembski. 2005, version 1.22, p.24-25.]
You see what he did? He cited his 2004 paper, which in turn cited section 5.10 of No Free Lunch. In other places likewise, Dembski in his 2005 paper openly computes tornado probability AND NOTHING ELSE. On page 19 he computes the "specification" of several hands of poker, in each case computing the probability using uniform probability distributions, which we call "the tornado probability." Below on page 22 he computes the probability P(T|H) "with respect to the uniform probability distribution denoted by H" of a sequence of Fibonacci numbers.
This [Fibonacci] sequence, if produced at random (i.e., with respect to the uniform probability distribution denoted by H), would have probability 10^-10, or 1 in 10 billion. This is P(T|H). [“Specification: The Pattern That Signifies Intelligence.” William A. Dembski. 2005, version 1.22, p.24-25.]
On page 26, he computes the specification for a possibly "loaded" die using tornado probability. Then he adds:
that still leaves alternative hypotheses H’ for which the probability of the faces are not all equal.
Note that he is distinguishing between alternative hypotheses H', which are based on non-uniform probabilities, and his hypothesis H, which logically implies that H is based on a uniform probability distribution.

I will emphasize that a few of the sharper ID proponents have noticed Dembski's fast moves in switching between the probability of evolution and the tornado probability (though they don't call it that). VJ Torley is one of the few ID proponents who understand Dembski's math. Torley has addressed Dembski's 2005 paper by basically saying that Dembski should define his probability as explicitly being based on random recombination, that is, the tornado probability. Torley did a detailed analysis of Dembski's probabilities and made a detailed case that the invocation of tornado probability should be an official part of CSI, in a long comment, Comment #207 at the second MathGrrl thread at Uncommon Descent. His analysis is canny and you should read it.

Secondly, the ID proponent Niwrad, a frequent poster at UD, made an online calculator of CSI, and his math is explicitly based on the tornado probability.

You should also read Richard Wein's detailed refutation of No Free Lunch, because Wein focuses relentlessly on the issue of how the probability is calculated; Dembski responds, and eventually Wein gets him to admit that his probability is the tornado probability. Read Dembski's responses to Wein. Wein does a detailed debunking of Dembski's self-contradictory statements, including his 2004 paper, here.

So any ID proponent who says they're computing the probability of "Darwinian mechanisms" is lying.
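[Editor's note] Whatever one makes of the inputs, the threshold algebra in the flagellum passage quoted earlier is easy to verify. A sketch using Dembski's own stated numbers (10^120 replicational resources, φS(T) = 10^20); the function name is illustrative:

```python
import math

PHI_S = 1e20      # Dembski's 'specificational resources' for the flagellum
RESOURCES = 1e120  # his universal bound on replicational resources

def specified_complexity(p_t_given_h):
    """-log2[10^120 * phi_S(T) * P(T|H)], per the 2005 paper's formula."""
    return -math.log2(RESOURCES * PHI_S * p_t_given_h)

# The stated equivalence: the expression exceeds 1 exactly when
# P(T|H) < (1/2) * 10^-140, since 10^120 * 10^20 * P < 2^-1 at that point.
boundary = 0.5e-140
print(specified_complexity(boundary))      # ≈ 1, the boundary value
print(specified_complexity(1e-150) > 1)    # True: counted as 'specified'
print(specified_complexity(1e-130) > 1)    # False
```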

diogeneslamp0 · 26 April 2013

Joe,

I wrote a long comment in answer to your question and it's held up in moderation.

Joe Felsenstein · 26 April 2013

diogeneslamp0 said: Joe, I wrote a long comment in answer to your question and it's held up in moderation.
I'll see what I can do, though I don't think accusations of lying are helpful here -- you can just make contradictions clear and let people draw their own conclusions. Given the human capacity for self-delusion there are probably fewer cases of deliberate lying than we think.

Joe Felsenstein · 26 April 2013

I changed its status to Approved, but it has still not appeared. Perhaps my powers are not as great as I hoped.

If you still have the text, cut it into two or three comments and post them. (While you're at it, tone down the accusations of deliberate lying.)

If you don't still have the text, I can recover it for you from the moderation queue.

diogeneslamp0 · 26 April 2013

It went through unchanged. If you want to delete it, go ahead and I'll edit later.

Or maybe you delete it, I tone it down, and it could be an OP.

Joe Felsenstein · 26 April 2013

Oops. Yes, I see it up there.

If you want to make it an Original Post, that is for you to decide. I repeat that using the words "lying", "liar", or "lie" is not smart, as that lets the other side off the hook, making it easy for them to go into Taking Offense mode instead of dealing with your arguments point by point.

Mike Elzinga · 26 April 2013

Joe Felsenstein said: And that's it. No calculation of any kind of "information" necessary. Also notice who gets to do all the heavy lifting -- and it's not the "Design theorist".
It does even worse to his followers. All that mathematical and “philosophical” larding-up has his followers slogging through all the obfuscation believing that they are doing deep mathematics and “information” theory. After they master all the unnecessary jargon and “sophisticated” math, they come out the other end believing they have a depth of understanding of nature that their opponents can’t grasp or are too lazy and stupid to learn. This is particularly apparent over at UD.

But this is all an illusion, because when it comes right down to fundamental concepts in biology, chemistry, and physics at the high school level, not one ID/creationist advocate or follower can pass simple concept tests in any of these subjects. We see their consternation over at UD about what charge and mass have to do with anything.

One has to admit that Dembski has been pretty slick in knowing his followers. One cannot get into any discussion with him or his followers without being totally sidetracked into an infinite regress of wrangling over the meanings of the meanings of meanings. Any attempt at discussing the real science gets lost in the inevitable heat generated by trying to come to some agreement about what anyone is saying. It serves the Culture War objective quite well by creating the illusion that there is a real, scientific discussion going on that is being kept out of the schools.

But there is no CSI; it is all bogus. I would suggest that all further attempts to discuss CSI with the followers of ID/creationism be reduced to Np. It is not necessary to take logarithms and waste time on “information” and all of Dembski’s made-up definitions.

Joe Felsenstein · 28 April 2013

diogeneslamp0: Thanks particularly for the references to vjtorley's comment and to Richard Wein's writings, which are helpful. I am still busy with rereading this literature (and a few major distractions connected with my research and grant funding, none of which is about the CSI/Design issue).

Mike Elzinga · 28 April 2013

Dembski has certainly managed to get a lot of people to write thousands of pages of response to his CSI.

However, “tornado probability” didn’t originate with Dembski or even with Fred Hoyle; it was going on even before that with Henry Morris back in the 1970s and 80s. Morris often referred to Isaac Asimov who, by the way, did not agree that evolution violated the second law of thermodynamics.

Morris and even Phillip E. Johnson and Dembski have sometimes given credit to A.E. Wilder-Smith for the thermodynamic and “information analysis” arguments against evolution.

Here is Morris referring to newspaper syndicated columnist Sidney Harris as though Harris was an expert on thermodynamics.

Couching the thermodynamics argument in terms of “information” theory does a pretty good job of obscuring ID/creationist misconceptions about the second law and how condensed matter forms. The confusions about the existence of things and the second law go back into the 19th century before the fields of condensed matter and the quantum mechanical nature of matter were developed.

ID/creationists are thinking in terms of an “ideal gas” when they think about the “primordial soup” out of which emerge the assemblies of complex, heterogeneous molecular systems. That appears to be the limit of their understanding of chemistry and physics.

It is not the force of their arguments that has generated so many pages of response to the ID/creationists; it has been the socio/political threat of pushing that junk into public education. Furthermore, ID/creationists have generally done a pretty good job of dragging the discussion onto their territory and enticing their opponents to argue on ID/creationist turf using ID/creationist concepts.

If someone familiar with the science, and with the core misconceptions ID/creationists are propagating, attempts to drag the discussion back to reality, ID/creationists throw up a huge barrage of jargon and obfuscation and accuse their opponent of not understanding the issues. I find that a totally sleazy tactic.

ID/creationists can’t get a hearing in the science community because ID/creationist “science” is easily recognized as pure hokum. Trying to get the public to see that is a far more difficult problem. It is a shame that scientists have to spend so much time on ID/creationist crap with so little value.