"[...] all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions." (Wolpert and Macready, 1995)
It's against this backdrop of displacement that I treat the No Free Lunch theorems. These theorems say that when averaged across all fitness functions of a given class (each fitness function being an item of information that constrains an otherwise unconstrained search), no evolutionary algorithm is superior to blind or random search.
— Dembski
Source: Dembski, "Evolution's Logic of Credulity: An Unfettered Response to Allen Orr"

While Dembski's treatment of the "No Free Lunch" theorems was, according to mathematician David Wolpert, mostly "written in jello," it is still interesting to pursue some of Dembski's claims. As I will show, not only do the No Free Lunch theorems fail to support Dembski's thesis, they in fact show that such optimization is child's play. The question really becomes: is it really that hard to find a near-optimal solution using random search under the assumptions of the No Free Lunch theorems? The answer may surprise many: not really.

In general, arbitrary, unconstrained, maximal classes of fitness functions each seem to have a No Free Lunch theorem for which evolutionary algorithms cannot, on average, outperform blind search.
— Dembski
As Erik pointed out, as early as 1996 Tom English had derived how relatively simple optimization under the NFL assumptions really is:

Ironically, even if we grant that the prior over the set of all cost functions is uniform, the NFL theorem does not say that optimization is very difficult. It actually says that, when the prior is uniform, optimization is child's play! I mean that almost literally. Almost any strategy, no matter how elaborate or crude, will do. If the prior over the set of cost functions is uniform, then so is the prior over the set of cost values. That means that if we sample a point in the search space, we are equally likely to get a low cost value as a high cost value. Suppose that there are Y possible cost values. Then the probability that a sampled point will have one of the L lowest cost values is just r = L / Y, regardless of which strategy was used to decide which point to sample. The probability s that at least one of N different sampled points will have a cost value among the L best is given by s = 1 - (1 - r)^N, again independently of the strategy used. Is that good or bad performance? The number of points required to achieve a given performance and confidence level is N = ln(1 - s) / ln(1 - r) ≈ -ln(1 - s) / r. After sampling 298 points, the probability that at least one of them is among the best 1% is 0.95. After 916 sampled points the same probability is 0.9999. If instead we want a point among the best 0.1%, we need to sample 2994 points to find one with probability 0.95, or approximately 9206 points to find one with probability 0.9999. That kind of performance may not be satisfactory when the optimization must be done very fast in real time under critical conditions, but it is good for most purposes. Certainly our universe would seem able to spare the time necessary to sample 9206 points.

The inference is never better than the assumption of a uniform prior that it relies on, however. It would seem that in most non-trivial optimization problems the good points in the search space are not as frequent as the bad points, meaning that the corresponding cost functions are not drawn uniformly from the set of all possible cost functions. This is why Thomas English wrote:

"The maligned uniform distribution is actually benign. The probability of finding one of the better points with n evaluations does not depend on the size of the domain [7]. For instance, 916 evaluations uncover with 99.99% certainty a point that is better than 99% of the domain. What is remarkable about NFL and the uniform is not just that simple enumeration of points is optimal, but that it is highly effective."

(English, T. (1999) "Some Information Theoretic Results on Evolutionary Optimization," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), pp. 788-795.)
— Erik
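Erik's figures are easy to reproduce. A minimal Python sketch (the function name `samples_needed` is mine, not from the original):

```python
import math

def samples_needed(r, s):
    """Solve s = 1 - (1 - r)**N for N: the number of uniform random
    samples needed to hit one of the best fraction r of the search
    space with confidence s."""
    return round(math.log(1 - s) / math.log(1 - r))

print(samples_needed(0.01, 0.95))     # best 1%, 95% confidence -> 298
print(samples_needed(0.01, 0.9999))   # best 1%, 99.99% confidence -> 916
print(samples_needed(0.001, 0.95))    # best 0.1%, 95% confidence -> 2994
print(samples_needed(0.001, 0.9999))  # best 0.1%, 99.99% confidence -> 9206
```

All four of Erik's numbers drop out of the same two-line formula, independent of the search strategy used.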
Thomas M. English, "Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch," Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming, L. J. Fogel, P. J. Angeline, and T. Bäck, Eds., pp. 163-169. Cambridge, Mass.: MIT Press, 1996.

The obvious interpretation of "no free lunch" is that no optimizer is faster, in general, than any other. This misses some very important aspects of the result, however. One might conclude that all of the optimizers are slow, because none is faster than enumeration. And one might also conclude that the unavoidable slowness derives from the perverse difficulty of the uniform distribution of test functions. Both of these conclusions would be wrong. If the distribution of functions is uniform, the optimizer's best-so-far value is the maximum of n realizations of a uniform random variable. The probability that all n values are in the lower q fraction of the codomain is p = q^n. Exploring n = log_q p points makes the probability p that all values are in the lower q fraction. Table 1 shows n for several values of q and p. It is astonishing that in 99.99% of trials a value better than 99.999% of those in the codomain is obtained with fewer than one million evaluations. This is an average over all functions, of course. It bears mention that one of them has only the worst codomain value in its range, and another has only the best codomain value in its range.
— Tom English
| Fraction q | p = 0.01 | p = 0.001 | p = 0.0001 |
|---|---|---|---|
| 0.99 | 458 | 687 | 916 |
| 0.999 | 4603 | 6904 | 9206 |
| 0.9999 | 46049 | 69074 | 92099 |
| 0.99999 | 460515 | 690772 | 921029 |
Source: Tom English, Panda's Thumb comment

I cannot emphasize strongly enough how wrong Dembski is in his comments on random search, as these almost trivial calculations reveal. While Dembski is correct that finding the single optimal solution may be extremely hard, finding a solution which is arbitrarily close to it is actually quite straightforward. It should not come as a surprise that the No Free Lunch theorems have more unfortunate surprises in store for Intelligent Design. More on that later... In the same comment, English wrote:

In 1996 I showed that NFL is a symptom of conservation of information in search. Under the theorems' assumption of a uniform distribution of problems, an uninformed optimizer is optimal. To be 99.99% sure of getting a solution better than 99.999% of all candidate solutions, it suffices to draw a uniform sample of just 921,029 solutions. Optimization is a benign problem with rare instances that are hard. Dembski increases the incidence of difficult instances by stipulating "interesting problems." At that point it is no longer clear which NFL theorems he believes apply. Incidentally, an optimizer cannot tune itself to the problem instance while solving it, but its parameters can be tuned to the problem distribution from run to run. It is possible to automate adaptation of an optimizer to the problem distribution without teleology.

Repeating a quote of Dembski from above:

The upshot of these theorems is that evolutionary algorithms, far from being universal problem solvers, are in fact quite limited problem solvers that depend crucially on additional information not inherent in the algorithms before they are able to solve any interesting problems. This additional information needs to be carefully specified and fine-tuned, and such specification and fine-tuning is always thoroughly teleological.
— Dembski
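The entries in English's Table 1 above follow directly from p = q^n, that is, n = ln p / ln q. A quick check in Python (the helper name `evals_needed` is mine):

```python
import math

def evals_needed(q, p):
    """Solve q**n = p for n: enough evaluations that, with probability
    1 - p, the best-so-far value beats the fraction q of the codomain."""
    return round(math.log(p) / math.log(q))

print(evals_needed(0.99, 0.01))       # -> 458
print(evals_needed(0.999, 0.0001))    # -> 9206
print(evals_needed(0.99999, 0.0001))  # -> 921029, English's headline figure
```

The 921,029 figure is exactly the "just under one million evaluations" English refers to: 99.99% sure of a value better than 99.999% of the codomain.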
81 Comments
fnxtr · 21 June 2006
The Anti-Science League really does seem to miss the point that evolution doesn't "seek the optimal solution", or anything so anthropomorphically deliberate; it is a *result* of the range of the available options at the time. "Best" is constantly in flux. Hence the title of this website.
PaulC · 21 June 2006
PvM · 21 June 2006
I will discuss both the concept of evolvability (why evolution works so well) and the concept of displacement in later postings. Little steps... Otherwise there may be too much information to deal with.
Evolvability, or as I see it the co-evolution of mechanisms of variation, helps understand how and why evolution can 'learn from the past' and how it can be successful.
Displacement is a whole can of worms and I believe that Dembski's approach fails to explain why/how natural intelligence can circumvent this problem, if it even is a real problem. Since evolution is not a global optimizer, I find his displacement problem of limited interest.
William E Emba · 21 June 2006
Moses · 21 June 2006
Poof! I say! Poof!
secondclass · 21 June 2006
GuyeFaux · 21 June 2006
Very nice, succinct post.
I really liked Wolpert's jello article, and thought that it was good enough to blow the whole "Evolution-eats-a-free-lunch" thing out of the water. Basically, it points out that
1) Dembski knows what he wants to prove: evolution is insufficient to produce complexity,
2) Knows how he wants to prove it: show that evolution purports to be an algorithm which performs better over all problem spaces.
3) Does not in fact prove it.
In the world of formal logic and math, there's no such thing as a partially proved theorem.
Alann · 21 June 2006
GuyeFaux · 21 June 2006
Grey Wolf · 21 June 2006
Sir_Toejam · 21 June 2006
Sir_Toejam · 21 June 2006
ya know, sometimes I think Dembski made up this stuff just to piss off his old alma mater.
One does wonder how on earth he could have presented a defendable thesis, if his NFL drivel represents the way his mind actually works.
secondclass · 21 June 2006
Coin · 21 June 2006
ttw · 21 June 2006
Evolution is survival of the adequate.
If there is some "structure" on the problem space, the iterated nature of evolutionary algorithms (putting the next generation's samples in the "better" region) can give a very fast convergence rate.
'Rev Dr' Lenny Flank · 21 June 2006
Shalini, BBWAD · 21 June 2006
Notice how Dembski hurries to wash his hands of providing any substance whatsoever to his beloved theory?
This from the 'Isaac Newton of information theory'.
(snicker)
djlactin · 21 June 2006
Adam Ierymenko · 22 June 2006
Wow... I already knew that Dembski's NFL drivel was bad... but this thread seems to have torn it a few new ones.
I continue to be held speechless by both the intellectual vacuity and the amoralism (lying, deceptive use of fallacies, etc.) of today's religious apologists.
If there's a God, how come his followers behave as if there isn't?
pwe · 22 June 2006
Corkscrew · 22 June 2006
What if the domain is infinite and the cost function goes to infinity somewhere? If I'm understanding correctly, that sort of situation wouldn't allow you to apply this result in any meaningful sense.
Am I horribly missing the point here?
Corkscrew · 22 June 2006
PaulC · 22 June 2006
PaulC · 22 June 2006
Correction: when I wrote "but it will still be much higher than most other values of the fitness function" I should have said "but it will still have a high probability of being greater than or equal to most other values of the fitness function." There's no guarantee that it will be "much higher."
Actually, many optimization problems look a lot like this. Almost all randomly chosen solutions have an objective function that could just as well be set at 0 (alternatively, they violate constraints and are infeasible). You can easily find the 99.9%tile value, but it will be 0. The useful solutions are a vanishingly small percentage of the sample space, so you will need some other kind of optimization to find them. It still might not be hard; for instance if the objective function is convex, some kind of hill-climbing will take you to the solution once you find a feasible starting point.
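PaulC's last point can be illustrated with a toy sketch (mine, not PaulC's): on a convex objective, greedy hill-climbing from any feasible starting point walks straight to the optimum, where uniform sampling of a huge domain would tell you almost nothing.

```python
def f(x):
    # A convex objective with its minimum at x = 3.7
    return (x - 3.7) ** 2

def hill_climb(x, step=1.0, iters=300):
    """Greedy descent: move to a neighbour whenever it improves the
    objective; halve the step when neither direction helps."""
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            step /= 2  # stuck: the optimum is within step/2, so refine
    return x

print(abs(hill_climb(-100.0) - 3.7) < 1e-3)  # True: converges from far away
```

The same routine started from a uniformly random point does just as well, which is PaulC's point: once a feasible starting point is found, convexity makes the rest easy.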
Torbjörn Larsson · 22 June 2006
Let me see if I get the story so far.
English notes that the averaging complaint should be dropped. I don't understand why it doesn't apply, since usually evolution should see a gradient or else be momentarily happy or slowly drift.
Anyway, he notes that those complementary random instances don't fit in the world, as Alann says, and that he has in fact proved it, but that "it remains possible that the "real-world" distribution approximates an NFL distribution closely".
*But* he also notes that "Under the theorems' assumption of a uniform distribution of problems, an uninformed optimizer is optimal" and sufficiently fast for a population. So No Free Lunch for Dembski.
Furthermore, if NFL assumes "(1) never visit the same point twice, and (2) eventually visit every single point in the space" it doesn't necessarily apply to evolution. Populations are related (duh!) and don't cover the whole space they are evolving through.
And since it assumes optimality it doesn't apply to evolution anyway. Furthermore, it doesn't apply since genes and species coevolve. NFL also seems to assume conservation of information. (Perhaps that is why coevolution dropkicks it, as PvM hints.) Since we have an inflow of information from the nonclosed environment on the properties of newly realised features, and from changes in the environment affecting these properties, it also breaks NFL assumptions on the fitness landscape.
I'm eager to see the rest of this story!
Torbjörn Larsson · 22 June 2006
"What if the domain is infinite and the cost function goes to infinity somewhere?"
What does that mean? If you look at fitness instead I guess you have 0-100 % of the population reproducing with on average n children. How is fitness defined, do you need to invoke cost, and how do you map between?
Mike Rogers · 22 June 2006
Corkscrew · 22 June 2006
Donald M · 22 June 2006
Ken Kelly · 22 June 2006
Donald,
Which of the two Dembski quotes in PvM's post do you consider to be misleading, absent their full context?
secondclass · 22 June 2006
PvM · 22 June 2006
Donald seems to go for the ad hominem approach. Seems that ID is even more vacuous than I had imagined.
Still embarrassed for erroneously accusing me of leaving out 16 paragraphs of text?
Sometimes it's best to remain silent Donald, unless you can really contribute something to the really devastating finding that random search under NFL assumptions is trivial.
Care to defend Dembski ?
Thought so...
PvM · 22 June 2006
As far as not addressing all the work of Dembski, don't worry I am working my way slowly through the more obvious and damning errors and mistakes.
I can understand the shock to some who actually took everything Dembski said as gospel, so to speak.
Sir_Toejam · 22 June 2006
poor ducky is a masochist; he never tires of being wrong.
no such thing as bad publicity, eh Donald?
'Rev Dr' Lenny Flank · 22 June 2006
Wow, Donald, has it been a whole month already? Is FL standing by, waiting for his turn?
Yes, yes, yes, Donald ---- science doesn't pay any attention to your religious opinions, and you don't like that. Right. We got it. Really. We heard you the first hundred times.
Of course, weather forecasting or accident investigation or medical practice or the rules of basketball also don't pay any attention to your religious opinions, do they.
If it makes you feel any better, Donald, none of them pay any attention to MY religious opinions either. Of course, I don't throw tantrums over it, like you do. (shrug)
PvM · 22 June 2006
steve s · 22 June 2006
Donald doesn't waste time fact-checking the posts at UD. It would only slow him down. He finds it much easier to just accept what they say is true. I mean, why bother checking? You'll just find out that they misled you.
SPARC · 23 June 2006
Nobody needs NFL, irreducible or specified complexity to prove creation. Actually we can observe it every day. We are just a few clicks away from an obviously created parallel world, and we can easily identify the creators as being Behe, Dembski and others heavenly supported by the DI. However, due to its redundancy it may not be irreducibly complex (taking away one part will unfortunately not mean Armageddon for this world). Thus, it remains to be proven that this creation involved some intelligence.
SPARC · 23 June 2006
BTW, they have created another new
world, have a look
stevaroni · 23 June 2006
Alann · 23 June 2006
I'd like to thank secondclass and Coin for explaining the criteria for a search algorithm that I had missed.
(That it must never repeat a point, i.e. no inherent inefficiency, and must eventually hit every point, so it will eventually get it right.)
I think I understand the NFLT now:
1) There are an infinite number of math problems
2) For any given value there are an infinite number of math problems which return that value
3) Since each given value has an infinite number of problems all values are equally likely
4) In effect this says that the average of all math problems is essentially picking a random number
5) Since the result is essentially random no search algorithm can be better than another.
It is pure garbage. It's like an elaborate proof that 1 = 1 by starting with (A x 0) + 1 = (B x 0) + 1, and implying that this somehow tells us something about A and B.
First, it's an attempt to trick you by starting with the hidden premise that your solution is purely random. Well duh, if your solution is random of course you cannot search for it effectively. They want to gloss over the step where, in order to apply the NFLT, you must first prove the problem is purely random. Yes, just try to explain that there are equally as many scenarios where the life form would be a fire-breathing dragon or a unicorn as there are where the life form would be a fish or a goat.
Second, even in mathematics step 3 is just plain wrong. If there is an infinite number of A and an infinite number of B,
it DOES NOT hold true that there are as many A as B. Infinity does not behave like a normal number:
infinity * infinity = infinity (not infinity squared)
infinity/infinity=infinity (not one)
infinity + infinity = infinity (not two infinity)
infinity - infinity = infinity (not zero)
In fact it should be obvious that certain numbers have special properties making them more or less likely than others. For example, zero would be more likely because any number times zero is zero, while prime numbers would be rarer because they are less likely to arise through multiplication.
Gray · 23 June 2006
Henry J · 23 June 2006
Alann,
Re "Infinity does not behave like a normal number:"
Not to mention that some infinities are larger than others, so much so that there can't be a set of all cardinal numbers (a.k.a. sizes of sets).
Gray,
Re "The natural numbers and the odd numbers, for example, are both infinite, but there are "more" (in some sense that I don't understand; infinity is a bizarre concept after all) of the former than the latter. Right?"
I think you mean integers as compared to real numbers. There's more real numbers than integers. Though otoh, the set of rational numbers is the same size as the set of integers.
Henry
Coin · 23 June 2006
Alann · 23 June 2006
There are an infinite number of:
Prime Numbers (2,3,5,7,...)
Natural Numbers (+1,+2,+3,...): positive integers (sometimes zero is included)
Integers (...,-2,-1, 0,+1,+2,...)
Rational Numbers (x/y): All integers plus all possible fractions.
Irrational Numbers: Decimal numbers with non-repeating digits, including Pi, the square root of 2, etc.
Real Numbers: All Rational and Irrational Numbers
Complex numbers: Any real number plus a real number times the imaginary unit (the square root of -1).
While each may be a bigger set than the last, there is no term for bigger or smaller infinity.
In this case the infinity represented in the biological sense used here says all environments and conditions are equally possible, little details like "on Earth" which make things finite just tend to get in the way of what is otherwise perfectly acceptable Mad Science.
Coin · 23 June 2006
Gray · 23 June 2006
Gray · 23 June 2006
Addendum to my question: For any sets A and B, where B is a subset of A, it seems that the following two statements are true: (i) A contains every member that B contains, (ii) A contains some members that B does not contain. Don't (i) and (ii) support the conclusion that A has more members than B?
Coin · 23 June 2006
Tom Curtis · 23 June 2006
Gray, there are two definitions of the notion of "bigger than" for quantities. Definition one, which you allude to, is that if A is a proper subset of B, then set B is bigger than A. The other definition is that if every member of A can be uniquely correlated with a member of B, but not every member of B can be uniquely correlated with a member of A, then B is bigger than A. For finite sets, both definitions are equivalent.
For infinite sets, most mathematicians treat the definitions as independent, and the second definition as more fundamental. As Coin has shown, it is easy to show that if you treat the "size" of the set of natural numbers as a quantity, then by the second definition the set of odd numbers has the same size as the set of natural numbers. In fact, by that definition, the set of rational numbers is also the same size, and so also is the set of computable numbers. But the set of real numbers is larger.
Some mathematicians dispute this. They claim that the set of computable numbers is the set of real numbers, and that talk about the size of infinite sets is nonsense. I think they're right, but I don't have anywhere near enough mathematical knowledge to justify that opinion. (On the other hand, if they are wrong, Platonism is true, and there is a God, and wishes are horses.)
http://en.wikipedia.org/wiki/Aleph-null
http://en.wikipedia.org/wiki/Constructive_logic
http://en.wikipedia.org/wiki/Computable_number
http://en.wikipedia.org/wiki/Supertask
Doc Bill · 23 June 2006
Where's Donald?
Alas, another mindless, baseless, clueless creationist tossing out Coulteresque assertions as if they were rare gems and not the paste of wishful thinking.
Good-bye, Donald.
'Rev Dr' Lenny Flank · 23 June 2006
Donald will be back next month for another drive-by.
Sir_Toejam · 23 June 2006
Gray · 23 June 2006
Coin,
At some point you will have to say to me what Socrates says to Glaucon, "You won't be able to follow me any longer," but hopefully we're not there quite yet. So let me take one more stab at this.
If B is a proper subset of A, can there ever be a bijection of A and B? Based on the summary definitions you've provided, it seems not. If A contains every member of B, then can't every member of B be "paired" with the same member in A without remainder in B? But A also contains at least one member that B does not. If all the same members of A and B have been paired, aren't any members of A that are not contained in B without a "partner"?
Let A be the natural numbers, and B the odd numbers. Every member of B can be paired with the same member in A (1 with 1, 3 with 3, 5 with 5, ad infinitum) without remainder in B. But A also contains all the even numbers that do not have a "partner" in B; all the members of B are already "taken" by the odd members in A.
The wikipedia entry for Hilbert's Hotel is interesting. I'm actually familiar with the proof for the existence of God that denies the possibility of an actual infinite. It's called the Kalam cosmological argument (a friend of mine wrote his dissertation on it). At present, the whole business strikes me as a bit nonsensical. I wonder whether that sentiment would disappear with a few graduate level courses in mathematics. Isn't there a school of mathematics that rejects all of this talk about "infinite" series? The intuitionists?
Gray · 23 June 2006
Henry J · 23 June 2006
Gray,
Re "(i) A contains every member than B contains, (ii) A contains some members that B does not contain. Don't (i) and (ii) support the conclusion that A has more members than B?"
If both are finite it does - that's the mathematical definition of "finite set": that all proper subsets of a set S are smaller than the set itself. If S has a proper subset the same size as itself then S is infinite.
(Note - Proper subset of a set S = subset of S that doesn't contain all the elements of S.)
(Note - sets S1 and S2 are the same size if there exists a one-to-one mapping between their elements.)
-----------
Tom,
What's a computable number? Does that include all algebraic numbers plus some that aren't algebraic?
(Note - algebraic number = solution to a single-variable polynomial with integer coefficients.)
Henry
Anton Mates · 23 June 2006
Henry J · 24 June 2006
Re "Isn't the way that members are correlated both arbitrary and significant."
Arbitrary, since the question is simply whether or not a one-to-one mapping exists at all. It doesn't really matter which such mapping gets demonstrated, and also it doesn't matter if there are other mappings that aren't one-to-one.
Henry
Marek 14 · 24 June 2006
As for the aleph-null and aleph-one: it was proven that the continuum hypothesis (essentially whether the cardinality of real numbers is aleph-one or higher) is undecidable in standard set theory, so whether you want to accept it or not, you won't hit any contradictions.
Keith Douglas · 24 June 2006
A computable (real) number is one that can be generated by a Turing machine (or a Markov algorithm, Post system, lambda calculus, etc.). Since there are only countably many Turing machines, there are only countably many such numbers, and hence "most" real numbers are uncomputable.
Gray · 24 June 2006
All right, I may have it. The "size" of a set is not absolute but is relative to another set, and the relation between the two is established by a function that correlates the members of one with members of the other. So any set can be larger, smaller or equal to any other set depending on the function used. Right?
If so, is it the case that the odd numbers are smaller than, larger than, or equal to the natural numbers depending on the function used? That doesn't sound quite right.
Anton Mates · 24 June 2006
Gray · 24 June 2006
O.K. So N (natural numbers) and O (odd numbers) are equal in size because there exists at least one function that uniquely correlates all the members of N with all the members of O.
Assuming that I've got that right, my difficulty is as follows. For sets with finite numbers of members, if two sets are of equal size, and all of the members of the first set are correlated to members of the second set by a one-to-one function (any one-to-one function), the second set cannot have members that are not correlated to members of the first set. But for sets with infinite numbers of members, that's not the case.
It's possible to correlate all of the members of O with members in N, but leave some (an infinite number) of the members of N with no correlate in O. It's also possible to correlate all of the members of N with members in O, but leave some (an infinite number) of the members of O with no correlate in N. It follows, paradoxically, that it's possible for a one-to-one function between sets of equal size to leave "extras," as it were.
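Gray's "extras" can be exhibited concretely. A small sketch (the pairings are mine): the map n to 2n - 1 matches every natural with every odd number, while the identity map from the odds into the naturals is also one-to-one yet leaves every even number unmatched.

```python
N = range(1, 11)  # a finite window onto the naturals, 1..10

# Bijection-style pairing: n pairs with 2n - 1, so every natural
# in the window gets its own odd number, with none reused.
pairing = {n: 2 * n - 1 for n in N}
print(sorted(pairing.values()))  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

# Identity map from the odds into the naturals: one-to-one, but the
# evens are left without partners, Gray's "extras".
odds = [n for n in N if n % 2 == 1]
extras = [n for n in N if n not in odds]
print(extras)  # [2, 4, 6, 8, 10]
```

Both maps are one-to-one; only the first uses up all of the target set. That two such maps can coexist is exactly what distinguishes infinite sets from finite ones.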
Henry J · 24 June 2006
Re "It follows, paradoxically, that it's possible for a one-to-one function between sets of equal size to leave "extras," as it were."
That's essentially a rephrasing of the definition of infinite set - it can have the same size as some proper subset of itself.
Ergo, don't expect infinite sets to "act like" finite sets; they don't.
Henry
Gray · 24 June 2006
SPARC · 25 June 2006
Sorry, my mistake: use this instead of that.
Still, it remains a vacuum filled with emptiness
SPARC · 25 June 2006
Stevaroni: OK, I just blew out my irony meter on that one.
For those who haven't looked, the page is the research intelligent design.org wiki homepage.
At least in my browser, it comes up as...
Sorry, that was my mistake, one slash too many at the end of the URL.
Click here and you will still find a vacuum filled with emptiness
William E Emba · 25 June 2006
Donald M · 25 June 2006
'Rev Dr' Lenny Flank · 25 June 2006
Hey Donald, why won't you answer the simple questions I keep asking you?
What, again, did you say the scientific theory of ID is? How, again, did you say this scientific theory of ID explains these problems? What, again, did you say the designer did? What mechanisms, again, did you say it used to do whatever the heck you think it did? Where, again, did you say we can see the designer using these mechanisms to do ... well . . anything?
Or is "POOF!! God --- uh, I mean, The Unknown Intelligent Designer --- dunnit!!!!" the extent of your, uh, scientific theory of ID .... ?
How does "evolution can't explain X Y or Z, therefore goddidit" differ from plain old ordinary run-of-the-mill "god of the gaps?
Here's *another* question for you to not answer, Donald: Suppose in ten years, we DO come up with a specific mutation by mutation explanation for how X Y or Z appeared. What then? Does that mean (1) the designer USED to produce those things, but stopped all of a sudden when we came up with another mechanisms? or (2) the designer was using that mechanism the entire time, or (3) there never was any designer there to begin with.
Which is it, Donald? 1, 2 or 3?
Oh, and if ID isn't about religion, Donald, then why do you spend so much time bitching and moaning about "philosophical materialism"?
(sound of crickets chirping)
You are a liar, Donald. A bare, bald-faced, deceptive, deceitful, deliberate liar, with malice aforethought. Still.
psychologist · 25 June 2006
Donald, you should have asked Pim for his source if you doubted his quote. Instead you called him a liar. You owe Pim an apology.
'Rev Dr' Lenny Flank · 25 June 2006
Barley Zagner · 25 June 2006
PvM · 25 June 2006
PvM · 25 June 2006
Henry J · 25 June 2006
Re "Now that I understand what is meant by the claim that two infinite sets are the "same" size or "equal" is size, I am of the opinion that it is a complete misuse of those words."
Well, the technical term is "cardinality of the set", or "cardinal number". Which for finite sets is essentially the same thing as the size of the set, so I don't generally bother distinguishing the terms. I don't know if mathematicians limit the use of the word "size" to finite sizes or not.
Henry
Anton Mates · 25 June 2006
William E Emba · 27 June 2006
miriam · 5 July 2006
Sorry for this