and then links to his GA code, here. Salvador adds later in his post that

"The following computational theatrics are akin to what Dave Thomas performed."

When ofro commented:

"What I don't understand is the basic premise of your example, which apparently already has an explicit solution of the problem built into the program."

Sal responded:

"I'm afraid that isn't quite correct because if you go to ga.c, and do a text search for 500500 you won't find it. The solution was never explicitly stored anywhere."

It seems Sal is implying that his sum-of-integers genetic algorithm has no "fixed target," and is akin to my work on Steiner Trees. That is patently false. Cordova's algorithm is exactly like Dawkins' "Weasel," with the major difference being that, while Dawkins was searching for the specific target "METHINKS IT IS LIKE A WEASEL," Cordova is searching for the specific sequence of numbers 251, 252, 253, ..., 750. When these are summed and doubled, the result is the sum of the numbers from 1 to 1000: 500,500.

Another oddity was that Cordova's code wouldn't even compile -- it took me a couple of hours to reverse-engineer it and figure out what in tarnation he was doing. As an exercise in smoke and mirrors, Cordova's algorithm is remarkable. But, unlike my program, it is definitely looking for one, and only one, Answer.

PROOF

Cordova's program loops from 1 through half the desired end-number; if the sum of the first N integers is desired, and N=1000 (as in Cordova's listing), then he loops from 1, 2, 3, up to 500 (=N/2). For each of the 500 numbers, Cordova stores a number that represents a "Midpoint Distance" (I'll call this MidPoint[i] for short, where i is any number in the sequence 1, 2, 3, ..., N/2). Initially, his Midpoint Distances are generated randomly, with values anywhere from -2N to +2N (e.g. -2000 to +2000). Then, he loops through each of the N/2 numbers once per generation, and repeats the process over many generations. For a given generation, and a given looping number from that generation (such as i=37, with value MidPoint[37]), Cordova calculates a "pseudo-euclidean distance":

double pseudo_euclidian_distance(double x1, double x2, double y)
{
    return (x1-y)*(x1-y) + (x2-y)*(x2-y);
}

Here's how he uses this distance. At each step in the loop (from 1 to N/2), the "distance" to the current MidPoint value is examined, and a "mutated" MidPoint distance is also derived, always within a few units (plus or minus 2.5) of the original "distance." If the mutated "distance" is shorter, that distance replaces the current value, MidPoint[i]. Otherwise, the current value is retained. So, something is definitely being minimized, but what?

Aside: if you think this exposition is getting too wordy, please do try to figure out Cordova's code for yourself. I'll wager you'll be back here before you know it!

If the pseudo-Euclidean distance is called D, then Cordova is trying to minimize D = (x1-y)^2 + (x2-y)^2. To see where this will lead, one can take the derivative of D with respect to y, and set it to zero to find the extremum (which is indeed a minimum, but I'll leave that detail to the reader):

D' = 2(x1-y)(-1) + 2(x2-y)(-1) = 0
=> (x1-y) + (x2-y) = 0
=> y = (x1 + x2)/2

Whatever x1 and x2 are, Cordova's function will be minimized when the value of y (which turns out to be a MidPoint value) equals the average of x1 and x2.

Delving deeper into Sal's nightmare, one finds that, if you are on the ith number in the loop (say, i = 37), what gets sent down as x1 and x2 are just the index i itself, and that same index + N/2. Thus, for i=37, it turns out that x1 = 37, and x2 = 537. Not coincidentally, the average of 37 and 537 is 287, or simply 37 + 250. And that's almost all there is to it. Even if the value for the 37th midpoint was far off at the beginning (say, -1634), as the generations proceed, any mutations that serve to bring the midpoint closer to the intended value (=287) are accepted, and those that don't are rejected. Formally substituting i for x1, and i+N/2 for x2, we see that midpoints are drawn inexorably to the value

y = (x1 + x2)/2 = (i + i + N/2)/2 = (2i + N/2)/2 = i + N/4.

If N is 1000, N/4 is 250.

The first midpoint (for loop index i=1) is drawn to 251 (=1 + 250), while the 37th midpoint is drawn to 287 (=37 + 250), and the final index's midpoint is drawn to 750 (=500 + 250). All that remains is to evaluate the final "estimate," which is just the sum of the 500 (or N/2) midpoints, doubled. This sum is just a roundabout way of calculating the sum of the first N integers, which the brilliant Gauss found useful as a child to escape some boring math drills: instead of adding the numbers up, Gauss realized the sum of the first N numbers was just N*(N+1)/2.
79 Comments
Alan Kellogg · 16 August 2006
Could one express it as...
(1 + 10^N) × (10^N / 2)?
Thus (1 + 10,000) × 5,000 = 10,001 × 5,000 = 50,005,000.
Matt Peterson · 16 August 2006
*golf clap*
Parse · 16 August 2006
Okay. I've seen better looking code in the Obfuscated C competition, but enough with the ad hominems.
Cordova's claim of "I'm afraid that isn't quite correct because if you go to ga.c, and do a text search for 500500 you won't find it. The solution was never explicitly stored anywhere." is technically true. Or at least, everything after the word "because" is true. It's equivalent to running a genetic algorithm where the fitness is determined by how close the ROT13 of the phrase is to "ZRGUVAXF VG VF YVXR N JRNFRY". You won't find "METHINKS IT IS LIKE A WEASEL" in that, but it is comparing it to a specific goal. The fact that the goal is calculated - without any randomness added - from constant inputs for a given value means that the goal is hard-coded into the algorithm. You can try to slice it up, or hide it, but it will still be there.
khan · 16 August 2006
In his reply to the Design Challenge, Mr. Cordova describes a genetic algorithm of his own devising, for solving the problem of adding up all the integers from 1 to, say, 1000. (The result is 500,500, by the way).
I derived & proved a formula for such when I was a 16 year old math nerd.
And I certainly was not the first.
n(n/2 + 1/2) = sum(n...1)
(Yes, the symbols might not be correct.)
What does that have to do with evolution?
IanC · 16 August 2006
Don Baccus · 16 August 2006
W. Kevin Vicklund · 16 August 2006
Torbjörn Larsson · 16 August 2006
So. If Salvador doesn't understand what a GA is, should he criticize it?
Well, for our laughs, perhaps. He also believes that the code in a GA may be randomly changed, so that is another 'test' his code must pass.
BC · 17 August 2006
Mark Nutter · 17 August 2006
That's nice work, but I think one of your conclusions is overdrawn. There is a difference between "it seems Cordova is implying" something untrue, and "Cordova is lying."
It sounds to me like Cordova is saying only that the answer itself (500,500) is not part of the code, but his primary point is the argument that one can specify an answer without explicitly spelling out the answer. Indeed, his whole point is that simulations like this are merely exercises in using an algorithm to obscure the fact that the outcome is programmed in. You have elucidated that this is the case in Cordova's algorithm, but that does not, in itself, diminish his point.
Not that I agree with Cordova's criticisms of genetic algorithms either, of course. I just think perhaps you rushed to a conclusion regarding "lying." Let's just say his criticism is irrelevant, since your example finds a truly unknown solution, as opposed to coding for a specific predetermined answer to the problem.
Corkscrew · 17 August 2006
Mind if I just run through a few definitions to get this all clear in my head?
There are two major divisions of genetic algorithm fitness function: extrinsic and intrinsic. Extrinsic fitness is imposed by means of an explicit fitness function (this algorithm would be an example). Intrinsic fitness arises where the organisms follow behavioural rules, and those rules themselves give rise to a sort of fitness value for the organism (examples include the Tierra simulation and the Real World [tm]).
What we're discussing here is a further subdivision between extrinsic fitness functions that are locally or globally described. The former is more biologically realistic in that it simulates "blind" evolution better. The latter is less realistic, and is usually only used for demo purposes. The difference is that, if you know the global fitness function, using a GA is completely redundant.
It is of course possible for all three types of fitness to describe the same landscape. For example in the case of the Steiner solution, the same fitness landscape could be described by the following approaches:
1) Extrinsic/global - a complete listing of the lengths of the network for all conceivable patterns of nodes and lines
2) Extrinsic/local - a fitness function which selects possibilities on the basis of their length
3) Intrinsic - consider a population of line/node patterns, plus a population of "predator" organisms. These could be short lines which appear at random and "eat" anything they fall on. This would create a strong selective pressure towards shorter lines.
All three approaches describe the same landscape, and examples like this are the reason why it's generally considered valid to use GAs with extrinsic/global fitness functions as examples.
Can anyone with more knowledge than me confirm whether that's broadly accurate?
ah_mini · 17 August 2006
Have to say I agree with a couple of other posters. It's one thing to say that someone's critique of your original post was inadequate, incompetent, incorrect, etc. It's another to say that your critic is a liar. That requires a whole different level of proof which, quite frankly, is not evident in your post. I think maybe you should go back and tone down some of the indignation as, other than that, it was a very revealing read as to why ID criticisms of GAs are bankrupt.
Wesley R. Elsberry · 17 August 2006
What sort of person writes what is supposed to be a non-deterministic algorithm, but only provides one fixed and immutable pathway down which it may travel? Salvador Cordova is such a person, apparently. I was surprised to find that Cordova's code returned exactly the same answer on multiple runs, so I had a look inside. His idea of pseudo-random number seeding was this:
srand(0);
If you add
#include <time.h>
and change that other line to
srand(time(NULL));
then at least you don't get exactly the same number out on every run.
Wesley R. Elsberry · 17 August 2006
Cordova apparently also either made a sloppy guess on the number of generations to run, or very carefully tuned that number so that his program would *not* converge on the correct result.
A little exploration finds that 6650 generations mostly fails to reach convergence on 500500, while 6660 generations often succeeds in reaching convergence. Coincidence?
If you return to the brain-dead "srand(0)" initialization, 6515 is the number of generations at which the program converges.
Wesley R. Elsberry · 17 August 2006
Salvador T. Cordova · 17 August 2006
Dave Thomas · 17 August 2006
Dave Thomas · 17 August 2006
steve s · 17 August 2006
Wesley R. Elsberry · 17 August 2006
Dave,
I'm using GCC:
gcc version 2.95.3 20010315 (release) [FreeBSD]
ofro · 17 August 2006
Thomas, Elsberry, Cordova (anybody else?):
Guys, could you please tone it down with your polemics! I realize that you aren't exactly the best of buddies. I knew I didn't get the most truthful of answers from Cordova, and I was going to consult with my son to analyze the code and prepare a reply. You saved me that trouble. At the same time, if I had known that my comment/answer was going to be exploited that much, ....
You are taking the fun out of telling the folks over at UD about experimental facts that the hard-core don't want to hear, but maybe some visitors will listen and think if arguments are handled in a more civilized manner. I have to admit UD's civility is a design feature that I appreciate even if the content makes me cringe at times.
(And I won't admit that in my younger years I probably would have adopted a more raucous style, too.)
Mark Nutter · 17 August 2006
Glen Davidson · 17 August 2006
All of this is good, of course. But we would do well not to forget the big picture, which is that evolution (by known/knowable mechanisms) is an explanation, the only meaningful one, for what we see in the organic world. Genetic algorithms are attempts to mimic what has been reasonably inferred from the evidence, and have been generally adapted to serve our own purposes rather than to reproduce the meaningless yet amazing complexity that we observe in the world of organisms.
That is to say, the template for all genetic algorithms, the explanatory theory of evolution by known (perhaps by some yet unknown as well) mechanisms is what is really at issue. RM + NS (+ the rest) is what fits the data found in biology, the neutral rates of evolution, the lack of rational design in organisms, the derivative nature of all of biology. No known designer, or any other phenomenon, has ever produced anything like it.
Genetic algorithms using essentially RM + NS typically reproduce the patterns that we see in undirected evolution--divergences, derivation, non-rational design. This is what they are good at showing, whether or not they are directed toward some goal. The lack of goals in biological evolution only makes the search space much wider, something that Sal would prefer to smother over with meaningless challenges to replicate evolution computationally. That is to say, IDists like to put evolution into question using non-empirically-based challenges, rather than to deal with the actual evidence that gave us the idea of genetic algorithms in the first place.
We have a superb example of undirected evolution in the biological realm. No goal or rational design has ever been demonstrated there, while the endless evidence of derivation is almost the entire theme of biology. So I applaud Dave Thomas's excellent work at coming up with an undirected evolutionary analogy to use in computers; however, I don't want the squabble with those who avoid empirical modeling like the plague to obscure the fact that Sal et al. haven't in the slightest dealt with the success of undirected RM + NS + in explaining what we see in ourselves and in our world.
We have an explanation (one that is well beyond exact duplication--don't think that isn't the reason why they demand such a duplication), they have none. It's no wonder that Sal turns away from the actual biological evidence in order to deny the fact that we have a very good explanation, yet it is only a measure of how unscientific his thinking is. He is not trying to come up with an explanation in order to help us understand the details of what we see, he is only trying to prevent the use and propagation of the successful explanation that has been partly adapted for our own use--because it comes up with solutions that humans would not readily think of.
Why do we want to use genetic algorithms at all, Sal? Surely if the organic realm were chock full of rationally-designed mechanisms we would have little reason to try to mimic an evolution that simply provides the same sort of rational designs that humans and computers produce. No, we looked at organisms, saw that they are not rationally designed, yet often enough had useful alternatives to rational designs, and so we came up with mimics in order to supplement intelligent design with a measure of evolution. We use genetic algorithms to mimic the non-intelligent designs produced by evolution, so that we can supplement our intelligent rational design with solutions similar to what we observe in the organic world. Any genetic algorithm that creates patterns analogous to the patterns found in life reproduces important aspects of RM + NS +, and non-teleological genetic algorithms only come closer to being a true mimic.
Why use evolutionary processes at all if they only produce what is indistinguishable from intelligent designs anyhow? Can Sal differentiate between evolutionary processes and rational intelligent design processes, as well as identify the differences in their results?
He is trying very hard not to be able to do either one.
Glen D
http://tinyurl.com/b8ykm
steve s · 17 August 2006
Hey Glen, do you know where I can find this?
sparc · 17 August 2006
Maybe it would help if Salvador quit his computer for a while, ordered a peptide (e.g. Methinksitislikeaweasel) and a phage display library (NEB # E8121L, $1,480) and tried some phage panning. Media and plasticware are inexpensive, and the experiments can easily be carried out by people not too experienced with wet experiments. Even if DI had to buy the necessary equipment (which seems quite likely if one looks at the number of experimental papers they have published), less than $5,000 should be sufficient to set up the lab for this purpose. If one of you guys invited him into his lab, less than $2,000 would be sufficient for the complete experiment. Indeed, he would have enough material for WD's research assistant to run the experiments in parallel.
He would experience everything important in evolution: randomness (phage library), selection (panning) and reproduction (amplification of selected phages in bacterial host cells).
However, I am afraid that he would deny this, because the suggested library contains all possible heptapeptides, which he would interpret as front-loaded.
Dave Thomas · 17 August 2006
T_U_T · 17 August 2006
I've just finished dissecting Sal's "masterpiece", and, ouch! Methinks the "incompetent and unaware of it" article was written exactly about him. This is not even a genetic algorithm the guy produced at all! What a shame...
Steviepinhead · 17 August 2006
As has been said before, even on this thread, there is some heuristic utility to demonstrating to the doubtful, untutored, or fence-sitters that ID is a sham (and most of its best-known practitioners shameless shamsters).
The same cast of poseurs has every vested interest--psychological, religious, and at times professional and financial--to engage in obfuscation, misdirection, fact-fudging, logic-twisting, and outright prevarication, as they attempt to slither off the well-set hook. However "polite" and "politic" it might be to pretend to ignore this deeply-ingrained habit of falsehood, it would be neither fair, ethical, nor truthful to do so.
In Sal's case, however, we also seem to be dealing with some rare species of pseudo-intellectual masochist.
The demands of ethics are thus rendered more complex: clearly we must continue to expose the vacuousness and illegitimacy of his "arguments."
Whether it is "better" to continue enabling Sal's self-flagellation by assiduously exposing his moral vacuousness--or whether one should refrain from further fueling his perverse pleasures--is a delicate moral question indeed, perhaps best left to the discretion and scruples of each individual.
Mark Nutter · 17 August 2006
Mark Nutter · 17 August 2006
Caledonian · 17 August 2006
Don't give me this "we don't know he was intentionally trying to deceive" garbage.
Cordova is either a complete and utter incompetent, or he is lying. Which of these two possibilities do you find *more* insulting? There's not much to pick between them in my view, but I suppose I'd prefer to be accused of being a liar instead of a fool.
Dave Thomas · 17 August 2006
Good point, Mark. While I do know that Sal is saying things that are patently false, I can't prove that he knows it himself. Some people can really get tied in knots trying to accommodate their "world view lenses."
The reason I played the "Lie" card is that Sal is not being true to his own discipline of computer programming. I have proved that my Steiner GA is quite different from the Dawkins "Weasel" style GA's; but, Cordova has said that my GA is no different than "Weasel" GA's; and, Cordova provided an example of a number-summing GA he says is akin to mine, and further, that in his GA, "The solution was never explicitly stored anywhere," making it supposedly "different" than Weasel.
That is the lie that has me bugged. That is why I went to the trouble of proving that Cordova's summation GA is just another "Weasel" style fixed target algorithm, with one unique answer hard-coded in.
Does Sal know he's not speaking the truth? In reality, probably not. I was hoping that a rigorous proof that his GA is just another "Target" algorithm quite unlike my Steiner Tree GA might shake him loose, but from today's comments at UD, that doesn't appear at all likely.
Sad, really. But Sal will have a harder time continuing to fool himself once I publish my post on "Genetic Algorithms for UD Software Engineers," probably next week.
T_U_T · 17 August 2006
Cordova's summation GA
Cordova's ga.c is not a GA in the first place. It is a badly mangled version of simulated annealing with T=0 from the start.
Coin · 17 August 2006
Wesley R. Elsberry · 17 August 2006
Alann · 17 August 2006
- Is the determination of selectability in natural selection considered a random process?
- Is it a problem if it is considered a designed process?
- What are the options other than random and designed since natural selection seems to be neither?
- How could you represent natural selection in a computer model without being designed selection?
I think the argument is reasonable, because I don't think determining fitness in terms of most efficient should be considered a designed selection.
Oh, and as for the summation problem, I want to share the explanation of the solution I was taught, which was so elegant I never forgot it. Take the series of numbers:
1 2 3 ... n-2 n-1 n
Place beneath it the series in reverse order:
n n-1 n-2 ... 3 2 1
Add the two lines together:
n+1 n+1 n+1 ... n+1 n+1 n+1
Note that each value is n+1, that there are n columns, and that this represents the sum of two copies of the series. So the total of any one copy would of course be ((n+1)*n)/2.
Alann · 17 August 2006
Correction: I meant to say unreasonable.
Glen Davidson · 17 August 2006
T_U_T · 17 August 2006
alienward · 17 August 2006
Coin · 17 August 2006
Dave Thomas · 17 August 2006
Sir_Toejam · 17 August 2006
Dave Cerutti · 17 August 2006
OK, I agree with you that Sal's algorithm is merely minimizing a series of 500 numbers. In each of 500 elements of an array, he picks a number that's randomly higher or lower than the current guess, and if it's closer to the midpoint, which is indeed defined in the code, then he takes it. And he just repeats this guess-check 5000 times for each of the 500 elements of the array. Indeed, there is no genetics here, no populations of changing answers, just variables with names that sound like a GA.
But I'm not sure how, in your Steiner network, the solution is not somehow embedded in the statement of the problem itself. Is it because, in a given solution, there could be numerous collinear nodes generated along the way to constructing the solution, such that there are infinitely many ways of creating a solution of the same overall shape? I guess my hangup is that there seems to be one correct shape to the Steiner solution (discounting symmetry in the problem), which would ultimately be found by any given algorithm with the scoring function stated in the problem, provided sufficient search time.
Dave Thomas · 17 August 2006
Brit · 17 August 2006
Corkscrew · 17 August 2006
Did Dave Thomas not know in advance his fitness function would have a chance of being marginally successful (in a MacGyver sense at least), or did he have some monkey code his fitness functions or describe the fitness function to him?
Of course he knew that it would be at least marginally successful at the task it was set - that's what evolution does.
I think that the key take-home point of Dave's demonstration is that selective processes can achieve results which are complex, high-information and specified - as long as your specification in some way derives from the fitness function. So, in this case, the specification "Steiner tree" derives quite clearly from the selection criterion "shortest network".
Let's apply this to, say, the bacterial flagellum. What's the selection criterion operating on the bacteria? Ability to reproduce. Can the ability to move around ever provide a boost to an organism's fitness? Why, I believe it can. Is the flagellum very efficient at moving the organism? I do believe it is. So, in specifying the flagellum as our target space, are we accidentally choosing a target space that's at a local maximum of the fitness function? Apparently so.
Oops.
Don Baccus · 17 August 2006
djmullen · 17 August 2006
/* ga.c */
/* this program will answer to question, "What is the sum of the numbers from 1 to 1000?"
through a genetic algorithm. The algorithm pairs up numbers form 1 to 1000. Rather than compute the midpoint
via a simple calculation it takes a random number as a starting point and then mutates the random number and
uses a fitness function to select between the mutant and the original number to give the current best midpoint estimate.
The process is repeated with increasing refinement. Twice the sum of the midpoints then becomes the some we are seeking.
Snapshots of the algorithms progress are given along the way.
He can't even get his comments right.
Torbjörn Larsson · 17 August 2006
BC:
"Careful research is needed to differeniate between then two, and IDists want to jump to the conclusion that any IC system is actually a class B system."
I think it is impossible to do this research. IC is ill-defined, and we can't know when something is 'irreducibly complex'.
Algorithmic complexity theory says that:
"given a system S, you cannot in general show that there is no smaller/simpler system that performs the same task as S. ... Godel just came in and threw a wrench into the works. There is absolutely no way that you can show that any system is minimal - the idea of doing it is intrinsically contradictory." ( http://scienceblogs.com/goodmath/2006/06/the_problem_with_irreducibly_c_1.php )
If we embed creationistic complexity in its actual context of identifying a certain smallest system, we find that simplicity isn't well-defined, so neither is this "complexity". It is a - wait for it - *vacuous* concept! :-)
So if we find a system to be removably failing, we can't distinguish between class A or class B systems. Any number of scaffolding and exaptation could give us class A.
Wesley:
"Myself, I think the congruence between apparent intent and result makes "lie" by far the smarter pick."
Point. But there is also the idea that cognitive dissonance prevents people from seeing the obvious. Salvador could have been constructing what he thought a GA was, because he can't face the truth.
But I can live with a consensus opinion from people who have been engaging him more. At some point it gets more offensive to pick stupidity instead of lying. (Not that I would ever suggest Cordova has passed that point - or wouldn't I? ;-) And cognitive dissonance is lying anyway - to oneself.
Sam Garret · 17 August 2006
Sam Garret · 17 August 2006
Since all simulation programs require conditional processing, and since conditional processing is a hallmark of design, once you've simulated something you've proved that something's designed.
1) Meteorological simulations exist
2) Therefore the weather is intelligently designed
Torbjörn Larsson · 17 August 2006
Mark:
"I think your point is much stronger if you focus on the incorrectness of the claim, rather than on the inferred motives of the claimant. Science does not currently provide us with a means of objectively detecting an intent behind the behavior given only the evidence of the behavior itself. "
"If your side of the argument is that Cordova is deliberately and intentionally spreading lies, people who are ID-friendly or ID-neutral will be tempted to think that your side has lost if you cannot prove Cordova's statements are intentionally deceptive. Since that's nearly impossible to prove, you can be 100% about Cordova's inaccuracy, and yet people may still conclude that Cordova won the argument. It's much more effective to focus on the facts, and draw people's attention to the discrepancy between what Cordova claimed, and what we actually find in the evidence."
Also good points.
So I think I will flip-flop here on the lying, and err on the side of caution.
Tune in tomorrow for the next exciting episode of our show - is an IDiot unintentionally or intentionally lying? And how much do we care? :-)
bill Farrell · 18 August 2006
What I'm going to suggest is very disgusting but bear with me. Go out to UncommonDescent (into madness) and read the Cordova thread. Then, take a shower. You'll need it.
UncommonDescent into prevarication I'd call it. I think that Sal genuinely believed that he pulled a fast one with ga.c but once unmasked scurried to weave a tissue of lies to cover his original deceit. Oh, just street theatre!
It's all Vintage Sal, vintage ID. A poorly thought out posting followed by frantic backpedaling followed by the Clinton Defense, that is, arguing over the placement of commas and the definition of the word "is" in the blizzard of retorts. And in the end retreating behind banned and lost replies because they're so hurtful, I mean, truthful. Well, same thing.
I must say, though, he follows his master's footsteps faithfully.
These clowns think they're going to "topple Darwinism?" Give me an effing break.
RBH · 18 August 2006
caligula · 18 August 2006
Michael Suttkus, II · 18 August 2006
A few years back, I programmed (in VBA, because it was available at the time) something similar using poker hands. The computer randomly generated a ten-by-ten grid of five-card hands. I assigned each a score based on its value in poker, and let them reproduce, mutate and compete with each other. Mutation was restricted to regulation poker hands (i.e., producing a hand with two queens of spades was forbidden), but was otherwise free and as random as the compiler could produce (no randomize 0 for me!).
The answers were all pretty similar. Pairs occur fairly often and have reproductive advantage over the general random hand. Eventually the pair mutates into a three-of-a-kind and then a four-of-a-kind. The playing field is eventually dominated by 4oaks in almost every simulation.
What interested me is what the playing field wasn't dominated by:
1. It wasn't particularly dominated by high 4oaks. Sure, there was a slightly higher than average result due to pairs of higher cards having a higher value than pairs of lower (since more than one pair is likely to show up in a random 100 poker hands), but selective pressure worked far better to increase the number of cards and very badly for increasing the value of the cards.
2. The highest possible poker hand, a royal flush, never appeared. If "the solution" was hard coded into the selection routine, this should have been it, but it never appeared in all the times I ran the simulation. This makes sense, of course. Building pairs allows step-by-step improvement, but in poker a four-card straight isn't particularly better than a three-card straight. You either have a full five-card straight or you don't. Similarly, there's no success for flushes. If a flush or a straight appeared at random in the first generation, then it had great success and would dominate the board, but that wasn't common. Even then, however, straights had no pressure to become straight flushes, and flushes had only low pressure toward increasing the value of the cards to the maximum, which would eventually result in a straight flush but never did in the times I ran the simulations.
As genetic algorithm programs go, it was fairly primitive, but I was pleased that it emulated some of the pitfalls of natural selection rather well, and it certainly showed that natural selection and mutation can produce improvements by whatever measure of local fitness is defined.
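Michael's VBA experiment is easy to re-create. Below is a minimal Python sketch (my reconstruction, not his original code, and simplified to ranks only with suits ignored): fitness is just the size of the largest group of equal ranks, so pairs beat junk hands and four-of-a-kind beats everything else. Truncation selection plus single-card mutation drives the population toward quads, for exactly the step-by-step reason he gives.

```python
import random

def fitness(hand):
    """Size of the largest group of equal ranks (1 = junk ... 4 = quads)."""
    return max(hand.count(rank) for rank in hand)

def mutate(hand, rng):
    """Replace one card's rank at random, never allowing five of a kind."""
    h = list(hand)
    i = rng.randrange(5)
    new_rank = rng.randint(2, 14)
    # Redraw if the other four cards already hold four copies of this rank.
    while h[:i].count(new_rank) + h[i + 1:].count(new_rank) >= 4:
        new_rank = rng.randint(2, 14)
    h[i] = new_rank
    return h

def evolve(generations=200, pop_size=100, seed=1):
    rng = random.Random(seed)
    # Random initial population: five ranks (2..14) per hand.
    pop = [[rng.randint(2, 14) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # truncation selection
        offspring = [mutate(rng.choice(survivors), rng)
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Note that no particular hand is coded in as a target: selection only compares hands against each other, yet the population still climbs toward four-of-a-kind.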
Ric · 18 August 2006
Because they wouldn't let me post the following over at UD, I will post it here.
Let me first note the irony of people who are always crying about their views being suppressed so readily suppressing the views of others.
Okay, now my comments:
Sal Cordova's response to Dave Thomas' program is continually to say, "It can't prove evolution, because the law of nature (i.e., natural selection) that the program relies on was programmed in by a human being. This is evidence of intelligent design." That is, of course, laughable.
Don't we have evidence that human beings can clearly code programs to replicate laws of nature? Physics programs replicate forces like friction and velocity all the time. That the predictions generated by these programs apply to the real world is proven every time a plane or bridge behaves as it is supposed to when friction and velocity are applied. So by Sal's logic, these programs shouldn't be valid, or they should be evidence that some intelligent force created friction and velocity.
Of course, anyone without other fish to fry would conclude simply that human beings can code a virtual representation of a natural force, and that the ability to do so doesn't say anything about whether or not that natural force is the result of intelligence.
Edward Braun · 18 August 2006
Jake · 18 August 2006
Could any loop containing an if/then be considered a genetic algorithm? If that's the case, they are everywhere.
Chris Hyland · 18 August 2006
I think multi-objective genetic algorithms are a better example, where there are multiple fitness functions and so a single optimal solution is never found. Instead you get a population of solutions that are a compromise of the different functions.
http://img235.imageshack.us/img235/3395/paretohz9.png
That way you can't complain as much that the solution is coded in the algorithm.
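Chris's point can be made concrete with the idea of Pareto non-dominance: with two competing objectives there is no single "best" solution to code in, only a front of trade-offs. Here is an illustrative Python fragment (mine, with made-up candidate scores, not from any program discussed in this thread):

```python
def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly
    better on at least one (both objectives maximized here)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical candidate solutions scored on (objective1, objective2):
candidates = [(1, 9), (3, 7), (5, 5), (7, 3), (9, 1), (2, 2), (4, 4)]
print(pareto_front(candidates))
```

Here (2, 2) and (4, 4) are eliminated because (5, 5) beats them on both objectives, but the remaining five all survive: none dominates the others, so the algorithm ends with a population of compromises rather than one answer.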
ofro · 18 August 2006
KC · 18 August 2006
Tyrannosaurus · 18 August 2006
Sal could be said to be either a liar, a fool, or an ignoramus. Which one is "less" insulting? It makes no difference, since a person who cannot or will not see the errors of his line of thinking in his own field of study, even after being provided with the evidence, does not deserve any respect at all.
Tyrannosaurus · 18 August 2006
Replying to comment #120349, posted by caligula on August 18, 2006, 02:53 AM:
I agree with you that Dave's GA is a closer descriptor of evolution. The point at which Sal shows deception or willful ignorance (take your pick) is in arguing that you have A SOLUTION as a target. IDiots like Sal program their models to arrive at that one solution, and then argue against evolution on that basis. The ToE, on the other hand, arrives at solutions that may or may not be the optimum but are good enough.
Registered User · 18 August 2006
My favorite Sal Cordova lie was when he claimed that some professors (plural) had made some favorable comments about some ID peddling video or something, then when asked to give the names he suddenly switched to the singular and pretended that he had merely added an "s" accidentally.
Even prior to that, I suspected that Sal was morally bankrupt, because of his habit of bragging about converting kids with his false, incomprehensible baloney. But when it became clear that Sal would lie about *anything* if he suspected he could get away with it, the picture of Sal's pathology really crystallized.
There's something pathetic about watching people discuss science with Sal. It's like watching a documentary film about a mentally ill person who is slowly killing himself.
secondclass · 18 August 2006
I'm hoping that Salvador can address the fact that Dave's program generates CSI. Dembski's definition of CSI entails a pattern that's tractably recognizable by a human, so even if a solution or near-solution is "embedded" in the program, it doesn't constitute CSI unless we can look at the program and see the solution there. Dave's program does not exhibit CSI, but the output of the program does. How do you explain that, Salvador?
Anonymous_Coward · 18 August 2006
Anonymous_Coward · 18 August 2006
Coin · 18 August 2006
Ric · 18 August 2006
steve s · 18 August 2006
Wesley R. Elsberry · 18 August 2006
Torbjörn Larsson · 19 August 2006
"Um, that doesn't address the point I was talking about, which was Cordova telling "ofro" that his code did not include an explicit solution, when we --- and Cordova, self-admittedly --- know that it does."
I can't see that he says so. He admits that he has used a "formula" (Gauss formula), but he doesn't claim that this is an explicit solution. Instead he says: "I pointed out what explit meant in my usage, that means explictly using the string "500500" in the program. The route toward that outcome was indirect, and inexplicit, or shall I say implicit." ( http://www.uncommondescent.com/index.php/archives/1464 , comment 5.)
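For what it's worth, the arithmetic behind the disputed "formula" is easy to check: the sequence 251 through 750 that Cordova's GA converges on, summed and doubled, equals the Gauss closed form for 1 + 2 + ... + 1000, even though the string "500500" never appears literally in ga.c. A quick Python check (mine, not from the thread):

```python
# The sequence Cordova's program searches for, per Dave's analysis above:
target_sequence = range(251, 751)          # 251, 252, ..., 750

# Summed and doubled, it yields the sum of 1..1000 "implicitly".
implicit_result = 2 * sum(target_sequence)

# Gauss's closed form n(n+1)/2 for n = 1000, computed directly.
gauss = 1000 * (1000 + 1) // 2

print(implicit_result, gauss)              # both are 500500
```

So the answer is frontloaded in the structure of the target sequence even though the literal digits are not.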
So I see room for a possible dissonance. He may think he implemented a GA as he believes GAs work: he didn't frontload the solution itself, only a sufficiently deterministic algorithm. IanC suggests "simulated annealing", which seems to me a reasonable description. (Very little annealing.)
But I'm not sure how meaningful it is to discuss this if people know from other threads that Cordova is in fact lying.
stevaroni · 19 August 2006
Wesley R. Elsberry · 19 August 2006
Popper's ghost · 22 August 2006
Dave Thomas · 29 August 2006
COMMENTS ARE NOW CLOSED
I'm closing comments on this post, but do stay tuned for a follow-up post, "Genetic Algorithms for Uncommonly Dense Software Engineers."
Thanks for the discussion!
Cheers, Dave