Background - the War of the Weasels
For those readers wondering what this is all about, here's a quick road-map to the key battles in this summer's "War of the Weasels":

Target? TARGET? We don't need no stinkin' Target! (Dave Thomas, July 5th, 2006)
UDSE Response (scordova, July 9th, 2006)
UDSE Response (scordova, July 14th, 2006)
Take the Design Challenge! (Dave Thomas, August 14th, 2006)
Tautologies and Theatrics (part 2): Dave Thomas's Panda Food (scordova, August 15th, 2006)
Calling ID's Bluff, Calling ID's Bluff (Dave Thomas, August 16th, 2006)
Dave Thomas says, "Cordova's algorithm is remarkable" (scordova, August 17th, 2006)
Antievolution Objections to Evolutionary Computation (Wesley R. Elsberry, August 18th, 2006)
Design Challenge Results: "Evolution is Smarter than You Are" (Dave Thomas, August 21st, 2006)
Congratulations Dave Thomas! (davescot, August 22nd, 2006)
Can humans compute better than computers? Dave Thomas's design challenge (scordova, August 22nd, 2006)

Chapter 1: Check That Random Seed

It may surprise some Uncommonly Dense Software Engineers (UDSEs) that the "random" numbers generated in languages like C or C++ are not truly random. Rather, they are pseudo-random numbers, produced by a deterministic formula. These pseudo-random generators are usually initialized with a "seed," a whole number that starts the deterministic formula off in a different place, leading to a different sequence of "random" numbers. If this seed is a constant, such as zero (0), then the sequence of pseudo-random numbers will be exactly the same every time the program is run. With this bad programming practice, there is no way the UDSE could determine whether a given GA is targeted or not, because, even if it were not targeted, the result would always be the same. Here is an example of this "no-no" in a supposed GA, scordova's ga.c listing.

As another example, consider the use of a non-targeted algorithm, the Steiner GA, for a 5-node problem. Here, the seed for the random generator has been changed from something associated with the actual time of day to a constant, 0. As can be seen, the results of several trials of 1000 generations each always turn out the same. Everything turns out the same - the locations of every node, the connections between every pair of points, everything!
When the Seed is tied to time of day, however, the results of each trial of 1000 generations can be quite different, as shown in the following two trials.
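The seeding distinction is easy to show in a few lines of C. This is a sketch of mine, not scordova's actual ga.c; the helper `first_draw` just reports the first value the generator produces after seeding:

```c
#include <stdlib.h>
#include <time.h>

/* Report the first pseudo-random value produced after seeding with `seed`.
   A hard-coded constant seed (the "no-no") makes this value - and the whole
   sequence that follows it - identical on every run of the program. */
int first_draw(unsigned seed)
{
    srand(seed);
    return rand() % 1000;
}

/* A GA should instead seed once, at startup, from the clock:
   srand((unsigned) time(NULL));
   so that each run starts the deterministic formula in a new place. */
```

Calling `first_draw(0)` twice returns the same number both times, which is exactly why a GA seeded with a constant produces the same "evolution" on every run.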
Chapter 1 Summary: Change the Random Generator's Seed Every Time.
Chapter 2: Is the Answer Always the Same?
If the random generator seed is properly initialized so that the sequence of "random" numbers is different every time, the next step is to run the GA several times, and see if the Answer comes out the same every time.
For example, even if the random seed in ga.c is changed, the result is always the same - the sum of the first 1000 integers is always computed as 500,500. Similarly, the result of running Dawkins' "Weasel" algorithm is always the same phrase, "METHINKS IT IS LIKE A WEASEL."
To illustrate what a real (un-targeted) GA looks like when it is switched over to look for a fixed Target, I added a special feature to my Steiner GA to calculate fitness in a different way from the usual method, where Fitness is simply the length of all segments of a particular candidate solution.
When the "Target" button is checked, the Fitness Test is changed from its normal length calculation to the number of "DNA differences" between candidates and a specific organism (represented as a String in the GA). I could have selected the actual Steiner Solution for the 5-point problem, but decided instead to make the "Target" a rather inelegant and overly long candidate far from the actual answer to the 5-point problem.
Instead of selecting based on proximity to a target phrase like "METHINKS IT IS LIKE A WEASEL," when the Steiner GA is run in Target Mode, it selects based on proximity to the string for the 2874-unit-long Kluge,
target = "04614580132412507588758509TFFTFTFFTFFTFFTFFFFTFFFFFTTFFFFTTFFF"; // the Kluge
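The "DNA differences" fitness in Target Mode is just a character-by-character mismatch count (a Hamming distance) against the Kluge string. A minimal sketch of such a test might look like the following; the function name is my own, not the GA's actual code, and it assumes both strings have the same length:

```c
#include <string.h>

/* Count the positions at which a candidate's DNA string differs from the
   target string. A fitness of 0 means an exact match to the Kluge.
   Assumes candidate is at least as long as target. */
int dna_differences(const char *candidate, const char *target)
{
    int diffs = 0;
    size_t n = strlen(target);
    for (size_t i = 0; i < n; i++)
        if (candidate[i] != target[i])
            diffs++;
    return diffs;
}
```

In Target Mode the GA simply minimizes this count instead of minimizing total segment length, which is what turns an open-ended search into a targeted one.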
In subsequent runs, the Target is obtained every time. Here are two such runs; the first is an exact match to the Target (Fitness = number of DNA differences = 0), and in the second, the algorithm didn't run quite long enough to get a perfect match to the Target, leaving it with a Fitness of 1. It certainly will achieve the precise target given more generations, however.
In summary, when a Fixed Target is employed, the GA converges to that Target every time it is run. Not surprising, actually!
Chapter 3: Is it just that Computers are Faster?
In his August 22nd response, Salvador Cordova of UD quoted my earlier challenge:

Dave Thomas wrote:

What is it going to be like having to go to Bill Dembski and admit that you've learned the hard way the true meaning of what Daniel Dennett terms Leslie Orgel's Second Law: "Evolution is smarter than you are"?

and replied:

Thomas has proven no such thing, neither Dennett nor Orgel. Computers can compute certain things faster than us, that is why they exist to help us. For Thomas to argue that evolution is smarter than humans because computers can compute faster than humans is a non-sequitur.

This appears to be the Final Retreat of Intelligent Design theorists. Recall that they originally said that GA's work only because the Answer is fed into the GA via the Fitness Test. But, given a problem with no obvious answer, and one for which a Genetic Algorithm actually out-performed ID theorist Cordova, the fall-back position is "But Computers are FASTER than humans!"

Unfortunately for that position, the Problem Space for the 6-point Steiner Grid is incredibly large. Using the basic points+segments data structure, there are billions upon billions of different possible solutions. I set up my GA to also look at completely random solutions for a long time. When the normal GA runs, it can complete 1000 generations (one evolutionary simulation) in a couple of minutes, which works out to examining about 8,000 individual solutions every second. On the overnight run leading to the Design Challenge, the Steiner GA found one actual solution after 8 hours (Simulation #150), and found the other (different handedness) solution just 39 minutes later (Simulation #162). In the 300 Simulations of the overnight run, the very worst result had a length of 1877, or about 18% longer than the actual Steiner solution's length (1586.5 units).
Compare that to simply running one mindless random solution after another, at 8000 solutions per second. Here is the best result obtained in 14 hours from over 400 million individual solutions. This has a length of 2502 units, almost 58% longer than the formal 6-point Steiner solution (1586.5 units).
Of course, many of our human Intelligent Designers answering the Design Challenge got the formal Steiner solution in much less time than 14 hours. Just because a computer is "fast" doesn't make it "smart."
Genetic Algorithms don't find useful and innovative solutions because they are "fast." They do so because they are using the processes of selection, mutation and reproduction to solve problems, just as happens in Nature. And because the process is essentially unguided, it might not go down the wrong roads taken by human designers who have biases about what the solutions should look like. GA's actually evolve new information in very reasonable times.
Sure, computers can perform faster than humans at some tasks. But even blinding speed is of little use when searches are random. It is evolution's combining of random processes like mutation with deterministic processes like selection and reproduction that enable it to be "smarter than you are."
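That combination of random variation with deterministic selection can be shown in skeletal form. The sketch below is a toy of my own, not code from the Steiner GA; `toy_fitness` and its optimum of 42 are invented purely for illustration. Note that the mutation step is blind - only the selection comparison consults fitness:

```c
#include <stdlib.h>

/* Toy fitness: distance from 42; lower is better. */
int toy_fitness(int genome)
{
    int d = genome - 42;
    return d < 0 ? -d : d;
}

/* One evolutionary step on a single integer "genome": mutation is
   random, selection is deterministic. The mutant replaces the parent
   only if it is at least as fit. */
int evolve_step(int genome)
{
    int mutant = genome + (rand() % 3) - 1;   /* mutate by -1, 0, or +1 */
    return toy_fitness(mutant) <= toy_fitness(genome) ? mutant : genome;
}

/* Run many generations; random mutation plus deterministic selection
   climbs steadily toward the optimum. */
int evolve(int genome, int generations)
{
    for (int g = 0; g < generations; g++)
        genome = evolve_step(genome);
    return genome;
}
```

Over thousands of generations this reliably reaches the optimum, even though the variation step never sees where it is headed; that division of labor is the whole trick.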
RESOURCES
Make your own 'Dummiez' Book Covers!
Put your own words on Einstein's Blackboard!
Put your own victims 'On Notice' Colbert-Style!
161 Comments
Marek 14 · 1 September 2006
Very nice! Is the Windows version of the program available?
ah_mini · 1 September 2006
I'm wondering, when is a fixed target not a fixed target? I've worked on simple GA's that are used to program FPGAs. The task was to generate a 0V signal on the output given a 1kHz square waveform input, and a 5V signal given a 10kHz wave. A trivial task for a human to design, except we weren't using any supporting RC circuitry or clocks, only power to the FPGA being used.
Obviously, the above phenotype was trivially simple enough to just dump into the fitness function. That would appear to be a fixed target then. However, at the same time, the solution was different every time. This isn't surprising, as the GA effectively treated the FPGA as an analog device, thus producing a large collection of "solutions".
I think the above is a good example of how even supposedly fixed target GA's can produce differing results. Also, it's interesting to note that the solutions our particular GA produced resembled in no way anything that a human would have come up with. It completely ignored all common concepts of circuit design to come up with often bizarre signal pathways, with some sections detached from the main circuit entirely and working merely by apparent electromagnetic coupling through the chip.
Anyway, enough of my babble! I really enjoyed reading this series of articles!
Andrew
Popper's ghost · 1 September 2006
Joe Shelby · 1 September 2006
well, there is an instance where a fixed seed is (well, was) actually a good thing. late 70s/early 80s video games. timer chips were cheap (and essential to the early calculators), but true clock chips with memory were expensive back then so they couldn't rely on that to generate a seed. instead, they experimented, picked the one that seemed the "fairest", and went with it. home consoles didn't have memory clocks either (e.g., Atari 2600) so those games also hard-coded their seed.
sometimes, the seed would be re-generated using user reactions (or even their score at various times), so if the user missed shooting something, it would generate different objects to shoot at from that point on (air sea battle). however, if the user did exactly the same thing, then the game would do the same thing in response.
It's the reason you could buy a book on "How to Win at Pac Man": they analysed the patterns that repeated if the player was consistent with their actions.
some games like card game cartridges also randomized by looking at "how many milliseconds has the machine been turned on" to give more variety. a card game isn't much fun if it deals the same deck every time. ;-)
Wesley R. Elsberry · 1 September 2006
David vun Kannon · 1 September 2006
I'm surprised this challenge project caused much of a stir in ID circles. I thought that most ID adherents would accept the idea of microevolution, which is what this sort of optimization problem is similar to. If so, I would expect the discourse to shift from "GAs need a fixed target to work." towards "GAs only show that function optimization is a kind of microevolution, but can't demonstrate features of the natural world such as speciation."
I don't think you can answer that objection with a discussion of niching in GAs. My understanding of niching is that it is more similar to a mixed strategy within a species in evolutionary terms.
Obviously, ALife environments such as Tierra demonstrate speciation via mutation when parasites appear, but I'm sure most IDers would not think that parasites are an uptick in complexity, and therefore can be ignored. If mutation is the only operator changing the genotype, speciation is going to be very slow, or not show the growth in complexity we see in nature, IMHO.
I don't know of any EC studies that show speciation (no interbreeding based on phenotype differences). If anyone has a reference, please share it. Algorithms that compute genotype distances and only allow close genotype matches to mate are going to be open to the argument that the algorithm has "baked in" the speciation rather than showing that speciation is an emergent phenomenon of the run.
My intuition is that a good EC speciation demonstration will require a large number of generations (like nature) and/or demes whose connectivity changes over time (like nature) and/or co-evolution (the rest of the population is part of the fitness function, like nature).
BTW, my prediction is that after "GAs can't evolve species.", the next issue will be "GAs can't evolve sex." All GAs assume sexual reproduction is available as part of the suite of genetic operators - it doesn't emerge. That's baking in the answer because "male and female He created them." If you use God's tools, of course you're going to get similar results. But can you make God's tools?
Corkscrew · 1 September 2006
All GAs assume sexual reproduction is available as part of the suite of genetic operators - it doesn't emerge.
Actually, many GAs use asexual reproduction. This is not a problem - most of the time, sexual reproduction doesn't perform massively better. The reason for this is that, whilst sexual reproduction increases variety, it destroys perfection just as quickly.
The main reason we humans have sexual reproduction isn't because it's better for evolution - it's because it makes life harder for parasites, who have trouble specialising enough to attack one generation whilst generalising enough to attack the next. A good book for this stuff is "The Red Queen" by Matt Ridley.
GuyeFaux · 1 September 2006
Wesley R. Elsberry · 1 September 2006
jeffw · 1 September 2006
Dave Thomas · 1 September 2006
Inoculated Mind · 1 September 2006
What was Salvador Cordova using if not A COMPUTER??? His excuse was too pitiful to even be laughable.
bob · 1 September 2006
Andrew,
I read an article about your FPGA experiment. Interesting stuff. I believe the article said you were using an asynchronous FPGA, and so the design was very sensitive to temperature and power supply voltage. Did you ever get a synchronous version working?
wamba · 1 September 2006
David B. Benson · 1 September 2006
Dave Thomas --- Well done again!
Alann · 1 September 2006
You forgot one of the major criteria, the same algorithm given a different problem will still find a solution.
The example might be clearer if we simplify the problem by not using long strings. I started playing with my own model when I saw the contest.
Define the environment as an arbitrary rectangle (say 0,0 to 500,500)
Selectors are defined as a series of points (arbitrary or random)
Define a critter as a series points connected by line segments.
Fitness:
Subtract the total length of all line segments in the critter.
Subtract the distance from each selector to the nearest point in the critter multiplied by a constant (say 100 so distance is more important than length)
Mutation:
Addition: select a random point in the critter and add a line segment to a new random point within radius x (say 10)
Reduction: select a random point and remove it from the critter. Integrity must be maintained, so new line segments are created between attached points (i.e. A-B-C becomes A-C; or A-B-C plus B-D becomes A-C, A-D and C-D, etc.).
Alteration: A given point is moved to a new random location within radius x (say 3)
Selection:
Keep the best 3rd of all variations
Given enough generations this will approximate a solution.
Dave Thomas · 1 September 2006
ah_mini · 1 September 2006
Corkscrew · 1 September 2006
Andrew: thanks, I've been looking for that for a while. Had found Adrian Thompson's site, but couldn't locate that particular demonstration.
ttw · 1 September 2006
But, given a problem with no obvious answer, and one for which a Genetic Algorithm actually out-performed ID theorist Cordova, the fall-back position is "But Computers are FASTER than humans!"
Which would seem to be the point to using them.
Popper's ghost · 1 September 2006
Ken Shaw · 2 September 2006
As a professional software developer I would just like to say how astounded I am that anyone claiming to know anything about software development would use a hard-coded constant in a call to srand(). This indicates a basic lack of knowledge of both C and RNGs.
Furthermore is Cordova claiming this program works? It is riddled with significant errors and most certainly will not compile.
Popper's ghost · 2 September 2006
Les Lane · 2 September 2006
Let me suggest that you replace "uncommonly" with "inordinately" so as to create a more fitting acronym.
Todd · 2 September 2006
"Uncommonly Dense" refers to "Uncommon Descent", Dembski's blog where much of this insanity occurs. "Uncommon Dissent" also works, both in how science looks at ID and how the blog is administrated.
GuyeFaux · 2 September 2006
GuyeFaux · 2 September 2006
To put it more simply, how can one tell that a fitness function f:C->[0,1], where C is the space of all critters, is targeted or encodes the solution? My guess is that in general one cannot, but also that it doesn't matter.
Ken Shaw · 2 September 2006
Dave Thomas · 2 September 2006
GuyeFaux · 2 September 2006
Dave Thomas · 2 September 2006
Dave Thomas · 2 September 2006
GuyeFaux · 2 September 2006
Don Baccus · 2 September 2006
Popper's ghost · 2 September 2006
Popper's ghost · 2 September 2006
As long as we're being precise, the quote is "C for about 30", rather than "about 30 years". Of course, since I haven't used C each waking moment of those 30 years, one could argue ... if one were being a ...
Popper's ghost · 2 September 2006
Torbjörn Larsson · 2 September 2006
"You will not be able to reproduce your experiment exactly, unless you remember what seeds you used. ... Unless this can be formally proven a priori (as Dave Thomas has, in the case of Cordova's program), it's begging the question. It's a good hint if in 10 runs the answer is always the same, but this is not strictly proof that the search is targeted. Cf. the Halting Problem; maybe something different will happen on the tenth run. Furthermore, (and this is really problematic for the counterarguments against ID) if the answers are different, it's insufficient to conclude that the search is not targeted. ... Secondly, how did you tell that a solution was reached?"
I doubt this is a serious pitfall. When we do experiments elsewhere we allow for some uncertainty. The main point must be that according to a reasonable measure, the search will end up close to a solution in enough number of cases.
Stuart Weinstein · 2 September 2006
"Very nice! Is the Windows version of the program available?"
Windows?
Is there a Turing test for operating systems?
RBH · 2 September 2006
Marek 14 · 3 September 2006
GuyeFaux · 3 September 2006
Tom English · 3 September 2006
Corkscrew · 3 September 2006
I've been meaning to relearn C/C++ for a while now. I also have an interesting idea for making the evolutionary algorithm even less "targeted". A match made in programmer heaven!
When I have a moment, therefore, I will be rigging the GA to work via the expedient of a very simple predator/prey system, whereby the prey seek out food sources (nodes) whilst being "eaten" by carnivores - lines that'll be randomly dropped on the surface. That will mean that the fitness function is implicit, rather than explicit, which will more closely mirror meatspace evolution.
I predict that this system will also be able to locate both solutions and MacGuyvers, thus demonstrating once again that even the least explicit fitness function can generate high-CSI results - as long as one chooses one's specification so that it matches the fitness criteria.
David B. Benson · 3 September 2006
Extending Marek 14's suggestion of forbidden zones, why not assign a cost to each point of the space, so that forbidden zones have infinite cost, but more generally, some parts are more expensive than others?
GuyeFaux · 3 September 2006
David B. Benson · 3 September 2006
GuyeFaux --- Not my plan. Straight lines are still straight lines. But rather than the cost being the Euclidean distance, the cost is the sum of the weights along the line.
BWE · 3 September 2006
Ok,
I've finally finished the whole darn set of posts, counter-posts and comments. Not being an engineer seems to really hamper my ability to process. (My schooling taught me to be a heck of a stamp collector though. And, unfortunately, I haven't put any of that resource into putting the phylogenetic tree in the proper order, so I have turned out pretty useless for a lot of things.)
What I am left with is a basic question. Are we basically talking about the math behind a generic emergent system? You are talking about fitness functions and so forth, and UD claims those are the result of a designer. No? But that is not even close to the case if I read this correctly. A fitness function is simply the outcome that you would expect given the parameters of what is. No? For example: a slime mold tracks pheromones. The little single-cell guy operates on a few simple rules. He emits pheromone x when he encounters situation y and reacts in z fashion when he encounters pheromone x. With a potential pheromone output of 6 or 8 different pheromones, his options are statistically limited. Eventually he will agglomerate given the correct stimuli. Or not, given different stimuli.
You could most likely write a software program that could replicate this exactly. (I thought that this is basically what SIM City does.) And the intelligent agent would be putting in "fitness functions" that were simply the range of triggers in the environment. Not that they invented them so much as they recognized them. The fitness function then, doesn't it merely describe what the environment is? The emergent behavior, like a Mandelbrot set, would emerge due to its natural constraints. The intelligent designer (software guy) is merely replicating what is, not what they would like to see. Am I way off?
GuyeFaux · 3 September 2006
David B. Benson · 3 September 2006
GuyeFaux --- Precisely. One could even make it more exotic by making each intermediate joining point cost something to build...
BWE --- No, you are way on! A slight change is that the fitness function describes how adapted the individual is to the environment.
GuyeFaux · 3 September 2006
BWE, I've got no idea what you just said.
To clarify, a fitness function is a proxy for the environment. That is, it determines which critter gets to reproduce. Given a fitness function f and two critters X and Y,
f(X) > f(Y)
means that the GA will make sure that X has more children than Y.
In meat-space, the fitness function is the environment; it's the thing that selects one critter over the other. It reflects the preferential survival/reproduction capabilities of critters.
(Actually, in nature critters aren't preferentially selected; genes are. But the critter is a decent proxy for its genes.)
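One common way a GA implements "f(X) > f(Y) means X gets more children" (not necessarily the scheme in Dave's Steiner code) is binary tournament selection; the sketch below is mine:

```c
#include <stdlib.h>

/* Pick the index of a parent by binary tournament: draw two critters at
   random and return the fitter of the two. The fitness array stands in
   for the environment; nothing about any "target" appears here. */
int tournament_select(const double *fit, int popsize)
{
    int a = rand() % popsize;
    int b = rand() % popsize;
    return fit[a] > fit[b] ? a : b;   /* fitter of the pair reproduces */
}
```

Because both draws are random, less-fit critters still reproduce sometimes; the fit ones just win more often, which is all that selection - natural or simulated - requires.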
BWE · 3 September 2006
Like whether the emergent behavior would not emerge because the environment wouldn't support it? So you have some of the variables stop certain outcomes? Is that even a variable or simply a formula that says If (emergent behavior)=(something that doesn't work) then goto line 10?
Where (something that doesn't work) is a set of parameters describing the physical world?
BWE · 3 September 2006
Popper's ghost · 3 September 2006
GuyeFaux · 3 September 2006
GuyeFaux · 3 September 2006
Adding even the most trivial bumps to the surface makes the optimal solution stupidly hard to compute by hand. However, the fitness function need not suffer too much computationally.
So I like David B.B.'s idea.
I thought that the same type of challenge could be issued with the traveling salesperson problem as well: "Here's a matrix of distances, what's the shortest circuit?" But unfortunately there are many known algorithms out there to solve the problem, so Sal et al could use them to "look in the back of the book", as he's done for this problem. But adding a small novelty would take care of that issue...
Popper's ghost · 3 September 2006
David B. Benson · 3 September 2006
Popper's ghost & GuyeFaux --- Right on. This essentially eliminates any attempt to use symmetries to arrive at even local minima. So one is reduced to brute force, which GAs are good at, along with other satisficing algorithms such as 'computational swarm intelligence' and 'simulated annealing'.
Popper's ghost · 3 September 2006
GuyeFaux · 3 September 2006
Michael J · 3 September 2006
I wrote some GA code to optimize natural gas flows around 6 years ago. I used sexual reproduction, and the occasional run developed into more than one species. That is, more than one optimal solution appeared simultaneously in the population. When there was mating across the species the poor child was sub-optimal and usually didn't survive selection.
However, the situation was not permanent as the species that was slightly more optimal eventually crowded out the second species.
I enjoyed writing the GA code but my managers didn't like the fact that the answer always came out slightly different on each run.
Michael
Wesley R. Elsberry · 3 September 2006
KimB · 4 September 2006
I've read this series with ever increasing interest.
I'm really fascinated by the Macgyvers (great name btw).
Not quite perfect, but good enough to survive in the given environment.
...which is nice.
Roger Rabbitt · 4 September 2006
k.e. · 4 September 2006
Why is Dembski worried about a machine producing a result that is analogous to the "set theoretic complement of chance and regularity"... in other words, a freak accident of interacting natural forces and/or matter?
Would that make his arguments seem like Dawkins is correct?
WAD here is a new book title for you.
"The Selfish Algorithm".....has a nice ring don't you think?
Of course it's not too original...and it might make you look as if you are selling out..
...but hey.... you could call it "The Co-operative Design Algorithm"
...ah typed out on god's trusty old "Imperial Typewriter" and flicked off to the great halls of angels where it was mimeographed using freshly plucked quills and then sent to the great smoking DNA factories in the skies before shipping to each corner of the universe by cosmic vacuum gravity pumps.
Just an idea.
GuyeFaux · 4 September 2006
Andrew McClure · 4 September 2006
Wesley R. Elsberry · 4 September 2006
BWE · 4 September 2006
Ok, I will try again.
The ID group (Sal et al) says that the fitness function is (or is the result of) an Intelligent Designer. No?
But the fitness function is simply a set of parameters imposed by the natural surroundings, right?
Even though you wrote it as software, isn't the point of the function to mimic the limits imposed by nature? The intelligent designer isn't the one who makes CO2 heavier than O2 or N2. So the fitness function, which is seeking to replicate fitness as regards the ability to be successful in a given environment and assign it a value on a scale (right?) isn't god putting out his finger and saying poof, it is the ability to utilize or function within existing parameters. Right?
No I-Designer, right? Is this more complicated than that? Not the software, the concept of the fitness function being, in essence, a designer. The fitness function merely imposes parameters?
Emergent systems all emerge by applying a basic set of rules within a fixed set of parameters, from fractal geometry, the golden mean, ant colonies and slime molds to evolution of the species. No?
GuyeFaux · 4 September 2006
GuyeFaux · 4 September 2006
David B. Benson · 4 September 2006
The book "Simple Genetic Algorithm" is quite good for the mathematically minded. But in particular, the author offers at least three different ways in which selection of the 'best' to have children might be accomplished. So-called elitism is one of these.
Roger Rabbitt · 4 September 2006
Roger Rabbitt · 4 September 2006
Wesley R. Elsberry · 4 September 2006
Roger Rabbitt · 4 September 2006
Andrew McClure · 4 September 2006
BWE · 4 September 2006
GuyeFaux · 4 September 2006
GuyeFaux · 4 September 2006
Michael J · 4 September 2006
I think the ID hairsplitting is getting a bit silly. As has been repeatedly said, in the real world the fitness function is not in the cell but is the environment.
So in a desert the fitness function is to survive well enough to reproduce in desert conditions. Where is the Intelligent Designer in this scenario?
Behind which Cacti are the solutions hidden? So creatures better able to manage heat and lack of water do better, reproduce more and survive.
I really can't see the difference between this and an environment which says shorter is better. The creatures that are shorter are allowed to reproduce at the expense of the longer solutions.
Henry J · 4 September 2006
Re "You've missed the moving goalpost."
Which is probably good, since actually hitting the moving goalpost might hurt.
Henry
GuyeFaux · 4 September 2006
Anton Mates · 5 September 2006
Wesley R. Elsberry · 5 September 2006
Popper's ghost · 5 September 2006
Popper's ghost · 5 September 2006
Corkscrew · 5 September 2006
William E Emba · 5 September 2006
Alann · 5 September 2006
GuyeFaux · 5 September 2006
William E Emba · 5 September 2006
David vun Kannon · 5 September 2006
Dave Thomas · 5 September 2006
RBH · 5 September 2006
RBH · 5 September 2006
Alann · 5 September 2006
In response to Dave, I think you missed my second posting.
While I am not an IDist I think I can see their point of view (warped though it is )that the nature of the selectivity is the problem.
They see this kind of algorithm as a clear chain of A-B-C-D where each step is closer to a solution and therefore more selectable. It does not matter that the solution itself is not hardcoded, only that the fitness function evaluates in terms of proximity to any solution.
Perhaps if I phrase it as:
"Valuing the current function" is closely aligned with "proximity to an optimum."
It seems like this is a ridiculous argument that the model is invalid because we can look at two species and tell which one is more fit, as though to say selectability itself must be denied. At the same time, it is accurate that in a more realistic model most mutations would have to be treated as neutral, and it may take several successive mutations before a change in fitness could be determined.
I do feel the onus is on ID to open discussion and propose their own models or variations on this model which can present a selectability and fitness model which is not strictly progressive, is not considered a "targeted" approach, but must still be able to identify stages where a given variant is superior.
I doubt that they will ever engage in such a discussion since they will likely realize that they are essentially defining "selectable" and "targeted" as the same thing. It is an intentional inability to grasp that an artificial selection model when supplied arbitrary or random selection criteria is accurately considered a representation of a natural selection model.
As someone said earlier, all of this can be considered models of so-called "micro" evolution, which most IDists are not crazy enough to dispute. Of course, from a real scientific standpoint there is no reason to believe there is any significant difference between "micro" and "macro" evolution; in fact, real scientists have little cause to even use such terms. By analogy, gravity works equally on pebbles and boulders, so why should evolution be any different?
William E Emba · 5 September 2006
Popper's ghost · 5 September 2006
Popper's ghost · 5 September 2006
Another way I could have put it is "Therefore, the ID claim that evolution cannot produce organisms highly fit to their environment for the reasons they give is false." I think that's what I had in mind.
Popper's ghost · 5 September 2006
Another way, William, I might have put it is Dave Thomas's "publicly admit that evolutionary processes like selection, reproduction and mutation can indeed produce highly coherent, irreducibly complex, and even complexly specified designs, without having to know anything about the specifics of these designs". I suppose that, despite all that, it is conceivable that evolution cannot produce organisms highly fit to their environment. When making empirical claims, it's always risky to use deductive words like "therefore". But it is reasonable to interpret a denial of a claim that something cannot happen as a denial that it cannot happen for the claimed reasons, not in an absolute sense. Still, the clarification has some value.
Popper's ghost · 5 September 2006
Popper's ghost · 5 September 2006
Popper's ghost · 5 September 2006
Popper's ghost · 5 September 2006
Roger Rabbitt · 5 September 2006
Dave Thomas · 5 September 2006
Popper's ghost · 5 September 2006
Andrew McClure · 6 September 2006
Darth Robo · 6 September 2006
Roger Rabbitt wrote:
"Your GA operates on intelligent selection."
How? Dave has pointed out that neither he nor the computer knew the answer. (I hope you're not also suggesting the computer itself is intelligent.) The computer also continued to find more solutions after it had already found one. I'll admit the mathematics of this stuff is over my head, but the principle makes sense. The FF represents the natural environment and the GA adapts to it (is that right?). And with all these people using their own GAs to come up with many different solutions (to which they didn't know the answer either, until their programs found it), where is the intelligent selection?
As far as I can tell, the FF represents a problem to be solved, and we use the computer to help us solve it. The only thing I can think of that's selected would be the FF itself, but what difference would that make? If it represents the environment, what difference would it make to natural selection whether the environment occurred naturally or was "Intelligently Designed"?
If I've misunderstood anything (probably have) and deserve a verbal slaughtering, then feel free. :-)
In the meantime I'll just say: "Welease Woger!"
William E Emba · 6 September 2006
Popper's ghost · 6 September 2006
Popper's ghost · 6 September 2006
To be a bit more specific: there's an argument that any genetic algorithm must be "front-loaded" via "illicit expedient" with "the target sequence" in order to reach the target. If this argument were valid, then evolution (A) must be front-loaded. But the existence of a GA (B) that isn't front-loaded yet reaches its target demonstrates that the argument is invalid, and thus the claim that evolution must be front-loaded is false (but perhaps it is front-loaded anyway). This gets back to GuyeFaux's comment about necessary vs. sufficient.
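The distinction is easy to demonstrate with a toy program. The sketch below is hypothetical (it is not Dave's Steiner GA, nor any other code from this thread); its fitness function rewards long runs of alternating bits, so an optimum exists, but it is never stored anywhere in the program, and the algorithm only ever compares candidates with one another. All names and parameters (`fitness`, `evolve`, a population of 50, 200 generations) are invented for illustration.

```python
import random

random.seed()  # seed from system entropy; a constant seed would repeat every run exactly

def fitness(bits):
    # Score a bit string by its longest run of strictly alternating bits.
    # Note what is absent: no target string is stored anywhere.
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a != b else 1
        best = max(best, run)
    return best

def evolve(n_bits=32, pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection: keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(n_bits)] ^= 1   # one random point mutation per child
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Run it a few times: because the seed comes from system entropy, each run takes a different path, yet each tends to converge toward a perfectly alternating string, precisely because selection compares siblings, not because anyone told the program the answer.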
Popper's ghost · 6 September 2006
Popper's ghost · 6 September 2006
I should add that Occam's Razor is not merely a methodological rule but is actually true when phrased something like: A simpler theory is more likely to make accurate predictions, since adding extraneous elements increases the number of predictions without making the theory fit the evidence any better. So in a sense, your namesake answered Hume.
fnxtr · 6 September 2006
Alann · 6 September 2006
BWE · 6 September 2006
RBH · 6 September 2006
Popper's ghost · 6 September 2006
Popper's ghost · 6 September 2006
Darth Robo · 6 September 2006
BWE said:
"Fish don't know to school. They have a limited number of responses to the fish next to them. Schooling is an emergent behavior of the system. The specific school pattern includes environmental constraints/inputs (a fitness function) but the generic "schooling" behavior is simply the result of an algorithm of the various cues that the schooling fish respond to."
I've seen a documentary showing that. It showed birds too, operating on exactly the same principle. They also ran a computer simulation: all of the "birds" on screen were represented by coloured dots, and every dot was given the same simple parameters of motion (two or three instructions: follow the dots nearest you; don't get too close or too far). Then the simulation was run, and a few hundred dots flew around the screen moving exactly like a flock of birds, with some footage of real birds for comparison. It was pretty cool.
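That kind of simulation is easy to sketch. The code below is a hypothetical boids-style model, not the documentary's actual program: every dot follows the same three local rules (cohesion, alignment, separation) with made-up constants, and flock-like motion emerges with no flock-level instruction anywhere. Plotting is omitted; this just runs the update loop.

```python
import math, random

random.seed(1)  # fixed seed only so the run is repeatable

N, STEPS = 30, 200
pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def neighbours(i, radius=25.0):
    # Each boid sees only nearby boids; there is no global "flock" anywhere.
    return [j for j in range(N) if j != i and math.dist(pos[i], pos[j]) < radius]

for _ in range(STEPS):
    new_vel = []
    for i in range(N):
        vx, vy = vel[i]
        nbrs = neighbours(i)
        if nbrs:
            # Cohesion: drift toward the local centre of mass.
            cx = sum(pos[j][0] for j in nbrs) / len(nbrs)
            cy = sum(pos[j][1] for j in nbrs) / len(nbrs)
            vx += 0.01 * (cx - pos[i][0])
            vy += 0.01 * (cy - pos[i][1])
            # Alignment: nudge heading toward the neighbours' average velocity.
            ax = sum(vel[j][0] for j in nbrs) / len(nbrs)
            ay = sum(vel[j][1] for j in nbrs) / len(nbrs)
            vx += 0.05 * (ax - vel[i][0])
            vy += 0.05 * (ay - vel[i][1])
            # Separation: back away from any neighbour closer than 5 units.
            for j in nbrs:
                if math.dist(pos[i], pos[j]) < 5:
                    vx += 0.05 * (pos[i][0] - pos[j][0])
                    vy += 0.05 * (pos[i][1] - pos[j][1])
        # Cap speed so the flock stays cohesive.
        speed = math.hypot(vx, vy)
        if speed > 2:
            vx, vy = 2 * vx / speed, 2 * vy / speed
        new_vel.append([vx, vy])
    vel = new_vel
    for i in range(N):
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]
```

To actually watch the flock, record `pos` each step and animate it with any plotting tool; the point is that nothing in the code mentions "flock" at all.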
BWE · 6 September 2006
What was it called? I'd like to send it to someone.
Sir_Toejam · 6 September 2006
I too remember seeing that video a while back.
ITMT, you might get a kick out of this BWE:
http://www.humboldt.edu/~ecomodel/clupeoids.htm
Sir_Toejam · 6 September 2006
ahh, and speaking of schooling being an emergent property, I did remember the name Parish, and found a more recent article than the ones I recalled:
http://www.biolbull.org/cgi/content/full/202/3/296
Sir_Toejam · 6 September 2006
..and finally, this is where you can play with the basics of the models yourself, IIRC:
http://depts.washington.edu/~birdfish/schooling/model.shtml
BWE · 6 September 2006
STJ, I've seen that stuff many, many times but it is always good to repeat it. And it is especially good for a step by step process of enquiry that might lead a fundy straight to heck. Where all the good people go. :-)
In fact, way back before we knew to call them "Emergent Systems" I had a fancy for some particularly peculiarly prickly starfish that could tear your ass bad if you sat on them. Seems that populations of the prickly bastards could fluctuate rather drastically
Sir_Toejam · 6 September 2006
Darth Robo · 7 September 2006
Sorry, BWE, it was a few years ago. I wish I could remember for the life of me. We only had terrestrial TV back then so I'm guessing it was on BBC2's "Horizon" programme, or on Channel 4 maybe. If I do manage to find out, I'll certainly let you know. Sorry 'bout that. :-(
Rob-ot · 7 September 2006
If anyone here is a physicist, and knows GAs, could you drop me an email at rkhull@roanoke.edu?
BWE · 7 September 2006
I've looked around and I can't find it. My wife teaches middle school science and she could use something like it.
Minions of Christ, do you get why the FF isn't an intelligent agency? I don't quite understand what you mean by the idea that you are pre-selecting for the answer. Maybe Roger Rabbitt can spell it out. This seems too important to let it lie.
Do you agree that the software is demonstrating that evolution is possible?
Why or why not?
Darth Robo · 7 September 2006
Not sure which person you're responding to there, but I get that the FF isn't intelligent and that the software IS demonstrating that evolution is possible. As far as I can tell, only the Rabbitt has that problem. Probably the reason why we don't usually let rabbits near computers. :)
Roger Rabbitt · 7 September 2006
Popper's ghost · 7 September 2006
Popper's ghost · 7 September 2006
Roger Rabbitt · 7 September 2006
Popper's ghost · 7 September 2006
Ok, I vote stupid, since the statement RR just quoted goes against him.
Popper's ghost · 7 September 2006
Specifically, RR is too stupid to grasp that "requires prior knowledge of the optimum" means "requires prior knowledge of what the optimum is", not "requires prior knowledge about the optimum".
Extremely stupid bunny.
Popper's ghost · 7 September 2006
OTOH, perhaps RR does know that "requires prior knowledge of the optimum" means "knows the optimum ahead of time" and is just pretending not to. After all, just how stupid can someone be?
Popper's ghost · 7 September 2006
BTW, Wesley made it explicit in the very post from which RR mined that quote: "You don't have to know what the actual minimal tour length is in order to deploy it and get a short tour as the result of running the algorithm."
Darth Robo · 7 September 2006
"Ok, I vote stupid, since the statement RR just quoted goes against him."
That's what I thought.
" "Proximity to the optimum" requires prior knowledge of the optimum. That's NOT how real fitness functions work." [emphasis mine]
Lettuce, anyone?
CJ O'Brien · 7 September 2006
BWE · 7 September 2006
Silly Rabbit, ID is for Maroons. And you're proving that over and over and over. Take 3 steps back, forget the equations and realize that the entire argument is wrong. You have missed the whole point or you are deliberately obfuscating and you know your arguments to be wrong.
Popper's ghost · 8 September 2006
Alann · 8 September 2006
Popper's ghost · 8 September 2006
Popper's ghost · 8 September 2006
Alann · 8 September 2006
I think you are trying to dismiss my point unfairly.
In a real-world sense natural selection does recognize an optimized variant; this does not in any way imply that natural selection or any other fitness model is innately an intelligent process. We can and have engineered species, like disease-resistant crops, which have better survivability than the purely natural variants.
I never said mutations act on the fitness function. The range of mutation is limited, but the result does not favor fitness. That is why I said that with a mutation range of +/-1, 1 and 3 are equally likely offspring of 2. It is the probability of 3 having offspring versus 2 having offspring that is favored, independent of mutation.
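This point, that mutation is unbiased while selection is not, can be checked directly. The following is a toy simulation with invented parameters (population 100, 50 generations, fitness-proportional selection); every individual mutates by +1 or -1 with equal probability, exactly as in the 1-and-3-from-2 example, yet the population mean climbs.

```python
import random

random.seed(42)  # fixed seed only so the run is repeatable

pop = [2] * 100
for generation in range(50):
    # Unbiased mutation: each parent of value x yields x-1 or x+1
    # with equal probability (clamped at zero).
    offspring = [max(0, x + random.choice([-1, 1])) for x in pop]
    # Fitness-proportional selection: value x reproduces with weight ~x.
    # The asymmetry enters here, not in the mutation step.
    pop = random.choices(offspring, weights=[x + 0.001 for x in offspring], k=100)

mean = sum(pop) / len(pop)
```

Swap the selection line for a uniform `random.choices(offspring, k=100)` and the mean merely drifts; that is the whole difference between drift and selection.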
I specifically restated my example, which I realize is different from the Steiner problem. I am trying to express the principles as simply as possible, but it applies equally to the Steiner problem:
What ID is calling the "target sequence" (and yes, I agree this is bad terminology) is the idea that the fitness equation has an optimal solution, and that in the conceivable lineage there are no necessary steps which are neutral or less fit.
They do miss the idea that the Steiner solution could model Irreducible Complexity as well. In principle there exist several possible initial states which will almost invariably favor mutation into the MacGyver solution. Once a MacGyver variant exists, selection will strongly favor the MacGyver over the steps needed to reach the double-bowtie solution. Put another way, one of the steps between the MacGyver and the double bowtie is sufficiently harmful compared to the existing MacGyver that it will invariably be selected out.
I tried to express this as the jump from 23 to 29, where 24 has multiple factors (1, 2, 3, 4, 6, 8, 12, 24).
My understanding of IC in biological terms does not require that the intervening step (the removed piece) be inherently lethal, only that the value of the selectable trait has been lost, such as a flagellum that cannot propel.
In principle, IC has merit. You can show, in principle, that a catalyst is required and is missing from our current understanding of the formula; if you argued IC in chemical terms you could prove it. The problem is that IC has little practical value in biology, because there are already so many factors which we cannot define precisely that they create a margin of error which, while small in reasonable terms, is more than sufficient to account for IC.
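The trap Alann describes, a lineage stuck on a lesser peak because every route to the better peak passes through less-fit intermediates, can be demonstrated on a one-dimensional toy landscape. The fitness function below is invented for illustration (it is not the Steiner fitness function): two peaks, heights 10 and 20, separated by a valley.

```python
import random

def fitness(x):
    # Two-peak toy landscape: a "MacGyver" peak of height 10 at x = 5,
    # and a "double bowtie" peak of height 20 at x = 15, with a valley between.
    if x <= 10:
        return 10 - abs(x - 5)
    return 20 - abs(x - 15)

def climb(start, steps=1000):
    # Hill-climber: unbiased +/-1 mutation, but selection accepts only
    # a mutant that is at least as fit as the current variant.
    x = start
    for _ in range(steps):
        candidate = x + random.choice([-1, 1])
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

random.seed(0)
macgyver = climb(start=3)   # starts in the lesser basin: settles at x = 5
bowtie = climb(start=12)    # starts in the other basin: settles at x = 15
```

Note that `climb(start=3)` never reaches the higher peak even though it exists: every path from x = 5 to x = 15 passes through variants the acceptance rule rejects. Relaxing the rule to accept occasional deleterious steps (drift) is what lets populations escape such traps.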
Coin · 8 September 2006
Alann · 8 September 2006
Anyway I am through playing the Designer's Advocate.
Dave's point against inference of design loses none of its potency.
The MacGyver results strongly refute the suggestion of a true target sequence. Getting stuck with these non-optimal variants is very much in line with evolution and natural selection.
As much as ID wants to complain that the problem is unfair, this places the burden on them to explain what a fair problem would be. Too bad they suck at coming up with alternatives.
Scott · 8 September 2006
Popper's ghost · 8 September 2006
Popper's ghost · 8 September 2006
Popper's ghost · 8 September 2006
Alann · 11 September 2006
Dave Thomas · 11 September 2006
Popper's Ghost · 12 September 2006
Alann · 12 September 2006
I was attempting to abstract a general issue from the complexity of the problem. I realize this is not working.
My point was that the argument about future fitness vs. present fitness can be taken to refer to the nature of the problem (that the problem lends itself to functional intermediates) and not as a direct argument against the fitness function.
I do not agree with the ID argument on this point, I am only trying to see their point of view. I apologize for letting this take the conversation so far off topic.
Oh, and for the record (since someone seems to think I am an idiot), my background is mathematics, logic, and computer science. I do not consider myself in any way an expert on biology.
Popper's ghost · 15 September 2006
Odd then that you show no understanding of mathematics, logic, computer science, or biology.
Dave Thomas · 18 September 2006
Comments are now Closed.
It's been fun. Let's do it again some day, shall we?
Thanks & regards, Dave