ID Creationism and the Second Law

Posted 23 June 2011 by

A venerable claim of creationists is that evolution somehow or other violates the Second Law of Thermodynamics. In its tradition of recycling old-line creationist claims, the intelligent design movement, in the person of Granville Sewell, a professor of mathematics at the University of Texas at El Paso, has taken up the creationist Second Law claim. For the few here who don't regularly read him, I have to say that Jason Rosenhouse's takedown of Sewell's claims (and in particular Sewell's whining about a rejected ms.) is lovely. Highly recommended.

134 Comments

DS · 23 June 2011

Great takedown. Thanks to Richard for the link, Jason for the article, and Mike Elzinga for patiently trying to explain all of this over and over again.

felsenst · 23 June 2011

from Joe Felsenstein (how do I get the PT signin system to automatically sign my comments with my name and URL instead of just my Gmail user name?) Jason Rosenhouse's flattening of Sewell is wonderful. Both he and Mark Perakh have previously flattened Sewell's earlier arguments (here and here), though Sewell seems to have learned nothing. I have also posted twice at PT making fun of Sewell's argument (here and here). The crux of Sewell's argument is that we cannot explain the local rise of life unless we have something entering the biosphere that affects it, and that having an overall increase of entropy due to events elsewhere is not enough. Sewell airily says that it isn't enough because
… if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.
Of course, it is the very entry into the biosphere of radiation from the sun that makes life, and its evolution, possible! In my posts I had fun with this, pointing out that Sewell's "proof" also showed that weeds can't grow, and neither can flowers.

Reed A. Cartwright · 23 June 2011

Joe, the easiest way is to register for a local Movable Type account and log in with that. I don't think our Gmail support allows you to change your "handle".

felsenst · 23 June 2011

Thanks Reed, and in case anyone wants to quibble, yes I am aware that some energy for life comes from chemoautotrophy. But most of it is from the sun.

Dale Husband · 23 June 2011

Gee, another reason for me to be ashamed of my home state of Texas!

Henry J · 23 June 2011

Why is it always the second law? Why not the first or third? (Or fourth if there's that many.)

Joe Felsenstein · 23 June 2011

(Mostly just testing whether commenting now works).
Henry J said: Why is it always the second law? Why not the first or third? (Or fourth if there's that many.)
I think if they could use any of the other (0th, 1st, 3rd, 4th) laws of thermodynamics to argue that evolution was impossible, they would. But the 2nd Law is the most difficult to understand so it offers them the most opportunity.

dphorning · 24 June 2011

Ok, sorry to draw this way OT, but 4th LoT? Was this facetious? Or did the number change since intro stat mech?

Joe Felsenstein · 24 June 2011

dphorning said: Ok, sorry to draw this way OT, but 4th LoT? Was this facetious? Or did the number change since intro stat mech?
The Fourth Law of Thermodynamics is that a commenter on a blog, when encountering a list like this of the "four laws of thermodynamics" (numbered 0, 1, 2, 3) will lose count when he posts about it. Us commenters have very limited conceptual abilities.

Paul Burnett · 24 June 2011

felsenst said: Jason Rosenhouse's flattening of Sewell is wonderful. Both he and Mark Perakh have previously flattened Sewell's earlier arguments (here and here), though Sewell seems to have learned nothing.
Creationists repeat the same lies over and over again, typically to new / naive audiences. So we have to continually let those new audiences know that the Liars For Jesus(TM) are recycling old lies. It is tiresome but necessary. Keep up the good work.

Wesley R. Elsberry · 24 June 2011

Joe Felsenstein said:
dphorning said: Ok, sorry to draw this way OT, but 4th LoT? Was this facetious? Or did the number change since intro stat mech?
The Fourth Law of Thermodynamics is that a commenter on a blog, when encountering a list like this of the "four laws of thermodynamics" (numbered 0, 1, 2, 3) will lose count when he posts about it. Us commenters have very limited conceptual abilities.
It may also refer to William Dembski mistakenly claiming that he had proved a law of conservation of information.

https://me.yahoo.com/a/x68O1lsNl5F6vhoVwwk5qw1CYaqjc3BB#ca44a · 24 June 2011

If they used the zeroth law instead of the 2nd, they would hardly succeed, because the zeroth law makes evolution possible.

Joe Felsenstein · 24 June 2011

https://me.yahoo.com/a/x68O1lsNl5F6vhoVwwk5qw1CYaqjc3BB#ca44a said: If they used the zeroth law instead of the 2nd, they would hardly succeed, because the zeroth law makes evolution possible.
I'm not sure how -- it is hard to imagine a world without it. By the way, I can now make quantitative statements about the laws of thermodynamics. I went to the Wikipedia page on the laws of thermodynamics, copied out the text on each law, and word-counted it. The result:
  • Zeroth Law: 348
  • First Law: 199
  • Second Law: 658
  • Third Law: 154
... and some of the discussion for the Zeroth Law is about why it would be needed at all, leaving the Second Law as the true champion in needing-to-be-clarified.
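
For anyone who wants to repeat the tally, a minimal sketch in Python (the section strings below are stand-ins; paste in the actual text from the Wikipedia article):

    # Word-count the write-up of each law. The strings are stand-ins;
    # replace them with the section text from the Wikipedia article
    # to reproduce the tally above.
    sections = {
        "Zeroth Law": "If two systems are in thermal equilibrium with a third ...",
        "First Law": "Energy can be transformed, but neither created nor destroyed ...",
        "Second Law": "The entropy of an isolated system never decreases ...",
        "Third Law": "As temperature approaches absolute zero ...",
    }
    for name, text in sections.items():
        print(name, len(text.split()), "words")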

SWT · 24 June 2011

Henry J said: Why is it always the second law? Why not the first or third? (Or fourth if there's that many.)
The second law
  • Talks about how systems change in time
  • Is couched in terms of a quantity few lay people actually understand
  • Is easily misrepresented because of an unfortunate nomenclature choice by the founders of information theory
  • Is easily misrepresented because of the statistical nature of the Boltzmann interpretation and the difficulty of explaining that interpretation to lay people (especially innumerate lay people)
  • That's why. (At least IMNSHO.)

    SWT · 24 June 2011

    felsenst said: Jason Rosenhouse's flattening of Sewell is wonderful. Both he and Mark Perakh have previously flattened Sewell's earlier arguments (here and here), though Sewell seems to have learned nothing. I have also posted twice at PT making fun of Sewell's argument (here and here).
    I just took a look back at Mark Perakh's excellent essay and was greatly amused by the first comment.

    Mike Elzinga · 24 June 2011

    It has been quite painful watching the mangling of the second law by not just the ID/creationists, but also by well-intentioned people trying to make the essence of the second law accessible to the general public.

    And I have been watching this since about the mid 1970s when I was first becoming aware that misconceptions about the second law were being promulgated.

    None of the excellent textbooks I have known and used over the years make any reference to order/disorder when discussing entropy. The concept of the integral of dQ/T was already becoming a useful pattern for making efficient calculations before Clausius named it in 1865; and certainly before statistical mechanics clarified just what temperature, internal energy, and accessible microstates are.

    It is an equivalent way of stating that energy flows spontaneously from high temperature to low temperature. Dividing a quantity of heat Q by a small T gives a larger number than dividing that same Q by a large T. Entropy spontaneously increases; DUH!
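
    To put a number on the integral for a familiar case (a minimal sketch with ordinary textbook values): warming water gives dQ = m*c*dT, so the Clausius integral becomes ΔS = m*c*ln(T2/T1).

        import math

        m = 1.0       # kg of water
        c_p = 4186.0  # J/(kg K), specific heat of liquid water
        t1, t2 = 293.15, 353.15  # K: 20 C warmed to 80 C

        # dQ = m * c_p * dT, so the Clausius integral of dQ/T gives:
        delta_s = m * c_p * math.log(t2 / t1)
        print(delta_s, "J/K")  # about 780 J/K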

    As I have tried to make clear (despite the attempted distractions by certain trolls), entropy comes down simply to the enumeration of energy microstates. How those microstates are connected with the microscopic constituents of the system and the degrees of freedom over which the energy is spread, and how the number of microstates changes with the total energy of the system: these are what matter.

    There is, in general, no consistent relationship between how energy is spread among microstates and the spatial ordering of atoms and molecules that provide those microstates. The use of order/disorder, if it is used at all (and it should NOT be used in this context), is only a metaphor drawn from visualizations of spatial arrangements of things typically used to teach the concepts of enumeration.

    The second law basically comes down to the fact that energy spreads around. More fundamentally, however, energy spreads around because matter interacts with matter. This is a simple, observational fact. If that were not the case, there would be no universe as we know it. Energy must be released and spread around in order for matter to condense.

    If a thermodynamic system is completely isolated from its surroundings, energy spreads among all available microstates until the maximum number of microstates is on the average occupied. Thus, Boltzmann’s constant times the logarithm of the number of microstates - i.e., entropy - increases to a maximum; DUH!
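
    To see how small Boltzmann's constant makes these numbers, here is a one-line sketch of S = kB ln(W); the value of ln(W) is illustrative only, roughly the order of magnitude for a mole-sized system:

        K_B = 1.380649e-23  # Boltzmann's constant, J/K

        # For macroscopic systems W is astronomically large, so work with
        # ln(W) directly. ln(W) ~ 1e24 is merely illustrative of the order
        # of magnitude for a mole-sized sample; it is not a computed value.
        ln_w = 1e24
        print(K_B * ln_w, "J/K")  # ~ 13.8 J/K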

    If the microscopic constituents of the thermodynamic system also did NOT interact with each other, then the system would remain in whatever particular microstate it is in. It would have zero entropy (logarithm of 1 is zero); but because it is also completely isolated from its surroundings, we wouldn’t know what state it was in.

    And this fact should also reveal the ID/creationist lie that large entropy signifies “lack of information,” whatever the hell that means.

    But the notions of entropy and microstates make it possible to relate the macroscopic states of a thermodynamic system to its underlying constituents. That is the true power of the concept of entropy. Entropy is just a name given to a mathematical calculation that makes such connections possible.

    It has nothing to do with the universe coming all apart, with order/disorder, or with making evolution impossible. Evolution happens because matter condenses. And matter condensing requires the spreading around of energy.

    Entropy is not about how advanced some organism is on some evolutionary scale. Over time, as matter condenses, more and more complex things emerge. Young organisms have less entropy (a smaller number of energy microstates consistent with their macroscopic state) than do larger adults. That doesn’t make the young “more advanced.”

    Entropy is about counting the number of energy microstates consistent with the macroscopic state of any thermodynamic system. It does not have to be any more complicated than that.
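
    As a concrete counting exercise (the standard Einstein-solid toy model, nothing specific to Sewell's argument): the number of ways to spread q identical energy quanta over n oscillators is the binomial coefficient C(q+n-1, q), and the entropy follows directly.

        import math

        K_B = 1.380649e-23  # Boltzmann's constant, J/K

        def count_microstates(q, n):
            """Ways to distribute q identical energy quanta over n oscillators."""
            return math.comb(q + n - 1, q)

        # More energy -> more accessible microstates -> more entropy,
        # even for a tiny toy solid of 30 oscillators.
        for q in (50, 100):
            w = count_microstates(q, 30)
            print(q, w, K_B * math.log(w))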

    And the fact that Sewell did not submit his “paper” to the most appropriate journal – namely, Physical Review Letters (the go-to journal for the most important announcements in physics) – this fact alone reveals either complete naiveté on the part of Sewell or, more likely, the typical sleazy tactics ID/creationists have been using for nearly 50 years now.

    I would say that the “nuisance payoff” to Sewell’s lawyers is well worth the exposure of Sewell’s tactics along with the fact that ID/creationists continue to play this game despite decades of being repeatedly refuted and rebuffed by the scientific community.

    P.S. Would there be any advantage to reposting and having available on a suitable thread the example I posted recently over on the Bathroom Wall? I could easily repost it.

    mrg · 24 June 2011

    SWT said: I just took a look back at Mark Perakh's excellent essay and was greatly amused by the first comment.
    I thought that was intriguing. I wouldn't really think that the ICR or the AIG have been "marginalized" by the DI, but clearly they have been following the lead of the DI's approach of "replace blatantly stoopid arguments with opaque ones." And so the blatantly stoopid SLOT argument has been largely supplanted by the glib gibberish of the "conservation of information" argument -- though it's nothing more than a shapeshifted version of the SLOT argument.

    Mike Elzinga · 24 June 2011

    mrg said: And so the blatantly stoopid SLOT argument has been largely supplanted by the glib gibberish of the "conservation of information" argument -- though it's nothing more than a shapeshifted version of the SLOT argument.
    Yeah; “In the Beginning Was Information.” They just make up crap as they go; and get paid well for it.

    Mike Elzinga · 25 June 2011

    Sewell doesn’t seem to realize it, but his "Poker Entropy and the Theory of Compensation" is actually a pretty good self-parody. His misunderstanding of the second law and the arrangements of matter is precisely what he is using in describing random five-card draws from a deck of playing cards. It is yet again another example of the “Fundamental Misconception by ID/creationists.” It’s Dembski’s excuse for inventing “complex specified information” and “conservation of information.”

    This just has to be juxtaposed with Henry Morris’s pseudo-science of thermodynamics as compared with what Rudolph Clausius actually did. It just gets funnier and funnier as it goes. We can only hope the general public isn’t getting dumber and dumber.

    This is from Rudolph Clausius in Annalen der Physik und Chemie, Vol. 125, p. 353, 1865, under the title “Ueber verschiedene für die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wärmetheorie.” ("On Several Convenient Forms of the Fundamental Equations of the Mechanical Theory of Heat.") It is also available in A Source Book in Physics, edited by William Francis Magie, Harvard University Press, 1963, page 234. (Note: Q represents the quantity of heat, T the absolute temperature, and S will be what Clausius names the entropy.)

    … We obtain the equation ∫dQ/T = S - S0 which, while somewhat differently arranged, is the same as that which was formerly used to determine S. If we wish to designate S by a proper name we can say of it that it is the transformation content of the body, in the same way that we say of the quantity U that it is the heat and work content of the body. However, since I think it is better to take the names of such quantities as these, which are important for science, from the ancient languages, so that they can be introduced without change into all the modern languages, I propose to name the magnitude S the entropy of the body, from the Greek word η τροπη, a transformation. I have intentionally formed the word entropy so as to be as similar as possible to the word energy, since both these quantities, which are to be known by these names, are so nearly related to each other in their physical significance that a certain similarity in their names seemed to me advantageous. …

    Clausius apparently translates η τροπη from the Greek as Umgestaltung and not Umdrehung. However, this doesn’t matter because he modified the word to entropy for the reasons he indicated. On the other hand, here is Henry Morris’ pseudo-scholarship back in 1973.

    The very terms themselves express contradictory concepts. The word "evolution" is of course derived from a Latin word meaning "out-rolling". The picture is of an outward-progressing spiral, an unrolling from an infinitesimal beginning through ever broadening circles, until finally all reality is embraced within. "Entropy," on the other hand, means literally "in-turning." It is derived from the two Greek words en (meaning "in") and trope (meaning "turning"). The concept is of something spiraling inward upon itself, exactly the opposite concept to "evolution." Evolution is change outward and upward, entropy is change inward and downward.

    Mike Elzinga · 25 June 2011

    And, of course, the comments over at Unimaginably Dense never cease to amaze. Those comments following Sewell’s whining are just as “remarkable.”

    Sewell clearly doesn’t know anything about the editor at the American Journal of Physics; but I can assure anyone here that this editor is not fooled by any of Sewell’s dopey “arguments.”

    386sx · 25 June 2011

    Mike Elzinga said: On the other hand, here is Henry Morris’ pseudo-scholarship back in 1973.

    The very terms themselves express contradictory concepts. The word "evolution" is of course derived from a Latin word meaning "out-rolling". The picture is of an outward-progressing spiral, an unrolling from an infinitesimal beginning through ever broadening circles, until finally all reality is embraced within. "Entropy," on the other hand, means literally "in-turning." It is derived from the two Greek words en (meaning "in") and trope (meaning "turning"). The concept is of something spiraling inward upon itself, exactly the opposite concept to "evolution." Evolution is change outward and upward, entropy is change inward and downward.

    He's completely ignorant of the real etymology, but even his wrong etymology is just bizarre, what with the weird "out-rolling" and "in-turning" phrases he uses. And besides that, using etymology of words to disprove a scientific theory is about as cheesy as scholarship (read: hucksterism) can get. Lol.

    Joe Felsenstein · 25 June 2011

    386sx said:
    Mike Elzinga said: On the other hand, here is Henry Morris’ pseudo-scholarship back in 1973.

    The very terms themselves express contradictory concepts. The word "evolution" is of course derived from a Latin word meaning "out-rolling". ...

    He's completely ignorant of the real etymology, but even his wrong etymology is just bizarre, what with the weird "out-rolling" and "in-turning" phrases he uses. And besides that, using etymology of words to disprove a scientific theory is about as cheesy as scholarship (read: hucksterism) can get. Lol.
    Morris was actually correct about the word "evolution", see here, though of course that the name "evolution" came from an inappropriate Victorian metaphor is not an argument against it.

    Steve P. · 25 June 2011

    Of course, it is the very entry into the biosphere of radiation from the sun that makes life, and its evolution, possible! In my posts I had fun with this, pointing out that Sewell’s “proof” also showed that weeds can’t grow, and neither can flowers.
    Er, Prof. Felsenstein, radiation burns the skin. It causes cancer. Bad news for a lot of organisms. Now how do you suppose early life was able to somehow develop a way to filter out the debilitating effects of bad light while simultaneously utilizing the good wavelengths? Seems they would have to figure out how to block the bad before being able to use the good light. Is there an emergent answer to this conundrum in here somewhere?

    As well, it appears life is devolving, what with all the extinction of species and all due, it appears, to nature's seeming inability to rein in the self-made pernicious problem of Man. True, nature is throwing lots of biological 'money' at this problem with new and interesting diseases that Man may eventually succumb to. But is it enough and in time?

    It seems NS' problems are two-fold: speed up mutational advantage in a large number of organisms in time to counter the effects of that one dominant organism Man, and/or debilitate Man's evolution long enough to give 'the rest of them' a breather. So how does NS go about doing this - 'turbo-boosting' a large portion of living organisms but at the same time 'spiking the gas' of the single dominant organism, to reestablish balance? Man is pretty damn smart. I wouldn't want to be in natural selection's shoes.

    TomS · 25 June 2011

    Typical creationist "response".

    To point out what should be obvious, even to a creationist:

    What you say has nothing to do with the 2nd law of thermodynamics.

    You have no response, so you change the subject.

    But, let us just pause a moment, and ask what the creationist explanation is for the issues that you bring up? That the "intelligent designer(s)" just wanted it that way?

    Why is the Earth round? Because that's the way that the ID wanted it? Why is the Earth flat? Because that's the way that the ID wanted it? Why is the Earth shaped like a pretzel, or like a "Penrose triangle", or why is there no Earth at all? Because that's the way that the ID wanted it? After all, can't "intelligent designers" do whatever they want, and we are in no place to question their motives?

    386sx · 25 June 2011

    Joe Felsenstein said:
    386sx said:
    Mike Elzinga said: On the other hand, here is Henry Morris’ pseudo-scholarship back in 1973.

    The very terms themselves express contradictory concepts. The word "evolution" is of course derived from a Latin word meaning "out-rolling". ...

    He's completely ignorant of the real etymology, but even his wrong etymology is just bizarre, what with the weird "out-rolling" and "in-turning" phrases he uses. And besides that, using etymology of words to disprove a scientific theory is about as cheesy as scholarship (read: hucksterism) can get. Lol.
    Morris was actually correct about the word "evolution", see here, though of course that the name "evolution" came from an inappropriate Victorian metaphor is not an argument against it.
    Okay thanks. His "out-rolling" and "in-turning" seem to me like bizarre hyphenated phrases that nobody would ever use. They seem more kooky than scholarly. (To me the non-scholar anyway. To my non-scholarly mind.) Morris seems to be utterly ignorant of Rudolph Clausius's coinage of the term "entropy" and just makes up whatever crap he feels like making up.

    TomS · 25 June 2011

    In 18th century biology, "evolution" referred to the appearance of the features of the embryo from their preformed state.

    SWT · 25 June 2011

    Steve P. said:
    Of course, it is the very entry into the biosphere of radiation from the sun that makes life, and its evolution, possible! In my posts I had fun with this, pointing out that Sewell’s “proof” also showed that weeds can’t grow, and neither can flowers.
    Er, Prof. Felsenstein, radiation burns the skin. It causes cancer. Bad news for a lot of organisms. Now how do you suppose early life was able to somehow develop a way to filter out the debilitating effects of bad light while simultaneously utilizing the good wavelengths? Seems they would have to figure out how to block the bad before being able to use the good light. Is there an emergent answer to this conundrum in here somewhere?
    You do know, don't you, that a couple of meters of water is all that's needed to filter out UV pretty effectively?

    386sx · 25 June 2011

    TomS said: In 18th century biology, "evolution" referred to the appearance of the features of the embryo from their preformed state.
    Thanks. The etymology page Joe Felsenstein provided says "Charles Darwin used the word only once, in the closing paragraph of 'The Origin of Species'". I did not know that...

    386sx · 25 June 2011

    If I were king, “out-rolling” and “in-turning” would be outlawed because they just look bizarro. They just rub me the wrong way for some reason. I will have nightmares for a long time after having seen those "words". :P

    Mike Elzinga · 25 June 2011

    Sewell is also complaining about “mistakes” Dan Styer supposedly makes in his AJP paper “Entropy and Evolution.” But here is what Styer wrote.
    (This creationist argument also rests upon the misconception that evolution acts always to produce more complex organisms. In fact evolution acts to produce more highly adapted organisms, which might or might not be more complex than their ancestors, depending upon their environment. For example, most cave organisms and parasites are qualitatively simpler than their ancestors. This biological misconception will not be discussed in this article.)
    So if Sewell is trying to change his pitch to one of “uncovering a misconception” about entropy in a physics journal, that shtick doesn’t fly either. Styer is simply giving creationists everything they want and more; and that includes the creationist’s mistaken notion that more complex organisms have lower entropy because they are “less probable” somehow. But then he shows that life is simply embedded in an environment in which the overall entropy is increasing by many orders of magnitude more than it is decreasing. The notion of “entropy compensation” is a creationist invention, not a physics invention.

    One could take Styer’s example much farther and calculate the entropy decrease for all matter condensing on planet Earth; namely, the solidification of molten rocks, the formation of ice sheets, the oxidation of various elements, etc., etc.; the list gets pretty long. And the list may or may not include some forms of life, depending on how the life forms evolved, what size they are, and how much energy entered and left their systems over the course of their lives.

    If matter is condensing – and it did as planet Earth formed – energy was released into surrounding space; entropy increased. Yet things formed; crystals and rocks formed; molten iron and copper and other minerals became solids. Their atoms and molecules condensed into fewer energy microstates. Nothing “disobeyed” the second law of thermodynamics; because the second law is required for matter to condense.
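
    For scale, one can estimate the entropy production from Earth merely absorbing sunlight at one temperature and re-radiating it at another; a rough sketch with round numbers (the absorbed power and temperatures are standard approximate values, and this is order-of-magnitude only):

        # Order-of-magnitude entropy production from Earth processing sunlight.
        p_absorbed = 1.2e17  # W, solar power absorbed by Earth (rough)
        t_sun = 5800.0       # K, effective solar surface temperature
        t_earth = 288.0      # K, mean surface temperature of Earth

        # Entropy arrives at roughly P/T_sun and leaves at roughly P/T_earth.
        ds_dt = p_absorbed * (1.0 / t_earth - 1.0 / t_sun)
        print(f"{ds_dt:.1e} J/(K s)")  # ~ 4e14 J/(K s) of entropy production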

    Flint · 25 June 2011

    Why is Sewell focusing on evolution? If his view of thermodynamics is correct, it seems to me that life as we know it wouldn't be possible, at least not without Sewell's god propping up every metabolism there is. But this has always confused me. If life itself is possible, evolution does nothing more than shuffle organic molecules around. Sewell seems to be tilting at the wrong windmill here. He should be arguing that LIFE is not possible.

    Henry J · 25 June 2011

    Using the 2nd law as an argument against evolution is arguing that life isn't possible. It's just that the users of that argument don't seem to realize that, even when it is explicitly pointed out to them.

    TomS · 26 June 2011

    I find it an interesting exercise when coming across an argument against evolution to check whether the argument is at least as relevant when formulated as an argument against reproduction and development. (Or against "micro"evolution within "kinds", which so many of the evolution deniers insist that they accept, but it's not as amusing to see the results.) From "teach all sides" to "irreducible complexity". From "if X is true, then we cannot trust our knowledge" to "I believe that I have a special relationship with my Creator".
    And "the 2nd law of thermodynamics".

    Atheistoclast · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    mrg · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Flint · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    phantomreader42 · 26 June 2011

    TomS said: I find it an interesting exercise when coming across an argument against evolution to check whether the argument is at least as relevant when formulated as an argument against reproduction and development.
    ID is nothing more than the child's story of a magical baby-delivering stork, retold in the idiom of information theory.

    mrg · 26 June 2011

    phantomreader42 said: ID is nothing more than the child's story of a magical baby-delivering stork, retold in the idiom of information theory.
    More like "retold in a gibberish parody of the idiom of information theory".

    Atheistoclast · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    DS · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    TomS · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    DS · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Mike Elzinga · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Flint · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Flint · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Atheistoclast · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Mike Elzinga · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    mrg · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    apokryltaros · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    DS · 26 June 2011

    This comment has been moved to The Bathroom Wall.

    Richard B. Hoppe · 26 June 2011

    I tossed the Atheistoclast derail to the BW. Sorry, folks.

    mrg · 26 June 2011

    Richard B. Hoppe said: I tossed the Atheistoclast derail to the BW. Sorry, folks.
    Don't apologize! Thank you.

    Mike Elzinga · 26 June 2011

    Richard B. Hoppe said: I tossed the Atheistoclast derail to the BW. Sorry, folks.
    Thank you. I agree with mrg; no need to apologize.

    John · 26 June 2011

    Richard B. Hoppe said: I tossed the Atheistoclast derail to the BW. Sorry, folks.
    No need to apologize, Richard. I concur with mrg and Mike; you did what you had to do. At least you saved me the trouble of addressing his absurd claim that there are "limits to Darwinism".

    bigdakine · 26 June 2011

    SWT said:
    Henry J said: Why is it always the second law? Why not the first or third? (Or fourth if there's that many.)
    The second law
  • Talks about how systems change in time
  • Is couched in terms of a quantity few lay people actually understand
  • Is easily misrepresented because of an unfortunate nomenclature choice by the founders of information theory
  • Is easily misrepresented because of the statistical nature of the Boltzmann interpretation and the difficulty of explaining that interpretation to lay people (especially innumerate lay people)
  • That's why. (At least IMNSHO.)
    The fourth law is to remember the first three: 1. There's no free lunch. 2. You can't get it wholesale. 3. You must pay tax.

    Steve P. · 27 June 2011

    TomS said: Typical creationist "response". To point out what should be obvious, even to a creationist: What you say has nothing to do with the 2nd law of thermodynamics. You have no response, so you change the subject. But, let us just pause a moment, and ask what the creationist explanation is for the issues that you bring up? That the "intelligent designer(s)" just wanted it that way? Why is the Earth round? Because that's the way that the ID wanted it? Why is the Earth flat? Because that's the way that the ID wanted it? Why is the Earth shaped like a pretzel, or like a "Penrose triangle", or why is there no Earth at all? Because that's the way that the ID wanted it? After all, can't "intelligent designers" do whatever they want, and we are in no place to question their motives?
    Typical design denier response. Rather, it goes directly to the heart of the issue. Evolution, promoted here on PT as a creative force would certainly defy the SLOT. Fortunately, we know better. Evolutionary processes are stop-gap measures, maintaining what already exists, but alas slowly losing the battle. So to Joe's quip that duh, life just so happens to be able to utilize light from the sun, therefore Sewell is wrong, wrong, wrong, is rather er infantile.

    As to creationists response for why life can use light from the sun but not rocks? Good question. Life obviously contains something rocks don't. Dembski proposes active information. To be sure, we are trying to quantify in 'scientifically innovative ways' just what makes life different from rocks. You (pl) in turn simply say 'its all physics and chemistry so why do you bother looking elsewhere'. Stark contrast, no?

    "ID waxing, ND waning".

    Dave Lovell · 27 June 2011

    Steve P. said: As to creationists response for why life can use light from the sun but not rocks? Good question. Life obviously contains something rocks don’t. Dembski proposes active information.
    Place a piece of ice in a hole in an impervious bed of rock. On the application of sunlight, water from the melting ice will fill the hole. Does water contain the "information" to allow it to use sunlight and shape itself to fit exactly into every nook and cranny of the hole? Is the water "alive"?

    TomS · 27 June 2011

    I'm not a design denier. I believe in microdesign. You deny macrodesign when you deny that rocks are designed.

    SWT · 27 June 2011

    Steve P. said: Evolution, promoted here on PT as a creative force would certainly defy the SLOT.
    Prove it. Be sure to use math.
    Fortunately, we know better. Evolutionary processes are stop-gap measures, maintaining what already exists, but alas slowly losing the battle.
    Prove it. Be sure to include detailed refutations of published work demonstrating otherwise.
    So to Joe's quip that duh, life just so happens to be able to utilize light from the sun, therefore Sewell is wrong, wrong, wrong, is rather er infantile.
    Joe is correct. Sewell is wrong. A key piece of the argument in Sewell's withdrawn paper (which is just a repackaging of his other flawed remarks on this subject) is easily shown to be inconsistent with observation. I'll explain this one, so grab your copy of the paper.

    Sewell's equations (1)-(5) are a correct instantiation of the second law for a non-adiabatic system in which only heat transfer by conduction is occurring. Eq. (4), in particular, is an entropy balance for the system, stating that the accumulation of entropy in the volume is the sum of the entropy produced within the volume and the net flow of entropy into the volume through its boundary. So far, so good.

    What if more is going on than heat transfer? In this case, the fundamental thermodynamic property relationship can be combined with material, energy, and momentum balances to produce a complete entropy balance. In its more general form (for a system with nc distinct components and nr reactions among these components), the integrand in the first term on the right side of Eq. (4) is a sum of products of fluxes and conjugate forces divided by the temperature:

        (heat flux) * (driving force for heat conduction)
      + (flux of component 1) * (driving force for diffusion of component 1)
      + ...
      + (flux of component nc) * (driving force for diffusion of component nc)
      + (momentum flux) * (driving force for momentum transfer)
      + (reaction rate 1) * (driving force for reaction 1)
      + ...
      + (reaction rate nr) * (driving force for reaction nr)

    This first term is, in fact, the entropy production in the system. The second law constraint is that the integrand of this first term must be non-negative. The integrand for the second term of the right side of Sewell's Eq. (4) has the form of a sum of entropy fluxes due to the flow of heat and each component out of the control volume, and so represents a "flow" of entropy out of the system due to heat and mass flows.

    Here's one of the places Sewell really goes off the rails. He asserts, incorrectly, that we should distinguish between "thermal entropy" and "component i entropy" and insists, incorrectly, that any "compensation" of entropy production must occur process-by-process: entropy production by heat conduction can only be "compensated" for by a heat flow, entropy production by diffusion of component i can only be "compensated" for by a flow of i. If Sewell is correct, there is no way to "compensate" for either chemical reactions (including phase changes) or for viscous effects in the system. Both of these assertions are incorrect and lead quickly to conclusions contrary to observation.

    To see this, let's apply Sewell's argument to a simple non-adiabatic system. If we take a sample of water and cool it sufficiently, we know by observation that the sample will freeze. We also know that the entropy of the ice formed is less than the entropy of the water we started with. This is, of course, no real problem, since we know that there is a flow of heat out of our water sample that can be thought of as carrying entropy out of the sample.

    Sewell asserts, however, that there are multiple kinds of entropy; for this case, his argument says that we must distinguish between "thermal entropy," "liquid water entropy," and "ice entropy." Sewell's argument, applied to this system, is that there can be no "compensation" among these types of entropy. Since Sewell argues that "heat entropy" cannot "compensate" for "ice entropy" and "liquid water entropy," the logical conclusion of Sewell's argument is that there is no way taking heat out of the system can compensate for the entropy decrease due to conversion of liquid water to ice. This is, of course, not correct.

    So yes, Sewell is wrong, wrong, wrong. He is wrong to assert that there are multiple kinds of thermodynamic entropy. He is wrong to assert that "compensation" can occur only process-by-process, and I suspect that in any other context he would not accept, on a mathematical basis, his process-by-process "compensation" argument. Process-by-process compensation is sufficient to satisfy the entropy balance equation, but not necessary; a mathematician of his caliber should know better.
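
    To put rough numbers on the freezing example, here is a minimal Python sketch of the bookkeeping (standard handbook property values; the surroundings temperature is just an illustrative choice):

        # Entropy bookkeeping for freezing 1 kg of water at 0 C while the
        # surroundings sit at -20 C. The point is only the sign of the total:
        # one heat flow "compensates" for the phase change, with no need for
        # separate "kinds" of entropy.
        m = 1.0            # kg of water
        h_fus = 333.55e3   # J/kg, latent heat of fusion
        t_freeze = 273.15  # K, freezing point
        t_surr = 253.15    # K, surroundings

        ds_water = -m * h_fus / t_freeze  # water -> ice: loses ~1221 J/K
        ds_surr = m * h_fus / t_surr      # heat dumped outside: gains ~1318 J/K
        print(ds_water + ds_surr)         # ~ +97 J/K: net entropy increases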

    apokryltaros · 27 June 2011

    Richard B. Hoppe said: I tossed the Atheistoclast derail to the BW. Sorry, folks.
    While you're at it, can we toss Steve P into the BW, too?

    mrg · 27 June 2011

    SWT said: Prove it. Be sure to use math.
    Yep! If the "construction" or "creation" implied by evolution is a violation of the SLOT ... then OUR constructions and creations, airplanes or computers or toys, are violations of the SLOT. But we can't violate the SLOT. "But our constructions are the product of intelligence." "Nobody's ever been intelligent enough to figure out a way around the SLOT. If anyone ever does, the news is going to make it around the world VERY quickly!" No more energy problems! Perpetual-motion machines now proven possible! I have never got a creationist to do anything but dodge this issue, usually with snark.

    Mike Elzinga · 27 June 2011

    Steve P. said: Evolution, promoted here on PT as a creative force would certainly defy the SLOT.
    NOTHING “defies” the laws of thermodynamics; PERIOD. That includes living organisms. Any creationist who claims otherwise has no clue what the laws of thermodynamics mean; and that has been true ever since “scientific” creationism was started by Henry Morris in 1970.

    Fortunately, we know better. Evolutionary processes are stop-gap measures, maintaining what already exists, but alas slowly losing the battle.

    There is no “battle” being lost. Living organisms exist in a precariously narrow energy window that allows matter within them to be loosely bound. If the matter within such organisms were tightly bound (frozen), the organisms wouldn’t work. If the matter were even more loosely bound, the organisms would come all apart. As it is, their loosely bound parts are subject to the inevitably larger forces that eventually tear them apart (wear them out). But because such systems reproduce, they pass on newer, nearly similar replicas of themselves; and if the replicas have enough variation to fit well within the now-current environment, the ones that fit best keep reproducing and dying. It’s simple chemistry and physics.

    To be sure, we are trying to quantify in ‘scientifically innovative ways’ just what makes life different from rocks. You (pl) in turn simply say ‘its all physics and chemistry so why do you bother looking elsewhere’. Stark contrast, no?

    There is no “stark contrast.” There is nothing about living organisms that “disobeys” any chemistry or physics (try pouring a little hydrofluoric acid on your hand and note that chemistry still works with biological organisms). The complexity of living organisms is responsible for any “differences” they have from non-living systems. It is also necessary for living systems to exist within an energy window that allows their critical parts to be loosely bound and to draw energy and matter from their surroundings. Move them outside that window or cut off any matter/energy input/output, and they “die.”

    “ID waxing, ND waning”.

    ID still stupidly clinging to pseudo-science, ND based on real science.

    TomS · 27 June 2011

    Airplanes can violate the "law of gravity" because they are intelligently designed.

    And that proves that birds are intelligently designed, too.

    DS · 27 June 2011

    Steve P wrote:

    "Fortunately, we know better. Evolutionary processes are stop-gap measures, maintaining what already exists, but alas slowly losing the battle."

    Well, as long as "we" are so smart and knowledgeable, perhaps "we" wouldn't mind answering a few questions. Complete with references from the scientific literature, of course, not just hand-waving made-up mumbo jumbo.

    1) How many species of living organisms are currently alive on the earth? How many were alive six hundred million years ago?

    2) How many genera of living organisms are currently alive on the earth? How many were alive six hundred million years ago?

    3) How many families of living organisms are currently alive on the earth? How many were alive six hundred million years ago?

    4) How many phyla of living organisms are currently alive on the earth? How many were alive six hundred million years ago?

    Now once you have answered those questions honestly, I think you will see that your "hypothesis" is conclusively falsified. But then again, you still don't believe that competition is real now do you?

    And of course, as Mike has pointed out, intelligence does NOT violate the SLOT, neither does intelligent design, neither does life. You got nothin', Poindexter.

    DS · 27 June 2011

    TomS said: Airplanes can violate the "law of gravity" because they are intelligently designed. And that proves that birds are intelligently designed, too.
    Is that the third law of aerodynamics?

    mrg · 27 June 2011

    NOTHING “defies” the laws of thermodynamics; PERIOD.
    Tech question, entropy of mixing ... from what Lambert says, it's basically the same as dispersal of a gas into a vacuum. In the dispersal of a gas in a vacuum, the concentration of energy of the molecules in a small vessel is dispersed by expanding into the larger (empty) vessel. Same thing with mixing, but it involves two or more substances dispersing into each other. This is not the same as shuffling a deck of cards, which despite old-fashioned examples doesn't represent a change in entropy. But what about dispersal of salt and pepper into each other? I don't think it does, but I can't get my little brain clearly wrapped around why not.

    mrg · 27 June 2011

    TomS said: Airplanes can violate the "law of gravity" because they are intelligently designed.
    Reminds me of the ancient "Bugs Bunny Versus The Gremlin" cartoon from WWII days ... ends with Bugs falling out of the sky in an old B-10 bomber, falling to earth nose down to certain death, screaming at the top of his lungs, closer and closer and closer ... ... and then, a few feet above the ground, the bomber comes to a screeching halt, hanging nose down a few feet above the ground. Cut to Bugs and the gremlin, leaning on the nose of the bomber, crunching on carrots: "Ehhh ... ran outa gas." And of course the great scene of when Bugs first meets the gremlin, who is pounding on the fuze of a bomb with a hammer: "You gotta hit these blockbusters JUUUUUUST right!"

    Just Bob · 27 June 2011

    bigdakine said: 1. There's no free lunch. 2. You can't get it wholesale. 3. You must pay tax.
    I prefer: 1. You can't win. 2. You can't break even. 3. And you can't get out of the game.

    Mike Elzinga · 27 June 2011

    mrg said:
    NOTHING “defies” the laws of thermodynamics; PERIOD.
    Tech question, entropy of mixing ... from what Lambert says, it's basically the same as dispersal of a gas into a vacuum. In the dispersal of a gas in a vacuum, the concentration of energy of the molecules in a small vessel is dispersed by expanding into the larger (empty) vessel.
    Adiabatic, free expansion of an ideal gas into a vacuum does no work. No energy enters, no energy leaves. The kinetic energies of all the molecules remain the same as they were before; i.e., the distribution of energy over all microscopic states (translational kinetic energies) remains constant. So the entropy remains constant. The only thing that changes is the rate of impacts of molecules with the walls of the container. If the volume of the container is doubled, for example, the impact rate is cut in half and the pressure drops to half. An ideal gas is not realistic however. In real gases, the molecules interact.

    Same thing with mixing, but it involves two or more substances dispersing into each other.

    It depends on what is mixing and how. If a partition is removed from a container containing identical gases in each half of the container, and if the gases in each half started at the same temperature and pressure, there would be no change in total entropy because the distributions of energy would be the same. And if the gases were identical except for the color of the molecules (green in one half of the container, red in the other), the entropy doesn’t change because the color has nothing to do with the distribution of ENERGY. (note: this is just an illustration; molecules are not “colored.”)

    This is not the same as shuffling a deck of cards, which despite old-fashioned examples doesn’t represent a change in entropy. But what about dispersal of salt and pepper into each other? I don’t think it does, but I can’t get my little brain clearly wrapped around why not.

    If it isn’t about energy dispersal, it’s not about entropy. Just because one can use the same enumeration techniques to describe spatial rearrangements does not necessarily mean those spatial rearrangements have anything to do with how the total energy of those entities is distributed among microstates.

    As it turns out, however, many of the spatial arrangements of the constituents of a thermodynamic system follow the same distributions as do the energies carried by those constituents. Thus the same mathematical techniques can be used to describe those spatial arrangements. For example, potential energies are connected to spatial relationships; two atoms in close proximity have a mutual potential energy well that may be holding them together.

    The key to all this lies in whether or not there are ENERGETIC interactions among the particles. If the property that is used to distinguish particles from each other is in no way involved in the interactions of the particles with each other or with their surrounding environment, then we cannot be talking about energetic exchanges and energy distributions. Insofar as any of those properties are directly correlated with energetic interactions, then we can apply thermodynamics to those properties.

    Thermodynamics is about ENERGY; always remember that. If it is not about energy, then any mathematical description of the distribution of some property of the constituents of a system is not about thermodynamics, even if the same mathematical descriptions (e.g., the formulas for entropy) are used.

    The key underlying concept when applying thermodynamics to a problem is that matter interacts with matter. Those interactions involve energy exchanges; and that is what thermodynamics and the second law are all about, ENERGY exchanges and ENERGY redistribution. It has very little to do with the redistribution of color, for example.

    The whole point of the notion of entropy was, from the beginning, to help relate different thermodynamic parameters to each other. That remains true after the development of statistical mechanics, with the additional advantage that those macroscopic parameters can now also be connected to microscopic parameters. For example, the notion that heat flows spontaneously from high temperature to low temperature can be interchanged with the notion that entropy must increase because the same quantity of heat divided by a lower temperature means entropy increases. The latter formulation may be an easier way to set up the problem.

    But we now also know that temperature is related to the average kinetic energy per degree of freedom in, say, a system of mobile molecules. So we also know why heat flows from high temperature to low temperature. And we also know for an isolated system of particles – provided that the constituents are able to interact with each other – that their energies will redistribute until all available energy microstates are occupied with equal probability. By its very definition, that means entropy increases.
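
    One can actually watch the "energy spreads around" claim happen in a toy simulation (a sketch only, not a serious kinetic model; the units are made up): start with all the energy on a few particles, let randomly chosen pairs repeatedly re-split their combined energy, and the distribution relaxes toward the familiar exponential form.

        import random

        random.seed(1)

        N = 1000
        # Start with all the energy concentrated in the first 10 particles.
        energies = [100.0] * 10 + [0.0] * (N - 10)

        # Random pairwise exchanges conserve total energy but spread it out.
        for _ in range(200_000):
            i, j = random.sample(range(N), 2)  # two distinct particles
            total = energies[i] + energies[j]
            split = random.random()
            energies[i], energies[j] = split * total, (1.0 - split) * total

        # After relaxation the energies are roughly exponentially distributed
        # about the conserved mean of 1.0; about 1 - 1/e (~63%) of the
        # particles end up below the mean, as an exponential predicts.
        print(sum(energies) / N)
        print(sum(1 for e in energies if e < 1.0) / N)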

    Richard B. Hoppe · 27 June 2011

    apokryltaros said:
    Richard B. Hoppe said: I tossed the Atheistoclast derail to the BW. Sorry, folks.
    While you're at it, can we toss Steve P into the BW, too?
    Not when he elicits informative comments like those of SWT and Mike Elzinga just above. They make SteveP's ignorant comments worth keeping.

    mrg · 27 June 2011

    Richard B. Hoppe said: They make SteveP's ignorant comments worth keeping.
    And we're not done yet. MrE's reply has provoked more questions but I need to refer to Frank Lambert's writings to get a handle on them. I get exasperated when creationists imply us science geeks are refusing to accept that we're off base. When I feel I'm off base, I won't rest until I've nailed down what's wrong. Or at least been forced by events to abandon the effort until I can renew it later.

    Henry J · 27 June 2011

    I get exasperated when creationists imply us science geeks are refusing to accept that we’re off base.

    But that's what their whole "argument" amounts to.

    mrg · 27 June 2011

    Henry J said: But that's what their whole "argument" amounts to.
    No argument there, but given how restless I get when I don't have a handle on something, it's obnoxious to be accused of complacency. Ah, but being obnoxious is creationist policy.

    mrg · 27 June 2011

    Mike Elzinga said: Adiabatic, free expansion of an ideal gas into a vacuum does no work. No energy enters, no energy leaves. The kinetic energies of all the molecules remain the same as they were before; i.e., the distribution of energy over all microscopic states (translational kinetic energies) remains constant. So the entropy remains constant.
    OK, this is what Lambert says:
    When molecules are allowed to expand into a larger volume (in three-dimensional space), quantum mechanics shows that an interesting change in possible energy levels takes place: the energy levels become closer together. (Technically, we must say that the density of occupiable levels in any selected energy range is greater.) This means effectively that molecules, if allowed to occupy a larger volume even without any increase in their energy, can spread out to occupy many more energy levels. This means greater dispersal of energy and an increase in entropy simply by there being a greater three-dimensional volume in which the molecules can move. (Further, because any change in which entropy increases is a spontaneous change. It happens without any outside aid, energy input, etc.)
    Now if this is right or not, I dunno, but I am puzzled. Thinking in terms of heat engines, according to Carnot a heat engine requires a temperature difference to do work. That's straightforward, no big deal. But if there's a temperature gradient, entropy is increasing from a heat transfer whether we extract any work from it or not. So if we have a pressure gradient, we can in the same way extract work from that -- like the old compressed-air torpedoes, a pressurized air bottle driving a turbine or reciprocating motor. It would seem that losing that pressure differential would entail an increase in entropy whether we extract any work from it or not.
    It depends on what is mixing and how. If a partition is removed from a container containing identical gases in each half of the container, and if the gases in each half started at the same temperature and pressure, there would be no change in total entropy because the distributions of energy would be the same. And if the gases were identical except for the color of the molecules (green in one half of the container, red in the other), the entropy doesn’t change because the color has nothing to do with the distribution of ENERGY. (note: this is just an illustration; molecules are not “colored.”)
    It would seem that mixing of energetically indistinguishable particles would involve no change in entropy. However, Lambert again states that there is a change in entropy in mixing even if it involves no change in energy:
    How does that apply to (1), perfume in a room? It spontaneously mixes with the gases in the large room because its energy is redistributed among more energy levels than in the small vapor space of the bottle. This is the same as having greater energy dispersal = an increase in entropy = spontaneity. And (2), cream in coffee? (Or any other kinds of liquids mixing?) Same as above. Because the motional energy in the molecules of the substances in cream can be more spread out between the molecules in the coffee, the energy of the cream, or of any liquid mixing with another, is redistributed among more energy levels in the mixture than alone by itself = greater energy dispersal = increase in entropy = spontaneous mixing.
    Y'know ... I don't feel the slightest bit embarrassed about not following this very well.

    mrg · 27 June 2011

    PS: If entropy is a measure of dispersal of energy, it would seem to me that the spread of energetic molecules from a pressurized chamber into a vacuum would certainly represent a dispersal of energy.

    apokryltaros · 27 June 2011

    Richard B. Hoppe said:
    apokryltaros said:
    Richard B. Hoppe said: I tossed the Atheistoclast derail to the BW. Sorry, folks.
    While you're at it, can we toss Steve P into the BW, too?
    Not when he elicits informative comments like those of SWT and Mike Elzinga just above. They make SteveP's ignorant comments worth keeping.
    That's a good reason, then.

    Mike Elzinga · 27 June 2011

    mrg said: OK, this is what Lambert says:
    I’ll need to go read what Lambert says in context; but the part you quote seems out of context or unqualified.

    Thinking in terms of heat engines, according to Carnot a heat engine requires a temperature difference to do work. That’s straightforward, no big deal. But if there’s a temperature gradient, entropy is increasing from a heat transfer whether we extract any work from it or not.

    Good!

    So if we have a pressure gradient, we can in the same way extract work from that – like the old compressed-air torpedoes, a pressurized air bottle driving a turbine or reciprocating motor. It would seem that losing that pressure differential would entail an increase in entropy whether we extract any work from it or not.

    Engines are designed to do work; and they therefore extract energy from the kinetic energy in the molecules hammering on turbine blades and pistons. Force times distance.

    Adiabatic free expansion of an ideal gas is an example of a change in pressure that does no work. Now this is an ideal situation in which the molecules of an ideal gas do not interact with each other. When the gas expands, the distribution of kinetic energy per degree of freedom does not change. This means that the temperature remains constant. And the total energy of an ideal gas depends only on its temperature. Adiabatic means no energy entered or left the system. So the energy is constant (just as the constant temperature says it is). So in this case, the total energy remains distributed among the same number of translational energy states; i.e., the entropy remains constant. PV = nRT and PV remains constant.

    In a real gas, molecules in close proximity are under the influence of mutual potential energy wells. As the gas expands, the molecules move “up” and out of those wells and lose kinetic energy in the process (the temperature drops). So the distribution of total energy among available energy states changes (entropy changes). How the entropy changes depends on the original temperature of the gas (how close the molecules were originally), how complicated the molecules are (whether or not they have rotational degrees of freedom and how many), and whether or not the gas molecules do work when beating on a piston that moves against a resisting force, or whether the molecules transmit energy to a lower pressure gas as they expand into the larger volume.

    It would seem that mixing of energetically indistinguishable particles would involve no change in entropy. However, Lambert again states that there is a change in entropy in mixing even if it involves no change in energy:

    The example Lambert gives is not an appropriate one to illustrate the point. Perfume and air molecules are vastly different, as are coffee and cream molecules. Indeed the entropy of mixing changes because the total energy is being distributed among a changing number of microstates.

    The example of mixing I gave earlier (identical but differently colored molecules in separate halves of a box) is a better illustration of the confusion that develops between spatial distributions and energy distributions. Here the entropy doesn’t change because (turn out the lights so you can’t see the colors) the number of ENERGY microstates doesn’t change. It is always about the number of ENERGY microstates. If these don’t change, the entropy doesn’t change.

    A further note about the connection between energy microstates (little energy “buckets”) and the macroscopic calculation of entropy: When we consider a classical system of particles with many degrees of freedom, temperature turned out to be the average kinetic energy per degree of freedom; in other words, KEavg per DF = ½ kB T. Heat, Q, is in units of energy. So ΔQ/T has units of energy divided by energy per degree of freedom, and this results in the number of degrees of freedom (little “buckets”).
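
    To put numbers on the mixing point, here is a minimal sketch of the standard ideal entropy-of-mixing formula (the mole amounts are illustrative):

        import math

        R = 8.314  # J/(mol K), gas constant

        def mixing_entropy(moles):
            """Ideal entropy of mixing, J/K, for distinguishable components."""
            n_total = sum(moles)
            return -R * sum(n * math.log(n / n_total) for n in moles if n > 0)

        # Two genuinely different gases, 1 mol each: entropy rises on mixing.
        print(mixing_entropy([1.0, 1.0]))  # ~ 11.5 J/K
        # "Green" and "red" labels that play no energetic role mean there is
        # really just one component, and no entropy change on "mixing."
        print(mixing_entropy([2.0]))       # 0.0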

    Y’know … I don’t feel the slightest bit embarrassed about not following this very well.

    Yeah, there is a lot of misinformation floating around about thermodynamics. It has been years of frustration for those of us who try to make it accessible to students and the public. And there is so much more that physicists have to know.

    Mike Elzinga · 27 June 2011

    mrg said: PS: If entropy is a measure of dispersal of energy, it would seem to me that the spread of energetic molecules from a pressurized chamber into a vacuum would certainly represent a dispersal of energy.
    That is a change in energy density. This is not the same as the number of energy microstates.

    mrg · 27 June 2011

    Mike Elzinga said: I’ll need to go read what Lambert says in context; but the part you quote seems out of context or unqualified.
    Yeah. It seems to flatly contradict what you're telling me, and I admit I found his argument hard to follow. What he's saying is that an increase in volume leads to an increase in energy states all by itself. See: http://entropysimple.oxy.edu/content.htm A commentary on this article would be useful. OK, one question though: does spontaneous heat transfer from a system to its environment represent an increase in entropy?

    Mike Elzinga · 27 June 2011

    Mike Elzinga said:
    mrg said: PS: If entropy is a measure of dispersal of energy, it would seem to me that the spread of energetic molecules from a pressurized chamber into a vacuum would certainly represent a dispersal of energy.
    That is a change in energy density. This is not the same as the number of energy microstates.
    I should add some other technical distinctions here. A density of energy states (density of states) refers to the number of energy microstates within a specified energy range. An energy density refers to the amount of energy contained within a specified spatial volume. The amount of energy transferred per unit time is referred to as power.

    Mike Elzinga · 27 June 2011

    mrg said: OK, one question though: does spontaneous heat transfer from a system to its environment represent an increase in entropy?
    Yes. Think for a moment about what spontaneous means. Use a classical system as an example. Heat is a transfer of energy. How does that occur? It occurs through interactions of particles with other particles or with a radiation field. Kinetic energy is transferred by way of momentum exchanges with particles and/or fields (e.g., photons). That is why heat spontaneously flows from high temperature (more kinetic energy per degree of freedom) to low temperature (less kinetic energy per degree of freedom): momentum transfers take place primarily in the direction of lower kinetic energy. So divide a given quantity of heat (energy) by the two temperatures and we see the energy going from a smaller number of degrees of freedom to a larger number of degrees of freedom. Entropy is increasing. (A little exercise: consider what is happening in that little concept quiz I gave the answer to over on the Bathroom Wall.)
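    [A one-line numerical check of that bookkeeping, in Python; the heat and temperatures are arbitrary illustrative numbers:]

        Q = 100.0                      # J of heat leaving the hot body
        T_hot, T_cold = 400.0, 300.0   # K
        dS = Q / T_cold - Q / T_hot    # entropy gained by the cold body minus entropy lost by the hot one
        print(dS)                      # about +0.083 J/K > 0: the energy ends up shared among more degrees of freedom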

    mrg · 27 June 2011

    Mike Elzinga said: The amount of energy transferred per unit time is referred to as power.
    Guy, if I was that ignorant, I wouldn't even be able to ask you the questions I am asking.

    mrg · 27 June 2011

    Mike Elzinga said: Yes.
    OK, now I'm really confused, because if a system (compressed air vessel) drains into a vacuum environment, the thermal energy represented by the molecular motions is dispersed into its environment just as certainly as the thermal energy of a heated object is thermally transferred to the atmosphere around it.

    Mike Elzinga · 27 June 2011

    mrg said:
    Mike Elzinga said: Yes.
    OK, now I'm really confused, because if a system (compressed air vessel) drains into a vacuum environment, the thermal energy represented by the molecular motions is dispersed into its environment just as certainly as the thermal energy of a heated object is thermally transferred to the atmosphere around it.
    Yes; but for an ideal gas undergoing adiabatic expansion and doing no work, there is no redistribution of energy among microstates (translational degrees of freedom). The total energy is simply spread out in space (less energy density), but with the same number of energy microstates (same entropy). Think of looking at a given section of the box without knowing anything about where the boundaries of the box were. You would “see” the same number of molecules with the same distribution of energies. You would see them less frequently because they have farther to travel in a bigger box before returning. But that has nothing to do with the distribution of energy among translational degrees of freedom (translational microstates). Energy density and density of states are not the same thing.

    Guy, if I was that ignorant, I wouldn’t even be able to ask you the questions I am asking.

    :-)

    Mike Elzinga · 27 June 2011

    But you raise an excellent point.

    The availability of energy to do work is also connected to energy density. Let those molecules expand into outer space, and we can no longer capture the energy and use it for driving turbines.

    But that still has nothing to do with energy states. In order to do work, energy has to “flow downhill,” so to speak – from higher kinetic energies to lower – because that is the direction the momentum transfers will occur. That depends on temperature differences, hence increases in entropy.

    But you can’t get much energy out of a few hundred molecules after the rest get away.

    mrg · 27 June 2011

    Again, Lambert is telling me what seems to be the exact opposite: that even though there is no change in overall energy, by expanding the volume the available microstates have expanded thereby and so the entropy has increased.

    What puzzles me is that by eliminating the pressure difference, even if no work was performed in doing so, we have eliminated the ability of the system to perform work thereby. Also ... there's no way to restore the pressure difference without doing work. Something tells me these two circumstances are addressed by the laws of thermodynamics, but I don't know precisely how.

    mrg · 27 June 2011

    Y'know, what's really amusing about this conversation is to envision Steve P trying to follow it. He will not understand a word of it. And, in failing to do so, he will still refuse to admit to himself that he has absolutely no understanding of such matters.

    Eric Finn · 27 June 2011

    mrg said: Again, Lambert is telling me what seems to be the exact opposite: that even though there is no change in overall energy, by expanding the volume the available microstates have expanded thereby and so the entropy has increased. What puzzles me is that by eliminating the pressure difference, even if no work was performed in doing so, we have eliminated the ability of the system to perform work thereby. Also ... there's no way to restore the pressure difference without doing work. Something tells me these two circumstances are addressed by the laws of thermodynamics, but I don't know precisely how.
    I think Professor Frank Lambert is right. According to quantum mechanics, the density of states is higher in a big volume than in a small volume. One of the experimentally verified examples is the Casimir effect.

    From the classical point of view, we can consider the two compartments as two subsystems, one containing ideal gas and the other one empty – much the same way as two pieces of metal initially at different temperatures, insulated from each other and both insulated from their environment. The entropy of the combined metal system will increase if they are later brought in thermal contact with each other. No energy flow to the environment will take place, since the combined system is perfectly isolated. We would not know that the entropy of our universe has increased, since we cannot see inside the container.

    The pressure difference... Each of the atoms of the ideal gas still has the same energy as initially. Surely they can do work, if you let them escape the container and collide against something.
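    [A numerical sketch of the two-metal-blocks example, in Python, assuming equal, temperature-independent heat capacities; the numbers are illustrative:]

        from math import log

        C = 100.0              # J/K, heat capacity of each block (assumed equal and constant)
        T1, T2 = 400.0, 300.0  # K, initial temperatures
        Tf = (T1 + T2) / 2     # final common temperature of the isolated pair

        # Each block's entropy change: the integral of C dT / T from Ti to Tf
        dS = C * log(Tf / T1) + C * log(Tf / T2)
        print(dS)              # about +2.1 J/K > 0, with no heat exchanged with the environment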

    Mike Elzinga · 27 June 2011

    mrg said: Again, Lambert is telling me what seems to be the exact opposite: that even though there is no change in overall energy, by expanding the volume the available microstates have expanded thereby and so the entropy has increased. What puzzles me is that by eliminating the pressure difference, even if no work was performed in doing so, we have eliminated the ability of the system to perform work thereby. Also ... there's no way to restore the pressure difference without doing work. Something tells me these two circumstances are addressed by the laws of thermodynamics, but I don't know precisely how.
    Ok, I went over there and read what Lambert had to say. And I think he needs to clarify and qualify some of his statements (I suspect he knows, but is having the same issues everyone does – including me – when trying to popularize thermodynamic concepts). It is much better done when someone such as you asks really intelligent questions because, as is so often the case, the expositor can only imagine what questions and confusions exist in the minds of those in the audience. I suspect Frank is no longer addressing live questions.

    In the context of extracting work from a thermodynamic medium, practical considerations are just as important as the theoretical ones. One would like to extract the energy before one dies and “turns to dust.” Thus, in the case of a diminishing pressure difference, we have to recognize that pressure depends not only on the kinetic energies of the particles (temperature) in the working medium; it also depends on the number of particles, the frequency with which they hit the face of a moving piston (the volume of the cylinder), and the forces resisting the motion of the piston against which the molecules work.

    One can concoct all kinds of scenarios in which one has the same total energy distributed not only among microstates, but also within various volumes. It could be a tremendous amount of energy. But if it is spread out in space, the time between impacts of those molecules increases as the volume increases. So if it is contained within an extremely large volume, for example, it may just take too long to accumulate the energy contained in all those molecules and exert enough force to move the piston, let alone anything attached to it.

    But the importance of the example of adiabatic free expansion is to highlight the difference between the number of energy states (entropy) and the spatial distribution of those states. How energy states are spatially distributed is not necessarily connected to how energy is distributed among energy states. It is the external constraints on the system that connect these; such as when the expanding gas actually does work by transferring momentum and energy to a moving piston, thereby doing work on an external environment or another system. If no work is done in an adiabatic expansion, entropy doesn’t change. Diffusing real molecules – especially complex molecules – can have significant entropy changes because the molecules interact and change the number of ways they can move (degrees of freedom), and therefore the number of available microstates among which the total energy of the system can distribute. Do not confuse the distribution of energy among microstates with the distribution of energy in space and time.

    Y’know, what’s really amusing about this conversation is to envision Steve P trying to follow it. He will not understand a word of it. And, in failing to do so, he will still refuse to admit to himself that he has absolutely no understanding of such matters.

    So far he hasn’t tried to pull FL’s attempt at copy/paste gotchas. That incident over on the BW was FL’s final Waterloo. Good riddance.

    mrg · 27 June 2011

    Eric Finn said: The pressure difference... Each of the atoms of the ideal gas still has the same energy as initially. Surely they can do work, if you let them escape the container and collide against something.
    Sure, if you have yet another exterior system at lower pressure, allowing them to expand a piston or drive a turbine or whatever. But the pressure difference between the original cylinder and its immediate environment is gone; and so we cannot use the original pressure difference to do any work, 'coz it ain't there no more.

    Eric Finn · 27 June 2011

    mrg said:
    Eric Finn said: The pressure difference... Each of the atoms of the ideal gas still has the same energy as initially. Surely they can do work, if you let them escape the container and collide against something.
    Sure, if you have yet another exterior system at lower pressure, allowing them to expand a piston or drive a turbine or whatever. But the pressure difference between the original cylinder and its immediate environment is gone; and so we cannot use the original pressure difference to do any work, 'coz it ain't there no more.
    I was favouring the interpretation by Lambert, according to which the entropy increased. I find your comment valid.

    mrg · 27 June 2011

    I'll sit on this for a while. I don't feel like I've really resolved my confusions on the matter, but I don't think I'll do more than go in circles if I persist. Something to go to the back of my mind for consideration over the coming years.

    On my JFK assassination studies I've been trying to track the paper trail of his ownership of the rifle. That's at least as much of a headache as trying to figure out thermodynamics -- with misrepresentations of the facts playing a large part in the difficulty. One headache at a time.

    mrg · 27 June 2011

    Mike Elzinga said: So far he hasn’t tried to pull FL’s attempt at copy/paste gotchas. That incident over on the BW was FL’s final waterloo. Good riddance.
    I am a bit surprised he left. Not that I miss him of course, it was something of a nuisance to have to remember to skip over his posts, though much less obnoxious than reading them. Anyway, in that case he was putting on a display of absolute, complacent, fatuous ignorance -- but that was nothing new, it was normal behavior for him. To the extent that he ever bothered to think of what he was doing, his act was perfectly effective at being obnoxious, and lacking any higher aspiration he lost nothing thereby.

    Mike Elzinga · 27 June 2011

    Eric Finn said: I think Professor Frank Lambert is right. According to the quantum mechanics, the density of states is higher in a big volume than in a small volume. One of the experimentally verified examples is the Casimir effect.
    Again, the discussion over there is incomplete. The energy of a quantum mechanical particle in a box is ε = (h²/8m)(nx²/Lx² + ny²/Ly² + nz²/Lz²), where h is Planck’s constant, m is the mass of the particle, and nx, ny, nz are positive-integer quantum numbers. Doubling each of the dimensions Lj also doubles each nj needed if the particle retains the same translational kinetic energy. So the energy of the particle remains constant if no work is done on or by the particle when changing the dimensions of the box. So the distribution of energy among all particles remains constant if the particles do no work when adiabatically expanding into a larger box. Again we have to distinguish between energy density and density of states. Simply expanding the volume into which a particle moves, without changing the kinetic energy of the particle, is exactly what we mean by adiabatic free expansion. No change in particle energies, no net change in energy microstates. Only a change in energy density, not density of microstates.
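    [A quick check of the doubling claim, in Python, using a helium atom as an arbitrary example particle:]

        from math import isclose

        h = 6.62607015e-34   # J*s, Planck's constant
        m = 6.646e-27        # kg, mass of a helium-4 atom (arbitrary example particle)

        # One-dimensional particle-in-a-box level: eps = (h^2 / 8m) * (n / L)^2
        def eps(n, L):
            return (h**2 / (8 * m)) * (n / L)**2

        # Doubling the box while doubling the quantum number leaves the kinetic
        # energy unchanged, so twice as many levels fit below any given energy
        # in the bigger box.
        assert isclose(eps(5, 1.0e-9), eps(10, 2.0e-9))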

    Eric Finn · 27 June 2011

    Mike Elzinga said: So the energy of the particle remains constant if no work is done on or by the particle when changing the dimensions of the box. So the distribution of energy among all particles remains constant if the particles do no work when adiabatically expanding into a larger box.
    Yes, the energies of the particles remain exactly the same if we are considering a system consisting of ideal gas. The temperature won't change, but the pressure does (as you noted earlier). Expanding gases cool only if they do work. What I have in mind is the quantum mechanical treatment of "Particle in a box". http://en.wikipedia.org/wiki/Particle_in_a_box The allowable energy states will be closer to each other if we increase the dimensions of the box. Thus, the density of states on the energy scale will be higher in a large box than in a small box, and there will be more microscopic combinations compatible with a macroscopic state (constant energy). Of course, we can assume that all the particles have exactly the same energy. Then the density of states does not matter, because all the particles always occupy the same state. Problems may be encountered if the dimensions of the box are changed and that particular energy level is shifted, no matter how small the amount. Highly idealized examples tend to have their limitations.

    SWT · 27 June 2011

    I think a quick classical analysis shows that Lambert is correct.

    We have a sample of an ideal gas expanding adiabatically from volume V1 to V2 but doing no work during the expansion. The first law tells us that, for a closed system, the internal energy, U, is related to the heat added to the system, Q, and the work done by the system, W, through the equation

    dU = δQ - δW

    Also, the internal energy is related to the entropy and volume through the property relationship

    dU = TdS - PdV

    Since the process is adiabatic and does no work, the first law tells us there is no change in internal energy, so

    TdS = PdV

    Substituting the ideal gas law (PV=RT) into this equation gives us

    dS = (P/T)dV = (R/V)dV = R d(ln V)

    Integrating, ΔS = R ln(V2/V1)

    So, ΔS > 0 if V2 > V1

    This result makes sense to me, since the process is spontaneous and the calculated ΔA < 0.
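    [Numerically, in Python – one mole, doubling the volume; only the ratio of volumes matters:]

        from math import log

        R = 8.314462618    # J/(mol*K)
        V1, V2 = 1.0, 2.0  # any units; only V2/V1 enters
        dS = R * log(V2 / V1)
        print(dS)          # about +5.76 J/(mol*K) for a doubling of volume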

    Mike Elzinga · 28 June 2011

    SWT said: TdS = PdV Substituting the ideal gas law (PV=RT) into this equation gives us dS = (P/T)dV = (R/V)dV = R d(ln V) Integrating, ΔS = R ln(V2/V1) So, ΔS > 0 if V2 > V1 This result makes sense to me, since the process is spontaneous and the calculated ΔA < 0.
    But δW = 0 and δQ = 0 (no work is being done and the process is adiabatic); therefore d(PV) = 0, i.e., PV remains constant, which also means – through PV = nRT – that T is constant. The internal energy of an ideal gas depends only on its temperature (U = (3/2)N kB T). So the internal energy U remains constant. The kinetic energies of an ideal gas do not depend on the separations among the molecules (an ideal gas has no molecular interactions). dQ = TdS = 0. Therefore dS = 0; S is constant. The example you are showing is for a gas undergoing expansion and doing work, but with just enough heat being added to keep the temperature constant as the gas expands, thus keeping the internal energy U constant (dU = 0). This is the isothermal (not adiabatic) example of an ideal gas expanding and doing work. Then, yes, TdS = PdV as you calculated.

    https://www.google.com/accounts/o8/id?id=AItOawk8DvSOY0r1jrV-SzeCiMOibrnXTMnTPcA · 28 June 2011

    Eric, SWT and Lambert have it right. The number of accessible microstates increases with volume increase, for quantum reasons. Mike is struggling with the Gibbs paradox.

    Distinction must be made between reversible and irreversible adiabatic processes. This is not a reversible process.

    From a classical point of view, this is rather subtle. Although an irreversible adiabatic process has Q = 0 and W = 0 and Q+W = 0, resist the temptation to say dQ = 0 hence dS = 0. Remember, Q and W are not functions of state – only the sum is, and entropy is a state function.
    The reversible isothermal process with the same two endpoints for which the entropy change can be calculated has Q+W = 0 also, although Q = -W > 0.
    So from dS = dQrev/T, ∆S = n R ln(Vf/Vi)
    which must be the case for the irreversible adiabatic case too. That is Lambert's point.

    https://www.google.com/accounts/o8/id?id=AItOawk8DvSOY0r1jrV-SzeCiMOibrnXTMnTPcA · 28 June 2011

    Mike Elzinga said: It depends on what is mixing and how. If a partition is removed from a container containing identical gases in each half of the container, and if the gases in each half started at the same temperature and pressure, there would be no change in total entropy because the distributions of energy would be the same. And if the gases were identical except for the color of the molecules (green in one half of the container, red in the other), the entropy doesn’t change because the color has nothing to do with the distribution of ENERGY.
    No. See here and here. --JohnK (I see this new registration system boofed my ID)

    TomS · 28 June 2011

    mrg said: Y'know, what's really amusing about this conversation is to envision Steve P trying to follow it. He will not understand a word of it. And, in failing to do so, he will still refuse to admit to himself that he has absolutely no understanding of such matters.
    Any discussion involving stuff like calculus, which a majority of people can't understand, can give the impression that creationism touches on some deep and complicated science – as if there were something that merits discussion in a science class. And that would be a victory for creationism.

    https://www.google.com/accounts/o8/id?id=AItOawk8DvSOY0r1jrV-SzeCiMOibrnXTMnTPcA · 28 June 2011

    "No" is meant to refer to Mike's second paragraph.

    (Well, I can't fix my ID. Like the ideal gas molecules, I (John_K) am indistinguishable from myself (https://www.google.com/accounts/o8/id?id=blahblah) but if not this must somehow be actually demonstrated using energy, and that is key to understanding irreversibility and entropy.)

    --JohnK

    SWT · 28 June 2011

    Mike Elzinga said:
    SWT said: TdS = PdV Substituting the ideal gas law (PV=RT) into this equation gives us dS = (P/T)dV = (R/V)dV = R d(ln V) Integrating, ΔS = R ln(V2/V1) So, ΔS > 0 if V2 > V1 This result makes sense to me, since the process is spontaneous and the calculated ΔA < 0.
    But δW = 0 and δQ = 0 (no work is being done and the process is adiabatic); therefore d(PV) = 0, i.e., PV remains constant, which also means – through PV = nRT – that T is constant. The internal energy of an ideal gas depends only on its temperature (U = (3/2)N kB T). So the internal energy U remains constant. The kinetic energies of an ideal gas do not depend on the separations among the molecules (an ideal gas has no molecular interactions).
    So far, we agree completely. Our disagreement starts here:
    dQ = TdS = 0.
    This should in fact be δQrev = TdS. If the expansion followed a reversible path from the given initial state to the given final state, dS would be zero, but the path is not reversible. Consequently, we need to keep the PdV term in the expression for dU.
    … The example you are showing is for a gas undergoing expansion and doing work, but with just enough heat being added to keep the temperature constant and do work as the gas expands; thus keeping the internal energy U constant (dU = 0). This is the isothermal (not adiabatic) example of an ideal gas expanding and doing work. Then, yes, TdS = PdV as you calculated.
    For reversible expansion, dU = δQrev - δW. Since we’re dealing with an isothermal process and an ideal gas, dU = 0, and so TdS = δQrev = δW = PdV as before. Good discussion!

    mrg · 28 June 2011

    Yeah, I keep thinking that from my understanding of what entropy is all about -- energy dispersion -- the escape of a gas from a vessel into evacuated surroundings involves an increase in entropy whether any work is extracted from it or not. I was under the impression that entropy was about "availability" of energy for work, and if you lose availability, you get more entropy. Energy conservation considerations are for the first law, not the second.

    And from the Clausius equation -- heat transfer divided by absolute temperature is entropy change -- it would seem that the thermal energy in the molecules in the vessel is being transferred by simple dispersion into the environment.

    Think of Maxwell's Demon. If MaxDee accumulates active molecules, depleting them from the environment and partially evacuating it, he's reduced entropy -- which the SLOT says he can't do, at least not for free. But if MaxDee lets the molecules out again, doesn't that mean he's undone his reduction of entropy (that is, increased it again?)

    My problem is that I'm in the position of exercising intuition on something I know I don't understand. "But if you don't understand it, what valid intuition can you have on it?" Playing hunches only really works for experts.

    And then, if we get to a stopping point on this, on to the entropy of mixing, which is what I really want to know about: "If you mix gases or liquids together, you get an increase in entropy."

    "But what about mixing salt and pepper?"

    "No."

    "OK, I'm confused." But one nightmare at a time.

    mrg · 28 June 2011

    TomS said: Any discussion which involves stuff like calculus which a majority of people can't understand can give the impression that creationism touches on some deep and complicated science.
    What's really funny is that Steve P not only does not understand, he doesn't understand that he doesn't understand. He can pretend to play the game but he's not even wrong, he's not even in the game.

    Mike Elzinga · 28 June 2011

    (Well, I can’t fix my ID. Like the ideal gas molecules, I (John_K) am indistinguishable from myself (https://www.google.com/accounts/o8/id?id=blahblah) but if not this must somehow be actually demonstrated using energy, and that is key to understanding irreversibility and entropy.) –JohnK

    Indistinguishability is the key to the resolution of the Gibbs paradox. The N! permutations of N identical molecules are all indistinguishable. So the partition function (do we really want to get into partition functions here?) has to be divided by N!.

    But there is an easier way to look at it. Take any sub-volume of an ideal gas and look at the distribution of energy microstates within that sub-volume. By energy microstates we are not talking about energy density (energy per unit volume); we are talking about the distribution of kinetic energies of the molecules – the distribution of kinetic energies per degree of freedom. For an ideal gas of spherically symmetric molecules with no spin, for example, those consist of the 3N translational degrees of freedom. If such a gas occupies the two halves of a box with a partition, and if it is at the same temperature and pressure in each half, looking at either half reveals the same distribution among energy microstates. Removing the partition doesn’t change the distribution among microstates if the molecules are identical. The reason is the indistinguishability of the particles as it relates to the exchanges of energy and momentum among them. This would not be affected if the molecules in each half were “differently colored.” So the total entropy remains the same. “Color” has nothing to do with the energy and momentum exchanges among identical sets of molecules. [A numerical check of this partition-removal claim appears at the end of this comment.]
    Think of Maxwell’s Demon. …
    When considering Maxwell’s Demon, one also has to consider the physical mechanisms by which the Demon receives data about the positions and velocities of every molecule, and what is involved in accelerating and decelerating a shutter between the two halves of the box. Photons or other particles have to carry the “information” about each gas molecule back to the Demon, and there have to be energy losses connected with the accelerating and decelerating of the shutter as well as with the processing of data required to make decisions. Where does the energy come from if the gas and the Demon are isolated from the environment external to the box? It has to be extracted from the gas molecules, but it also gets redistributed right back into photons, which continue to interact with the gas molecules, the shutter molecules, and the Demon. If the Demon is a storehouse of energy not connected to the energy in the molecules, eventually that energy has to be dissipated by the Demon into the box of molecules as the Demon works with data and moves the shutter.

    As to the adiabatic free expansion, here again one looks at the distribution of the total energy among all the translational degrees of freedom. Removing a partition does not change that distribution. The temperature remains constant, and the temperature is still the average kinetic energy per degree of freedom. There are still 3N degrees of freedom. The pressure drops because of the reduction in the rate of molecular impacts with the walls of the container. This would also reduce the rate of exchanges of energy and momentum in the hard-sphere collisions among molecules, but it does not change the distribution of energy among microstates. Everything would be happening half as fast. But entropy is not connected to time intervals. It is simply the number of microstates consistent with the total energy. Again, there is a difference between energy density (energy per specified volume) and density of states (number of energy states within a specified energy interval).
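    [The numerical check promised above, in Python, using the Sackur-Tetrode entropy of a monatomic ideal gas – which already carries the 1/N! from indistinguishability. Helium and the particle numbers are illustrative assumptions:]

        from math import log, pi

        kB = 1.380649e-23    # J/K
        h  = 6.62607015e-34  # J*s
        m  = 6.646e-27       # kg, helium-4 (arbitrary monatomic example)

        # Sackur-Tetrode entropy of a monatomic ideal gas (includes the 1/N!)
        def S(N, V, T):
            U = 1.5 * N * kB * T
            return N * kB * (log((V / N) * (4 * pi * m * U / (3 * N * h * h))**1.5) + 2.5)

        N, V, T = 1.0e22, 1.0e-3, 300.0  # per half-box (assumed values)
        before = 2 * S(N, V, T)          # identical gas on both sides of the partition
        after  = S(2 * N, 2 * V, T)      # partition removed
        print(before, after)             # equal: no entropy change on "mixing" identical gases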

    Mike Elzinga · 28 June 2011

    mrg said: "OK, I'm confused." But one nightmare at a time.
    I wouldn’t be too concerned about idealized problems like free expansion of an ideal gas or some of the other “gedanken experiments” used to clarify concepts. Those are mostly for making sure students have concepts straight and are not conflating concepts. They are not achievable in reality (or at least they are so difficult experimentally that one finds better ways to do the experiment). Such problems do not take up much space in the better thermodynamics and statistical mechanics textbooks anyway. They are just playthings.

    In reality, most of the uses of entropy are for linking thermodynamic parameters together and relating macroscopic parameters to microscopic ones. One of the more useful mathematical techniques involves the use of what are called partition functions or sums over states functions. And problems are usually set up in such a way that one can link various parameters by specifying what constraints are on the systems under study. This is much more realistic.

    More to the topic of this thread, however, is the fact that ID/creationists have so badly mangled the concepts of thermodynamics that their versions have absolutely nothing to do with reality in any form whatsoever.

    Mike Elzinga · 28 June 2011

    Eric, SWT and Lambert have it right. The number of accessible microstates increases with volume increase, for quantum reasons. Mike is struggling with the Gibbs paradox.

    Struggling? Hmmm. Apparently my efforts are being misinterpreted (between my traveling and a busy schedule, I guess I haven’t been around enough or paying close enough attention). I am well aware of the paradoxes of classical statistical mechanics and their resolution once the quantum mechanical Fermi-Dirac and Bose-Einstein distributions are folded in. Sums over states (partition functions) are generally the way to go. I am well aware of the Sackur-Tetrode equation. The Gibbs paradox is a pretty common issue brought up in thermodynamics and statistical mechanics. These are all interesting and important; but to appreciate them, it helps to have had experience with the problems they address.

    I have generally tried to avoid such details in the context of discussions with laypersons, although I am perfectly fine with discussing these if that is what people want to do. But since the paradoxical issues came up, I had hoped to use some of the illustrations to separate out the concepts. The idealized (and physically unrealistic) examples raise these issues and provide the motivation for digging into those deeper topics. It's a pedagogical habit I have apparently acquired. It motivates some students and annoys the hell out of others.

    The more subtle issues involving reversible and irreversible processes were not so much what I was addressing as was the concept of entropy itself. ID/creationists have grotesquely mangled the concept, making it something about the universe coming all apart and a “barrier” to evolution. My apologies if my inattention has caused confusion. That was not my intent.

    Eric Finn · 29 June 2011

    Mike Elzinga said:
    mrg said: "OK, I'm confused." But one nightmare at a time.
    I wouldn’t be too concerned about idealized problems like free expansion of an ideal gas or some of the other “gedanken experiments” used to clarify concepts. Those are mostly for making sure students have concepts straight and are not conflating concepts. They are not achievable in reality (or at least they are so difficult experimentally that one finds better ways to do the experiment).
    I agree with you that one should not be too concerned about idealized problems. They are often something like parables making a point, but parables should not be extended too far. On the other hand, I understand mrg's confusion. One source says that the entropy increases (by a known amount) and another source says it stays the same. Both sources base their interpretations on valid principles.

    I would like to offer a thought that might have something to do with the issue. If we let an ideal gas expand, we end up having the allowable energy states closer to each other. Thus, there are more possible combinations in which the atoms might be arranged in occupying the energy states. The entropy has increased. It may be worth noting that this conclusion has nothing to do with “energy dispersed more widely in space”, or anything similar.

    Now, the question is: “How did they manage to do the re-arrangement?” The atoms do not have any interactions with each other or with the walls of the container (apart from changing direction when they hit a wall). It seems that an ideal gas is totally incapable of re-arranging its energy distribution. So, the entropy will also be constant during expansion. A follow-up question might be: “How did the ideal gas acquire an energy distribution that can be described by a common temperature?”

    It seems to me that it is correct to state that ideal gas (that can be described by a common temperature) in a large volume has higher entropy than ideal gas (that can be described by the same common temperature) in a small volume, even though the number of particles is the same in the two situations. Further, it seems to me that the large-volume case cannot (easily) be acquired from the small-volume case by means of simple free expansion of ideal gas.

    mrg · 29 June 2011

    Eric Finn said: On the other hand, I understand mrg's confusion. One source says that the entropy increases (by a known amount) and another source says it stays the same. Both sources base their interpretations on valid principles.
    I've been poking around on various online sources -- UCDavis ChemWiki for example. They overwhelmingly say that free expansion of a gas into a vacuum leads to an increase in entropy. (It might be noted that Frank Lambert wrote or co-wrote a goodly number of these pages.) I'm going to poke around on this in the town library as well. One element of confusion they point out is that while the spatial translation of molecules doesn't factor into entropy, the thermal energy in the molecules being translated does, amounting to a heat flow across a system boundary. But it is my plan to go through the various ChemWiki pages and write up a set of notes to see if I can make sense of them. I don't believe that I'm going to get anywhere with further discussion here. So far, I feel I know less than I did when I started.

    Eric Finn · 29 June 2011

    mrg said: I've been poking around on various online sources -- UCDavis ChemWiki for example. They overwhelmingly say that free expansion of a gas into a vacuum leads to an increase in entropy. (It might be noted that Frank Lambert wrote or co-wrote a goodly number of these pages.) I'm going to poke around on this in the town library as well.
    Everyone says that free expansion of a real gas leads to an increase in entropy. I tried to convey the idea that comparing two systems consisting of ideal gas with different volumes is no more than comparing two systems. One is not the starting point and the other the end point, with the process of free expansion in between. I might be interested in reading your notes.

    mrg · 29 June 2011

    Eric Finn said: I might be interested in reading your notes.
    Not for release, private scratchups only. I'm sure that they'll migrate into my elementary classical physics writeup: http://www.vectorsite.net/tpecp.html -- but it may not amount to more than a few paragraphs of changes. I don't think the thermo stuff in it is wrong by any means, but that's because I got to the point where I simply threw out everything I wasn't certain of. Not very satisfactory, so I am wondering what I can do to improve matters. I have gone through some very laborious studies to resolve questions at times -- with the work only ending up being a paragraph in a release document. But it has to be the RIGHT paragraph.

    Eric Finn · 29 June 2011

    mrg said:
    Eric Finn said: I might be interested in reading your notes.
    Not for release, private scratchups only. I'm sure that they'll migrate into my elementary classical physics writeup: http://www.vectorsite.net/tpecp.html
    Not quite sure if you wished me to comment on the link you provided. From the chapter [9.5] MISUNDERSTANDINGS OF THE SECOND LAW, I pick the following two paragraphs:
    The Universe may be running out of free energy, but it is not slowly falling apart. On the basis of a simple-minded interpretation of the Second Law, we couldn't imagine a Universe that amounted to much more than it did at the beginning, a dispersed thin fog of hydrogen with traces of helium evenly distributed throughout space. In reality, heavier elements were synthesized by natural processes from lighter ones; stars and planets were formed from the elements, making up intricate stellar and planetary systems; elaborate molecules and crystals arose on the planets, with the planets also developing surface features like mountains and oceans, as well as complex weather patterns. While some critics of modern evolutionary science claim the Second Law rules out the spontaneous emergence of life and its subsequent evolution, there's nothing in the law that implies any such thing.
    I think your treatment is in agreement with contemporary physics. You might consider ways to tell the audience that complex systems – whether they are planetary systems or chemical compounds – can be formed only if the constituents of the system can release energy. Much the same way, a ball can't stay in a well unless it loses some of its energy. Release of energy means an increase in entropy according to the formula dS = dQ/T. Thus, the second law of thermodynamics (even though it is not a “driving force”) must be involved in each act of a system becoming more condensed (or more complicated, more improbable, more ... whatever). Anyway, systems tend naturally to become less evenly spread. I am sure that Mike Elzinga can formulate the basic idea much better than I can. [A one-line numerical illustration of the dS = dQ/T bookkeeping follows at the end of this comment.]
    Physicists understand that the spontaneous emergence of various forms of order observed in nature doesn't mesh neatly with the Second Law, and have puzzled over theoretical means of reconciling the matter -- with some, for example, defining what they call "emergent systems" or various concepts of new laws of thermodynamics to cover this uncertain ground, though not everyone has been impressed by their work. In the meantime, it should be realized that any reference to entropy as "disorder" is simply bogus.
    Emergent systems and emergent properties. I think ferromagnetism is an example of an “emergent property” of matter: it can’t be predicted from the magnetic properties of its constituent atoms alone. However, we now have (quantum mechanical) descriptions that work well. Even then, we do not usually refer to quantum mechanics when we discuss the interactions between two macroscopic magnets. I think the concept of an “emergent property” is highly overrated. It simply means that we have established a basis to use new terms that are more convenient in discussion. Granted, we still need to be able to justify our claims from first principles.
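    [The one-line numerical illustration promised above, in Python; the numbers are arbitrary:]

        Q = 1.0e3        # J released to the surroundings as a system binds or condenses (assumed)
        T = 300.0        # K, temperature of the surroundings
        dS_surr = Q / T
        print(dS_surr)   # about +3.3 J/K: the surroundings' entropy rises, paying for the local order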

    Mike Elzinga · 29 June 2011

    mrg said:
    Eric Finn said: On the other hand, I understand mrg's confusion. One source says that the entropy increases (by a known amount) and another source says it stays the same. Both sources base their interpretations on valid principles.
    I've been poking around on various online sources -- UCDavis ChemWiki for example. They overwhelmingly say that free expansion of a gas into a vacuum leads to an increase in entropy. (It might be noted that Frank Lambert wrote or co-wrote a goodly number of these pages.) I'm going to poke around on this in the town library as well. One element of confusion they point out is that while the spatial translation of molecules doesn't factor into entropy, the thermal energy in the molecules being translated does, amounting to a heat flow across a system boundary. But it is my plan to go through the various ChemWiki pages and write up a set of notes to see if I can make sense of them. I don't believe that I'm going to get anywhere with further discussion here. So far, I feel I know less than I did when I started.
    I probably shouldn’t have raised such a subtle issue. It would not be good to have ID/creationists now claiming that there is a “theoretical crisis” in thermodynamics and statistical mechanics.

    The problem that SWT showed is usually resolved in the way he said. In some textbooks, the solution is to assert that entropy is a state variable. So if one can find a reversible path between the same two states that the irreversible path took, one “substitutes” the results from the reversible path for those of the irreversible path. Thus, the isothermal expansion that leaves the gas at the same temperature (same internal energy) is substituted for the adiabatic free expansion, with the result that the entropy has increased as it did in the isothermal case.

    I should also mention that few of the good textbooks for physicists actually include this problem in the set of illustrations. I suspect that the reason is that most physicists are aware of the subtle problem that arises when looking at it from a statistical mechanics perspective. It is not good pedagogy to simply assert that entropy is a state variable and then sweep such issues under the carpet. The “paradox” I was raising is an indication that something is amiss and that a deeper analysis is required. Reversibility and irreversibility are extremely subtle problems in physics; they are by no means trivial. My usual approach when discussing such an issue with laypersons is to simply assert that the issues are more carefully dealt with by quantum mechanics. Most laypersons accept that; but mrg persistently asks very good questions and is the kind of student every physics instructor loves.

    And the issues involved in actually carrying out an experiment on an adiabatic free expansion of a gas turn out to highlight many of the problems. Joule was the first to do it and, as it turns out, did not have sufficient sensitivity. He concluded that the gas temperature didn’t change, but his experiment probably would not have shown a small change anyway. Later experiments (one can find discussions of these in Mark Zemansky’s book, Heat and Thermodynamics) had higher precision, on the order of a couple of percent.

    But the real issue is that the ideal nature of the stated problem does not comport with any physical reality. In a real chamber, the molecules of the gas not only interact with the molecules of the container by transferring momentum to the molecules making up the walls (which, in turn, pass energy in the way of phonons back and forth between wall and gas); the gas molecules are also immersed in a bath of photons. The system is in equilibrium to start, meaning that all net energy transfers have ceased. So immediately one cannot start the problem with what is called a microcanonical distribution; one more properly starts with a canonical distribution. But it doesn’t stop there. Quantum mechanically the molecules have to be treated as a wave function, and that brings in the boundary conditions involving the walls of the container. As is frequently the case with idealized problems, people tend to forget about boundary conditions. The Sackur-Tetrode equation is one of the examples in which such issues are starting to be addressed. But it turns out that irreversibility is still far more subtle.

    I feel bad that my enthusiasm for such subtleties got the better of me in a forum like this. I did not mean to cause further confusion. Such discussions are more appropriate for the really technical research that actually takes place in physics. Again, my apologies.

    Mike Elzinga · 29 June 2011

    It occurs to me – especially in order to head off any discouragement I may have caused mrg – that I should add an important point about that problem as handled by classical thermodynamics.

    The power of classical thermodynamics lies in the fact that one can empirically compute all sorts of things about thermodynamic systems without having to know any of the microscopic details. Lots of messy stuff lies hidden, presumably accounted for phenomenologically in the calculations.

    But not knowing the microscopic details is also a major weakness; hence statistical mechanics, quantum mechanics, quantum field theory, etc.

    So one can tentatively accept the results of classical thermodynamics regarding what it says about entropy in the case of free expansion. But what it means for entropy to be a state variable is very murky and needs that deeper analysis.

    If classical thermodynamics is right, then ultimately a detailed microscopic analysis and experimental verification will show it.

    And as long as there are paradoxes between classical thermodynamics and those deeper analyses, one has the excruciating job of clarifying and sorting concepts as well as making sure the underlying physical description of the problem really represents reality.

    We are already well past that with regard to thermodynamics and statistical mechanics; but, as the deeper analyses show, most of these “simple” problems are, in reality, extremely complex and subtle.

    But that’s how we learn.

    mrg · 29 June 2011

    I think that sounds along the lines of a comment I made in my physics tutorial: the combined gas law works fine, and a casual student of the subject has no need to know much more than that. But if it wasn't for Maxwell-Boltzmann statistics, there'd be no way to hook up the combined gas law to basic mechanical principles; it would just be an ad-hoc rule of thumb.

    I suppose it's also in a way like Maxwell's equations. Few EEs honestly need to know them; in fact, the heart of practical EE is Ohm's Law. But Ohm's Law only exists as a practical manifestation of Maxwell's equations.

    Mike Elzinga · 29 June 2011

    mrg said: I suppose it's also in a way like Maxwell's equations. Few EEs honestly need to know them, in fact the heart of practical EE is Ohm's Law. But Ohm's Law only exists as a practical manifestation of Maxwell's equations.
    In fact, Ohm’s law is what is often referred to as a constitutive relation in electromagnetic theory. It has an underlying microscopic basis, but one need not know that in order to work EE problems.

    mrg · 29 June 2011

    Mike Elzinga said: It has an underlying microscopic basis, but one need not know that in order to work EE problems.
    Well, at a EE level the practical reality is similar to what I once heard said about plumbers: "All you gotta know is that water runs downhill and payday's Friday."

    Mike Elzinga · 29 June 2011

    mrg said: Well, at a EE level the practical reality is similar to what I once heard said about plumbers: "All you gotta know is that water runs downhill and payday's Friday."
    :-) And the plumber makes more money.

    Mike Elzinga · 30 June 2011

    mrg said:
    Eric Finn said: I might be interested in reading your notes.
    Not for release, private scratchups only. I'm sure that they'll migrate into my elementary classical physics writeup: http://www.vectorsite.net/tpecp.html -- but it may not amount to more than a few paragraphs of changes. I don't think the thermo stuff in it is wrong by any means, but that's because I got to the point where I simply threw out everything I wasn't certain of. Not very satisfactory, so I am wondering what I can do to improve matters. I have gone through some very laborious studies to resolve questions at times -- with the work only ending up being a paragraph in a release document. But it has to be the RIGHT paragraph.
    Someone has asked me a couple of times here on Panda’s Thumb if I had published anything on the discussions that have taken place here about entropy, evolution, and the second law. I have not, even though I had already outlined a book in the past. But after constantly coming up against issues like the one raised here, and noting that colleagues tend to disapprove of writing popularizations, I kept putting it off. I just couldn’t find any satisfactory way of presenting the material; and I kept noting that my own dissatisfaction with other people’s work – such as Peter Atkins’s – could just as easily apply to anything I would write. And I always had the excuse of being too busy.

    One of the formats I explored was the old dialogue format, in which the interlocutors presented different perspectives and paradoxes. And that is sort of what happened almost spontaneously (no pun intended) on this thread when SWT presented the classical solution to the free expansion problem. On the other hand, I know from experience that such an approach may work well with some people and completely turn off others. So I understand where you are coming from on this.

    Mike Elzinga · 30 June 2011

    Eric Finn said: A follow-up question might be: “How did the ideal gas acquire an energy distribution that can be described by a common temperature?” It seems to me that it is correct to state that ideal gas (that can be described by a common temperature) in a large volume has higher entropy than ideal gas (that can be described by the same common temperature) in a small volume, even though the number of particles is the same in the two situations. Further, it seems to me that the large-volume case cannot (easily) be acquired from the small-volume case by means of simple free expansion of ideal gas.
    I don’t want to appear rude in not responding, but I have to zip in and out of this discussion at irregular intervals because of a busy schedule and a number of trips out of town. I still have a couple of weeks more of this.

    After the creationists, back in the 1970s, started screwing up thermodynamics and making entropy about everything coming all apart, I have had many discussions with colleagues, students, and others about how best to address the concept, both for the benefit of students and for the general public. I even seriously started writing a book. However, the book kept getting longer and longer and more and more technical until I found myself writing for physics students who already had some math background. Finding the right concepts without using much math is extremely difficult, although it is a good exercise for anyone who has had the responsibilities of teaching. So much of my effort ended up as supplementary materials for physics instruction.

    I think the biggest hurdle – at least as far as the general public is concerned – is to get across the notion of what is meant by a state variable. The examples from thermodynamics that are often addressed to non-majors or to the general public invariably make two major mistakes: entropy is equated with disorder, and a gas of molecules spread out in a volume of space is used as the illustration. In fact, many elementary thermodynamics texts for physics and chemistry students start out talking about gases in a volume of space. This almost immediately locks in the notion that entropy has something to do with things being mixed up and scattered randomly in space.

    Statistical mechanics courses, on the other hand, often begin with the notion of phase space, in which momentum is on one coordinate and position is on another. Entropy then blocks out a “volume” in this phase space. Then one has to grapple with the idea that any given volume in a phase space for a classical particle can be subdivided infinitely; hence the first mention of Planck’s constant and a lower limit to the size of the cells in phase space. Gases and harmonic oscillators are often used in illustrating the use of phase spaces. But here again, the notion that spatial distributions are involved in the concept of entropy gets locked in; and it isn’t until one gets to quantum mechanical two-state systems that the notion that space is involved in the enumeration of microstates begins to fall away. I suspect that may be one of the reasons that one author, Charles Kittel, began his text immediately with two-state systems. They make it easy to enumerate states, but don’t throw in the idea that space necessarily has anything to do with entropy. I have generally tried to be careful – almost to the point of dogmatism – to refer to entropy in terms of enumeration of microstates without making reference to space. [A minimal two-state counting sketch follows at the end of this comment.]

    The issues I have been raising here (I probably should not have done that in this venue) are issues one must grapple with in making the transition from the classical thermodynamics of expanding gases to the notion of entropy as a state variable in which space may or may not play a role. In many textbooks on classical thermodynamics, authors try to be careful to distinguish between reversible and irreversible processes. And an attempt is made to establish entropy as a state variable, meaning that the value of such a variable depends only on the particular state of a system and not on how the system got into that state. These sections of most thermodynamics textbooks have generally caused much puzzlement, pain, and misconceptions because they often come too early in a student’s development, when mathematical skill is still in a hazy state. But reversibility and irreversibility are actually pretty subtle topics. In classical physics, technically, the trajectories of an expanding gas are reversible. But then, how does this reversibility connect to the irreversibility of the free expansion itself? Lots of unanswered questions at this point.

    And as I pondered over how to simplify concepts for a lay audience, I kept slipping into these issues; and my writing kept getting more and more arcane, to where I was changing my concept of who my audience was. I still think these kinds of discussions have to take place not only with physics students, but with the non-majors and the general public as well. But I no longer think I know how to do it in a book for lay audiences.
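    [The minimal two-state counting sketch promised above, in Python; entirely an illustration in the spirit of Kittel's opening, not anyone's quoted example:]

        from math import comb, log

        kB = 1.380649e-23  # J/K

        # Two-state (spin-1/2) model: N sites, n of them "up". The multiplicity
        # is a pure count of microstates; no spatial coordinate appears anywhere.
        def g(N, n):
            return comb(N, n)

        def S(N, n):
            return kB * log(g(N, n))

        print(S(100, 10), S(100, 50))  # entropy peaks at n = N/2, the most mixed macrostate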

    Eric Finn · 1 July 2011

    Mike Elzinga said:
    Eric Finn said: A follow-up question might be: “How did the ideal gas acquire an energy distribution that can be described by a common temperature?” It seems to me that it is correct to state that ideal gas (that can be described by a common temperature) in a large volume has higher entropy than ideal gas (that can be described by the same common temperature) in a small volume, even though the number of particles is the same in the two situations. Further, it seems to me that the large-volume case cannot (easily) be acquired from the small-volume case by means of simple free expansion of ideal gas.
    In many textbooks on classical thermodynamics, authors try to be careful to distinguish between reversible and irreversible processes. And an attempt is made to establish entropy as a state variable, meaning that the value of such a variable depends only on the particular state of a system and not on how the system got into that state. These sections of most thermodynamics textbooks have generally caused much puzzlement, pain, and misconceptions because they often come too early in a student’s development, when mathematical skill is still in a hazy state.
    It may appear that I failed to appreciate the fact that entropy is a state variable, and does not depend on how the system got into that state. I was merely seconding your notion that a system without interactions cannot change its energy distribution. A system without any interactions is a highly unrealistic example – almost on par with an irresistible force hitting an impenetrable wall. For example, the vessel containing the ideal gas would need to have infinite mass; otherwise it would absorb energy from collisions.
    I still think these kinds of discussions have to take place not only with physics students, but with non-majors and the general public as well. But I no longer think I know how to do it in a book for lay audiences.
    I think you are doing well. The concept of entropy in physics is formulated in the language of mathematics. It is by no means easy to translate the concept into a language that lacks well-defined meanings for its words. And the task is not made any easier when a set of people deliberately try to mangle the words even more. Frank Lambert has accused chemists of sloppy work in this matter. It is my humble opinion that Lambert is only being polite: he wishes to clear his own backyard first and avoids pointing a finger at other disciplines. The truth is that physicists are equally to blame. The misconception of entropy contradicting evolution is silly; it is based only on catchwords and mental images. Geology, cosmology, chemistry, and physics all contradict the “principle of everything falling apart.”

    Mike Elzinga · 1 July 2011

    Eric Finn said: It may appear that I failed to appreciate the fact that entropy is a state variable, and does not depend on how the system got into that state. I was merely seconding your notion that a system without interactions cannot change its energy distribution.
    And I was trying to capitalize on the discussion to start a dialogue by raising just these issues. I was pleased that SWT put up the solution for the entropy of the free expansion of an ideal gas. The counter-“argument” I retorted with is probably one of the most common ways a thoughtful student approaches it. And it makes sense: conservation of energy says that dU = δQ − δW, and both δQ and δW are zero. So if the student realizes that the internal energy of an ideal gas depends only on its temperature (that’s a good thing), it seems logical to write T dS = 0 with dU = 0. And with T ≠ 0, surely dS = 0, therefore S is constant.

    Not many beginning thermodynamics textbooks discuss why it is OK to substitute the result for a reversible path between two states as the solution for the irreversible path between those same two states. And even the ones that do generally do not make it clear why entropy is a state function. And why does one use those funny deltas instead of ds, or the d’s with the line through them? At that early stage in the game, it is simply too subtle. So how can one say a student is wrong for such thinking?

    The problem lies with the lack of adequate dialogue, and with establishing entropy as a state function amid contradictory signals coming from the text and from the use of entropy in other contexts. One simply has to bring up these issues early and encourage students to think about them instead of just mindlessly “turning the crank” on problems. (And, unfortunately, many students just don’t want to think these things through; “plug-and-chug” has the illusion of being faster.)

    And if one uses a textbook that starts out by enumerating the microstates of a two-state system, then going back to systems in which space enters the picture has to be addressed. The press of time in most courses is bound to leave many subtleties unaddressed. And if this is an issue in physics and chemistry courses for majors, then it is an extremely difficult problem for non-majors and the lay public. The creationist meddling has made it that much harder.
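    To put numbers on that substitution – a sketch, with one mole and a volume doubling as my illustrative choices, not anything from the thread:

    from math import log

    R = 8.314462618    # gas constant, J/(mol K)
    n = 1.0            # moles of ideal gas (assumed)
    V1, V2 = 1.0, 2.0  # initial and final volumes; only the ratio matters

    # Along the actual free expansion, deltaQ = deltaW = 0 and dU = 0, so T is
    # unchanged. But dS = deltaQ/T holds only for a reversible path, so this
    # does NOT establish dS = 0.

    # Along a reversible isothermal path between the same two end states:
    delta_S = n * R * log(V2 / V1)         # J/K
    print(f"Delta S = {delta_S:.3f} J/K")  # ~5.763 J/K for a doubling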

    TomS · 2 July 2011

    Eric Finn said: I think you are doing well. The concept of entropy in physics is formulated in the language of mathematics. It is by no means easy to translate the concept into a language that lacks well-defined meanings for its words. And the task is not made any easier when a set of people deliberately try to mangle the words even more.
    I agree. The advocates of evolution denial are aware that any rhetoric that baits their opponents into responding with mathematics is a victory for them. The solution, IMHO, is to give a response that involves no mathematics. Don’t take the bait.

    SWT · 2 July 2011

    In reflecting on this discussion, Mark Chu-Carroll's mantra came to mind: the worst math is no math. (He writes the Good Math, Bad Math blog.) You can make verbal arguments about thermodynamics all day, but to demonstrate a point rigorously, you've got to do the math. (In our toy problem about the entropy change for the isothermal expansion of an ideal gas, the ultimate resolution is to do the statistical calculation of the entropy of the gas at the initial and final conditions.)

    This is how Sewell's argument goes off the rails: he throws up a few equations but doesn't – in a math journal! – do the additional mathematical development needed to demonstrate his point, instead resorting to verbal hand-waving that leads him to an incorrect conclusion.

    Mike makes an excellent point above about state functions, reversible paths, and equivalent paths – if students don't understand these concepts, they can't really do thermodynamics properly. IMO, this is compounded by the fact that entropy is a far more abstract mathematical construct than internal energy or enthalpy, which students find easier to relate to their experience.
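    To make that concrete, here is a sketch of the statistical calculation – the Sackur–Tetrode entropy evaluated before and after the expansion; monatomic argon, 300 K, and a volume doubling are my illustrative choices, not part of the original toy problem:

    from math import log, pi

    k_B = 1.380649e-23  # Boltzmann's constant, J/K
    h = 6.62607015e-34  # Planck's constant, J s
    N_A = 6.02214076e23

    m = 39.95e-3 / N_A  # mass of one argon atom, kg
    T = 300.0           # temperature, K
    N = N_A             # one mole of atoms

    def sackur_tetrode(V):
        # Entropy (J/K) of N monatomic ideal-gas atoms in volume V at temperature T
        quantum_conc = (2 * pi * m * k_B * T / h**2) ** 1.5  # 1/(thermal wavelength)^3
        return N * k_B * (log((V / N) * quantum_conc) + 2.5)

    V1 = 0.0224  # m^3, roughly one mole at STP
    S1, S2 = sackur_tetrode(V1), sackur_tetrode(2 * V1)
    print(f"S1 = {S1:.2f} J/K, S2 = {S2:.2f} J/K, Delta S = {S2 - S1:.3f} J/K")
    # Delta S comes out to N*k_B*ln(2), about 5.763 J/K: the same answer as the
    # reversible-path substitution, because entropy is a state function.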

    Mike Elzinga · 2 July 2011

    TomS said:
    Eric Finn said: I think you are doing well. The concept of entropy in physics is formulated in the language of mathematics. It is by no means easy to translate the concept into a language that lacks well-defined meanings for its words. And the task is not made any easier when a set of people deliberately try to mangle the words even more.
    I agree. The advocates of evolution denial are aware that any rhetoric that baits their opponents into responding with mathematics is a victory for them. The solution, IMHO, is to give a response that involves no mathematics. Don’t take the bait.
    This was a hard lesson for me personally to learn; I fought against the advice others were giving me about my early presentations to lay audiences. I finally tossed most of the math and found ways to express the concepts without it; but I was aware all the while that I didn’t want any of my colleagues catching me making some of the glib analogies and illustrations I was using. Using too much math with lay audiences turns out to be a real disadvantage in our society. Not only does it intimidate folks who already have bad feelings about math, it allows the ID/creationist to “up the ante” by chucking in more “mathematical analysis” and appearing to know more than he does.

    But I also agree with SWT: if one is going to understand the concepts, one eventually has to do the math. I would add, though, that doing the math without any feeling for the physical concepts gives only the illusion of understanding. In looking through a number of thermodynamics textbooks, one can find mathematical discussions of exact and inexact differentials and integrating factors. These are important mathematical concepts; but the question any good student of physics should be asking is, “What does any of this have to do with the physics?”

    Just to emphasize the point a little more, consider the advice we often give physics and chemistry students about checking units. This is very good advice, no? So let’s see how a conscientious student would apply this advice to the free expansion problem. He has already determined that dU, δW, and δQ are zero. So far so good. But now he remembers that, for a classical gas, temperature is (with Boltzmann’s constant as the conversion factor) the average kinetic energy per degree of freedom. That’s energy over a pure number of degrees of freedom. So dividing dU and/or δQ by T gives energy divided by energy-per-degree-of-freedom, or a number of degrees of freedom. But because both δQ and dU are zero, no more degrees of freedom were added to the system.

    One simply cannot take either the no-math/verbal approach or the purely mathematical approach. Somewhere, somehow, the math and the physical concepts have to connect accurately with physical reality, and we are obligated to check every link along the way.

    I suppose the thing that has annoyed me the most about ID/creationists like Sewell – someone who should know better – is the deliberate fouling up of the educational process by flooding the literature with “authorities” who can then be pitted against authority. In science, nobody has any authority until he or she can demonstrate conceptual understanding that connects with reality.
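    There is a kernel of truth in that dimensional check, and a short sketch makes it explicit (one mole, 300 K, and a reversible isothermal doubling are my illustrative choices): entropy in units of Boltzmann’s constant is a pure, count-like number.

    from math import log

    k_B = 1.380649e-23  # Boltzmann's constant, J/K
    N_A = 6.02214076e23
    R = k_B * N_A       # gas constant, J/(mol K)

    T = 300.0                     # K
    Q_rev = 1.0 * R * T * log(2)  # heat absorbed in a reversible isothermal doubling of 1 mol, J
    S = Q_rev / T                 # entropy change, J/K

    print(S / k_B)       # ~4.17e23: a pure number once k_B carries the units
    print(N_A * log(2))  # the same number: N_A*ln(2) = ln(Omega_final/Omega_initial)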

    mrg · 2 July 2011

    First law of technical writing: "A workable, easily understood simplification is vastly more useful than the complete, incomprehensible truth."

    Associated with this is the concept of “fog factor,” numerically ranked by the number of years of specialized education required to understand the argument.

    One of the truisms here is that if an argument can only be understood by people with a high enough fog factor, it generally isn’t of interest to anyone except those at the same level of fog factor.

    I'm going through object-oriented programming concepts right now. I've never understood the subject well, and the more I look into it, the more it seems – to a considerable if not complete extent – the product of people who don't want to make themselves understood. What's particularly obnoxious is that they use a common set of terms for which they have no mutually agreed-upon set of tidy definitions.

    "There are thousands of computer languages, but the vast majority are only used by the people who invented them."

    Mike Elzinga · 2 July 2011

    SWT said: This is how Sewell's argument goes off the rails: he throws up a few equations but doesn't – in a math journal! – do the additional mathematical development needed to demonstrate his point, instead resorting to verbal hand-waving that leads him to an incorrect conclusion.
    This is one of the things about ID/creationists that make me wonder about their mental health. Surely someone who has to deal with mathematics would learn, over a number of years, that there are concepts that have to be understood in order to be used properly. Normal, observant people can recognize this requirement and obligation on the part of experts in other fields. So why do ID/creationist leaders – every damned one of them – not recognize that they will be found out for faking it? Lots of people know about surface and volume integrals, the Divergence Theorem, Gauss’s Law, and when these apply. And to get miffed about it and complain on a creationist website about being “expelled” or discriminated against; this has to be related to some kind of mental illness or to just plain malicious bitterness about the fact that there are such things as cops and experts.

    SWT · 3 July 2011

    Mike Elzinga said: And to get miffed about it and complain on a creationist website about being “expelled” or discriminated against; this has to be related to some kind of mental illness or to just plain malicious bitterness about the fact that there are such things as cops and experts.
    Most people I know who've had a scientific paper rejected either (1) revise the manuscript and resubmit somewhere else or (2) concede (at least in the privacy of their own thoughts or with a trusted colleague) that the paper was rejected for valid reasons and cannot be salvaged. I don't think I know personally anyone who would publicly whine about it.

    Henry J · 3 July 2011

    Yeah, whining if any should be done in private.

    Maybe with cheese on the side.

    Or something like that.

    Mike Elzinga · 3 July 2011

    Henry J said: Yeah, whining if any should be done in private. Maybe with cheese on the side. Or something like that.
    Well, Sewell has his cheesy paper. And everybody now knows about it.

    Mike Elzinga · 6 July 2011

    I was trying to remember one of the other issues involving the entropy change of an ideal gas, and it came to me in the car while returning from a trip today.

    SWT presented the solution for the entropy change of a freely expanding ideal gas by substituting a reversible isothermal expansion between the initial and final volumes.

    If entropy is a state function, shouldn’t that be OK? Well, as it turns out, there is another wrinkle in this “recipe.”

    The relation between the change in entropy and the integral of δQ/T between states A and B of a thermodynamic system is often shown as

    S(B) − S(A) ≥ ∫_A^B δQ/T

    with the equal sign applying to a reversible path from A to B.

    So shouldn’t the free expansion, which is irreversible, produce a greater change in entropy?
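    Working it through – my gloss on the standard resolution, using the free expansion of n moles from volume V to 2V: along the actual irreversible path, δQ = 0 at every step, so

    ∫_A^B δQ/T = 0,

    while the entropy difference is fixed by the end states alone:

    S(B) − S(A) = nR ln(2V/V) = nR ln 2 > 0.

    So the strict inequality holds. The irreversible process does not produce a greater ΔS (the entropy change is pinned down by the states A and B); rather, the integral of δQ/T along the irreversible path falls short of ΔS.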

    Over the years there has been a better cataloguing of the misconceptions and pitfalls in learning the concepts of physics. These include things in very basic physics, such as Newton’s third law, where students are asked for the tension in a rope when a force F is applied to each end (as in a tug-of-war); the answer is F, not 2F, and many students miss it. There are literally dozens of catalogued misconceptions that have been identified, studied, and addressed.

    But I have not seen very much of this kind of work done with the concepts of thermodynamics and statistical mechanics. I think many physics instructors have been aware of the confusion caused by equating entropy with disorder. The issues of reversible versus irreversible transitions between states are also familiar.

    The typical approach to these kinds of misconceptions is to raise the issues if the students don’t (and they usually don’t), by posing questions and offering “counterexamples” that lead to paradoxes. It is good pedagogical practice to get these kinds of thought processes going, in order to make sure the concepts are ironed out and understood by the students.

    And this is precisely why ID/creationism should NOT be given any time in the public school science classroom. ID/creationism is deliberately concocted sectarian pseudo-science that begins by misrepresenting science.

    It is difficult enough to get the right concepts clear in the minds of students without also having to grapple with a blizzard of garbage deliberately designed to prop up sectarian dogma that conflicts with reality.

    Mike Elzinga · 26 July 2011

    Mike Elzinga said:
    Eric Finn said: I think Professor Frank Lambert is right. According to quantum mechanics, the density of states is higher in a big volume than in a small volume. One of the experimentally verified examples is the Casimir effect.
    Again, the discussion over there is incomplete. The energy of a quantum mechanical particle in a box is ε = (h²/8m)(n_x²/L_x² + n_y²/L_y² + n_z²/L_z²), where h is Planck’s constant and m is the mass of the particle. Doubling each of the dimensions L_j also doubles the number of n_j’s if the particle retains the same translational kinetic energy. So the energy of the particle remains constant if no work is done on or by the particle when changing the dimensions of the box. So the distribution of energy among all particles remains constant if the particles do no work when adiabatically expanding into a larger box. Again we have to distinguish between energy density and density of states. Simply expanding the volume into which a particle moves, without changing the kinetic energy of the particle, is exactly what we mean by adiabatic free expansion. No change in particle energies, no net change in energy microstates. Only a change in energy density, not density of microstates.
    AAACK! While looking for an unanswered question, I found this. Man, I bollixed that post! I shouldn’t try to do this when traveling or multitasking. What was I thinking? NOMAD not only made an error, NOMAD didn’t catch the error; that’s two mistakes. You are imperfect. Imperfection must be sterilized.

    No change in particle energies, no net change in energy microstates. Only a change in energy density, not density of microstates.

    This should have read

    No change in particle energies, no net change in total energy. Only a change in energy density, and number of microstates.

    That was the point of the part I highlighted in the original post.
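    The corrected point can be made concrete with a small counting sketch (an electron, a 1 eV cutoff, and 10 nm vs. 20 nm boxes are my illustrative choices): enlarging the box increases the number of quantum states available at or below a fixed energy, even though no particle’s energy has changed.

    from math import floor, sqrt

    h = 6.62607015e-34  # Planck's constant, J s
    m = 9.109e-31       # electron mass, kg (illustrative particle)
    E = 1.602e-19       # energy cutoff of 1 eV, in J (illustrative)

    def n_states_1d(L):
        # 1-D particle in a box: epsilon_n = (h^2/8m)(n^2/L^2) <= E
        # exactly when n <= L*sqrt(8*m*E)/h
        return floor(L * sqrt(8 * m * E) / h)

    for L in (10e-9, 20e-9):  # 10 nm box vs. 20 nm box
        print(f"L = {L*1e9:.0f} nm: {n_states_1d(L)} states at or below E")
    # Doubling L doubles the 1-D count (roughly 8x in three dimensions):
    # more microstates, same particle energies.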

    Mike Elzinga · 29 July 2011

    It appears that Granville Sewell must still be really tweaked about having his paper rejected. It’s now July 29, 2011, and he just can’t accept the fact that he is wrong.

    Henry J · 29 July 2011

    I thought I was wrong once.

    But it turned out I was mistaken.