Andreas Wagner: Arrival of the Fittest: Solving Evolution's Greatest Puzzle

Posted 4 November 2014

I started this post thinking I'd write a review of Andreas Wagner's recent book "Arrival of the Fittest: Solving Evolution's Greatest Puzzle" (links below), an engrossing book about how biological innovation arises from the structure of metabolic, genotype, and protein networks, and how robustness--the stability of phenotypes in the face of underlying genetic variability--is critical to evolutionary innovation. But there are several excellent reviews already out there, so another would be redundant. I'll mention only a couple of points I think worth emphasizing below the fold.

Robustness

First, robustness is a constant theme through the book. Wagner defines robustness as "the persistence of life's features in the face of change." Phenotypes are often invariant in the face of genetic change. There are multiple ways--indeed, very, very many ways--of metabolizing a given food source: many different sets of chemical reactions will do the job. The reactions that allow an organism to metabolize a given food source differ in detail within and across species, but are phenotypically the same. Note the "within": there is standing variation within a species in how food sources are metabolized.

Networks

Second, genotypes are linked together in high-dimensional spaces, forming massively interconnected networks. Consider a large set of metabolic reactions, say N of them. Each metabolism--each subset of those reactions--is a node in an N-dimensional space, and each node's nearest neighbors are the metabolisms that differ from it in just one reaction. Wagner's interest is in the characteristics of that network. He finds that it is massively interconnected: one can step from node to node without immediately or necessarily losing the metabolic properties of the 'home' node. A substantial proportion of a node's neighbors in fact perform the same metabolic function: they have the same phenotype. And that goes for the neighbors of the neighbors.
Given the roughly 5,000 metabolic reactions known in all of life, a space of 5,000 dimensions contains hundreds, perhaps even thousands, of phenotypically identical metabolisms among the nearest neighbors of a given node, and hundreds more among the nearest neighbors of that node's neighbors, and so on. Wagner shows that one can step from node to node until the underlying genotype shares only 20% or 25% of its composition with the original 'home' node yet is phenotypically the same. That's the source of robustness. There's more, of course, and my too-brief summary omits an enormous amount of detail.

There are other implications for us. For example, those high-dimensional interconnected networks with their phenotypically identical neighbors make nonsense of ID creationists' probability calculations. Briefly, the notion that the numerator of their probability calculations is "1" is ludicrous. (Wagner refers to young-earth creationists as "half literate and wholly ignorant.") I recommend the book heartily--it's not only an excellent summary of Wagner's ideas, but it's also eminently readable.

Here are a few links:
The book's home site (Don't get fooled by this creationist site.)
Barnes&Noble site (I read it on my Nook.)
The Amazon Borg site.
Mark Pagel's review in Nature.
World Science Festival's interview with Wagner.
Wagner's publications.
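To get a feel for the scale of these neighborhoods, here is a minimal sketch (my own illustration, not Wagner's code) treating a metabolism as a binary vector of 5,000 reaction on/off bits. A node's nearest neighbors are the vectors differing in exactly one bit, so even before asking which neighbors share the phenotype, the sheer number of neighbors within a step or two is enormous.

```python
from math import comb

# A metabolic genotype as a binary vector: bit i records whether the
# genome encodes reaction i, out of N known reactions.
N = 5000

# Nearest neighbors differ in exactly one bit; neighbors-of-neighbors
# (at Hamming distance 2) differ in exactly two.
one_step = comb(N, 1)   # neighbors one reaction-flip away
two_step = comb(N, 2)   # neighbors two flips away

print(one_step)   # 5000
print(two_step)   # 12497500
```

Even if only a small fraction of those neighbors are phenotypically identical, that fraction of millions is still a very large absolute number, which is the point of the "hundreds, perhaps thousands" estimate above.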

27 Comments

Robert Byers · 4 November 2014

"Half literate and wholly ignorant". I know, Iknow if its true its not abusive statements about a identifiable group. So its all about who decides what is true.
A new book by Lee Spetner called EVOLUTION REVOLUTION is taking on these ideas about the fit genes and so biology as not possible due to genes ability to adapt or switch immediately to adapt. discovery folks are pushing it.
Evolution is asking people to accept the unlikely.
Its not selection that is the error but the fantastic need for mutations to happen and as needed johnny on the spot.
Further it requires long timeframes.
Its seems all speculative to me. nothing omne can sink ones teeth into because no facts for actual evolution are presented. Mechanism speculation is evolutions majority shareholder.

riandouglas · 4 November 2014

Response to Robert's comment voluntarily posted to the bathroom wall

Just Bob · 4 November 2014

Robert Byers said: "Half literate and wholly ignorant". I know, Iknow [,] if its true its not abusive statements about a identifiable group.
The irony is just too delicious. Look at your first sentence (fragment). In the second sentence I bolded all the errors. There are six 'half literate' errors in fifteen words, or more than one for every three words. See if you can spot the one which contains two errors in one bolded word. I suggested this long ago: If you want your ideas to be taken more seriously, or at least read without groans and snickers, then start by using spell-check at the bare minimum. The system right here on PT makes a squiggly red line under anything that is suspect. A word processor program will go much further in identifying errors and suggesting corrections. Then again, you could pray for writing-mechanics skills, but we both know that doesn't work. And since Jesus answers prayers, I can only conclude that Jesus can't spell, either.

thlawry · 4 November 2014

I read the book, not sure I have fully digested it yet. One thing I have a question about is his metabolism simulation. He said that you could get from any metabolism A to any other B by adding all the reactions (one at a time) which are in B but not in A, and then subtracting the reactions which are in A but not in B. That way you always have a viable metabolism either A plus some extra reactions or B plus some extra reactions. So you can random walk, one reaction at a time, all through the space of viable metabolisms.

The underlying assumption is that while deleting a reaction can be harmful, adding reactions never hurts. Is that reasonable? Suppose the new reaction catalyzes a product which is harmful unless there is another reaction to use/transform the product? Is that likely?

DS · 4 November 2014

Briefly, the notion that the numerator of their probability calculations is "1" is ludicrous. (Wagner refers to young-earth creationists as "half literate and wholly ignorant.")

Well if you blindly parrot arguments that you don't understand, things like that are bound to happen. Any time you see one of these bogus calculations, you should realize immediately that the ignoramus doesn't have a clue about how biology works.

And of course they are proud of their illiteracy and ignorance, indeed they must work very hard to maintain it in the face of all the potential learning opportunities. They usually refuse to do anything to change it, ever. Some have been posting here long enough to have gotten an advanced degree in biology and still haven't learned anything. Some have made the same grammatical errors over and over and even though they have been repeatedly corrected, they have yet to get it right. But then again, their arrogant attitude is most likely what made them become science haters in the first place, so what else would you expect?

Richard B. Hoppe · 4 November 2014

Got a page/chapter for that? I don't recall it that way. On housekeeping: all further Byers-related stuff will go to BW hell.
thlawry said: I read the book, not sure I have fully digested it yet. One thing I have a question about is his metabolism simulation. He said that you could get from any metabolism A to any other B by adding all the reactions (one at a time) which are in B but not in A, and then subtracting the reactions which are in A but not in B. That way you always have a viable metabolism either A plus some extra reactions or B plus some extra reactions. So you can random walk, one reaction at a time, all through the space of viable metabolisms. The underlying assumption is that while deleting a reaction can be harmful, adding reactions never hurts. Is that reasonable? Suppose the new reaction catalyzes a product which is harmful unless there is another reaction to use/transform the product? Is that likely?

harold · 5 November 2014

The underlying assumption is that while deleting a reaction can be harmful, adding reactions never hurts. Is that reasonable? Suppose the new reaction catalyzes a product which is harmful unless there is another reaction to use/transform the product? Is that likely?
I agree with the request for a precise citation, but I also have a reaction to what you have said so far. The assumption certainly need not be that adding a reaction never hurts. The only necessary hypothesis would be that adding a reaction sometimes doesn't hurt too much. And that's not an assumption, it's a testable hypothesis. And it's more or less been tested and found to be true. Polymorphisms that lead to fitness neutral metabolic pathway variation within lineages are easily demonstrated.

Kevin B · 5 November 2014

Richard B. Hoppe said: Got a page/chapter for that? I don't recall it that way. On housekeeping: all further Byers-related stuff will go to BW hell.
thlawry said: I read the book, not sure I have fully digested it yet. One thing I have a question about is his metabolism simulation. He said that you could get from any metabolism A to any other B by adding all the reactions (one at a time) which are in B but not in A, and then subtracting the reactions which are in A but not in B. That way you always have a viable metabolism either A plus some extra reactions or B plus some extra reactions. So you can random walk, one reaction at a time, all through the space of viable metabolisms. The underlying assumption is that while deleting a reaction can be harmful, adding reactions never hurts. Is that reasonable? Suppose the new reaction catalyzes a product which is harmful unless there is another reaction to use/transform the product? Is that likely?
I'm not sure that the question isn't wandering into the dangerous waters of "directed evolution". There's a whole raft of questions. Firstly, are we actually talking about getting from A to B, or is it that A and B both derive from a common ancestor that is neither A nor B? Even for a direct A-to-B path, you can only say after the fact that there is a viable chain of intermediate forms. PS: The BW isn't "Hell"--it's a paradise (at least in the sense of a walled garden...)

thlawry · 5 November 2014

I read the book, not sure I have fully digested it yet. One thing I have a question about is his metabolism simulation. He said that you could get from any metabolism A to any other B by adding all the reactions (one at a time) which are in B but not in A, and then subtracting the reactions which are in A but not in B. That way you always have a viable metabolism either A plus some extra reactions or B plus some extra reactions. So you can random walk, one reaction at a time, all through the space of viable metabolisms. The underlying assumption is that while deleting a reaction can be harmful, adding reactions never hurts. Is that reasonable? Suppose the new reaction catalyzes a product which is harmful unless there is another reaction to use/transform the product? Is that likely?
Here's the reference: PLOS Computational Biology, Dec. 2009, "Evolutionary Plasticity and Innovations in Complex Metabolic Reaction Networks," Joao F. Matias Rodrigues and Andreas Wagner. See the supplementary information, Text_S1.pdf, page 4: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000613#s5 But more generally, see pp. 84-5 of the book, where Wagner describes calculating viability, defined as the ability to make 60-odd biomass molecules. It is a linear programming calculation; see footnotes 32 and 33 for chapter 3. On p. 94, the random walk involves randomly adding or deleting a reaction, computing the viability of the new metabolism, reversing the change if the result is non-viable, and otherwise continuing. The point is, viability is all about having enough reactions to make the minimum set of products; there is no consideration of toxicity from an "extra" reaction. (Text_S1.pdf just makes explicit what is implicit in the book.) So my question remains: being able to produce the basic 60 molecules is necessary for life, but is it sufficient? Note that this is a simulation of random walks through the "space" of possible metabolisms; we are not talking about a creationist "you can't get there from here" argument about the evolution of a specific metabolic feature.
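The random-walk loop described here (add or delete one reaction, revert if non-viable) is simple enough to sketch. In Wagner's simulation, viability is computed by linear programming over the reaction network; the `is_viable` predicate below is a toy stand-in of my own, so this illustrates only the walk's structure, not the biochemistry.

```python
import random

def random_walk(genotype, reaction_pool, is_viable, steps, seed=0):
    """Random walk in metabolism space: toggle one reaction per step,
    keeping the change only if the resulting metabolism stays viable."""
    rng = random.Random(seed)
    g = set(genotype)
    for _ in range(steps):
        r = rng.choice(reaction_pool)
        g ^= {r}                 # add r if absent, delete it if present
        if not is_viable(g):
            g ^= {r}             # non-viable: reverse the change
    return g

# Toy stand-in for viability: reactions 0 and 1 (the 'biomass' makers)
# must both be present. Wagner's version checks production of ~60
# biomass molecules via a linear programming calculation.
essential = {0, 1}
viable = lambda g: essential <= g

end = random_walk({0, 1, 2}, list(range(10)), viable, steps=500)
```

Because every non-viable move is reversed, viability is an invariant of the walk: whatever metabolism the walk ends at still contains the essential reactions, however far it has drifted from the starting set.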

Mike Elzinga · 5 November 2014

thlawry said: So my question remains: Being able to produce the basic 60 molecules is necessary for life, but is it sufficient? Note that this is a simulation of random walks through the "space" of possible metabolisms, we are not talking about a creationist "you can't get there from here" argument about the evolution of a specific metabolic feature.
Among biophysicists, I think the general understanding is that these large biomolecules evolve; so the ideas of evolution for complex organisms also apply to these molecules. If these molecules exist within a large network of molecules, and if there is a distribution in their characteristics, then there will very likely be a niche into which a variation falls and remains "viable." I am wondering if the word "viable" is also causing some confusion. If a molecule "discovers" a new "function" that perpetuates its continuance - in a sense, "inventing" a heretofore non-existent niche by way of an emergent property of its behavior - would that be considered "viability"?

Mike Elzinga · 5 November 2014

I might add, from my own experience working with Monte Carlo simulations and calculations, that another confusion arising in that context - genetic algorithms included - is flipping into the frame of mind that the landscape being explored is preexisting.

You can set up such a program to explore such landscapes by setting your criterion for "success" - the criterion in and of itself will imply a landscape even if it is not possible to imagine that landscape ahead of time. You discover the landscape by way of the various "solutions" that fall out of the calculation.

From a different perspective, one can also do ab initio calculations using essentially kinetic theory that includes the fundamental interactions among atoms and molecules within a given energy range such that kinetic energies are on the order of the mutual potential energies among the constituents participating in the interactions. This takes enormous supercomputing power; but what falls out are the emergent patterns that are relatively stable; in other words, a "landscape" of relatively persistent arrangements.

We've all seen analog versions of this when we watch the emergence of weather patterns, or wave patterns in windblown sand. An example of the early analog simulations - before the existence of analog and digital computers - is the one that often used flowing water and potassium iodide crystals to map out flow lines analogous to the electric fields in vacuum tubes. Wind tunnels and schlieren photography are still used in the design of aircraft and for studying other dynamics taking place in fluids. There have been literally thousands of such techniques that have been used in situations where direct calculation is far too complex or just downright impossible.

The point of these calculations is to "see" the underlying patterns that emerge from complex interactions. However, those patterns aren't preexistent, and we aren't puzzling about how something "knows" to go from A to B; we are seeing A and B emerge.

Joe Felsenstein · 5 November 2014

(Have not yet read Andreas's book).

I wonder whether he is treating the production of the 60 molecules as a yes/no affair.

Would a more biochemical-kinetics approach be needed to see whether overproducing one of the molecules would have a disadvantage?

Mike Elzinga · 5 November 2014

On a related topic, the 2013 Nobel Prize in chemistry gives some insight into the energetics involved in the modeling of complex molecules and their interconnections. I call your attention to Figure 1 on Page 3 (10) and the surrounding discussion.

Working from the inside of that figure outward, we are dealing inside with very strong electromagnetic interactions and a realm in which quantum mechanical rules apply. The next layer outward deals with classical physics, in which the lengths of the wave function are very short relative to interatomic distances. In the third, outer region we see interactions with the dielectric medium in which the molecule is immersed. As a very rough guide, the strength of interaction drops by roughly an order of magnitude at each step: from something on the order of an electron volt in the center, to about 0.1 eV in the intervening region, to about 0.01 eV at the outside. Near the outside we are dealing with van der Waals interactions, in which the proximity of molecules results in a redistribution of charge that causes molecules to attract. This is a fundamental property of condensed matter formations of complex structures; bonding and screening cause the strengths of interactions to drop off sharply with distance. Temperature is also very important because it sets the size of the kinetic energies of constituents relative to their binding energies.

This picture has tremendous relevance for how networks of such molecules can mutate and form interconnections, and for how mutations deep within the structure have much weaker effects on the network interconnections, resulting in "robustness" to such mutations. Organic chemists learned long ago that the structure of organic molecules has more to do with their properties than which particular atoms are located at various positions within those structures.

It also demonstrates that evolutionary pathways involving collections of these molecules aren't restricted to just the mutations at the "nuclei" of such a system.

Now look at the paper by Wagner et al., "Evolutionary Plasticity and Innovations in Complex Metabolic Reaction Networks." Note the paragraph just below Figure 1 of that paper:

The functions and phenotypes of biological macromolecules are robust to genetic change. Such robustness has important implications for the evolutionary plasticity of molecules, the ability of molecules to evolve new properties. Through mutations that do not affect a molecule's function, vast regions of phenotype space can be explored, regions in which molecules with novel phenotypes can lie [1],[6]. Does the same hold for genome-scale biological networks? Can biological networks with similar phenotypes have a vast number of interconnected and different genotypes, thus being both highly robust and having large evolutionary plasticity? These questions currently have few answers. We study the evolution of genome-scale metabolic networks to provide such answers.

This seems to me to be a nice representation of why biologists and biophysicists doing such mathematical modeling are on solid physical ground; what they do is based in good chemistry and physics, because good chemistry and physics are implicit in the computational algorithms they design and work with. Contrast these two works with the ID/creationists' "log2(N L) greater than 500" shtick.

Vince · 5 November 2014

Definition of "life" - networks of networks.

Richard B. Hoppe · 5 November 2014

I see the issue now. "Viability" = the ability to produce all 60 biomass molecules. Earlier, on page 77 of the Nook version, Wagner reports a test of that definition. Over "several hundred mutant E. coli strains, each of them engineered to lack one enzyme," he reports that "their [calculated] viability is highly accurate--it is correct for more than 90 percent of strains." See footnotes 32 and 33 for more--this phone is unhandy to post on.

harold · 6 November 2014

So my question remains: Being able to produce the basic 60 molecules is necessary for life, but is it sufficient? Note that this is a simulation of random walks through the “space” of possible metabolisms, we are not talking about a creationist “you can’t get there from here” argument about the evolution of a specific metabolic feature.
That wasn't your original question. The answer to this question would appear to be an obvious "no". There is no individual molecule that occurs in living systems that cannot be easily produced in non-living systems. Even highly adapted, specific receptor molecules can be synthesized with relative ease in a non-living system.

Although it might also apply to sterile semantic arguments about whether obligate intracellular parasites that obviously evolved from cells are "life", questions about what is minimally sufficient to define something as living apply more meaningfully to abiogenesis. When we talk about the theory of evolution, we're talking about a theory that explains the diversity and relatedness of cellular life, and things that, if not cellular life, seem to have evolved from cellular life. Undoubtedly somewhat analogous processes of variation and selection could be important in a model of abiogenesis, but the current theory of evolution deals with what is already life, or evolved from life.

thlawry · 6 November 2014

Let me start over. In chapter 3, Wagner describes a random walk simulation of the evolution of bacterial metabolisms--not molecules, not abiogenesis. The random walk involves either adding or deleting one reaction randomly at each step. If the new metabolism is not viable, the simulation goes back to the previous metabolism; otherwise it keeps going. The underlying biology is bacterial conjugation, in which a number of complete genes can be passed from one bacterium to another, perhaps adding a new metabolic reaction to the bacterium that gets the new DNA. No new molecules are developed by natural selection; it's just DNA swapping.

The question is about Wagner's definition of viability. A metabolism is viable if and only if it is capable of synthesizing a set of about 60 essential molecules: amino acids, DNA bases, etc. As Wagner himself says, it is a trivial consequence of his definition of viability that it is always possible to get from any viable metabolism, call it A, to any other viable metabolism B, by first adding all the reactions (one at a time) that B has and A doesn't, and then subtracting all the reactions A has and B doesn't. At every step there is always a viable metabolism: at first it is A plus some extra reactions; later on it is B plus extra reactions.
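The construction described above is mechanical enough to write down. Under the monotone assumption being questioned here (adding reactions never destroys viability), every intermediate below contains all of A or all of B, so every step is viable. This is a sketch of the argument, not Wagner's code.

```python
def path(A, B):
    """One-reaction-at-a-time path from metabolism A to metabolism B:
    first add everything in B but not in A, then delete everything in
    A but not in B. Every intermediate is a superset of A or of B."""
    cur = set(A)
    steps = [set(cur)]
    for r in sorted(B - A):      # grow: A plus some of B's extras
        cur = cur | {r}
        steps.append(set(cur))
    for r in sorted(A - B):      # shrink: B plus some of A's leftovers
        cur = cur - {r}
        steps.append(set(cur))
    return steps

A = {"r1", "r2", "r3"}
B = {"r2", "r3", "r4", "r5"}
walk = path(A, B)
print(walk[-1] == B)  # True: the path ends exactly at B
```

Each consecutive pair of metabolisms in `walk` differs by exactly one reaction, which is what makes this a path through the genotype network rather than a jump.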

Wagner's interesting result is that viable paths from any given metabolism to very different metabolisms not only exist, as they must given his definition of viability, but can be found by random walks.

My question is whether the definition of viability is too optimistic. Wagner assumes that as long as a metabolism has the reactions needed to synthesize the 60 basic molecules, you can add as many other reactions to the metabolism as you like without compromising viability. Is that reasonable?

And yes, that has been my question, all along.

harold · 7 November 2014

My question is whether the definition of viability is too optimistic. Wagner assumes that as long as a metabolism has the reactions needed to synthesize the 60 basic molecules, you can add as many other reactions to the metabolism as you like without compromising viability. Is that reasonable?
One reason to think that this basic model is reasonable is that it is highly, highly compatible with what we observe: life at every level is full of redundancy and diversity. If the environment were constant, any redundancy or diversity would be selected against, albeit possibly very slowly if the redundancy or diversity were low cost to maintain. Even so, there might be a steady state of diversity. However, in a changing environment, that which is redundant today may become useful tomorrow.

If you are sincerely trying to learn something, and not a concern troll/stealth apologist creationist in disguise, then I admire your effort, but at the same time, it may illustrate the problem of trying to study one specific area of biomedical science without a strong grounding in the basics. Your thought experiment can be answered by appealing to reality. If it were not possible to add temporarily redundant metabolic pathways and still reproduce successfully, then the biosphere would contain few or no redundant metabolic pathways. That's manifestly the opposite of the case, even in bacteria, let alone eukaryotes. Obviously, a simple model created to illustrate a point is not intended to perfectly reflect all the complexities of reality. In fact, the opposite. However, the aspect of this model that troubles you seems, to me, to be reasonable.

thlawry · 7 November 2014

harold said:
My question is whether the definition of viability is too optimistic. Wagner assumes that as long as a metabolism has the reactions needed to synthesize the 60 basic molecules, you can add as many other reactions to the metabolism as you like without compromising viability. Is that reasonable?
One reason to think that this basic model seems reasonable, is that it is highly, highly compatible with what we observe. Life at every level is full of redundancy and diversity. If the environment were constant, any redundancy or diversity would be selected against, albeit possibly very slowly if the redundancy or diversity were low cost to maintain. Even so there might be a steady state of diversity. However, in a changing environment, that which is redundant today may become useful tomorrow. If you are sincerely trying to learn something, and not a concern troll/stealth apologist creationist in disguise, then I admire your effort, but at that same time, it may illustrate the problem of trying to study one specific area of biomedical science without a strong grounding in the basics. Your thought experiment can be answered by appealing to reality. If it were not possible to add temporarily redundant metabolic pathways and still reproduce successfully, then the biosphere would contain few or no redundant metabolic pathways. That's manifestly the opposite of the case, even in bacteria, let alone eukaryotes. Obviously, a simple model created to illustrate a point is not intended to perfectly reflect all complexities of reality. In fact the opposite. However, the aspect of this model which you are troubled by seems, to me, to be reasonable.
No, I am not a creationist (Hey, I can spell!). I think Wagner has done really interesting and ground-breaking work tackling some of the biggest questions in biology. If anything, I was trying to "bullet-proof" his work against creationist sniping, since questioning the definition of viability would seem to be the obvious line of creationist attack. What I was hoping for was "ask a simple question, get a simple answer." But I guess there isn't one. Not very surprising, if Wagner's work is as trail-blazing as it seems to be.

https://me.yahoo.com/a/Nc1GW6MJ2oCtNYp1AyeNOWDWzqdp_cw-#fed84 · 7 November 2014

Solving Evolution's Greatest Puzzle ????

Looks like NOTHING was solved, as usual.

Just continued psycho-babble by the resident groupies,
aka evos-inbreds.

Just Bob · 7 November 2014

https://me.yahoo.com/a/Nc1GW6MJ2oCtNYp1AyeNOWDWzqdp_cw-#fed84 said: Solving Evolution's Greatest Puzzle ???? Looks like NOTHING was solved, as usual. Just continued psycho-babble by the resident groupies, aka evos-inbreds.
... and SteveP adds... nothing.

Just Bob · 7 November 2014

... and misuses the term "psycho-babble".

https://me.yahoo.com/a/Nc1GW6MJ2oCtNYp1AyeNOWDWzqdp_cw-#fed84 · 7 November 2014

..... and used the term " evos-inbreds " with 100 % accuracy.

riandouglas · 7 November 2014

So what does "evos-inbreds" mean?
My parents weren't related, so it mustn't have the obvious meaning.

Or is it just some essentially meaningless pejorative term you think sounds insulting?

https://me.yahoo.com/a/Iq0tNn8vsJ8cr_NIAFfcBFfRW8A-#a85f3 · 20 November 2014

Can someone explain the hyper-dimensional networks that Wagner talks about in the book? How can a 'library' exist in hyper-dimensions? I mean, how many dimensions are we talking about, and where does this come from? How can a genotype network have more than 3 dimensions?

DS · 20 November 2014

thlawry said: Let me start over. In chapter 3, Wagner describes a random walk simulation of the evolution of bacterial metabolisms not molecules, not abiogenesis. The random walk involves either adding or deleting one reaction randomly at each step. If the new metabolism is not viable, the simulation goes back to the previous metabolism, otherwise it keeps going. The underlying biology is bacterial conjugation in which a number of complete genes can be passed from one bacterium to another, perhaps adding a new metabolic reaction to the bacteria that gets the new DNA. No new molecules are developed by natural selection, it's just DNA swapping. The question is about Wagner's definition of viability. A metabolism is viable if and only if it is capable of synthesizing a set of about 60 essential molecules: amino acids, DNA bases, etc. As Wagner himself says, it is a trivial consequence of his definition of viability, that it is always possible to get from any viable metabolism, call it A, to any other viable metabolism B, by first adding all the reactions (one at a time) that B has and A doesn't, and then subtracting all the reactions A has and B doesn't. At every step there is always a viable metabolism, at first it is A plus some other reactions, later on it is B plus extra reactions. Wagner's interesting result is that viable paths from any given metabolism to very different metabolisms not only exist, as they must given his definition of viability, but can be found by random walks. My question is whether the definition of viability is too optimistic. Wagner assumes that as long as a metabolism has the reactions needed to synthesize the 60 basic molecules, you can add as many other reactions to the metabolism as you like without compromising viability. Is that reasonable? And yes, that has been my question, all along.
My take on this is that yes, that definition of viability is accurate. As long as you can produce all of the molecules required for metabolism, you can survive. Granted, the number is substantially more than 60, but the principle remains sound regardless of the number. If you can't make even one of the molecules required for metabolism, say an essential amino acid, you cannot survive and are thus inviable.

This of course ignores the question of metabolic efficiency. Useless or redundant genes might lower metabolic efficiency, but they wouldn't necessarily render you inviable. They might reduce your fitness or ability to compete, but in the absence of competition you might survive just fine. Indeed, such redundancy might give you a long-term advantage in a changing environment, and the evolution of efficient regulatory mechanisms might offset the cost of inefficiency. Harold is correct: we see this kind of thing all the time, especially in eukaryotes.

The conclusions of the book seem to be valid, at least for the evolution of metabolic pathways. It is also worth noting that a similar argument could probably be made for morphological evolution. We are beginning to understand the genetic mechanisms that control development. What we find is robustness and interconnected genetic networks that might evolve in a similar fashion. So this could be a model for the evolution of novel features in general.

eric · 20 November 2014

https://me.yahoo.com/a/Iq0tNn8vsJ8cr_NIAFfcBFfRW8A-#a85f3 said: Can some one explain hyper dimension networks that Wagner talks about in the book ? How can 'library' exist in Hyper dimensions? I mean how many dimensions are we talking about and where does this come from ? How can a genotype network have more than 3 dimensions ?
Scientists often use 'dimension' to refer to the number of independent factors in a system. Let's say you have five (other) metabolic reactions that could cause a change to some specific metabolic reaction you're interested in. It would be really useful if you could "map out" all possible combinations of those five reactions. To do that mapping, you would ideally plot the range of all possible values for reaction 1 on x, the range of values for reaction 2 on y, the range of values for reaction 3 on z, and the ranges of values for reactions 4 and 5 on higher-dimensional axes. When you do that, you aren't implying there are more than three spatial dimensions. It's really just a (very handy and mathematically consistent) tool for data analysis. And it doesn't matter whether we humans can envision such a space or not; mathematically and computationally, a 6-axis plot is not too much harder to manipulate than a 3-axis plot.
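This point can be made concrete: a 'dimension' is just one independent component of a vector, and a genotype's neighbors are the vectors obtained by changing one component. A minimal sketch (my own toy, with a genotype far shorter than Wagner's ~5,000 reactions):

```python
def neighbors(genotype):
    """All genotypes at Hamming distance 1: flip exactly one bit
    (one reaction toggled present/absent)."""
    return [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
            for i in range(len(genotype))]

g = (1, 0, 1, 1, 0)      # a point in 5-dimensional genotype space
nbrs = neighbors(g)
print(len(nbrs))         # 5: one neighbor per dimension
```

Nothing about this requires visualizing five spatial axes; the 'space' is just the set of all such vectors, with nearness defined by how many components differ.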