Posted in Artwork and portraits

New portraits

Copyright 2019 by Russ Hodge. In contrast to most pieces on the site, these are not for use without permission.

William Byrd, English composer of the late Renaissance


Henry Purcell, English Baroque composer


Camille Claudel, French sculptress

Posted in Artwork and portraits

New scientific paintings

Dear friends,

Here are some new works of art related to neuroscience; these and some other pieces will be on view in the coming days at two exhibitions sponsored by the EDGE initiative:

Thursday – Friday, July 25-26 at the
Charité Crossover (CCO) Mitte, Virchowweg 4-6

Saturday – Sunday, July 27-28, at the
Heizkraftwerk Steglitz, Birkbuschstrasse 42

These images are copyright 2019 by Russ Hodge and not for use without permission (unlike most of the rest of the content of the site).

“Neuronal forest”

“Self-portrait as neuron”

Santiago Ramón y Cajal, neuroscientist

Posted in Teaching and training, Teaching science communication, Uncategorized

Ghosts of omission

What a thing IS encodes what it ISN’T

Note: This piece follows up on my other articles on “ghosts” – an analysis of diverse factors which disrupt science communication. To read more, see:

An overview of the model: “Ghosts, models and meaning in science”

The main article

A recent talk on the topic given at The Jackson Laboratory

Ghosts in images

More on ghosts in images

“Ghosts of omission” are a type I describe in the talk recently given at the Jackson Laboratory in Maine (see the link above). I discovered this type during a retreat with the Niendorf group from the MDC. We were doing an exercise on the difference between verbal descriptions of things and images. Each member of the group had to go into the kitchen, choose an object, then come back and describe it in purely physical, spatial terms, without naming it or stating its function. The listeners had to draw it.

One of the postdocs chose to describe this:

About half of the participants drew something that clearly corresponded to this object. But interestingly, the other half of the group drew one of these:


There are times when the “resolution” of language doesn’t suffice to disambiguate two things that are similar. Think of verbal descriptions of faces, for example, which could usually apply to lots of different individuals – it’s hard for most people to describe a face well enough for a police artist, even when it is being drawn right in front of them.

In this case that isn’t really the problem. It would be straightforward to describe the “egg whisk” well enough to distinguish it from the beaters of a mixer. What happened, though, is that the person giving the description just didn’t think about beaters at the time.

This means that confusion or ambiguity can arise because when describing something, the speaker or writer doesn’t know about – or simply doesn’t think about – another thing that it might be confused with. In other words, the way we think of a thing encodes not only what it is – what we’d probably call defining features – but also those which distinguish it from other things that resemble it along multiple dimensions.

This concept surely has profound implications for fields like information and set theory, and across the spectrum of linguistics. It’s equally crucial in the types of concepts and models created by biologists. I’ll just cite two examples here: noncoding RNAs and immune cells.

The completion of the human genome and the rapid development of sequencing technologies revealed that our DNA encodes not only messenger RNAs bearing the recipes for proteins, but a wide range of other types of RNAs. Scientists are still exploring the functions of these molecules. New types – with different functions – are being discovered all the time. Initially scientists grouped them into classes based largely on the length of the molecules – into categories such as microRNAs or long noncoding RNAs – and generally expected that these sizes would be associated with specific functions. The field has now exploded with the characterization of dozens of types, whose functions do not necessarily correlate cleanly with an RNA’s length. In principle, the discovery of each new type is like the discovery of a new kitchen instrument, which might shift the defining and distinguishing features of existing utensils.

But the discovery of a new element in a system doesn’t always cause scientists to revisit and revise existing classifications. The immune system is a case in point: new types of cells continue to be discovered there. Researchers with a profound understanding of this incredibly complex system know that new types can force a revision of the roles and functions of the players already known. This can, however, take a while to seep into the broader awareness of the community. And there’s no guarantee that the patterns encoded in old ways of thinking about a type of RNA, or an immune cell, will ever be completely stripped away.

This problem is inherent to biology because new instruments – or upping the resolution of an old method – continually expose new features and elements of systems. At first, these components are almost always seen from the perspective of models that have done without them. Eventually the cognitive shifts spread and are better integrated. But we need to be aware that our models encode old ghosts that are never completely broken down and reconfigured.

To close I’d like to show another way in which “ghosts of omission” exert an extremely powerful effect on our thinking. In an earlier version of the “Jackson talk” I used to include an example of a text (slightly edited) by a famous humorist. We read the text and it usually got a laugh:

Tom and I saw Tom’s older brother George kissing his girlfriend on a couch. Tom and I looked at each other with big grins. If faces had been meant to kiss each other, they would not have been given noses.

Suddenly the scene turned bizarre because we saw that the girl had her tongue in George’s mouth and George’s tongue was misplaced, too.

What could that girl’s tongue possibly be doing in George’s mouth? Tom and I felt sick. After about a minute of observation, we went out into the backyard.

“That’s it!” I told Tom. “I’m really disgusted with girls now. I’m never gonna hit another one. Or even hit one with a jelly bean… Let’s make a pact. The first girl who ever puts her tongue in our mouth, we give it right back to her.”

At that point I identified the author: Bill Cosby.

If you know anything of Cosby’s subsequent legal troubles, and go back and read the text, what was simply amusing now becomes somewhat “creepy”. Knowing a single fact changes the way we process language and envision the roles of the characters. I can’t define creepiness in cognitive terms… But the change that occurs between the two readings of the text is the result of ghosts of omission. It’s another example of the profound effects of the “dark matter” of ghosts.

Posted in My articles about science, Teaching and training, Teaching science communication, Uncategorized

More “ghosts” in images

In my talk at The Jackson Laboratory and my other work on “ghosts” in science communication (1)(2)(3), I refer to the way hidden structures and patterns in our thinking influence not only how we understand meaning, but basic aspects of perception. Here are a couple of new examples: some developed for the talk, plus something I found in the news this morning.

The first illustrates how we scan, process and interpret grey-scale images. I think that when we see a black-and-white image, we’ve generally been trained to recognize structures and patterns based on everyday things we encounter. I’m sitting on a sofa with greyish-green cushions, and I recognize significant structures such as the cracks between them (very dark lines) and a floral pattern on the fabric, while dismissing others – shadows that appear just because of the way the light is falling:

When I look at an MRI scan, I also see patterns:

and my brain does something similar… In essence, my brain is simplifying the structure, highlighting some differences and reducing others. It’s filtering the image down to something like this:

BUT the gradations of grey-scale on a sofa don’t mean the same thing as in an MRI scan of the brain. The original image actually contains far more gradations of grey than I can probably perceive…

But using Photoshop or another image processing program you can get the computer to mark them, and use false coloring to exaggerate the differences. Doing that to the original image produces this:

It’s not necessarily true that this rendering contains more functional information than the simpler one, but I’d bet it does. How meaningful are these new substructures? That’s for the experts to decide, but you have to notice them in the first place to ask the question.

The “ghosts” in this process are a level of visual processing that our brains often carry out below the surface, recognizing some shades of grey as the “same” and clustering them, ignoring others and filtering them out. There’s simply no guarantee that the way this is happening – trained by all kinds of situations in which we recognize patterns in images – will pick up the critical differences in an MRI image of the brain.
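The clustering and recoloring steps described above don’t require Photoshop; they can be sketched in a few lines of Python. What follows is a minimal illustration of the idea, not the exact processing used for the images in this post: it bins the grey values of an image into a handful of groups and assigns each group its own hue, so that shades the eye would lump together become visibly distinct. The function name, the palette, and the tiny example array are all hypothetical.

```python
import numpy as np

def false_color(grey, n_bins=8):
    """Bin a greyscale image and give each bin a distinct hue.

    Grey values that the eye tends to lump together land in
    different bins, and the palette pushes them into visibly
    different colors.
    """
    grey = np.asarray(grey, dtype=float)
    span = grey.max() - grey.min()
    if span == 0:
        bins = np.zeros(grey.shape, dtype=int)
    else:
        bins = np.minimum(((grey - grey.min()) / span * n_bins).astype(int),
                          n_bins - 1)
    # A simple blue-to-red palette: t runs from 0 (coolest bin) to 1 (hottest)
    t = bins / max(n_bins - 1, 1)
    rgb = np.stack([t, 1 - np.abs(2 * t - 1), 1 - t], axis=-1)
    return (rgb * 255).astype(np.uint8)

# Hypothetical scan: two grey levels only ten units apart
scan = np.array([[100, 110], [110, 100]], dtype=np.uint8)
colored = false_color(scan, n_bins=4)   # 100 -> deep blue, 110 -> deep red
```

With only a few bins the output resembles the simplified, filtered version of a scan; raising `n_bins` exposes gradations that the eye would otherwise merge – which is all the false-coloring trick amounts to.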

This morning I found a similar image in an article in the NY Post and used it to do the same thing. The piece refers to a study comparing the brains of a “normal, healthy” three-year-old and another who had suffered extreme emotional abuse. I’m not making any claims about the original study here, or the controls and so on, not having read it yet. Nor am I sure that the image they posted represents the original data, with the full resolution and color scale. But still, the difference is remarkable.

Here’s the image posted on the site:


And here’s my colorized version:


There’s certainly more to see. What does it mean? Thoughts are welcome.

Posted in Uncategorized

Talk at The Jackson Laboratory: “How to see a ghost, think like a molecule, and write like a scientist”

I finally have a reasonably good filmed version of the talk I’ve been giving around Europe and the US, thanks to my hosts at The Jackson Laboratory in Bar Harbor, Maine. This talk was given on June 5, 2019, and scratches the surface of a “new model of the relationship between science and communication,” which I have written about in detail in previous posts, for example


This approach has fundamentally changed the way I think about science communication and has, I think, profound implications for teaching and learning science, communicating it to a wide public, and even improving the quality of research.

The talk can always be improved, and I am still collecting examples… All feedback would be greatly appreciated.


Posted in Uncategorized

Repost: an old article I am really proud of

The Kansas Creationists vs. the Evolutionary Atheists

Leaving Flatland and its flawed debate

Note: This article was published under the same title in the magazine Occulto. Hodge, Russ. “The Kansas Creationists vs. the Evolutionary Atheists.” Occulto Issue e, Summer 2013, Berlin. Edited by Alice Cannava. ISSN 2196-5781. pp. 64-85. You can obtain a printed copy of the journal at

My daughter was leaving Germany for a year to explore the American half of her genome. Rather than the liberal Kansas town where I went to school, she was headed for the southern half of the state, colored deep red on political maps. “You’ll be fine if you don’t discuss politics, religion, or guns,” I advised her. “Or Charles Darwin.” His name alone provokes a strong reaction in my home state, as I found out after writing a book on evolution.[1] Everyone has an opinion and you don’t have to pass a test before you jump into a scientific debate, giving it the character of a barroom brawl. The topic leaves few Kansans sitting on the fence. Maybe because we use a lot of barbed wire.

Barbed wire was patented in 1867, nine years after Darwin and Wallace foisted evolution on the world. Out on the prairie, farmers began fencing up their lands, threatening the culture of cowboys and cattle drives. In 19th-century Kansas, barbed wire caused a far greater ruckus than evolution, although the debates didn’t drag on long because the two sides were well armed.[2] In Europe the theory caused more consternation, but discussions were fought with hot air rather than hot lead. Nor did Bishop Wilberforce run a cattle stampede through Thomas Huxley’s garden. You could destroy a farm that way, but it didn’t work with intellectual property.

Barbed-wire fences broke up the prairie and metaphorically divided the population over deeper issues: Would all the unsettled land be sold? Who had the right to use it? There seemed to be two clear sides, but only by leaving Native Americans out of the discussion. Tribes had diverse views of the relationship between people and land that would have added more dimensions to the debate.

Spatial metaphors are a type of trope – a wide range of rhetorical devices whereby words are used in unusual ways, often to describe one thing in terms of something else.[3] They are fundamental to the way we think, learn, and communicate. Tropes do not simply rename things, but rather combine complex networks of associations that correspond at some points and diverge at others. They often remain hidden as we communicate, causing misunderstandings that are hard to figure out. They have a powerful influence on the way we think, especially when we don’t realize they are there. Some are so basic, stylized and routine that we limit our imagination and the ability to see things in other ways. People often transfer the wrong properties of a trope to its target, expecting two systems to behave the same way and missing the differences.

Some tropes are obvious in everyday language, making them fairly easy to detect and analyze – take, for example, the old adage, “Every debate has two sides.” It reduces many issues – whether over barbed-wire fences, science, or “red-blue” divisions on a political spectrum – to the shape of a coin, implying that you have to choose. But most topics are far more complex. Why not think of a shape with more sides – perhaps six, like a die, or a ball that can come to rest on any point and is easy to nudge to another?

But the two-sided model completely dominates the way most people think of debates about evolution: as if the world is firmly divided into two camps, science and religion, entrenched and fighting a war. The real situation is more interesting: Most religious denominations accept evolution, and many scientists have religious beliefs. But things got off on the wrong foot in the very first public forum in 1860, where religious fundamentalists saw the issue as a battle between universal truth and everything else, and they have controlled the form of the debate ever since. It’s too bad: fundamentalists have discovered no new facts to support their position in all of that time, while evolutionary science has made extraordinary progress. The theory is a scientific idea and should be discussed that way, rather than being hijacked and carried off to the foreign land of theology.

Even if it’s a bad metaphor, scientists could take more advantage of the coin. You could print competing hypotheses on its two sides: “Species arose through a long process of evolution,” versus “Species were created over a six-day period about 6,000 years ago.” Every day this coin is flipped by geneticists, chemists, physicists, doctors, geologists, paleontologists, mathematicians, informaticians, and researchers from other disciplines. They find new ways to test it all the time. There ought to be plenty of evidence for a sudden burst of creation 6,000 years ago, or at least evidence to debunk evolutionary theory, but the coin lands with Darwin’s head pointing up every time. Even the strongest beliefs haven’t flipped it over. That doesn’t stop people from hoping it will land, just once, on the other side. But prayers can’t make evolution go away, or even improve the health of the royal family in Britain.[4]

The two-sided debate has become such a social institution that people forget it’s a trope, just one of many ways of looking at things, and take it to represent something real. When that happens tropes move into a cognitive underground where they powerfully influence our thoughts, discussions, and perceptions of many things, and they become devilishly hard to get rid of. It’s hard to imagine that these stereotyped collisions between religious fundamentalists and scientists will go away.

Even so, I think the debate is about to change. The cause won’t be a miraculous conversion of the entire planet to some form of religious fundamentalism, or a mass exodus into atheism. Instead, I believe that science is on the verge of a conceptual revolution that will completely discredit simplistic debates. For a long time now words like “species”, “genes” and “natural selection” have been tossed back and forth, as if we are talking about the same things. I am not sure how fundamentalists think of these scientific concepts, but scientists have been steadily changing the sophisticated tropes and models that underlie them. A common vocabulary has masked a much deeper conflict; we are not at all talking about the same things.

Now, I believe, science is on the verge of a conceptual revolution that is changing the basic tropes by which we think of life; this new view may render the old sort of debate completely meaningless. The two-sided metaphor has always been a poor one. Discussions about evolution should finally escape this sort of intellectual Flatland and enter more profound dimensions.[5]

* * * *

Both religious and scientific explanations for the world depend on tropes and models. Scientists make specific observations and try to extract general principles that can be tested and improved. An experiment might confirm a model, or discredit it, and the results aren’t known in advance. Fundamentalists claim that some questions about life are answered in Biblical stories and others are mysteries that can’t be solved. There is no need to do experiments – which would either confirm what is already known, or the results would be ignored.

Developing large scientific models such as evolution or restricted concepts such as species begins with a lot of specific observations. Each doesn’t mean much on its own; the aim is to classify many into groups that exhibit similar general patterns. This resembles a trope called synecdoche, in which the features of individuals are transferred to the whole group. The next step is to test the pattern by applying it to new objects or situations. This creates a continual dialogue in which new observations force scientists to revise their general models. I’ll use a spatial metaphor and call this dual process “upward and downward” reasoning, which we use in everyday thinking as well. It’s the basis of learning, communication, and all sorts of judgments that people make.

Scientists recognize that errors can be made when reasoning in both directions. Upward reasoning can suffer from the exception fallacy: if the examples you start with are unusual, you may arrive at the wrong general principles. If you then apply the principles too widely to the wrong things, you commit an error in the downward direction: the ecological fallacy. Upward-downward thinking in our daily lives can suffer from the same errors and lead to problems such as racist stereotypes. So scientists continually check their assumptions and conclusions by requiring changes in models, if they aren’t confirmed by experiments. Fundamentalists deny that these types of fallacies exist in their own thinking, but are perfectly willing to look for them in science.

Understanding a scientific model requires understanding both parts of the process. To talk about a species, for example, you need to know how researchers assemble individual organisms into a group, make decisions about its common features, and apply them to new examples. I don’t know what the meaning of “species” is for a fundamentalist – if you deny the validity of the reasoning process by which scientists made up the term, you can’t be talking about the same thing.

Researchers make their models available to the world to allow them to be widely tested and ensure that they aren’t littered by a scientist’s subjective beliefs. At some point a model has been put to so many tests in different situations that people begin to treat it as a sort of “law”. Even then we know that it is a product of human thinking. Evolution is so interesting because its view of life exposes both the power of tropological thinking and its limitations, when the subject is an open-ended biological system that will always produce surprises.

Understanding this problem may affect the way we construct models in science and other systems. It will not discount the ability of current models to predict the function of a human gene by studying a related molecule in another species, or to manipulate organisms through genetic engineering. At some point, however, progress may be held back by mental constraints that need to be understood before they can be overcome. Science already recognizes that the problem exists: double-blind experiments are necessary because expectations and models have an unpredictable influence not only on our interpretation of data, but on perception itself.

* * * *

When evolutionary theory appeared, it moved into a neighborhood of older concepts shaped by tropes and other mental models. The theory was communicated in common words and metaphors that were strongly associated with other things. It should have caused people to reevaluate a much wider set of assumptions, and it finally has – but the process has taken 155 years. At the time, the opposite happened, and the theory was forced into a network of very old beliefs.

For example, proposing that complex organisms could arise from simpler forms sounded like “progress”: a huge political and social theme during the Industrial Revolution. Many readers immediately tried to use evolution as a metaphor for race or class relations within human society, or to confirm the old, dearly-held view of man’s dominion over nature. Both efforts were doomed to failure: social models were tropes themselves, based on old notions about nature that had now become outdated. Social issues became a metaphorical battleground between old models of life based on religion and the new theory. No one realized that the real fight was happening at a meta-level of tropes. It was as if two people were playing a game, using the same board and pieces, but following completely different rules. It’s no wonder that you could never bring the game to a satisfactory end.

Now I think biology is in the process of toppling one of its central metaphors, in a way that may also have wider social effects. This is happening partly because of advances in technology that provide a much clearer view of living organisms and the complexity of their interactions with the environment. One result is to provide a sharper view of evolution, and how it differs from some of the cultural metaphors that have been holding it down. The change is appearing in bits and pieces and its full nature hasn’t been clearly articulated or even widely perceived. It will affect the way we understand humans, nature, and society. But this time we shouldn’t make the same mistake by applying the change inappropriately to other areas.

To make the case I will first provide a very brief sketch of evolutionary theory; secondly, point out a few issues that are central to it but are hard to deal with using current models; and finally, try to link what is happening to more general processes that underlie our construction of cognitive models.

In a text of this length it is impossible to properly ground all the philosophical, linguistic, cognitive and biological concepts that support its view of the role of tropes in cognition and science. Those arguments derive from a much larger conceptual framework that I will articulate in a future project. Here I will provide an application of the method to a debate that is currently, almost universally, carried out at a level that is much more superficial and naïve.

* * * *

“Evolution is so simple, almost anyone can misunderstand it,” said philosopher David Hull.[6] Darwin and Wallace drew on straightforward observations that can be made anywhere, and interpreted them in a way that is closely linked to everyday, “common-sense” ways of thinking. The complexity of the theory lies in the way they abstracted a model from these observations, then extended it far into the past to show how a few basic principles suffice to produce new species.

The outline here covers four basic principles. The most general is common to all natural sciences and distinguishes them from religion and other styles of thought. Researchers make a fundamental assumption: “We should understand states of the world that we cannot directly observe on the basis of what we can observe.” This can be seen as a derivative of Occam’s razor, which in its original form has been translated as, “Plurality must never be posited without necessity.”[7]

The razor doesn’t mean that the universe is inherently simple; instead, it recognizes that views of the natural world are the product of philosophical and methodological choices, and one shouldn’t make up more hypotheses than are necessary. If a single, global force (gravity) can account for falling apples and the motion of the planets, we shouldn’t make more assumptions and suppose that each object is being pushed around by its own personal force, without evidence. By definition this approach discounts miracles such as the idea that the universe was created 6,000 years ago, in six days, which presupposes a suspension of the current forces we observe at work.

A model may posit forces that can’t be observed (such as gravity), but which have predictable effects that can be tested in observations or experiments. If galaxies are racing away from each other, their trajectories can be projected backwards in time to produce the notion of the Big Bang, or forward to produce a vision of the future of the universe. The same rationale yields an explanation for geological formations and a likely age of the Earth. Evolution is the biological equivalent: it abstracts rule-governed processes from observations of current life to explain the origin of diverse species.

To conceive evolution, Darwin and Wallace wove three basic observations into a system that respects this fundamental principle of science. First: species constantly undergo variation. Offspring are not identical to their parents or each other (unless they are twins or clones). Variation can be directly observed in every species and is rarely an issue in popular, dualistic debates about evolution. The theory partly hinges on the rate at which it happens, which can only be determined using scientific methods; the results have been consistent with evolutionary predictions.

Most variation arises because of natural imperfections in biochemical systems. DNA undergoes many types of changes: through “spelling errors” (mutations), or when sequences break off longer molecules during the creation of egg and sperm cells. Cells can repair the damage, but material can move from one chromosome to another in a process called recombination. Other errors include duplications of DNA sequences, whole chromosomes, and in some cases an entire genome. Genetic material can also be lost. Any of these alterations can result in measurable physiological or behavioral changes in the organism as a whole – its phenotype. Such changes happen to some degree in every child; we are all X-Men.

The second observation was that some variations are passed down to an organism’s offspring through a process of heredity. The main reason is the conservation of specific DNA sequences from parents to their offspring, but some other types of biochemical changes are passed along as well. Heredity is not a deterministic system because first, each of us inherits a unique genome – we are all experiments, venturing into a landscape that has not yet been explored by evolution – and secondly, most types of behavior and many aspects of a body’s development are shaped in a dialogue with the environment.

The third factor in evolution, natural selection, is usually wildly misunderstood. Right from the start it was labeled with a misleading trope – “survival of the fittest” – that scientists have been trying to peel off ever since. It was coined by Darwin’s contemporary Herbert Spencer, a philosopher with the social status of a movie star. One of Spencer’s main interests was social progress, and he hoped that the new theory would shed light on cultural development. Religious and political conservatives seized on his words and applied their own tropes in interpreting “fittest” any way they liked – to keep humans at the top of nature, near God, or the wealthy or powerful at the top of society. They used it to justify racism and its nastiest form: eugenics movements that sought to “improve” humanity by sterilizing or killing the ill, the handicapped, prisoners, “promiscuous women,” Jews, and anyone else that those in power didn’t care for.

Darwin never liked “survival of the fittest” because he recognized that biological concepts could only be applied to culture in a metaphorical way that mangled what he meant. Finally, grudgingly, he used the phrase – probably out of the wish to appear conciliatory – but only after redefining it and stripping it of moral and social connotations. The translation in strictly Darwinian terms sounds circular and almost silly: “survival of the survivors,” or “survival of the reproducers.” In other words, current species are the descendants of animals that managed to reproduce more than others. If you couldn’t pass along your genes, a lot of your hereditary material would disappear in favor of those that could. And if you didn’t reproduce as much as your neighbors, and nor did your descendants, and this happened over vast periods of time, then eventually your genomic contribution to the future of your species would dwindle and perhaps even disappear.

Darwin had noticed that many factors could give an animal a reproductive edge over other members of its species: differences in fertility, an organism’s ability to survive long enough to reproduce, preference for certain mates, etc. Events that struck a population equally, like random accidents, wouldn’t have much effect: The diversity of a species would undergo slow, random changes in a process called genetic drift. That itself can produce different species. If two subpopulations are isolated from each other long enough, drift may eventually change their genomes to an extent that they can no longer mate to produce fertile offspring.

So selection begins with any trait that gives an organism a reproductive edge, increasing its frequency, compared to other variants, in the next generation. If offspring with the trait also produce more children, and the bias continues over many generations, the result may be natural selection. It always occurs as a function of a dialogue between the features of an organism and its environment; identical animals don’t always do equally well in different environments. If you could measure the frequency of particular variants of genes in a species before selection happened and then again afterwards, most would exhibit random drift. But variants in an animal that had undergone “positive” selection would show a statistical increase, while forms that lower an organism’s reproduction would become rare or even disappear.
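The before-and-after measurement described above can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from the original article: a Wright-Fisher-style model that tracks one variant’s frequency across generations, with an optional reproductive “edge.” Setting the advantage to zero gives pure genetic drift; a small positive advantage biases the sampling the way selection does. All names and numbers are hypothetical.

```python
import random

def simulate(freq, advantage, pop_size, generations, seed=0):
    """Track one variant's frequency over generations.

    advantage = 0.0 gives pure genetic drift; a positive value
    re-weights carriers before each round of reproduction.
    """
    rng = random.Random(seed)
    p = freq
    for _ in range(generations):
        # Selection: carriers reproduce slightly more often
        w = p * (1 + advantage)
        p_sel = w / (w + (1 - p))
        # Drift: finite-population sampling of the next generation
        carriers = sum(rng.random() < p_sel for _ in range(pop_size))
        p = carriers / pop_size
        if p in (0.0, 1.0):      # variant lost, or fixed for good
            break
    return p

# Same starting frequency, with and without a 5% reproductive edge
drift_only = simulate(0.5, advantage=0.0, pop_size=500, generations=200)
selected = simulate(0.5, advantage=0.05, pop_size=500, generations=200)
```

Run it many times with different seeds and the drift-only variant wanders up or down at random, while the advantaged one climbs toward fixation on average – the statistical, population-level signature described above, visible only across many generations and many runs, never in any single individual.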

Today the signature of these events can only be detected by studying the frequency of particular DNA sequences over time. And here is also the signature of a trope by which the process is usually oversimplified in our imagination: “fitness”, or selection, isn’t something that happens to a single individual, or even a single couple, or a single generation. Instead, it is a population effect that may require thousands of generations, or however long it takes to create a new species. The change usually takes place in multiple family lines. What happens to an individual organism plays a role, but the impact on evolution is a statistical one, spread out over vast periods of time. One can observe individual advantages in reproduction, then postulate their extension into the past and future as an “upward” style of thinking. But one can’t reason back “downward” to make predictions for specific individuals, which might die in accidents or suffer from other random events. It’s also important to note that a reproductive advantage passes along an organism’s entire genome, including factors that may support the “edge”, but also all of the other characteristics it passes down.

An organism’s reproductive ability can be influenced at every level – from single letters of the genetic code and the behavior of molecules within its cells to the function of its organs, its thinking, and its overall interactions with the environment. It comes into play at every phase of a lifetime – from its origins as a single cell, through its development in an egg or the womb, its infancy, childhood, and adulthood, up to the end of its fertile phase. Usually selection stops there, but it might continue in cases where organisms contribute substantially to the survival of their “grandchildren”. Any difference that affects an organism’s phenotype can influence selection, given a permissive environment.

Variation, heredity, and reproductive differences are directly observable and – along with the more general assumptions of science – form the basis of evolutionary theory. The first two factors are rarely called into question; selection is more contentious, but mostly because the debaters are using different tropes.

* * * *

The power of evolutionary theory lies in the way it has spawned millions of hypotheses that continue to be tested in countless ways. Even this hasn’t been convincing to “Young Earth” fundamentalists, who have discarded the basic scientific premise of a continuity of natural forces in favor of a miraculous act of Creation that took place about 6,000 years ago. Their rationale is based on a faith in what they call a “literal” reading of the book of Genesis, but each fundamentalist decides what should be read literally and what not, in response to other cultural influences – which makes today’s fundamentalism much different from the forms practiced in the past. The written record of languages – easy to discover through a trip to any library – makes it easy to discard the Bible’s story of language creation (the “Tower of Babel”) as a fable. But the creation of species, recorded in fossils and recounted in the same book, is regarded differently – why?

Other challenges to evolutionary theory are grouped under the popular label “intelligent design.” This is indistinguishable from a religious philosophy known as Natural Theology,[8] which dominated thinking about life until the development of evolutionary theory. Its major argument holds that living systems appear so complex and well-structured – usually by analogy to a machine such as a clock – that they must have been created by some sort of supernatural intelligence.[9]

Darwin grew up in this tradition, but several major conceptual flaws convinced him to reject it in favor of evolution. Intelligent design “cherry-picks” from empirical observations of life: anything that can’t yet be explained is assigned to the domain of miracles, including biochemical processes discovered through strictly scientific methods. Once scientists provide a reasonable account of the origins of these processes, or demonstrate that some fossil species didn’t arise spontaneously, the intelligent design community shifts its focus to the next unsolved problem. Michael Behe, a biochemist who has become an advocate for the philosophy of intelligent design, has consistently followed this strategy.[10]

Another flaw is the difficulty of distinguishing between “designs” and the structures or patterns that arise due to physical and chemical laws. The spiral forms of snail shells and the tornado-like pattern of water as it moves into a drain might look like supreme achievements of an intelligent architect, but both can be explained by applying models of biological or physical components and the forces acting on them. The body of every human child is an amazing structure that arises from a single cell. Usually this process is explained by reference to biological events, rather than constant, supernatural interventions – so why not the origins of species?

Finally, even if scientists were to stumble upon some unmistakable “signatures of a designer,” how many such designers are there? Each molecule, cellular structure, organism, or species might have its own. Claiming to see the hand of a single designer in different natural phenomena is the clear sign of a particular religious agenda, and today it is usually the attempt to thrust a Judeo-Christian deity into the science classroom.

* * * *

Evolutionary theory is not yet complete because some aspects of living systems have been impossible to explore. Some of these problems represent a lack of technology; others, I think, are inevitable when human minds construct a model and try to apply it almost universally to the world.

The first area of incompleteness has to do with evolution’s portrayal of the environment. Darwin was the first ecologist: he demonstrated that the fates and forms of species were thoroughly intertwined with each other and with external factors; that each species exerts an influence on others; and that overpopulation and competition for resources play a role in natural selection. Organisms don’t change due to purely internal factors; they arise and are shaped through a complex, fluid dialogue with everything around them. This includes every other species they interact with and other aspects of the environment such as temperature, the amount of precipitation, sunlight, seasonal changes, and so on. It also includes interactions at the microscopic scale. Recently, for example, scientists have caught the first glimpse of the microbiome:[11] the extraordinarily complex, dynamic populations of bacteria and viruses that inhabit our bodies and the environment. This opens the door, for the first time, to understanding their influence on our evolution (and vice versa) and on human health.

Single molecules can promote or hinder an organism’s survival and reproductive capacity, so they, too, contribute to natural selection as they carry out functions in cells. Here they will serve as an example of a gap that remains in our understanding of the interplay between organisms and their environments.

Nearly every biological process involves a mechanism whereby cells detect and respond to change. One such mechanism is the signaling cascade, which typically starts when a molecule binds to a receptor protein on the surface of the cell. The receptor undergoes a structural and chemical change that causes it to bind to other proteins, subsequently changing their structure and behavior. This effect is transferred from one type of molecule to the next, often ending with the transport of a protein to the cell nucleus. There it helps change the overall pattern of active and silent genes in the cell, altering the population of molecules the cell contains, its biochemistry, and its responsiveness to other signals.

A particular signaling cascade requires certain molecules to be present or quickly produced in response to a stimulus. They need to be located in the right regions of the cell: microenvironments that must also be properly configured to respond to the signal. Signal molecules have to be present in sufficient quantities, and they are usually bound to complexes (sometimes consisting of dozens of other molecules), whose components also need to be present in sufficient quantities. Some protein complexes are “prefabricated” and localized in particular microenvironments, where they can be “switched on” through the addition of a single component.

Passing a signal requires that a protein’s atoms have a particular physical architecture. This requires the help of still more molecules that help it fold, or “decorate” it with complex sugars, or bind it to a membrane with a particular composition of fats and other molecules, etc. This takes place against the background of multiple signals that may carry conflicting “instructions” and compete to push the cell in different directions. By adopting different conformations, or docking on to different complexes, a single molecule can act as a “switching station” to route different signals in various directions.

The quantities and states of all the other molecules in a microenvironment influence whether a protein receives a signal and how the “information” is passed along. Those populations determine whether the protein will bind to its proper partner; too many copies of another protein may change its preferences (affinities) for other molecules. If everything works and the protein does transmit the signal, the contingencies must also be met by the next molecule, in a neighboring microenvironment, so that it can be passed farther.

Microenvironments both constitute the cell and are shaped by it. They are dynamic, constantly requiring the production, refinement, and delivery of new molecules. Events within them move beyond to activate new genes, silence others, and cause changes across the entire system in intricate feedback loops. Molecules, microenvironments, and entire cells continually undergo fluid transitions – rather than adopting a clearly definable state – in which adjustments are constantly being carried out. At any given time, some proteins have achieved the form necessary to receive and pass along a signal; others are being processed; still others are being translated from RNA molecules; RNAs are being transcribed from genes at a particular frequency, etc. Every protein in a signaling cascade is undergoing similar transitions in terms of its chemistry, form, and quantities. So the success of a signal depends on the attainment of tipping points: changes from various conditions under which a microenvironment is not yet ready to receive a signal, to conditions which permit it.
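These contingencies can be caricatured in a few lines of code (a toy of my own devising – the molecule names, counts, and thresholds are all invented): a signal traverses the cascade only if every microenvironment along the route has crossed its tipping point, modeled here, very crudely, as a minimal count of required components.

```python
def step_ready(state, required):
    """A microenvironment can relay the signal only when every
    required molecule is present in sufficient quantity."""
    return all(state.get(name, 0) >= count for name, count in required.items())

# Each stage of the cascade lives in its own microenvironment,
# with its own contingencies (all names and numbers invented).
cascade = [
    {"receptor": 1, "scaffold": 2},        # at the cell membrane
    {"kinase": 3, "atp": 100},             # cytoplasmic relay
    {"transcription_factor": 1},           # import into the nucleus
]

cell_state = {"receptor": 1, "scaffold": 2, "kinase": 3,
              "atp": 250, "transcription_factor": 1}

signal_arrives = all(step_ready(cell_state, step) for step in cascade)
print(signal_arrives)  # True: every tipping point has been reached

# Deplete one component anywhere along the route and the whole
# cascade fails, however well-stocked the other stages are.
depleted = dict(cell_state, atp=50)
print(all(step_ready(depleted, step) for step in cascade))  # False
```

The real system is of course fluid rather than binary – quantities, conformations, and locations all shift continuously – but even this caricature shows why a signal’s success is a property of the whole environment, not of any single “actor” molecule.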

Until very recently it has been impossible to capture a remotely adequate census of microenvironments or the dynamic nature of their components. As a result, proteins have generally been described as metaphorical actors – like telling the history of a war only from the perspective of generals. Some do have powerful roles, as clarified through experiments that change or remove them, but such experiments usually involve hundreds, thousands, or millions of copies of a particular molecule in highly standardized microenvironments. What is really being described is collective behavior, averaged out in a statistical way to make a model that is then applied to single molecules, in microenvironments where the major contingencies have been met.

Such descriptions aren’t perfect; they rarely describe the behavior of any single molecule, and they don’t have to. This inexactitude isn’t just a by-product of gaps in technology. Evolution predicts that it must be an inherent feature of cells. Life is constantly subject to variation and unpredictable events, so cells and their microenvironments have to have a certain tolerance for them. Most of these systems exhibit a robustness by which one molecule can step in for another, or some other “backup” system comes into play – evolution has favored them. At the same time, cells can’t tolerate everything. So far it has been impossible to define precise boundaries of permissiveness and intolerance in their microenvironments.

The same principles that govern proteins and their surroundings apply to all scales of biological organization. Simply by living – using resources and producing waste products – a cell changes the environment for itself and everything around it. In a complex organism, cells build higher levels of structure and tissue to create a body that is likewise in a fluid state of change, constantly adjusting to internal and external changes. There is an upward-moving causal chain whose restrictions are most evident in diseases where events triggered by specific molecules – in the context of their microenvironments – disrupt the body as a whole. Such upward causality participates in every aspect of growth, activity, and physiological processes such as digestion.

This is dramatically different from the common concept of environments as large external spaces in which organisms interact with each other, and where causal forces work mainly downward. That concept is also appropriate: temperature and other external factors (such as the availability of specific types of food) reorganize biological structures down to the level of molecules. But a better definition of the evolutionary environment is to imagine a succession of fields at all scales in which biological activity has causal, fluid effects in both directions, upward and downward.

One fascinating “downward” causal chain can be found in the process of thinking, which may create a new biological environment that can affect all lower levels of biological structure. Suppose I interpret a phrase of music on a bowed instrument. That interpretation is a personal construct developed from years of experience, learning, and aesthetic tastes that constantly move back and forth between mental and physical domains. My conception of it somehow triggers specific types of motor activity across the body: muscles in the hand holding the bow do something very different than my fingerings on the string, while remaining highly coordinated. Playing music produces new cellular signals and the activation of new genes. At the same time I remain highly responsive to external feedback: feeling an irregularity in the surface of the string, noticing the expression on a listener’s face, or hearing the behavior of my fellow musicians. Thoughts, intentions, and social interactions create and constantly reshape environments for biological activity at every scale.

* * * *

This much more fluid, multi-scalar view of biology shakes up some central metaphors by which we have described living systems and the models we use to understand them: a fusion of materialism and mechanism. Their breakdown will significantly alter the way we think about issues like genetic determinism, states of health and disease, and large models such as evolution.

Materialism is probably easiest to understand in contrast to another philosophical tradition called vitalism. Until the 19th century and even later, many scientists (and all theologians) postulated a qualitative difference between living things and inorganic substances. Evolution might be fine to describe everything that had happened since the appearance of the first cell, but how did that organism arise? Vitalists believed that some “spark”, energy, or force must have been necessary to create life from the inorganic world. Theologians ascribed this to a supernatural being, but it didn’t have to be one; it might be a type of measurable energy that simply hadn’t yet been detected in physical or chemical experiments. The idea attracted droves of physicists to the life sciences.

What they discovered ultimately led to the abandonment of vitalism in the life sciences. In 1828, Friedrich Wöhler demonstrated that a biological molecule (urea) could be synthesized using purely inorganic substances. In the 1950s, Watson and Crick drew on physics experiments to propose a model of DNA whereby a molecule could reproduce itself by purely biochemical means. Experiments at about the same time carried out by Stanley Miller showed that complex organic molecules such as amino acids could spontaneously arise in sterile conditions, even in outer space.[12] Miller never managed to build something as complex as RNA or DNA in the lab, but he didn’t have the time or virtually infinite resources of the early Earth. Every single molecule on the planet could be considered a chemical workbench, carrying out experiments over a billion years.

So biology chose materialism, at a time of rapid industrialization, which made it easy to choose machines as the guiding metaphor for understanding cells and organisms. The components of machines interact based on their physical composition and structures. Obviously organisms were very complex machines, but technology was becoming more complex as well. New machines provided a richer source of metaphors. With the advent of computers, people began discussing biology in terms of systems, as intricate networks of feedback loops and self-regulatory mechanisms somehow analogous to electronic circuitry.

Even with such fabulous machines on hand, the metaphor has reached its limits and, strictly speaking, can no longer be applied. One limitation should have been clear from the outset: machines can’t reproduce themselves. And not even the most complex machines come close to possessing the complex, interlinked, fluid microenvironments described above. We usually design machines with rigid parts that have single, repetitive functions; if they break down, they can be fixed by changing a single part. Their components aren’t continually and fluidly rebuilt at every level; they haven’t been tested and redesigned to adapt to any contingency. Human machines are rigid and designed to operate as stably as possible under specific conditions foreseen by engineers, rather than in continually changing environments whose variations know few bounds. Applying the machine metaphor to life leads to concepts of genetic diseases, for example, in which solutions are sometimes seen as machine-like exchanges of new parts for defective ones. Sometimes that might work, but often it won’t – the metaphor doesn’t really apply.

Another blow to the metaphor is the fact that by nature, no two organisms are alike; variation is an inherent quality of every species, and a tolerance for unpredictability is essential to its long-term survival. That is much less true of machines, particularly in the age of mass production, where variation in a particular model is usually regarded as an accident. This will be explored in more detail in the next section.

By abandoning the metaphor of the machine, we also abandon a naïve style of hard deterministic thinking that has arisen around notions of genes and organisms. (“My genes made me do it; my genome dictates my life.”) Determinism might be appropriate in a system that works completely from the bottom up, where rigid components dictate the behavior of a system, which dictates the next higher scale of structure, and so on. But what if the causal chain flows both upward and downward, every component is responsive to unpredictable environmental events and contains immeasurable amounts of variation, and human behavior creates new environments that shape biological activity? Causality itself is a model, usually based on the idea that one state naturally transforms into another after the application of some (model) force. It can only strictly be applied if it’s possible to define states – will it work in the context of ultimately fluid causal systems?

How could it be achieved, for example, in the case of music? To start you would have to fully describe both the material and mechanical basis by which aesthetic experience is physiologically “recorded” in the brain and nervous system. You would have to assume that internal physical structures not only underpin but cause particular thoughts. The system would have to be responsive to unpredictable effects, like an expression of pleasure or distaste on the face of someone in the audience. It’s safer to postulate a system in which unpredictable external stimuli from the environment exert a shaping influence on physical structure that works downward as well. Thoughts themselves – and their content – change the physiological substrate that permits them. Experiments in neurobiology have demonstrated that this is the case.[13]

* * * *

To survive, organisms can’t have some of the features we normally associate with machines. Every existing life form encodes at least a billion years of compromise that creates various degrees of tolerance for variation at every scale of biological organization. There are boundaries, of course: some variants are so disruptive that they are fatal. But just as deadly is any failure of the mechanisms that tolerate variation and change.

The field of biology has had a hard time fully grasping the extent – possibly even the concept – of this variation, and this is the last “gap” in evolutionary science I will discuss. It causes a fundamental problem in defining biological objects – whether single molecules or species. I think it can be dealt with, but this will probably require a new type of model-building. That may be difficult because the problem is closely linked to more general issues of human cognition.

The link is probably easiest to grasp through a metaphor, something much simpler than a molecule or a species – let’s take the concept of a “chair”. As a child I perceive individual chairs in various contexts, do various things with them, and hear people talk about them. There is no real consensus among cognitive psychologists about what happens next, but at some point a child creates conceptual models of “things called chairs” and begins using the models to name things she hasn’t seen before. At that point other people may correct her. She has to understand that different objects can have the same name while remaining distinct from objects with another name. In doing so she integrates features such as shapes, colors, textures, functions, parts, and different materials. Other features include a lifetime trajectory that involves being built, undergoing changes, and falling apart or being destroyed.

Children don’t come pre-programmed with a concept of a “chair”; each of us builds our own in an individual, constructive process based on encounters with specific chairs. The process is highly flexible, permitting us to recognize things that don’t fit any “classical definition” of a chair – such as something with a leg broken off, or a chair in a dollhouse, or a two-dimensional stick-drawing of a chair. All of these acts are based on tropes.

Building a model for a biological entity – such as a protein, or a species – requires a similar process. After specific objects are studied, an abstraction is made to define a “class model” that includes, as far as possible, everything that belongs and excludes everything that does not. From the beginning the model is intended for refinement: we haven’t yet encountered every object that can potentially belong to the class, so it is difficult to describe the boundary conditions. And since this process is based on experience, it is inherently statistical and subjective, proposing a model that can be expanded or restricted as it is applied to new objects.

Experimentation allows science to escape the corsets of an inappropriate model. For a long time it might have been fine to think of atoms as tiny planetary systems, made of small, solid objects. But experiments forced the development of quantum mechanics, which suddenly said that objects on the human scale aren’t good metaphors for the subcomponents of atoms. Photons or electrons can’t be snagged like footballs and held onto; they may seem to disappear as they move from one place to another, temporarily converted to energy; they are always in transition.

* * * *

Let’s see where this type of thinking gets us in biology by considering one of the most fundamental components of organic life: a protein. The usual biological account of the features of proteins goes something like this: Proteins are strings of amino acids (a metaphor: they share some features of human-scale “strings” but not others). They have sequences: the list of amino acids in their order in the string (a complex metaphor with temporal, spatial, and behavioral components: you imagine traveling down a text in a certain direction and reading letters as they appear). Proteins have a complex, three-dimensional structure or architecture (which doesn’t behave like most objects on our scale, unless you’re thinking of something like jello, because proteins are constantly in motion and often reshape themselves).

They have life histories that play a crucial role in their current behavior: sequences in genes are transcribed into an RNA molecule, which is used as a template for proteins. This simple account skips many steps of processing, each of which may change the molecule’s final form, so this history becomes encoded in its final location, structure, and functions. Proteins have functions whose names are usually metaphorical (receptors, signal transducers, inhibitors, promoters, etc.). Such names convey an initial impression of a protein’s activities, but the terms are ultimately based on specific chemical reactions. In describing features and functions we use letters, texts, mathematical symbols, sequences, and other tropes.

Every feature of a protein naturally appears in extensive variations that can’t be fully measured or catalogued. For example, proteins never have a static, completely immovable structure, although we depict them in two or three-dimensional pictures that give this impression. These are symbols for a type of archetype that probably never exists, at least for any length of time.

Once the features of a specific protein have been defined, it is given a “class” name that can be applied species-wide (“human beta-catenin”). This class is further extended to other species on the basis of homology. There is a compelling evolutionary reason to do so: human and mouse versions of beta-catenin evolved from the same gene in an ancestral species. This is established by noticing extensive overlap in their sequences, and it usually allows researchers to draw parallels between a protein’s structure and function in different species.

The central problem in this type of model is that it does not (in fact, cannot) capture a full view of variation along any parameter. It’s impossible within one species, often within one organism, and sometimes even within a single cell. There are two reasons. The first is technological: until very recently, we didn’t have instruments that could identify a single aberrant molecule against the background noise of alternative forms, whether in terms of sequence, structure, or function. A single copy may have experienced some sort of accident in which a bit is cut off. Or it might have been improperly folded, or undergone some other processing error.

The second problem lies with the impossibility of defining a consensus sequence within a species. Random mutations continually occur and produce new versions of the molecule; there is no way to predict all possible variations that may occur and yet remain functional. It is possible to predict that specific changes will eliminate the production of a molecule, but not other parameters of variation. This problem is magnified when trying to cross species boundaries.

If we can’t define the sequence of a single gene, how can we define a species? Once again, naming species is a convention – an example of reasoning from specific examples up to a general model, then down again to new examples. This doesn’t create an objectively applicable definition because there is no “consensus genome” (or any other single feature) that can be definitively attributed to a species. Even if you could carry out some sort of census of every living individual, each birth produces a unique genome with variations that might break the rules.

Instead, scientists rely on statistical definitions of objects and parameters that loosely define boundaries of inclusion and exclusion. Suppose that someone discovers a bit of tissue in the woods and asks a lab to identify the species – “Did it come from a human? A gorilla? Or Bigfoot?” A sample is sent to the lab, which produces a DNA sequence. Most likely this exact sequence has never been seen before. It doesn’t matter: it can be attributed to an existing species if the amount of variation doesn’t exceed certain statistical parameters. If it falls substantially outside the norms for humans, gorillas, and other known species, it is deemed to be a new one. Even then, the statistical values permit researchers to assign it to a place on the evolutionary tree (it’s from a new species of bear, say, or hominid).
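A minimal sketch of that statistical assignment (my own toy example – the reference sequences and the 10% cutoff are invented, and real species identification uses far longer sequences, alignment, and calibrated thresholds): compute the divergence between a query and each reference, and accept the nearest species only if the divergence stays within the norm.

```python
def divergence(a, b):
    """Fraction of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Invented 20-letter reference "barcodes", purely for illustration.
references = {
    "human":   "ACGTACGTACGTACGTACGT",
    "gorilla": "ACGTACGAACGTTCGTACGA",
}
WITHIN_SPECIES = 0.10  # hypothetical cutoff for within-species variation

def identify(query):
    """Assign the query to the nearest reference species, or flag it as new."""
    species, dist = min(((name, divergence(query, ref))
                         for name, ref in references.items()),
                        key=lambda pair: pair[1])
    return species if dist <= WITHIN_SPECIES else "unknown (possibly a new species)"

print(identify("ACGTACGTACGTACGTACGA"))  # never seen before, yet within human norms
print(identify("TTTTTTTTTTTTTTTTTTTT"))  # far outside every norm
```

The point of the exercise: the first query matches no reference exactly, yet is confidently called “human”, because membership in the class is statistical, not a matter of identity.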

By necessity, biological models of objects ranging from proteins to species fall into the domain of a more basic cognitive issue. We construct models individually in a complex process that involves metaphors and other tropes, a process limited by experience, unable to account for all existing and permissible variations, and yet applicable to new objects in a fluid way that is, for lack of a better word, statistical in nature. Like living systems, our mental models are simultaneously individual, robust and flexible. They arise in specific contexts (the way an organism is born into specific genomic and environmental conditions) including physical laws, human beings, and other ideas, and then venture into new territory.

* * * *

What does all of this say about the future of evolutionary debates? In a sense, it shifts the focus from specific questions about biology to more fundamental discussions of scientific practices and “everything else.” It draws a closer link between scientific thinking and everyday cases in which we construct and apply models of the world – including religious systems and the learning of language. It demonstrates that there is something fundamentally flawed about applying bottom-up/top-down reasoning to open-ended systems – at least if we expect the result to be a comprehensive definition that will always work.

Models of species themselves play a central role in popular debates on evolutionary theory. Bitter fights are waged over the question of whether evolution produced new species or whether they all appeared on Earth “as they now are” in an instant of Creation. The second perspective is just wrong – if for no other reason than the fact that the human genome has changed immensely even over the past 6,000 years, simply by adding several billion members to the population. Modern studies of organisms show that it has to be wrong. The notion of a species itself comes from science and bears no relationship to the number of names we have for animals (or other organisms) in a particular language. So any time the concept of species comes up in these discussions, people are discussing wildly different things. And they rarely mention that within science, the models are being revised to encompass a more fluid notion of variation, and of populations that exhibit it in wide, unpredictable amounts.

I believe that what I have called “upward and downward thinking” – reasoning from specific examples to abstract models that are then applied to new examples – is a component of the acquisition of virtually every human concept, and that the act of acquiring it is individual and constructive. This process usually involves tropes that help individuals learn things in a multi-dimensional way, but whose application is not very well controlled. Individuals are usually left to decide on their own what features of a network of relations should be transferred from a known object to a new one. The development of a model is therefore inherently subjective, although it seems to become more objective after it has been shared, its predictions and boundaries have been tested by many people in a wide range of contexts, and it has become a currency for social agreement. This process entails an inherent cognitive flaw – at least in open-ended systems like cells, or the attempt to design a new type of chair – that I will explore more fully in later work.

But this account can already shift some of the rhetoric of evolutionary debates, because it discounts certain metaphors that are clearly inappropriate and no longer apply. Natural selection itself is an upward-downward concept. It can’t be considered some sort of external force – like a heat wave that scorches a population and leaves only the one individual with a unique form of a gene standing. Seeing it as a statistical event that happens within a subpopulation over many generations, rather than to individuals, is a large shift from the “survival of the fittest” mentality.

I think this view of life also rings the death-knell for the concept of a “selfish gene” (or “selfish allele”). A particular form of a molecule is only successful if it operates within a microenvironment that is permissive (and possibly encouraging) to its activity. This means that many molecules must be attuned to each other to create functional environments. When selection favors a gene, it simultaneously favors all the contingencies that allow it to succeed. These are not established in advance but arise through dialogue. At the moment, we are unable to survey all of the forms of a particular gene that are found in a population, or the variants of other genes that collaborate with it, or establish the mutual constraints on their behavior. So while we know that genes are “social” rather than selfish, at least theoretically, the extent of these mutual contingencies can’t yet be measured.

Evolutionary theory has proven tremendously valuable when it comes to assigning new facts a place in a model; its direct applications have also been incredibly powerful in manipulating organisms and biological systems. This has led to accusations that scientists are “playing God” by taking “artificial control” of “natural processes.” The metaphor only makes sense if you accept its religious premise; additionally, it is merely a way of dressing up the old debate between vitalism and materialism in new clothes. The same charge of “playing God” can be leveled at the inventor of a new type of chair, or anything else, unless you believe that there is some qualitative difference between manipulating living systems and “inorganic” objects (like wood, which is still organic, just no longer attached to a tree).

Genetic engineering and other activities certainly might affect human evolution by altering the environments in which we live, and might do so rapidly by releasing organisms that reproduce quickly under particular environmental conditions. On the other hand, changes are inevitably happening anyway as we change the environment in other ways, deliberately or not. Our planet now hosts seven billion humans who continue to produce new babies and waste products, who continually create new technologies, and who spread both diseases and cures at a faster rate than ever before. Our own existence and behavior are integral components of the environments of the future.

The more profound issue that underlies many of these debates, I think, is fear – fear of certain types of change, especially if they seem to threaten something of value. Evolution offers no guarantee that humans will survive (nor does the notion of a “Rapture”); it also allows for changes that we personally wouldn’t care for. We can only be glad that ancient hominids didn’t regard themselves as the pinnacle of Creation and somehow nip future evolution in the bud. They could never have succeeded, nor could the eugenicists, because there is no way to prevent random biological variation and gain long-term control over the fate of our species.

The alternative to a fluid, evolving view of life is a static model that is the gateway to a mechanistic view and thus a deterministic one. If the central metaphor in understanding life is a man-made machine, it is easy to overlook all of the aspects that are non-machine-like, particularly in the interconnectedness of every level of every biological system. To think otherwise is to continue to debate evolution in an intellectual Flatland that the theory has already escaped.

I don’t think a deterministic system can survive within a much greater model that is fluid, individually constructed, open-ended, tolerant of variation, engaged in a multidimensional conversation with its environment – in other words, organic. The metaphor of a watch – or of any other machine – is far too mechanistic to describe any living system. The amazing complexity of life is not evidence of deliberate creation or intelligent design; in fact, its unpredictability is the best evidence for an ongoing process of evolution.

– Russ Hodge, April 2013

[1] Russ Hodge. Evolution: the History of Life on Earth. New York: Facts on File, 2009.

[2] Richard Rodgers and Oscar Hammerstein. “The Farmer and the Cowman Should Be Friends” (song). Oklahoma! (musical). 1943.

[3] For a fairly complete list of tropes, see “Figure of speech.”

[4] In 1872 Francis Galton, a cousin of Charles Darwin, studied the health of the British Royal family. So many people prayed for their health, he reasoned, that if “third-party” prayer were effective, they ought to have exceptional health. But it appeared to have no effects on their longevity.

[5] Edwin A. Abbott. Flatland: A Romance of Many Dimensions. Dover Publications, 1992.

[6] Hull’s comment from a book review is widely quoted; I have not yet found the original source.

[7] “Ockham’s razor”. Encyclopædia Britannica. Encyclopædia Britannica Online. 2010. Retrieved 1 July 2011.

[8] William Paley. Natural Theology. (Originally published in 1802). DeWard Publishing, 2010.

[9] Intelligent design in court. See, for example, “Judge rules against ‘intelligent design.’” Last accessed on April 5, 2013.

[10] Behe, Michael. Darwin’s Black Box: the Biochemical Challenge to Evolution. Tenth Anniversary Edition. New York: Free Press, 2006.

[11] See, for example, the “Human Microbiome Project.” Accessed April 15, 2013.

[12] Miller, SL. A production of amino acids under possible primitive earth conditions. Science. 1953 May 15;117(3046):528-9.

[13] See, for example, Hubel, D.H.; Wiesel, T.N. (February 1, 1970). “The period of susceptibility to the physiological effects of unilateral eye closure in kittens”. The Journal of Physiology 206 (2): 419–436.