Wednesday, December 28, 2011

Paul Boghossian on relativism: http://opinionator.blogs.nytimes.com/2011/07/24/the-maze-of-moral-relativism/?scp=1&sq=boghossian&st=cse


The Maze of Moral Relativism

Relativism about morality has come to play an increasingly important role in contemporary culture.  To many thoughtful people, and especially to those who are unwilling to derive their morality from a religion, it appears unavoidable.  Where would absolute facts about right and wrong come from, they reason, if there is no supreme being to decree them? We should reject moral absolutes, even as we keep our moral convictions, allowing that there can be right and wrong relative to this or that moral code, but no right and wrong per se.  (See, for example, Stanley Fish’s 2001 op-ed, “Condemnation Without Absolutes.”)[1]
Is it plausible to respond to the rejection of absolute moral facts with a relativistic view of morality?  Why should our response not be a more extreme, nihilistic one, according to which we stop using normative terms like “right” and “wrong” altogether, be it in their absolutist or relativist guises?

Relativism is not always a coherent way of responding to the rejection of a certain class of facts.  When we decided that there were no such things as witches, we didn’t become relativists about witches.  Rather, we just gave up witch talk altogether, except by way of characterizing the attitudes of people (such as those in Salem) who mistakenly believed that the world contained witches, or by way of characterizing what it is that children find it fun to pretend to be on Halloween.  We became what we may call “eliminativists” about witches.
On the other hand, when Einstein taught us, in his Special Theory of Relativity, that there was no such thing as the absolute simultaneity of two events, the recommended outcome was that we become relativists about simultaneity, allowing that there is such a thing as “simultaneity relative to a (spatio-temporal) frame of reference,” but not simultaneity as such.
What’s the difference between the witch case and the simultaneity case?  Why did the latter rejection lead to relativism, but the former to eliminativism?
In the simultaneity case, Einstein showed that while the world does not contain simultaneity as such, it does contain its relativistic cousin — simultaneity relative to a frame of reference — a property that plays something like the same sort of role as classical simultaneity did in our theory of the world.
By contrast, in the witch case, once we give up on witches, there is no relativistic cousin that plays anything like the role that witches were supposed to play.   The property, that two events may have, of “being simultaneous relative to frame of reference F” is recognizably a kind of simultaneity.  But the property of “being a witch according to a belief system T” is not a kind of witch, but a kind of content (the content of belief system T):  it’s a way of characterizing what belief system T says, not a way of characterizing the world.
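To see concretely why simultaneity survives as a relative notion, here is the standard Lorentz-transformation calculation, sketched with the usual textbook symbols (this is the generic special-relativity derivation, not anything in Boghossian's essay). Two events at different positions $x_1 \neq x_2$ that occur at the same time $t$ in frame $S$ are assigned, in a frame $S'$ moving at velocity $v$, the times

$$ t'_i = \gamma\left(t - \frac{v x_i}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, $$

so that

$$ t'_1 - t'_2 = \frac{\gamma v (x_2 - x_1)}{c^2} \neq 0. $$

The two events are simultaneous relative to $S$ but not relative to $S'$: "simultaneity relative to a frame" remains perfectly well defined even though simultaneity as such does not.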
Now, is the moral case more like that of simultaneity or more like that of witches?  When we reject absolute moral facts, is moral relativism the correct outcome, or is it moral eliminativism (nihilism)?
The answer, as we have seen, depends on whether there are relativistic cousins of “right” and “wrong” that can play something like the same role that absolute “right” and “wrong” play.
It is hard to see what those could be.
What’s essential to “right” and “wrong” is that they are normative terms, terms that are used to say how things ought to be, in contrast with how things actually are.  But what relativistic cousin of “right” and “wrong” could play anything like such a normative role?
Most moral relativists say that moral right and wrong are to be relativized to a community’s “moral code.” According to some such codes, eating beef is permissible; according to others, it is an abomination and must never be allowed.  The relativist proposal is that we must never talk simply about what’s right or wrong, but only about what’s “right or wrong relative to a particular moral code.”
The trouble is that while “Eating beef is wrong” is clearly a normative statement, “Eating beef is wrong relative to the moral code of the Hindus” is just a descriptive remark that carries no normative import whatsoever.  It’s just a way of characterizing what is claimed by a particular moral code, that of the Hindus.  We can see this from the fact that anyone, regardless of their views about eating beef, can agree that eating beef is wrong relative to the moral code of the Hindus.
So, it looks as though the moral case is more like the witch case than the simultaneity case:  there are no relativistic cousins of “right” and “wrong.”  Denial of moral absolutism leads not to relativism, but to nihilism.[2]
There is no half-way house called “moral relativism,” in which we continue to use normative vocabulary with the stipulation that it is to be understood as relativized to particular moral codes.  If there are no absolute facts about morality, “right” and “wrong” would have to join “witch” in the dustbin of failed concepts.
The argument is significant because it shows that we should not rush to give up on absolute moral facts, mysterious as they can sometimes seem, for the world might seem even more mysterious without any normative vocabulary whatsoever.
One might be suspicious of my argument against moral relativism. Aren’t we familiar with some normative domains — such as that of etiquette — about which we are all relativists?  Surely, no one in their right mind would think that there is some absolute fact of the matter about whether we ought to slurp our noodles while eating.
If we are dining at Buckingham Palace, we ought not to slurp, since our hosts would consider it offensive, and we ought not, other things being equal, offend our hosts.  On the other hand, if we are dining in Xian, China, we ought to slurp, since in Xian slurping is considered to be a sign that we are enjoying our meal, and our hosts would consider it offensive if we didn’t slurp, and we ought not, other things being equal, offend our hosts.
But if relativism is coherent in the case of etiquette, why couldn’t we claim that morality is relative in the same way?
The reason is that our relativism about etiquette does not actually dispense with all absolute moral facts.  Rather, we are relativists about etiquette in the sense that, with respect to a restricted range of issues (such as table manners and greetings), we take the correct absolute norm to be “we ought not, other things being equal, offend our hosts.”
This norm is absolute and applies to everyone and at all times.  Its relativistic flavor comes from the fact that, with respect to that limited range of behaviors (table manners and greetings, but not, say, the abuse of children for fun), it advocates varying one’s behavior with local convention.
In other words, the relativism of etiquette depends on the existence of absolute moral norms.  Since etiquette does not dispense with absolute moral facts, one cannot hope to use it as a model for moral relativism.
Suppose we take this point on board, though, and admit that there have to be some absolute moral facts.  Why couldn’t they all be like the facts involved in etiquette?  Why couldn’t they all say that, with respect to any morally relevant question, what we ought to do depends on what the local conventions are?
The trouble with this approach is that once we have admitted that there are some absolute moral facts, it is hard to see why we shouldn’t think that there are many — as many as common sense and ordinary reasoning appear to warrant.  Having given up on the purity of a thoroughgoing anti-absolutism, we would now be in the business of trying to figure out what absolute moral facts there are.  To do that, we would need to employ our usual mix of argument, intuition and experience.  And what argument, intuition and experience tell us is that whether we should slurp our noodles depends on what the local conventions are, but whether we should abuse children for fun does not.
A would-be relativist about morality needs to decide whether his view grants the existence of some absolute moral facts, or whether it is to be a pure relativism, free of any commitment to absolutes.  The latter position, I have argued, is mere nihilism; whereas the former leads us straight out of relativism and back into the quest for the moral absolutes.
None of this is to deny that there are hard cases, where it is not easy to see what the correct answer to a moral question is.  It is merely to emphasize that there appears to be no good alternative to thinking that, when we are in a muddle about what the answer to a hard moral question is, we are in a muddle about what the absolutely correct answer is.


FOOTNOTES:
[1] Pinning a precise philosophical position on someone, especially a non-philosopher, is always tricky, because people tend to give non-equivalent formulations of what they take to be the same view. Fish, for example, after saying that his view is that “there can be no independent standards for determining which of many rival interpretations of an event is the true one,” which sounds appropriately relativistic, ends up claiming that all he means to defend is “the practice of putting yourself in your adversary’s shoes, not in order to wear them as your own but in order to have some understanding (far short of approval) of why someone else might want to wear them.” The latter, though, is just the recommendation of empathetic understanding and is, of course, both good counsel and perfectly consistent with the endorsement of moral absolutes.
Another view with which moral relativism is sometimes conflated is the view that the right thing to do can depend on the circumstances. There is no question that the right thing to do can depend on the circumstances, even on an absolutist view. Whether you should help someone in need can depend on what your circumstances are, what their circumstances are, and so forth. What makes a view relativistic is its holding that the right thing to do depends not just on the circumstances, but on what the person (or his community) takes to be the right thing to do, on their moral code.
In this column, I am only concerned with those who wish to deny that there are any absolute moral truths in this sense. If that is not your view, then you are not the target of this particular discussion.
[2] Some philosophers may think that they can evade this problem by casting the relativism in terms of a relativized truth predicate rather than a relativized moral predicate. But as I have explained elsewhere, the problem of the loss of normative content recurs in that setting.

Paul Boghossian is Silver Professor of Philosophy at New York University. He is the author of “Fear of Knowledge: Against Relativism and Constructivism,” “Content and Justification: Philosophical Papers,” and co-editor of “New Essays on the A Priori,” all from Oxford University Press. More of his work can be found on his Web site.

Monday, December 26, 2011

ID and retinal design

The "poor design" of the retina has long been a standard objection to the efficiency of  the design of the eye. Here is an answer. Casey Luskin is an excellent scientist, and very generous in answering queries. 


http://www.discovery.org/a/18011


Eyeballing Design
"Biomimetics" Exposes Attacks on ID as Poorly Designed

By: Casey Luskin
Salvo Magazine
December 20, 2011


At least since the ancient Chinese tried to produce artificial silk, people have turned to biology for inspiration when designing technology. A 2009 article in the world's oldest science journal, Philosophical Transactions of the Royal Society of London, authored by Ohio State University nanotechnology engineer Bharat Bhushan, explains how this design process works:
The understanding of the functions provided by objects and processes found in nature can guide us to imitate and produce nanomaterials, nanodevices and processes. Biologically inspired design or adaptation or derivation from nature is referred to as "biomimetics." It means mimicking biology or nature.1
Perhaps the most familiar example of biomimetics is the body shape of birds serving as the inspiration for aircraft design. But the list of fascinating cases where engineers have mimicked nature to develop or improve human technology goes on and on:
• Faster Speedo swimsuits have been developed by studying the properties of sharkskin.
• Spiny hooks on plant seeds and fruits led to the development of Velcro.
• Better tire treads were created by understanding the shape of toe pads on tree frogs.
• Polar bear fur has inspired textiles and thermal collectors.
• Studying hippo sweat promises to lead to better sunscreen.
• Volvo has studied how locusts swarm without crashing into one another to develop an anti-collision system.
• Mimicking mechanisms of photosynthesis and chemical energy conversion might lead to the creation of cheaper solar cells.
• Copying the structure of sticky gecko feet could lead to the development of tape with cleaner and drier super-adhesion.
• Color-changing cuttlefish have inspired television screens that use a fraction of the power of standard TVs.
• DNA might become a framework for building faster microchips.
• The ability of the human ear to pick up many frequencies of sound is being replicated to build better antennas.
• The Namibian fog-basking beetle has inspired methods of desalinizing ocean water, growing crops, and producing electricity, all in one!
Disclaiming Design
The purpose of Dr. Bhushan's paper was to encourage engineers to study nature when creating technology. For some reason, however, he felt compelled to open his article with the following disclaimer:
Nature has gone through evolution over the 3.8 Gyr [Gigayear, equal to one billion years] since life is estimated to have appeared on the Earth. Nature has evolved objects with high performance using commonly found materials.
Why did Bhushan feel this was necessary?
The answer is hard to miss. The widespread practice and success of biomimetics among technology-creating engineers has powerful implications that point to intelligent design (ID). After all, if human technology is intelligently designed, and if biological systems inspire or outperform man-made systems, then we are confronted with the not-so-subtle inference that nature, too, might have been designed.

To prevent ID-oriented thoughts from entering the minds of readers, materialists writing about biomimetics have long upheld a tradition of including superfluous praise of the amazing power of Darwinian evolution.
For example, when explaining how the unique bumpy shape of whale flippers has been mimicked to improve wind turbine design, a ScienceDaily article reminded readers that "sea creatures have evolved over millions of years to maximise efficiency of movement through water."2
Similarly, in 2008, Business Week carried a piece on biomimetics noting that "ultra-strong, biodegradable glues" have been developed "by analyzing how mussels cling to rocks under water," and that bullet-trains could be made more aerodynamic if given "a distinctly bird-like nose." But the story couldn't help but point out that these biological templates weren't designed, but rather "evolved in the natural world over billions of years."3
It's uncanny how predictable this theme has become. In another instance, MSNBC explained how "armor" on fish might be copied to improve battle wear for soldiers. Yet the article included the obligatory subheading instructing readers that "millions of years of evolution could provide exactly what we need today."4
Well, aren't we lucky?
Better Keep the Disclaimers
Dr. Bhushan was wise to include his disclaimer promoting unguided evolution: From an ID-based view, it's unsurprising that designers of human technology would find so many solutions to problems within the biosphere. ID-friendly implications permeate the field of biomimetics, and they are dangerous to materialism.
Evolutionary thinkers, of course, will assert that these finely tuned biological systems evolved by blind natural selection preserving random mutations. Over billions of years, they imagine, this unguided process perfected these systems, ultimately besting the inventions of our top engineering minds.
Such deeply held convictions might be hard to unseat from the minds of materialists. But consider this: When human engineers want to create technology, do they use unguided processes of random mutation and natural selection? No. They use intelligent design.
In fact, whenever we understand the origin of a piece of technology, we see that intelligent design was always required to generate the system. How, then, is Dr. Bhushan so confident that the elegant systems in nature that surpass human designs—including multi-component machines—resulted from unguided evolutionary processes?
Poorly Designed Objections
Some materialists attack design arguments not by alleging that biological systems lack high levels of specified complexity, but by alleging that they are full of "flaws." Yet anyone who has used Microsoft Windows is painfully aware that flawed designs are still designed. But theistic evolutionist biologist Kenneth Miller argues that evolution would naturally lead us to expect the biological world to be full of "cobbled together" kluges that reflect the clumsy, undirected Darwinian process.5
For example, Miller maintains that the vertebrate eye was not intelligently designed because the optic nerve extends over the retina instead of going out the back of the eye—an alleged design flaw. According to Miller, "visual quality is degraded because light scatters as it passes through several layers of cellular wiring before reaching the retina."
Similarly, Richard Dawkins contends that the retina is "wired in backwards" because light-sensitive cells face away from the incoming light, which is partly blocked by the optic nerve. In Dawkins's ever-humble opinion, the vertebrate eye is "the design of a complete idiot."6
A closer examination shows that the design of the vertebrate eye works far better than Dawkins and Miller let on.
Dawkins concedes that the optic nerve's impact on vision is "probably not much," but the negative effect is even less than he admits. Only if you cover one eye and stare directly at a fixed point does a tiny "blind spot" appear in your peripheral vision as a result of the optic nerve covering the retina. When both eyes are functional, the brain compensates for the blind spot by meshing the visual fields of both eyes. Under normal circumstances, the nerve's wiring does nothing to hinder vision.
Nonetheless, Dawkins argues that even if the design works, it would "offend any tidy-minded engineer." But the overall design of the eye actually optimizes visual acuity.
To achieve the high-quality vision that vertebrates need, retinal cells require a large blood supply. By facing the photoreceptor cells toward the back of the retina, and extending the optic nerve out over them, the cells are able to plug directly into the blood vessels that feed the eye, maximizing access to blood.
Pro-ID biologist George Ayoub suggests a thought experiment where the optic nerve goes out the back of the retina, the way Miller and Dawkins claim it ought to be wired. Ayoub finds that this design would interfere with blood supply, as the nerve would crowd out blood vessels. In this case, the only means of restoring blood supply would be to place capillaries over the retina—but this change would block even more light than the optic nerve does under the actual design.
Ayoub concludes: "In trying to eliminate the blind spot, we have generated a host of new and more severe functional problems to solve."7
In 2010, two eye specialists made a remarkable discovery, revealing the elegant mechanism by which vertebrate eyes solve the problem of any blockage of light due to the position of the optic nerve. Special "glial cells" sit over the retina and act like fiber-optic cables to channel light through the optic nerve wires directly onto the photoreceptor cells. According to New Scientist, these funnel-shaped cells prevent scattering of light and "act as light filters, keeping images clear."8
Ken Miller acknowledges that an intelligent designer "would choose the orientation that produces the highest degree of visual quality." Yet that seems to be exactly what we find in the vertebrate eye. In fact, the team of scientists who determined the function of glial cells concluded that the "retina is revealed as an optimal structure designed for improving the sharpness of images."
ID theorist William Dembski has observed that "no one has demonstrated how the eye's function might be improved without diminishing its visual speed, sensitivity, and resolution."9 It's therefore unsurprising that optics engineers study the eye to improve camera technology. According to another tech article:
Borrowing one of nature's best designs, U.S. scientists have built an eye-shaped camera using standard sensor materials and say it could improve the performance of digital cameras and enhance imaging of the human body.
The article reported that the "digital camera has the size, shape and layout of a human eye" because "the curved shape greatly improves the field of vision, bringing the whole picture into focus."10
It seems that human eyes are so poorly designed that engineers regularly mimic them.
Repeat After Me . . .
Bhushan ends his article on biomimetics by paying more lip service to evolution, declaring that "nature has evolved and optimized a large number of materials and structured surfaces with rather unique characteristics." His chosen blindness to the pro-ID implications of biomimetics does not negate the fact that, intriguingly, nature routinely inspires and outperforms the best human technology.
Biologists and engineers who still want to believe that life's elegant complexity results from neo-Darwinian processes may find that the only way to do so is to keep repeating Francis Crick's mantra—"Biologists must constantly keep in mind that what they see was not designed, but rather evolved"—over and over to themselves. •
Endnotes
1. Bharat Bhushan, "Biomimetics: lessons from nature—an overview," Philosophical Transactions of the Royal Society of London A, vol. 367 (2009), pp. 1445–1486.
2. "Whales and Dolphins Influence New Wind Turbine Design" ScienceDaily (July 7, 2008): www.sciencedaily.com/releases/2008/07/080707222315.htm.
3. Matt Vella, "Using Nature as a Design Guide," Bloomberg Businessweek (February 11, 2008):www.businessweek.com/innovate/content/feb2008/id20080211_074559.htm.
4. Jeanna Bryner, "Incredible fish armor could suit soldiers" (July 28, 2008):www.msnbc.msn.com/id/25886406.
5. Kenneth R. Miller, "Life's Grand Design," Technology Review (February/March 1994), pp. 25–32.
6. Richard Dawkins, The Greatest Show on Earth: The Evidence for Evolution (Free Press, 2009), p. 354.
7. George Ayoub, "On the Design of the Vertebrate Retina," Origins & Design, vol. 17:1 (Winter 1996): www.arn.org/docs/odesign/od171/retina171.htm.
8. Kate McAlpine, "Evolution gave flawed eye better vision," New Scientist (May 6, 2010): www.newscientist.com/article/mg20627594.000-evolution-gave-flawed-eye-better-vision.html.
9. William Dembski & Sean McDowell, Understanding Intelligent Design: Everything You Need to Know in Plain Language (Harvest House, 2008), p. 53.
10. Julie Steenhuysen, "Eye spy: U.S. scientists develop eye-shaped camera," Reuters (August 6, 2008): www.reuters.com/article/2008/08/06/us-camera-eye-idUSN0647922920080806.

Sunday, December 11, 2011

fetal learning

Think of all the traditional sources concerning the effect of the behavior of a pregnant woman on her fetus. And then read this: http://edition.cnn.com/2011/12/11/opinion/paul-ted-talk/index.html?hpt=hp_c4




Editor's note: Annie Murphy Paul is the author of "Origins: How the Nine Months Before Birth Shape the Rest of Our Lives." She's now working on a book about learning, and writes a weekly column at Time.com called "Brilliant: The Science of Smart." TED is a nonprofit organization dedicated to "Ideas worth spreading," which it distributes through talks posted on its website.

(CNN) -- When does learning begin? As I explain in the talk I gave at TED, learning starts much earlier than many of us would have imagined: in the womb.
I was as surprised as anyone when I first encountered this notion. I'm a science writer, and my job is to trawl the murky depths of the academic journals, looking for something shiny and new -- a sparkling idea that catches my eye in the gloom.
Starting a few years ago, I began noticing a dazzling array of findings clustered around the prenatal period. These discoveries were generating considerable excitement among scientists, even as they overturned settled beliefs about when we start absorbing and responding to information from our environment. As a science reporter -- and as a mother -- I had to find out more.
This research, I discovered, is part of a burgeoning field known as "fetal origins," and it's turning pregnancy into something it has never been before: a scientific frontier. Obstetrics was once a sleepy medical specialty, and research on pregnancy a scientific backwater. Now the nine months of gestation are the focus of intense interest and excitement, the subject of an exploding number of journal articles, books, and conferences.
What it all adds up to is this: much of what a pregnant woman encounters in her daily life -- the air she breathes, the food and drink she consumes, the chemicals she's exposed to, even the emotions she feels -- is shared in some fashion with her fetus. These encounters make up a mix of influences as individual and idiosyncratic as the woman herself. The fetus treats these maternal contributions as information, as what I like to call biological postcards from the world outside.
By attending to such messages, the fetus learns the answers to questions critical to its survival: Will it be born into a world of abundance, or scarcity? Will it be safe and protected, or will it face constant dangers and threats? Will it live a long, fruitful life, or a short, harried one?
The pregnant woman's diet and stress level, in particular, provide important clues to prevailing conditions, a finger lifted to the wind. The resulting tuning and tweaking of the fetus's brain and other organs are part of what give humans their enormous flexibility, their ability to thrive in environments as varied as the snow-swept tundra in Siberia and the golden-grassed savanna in Africa.
The recognition that learning actually begins before birth leads us to a striking new conception of the fetus, the pregnant woman and the relationship between them.
The fetus, we now know, is not an inert blob, but an active and dynamic creature, responding and adapting as it readies itself for life in the particular world it will soon enter. The pregnant woman is neither a passive incubator nor a source of always-imminent harm to her fetus, but a powerful and often positive influence on her child even before it's born. And pregnancy is not a nine-month wait for the big event of birth, but a crucial period unto itself -- "a staging period for well-being and disease in later life," as one scientist puts it.
This crucial period has become a promising new target for prevention, raising hopes of conquering public health scourges like obesity and heart disease by intervening before birth. By "teaching" fetuses the appropriate lessons while they're still in utero, we could potentially end vicious cycles of poverty, infirmity and illness and initiate virtuous cycles of health, strength and stability.
So how can pregnant women communicate to their fetuses what they need to know?
Eat fish, scientists suggest, but make sure it's the low-mercury kind -- the omega-3 fatty acids in seafood are associated with higher verbal intelligence and better social skills in school-age children. Exercise: research suggests that fetuses benefit from their mothers' physical activity. Protect yourself from toxins and pollutants, which are linked to birth defects and lowered IQ.
Don't worry too much about stress: research shows that moderate stress during pregnancy is associated with accelerated infant brain development. Seek help if you think you might be suffering from depression: the babies of depressed women are more likely to be born early and at low birth weight, and may be more irritable and have more trouble sleeping. And -- my favorite advice -- eat chocolate: it's associated with a lower risk of the high blood pressure condition known as preeclampsia.
When we hold our babies for the first time, we imagine them clean and new, unmarked by life, when in fact they have already been shaped by the world, and by us. It's my privilege to share with the TED audience the good news about how we can teach our children well from the very beginning.

Wednesday, November 23, 2011

summary of main issues in evolution


http://www.evolutionnews.org/2011/11/berlinski_on_darwin_on_trial053171.html

Majestic Ascent: Berlinski on Darwin on Trial



Richard Dawkins published The Blind Watchmaker in 1986. The appearance of design in nature, Dawkins argued, is an illusion. Complex biological structures may be entirely explained by random variations and natural selection. Why biology should be quite so vested in illusions, Dawkins did not say. The Blind Watchmaker captured the public's imagination, but in securing the public's allegiance, very little was left to chance. Those critics who believed that living systems appear designed because they are designed underwent preemptive attack in the New York Times. "Such are the thought habits of uncultivated intellects," wrote the biologist Michael Ghiselin, " -- children, savages and simpletons."
Comments such as these had the effect of raw meat dropped carelessly among carnivores. A scramble ensued to get the first bite. No one bothered to attack the preposterous Ghiselin. It was Richard Dawkins who had waggled his tempting rear end, and behind Dawkins, cheek to cheek, Charles Darwin. With the publication in 1991 of Darwin on Trial, Phillip Johnson did what carnivores so often do: He took a bite.
Johnson was at the time a professor of law at the University of California at Berkeley, a man whose training had given him what great lawyers so often have: a shrewd eye for the main chance. Darwin's theory, Johnson observed, is hardly in need of empirical support: It is its own best friend. "The prevailing assumption in evolutionary science," he wrote, "seems to be that speculative possibilities, without experimental confirmation, are all that is really necessary."
This is wrong only to the extent that speculative possibilities without experimental confirmation are often all that is really possible.
Every paleontologist writing since Darwin published his masterpiece in 1859 has known that the fossil record does not support Darwin's theory. The theory predicted a continuum of biological forms, so much so that from the right perspective, species would themselves be seen as taxonomic artifacts, like the classification of certain sizes in men's suiting as husky. Questions about the origin of species were resolved in the best possible way: There are no species and so there is no problem. Inasmuch as the historical record suggested a discrete progression of fixed biological forms, it was fatal to Darwin's project. All the more reason, Darwin argued, to discount the evidence in favor of the theory. "I do not pretend," he wrote, "that I should ever have suspected how poor a record of the mutations of life, the best preserved geological section presented, had not the difficulty of our not discovering innumerable transitional links between the species which appeared at the commencement and close of each formation, pressed so hardly on my theory."
This is, as Johnson noted, self-serving gibberish.
Few serious biologists are today willing to defend the position that Dawkins expressed in The Blind Watchmaker. The metaphor remains stunning and so the watchmaker remains blind, but he is now deaf and dumb as well. With a few more impediments, he may as well be dead. The publication in 1983 of Motoo Kimura's The Neutral Theory of Molecular Evolution consolidated ideas that Kimura had introduced in the late 1960s. On the molecular level, evolution is entirely stochastic, and if it proceeds at all, it proceeds by drift along a leaves-and-current model. Kimura's theories left the emergence of complex biological structures an enigma, but they played an important role in the local economy of belief. They allowed biologists to affirm that they welcomed responsible criticism. "A critique of neo-Darwinism," the Dutch biologist Gert Korthof boasted, "can be incorporated into neo-Darwinism if there is evidence and a good theory, which contributes to the progress of science."
By this standard, if the Archangel Gabriel were to accept personal responsibility for the Cambrian explosion, his views would be widely described as neo-Darwinian.
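To make "drift" concrete: in the textbook Wright-Fisher model that underlies Kimura's neutral theory, a selectively neutral allele's frequency changes only by sampling accident, wandering at random until the allele is fixed or lost. Here is a minimal Python sketch of that model; the population size, starting frequency, and seed are illustrative assumptions, not figures from Kimura or this article.

```python
import random

def neutral_drift(pop_size=500, p0=0.5, max_generations=100_000, seed=1):
    """Wright-Fisher drift for a selectively neutral allele.

    Each generation is formed by resampling pop_size gene copies from
    the previous one; with no selection, the allele frequency performs
    a random walk until it hits 0 (lost) or 1 (fixed).
    """
    rng = random.Random(seed)
    p = p0
    for gen in range(1, max_generations + 1):
        # Each new gene copy inherits the allele with probability
        # equal to the allele's current frequency.
        count = sum(rng.random() < p for _ in range(pop_size))
        p = count / pop_size
        if p == 0.0 or p == 1.0:
            return gen, p
    return max_generations, p

if __name__ == "__main__":
    gen, p = neutral_drift()
    fate = "fixed" if p == 1.0 else "lost" if p == 0.0 else f"still at {p:.2f}"
    print(f"Neutral allele {fate} after {gen} generations, by chance alone.")
```

Nothing in the loop rewards or penalizes the allele; whatever happens to it happens by sampling error alone, which is what "entirely stochastic" means here.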
In Darwin on Trial, Johnson ascended majestically above the usual point of skepticism. It was the great case of Darwin et al v. the Western Religious Tradition that occupied his attention. The issue had been joined long before Johnson wrote. But the case had not been decided. It had not been decisively decided and like some terrifying cripple, it had continued to bang its crutches through all the lower courts of Hell and Dover, Pennsylvania.
A few nodding judges such as Stephen Jay Gould thought to settle the matter by splitting the difference between litigants. To science, Gould assigned everything of importance, and to religion, nothing. Such was his theory of non-overlapping magisteria, or NOMA, a term very much suggesting that Gould was endowing a new wing at the Museum of Modern Art. Serving two masters, Gould supposed that he would be served by them in turn. He was mistaken. In approaching Darwin's theory of evolution, theistic evolutionists acquired a posture of expectant veneration, imagining hopefully that their deference would allow them to lick the plates from various scientific tables. They were in short order assured that having settled for nothing, nothing is what they would get. From the likes of Richard Dawkins or Daniel Dennett, Gould got what he deserved.
And from Phillip Johnson too, but in a different way and with different ends in mind.
The scientific establishment had long believed that in re Darwin et al v. The Western Religious Tradition, Darwin would prevail. They expected to be assigned governance over the ideology of a democratic society. Their palms had collectively commenced to itch. Newspapers hymned Darwin's praise, and television documentaries -- breathless narrator, buggy jungle -- celebrated his achievement. Museum curators rushed to construct Darwinian dioramas in which human beings were shown ascending, step by step, from some ancient simian conclave, one suspiciously like the faculty lounge at Harvard. A very considerable apparatus of propaganda and persuasion was put at the disposal of the Darwinian community.
And, yet, no matter the extent to which Darwin's theory was said to be beyond both reappraisal and reproach, the conviction remained current that it was not so, and if so, not entirely so.
"Why not consider," Johnson asked, "the possibility that life is what it so evidently seems to be, the product of creative intelligence?"
The question is entirely reasonable. It is the question that every thoughtful person is inclined to ask.
So why not ask it?
No standard by which science is justified as an institution rules it out. The ones commonly employed -- naturalism, falsifiability, materialism, methodological naturalism -- are useless as standards because they are transparent dodges.
The geneticist Richard Lewontin -- Harvard, oddly enough -- provided an answer to Johnson's question that is a masterpiece of primitive simplicity. It surely deserves to be quoted at length and quoted in full:
Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.
There is much in these remarks that is analytically defective. The "commitment to materialism" that Lewontin defends is hardly clear enough to merit rebuttal. The history of physics suggests the maxim that anything goes if something works. It is as useful a maxim in mathematical physics as it is in international finance.
Nonetheless, Lewontin, as Johnson understood, had properly grasped the dynamics of the Great Case, its inner meaning. What remains when materialism (or anything else) is subtracted from Lewontin's prior commitment is the prior commitment itself; and like all such commitments, it is a commitment no matter what. Had they read the New York Review of Books, mullahs in Afghanistan would have understood Lewontin perfectly. They would have scrupled only at the side he had chosen.
Darwin's theories are correspondingly less important for what they explain, which is very little, and more important for what they deny, which is roughly the plain evidence of our senses. "Darwin," Richard Dawkins noted amiably, "had made it possible to be an intellectually fulfilled atheist."
But the Great Case, Johnson reminded his readers, has not yet been decided in the only court that counts, and that is the considered reflection of the human race. Efforts by one side to absent themselves from judgment are somewhat premature. Too much is at stake.
That much is at stake explains a good deal about the rhetoric of discussion in the United States, its vile tone. Biologists such as Jerry Coyne, Donald Prothero, Larry Moran or P.Z. Myers are of the opinion that if they cannot win the argument, they had better not lose it, and what better way not to lose an argument than to abuse one's antagonist? If necessary, the biological establishment has been quite willing to demand of the Federal Courts that they do what it has been unable to do in the court of public opinion. If the law is unwilling to act on their behalf, they are quite prepared to ignore it. Having been spooked by some tedious Darwinian toady, the California Science Center cancelled with blithe unconcern a contract to show a film about the Cambrian explosion. Spooked by some other Darwinian toady, the Department of Physics at the University of Kentucky denied the astronomer Martin Gaskell an appointment in astrophysics that he plainly was due.
The California Science Center paid up.
The University of Kentucky paid up.
As Phillip Johnson might well have reminded them, the law is a knife that cuts two ways.
At the Discovery Institute we often offer an inter-faith Prayer of Thanksgiving to the Almighty for the likes of P.Z. Myers, Larry Moran, Barbara Forrest, Rob Pennock and Jeffrey Shallit.
For Donald Prothero, we are prepared to sacrifice a ram.
And now? Both critics and defenders of Darwin's theory have been humbled by the evidence. We are the beneficiaries of twenty years of brilliant and penetrating laboratory work in molecular biology and biochemistry. Living systems are more complex than ever before imagined. They are strange in their organization and nature. No theory is remotely adequate to the facts.
There is some evidence that, once again, the diapason of opinion is changing. The claims of intelligent design are too insistent and too plausible to be frivolously dismissed, and the inadequacies of any Darwinian theory too obvious to be frivolously tolerated. Time has confirmed what critics like Phil Johnson have always suspected. Darwin's theory is far less a scientific theory than the default position for a view in which the universe and everything in it assembles itself from itself in a never-ending magical procession. The religious tradition, and with it a sense for the mystery, terror and grandeur of life, has always embodied insights that were never trivial.
The land is rising even as it sinks.
And this, too, is a message that Phil Johnson was pleased to convey.