Wednesday, December 28, 2011

Paul Boghossian on relativism (http://opinionator.blogs.nytimes.com/2011/07/24/the-maze-of-moral-relativism/?scp=1&sq=boghossian&st=cse):


The Maze of Moral Relativism

Relativism about morality has come to play an increasingly important role in contemporary culture.  To many thoughtful people, and especially to those who are unwilling to derive their morality from a religion, it appears unavoidable.  Where would absolute facts about right and wrong come from, they reason, if there is no supreme being to decree them? We should reject moral absolutes, even as we keep our moral convictions, allowing that there can be right and wrong relative to this or that moral code, but no right and wrong per se.  (See, for example, Stanley Fish’s 2001 op-ed, “Condemnation Without Absolutes.”)[1]
Is it plausible to respond to the rejection of absolute moral facts with a relativistic view of morality?  Why should our response not be a more extreme, nihilistic one, according to which we stop using normative terms like “right” and “wrong” altogether, whether in their absolutist or relativist guises?

Relativism is not always a coherent way of responding to the rejection of a certain class of facts.  When we decided that there were no such things as witches, we didn’t become relativists about witches.  Rather, we just gave up witch talk altogether, except by way of characterizing the attitudes of people (such as those in Salem) who mistakenly believed that the world contained witches, or by way of characterizing what it is that children find it fun to pretend to be on Halloween.  We became what we may call “eliminativists” about witches.
On the other hand, when Einstein taught us, in his Special Theory of Relativity, that there was no such thing as the absolute simultaneity of two events, the recommended outcome was that we become relativists about simultaneity, allowing that there is such a thing as “simultaneity relative to a (spatio-temporal) frame of reference,” but not simultaneity as such.
What’s the difference between the witch case and the simultaneity case?  Why did the latter rejection lead to relativism, but the former to eliminativism?
In the simultaneity case, Einstein showed that while the world does not contain simultaneity as such, it does contain its relativistic cousin — simultaneity relative to a frame of reference — a property that plays something like the same sort of role as classical simultaneity did in our theory of the world.
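As a quick illustration of the frame-relative property Einstein put in its place, the standard Lorentz-transformation calculation runs as follows. In a frame S, let two events occur at the same time t = 0 but at different places x₁ ≠ x₂. A frame S′ moving at velocity v along the x-axis assigns times

\[ t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \]

so the two events receive

\[ t'_1 = -\gamma\,\frac{v x_1}{c^{2}} \;\neq\; t'_2 = -\gamma\,\frac{v x_2}{c^{2}}. \]

The events are simultaneous relative to S but not relative to S′: simultaneity survives only as a frame-relative property.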
By contrast, in the witch case, once we give up on witches, there is no relativistic cousin that plays anything like the role that witches were supposed to play.   The property, that two events may have, of “being simultaneous relative to frame of reference F” is recognizably a kind of simultaneity.  But the property of “being a witch according to a belief system T” is not a kind of witch, but a kind of content (the content of belief system T):  it’s a way of characterizing what belief system T says, not a way of characterizing the world.
Now, the question is whether the moral case is more like that of simultaneity or more like that of witches.  When we reject absolute moral facts, is moral relativism the correct outcome, or is it moral eliminativism (nihilism)?
The answer, as we have seen, depends on whether there are relativistic cousins of “right” and “wrong” that can play something like the same role that absolute “right” and “wrong” play.
It is hard to see what those could be.
What’s essential to “right” and “wrong” is that they are normative terms, terms that are used to say how things ought to be, in contrast with how things actually are.  But what relativistic cousin of “right” and “wrong” could play anything like such a normative role?
Most moral relativists say that moral right and wrong are to be relativized to a community’s “moral code.” According to some such codes, eating beef is permissible; according to others, it is an abomination and must never be allowed.  The relativist proposal is that we must never talk simply about what’s right or wrong, but only about what’s “right or wrong relative to a particular moral code.”
The trouble is that while “Eating beef is wrong” is clearly a normative statement, “Eating beef is wrong relative to the moral code of the Hindus” is just a descriptive remark that carries no normative import whatsoever.  It’s just a way of characterizing what is claimed by a particular moral code, that of the Hindus.  We can see this from the fact that anyone, regardless of their views about eating beef, can agree that eating beef is wrong relative to the moral code of the Hindus.
So, it looks as though the moral case is more like the witch case than the simultaneity case:  there are no relativistic cousins of “right” and “wrong.”  Denial of moral absolutism leads not to relativism, but to nihilism.[2]
There is no half-way house called “moral relativism,” in which we continue to use normative vocabulary with the stipulation that it is to be understood as relativized to particular moral codes.  If there are no absolute facts about morality, “right” and “wrong” would have to join “witch” in the dustbin of failed concepts.
The argument is significant because it shows that we should not rush to give up on absolute moral facts, mysterious as they can sometimes seem, for the world might seem even more mysterious without any normative vocabulary whatsoever.
One might be suspicious of my argument against moral relativism. Aren’t we familiar with some normative domains — such as that of etiquette — about which we are all relativists?  Surely, no one in their right minds would think that there is some absolute fact of the matter about whether we ought to slurp our noodles while eating.
If we are dining at Buckingham Palace, we ought not to slurp, since our hosts would consider it offensive, and we ought not, other things being equal, offend our hosts.  On the other hand, if we are dining in Xian, China, we ought to slurp, since in Xian slurping is considered to be a sign that we are enjoying our meal, and our hosts would consider it offensive if we didn’t slurp, and we ought not, other things being equal, offend our hosts.
But if relativism is coherent in the case of etiquette why couldn’t we claim that morality is relative in the same way?
The reason is that our relativism about etiquette does not actually dispense with all absolute moral facts.  Rather, we are relativists about etiquette in the sense that, with respect to a restricted range of issues (such as table manners and greetings), we take the correct absolute norm to be “we ought not, other things being equal, offend our hosts.”
This norm is absolute and applies to everyone and at all times.  Its relativistic flavor comes from the fact that, with respect to that limited range of behaviors (table manners and greetings, but not, say, the abuse of children for fun), it advocates varying one’s behavior with local convention.
In other words, the relativism of etiquette depends on the existence of absolute moral norms.  Since etiquette does not dispense with absolute moral facts, one cannot hope to use it as a model for moral relativism.
Suppose we take this point on board, though, and admit that there have to be some absolute moral facts.  Why couldn’t they all be like the facts involved in etiquette?  Why couldn’t they all say that, with respect to any morally relevant question, what we ought to do depends on what the local conventions are?
The trouble with this approach is that once we have admitted that there are some absolute moral facts, it is hard to see why we shouldn’t think that there are many — as many as common sense and ordinary reasoning appear to warrant.  Having given up on the purity of a thoroughgoing anti-absolutism, we would now be in the business of trying to figure out what absolute moral facts there are.  To do that, we would need to employ our usual mix of argument, intuition and experience.  And what argument, intuition and experience tell us is that whether we should slurp our noodles depends on what the local conventions are, but whether we should abuse children for fun does not.
A would-be relativist about morality needs to decide whether his view grants the existence of some absolute moral facts, or whether it is to be a pure relativism, free of any commitment to absolutes.  The latter position, I have argued, is mere nihilism; whereas the former leads us straight out of relativism and back into the quest for the moral absolutes.
None of this is to deny that there are hard cases, where it is not easy to see what the correct answer to a moral question is.  It is merely to emphasize that there appears to be no good alternative to thinking that, when we are in a muddle about what the answer to a hard moral question is, we are in a muddle about what the absolutely correct answer is.


FOOTNOTES:
[1] Pinning a precise philosophical position on someone, especially a non-philosopher, is always tricky, because people tend to give non-equivalent formulations of what they take to be the same view. Fish, for example, after saying that his view is that “there can be no independent standards for determining which of many rival interpretations of an event is the true one,” which sounds appropriately relativistic, ends up claiming that all he means to defend is “the practice of putting yourself in your adversary’s shoes, not in order to wear them as your own but in order to have some understanding (far short of approval) of why someone else might want to wear them.” The latter, though, is just the recommendation of empathetic understanding and is, of course, both good counsel and perfectly consistent with the endorsement of moral absolutes.
Another view with which moral relativism is sometimes conflated is the view that the right thing to do can depend on the circumstances. There is no question that the right thing to do can depend on the circumstances, even on an absolutist view. Whether you should help someone in need can depend on what your circumstances are, what their circumstances are, and so forth. What makes a view relativistic is its holding that the right thing to do depends not just on the circumstances, but on what the person (or his community) takes to be the right thing to do, on their moral code.
In this column, I am only concerned with those who wish to deny that there are any absolute moral truths in this sense. If that is not your view, then you are not the target of this particular discussion.
[2] Some philosophers may think that they can evade this problem by casting the relativism in terms of a relativized truth predicate rather than a relativized moral predicate. But as I have explained elsewhere, the problem of the loss of normative content recurs in that setting.

Paul Boghossian is Silver Professor of Philosophy at New York University. He is the author of “Fear of Knowledge: Against Relativism and Constructivism,” “Content and Justification: Philosophical Papers,” and co-editor of “New Essays on the A Priori,” all from Oxford University Press. More of his work can be found on his Web site.

Monday, December 26, 2011

ID and retinal design

The "poor design" of the retina has long been a standard objection to the efficiency of  the design of the eye. Here is an answer. Casey Luskin is an excellent scientist, and very generous in answering queries. 


http://www.discovery.org/a/18011


Eyeballing Design
"Biomimetics" Exposes Attacks on ID as Poorly Designed

By: Casey Luskin
Salvo Magazine
December 20, 2011


At least since the ancient Chinese tried to produce artificial silk, people have turned to biology for inspiration when designing technology. A 2009 article in the world's oldest science journal, Philosophical Transactions of the Royal Society of London, authored by Ohio State University nanotechnology engineer Bharat Bhushan, explains how this design process works:
The understanding of the functions provided by objects and processes found in nature can guide us to imitate and produce nanomaterials, nanodevices and processes. Biologically inspired design or adaptation or derivation from nature is referred to as "biomimetics." It means mimicking biology or nature.1
Perhaps the most familiar example of biomimetics is the body shape of birds serving as the inspiration for aircraft design. But the list of fascinating cases where engineers have mimicked nature to develop or improve human technology goes on and on:
• Faster Speedo swimsuits have been developed by studying the properties of sharkskin.
• Spiny hooks on plant seeds and fruits led to the development of Velcro.
• Better tire treads were created by understanding the shape of toe pads on tree frogs.
• Polar bear fur has inspired textiles and thermal collectors.
• Studying hippo sweat promises to lead to better sunscreen.
• Volvo has studied how locusts swarm without crashing into one another to develop an anti-collision system.
• Mimicking mechanisms of photosynthesis and chemical energy conversion might lead to the creation of cheaper solar cells.
• Copying the structure of sticky gecko feet could lead to the development of tape with cleaner and dryer super-adhesion.
• Color-changing cuttlefish have inspired television screens that use a fraction of the power of standard TVs.
• DNA might become a framework for building faster microchips.
• The ability of the human ear to pick up many frequencies of sound is being replicated to build better antennas.
• The Namibian fog-basking beetle has inspired methods of desalinizing ocean water, growing crops, and producing electricity, all in one!
Disclaiming Design
The purpose of Dr. Bhushan's paper was to encourage engineers to study nature when creating technology. For some reason, however, he felt compelled to open his article with the following disclaimer:
Nature has gone through evolution over the 3.8 Gyr [Gigayear, equal to one billion years] since life is estimated to have appeared on the Earth. Nature has evolved objects with high performance using commonly found materials.
Why did Bhushan feel this was necessary?
The answer is hard to miss. The widespread practice and success of biomimetics among technology-creating engineers has powerful implications that point to intelligent design (ID). After all, if human technology is intelligently designed, and if biological systems inspire or outperform man-made systems, then we are confronted with the not-so-subtle inference that nature, too, might have been designed.

To prevent ID-oriented thoughts from entering the minds of readers, materialists writing about biomimetics have long upheld a tradition of including superfluous praise of the amazing power of Darwinian evolution.
For example, when explaining how the unique bumpy shape of whale flippers has been mimicked to improve wind turbine design, a ScienceDaily article reminded readers that "sea creatures have evolved over millions of years to maximise efficiency of movement through water."2
Similarly, in 2008, Business Week carried a piece on biomimetics noting that "ultra-strong, biodegradable glues" have been developed "by analyzing how mussels cling to rocks under water," and that bullet-trains could be made more aerodynamic if given "a distinctly bird-like nose." But the story couldn't help but point out that these biological templates weren't designed, but rather "evolved in the natural world over billions of years."3
It's uncanny how predictable this theme has become. In another instance, MSNBC explained how "armor" on fish might be copied to improve battle ware for soldiers. Yet the article included the obligatory subheading instructing readers that "millions of years of evolution could provide exactly what we need today."4
Well, aren't we lucky?
Better Keep the Disclaimers
Dr. Bhushan was wise to include his disclaimer promoting unguided evolution: From an ID-based view, it's unsurprising that designers of human technology would find so many solutions to problems within the biosphere. ID-friendly implications permeate the field of biomimetics, and they are dangerous to materialism.
Evolutionary thinkers, of course, will assert that these finely tuned biological systems evolved by blind natural selection preserving random mutations. Over billions of years, they imagine, this unguided process perfected these systems, ultimately besting the inventions of our top engineering minds.
Such deeply held convictions might be hard to unseat from the minds of materialists. But consider this: When human engineers want to create technology, do they use unguided processes of random mutation and natural selection? No. They use intelligent design.
In fact, whenever we understand the origin of a piece of technology, we see that intelligent design was always required to generate the system. How, then, is Dr. Bhushan so confident that the elegant systems in nature that surpass human designs—including multi-component machines—resulted from unguided evolutionary processes?
Poorly Designed Objections
Some materialists attack design arguments not by alleging that biological systems lack high levels of specified complexity, but by alleging that they are full of "flaws." Yet anyone who has used Microsoft Windows is painfully aware that flawed designs are still designed. But theistic evolutionist biologist Kenneth Miller argues that evolution would naturally lead us to expect the biological world to be full of "cobbled together" kluges that reflect the clumsy, undirected Darwinian process.5
For example, Miller maintains that the vertebrate eye was not intelligently designed because the optic nerve extends over the retina instead of going out the back of the eye—an alleged design flaw. According to Miller, "visual quality is degraded because light scatters as it passes through several layers of cellular wiring before reaching the retina."
Similarly, Richard Dawkins contends that the retina is "wired in backwards" because light-sensitive cells face away from the incoming light, which is partly blocked by the optic nerve. In Dawkins's ever-humble opinion, the vertebrate eye is "the design of a complete idiot."6
A closer examination shows that the design of the vertebrate eye works far better than Dawkins and Miller let on.
Dawkins concedes that the optic nerve's impact on vision is "probably not much," but the negative effect is even less than he admits. Only if you cover one eye and stare directly at a fixed point does a tiny "blind spot" appear in your peripheral vision as a result of the optic nerve covering the retina. When both eyes are functional, the brain compensates for the blind spot by meshing the visual fields of both eyes. Under normal circumstances, the nerves' wiring does nothing to hinder vision.
Nonetheless, Dawkins argues that even if the design works, it would "offend any tidy-minded engineer." But the overall design of the eye actually optimizes visual acuity.
To achieve the high-quality vision that vertebrates need, retinal cells require a large blood supply. By facing the photoreceptor cells toward the back of the retina, and extending the optic nerve out over them, the cells are able to plug directly into the blood vessels that feed the eye, maximizing access to blood.
Pro-ID biologist George Ayoub suggests a thought experiment where the optic nerve goes out the back of the retina, the way Miller and Dawkins claim it ought to be wired. Ayoub finds that this design would interfere with blood supply, as the nerve would crowd out blood vessels. In this case, the only means of restoring blood supply would be to place capillaries over the retina—but this change would block even more light than the optic nerve does under the actual design.
Ayoub concludes: "In trying to eliminate the blind spot, we have generated a host of new and more severe functional problems to solve."7
In 2010, two eye specialists made a remarkable discovery, revealing an elegant mechanism by which vertebrate eyes solve the problem of any blockage of light due to the position of the optic nerve. Special "glial cells" sit over the retina and act like fiber-optic cables to channel light through the optic nerve wires directly onto the photoreceptor cells. According to New Scientist, these funnel-shaped cells prevent scattering of light and "act as light filters, keeping images clear."8
Ken Miller acknowledges that an intelligent designer "would choose the orientation that produces the highest degree of visual quality." Yet that seems to be exactly what we find in the vertebrate eye. In fact, the team of scientists who determined the function of glial cells concluded that the "retina is revealed as an optimal structure designed for improving the sharpness of images."
ID-theorist William Dembski has observed that "no one has demonstrated how the eye's function might be improved without diminishing its visual speed, sensitivity, and resolution."9 It's therefore unsurprising that optics engineers study the eye to improve camera technology. According to another tech article:
Borrowing one of nature's best designs, U.S. scientists have built an eye-shaped camera using standard sensor materials and say it could improve the performance of digital cameras and enhance imaging of the human body.
The article reported that the "digital camera has the size, shape and layout of a human eye" because "the curved shape greatly improves the field of vision, bringing the whole picture into focus."10
It seems that human eyes are so poorly designed that engineers regularly mimic them.
Repeat After Me . . .
Bhushan ends his article on biomimetics by paying more lip service to evolution, declaring that "nature has evolved and optimized a large number of materials and structured surfaces with rather unique characteristics." His chosen blindness to the pro-ID implications of biomimetics does not negate the fact that, intriguingly, nature routinely inspires and outperforms the best human technology.
Biologists and engineers who still want to believe that life's elegant complexity results from neo-Darwinian processes may find that the only way to do so is to keep repeating Francis Crick's mantra—"Biologists must constantly keep in mind that what they see was not designed, but rather evolved"—over and over to themselves. •
Endnotes
1. Bharat Bhushan, "Biomimetics: lessons from nature—an overview," Philosophical Transactions of the Royal Society of London A, vol. 367 (2009), pp. 1445–1486.
2. "Whales and Dolphins Influence New Wind Turbine Design" ScienceDaily (July 7, 2008): www.sciencedaily.com/releases/2008/07/080707222315.htm.
3. Matt Vella, "Using Nature as a Design Guide," Bloomberg Businessweek (February 11, 2008): www.businessweek.com/innovate/content/feb2008/id20080211_074559.htm.
4. Jeanna Bryner, "Incredible fish armor could suit soldiers," MSNBC (July 28, 2008): www.msnbc.msn.com/id/25886406.
5. Kenneth R. Miller, "Life's Grand Design," Technology Review (February/March 1994), pp. 25–32.
6. Richard Dawkins, The Greatest Show on Earth: The Evidence for Evolution (Free Press, 2009), p. 354.
7. George Ayoub, "On the Design of the Vertebrate Retina," Origins & Design, vol. 17:1 (Winter 1996): www.arn.org/docs/odesign/od171/retina171.htm.
8. Kate McAlpine, "Evolution gave flawed eye better vision," New Scientist (May 6, 2010): www.newscientist.com/article/mg20627594.000-evolution-gave-flawed-eye-better-vision.html.
9. William Dembski & Sean McDowell, Understanding Intelligent Design: Everything You Need to Know in Plain Language (Harvest House, 2008), p. 53.
10. Julie Steenhuysen, "Eye spy: U.S. scientists develop eye-shaped camera," Reuters (August 6, 2008): www.reuters.com/article/2008/08/06/us-camera-eye-idUSN0647922920080806.

Sunday, December 11, 2011

fetal learning

Think of all the traditional sources concerning the effect of the behavior of a pregnant woman on her fetus. And then read this: http://edition.cnn.com/2011/12/11/opinion/paul-ted-talk/index.html?hpt=hp_c4




Editor's note: Annie Murphy Paul is the author of "Origins: How the Nine Months Before Birth Shape the Rest of Our Lives." She's now working on a book about learning, and writes a weekly column at Time.com called "Brilliant: The Science of Smart." TED is a nonprofit organization dedicated to "Ideas worth spreading," which it distributes through talks posted on its website.

(CNN) -- When does learning begin? As I explain in the talk I gave at TED, learning starts much earlier than many of us would have imagined: in the womb.
I was as surprised as anyone when I first encountered this notion. I'm a science writer, and my job is to trawl the murky depths of the academic journals, looking for something shiny and new -- a sparkling idea that catches my eye in the gloom.
Starting a few years ago, I began noticing a dazzling array of findings clustered around the prenatal period. These discoveries were generating considerable excitement among scientists, even as they overturned settled beliefs about when we start absorbing and responding to information from our environment. As a science reporter -- and as a mother -- I had to find out more.
This research, I discovered, is part of a burgeoning field known as "fetal origins," and it's turning pregnancy into something it has never been before: a scientific frontier. Obstetrics was once a sleepy medical specialty, and research on pregnancy a scientific backwater. Now the nine months of gestation are the focus of intense interest and excitement, the subject of an exploding number of journal articles, books, and conferences.
What it all adds up to is this: much of what a pregnant woman encounters in her daily life -- the air she breathes, the food and drink she consumes, the chemicals she's exposed to, even the emotions she feels -- is shared in some fashion with her fetus. These exposures make up a mix of influences as individual and idiosyncratic as the woman herself. The fetus treats these maternal contributions as information, as what I like to call biological postcards from the world outside.
By attending to such messages, the fetus learns the answers to questions critical to its survival: Will it be born into a world of abundance, or scarcity? Will it be safe and protected, or will it face constant dangers and threats? Will it live a long, fruitful life, or a short, harried one?
The pregnant woman's diet and stress level, in particular, provide important clues to prevailing conditions, a finger lifted to the wind. The resulting tuning and tweaking of the fetus's brain and other organs are part of what give humans their enormous flexibility, their ability to thrive in environments as varied as the snow-swept tundra in Siberia and the golden-grassed savanna in Africa.
The recognition that learning actually begins before birth leads us to a striking new conception of the fetus, the pregnant woman and the relationship between them.
The fetus, we now know, is not an inert blob, but an active and dynamic creature, responding and adapting as it readies itself for life in the particular world it will soon enter. The pregnant woman is neither a passive incubator nor a source of always-imminent harm to her fetus, but a powerful and often positive influence on her child even before it's born. And pregnancy is not a nine-month wait for the big event of birth, but a crucial period unto itself -- "a staging period for well-being and disease in later life," as one scientist puts it.
This crucial period has become a promising new target for prevention, raising hopes of conquering public health scourges like obesity and heart disease by intervening before birth. By "teaching" fetuses the appropriate lessons while they're still in utero, we could potentially end vicious cycles of poverty, infirmity and illness and initiate virtuous cycles of health, strength and stability.
So how can pregnant women communicate to their fetuses what they need to know?
Eat fish, scientists suggest, but make sure it's the low-mercury kind -- the omega-3 fatty acids in seafood are associated with higher verbal intelligence and better social skills in school-age children. Exercise: research suggests that fetuses benefit from their mothers' physical activity. Protect yourself from toxins and pollutants, which are linked to birth defects and lowered IQ.
Don't worry too much about stress: research shows that moderate stress during pregnancy is associated with accelerated infant brain development. Seek help if you think you might be suffering from depression: the babies of depressed women are more likely to be born early and at low birth weight, and may be more irritable and have more trouble sleeping. And -- my favorite advice -- eat chocolate: it's associated with a lower risk of the high blood pressure condition known as preeclampsia.
When we hold our babies for the first time, we imagine them clean and new, unmarked by life, when in fact they have already been shaped by the world, and by us. It's my privilege to share with the TED audience the good news about how we can teach our children well from the very beginning.