Sunday, March 17, 2013

Paul A. Rahe · March 14, 2013 at 1:43am
When I was an undergraduate at Cornell, then Yale, and a graduate student at Oxford, then Yale once again, the American university was an exceedingly lively place in which students were encouraged to explore a diversity of perspectives. The people in charge were, by and large, New Deal liberals -- moderate in manner, open to argument, and distinguished first and foremost by their curiosity. They welcomed into the ranks of their colleagues both those to their left and those to their right -- for they did not regard the university as an instrument for transforming the world. They supposed, instead, that it was a space within which one could spend one's time trying to understand that world. Intellectual sparring partners were, in their opinion, a great boon.
Most of the New Deal liberals that I once knew have passed on. They have been replaced in positions of authority by a generation for whom everything is political. Its motto is "the personal is political and the political is personal." What this means in practice is that the members of this generation tend to regard those at odds with them not as merely wrong and perhaps intriguingly, interestingly wrong but as simply immoral. In the face of an argument or observation that does not sit comfortably with what they believe, they resort to denunciation. The dissenter is labeled a racist or a fascist or something worse, and he is read out of the human race. In this environment, conservatives are no longer welcome. No advertisement states that they need not apply for jobs at certain institutions, but that is nearly always the case.
The key to understanding what has happened is that the new generation has made of the university a political instrument. Its purpose, as they see it, is to help them transform the larger world. Those not on board with the program are interlopers to be demonized and driven out, and the quality of the scholarly work and the teaching they do has no weight. One can write and be widely read. One can be invited to conferences and to give lectures. But, if a job comes open at a major university, one will not even be interviewed. Trust me. I know from long experience.

Every once in a while, however, something happens that shakes things up, and then one sees that things are, in fact, far worse than one ever imagined. Take, for example, the recent furor regarding Thomas Nagel's book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.
Nagel is a distinguished professor of philosophy with an impeccable pedigree. He was born in 1937; did his BA at Cornell and a B.Phil. at Corpus Christi College, Oxford; and completed his Ph.D. at Harvard in 1963 under the direction of John Rawls before going on to teach at Berkeley, Princeton, and New York University. He has in the intervening years published a host of books, all of them well-received, and he has won just about every honor reserved for members of his profession. On the 4th of July 2012, when he reached the ripe old age of 75, he was at the very top of the heap. But, thanks to his new book, he is rapidly becoming a pariah. The title is sufficient to explain why.
When Steven Pinker of Harvard turned to Twitter and denounced the book as “the shoddy reasoning of a once-great thinker,” Leon Wieseltier, a throwback to the old days of New Deal liberalism who has been the literary editor of The New Republic for decades, responded:
Here was a signal to the Darwinist dittoheads that a mob needed to be formed. In an earlier book Nagel had dared to complain of “Darwinist imperialism,” though in his scrupulous way he added that “there is really no reason to assume that the only alternative to an evolutionary explanation of everything is a religious one.” He is not, God forbid, a theist. But he went on to warn that “this may not be comforting enough” for the materialist establishment, which may find it impossible to tolerate also “any cosmic order of which mind is an irreducible and non-accidental part.” For the bargain-basement atheism of our day, it is not enough that there be no God: there must be only matter. Now Nagel’s new book fulfills his old warning. A mob is indeed forming, a mob of materialists, of free-thinking inquisitors. “In the present climate of a dominant scientific naturalism, heavily dependent on speculative Darwinian explanations of practically everything, and armed to the teeth against religion,” Nagel calmly writes, “... I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.” This cannot be allowed! And so the Sacred Congregation for the Doctrine of the Secular Faith sprang into action. “If there were a philosophical Vatican,” Simon Blackburn declared in the New Statesman, “the book would be a good candidate for going on to the Index.” . . .
I understand that nobody is going to burn Nagel’s book or ban it. These inquisitors are just more professors. But he is being denounced not merely for being wrong. He is being denounced also for being heretical. I thought heresy was heroic. I guess it is heroic only when it dissents from a doctrine with which I disagree. Actually, the defense of heresy has nothing to do with its content and everything to do with its right. Tolerance is not a refutation of heresy, but a retirement of the concept. I am not suggesting that there is anything outrageous about the criticism of Nagel’s theory of the explanatory limitations of Darwinism. He aimed to provoke and he provoked. His troublemaking book has sparked the most exciting disputation in many years, because no question is more primary than the question of whether materialism (which Nagel defines as “the view that only the physical world is irreducibly real”) is true or false.
In fact, the question raised by Nagel is a very old question. It accounts for the so-called Socratic turn. The Athenian Socrates began his philosophical career as a would-be scientist. But somewhere along the way he realized that the process physics of Thales, Anaximander, Anaximenes, and their successors could not make sense of the greatest mystery of all: the existence of the scientist. Put in simple terms, the reductionist science of the materialists is self-refuting -- for it eventuates in the reduction of the scientist himself to mere matter in motion. It eventuates in a theory that explains in materialist terms why the theory itself is being proposed and thereby subverts any claim it has to be true. Reduce the scientist to a biochemical reaction and you destroy the science.
Nagel has returned to this conundrum with a vengeance. In doing so, he has broken ranks, and he has been relegated to the class of apostates. It is a good thing that he is 75 and not 25. If he were just starting his career, this book would have ended it.
The most vigorous denunciations have come from the ranks of the scientists. Wieseltier reminds us, however, that Nagel's book is not a work of science. It is a work of philosophy. It is, he observes,
entirely typical of the scientistic tyranny in American intellectual life that scientists have been invited to do the work of philosophers. The problem of the limits of science is not a scientific problem. It is also pertinent to note that the history of science is a history of mistakes, and so the dogmatism of scientists is especially rich. A few of Nagel’s scientific critics have been respectful: in The New York Review of Books, H. Allen Orr has the decency to concede that it is not at all obvious how consciousness could have originated out of matter. But he then proceeds to an almost comic evasion. Finally, he says, we must suffice with “the mysteriousness of consciousness.” A Darwinian mysterium tremendum! He then cites Colin McGinn’s entirely unironic suggestion that our “cognitive limitations” may prevent us from grasping the evolution of mind from matter: “even if matter does give rise to mind, we might not be able to understand how.” Students of religion will recognize the dodge—it used to be called fideism, and atheists gleefully ridiculed it; and the expedient suspension of rational argument; and the double standard. What once vitiated godfulness now vindicates godlessness.
The thing that bothers Wieseltier the most, however, is another dimension of the attack on Nagel:
The most shabby aspect of the attack on Nagel’s heterodoxy has been its political motive. His book will be “an instrument of mischief,” it will “lend comfort (and sell a lot of copies) to the religious enemies of Darwinism,” and so on. It is bad for the left’s own culture war. Whose side is he on, anyway? Almost taunting the materialist left, which teaches skepticism but not self-skepticism, Nagel, who does not subscribe to intelligent design, describes some of its proponents as “iconoclasts” who “do not deserve the scorn with which they are commonly met.” I find this delicious, because it defies the prevailing regimentation of opinion and exemplifies a rebellious willingness to go wherever the reasoning mind leads. Cui bono? is not the first question that an intellectual should ask. The provenance of an idea reveals nothing about its veracity. “Accept the truth from whoever utters it,” said the rabbis, those poor benighted souls who had the misfortune to have lived so many centuries before Dennett and Dawkins.
I would like to think that Nagel's debunking of the scientistic orthodoxy now dominant in the academy would usher in a new age of sharp intellectual debate. But nothing that I see in the contemporary university suggests that such a dream is at all plausible. As long as the university is seen as a political instrument, there really are no grounds for hope.
After I posted my piece last night on The Perils of Intellectual Apostasy, a colleague drew my attention to the comments attracted by Leon Wieseltier's defense of Thomas Nagel's controversial recent book Mind and Cosmos.
These, too, deserve attention -- and none more so than the reply composed by Steven Pinker of Harvard University, which captures brilliantly the closed-mindedness that typifies the modern academy. Here is a juicy snippet:
The fact that Nagel’s wildly intemperate subtitle (that Darwinism is “almost certainly false”) will give ammunition to disturbing anti-science, anti-reason forces in the contemporary political power structure is, of course, not in itself a refutation of his argument. But surely it is not inappropriate of reviewers to bring this issue up. Nagel—and Wieseltier—have to know that there is a powerful and well-funded lobby in this country that is trying to discredit the entire institution of science as a close-minded, ideological propaganda front which is determined to promote a secular, materialistic, anti-Judaeo-Christian liberalism. This emboldens them to blow off the scientific consensus about man-made climate change, corrupt science education, suppress research on gun violence, and criminalize lifesaving medical research. For several years Nagel has been expressing casual opinions and overstating claims in ways that are guaranteed to credit and energize this lobby. While the substance of his claims has to be evaluated on its merits, it is completely legitimate to criticize the way he has expressed them. This is not about the culture war. This is about the future of the planet.
If Wieseltier had wanted to gather further evidence for the strength of political correctness in the academy and the politicization of science, he could not have found anything quite as compelling as this. When a distinguished scholar, such as Pinker, writes of "the scientific consensus about man-made climate change," you know he lives in a bubble where he talks only with those who agree with him. There never was such a consensus among scientists on this matter, and with every passing day there is less of one.

Monday, March 11, 2013

The Brain Activity Map
Hard cell
An ambitious project to map the brain is in the works. Possibly too ambitious
The Economist Mar 9th 2013 

NEWS of what protagonists hope will be America’s next big science project continues to dribble out. A leak to the New York Times, published on February 17th, let the cat out of the bag, with a report that Barack Obama’s administration is thinking of sponsoring what will be known as the Brain Activity Map. And on March 7th several of those protagonists published a manifesto for the project in Science.

The purpose of BAM is to change the scale at which the brain is understood. At the moment, neuroscience operates at two disconnected levels. The higher one, where the dimensions of features are measured in centimetres, has many techniques at its disposal, notably functional magnetic-resonance imaging (fMRI), which measures changes in tissues’ fuel consumption. This lets researchers see which bits of the brain are active in particular tasks—as long as those tasks can be performed by a person lying down inside a scanner.
At the other end of the scale, where features are measured in microns, lots of research has been done on how individual nerve cells work, how messages are sent from one to another, and how the connections between cells strengthen and weaken as memories are formed. Between these two, though, all is darkness. It is like trying to navigate America with an atlas that shows the states, the big cities and the main highways, and has a few street maps of local neighbourhoods, but displays nothing in between. BAM, if all goes well, will yield plans of entire towns and villages, and start to fill in the road network. It will also, to push the analogy to breaking point, let a user look at actual traffic flows on the roads in question, and even manipulate the road signs, in order to understand how particular communities work.

The mappers’ aim is to find out how nerve cells collaborate to process information. That means looking at the connections between hundreds, thousands and even millions of adjacent cells—and doing so, crucially, while those cells are still alive, rather than after they have been sliced and diced for microscopic examination.
[Keep in mind that these millions of neurons are out of some 100,000,000,000 neurons in the brain – a proportion of 1 to 100,000. So even if this overly ambitious project succeeds, it will be light-years from mapping the brain tout court. D.G.]
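[The scale gap in the note above can be checked with a few lines of arithmetic. This is only a back-of-the-envelope sketch: it assumes an optimistic BAM target of one million simultaneously recorded neurons and the commonly cited figure of 100 billion neurons in a human brain.

```python
# Back-of-the-envelope check: even recording a million neurons at once
# covers only a tiny fraction of the brain's roughly 100 billion neurons.
recorded = 1_000_000           # assumed BAM target: one million cells
total = 100_000_000_000        # commonly cited estimate for a human brain

fraction = recorded / total
ratio = total // recorded

print(f"Fraction recorded: {fraction:.0e}")        # prints: Fraction recorded: 1e-05
print(f"One neuron mapped per {ratio:,} in the brain")  # prints: One neuron mapped per 100,000 in the brain
```

That is the 1-to-100,000 proportion cited above. D.G.]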

This will require a new set of tools. And the guts of the BAM proposal are that the American taxpayer should provide those tools. It is thus no coincidence that the lead author of the paper, Paul Alivisatos, the director of the Lawrence Berkeley National Laboratory, is a materials scientist, not a neuroscientist. Dr Alivisatos and his ten colleagues would like their new tools to be able to record, simultaneously, the activity of millions of nerve cells. Then, having done the recording, they would like a second toolkit that lets them manipulate each cell at will, to see what effect that has on the rest of the circuit. Finally, to handle the unprecedented amounts of data that the first and second steps will generate, they would like a new set of computing hardware and software.
A modest proposal, then. And one which is inducing polite scepticism from many neuroscientists who are not part of the charmed circle, and who fear their subject is about to be sacrificed to a juggernaut.
Such scepticism is reasonable. The third part of the project, the computer side, should be doable. That is just a question of pushing harder in a direction things are, in any case, going. How you would do the first two, though, is anybody’s guess—and Dr Alivisatos and his colleagues are pretty sketchy about the details.

Thinking big, thinking small

What ideas there are draw heavily on the nascent field of nanotechnology. This is Dr Alivisatos’s particular province, and also that of the Kavli Foundation, which exists “to advance fundamental research in the fields of astrophysics, nanoscience, neuroscience and theoretical physics”. A brain map would push two of those buttons, which accounts for the fact that five of the manifesto’s authors work for institutes sponsored by this foundation. But any successes that nanotechnology has enjoyed so far have been small beer compared with the devices that would be needed to interrogate nerve cells, record the results and transmit them back to base, let alone tell those nerve cells what to do.

The protagonists would, they say, build up slowly, using humbler creatures than human beings as experimental subjects to start with. This was the approach taken by the Human Genome Project, which began with bacteria and yeast, progressed to worms, flies and mice, and only then tackled people. But the analogy is not quite a fair one. When the genome project started, genomicists already had a basic understanding of how to go about it. That understanding was vastly refined and improved by the application of several billion dollars. But it was there from the beginning.

Going from existing methods of recording and manipulating cell activity, which rely on large electrodes, often connected to the outside world by physical wires, to the massively parallel, wireless system envisaged by Dr Alivisatos, is a different proposition. It may be possible. But it requires a leap of faith. The next few weeks will reveal whether that faith is shared by a cash-strapped president.

Design Can Be Suboptimal on Purpose

Evolutionists wrongly argue that ID can't be true because some designs are not optimal. But there might be a perfectly intelligent reason for some suboptimal designs in nature.
"When It Comes to Genetic Code, Researchers Prove Optimum Isn't Always Best," according tothe news from Texas A&M University. For example, "Imagine two steel springs identical in look and composition but that perform differently because each was tempered at a different rate." Engineers might want the springs to perform differently, and temper them that way for a reason.
Turning to the living cell, the researchers considered how variations in the coding of the biological clock can create similar timing differences. Their finding is related to our comments the other day on the "snooze button" on the biological clock. They were part of the team that found out how synonymous codons allow for timing differences that fine-tune circadian rhythms. Applying their analogy about tempered springs, we learn:
The group's research indicates that the protein in the fungal genus Neurospora they studied, frequency, performs better when the genetic code specifying it has non-optimal codon usage, as is normally found. However, when the genetic code is deliberately altered so that codon usage is optimized, clock function is lost. The reason for this is that non-optimal codon usage slows translation of the genetic code into protein, allotting the frequency protein the necessary time to achieve its optimal protein structure.
The team's results also demonstrate that genetic codons do more than simply determine the amino acid sequence of a protein as previously thought: They also affect how much protein can be made as well as the functional quality of that protein. (Emphasis added.)
So what at first appeared sloppy or suboptimal actually has a purpose. "Less is more" sometimes. Even though an alternate codon specifies the same amino acid, it can affect the function of the resulting protein by changing the speed of translation.
Also noteworthy about the news from Texas A&M is its elevated praise of design in the biological clock:
"Living organisms' inner clocks are like Swiss watches with precisely manufactured spring mechanisms," said Matthew Sachs, a professor in the Texas A&M Department of Biology. "For example, if you fast-temper a critical spring, the watch may be unable to keep time, as opposed to slow-tempering it. It's not just about the composition of the components, such as which alloy is used. It's about the manner in which the components are made. Our research says the genetic code is important for determining both composition and fabrication rate for a central component of the circadian clock, and that the fabrication rate also is critical. And that's essentially a discovery."
Swiss watch, you say? That sounds almost like an echo of Paley. But Paley's approach was natural theology. This approach is intelligent design: finding complex specified information, functioning with a purpose, that implies not necessarily a deity, but an intelligent cause that can be rightly inferred scientifically from our uniform experience with what intelligence routinely does.

Monday, March 4, 2013

The February 2013 issue of Scientific American contains the following iconoclastic articles:

New Simulations Question the Gulf Stream’s Role in Tempering Europe’s Winters

It's the flow of warm tropical water across the Atlantic that keeps European winters mild, right? Maybe not
For a century, schoolchildren have been taught that the massive ocean current known as the Gulf Stream carries warm water from the tropical Atlantic Ocean to northwestern Europe. As it arrives, the water heats the air above it. That air moves inland, making winter days in Europe milder than they are in the northeastern U.S.
It might be time to retire that tidy story. The explosion of interest in global climate has prompted scientists to closely study the climatic effects of the Gulf Stream only to discover that those effects are not as clear as conventional wisdom might suggest. Based on modeling work and ocean data, new explanations have emerged for why winter in northern Europe is generally less bitter than winter at the same latitudes in the northeastern U.S. and Canada—and the models differ on the Gulf Stream's role. One of the explanations also provides insight into why winter in the U.S. Northwest is warmer than it is across the Pacific in eastern Russia.
At the same time, recent studies have been casting doubt on the popular conjecture made a few years ago that melting of Arctic ice could “shut down” the Gulf Stream, thereby wreaking havoc with Europe's weather. Yet the studies do suggest that climate change could at least affect the strength of the Gulf Stream, which could lessen the impact of global warming on northern Europe.
...later on in the article they say the Gulf Stream theory is refuted...

A Single Brain Cell Stores a Single Concept [Preview]

Each concept—each person or thing in our everyday experience—may have a set of corresponding neurons assigned to it. This bears on the two theories of memory—storage distributed across millions of cells spread widely through the brain, versus narrow storage in a small set of dedicated cells—that are still in competition after several decades. New discoveries lend some support to the latter, but memory storage in the brain is still not understood. Show this to those who think "science" understands the brain.

Is the Free-Radical Theory of Aging Dead? [Preview]

The hallowed notion that oxidative damage causes aging and that vitamins might preserve our youth is now in doubt
David Gems's life was turned upside down in 2006 by a group of worms that kept on living when they were supposed to die. As assistant director of the Institute of Healthy Aging at University College London, Gems regularly runs experiments on Caenorhabditis elegans, a roundworm that is often used to study the biology of aging. In this case, he was testing the idea that a buildup of cellular damage caused by oxidation—technically, the chemical removal of electrons from a molecule by highly reactive compounds, such as free radicals—is the main mechanism behind aging. According to this theory, rampant oxidation mangles more and more lipids, proteins, snippets of DNA and other key components of cells over time, eventually compromising tissues and organs and thus the functioning of the body as a whole.

Gems genetically engineered the roundworms so they no longer produced certain enzymes that act as naturally occurring antioxidants by deactivating free radicals. Sure enough, in the absence of the antioxidants, levels of free radicals in the worms skyrocketed and triggered potentially damaging oxidative reactions throughout the worms' bodies.
Show this one to those who think taking anti-oxidants helps prevent aging. In fact, the whole understanding of the process of aging is now in doubt.

Will Scientists Ever Be Able to Piece Together Humanity's Early Origins? [Preview]

New fossil discoveries complicate the already devilish task of identifying our most ancient progenitors
By Katherine Harmon 

From a distance, you probably would have assumed her to be human. Although she stood only about a meter tall, with long arms and a small head, she walked, if perhaps slightly inelegantly, upright on two legs, as we, alone among living mammals, do. This familiar yet strange individual is Lucy, a member of the species Australopithecus afarensis, who lived some 3.2 million years ago. She is one of the oldest creatures presumed to have strode on the evolutionary path leading to our species, Homo sapiens.
When Lucy was uncovered in 1974, evidence of bipedal locomotion virtually guaranteed her kind a spot in the human family tree. And although scientists had an inkling that other branches of humans coexisted more recently alongside our own, early human evolution appeared to be a simple affair, with Lucy and the other ancient bipeds that eventually came to light belonging to the same lone lineage. Thus, the discoveries seemed to uphold the notion of human evolution as a unilinear “march of progress” from a knuckle-walking chimplike ape to our striding, upright form—a schema that has dominated paleoanthropology for the past century. Yet as researchers dig back further in time, our origins are turning out to be a lot more complicated than that iconic image would suggest.

...indeed, in the body of the article it is suggested that it is unrealistic to expect that the lineage of Homo sapiens will ever be established.

And finally:

Human and Grasshopper Ears Are Remarkably Similar

Katydid ear structures resemble those of humans
In a striking example of how two unrelated creatures can evolve similar traits, researchers have discovered that a rain-forest grasshopper has ears remarkably like those of humans and other mammals—even though its hearing organ is tucked into the crook of its front legs.
The insect, a yellow-orange-faced katydid (Copiphora gorgonensis) from Gorgona Island in Colombia, has ear structures that are similar to the human eardrum and cochlea. As sound waves approach the katydid's legs, they rock a thin membrane akin to a human eardrum. This membrane translates larger movements from air-pressure waves to smaller, more powerful motions in another structure called the cuticle plate. The plate, in turn, creates ripples in a fluid-filled chamber akin to an unfurled human cochlea. Inside this chamber, sensory cells are arranged like a keyboard from high- to low-frequency sensitivity, much like in humans.

C. gorgonensis's exquisitely evolved ear may help it avoid predators such as bats, says sensory biologist Fernando Montealegre-Z, now at the University of Lincoln in England and lead author of the study, which appeared in Science. The finding “is yet another remarkable demonstration of convergent evolution,” says Ronald R. Hoy, a professor of neurobiology at Cornell University, who was not involved in the work.
The efficiency of this minuscule system could inspire engineers to create microsensors based on the katydid’s ear design—for example, for use in hearing aids. Such sensors could be less fragile, smaller and more sensitive, potentially spurring applications we have not thought of yet.

...Of course the credit is given to evolution. But one would like to know: are there other related insects that also have "ears" similar to humans? And if not, what is the pathway of gradual change that led to this unique structure in insects?

The lesson for cautious thinkers: expect what the textbooks say today to be largely rejected tomorrow [or next year, or next decade...].