Wednesday, December 29, 2010

Scientific "documentation"

The New Yorker
ANNALS OF SCIENCE
THE TRUTH WEARS OFF
Is there something wrong with the scientific method?
by Jonah Lehrer
DECEMBER 13, 2010

Many results that are rigorously proved and accepted start shrinking in later studies.

On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties. The drugs, sold under brand names such as Abilify, Seroquel, and Zyprexa, had been tested on schizophrenics in several large clinical trials, all of which had demonstrated a dramatic decrease in the subjects’ psychiatric symptoms. As a result, second-generation antipsychotics had become one of the fastest-growing and most profitable pharmaceutical classes. By 2001, Eli Lilly’s Zyprexa was generating more revenue than Prozac. It remains the company’s top-selling drug.
But the data presented at the Brussels meeting made it clear that something strange was happening: the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
Jonathan Schooler was a young graduate student at the University of Washington in the nineteen-eighties when he discovered a surprising new fact about language and memory. At the time, it was widely believed that the act of describing our memories improved them. But, in a series of clever experiments, Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
The study turned him into an academic star. Since its initial publication, in 1990, it has been cited more than four hundred times. Before long, Schooler had extended the model to a variety of other tasks, such as remembering the taste of a wine, identifying the best strawberry jam, and solving difficult creative puzzles. In each instance, asking people to put their perceptions into words led to dramatic decreases in performance.
But while Schooler was publishing these results in highly reputable journals, a secret worry gnawed at him: it was proving difficult to replicate his earlier findings. “I’d often still see an effect, but the effect just wouldn’t be as strong,” he told me. “It was as if verbal overshadowing, my big new idea, was getting weaker.” At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
Schooler tried to put the problem out of his mind; his colleagues assured him that such things happened all the time. Over the next few years, he found new research questions, got married and had kids. But his replication problem kept on getting worse. His first attempt at replicating the 1990 study, in 1995, resulted in an effect that was thirty per cent smaller. The next year, the size of the effect shrank another thirty per cent. When other labs repeated Schooler’s experiments, they got a similar spread of data, with a distinct downward trend. “This was profoundly frustrating,” he says. “It was as if nature gave me this great result and then tried to take it back.” In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
Schooler is now a tenured professor at the University of California at Santa Barbara. He has curly black hair, pale-green eyes, and the relaxed demeanor of someone who lives five minutes away from his favorite beach. When he speaks, he tends to get distracted by his own digressions. He might begin with a point about memory, which reminds him of a favorite William James quote, which inspires a long soliloquy on the importance of introspection. Before long, we’re looking at pictures from Burning Man on his iPhone, which leads us back to the fragile nature of memory.
Although verbal overshadowing remains a widely accepted theory—it’s often invoked in the context of eyewitness testimony, for instance—Schooler is still a little peeved at the cosmos. “I know I should just move on already,” he says. “I really should stop talking about this. But I can’t.” That’s because he is convinced that he has stumbled on a serious problem, one that afflicts many of the most exciting new ideas in psychology.
One of the first demonstrations of this mysterious phenomenon came in the early nineteen-thirties. Joseph Banks Rhine, a psychologist at Duke, had developed an interest in the possibility of extrasensory perception, or E.S.P. Rhine devised an experiment featuring Zener cards, a special deck of twenty-five cards printed with one of five different symbols: a card was drawn from the deck and the subject was asked to guess the symbol. Most of Rhine’s subjects guessed about twenty per cent of the cards correctly, as you’d expect, but an undergraduate named Adam Linzmayer averaged nearly fifty per cent during his initial sessions, and pulled off several uncanny streaks, such as guessing nine cards in a row. The odds of this happening by chance are about one in two million. Linzmayer did it three times.
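The "one in two million" figure is just the binomial arithmetic of guessing a five-symbol card nine times running. A quick back-of-the-envelope check (my own sketch, not from the article):

```python
# Back-of-the-envelope check of the odds quoted above (not from the article).
# Each Zener card shows one of five symbols, so a blind guess succeeds with
# probability 1/5. Nine correct guesses in a row by chance alone:
p_single = 1 / 5
p_streak = p_single ** 9
print(f"P(nine in a row) = {p_streak:.2e}")            # about 5.1e-07
print(f"That is roughly 1 in {round(1 / p_streak):,}") # 1 in 1,953,125
```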
Rhine documented these stunning results in his notebook and prepared several papers for publication. But then, just as he began to believe in the possibility of extrasensory perception, the student lost his spooky talent. Between 1931 and 1933, Linzmayer guessed at the identity of another several thousand cards, but his success rate was now barely above chance. Rhine was forced to conclude that the student’s “extra-sensory perception ability has gone through a marked decline.” And Linzmayer wasn’t the only subject to experience such a drop-off: in nearly every case in which Rhine and others documented E.S.P. the effect dramatically diminished over time. Rhine called this trend the “decline effect.”
Schooler was fascinated by Rhine’s experimental struggles. Here was a scientist who had repeatedly documented the decline of his data; he seemed to have a talent for finding results that fell apart. In 2004, Schooler embarked on an ironic imitation of Rhine’s research: he tried to replicate this failure to replicate. In homage to Rhine’s interests, he decided to test for a parapsychological phenomenon known as precognition. The experiment itself was straightforward: he flashed a set of images to a subject and asked him or her to identify each one. Most of the time, the response was negative—the images were displayed too quickly to register. Then Schooler randomly selected half of the images to be shown again. What he wanted to know was whether the images that got a second showing were more likely to have been identified the first time around. Could subsequent exposure have somehow influenced the initial results? Could the effect become the cause?
The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”
The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time! Hell, it’s happened to me multiple times.” And this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
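A toy simulation makes the regression-to-the-mean explanation concrete: if you follow up only on studies whose first result happened to come out large, the follow-ups will, on average, fall back toward the true effect even though nothing about the phenomenon has changed. This is a minimal sketch with arbitrary numbers, not a model of any study discussed here.

```python
# Minimal sketch of regression to the mean under selective follow-up.
# The true effect, noise level, and cutoff below are illustrative assumptions.
import random

random.seed(0)
TRUE_EFFECT = 0.2      # the real underlying effect size
NOISE = 0.3            # sampling noise of a single small study
N_STUDIES = 10_000

def run_study():
    """One noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

initial = [run_study() for _ in range(N_STUDIES)]
exciting = [e for e in initial if e > 0.5]        # flukes well above the truth
replications = [run_study() for _ in exciting]    # fresh, unselected data

print(f"mean of exciting initial results: {sum(exciting) / len(exciting):.2f}")
print(f"mean of their replications:       {sum(replications) / len(replications):.2f}")
# The replications cluster near TRUE_EFFECT: the "decline" is the selection
# washing out, not the phenomenon weakening.
```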
In 1991, the Danish zoologist Anders Møller, at Uppsala University, in Sweden, made a remarkable discovery about sex, barn swallows, and symmetry. It had long been known that the asymmetrical appearance of a creature was directly linked to the amount of mutation in its genome, so that more mutations led to more “fluctuating asymmetry.” (An easy way to measure asymmetry in humans is to compare the length of the fingers on each hand.) What Møller discovered is that female barn swallows were far more likely to mate with male birds that had long, symmetrical feathers. This suggested that the picky females were using symmetry as a proxy for the quality of male genes. Møller’s paper, which was published in Nature, set off a frenzy of research. Here was an easily measured, widely applicable indicator of genetic quality, and females could be shown to gravitate toward it. Aesthetics was really about genetics.
Then the theory started to fall apart. In 1994, there were fourteen published tests of symmetry and sexual selection, and only eight found a correlation. In 1995, there were eight papers on the subject, and only four got a positive result. By 1998, when there were twelve additional investigations of fluctuating asymmetry, only a third of them confirmed the theory. Worse still, even the studies that yielded some positive result showed a steadily declining effect size. Between 1992 and 1997, the average effect size shrank by eighty per cent.
And it’s not just fluctuating asymmetry. In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
What happened? Leigh Simmons, a biologist at the University of Western Australia, suggested one explanation when he told me about his initial enthusiasm for the theory: “I was really excited by fluctuating asymmetry. The early studies made the effect look very robust.” He decided to conduct a few experiments of his own, investigating symmetry in male horned beetles. “Unfortunately, I couldn’t find the effect,” he said. “But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.” For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
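The arithmetic behind Sterling's suspicion is easy to sketch: when journals print only results that clear the five-per-cent bar, the published estimates of a weak effect are systematically inflated, and later, larger studies look like a decline. The numbers below are assumptions chosen for illustration, not figures from the article.

```python
# Sketch of how publishing only p < 0.05 results exaggerates effect sizes.
# The true effect and standard error are illustrative assumptions.
import random

random.seed(1)
TRUE_EFFECT = 0.1      # a small real effect
SE = 0.15              # standard error of each study's estimate
N_STUDIES = 20_000

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Two-sided test at the 5% level: "significant" means the estimate lies more
# than about 1.96 standard errors from zero.
published = [e for e in estimates if abs(e) > 1.96 * SE]

print(f"true effect:                  {TRUE_EFFECT}")
print(f"share of studies 'published': {len(published) / N_STUDIES:.1%}")
print(f"mean published effect:        {sum(published) / len(published):.2f}")
# The published mean lands well above the true effect, so honest follow-up
# studies will appear to show the effect shrinking.
```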
While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts. Richard Palmer, a biologist at the University of Alberta, who has studied the problems surrounding fluctuating asymmetry, suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
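Both the ideal funnel shape and the skew Palmer found can be mimicked with simulated data: plot each study's effect estimate against its sample size, then drop the small studies that came out the "wrong" way. The sketch below uses invented numbers, and matplotlib only for the picture; it illustrates the statistical pattern rather than reconstructing Palmer's data.

```python
# Sketch of a funnel graph and the asymmetry produced by selective reporting.
# Effect size, sample sizes, and the reporting rule are invented for illustration.
import math
import random
import matplotlib.pyplot as plt

random.seed(2)
TRUE_EFFECT = 0.0                       # suppose there is no real effect
studies = []
for _ in range(300):
    n = random.randint(10, 400)         # sample size varies across studies
    se = 1 / math.sqrt(n)               # bigger studies have smaller error
    studies.append((n, random.gauss(TRUE_EFFECT, se)))

# Selective reporting: small studies with unexciting (non-positive) results
# tend to stay in the file drawer.
reported = [(n, e) for n, e in studies if n > 100 or e > 0]

plt.scatter([e for _, e in reported], [n for n, _ in reported], s=10)
plt.xlabel("reported effect size")
plt.ylabel("sample size")
plt.title("Funnel plot sketch: small studies skew positive")
plt.show()
```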
Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
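One rough way to see why the Asian trial record looks implausible: ask how likely forty-seven positive results out of forty-seven would be if each trial had the same fifty-six-per-cent chance of coming out positive that the Western trials showed. This is a plausibility check of my own, not a calculation from the article, and it ignores differences in trial design.

```python
# Rough plausibility check (my own, not from the article): if each trial had
# a 56% chance of a positive result, how likely is 47 positives out of 47?
p_positive = 0.56
n_trials = 47
p_all_positive = p_positive ** n_trials
print(f"P(all {n_trials} trials positive) = {p_all_positive:.1e}")  # about 1.5e-12
```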
John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.” In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.) “I’ve learned the hard way to be exceedingly careful,” Schooler says. “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
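Schooler doesn't spell out what such a registry entry would contain, but the record he describes, with sample size, hypothesis, and evidence threshold declared before any data are collected, might look something like the hypothetical sketch below. The field names and values are my own invention, not part of his proposal.

```python
# Hypothetical shape of a pre-registration record of the kind Schooler
# describes; every field name and value here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PreRegistration:
    hypothesis: str                  # what exactly is being tested
    planned_subjects: int            # sample size fixed in advance
    primary_measure: str             # the one outcome that counts
    significance_threshold: float    # evidence standard, declared up front
    results: list = field(default_factory=list)  # all outcomes, not just the hits

entry = PreRegistration(
    hypothesis="Describing a face reduces later recognition accuracy",
    planned_subjects=120,
    primary_measure="recognition accuracy (percent correct)",
    significance_threshold=0.05,
)
print(entry)
```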
Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
In the late nineteen-nineties, John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
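One way to read Crabbe's result is that every lab adds its own hidden offset on top of whatever real effect exists, and that offset can dwarf the signal no matter how carefully the protocol is standardized. The sketch below illustrates that statistical point with made-up numbers; it is not a model of the actual mouse data.

```python
# Illustration with made-up numbers: a hidden per-lab offset can swamp a real
# effect even when the within-lab procedure is identical everywhere.
import random

random.seed(3)
TRUE_DRUG_EFFECT = 600   # extra centimetres of movement, on average
LAB_SPREAD = 1500        # hidden between-lab variation (the unknown factors)
MOUSE_SPREAD = 200       # ordinary mouse-to-mouse variation

def lab_mean(n_mice=20):
    lab_offset = random.gauss(0, LAB_SPREAD)   # site-specific quirk, unmeasured
    mice = [TRUE_DRUG_EFFECT + lab_offset + random.gauss(0, MOUSE_SPREAD)
            for _ in range(n_mice)]
    return sum(mice) / len(mice)

for site in ("Lab A", "Lab B", "Lab C"):
    print(f"{site}: mean extra movement ≈ {lab_mean():.0f} cm")
# Three "identical" experiments can land hundreds or thousands of centimetres
# apart, because the lab offset never averages out within a single lab.
```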
The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦

Monday, December 27, 2010

smallest to largest

Sent by a friend - worth trying -

http://primaxstudio.com/stuff/scale_of_universe/


Click play, then scroll to the left to get smaller, all the way down to the size of Strings, and scroll right to get all the way to the estimated size of the Universe.

Tuesday, December 21, 2010

Richard Rorty on academic education

See here:

http://en.wikipedia.org/wiki/Richard_Rorty

On fundamentalist religion, Rorty said:

“It seems to me that the regulative idea that we heirs of the Enlightenment, we Socratists, most frequently use to criticize the conduct of various conversational partners is that of ‘needing education in order to outgrow their primitive fear, hatreds, and superstitions’ ... It is a concept which I, like most Americans who teach humanities or social science in colleges and universities, invoke when we try to arrange things so that students who enter as bigoted, homophobic, religious fundamentalists will leave college with views more like our own ... The fundamentalist parents of our fundamentalist students think that the entire ‘American liberal establishment’ is engaged in a conspiracy. The parents have a point. Their point is that we liberal teachers no more feel in a symmetrical communication situation when we talk with bigots than do kindergarten teachers talking with their students ... When we American college teachers encounter religious fundamentalists, we do not consider the possibility of reformulating our own practices of justification so as to give more weight to the authority of the Christian scriptures. Instead, we do our best to convince these students of the benefits of secularization. We assign first-person accounts of growing up homosexual to our homophobic students for the same reasons that German schoolteachers in the postwar period assigned The Diary of Anne Frank... You have to be educated in order to be ... a participant in our conversation ... So we are going to go right on trying to discredit you in the eyes of your children, trying to strip your fundamentalist religious community of dignity, trying to make your views seem silly rather than discussable. We are not so inclusivist as to tolerate intolerance such as yours ... I don’t see anything herrschaftsfrei [domination free] about my handling of my fundamentalist students. Rather, I think those students are lucky to find themselves under the benevolent Herrschaft [domination] of people like me, and to have escaped the grip of their frightening, vicious, dangerous parents ... I am just as provincial and contextualist as the Nazi teachers who made their students read Der Stürmer; the only difference is that I serve a better cause.”

‘Universality and Truth,’ in Robert B. Brandom (ed.), Rorty and his Critics (Oxford: Blackwell, 2000), pp. 21-2.

Sunday, December 19, 2010

tripling the number of stars

Discovery Triples Total Number of Stars in Universe

[from here: http://keckobservatory.org/news/print_without/discovery_triples_total_number_of_stars_in_universe/]

Kamuela, HI Dec. 1, 2010 - Astronomers have discovered that small, dim stars known as red dwarfs are much more prolific than previously thought—so much so that the total number of stars in the universe is likely three times bigger than realized.

Because red dwarfs are relatively small and dim compared to stars like our Sun, astronomers hadn’t been able to detect them in galaxies other than our own Milky Way and its nearest neighbors before now. As such, they did not know how much of the total stellar population of the universe is made up of red dwarfs.

Now astronomers have used powerful instruments on the W. M. Keck Observatory in Hawaii to detect the faint signature of red dwarfs in eight massive, relatively nearby galaxies called elliptical galaxies, which are located between about 50 million and 300 million light years away. They discovered that the red dwarfs, which are only between 10 and 20 percent as massive as the Sun, were much more bountiful than expected.

“This important study, which uses information at the red end of the optical spectrum, was aided by advances in detector technology that have been implemented at Keck,” said Keck Observatory Director Taft Armandroff.

“No one knew how many of these stars there were,” said Pieter van Dokkum, a Yale University astronomer who led the research, which is described in Nature’s Dec. 1 Advance Online Publication. “Different theoretical models predicted a wide range of possibilities, so this answers a longstanding question about just how abundant these stars are.”

The team discovered that there are about 20 times more red dwarfs in elliptical galaxies than in the Milky Way, said Charlie Conroy of the Harvard-Smithsonian Center for Astrophysics, who was also involved in the research.

“We usually assume other galaxies look like our own. But this suggests other conditions are possible in other galaxies,” Conroy said. “So this discovery could have a major impact on our understanding of galaxy formation and evolution.”

For instance, Conroy said, galaxies might contain less dark matter—a mysterious substance that has mass but cannot be directly observed—than previous measurements of their masses might have indicated. Instead, the abundant red dwarfs could contribute more mass than realized.

In addition to boosting the total number of stars in the universe, the discovery also increases the number of planets orbiting those stars, which in turn elevates the number of planets that might harbor life, van Dokkum said. In fact, a recently discovered exoplanet that astronomers believe could potentially support life orbits a red dwarf star, called Gliese 581.

“There are possibly trillions of Earths orbiting these stars,” van Dokkum said, adding that the red dwarfs they discovered, which are typically more than 10 billion years old, have been around long enough for complex life to evolve. “It’s one reason why people are interested in this type of star.”

The W. M. Keck Observatory operates two 10-meter optical/infrared telescopes on the summit of Mauna Kea. The twin telescopes feature a suite of advanced instrumentation including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectroscopy and a world-leading laser guide star adaptive optics system. The Observatory is a private 501(c)(3) organization and a scientific partnership of the California Institute of Technology, the University of California and NASA.

Tuesday, December 14, 2010

the new atheism

Check this out:
http://www.literaryreview.co.uk/gray_12_10.html

It's a review of "Talking to the Enemy: Violent Extremism, Sacred Values, and What It Means to Be Human", a book about suicide bombers.

It discusses Richard Dawkins's analysis of suicide bombing. As usual, Dawkins is wrong. From the review:

Something like Hobbes's analysis (though without his refreshing pessimism or his wonderfully terse prose style) has resurfaced today in regard to suicide bombing. If you read evangelical atheists like Richard Dawkins, you will be told that suicide bombers are driven by their irrational religious beliefs. 'Suicide bombers do what they do', writes Dawkins in a passage cited by Scott Atran, 'because they really believe what they were taught in their religious schools; that duty to God exceeds all other priorities, and that martyrdom in his service will be rewarded in the gardens of Paradise.' What is striking about claims of this kind is that they are rarely accompanied by evidence. They are asserted as self-evident truths - in other words, articles of faith. In fact, as Atran writes, religion is not particularly prominent in the formation of jihadi groups:

Though there are few similarities in personality profiles, some general demographic and social tendencies exist: in age (usually early twenties), where they grew up (neighbourhood is often key), in schooling (mostly non-religious and often science-oriented), in socio-economic status (middle-class and married, though increasingly marginalized), in family relationships (friends tend to marry one another's sisters and cousins). If you want to track a group, look to where one of its members eats or hangs out, in the neighbourhood or on the Internet, and you'll likely find the other members.

Unlike Dawkins's assertions, Atran's account of violent jihadism is based on extensive empirical research. An anthropologist who has spent many years studying and talking to terrorists in Indonesia, Afghanistan, Gaza and Europe, Atran believes that what motivates them to go willingly to their deaths is not so much the cause they espouse - rationally or otherwise - but the relationships they form with each other. Terrorists kill and die 'for their group, whose cause makes their imagined family of genetic strangers - their brotherhood, fatherland, motherland, homeland, totem or tribe'. In this terrorists are no different from other human beings. They may justify their actions by reference to religion, but many do not. The techniques of suicide bombing were first developed by the Tamil Tigers, a Marxist-Leninist group hostile to all religions, while suicide bombers in Lebanon in the 1980s included many secular leftists. The Japanese Aum cult, which recruited biologists and geneticists and experimented with anthrax as a weapon of mass destruction, cobbled together its grotesque system of beliefs from many sources, including science fiction. Terrorists have held to many views of the world, including some - like Marxism-Leninism - that claim to be grounded in 'scientific atheism'. If religion is a factor in terrorism, it is only one among many.

There will be some who question Atran's analysis of suicide bombing. Clearly the practice has a rational-strategic aspect along with the emotional and social dimensions on which he focuses. Suicide bombing is highly cost-effective compared with other types of terrorist assault; when volunteers are plentiful life is cheap, and a successful suicide bomber cannot be captured and interrogated. But Talking to the Enemy is about far more than violent extremism. One of the most penetrating works of social investigation to appear in many years, it offers a fresh and compelling perspective on human conflict. No one who reads and digests what Atran has to say will be able to take seriously the faith-based claims of the 'new atheists'. As he notes, some of his fellow scientists may 'believe that science is better able than religion to constitute or justify a moral system that regulates selfishness and makes social life possible ... [But] there doesn't seem to be the slightest bit of historical or experimental evidence to support such faith in science'. The picture of human beings that emerges from genuine inquiry is far richer than anything that can be gleaned from these myopic rationalists.

Thursday, November 18, 2010

religion and brand loyalty

FuturePundit
Future technological trends and their likely effects on human society, politics and evolution.

November 15, 2010
Lost Religion Leads To More Brand Loyalty?
Do the non-religious have greater brand loyalty?

Prof. Ron Shachar of Tel Aviv University's Leon Recanati Graduate School of Business Administration says that a consumer's religiosity has a large impact on his likelihood for choosing particular brands. Consumers who are deeply religious are less likely to display an explicit preference for a particular brand, while more secular populations are more prone to define their self-worth through loyalty to corporate brands instead of religious denominations.

This research, in collaboration with Duke University and New York University scientists, recently appeared in the journal Marketing Science.

I am reminded of a quote (comes in variations) attributed to G.K. Chesterton: "When a Man stops believing in God he doesn't then believe in nothing, he believes anything." The real origin of the quote might be Emile Cammaerts writing about Chesterton:

The first effect of not believing in God is to believe in anything.

Okay, without taking a side in the God Stuff debate, can we think rationally about what is going on here? (the answer to that question might depend on our specific brand loyalties - not sure if my fairly shallow loyalties to Google, Amazon, or Norelco will serve as an obstacle). My take: I suspect we all have a finite capacity for loyalty or feeling of being allied or bonded. Take away a supernatural belief and reverence and basically some unused capacity for loyalty (need for loyalty?) becomes available for hijacking by corporate marketers. Is this an improvement? It depends on the specific beliefs and loyalties. For example, I'd rather someone have loyalty to a brand of running shoes or cell phone than loyalty to a deity who he thinks wants him to blow up tube stations. But loyalties to cigarette brands or sugary soda brands are definitely harmful to health.

Think religious thoughts before shopping and your purchasing choices will be less driven by brand loyalties.

Researchers discovered that those participants who wrote about their religion prior to the shopping experience were less likely to pick national brands when it came to products linked to appearance or self-expression — specifically, products which reflected status, such as fashion accessories and items of clothing. For people who weren't deeply religious, corporate logos often took the place of religious symbols like a crucifix or Star of David, providing feelings of self-worth and well-being. According to Prof. Shachar, two additional lab experiments by the research team have demonstrated that, much as with religiosity, consumers use brands to express their sense of self-worth.

Ever noticed how some ex-religious believers are incredibly bitter toward their former religion? This seems most visible with some ex-Catholics. Well, since brand loyalty seems to develop more strongly when religious loyalty is absent, loss of brand loyalty makes people extremely emotional about their former loyalty.

It's just like a bad breakup: People get emotional when they end a relationship with a brand. A new study in the Journal of Consumer Research examines what happens when people turn their backs on the brands they once loved.

"Customers who were once enthusiastic about a brand may represent a headache for the associated firm beyond the lost revenue of foregone sales because they sometimes become committed to harming the firm," write authors Allison R. Johnson (University of Western Ontario), Maggie Matear (Queens University, Kingston, Ontario), and Matthew Thomson (University of Western Ontario).

Online forums are overloaded with customer complaints from people who once loved or were loyal to particular brands but now strongly oppose them. "I used to love (name of store), let me tell you all why I plan to never go back there again; I hate them with a passion now," writes one unhappy former customer, for example.

Why do these people feel so strongly about brands they once favored? According to the authors, some people identify so strongly with brands that they become relevant to their identity and self-concept. Thus, when people feel betrayed by brands, they experience shame and insecurity. "As in human relationships, this loss of identity can manifest itself in negative feelings, and subsequent actions may (by design) be unconstructive, malicious, and expressly aimed at hurting the former relationship partner," the authors write.

Do you have any strongly felt brand loyalties that might disappoint you? Might want to try some competing products before you become disappointed. That way your loyalty will weaken before your loss of brand faith. That'll make it easier to move on.

By Randall Parker at 2010 November 15 09:04 PM Brain Loyalty

Friday, November 5, 2010

Asexual reproduction in snakes

Amazing!

http://www.sciencedaily.com/releases/2010/11/101103111210.htm

Wednesday, October 27, 2010

Itinerary for Oct 25 to Nov 14

Oct 25-Nov 4 Toronto, shabbos Nov 6 - Calgary, Mon Nov 8 Tufts, Tues Nov 9 Cornell, Thurs Nov 11 Passaic, shabbos Nov 13 - Passaic, Sun Nov 14 Lakewood, Ner L'elef.

KEHILAS BAIS YOSEF PROUDLY PRESENTS
Having received his Ph.D. in mathematical logic at Brandeis University, Rabbi Dr. Dovid Gottlieb went on to become Professor of Philosophy at Johns Hopkins University. Today he is a senior faculty member at Ohr Somayach in Jerusalem. An accomplished author and lecturer, Rabbi Gottlieb has electrified audiences with his stimulating and energetic presentations on ethical and philosophical issues.
All Shiurim will take place at Kehilas Bais Yosef, 580 Broadway, Passaic, NJ
SPONSORSHIPS OF SHIURIM ARE AVAILABLE -- PLEASE EMAIL AVLEITER@AOL.COM FOR INFORMATION
Rabbi Dovid Gottlieb
NOVEMBER 11-13
Thursday, 11/11, 8:45 pm (after 8:30 Ma'ariv): The Validity of Fulfilling Psychological Needs
Friday, 11/12, 8:30 pm: Selfish vs. Selfless: The True Basis for Altruism
Shabbos, 11/13, Morning Drasha (davening starts at 8:30 am): Should One Make a Deal with G-d?
Shabbos, 11/13, Sholosh Seudos (Mincha starts at 4:05 pm): Chanukah: The Nature of Nature

Wednesday, October 20, 2010

the new atheism

Scientific American

Permanent Address: http://www.scientificamerican.com/blog/post.cfm?id=cosmic-clowning-stephen-hawkings-ne-2010-09-13
Cosmic Clowning: Stephen Hawking's "new" theory of everything is the same old CRAP

Editor's note (9/14/10): This post has been slightly modified.
I've always thought of Stephen Hawking—whose new book The Grand Design (Bantam 2010), co-written with Leonard Mlodinow, has become an instant bestseller—less as a scientist than as a cosmic, comic performance artist, who loves goofing on his fellow physicists and the rest of us.

This penchant was already apparent in 1980, when the University of Cambridge named Hawking Lucasian Professor of Mathematics, the chair held three centuries earlier by Isaac Newton. Many would have been cowed into caution by such an honor. But in his inaugural lecture, "Is the End in Sight for Theoretical Physics?", Hawking predicted that physics was on the verge of a unified theory so potent and complete that it would bring the field to a close. The theory would not only unite relativity and quantum mechanics into one tidy package and "describe all possible observations." It would also tell us why the big bang banged and spawned our weird world rather than something entirely different.

At the end of his speech Hawking slyly suggested that, given the "rapid rate of development" of computers, they might soon become so smart that they "take over altogether" in physics. "So maybe the end is in sight for theoretical physicists," he said, "if not for theoretical physics." This line was clearly intended as a poke in his colleagues' ribs. Wouldn't it be ironic if our mindless machines usurped our place as discoverers of Cosmic Truth? Hilarious!

The famous last line of Hawking's monumental bestseller A Brief History of Time (Bantam 1988) was also a joke, although many people didn't get it at the time. A final theory of physics, Hawking declared, "would be the ultimate triumph of human reason—for then we should know the mind of God." Hawking seemed to imply that physics was going to come full circle back to its spiritual roots, yielding a mystical revelation that tells us not just what the universe is but why it is. Science and religion are compatible after all! Yay!

But Hawking ain't one of these New Agey, feel-good physicist–deists like John Barrow, Paul Davies, Freeman Dyson or other winners of the Templeton Prize for Progress Toward Research or Discoveries about Spiritual Realities. Deep inside Brief History Hawking showed his true colors when he discussed the no-boundary proposal, which holds that the entire history of the universe, all of space and time, forms a kind of four-dimensional sphere. The proposal implies that speculation about the beginning or end of the universe is as meaningless as talking about the beginning or end of a sphere.

In the same way a unified theory of physics might be so seamless, perfect and complete that it even explains itself. "What place, then, for a creator?" Hawking asked. There is no place, he replied. Or rather, a final theory would eliminate the need for a God, a creator, a designer. Hawking's first wife, a devout Christian, knew what he was up to. After she and Hawking divorced in the early 1990s she revealed that one of the reasons was his scorn for religion.

Hawking's atheism is front and center in Grand Design. In an excerpt Hawking and Mlodinow declare, "There is a sound scientific explanation for the making of our world—no Gods required." But Hawking is, must be, kidding once again. The "sound scientific explanation" is M-theory, which Hawking calls (in a blurb for Amazon) "the only viable candidate for a complete 'theory of everything'."

Actually M-theory is just the latest iteration of string theory, with membranes (hence the M) substituted for strings. For more than two decades string theory has been the most popular candidate for the unified theory that Hawking envisioned 30 years ago. Yet this popularity stems not from the theory's actual merits but rather from the lack of decent alternatives and the stubborn refusal of enthusiasts to abandon their faith.

M-theory suffers from the same flaws that string theories did. First is the problem of empirical accessibility. Membranes, like strings, are supposedly very, very tiny—as small compared with a proton as a proton is compared with the solar system. This is the so-called Planck scale, 10^–33 centimeters. Gaining the kind of experimental confirmation of membranes or strings that we have for, say, quarks would require a particle accelerator 1,000 light-years around, scaling up from our current technology. Our entire solar system is only one light-day around, and the Large Hadron Collider, the world's most powerful accelerator, is 27 kilometers in circumference.
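The scale of that comparison is easy to put in numbers; the arithmetic below is my own back-of-the-envelope conversion, not a figure from the book or the post.

```python
# Back-of-the-envelope comparison of the hypothetical accelerator with the LHC
# (my own arithmetic, not a figure from the article).
LIGHT_YEAR_KM = 9.46e12                   # kilometres in one light-year
ring_km = 1_000 * LIGHT_YEAR_KM           # a ring 1,000 light-years around
lhc_km = 27                               # LHC circumference in kilometres

print(f"hypothetical ring: {ring_km:.2e} km")
print(f"LHC:               {lhc_km} km")
print(f"ratio:             {ring_km / lhc_km:.1e}")   # about 3.5e14 times larger
```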

Hawking recognized long ago that a final theory—because it would probably involve particles at the Planck scale—might never be experimentally confirmable. "It is not likely that we shall have accelerators powerful enough" to test a unified theory "within the foreseeable future—or indeed, ever," he said in his 1980 speech at Cambridge. He nonetheless hoped that in lieu of empirical evidence physicists would discover a theory so logically inevitable that it excluded all alternatives.

Quite the opposite has happened. M-theory, theorists now realize, comes in an almost infinite number of versions, which "predict" an almost infinite number of possible universes. Critics call this the "Alice's restaurant problem," a reference to the refrain of the old Arlo Guthrie folk song: "You can get anything you want at Alice's restaurant." Of course, a theory that predicts everything really doesn't predict anything, and hence isn't a theory at all. Proponents, including Hawking, have tried to turn this bug into a feature, proclaiming that all the universes "predicted" by M-theory actually exist. "Our universe seems to be one of many," Hawking and Mlodinow assert.

Why do we find ourselves in this particular universe rather than in one with, say, no gravity or only two dimensions, or a Bizarro world in which Glenn Beck is a left-wing rather than right-wing nut? To answer this question, Hawking invokes the anthropic principle, a phrase coined by physicist Brandon Carter in the 1970s. The anthropic principle comes in two versions. The weak anthropic principle, or WAP, holds merely that any cosmic observer will observe conditions, at least locally, that make the observer's existence possible. The strong version, SAP, says that the universe must be constructed so as to make observers possible.

The anthropic principle has always struck me as so dumb that I can't understand why anyone takes it seriously. It's cosmology's version of creationism. WAP is tautological and SAP is teleological. The physicist Tony Rothman, with whom I worked at Scientific American in the 1990s, liked to say that the anthropic principle in any form is completely ridiculous and hence should be called CRAP.

In his 1980 speech in Cambridge Hawking mentioned the anthropic principle—which he paraphrased as "Things are as they are because we are"—as a possible explanation for the fact that our cosmos seems to be fine-tuned for our existence. But he added that "one cannot help feeling that there is some deeper explanation."

Like millions of other people I admire Hawking's brilliance, wit, courage and imagination. His prophecy of the end of physics inspired me to write The End of Science (which he called "garbage"). Hawking also played a central role in one of the highlights of my career. It dates back to the summer of 1990, when I attended a symposium in a remote Swedish resort on "The Birth and Early Evolution of Our Universe." The meeting was attended by 30 of the world's most prominent cosmologists, including Hawking.

Toward the end of the meeting, everyone piled into a bus and drove to a nearby village to hear a concert in a Lutheran church. When the scientists entered the church, it was already packed. The orchestra, a motley assortment of blond-haired youths and wizened, bald elders clutching violins, clarinets and other instruments, was seated at the front of the church. Their neighbors jammed the balconies and seats at the rear of the building.

The scientists filed down the center aisle to pews reserved for them at the front of the church. Hawking led the way in his motorized wheelchair. The townspeople started to clap, tentatively at first, then passionately. These religious folk seemed to be encouraging the scientists, and especially Hawking, in their quest to solve the riddle of existence.

Now, Hawking is telling us that unconfirmable M-theory plus the anthropic tautology represents the end of that quest. If we believe him, the joke's on us.
Clarification (9/14/10): My original post referred to Stephen Hawking's "smirk." Apparently many readers assume that Hawking can't control his expression and that I was mocking him for this symptom of his paralysis. When I met Hawking, he could and did grin on purpose, and I assumed that's still the case. I apologize for any offense caused by my (now deleted) remark.
© 2010 Scientific American, a Division of Nature America, Inc. All Rights Reserved.
The Economist
Understanding the universe
Order of creation
Even Stephen Hawking doesn't quite manage to explain why we are here
Sep 9th 2010


The Grand Design. By Stephen Hawking and Leonard Mlodinow. Bantam; 198 pages; $28 and £18.99.
In 1988, Stephen Hawking, a British cosmologist, ended his best-selling book, “A Brief History of Time”, on a cliffhanger. If we find a physical theory that explains everything, he wrote—suggesting that this happy day was not too far off—“then we would know the mind of God.” But the professor didn’t mean it literally. God played no part in the book, which was renowned for being bought by everyone and understood by few. Twenty-two years later, Professor Hawking tells a similar story, joined this time by Leonard Mlodinow, a physicist and writer at the California Institute of Technology.
In their “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
The authors may be in this enviable state of enlightenment, but most readers will not have a clue what they are on about. Some physics fans will enjoy “The Grand Design” nonetheless. The problem is not that the book is technically rigorous—like “A Brief History of Time”, it has no formulae—but that whenever the going threatens to get tough, the authors retreat into hand-waving, and move briskly on to the next awe-inspiring notion. Anyone who can follow their closing paragraphs on the relation between negative gravitational energy and the creation of the universe probably knows it all already. This is physics by sound-bite.
There are some useful colour diagrams and photographs, and the prose is jaunty. The book is peppered with quips, presumably to remind the reader that he is not studying for an exam but is supposed to be having fun. These attempted jokes usually fuse the weighty with the quotidian, in the manner of Woody Allen, only without the laughs. (“While perhaps offering great tanning opportunities, any solar system with multiple suns would probably never allow life to develop.”) There is a potted history of physics, which is adequate as far as it goes, though given what the authors have to say about Aristotle, one can only hope that they are more reliable about what happened billions of years ago at the birth of the universe than they are about what happened in Greece in the fourth century BC. Their account appears to be based on unreliable popularisations, and they cannot even get right the number of elements in Aristotle’s universe (it is five, not four).
The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.

Stephen Hawking's big bang gaps
The laws that explain the universe's birth are less comprehensive than Stephen Hawking suggests
Paul Davies
The Guardian, Saturday 4 September 2010
Cosmologists are agreed that the universe began with a big bang 13.7 billion years ago. People naturally want to know what caused it. A simple answer is nothing: not because there was a mysterious state of nothing before the big bang, but because time itself began then – that is, there was no time "before" the big bang. The idea is by no means new. In the fifth century, St Augustine of Hippo wrote that "the universe was created with time and not in time".
Religious people often feel tricked by this logic. They envisage a miracle-working God dwelling within the stream of time for all eternity and then, for some inscrutable reason, making a universe (perhaps in a spectacular explosion) at a specific moment in history.
That was not Augustine's God, who transcended both space and time. Nor is it the God favoured by many contemporary theologians. In fact, they long ago coined a term for it – "god-of-the-gaps" – to deride the idea that when science leaves something out of account, then God should be invoked to plug the gap. The origin of life and the origin of consciousness are favourite loci for a god-of-the-gaps, but the origin of the universe is the perennial big gap.
In his new book, Stephen Hawking reiterates that there is no big gap in the scientific account of the big bang. The laws of physics can explain, he says, how a universe of space, time and matter could emerge spontaneously, without the need for God. And most cosmologists agree: we don't need a god-of-the-gaps to make the big bang go bang. It can happen as part of a natural process. A much tougher problem now looms, however. What is the source of those ingenious laws that enable a universe to pop into being from nothing?
Traditionally, scientists have supposed that the laws of physics were simply imprinted on the universe at its birth, like a maker's mark. As to their origin, well, that was left unexplained.
In recent years, cosmologists have shifted position somewhat. If the origin of the universe was a law rather than a supernatural event, then the same laws could presumably operate to bring other universes into being. The favoured view now, and the one that Hawking shares, is that there were in fact many bangs, scattered through space and time, and many universes emerging therefrom, all perfectly naturally. The entire assemblage goes by the name of the multiverse.
Our universe is just one infinitesimal component amid this vast – probably infinite – multiverse, that itself had no origin in time. So according to this new cosmological theory, there was something before the big bang after all – a region of the multiverse pregnant with universe-sprouting potential.
A refinement of the multiverse scenario is that each new universe comes complete with its very own laws – or bylaws, to use the apt description of the cosmologist Martin Rees. Go to another universe, and you would find different bylaws applying. An appealing feature of variegated bylaws is that they explain why our particular universe is uncannily bio-friendly; change our bylaws just a little bit and life would probably be impossible. The fact that we observe a universe "fine-tuned" for life is then no surprise: the more numerous bio-hostile universes are sterile and so go unseen.
So is that the end of the story? Can the multiverse provide a complete and closed account of all physical existence? Not quite. The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping "meta-laws" that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained – eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. In that respect the meta-laws have a similar status to an unexplained transcendent god.
According to folklore the French physicist Pierre Laplace, when asked by Napoleon where God fitted into his mathematical account of the universe, replied: "I had no need of that hypothesis." Although cosmology has advanced enormously since the time of Laplace, the situation remains the same: there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters.
This article appeared on p30 of the Main section of the Guardian on Saturday 4 September 2010. It was published on guardian.co.uk at 08.30 BST on Saturday 4 September 2010.
Editorial: Hawking's faith in M-theory
Craig Callender, contributor
Three decades ago, Stephen Hawking famously declared that a "theory of everything" was on the horizon, with a 50 per cent chance of its completion by 2000. Now it is 2010, and Hawking has given up. But it is not his fault, he says: there may not be a final theory to discover after all. No matter; he can explain the riddles of existence without it.
The Grand Design, written with Leonard Mlodinow, is Hawking's first popular science book for adults in almost a decade. It duly covers the growth of modern physics (quantum mechanics, general relativity, modern cosmology) sprinkled with the wild speculation about multiple universes that seems mandatory in popular works these days. Short but engaging and packed with colourful illustrations, the book is a natural choice for someone wanting a quick introduction to mind-bending theoretical physics.
Early on, the authors claim that they will be answering the ultimate riddles of existence - and that their answer won't be "42". Their starting point for this bold claim is superstring theory.
In the early 1990s, string theory was struggling with a multiplicity of distinct theories. Instead of a single theory of everything, there seemed to be five. Beginning in 1994, though, physicists noticed that, at low energies, some of these theories were "dual" to others - that is, a mathematical transformation makes one theory look like another, suggesting that they may just be two descriptions of the same thing. Then a bigger surprise came: one string theory was shown to be dual to 11-dimensional supergravity, a theory describing not only strings but membranes, too. Many physicists believe that this supergravity theory is one piece of a hypothetical ultimate theory, dubbed M-theory, of which all the different string theories offer us mere glimpses.
This multiplicity of distinct theories prompts the authors to declare that the only way to understand reality is to employ a philosophy called "model-dependent realism". Having declared that "philosophy is dead", the authors unwittingly develop a theory familiar to philosophers since the 1980s, namely "perspectivalism". This radical theory holds that there doesn't exist, even in principle, a single comprehensive theory of the universe. Instead, science offers many incomplete windows onto a common reality, one no more "true" than another. In the authors' hands this position bleeds into an alarming anti-realism: not only does science fail to provide a single description of reality, they say, there is no theory-independent reality at all. If either stance is correct, one shouldn't expect to find a final unifying theory like M-theory - only a bunch of separate and sometimes overlapping windows.
So I was surprised when the authors began to advocate M-theory. But it turns out they were unconventionally referring to the patchwork of string theories as "M-theory" too, in addition to the hypothetical ultimate theory about which they remain agnostic.
M-theory in either sense is far from complete. But that doesn't stop the authors from asserting that it explains the mysteries of existence: why there is something rather than nothing, why this set of laws and not another, and why we exist at all. According to Hawking, enough is known about M-theory to see that God is not needed to answer these questions. Instead, string theory points to the existence of a multiverse, and this multiverse coupled with anthropic reasoning will suffice. Personally, I am doubtful.
Take life. We are lucky to be alive. Imagine all the ways physics might have precluded life: gravity could have been stronger, electrons could have been as big as basketballs and so on. Does this intuitive "luck" warrant the postulation of God? No. Does it warrant the postulation of an infinity of universes? The authors and many others think so. In the absence of theory, though, this is nothing more than a hunch doomed - until we start watching universes come into being - to remain untested. The lesson isn't that we face a dilemma between God and the multiverse, but that we shouldn't go off the rails at the first sign of coincidences.
Craig Callender is a philosopher of physics at the University of California, San Diego

Tuesday, September 28, 2010

religious observance in Israel

See this article:
http://www.ynetnews.com/articles/0,7340,L-3952847,00.html

Monday, September 13, 2010

Inconstant physical "constants"

Take a look at this:
http://news.stanford.edu/news/2010/august/sun-082310.html

A correspondent suggested that the effect may not be clear - see these:

Evidence against correlations between nuclear decay rates and Earth–Sun distance.pdf
Evidence for Correlations Between Nuclear Decay Rates and Earth-Sun Distance.pdf
Perturbation of Nuclear Decay Rates During the Solar Flare of 13 December 2006.pdf

Thursday, August 19, 2010

animal "morality" in question

Marc Hauser is a professor at Harvard who has published work on the evolution of morality in animals. The suggestion is that since animals possess some kind of moral judgment and behavior, it is not too hard to imagine the further development of human morality in natural terms. I have read some of his books and have been trying to define the relevant differences between human morality and the behavioral studies he reports. But now that effort may be unnecessary - it seems that his "findings" were not supported by his evidence. See http://www.nytimes.com/2010/08/12/education/12harvard.html?_r=2&ref=science. And this should be a reminder [to me as well as others] not to accept reported "findings" too quickly.... And see this also: http://chronicle.com/article/Document-Sheds-Light-on/123988/

Sunday, August 15, 2010

the new atheism

August 11, 2010, 3:05 pm
On Dawkins’s Atheism: A Response
By GARY GUTTING

The Stone is a forum for contemporary philosophers on issues both timely and timeless.

My August 1 essay, “Philosophy and Faith,” was primarily addressed to religious believers. It argued that faith should go hand-in-hand with rational reflection, even though such reflection might well require serious questioning of their faith. I very much appreciated the many and diverse comments and the honesty and passion with which so many expressed their views. Interestingly, many of the most passionate responses came from non-believers who objected to my claim that popular atheistic arguments (like popular theistic arguments) do not establish their conclusions. There was particular dismay over my passing comment that the atheistic arguments of Richard Dawkins are “demonstrably faulty.” This follow-up provides support for my negative assessment. I will focus on Dawkins’ arguments in his 2006 book, “The God Delusion.”

Dawkins’s writing gives the impression of clarity, but his readable style can cover over major conceptual confusions. For example, the core of his case against God’s existence, as he summarizes it on pages 188-189, seems to go like this:

1. There is need for an explanation of the apparent design of the universe.

2. The universe is highly complex.

3. An intelligent designer of the universe would be even more highly complex.

4. A complex designer would itself require an explanation.

5. Therefore, an intelligent designer will not provide an explanation of the universe’s complexity.

6. On the other hand, the (individually) simple processes of natural selection can explain the apparent design of the universe.

7. Therefore, an intelligent designer (God) almost certainly does not exist.

(Here I’ve formulated Dawkins’ argument a bit more schematically than he does and omitted his comments on parallels in physics to the explanations natural selection provides for apparent design in biology.)

As formulated, this argument is an obvious non-sequitur. The premises (1-6), if true, show only that God cannot be posited as the explanation for the apparent design of the universe, which can rather be explained by natural selection. They do nothing to show that “God almost certainly does not exist” (189).

But the ideas behind premises 3 and 4 suggest a more cogent line of argument, which Dawkins seems to have in mind in other passages:

1. If God exists, he must be both the intelligent designer of the universe and a being that explains the universe but is not itself in need of explanation.

2. An intelligent designer of the universe would be a highly complex being.

3. A highly complex being would itself require explanation.

4. Therefore, God cannot be both the intelligent designer of the universe and the ultimate explanation of the universe.

5. Therefore, God does not exist.

Here the premises do support the conclusion, but premise 2, at least, is problematic. In what sense does Dawkins think God is complex and why does this complexity require an explanation? He does not discuss this in any detail, but his basic idea seems to be that the enormous knowledge and power God would have to possess would require a very complex being and such complexity of itself requires explanation. He says for example: “A God capable of continuously monitoring and controlling the individual status of every particle in the universe cannot be simple” (p. 178). And, a bit more fully, “a God who is capable of sending intelligible signals to millions of people simultaneously, and of receiving messages from all of them simultaneously, cannot be . . . simple. Such bandwidth! . . . If [God] has the powers attributed to him he must have something far more elaborately and randomly constructed than the largest brain or the largest computer we know” (p. 184).

Here Dawkins ignores the possibility that God is a very different sort of being than brains and computers. His argument for God’s complexity either assumes that God is material or, at least, that God is complex in the same general way that material things are (having many parts related in complicated ways to one another). The traditional religious view, however, is that God is neither material nor composed of immaterial parts (whatever that might mean). Rather, he is said to be simple, a unity of attributes that we may have to think of as separate but that in God are united in a single reality of pure perfection.

Obviously, there are great difficulties in understanding how God could be simple in this way. But philosophers from Thomas Aquinas through contemporary thinkers have offered detailed discussions of the question that provide intelligent suggestions about how to think coherently about a simple substance that has the power and knowledge attributed to God. Apart from a few superficial swipes at Richard Swinburne’s treatment in “Is There a God?”, Dawkins ignores these discussions. (see Swinburne’s response to Dawkins, paragraph 3.) Making Dawkins’ case in any convincing way would require detailed engagement not only with Swinburne but also with other treatments by recent philosophers such as Christopher Hughes’ “A Complex Theory of a Simple God.” (For a survey of recent work on the topic, see William Vallicella’s article, “Divine Simplicity,” in the Stanford Encyclopedia of Philosophy).

Further, Dawkins’ argument ignores the possibility that God is a necessary being (that is, a being that, by its very nature, must exist, no matter what). On this traditional view, God’s existence would be, so to speak, self-explanatory and so need no explanation, contrary to Dawkins’ premise 3. His ignoring this point also undermines his effort at a quick refutation of the cosmological argument for God as the cause of the existence of all contingent beings (that is, all beings that, given different conditions, would not have existed). Dawkins might, like some philosophers, argue that the idea of a necessary being is incoherent, but to make this case, he would have to engage with the formidable complexities of recent philosophical treatments of the question (see, for example, Timothy O’Connor’s “Theism and Ultimate Explanation” and Bruce Reichenbach’s article in the Stanford Encyclopedia of Philosophy).

Religious believers often accuse argumentative atheists such as Dawkins of being excessively rationalistic, demanding standards of logical and evidential rigor that aren’t appropriate in matters of faith. My criticism is just the opposite. Dawkins does not meet the standards of rationality that a topic as important as religion requires.

The basic problem is that meeting such standards requires coming to terms with the best available analyses and arguments. This need not mean being capable of contributing to the cutting-edge discussions of contemporary philosophers, but it does require following these discussions and applying them to one’s own intellectual problems. Dawkins simply does not do this. He rightly criticizes religious critics of evolution for not being adequately informed about the science they are calling into question. But the same criticism applies to his own treatment of philosophical issues.

Friends of Dawkins might object: “Why pay attention to what philosophers have to say when, notoriously, they continue to disagree regarding the ‘big questions’, particularly, the existence of God?” Because, successful or not, philosophers offer the best rational thinking about such questions. Believers who think religion begins where reason falters may be able to make a case for the irrelevance of high-level philosophical treatments of religion — although, as I argued in “Philosophy and Faith,” this move itself raises unavoidable philosophical questions that challenge religious faith. But those, like Dawkins, committed to believing only what they can rationally justify, have no alternative to engaging with the most rigorous rational discussions available. Dawkins’ distinctly amateur philosophizing simply isn’t enough.

Of course, philosophical discussions have not resolved the question of God’s existence. Even the best theistic and atheistic arguments remain controversial. Given this, atheists may appeal (as many of the comments on my blog did) to what we might call the “no-arguments argument.” To say that the universe was created by a good and powerful being who cares about us is an extraordinary claim, so improbable to begin with that we surely should deny it unless there are decisive arguments for it (arguments showing that it is highly probable). Even if Dawkins’ arguments against theism are faulty, can’t he cite the inconclusiveness of even the most well-worked-out theistic arguments as grounds for denying God’s existence?

He can if he has good reason to think that, apart from specific theistic arguments, God’s existence is highly unlikely. Besides what we can prove from arguments, how probable is it that God exists? Here Dawkins refers to Bertrand Russell’s example of the orbiting teapot. We would require very strong evidence before agreeing that there was a teapot in orbit around the sun, and lacking such evidence would deny and not remain merely agnostic about such a claim. This is because there is nothing in our experience suggesting that the claim might be true; it has no significant intrinsic probability.

But suppose that several astronauts reported seeing something that looked very much like a teapot and, later, a number of reputable space scientists interpreted certain satellite data as showing the presence of a teapot-shaped object, even though other space scientists questioned this interpretation. Then it would be gratuitous to reject the hypothesis out of hand, even without decisive proof that it was true. We should just remain agnostic about it.

The claim that God exists is much closer to this second case. There are sensible people who report having had some kind of direct awareness of a divine being, and there are competent philosophers who endorse arguments for God’s existence. Therefore, an agnostic stance seems preferable to atheism.

To this, Dawkins might respond that there are other reasons that make the idea of God’s existence so improbable that nothing short of decisive arguments can override a denial of that existence. It’s as if, he might say, we had strong scientific evidence that nothing shaped like a teapot could remain in an orbit around the sun. We could then rightly deny the existence of an orbiting teapot, despite eye-witness reports and scientific arguments supporting its existence.

What could be a reason for thinking that God’s existence is, of itself, highly improbable? There is, of course, Dawkins’ claim that God is highly complex, but, as we’ve seen, this is an assumption he has not justified. Another reason, which seems implicit in many of Dawkins’ comments, is that materialism (the view that everything is material) is highly probable. If so, the existence of an immaterial being such as God would be highly improbable.

But what is the evidence for materialism? Presumably, that scientific investigation reveals the existence of nothing except material things. But religious believers will plausibly reply that science is suited to discover only what is material (indeed, the best definition of “material” may be just “the sort of thing that science can discover”). They will also cite our experiences of our own conscious life (thoughts, feelings, desires, etc.) as excellent evidence for the existence of immaterial realities that cannot be fully understood by science.

At this point, the dispute between theists and atheists morphs into one of the most lively (and difficult) of current philosophical debates—that between those who think consciousness is somehow reducible to material brain-states and those who think it is not. This debate is far from settled and at least shows that materialism is not something atheists can simply assert as an established fact. It follows that they have no good basis for treating the existence of God as so improbable that it should be denied unless there is decisive proof for it. This in turn shows that atheists are at best entitled to be agnostics, seriously doubting but not denying the existence of God.

I find Dawkins’ “The God Delusion” stimulating, informative, and often right on target. But it does not make a strong case for atheism. His case is weak because it does not take adequate account of the philosophical discussions that have raised the level of reflection about God’s existence far above that at which he operates. It may be possible to make a decisive case against theism through a penetrating philosophical treatment of necessity, complexity, explanation, and other relevant concepts. Because his arguments fail to do this, Dawkins falls far short of establishing his claim.

Gary Gutting teaches philosophy at the University of Notre Dame and co-edits Notre Dame Philosophical Reviews, an on-line book review journal. His most recent book is “What Philosophers Know: Case Studies in Recent Analytic Philosophy.”

Thursday, August 5, 2010

Biblical Criticism refuted

When I first considered traditional Judaism, one obstacle was Biblical Criticism. I investigated it then, in the 1960s, and found it unconvincing [largely due to the work of Cassutto]. In the last few years students have again put the question, especially in virtue of the writings of Richard Elliot Friedman. In response I posted a very short cursory set of notes on a few of Friedman's fantasies http://www.dovidgottlieb.com/comments/Who_Wrote_The_Bible.htm. But now a friend alerted me to two books that have just appeared - both with the same title [!!] - Who Really Wrote the Bible? one by Eyal Rav-Noy and Gil Weinreich and one by Clayton Howard Ford.

Let me start by saying that each book contains a considerable number of errors. A number of their readings of verses can be disputed in neutral scholarly terms, and are in violation of the Jewish tradition. Some of the uses of chiasms in both, and sevens in the first book, could be challenged as subjective. But each book contains hundreds of critical points. Each book alone would be sufficient to destroy the credibility of Biblical Criticism for an honest thinker. The two together are simply devastating. [By the way, there is very little overlap between the books, remarkably.] Their common methodology is to use the tools of BC against itself, and to demonstrate the inconsistencies and unbelievably unreliable readings BC espouses. I heartily recommend both, even though Ford is a believing Christian. [That means that if you will not read a book whose author speaks positively of Christianity, this book is not for you.]

Tuesday, August 3, 2010

The Waning of Materialism by Robert C. Koons, et al.

When I was in graduate school, materialism was about the only position respected. Mention the soul and you would be laughed out of the room. Well, philosophical fashions have changed [for the better!]. See The Waning of Materialism by Robert C. Koons, et al., in which 23 major philosophers from Oxford, Yale, UCLA and other universities use all the tools of the latest philosophy to show the insufficiency of a materialist view of the world. And some of the best anti-materialists are not in the volume, e.g. David Chalmers and Michael Rea. The papers are technical - they are for trained philosophers. But the very existence of the volume should give pause to those who simply assume that materialism is obviously correct.

the latest science "news"

From the New York Times:

ESSAY

Rumors in Astrophysics Spread at Light Speed

[Photo (Ball Aerospace): Technicians readied one of the telescope mirrors used in NASA's Kepler planet-finding mission.]

By DENNIS OVERBYE
Published: August 03, 2010

Dimitar Sasselov, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, lit up the Internet last month with a statement that would stir the soul of anyone who ever dreamed of finding life or another home in the stars.

Brandishing data from NASA's Kepler planet-finding satellite, during a talk at TED Global 2010 in Oxford on July 16, Dr. Sasselov said the mission had discovered 140 Earthlike planets in a small patch of sky in the constellation Cygnus that Kepler has been surveying for the last year and a half.

"The next step after Kepler will be to study the atmospheres of the planets and see if we can find any signs of life," he said.

Last week, Dr. Sasselov was busy eating his words. In a series of messages posted on the Kepler Web site, Dr. Sasselov acknowledged that he should have said "Earth-sized," meaning a rocky body less than three times the diameter of our own planet, rather than "Earthlike," with its connotations of oxygenated vistas of blue and green. He was speaking in geophysics jargon, he explained.

And he should have called them "candidates" instead of planets.

"The Kepler mission is designed to discover Earth-sized planets but it has not yet discovered any; at this time we have found only planet candidates," he wrote.

In other words: keep on moving, nothing to see here.

I've heard that a lot lately. Call it the two-sigma blues. Two-sigma is mathematical jargon for a measurement or discovery of some kind that sticks up high enough above the random noise to be interesting but not high enough to really mean anything conclusive. For the record, the criterion for a genuine discovery is known as five-sigma, suggesting there is less than one chance in roughly 3 million that it is wrong. Two sigma, leaving a 2.5 percent chance of being wrong, is just high enough to jangle the nerves, however, and all of ours have been jangled enough.
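[As a rough check on those thresholds - my own sketch, not part of Overbye's essay - the one-sided Gaussian tail probability behind "n-sigma" can be computed directly. The exact figures depend on whether one counts one- or two-sided fluctuations, but they land close to the numbers quoted above:]

# Minimal sketch of the tail probabilities behind "two sigma" and "five sigma".
# Assumes the noise is Gaussian and counts one-sided upward fluctuations only.
from math import erfc, sqrt

def tail_probability(n_sigma: float) -> float:
    """Probability that pure Gaussian noise fluctuates at least n_sigma above the mean."""
    return 0.5 * erfc(n_sigma / sqrt(2.0))

for n in (2, 5):
    p = tail_probability(n)
    print(f"{n}-sigma: p = {p:.2e}, i.e. about 1 in {round(1 / p):,}")

# Output is roughly:
#   2-sigma: p ~ 2.3e-02 (about 1 in 44)
#   5-sigma: p ~ 2.9e-07 (about 1 in 3.5 million)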

Only three weeks ago, rumors went flashing around all the way to Gawker that researchers at Fermilab in Illinois had discovered the Higgs boson, a celebrated particle that is alleged to imbue other particles with mass. The rumored effect was far less than the five-sigma gold standard that would change the world. And when the Fermilab physicists reported on their work in Paris last week, there was still no trace of the long-sought Higgs.

Scientists at particle accelerators don't have all the fun. Last winter, physicists worked themselves up into a state of "serious hysteria," in the words of one physicist, over rumors that an experiment at the bottom of an old iron mine in Minnesota had detected the purported sea of subatomic particles known as dark matter, which is thought to make up 25 percent of creation.

Physicists all over the world tuned into balky Webcasts in December to hear scientists from the team, called the Cryogenic Dark Matter Search, give a pair of simultaneous talks at Stanford and Fermilab, and this newspaper held its front page, only to hear that the experiment had detected only two particles, only one more than they would have expected to find by chance.

We all went to bed that night in the same world in which we had woken up.

One culprit here is the Web, which was invented to foster better communication among physicists in the first place, but has proved equally adept at spreading disinformation. But another, it seems to me, is the desire for some fundamental discovery about the nature of the universe - the yearning to wake up in a new world - and a growing feeling among astronomers and physicists that we are in fact creeping up on enormous changes with the advent of things like the Large Hadron Collider outside Geneva and the Kepler spacecraft.

I can't say what the discovery of dark matter or the final hunting down of the Higgs boson would do for the average person, except to paraphrase Michael Faraday, the 19th-century English chemist who discovered the basic laws of electromagnetism. When asked the same question about electricity, he said that someday it would be taxable. Nothing seemed further from everyday reality once upon a time than Einstein's general theory of relativity, the warped space-time theory of gravity, but now it is at the heart of the GPS system, without which we are increasingly incapable of navigating the sea or even the sidewalks.

The biggest benefit from answering these questions - what is the universe made of, or where does mass come from - might be better questions. Cosmologists have spent the last century asking how and when the universe began and will end or how many kinds of particles and forces are needed to make it tick, but maybe we should wonder why it is we feel the need to think in terms of beginnings and endings or particles at all.

As for planets, I no longer expect to see boots on Mars before I die, but I do expect to know where there is a habitable, really Earthlike planet or planets, thanks to Kepler and the missions that are to succeed it. If such planets exist within a few light-years of here, I can imagine pressure building to send a probe, a robot presumably, to investigate. It would be a trip that would take ages and would be for the ages.

There is a deadline of sorts for Kepler in the form of a conference in December. By then, said William J. Borucki, Kepler's leader, the team hopes to have moved a bunch of those candidate planets to the confirmed list. They will not be habitable, he warned, noting that that would require water, which would require an orbit a moderate distance from their star that takes a year or so to go around. With only 43 days' worth of data analyzed as yet, only planets with tighter, faster and hotter orbits will have shown up.
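[A back-of-the-envelope sketch - mine, not Overbye's or Borucki's - of why only short-period planets can have shown up yet: a transit search needs to catch a planet crossing its star several times, and a roughly 43-day window cannot contain several transits of anything in a year-long, potentially habitable orbit. The three-transit threshold below is a common rule of thumb, assumed here purely for illustration:]

# Minimal sketch: how many transits fit into the data analyzed so far,
# for a few assumed orbital periods (values chosen for illustration).
DAYS_OF_DATA = 43     # per the essay, the data analyzed at that point
MIN_TRANSITS = 3      # assumed rule of thumb for a credible candidate

for period_days in (3, 10, 20, 40, 365):
    transits = DAYS_OF_DATA // period_days
    verdict = "detectable" if transits >= MIN_TRANSITS else "not yet detectable"
    print(f"period {period_days:>3} days: about {transits} transits -> {verdict}")

# A planet in a year-long orbit shows at most one transit in such a window,
# which is why finding habitable-zone planets needs Kepler's full multi-year run.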

"They'll be smaller, but they will be hot," Mr. Borucki said.

But Kepler has three more years to find a habitable planet. The real point of Dr. Sasselov's talk was that we are approaching a Copernican moment, in which astronomy and biology could combine to tell us something new about our place in the universe.

I know that science does not exist just to fulfill my science-fiction fantasies, but still I wish that things would speed up, and the ratio of discovery to hopeful noise would go up.

Hardly a week goes by, for example, that I don't hear some kind of rumor that, if true, would rock the Universe As We Know It. Recently I heard a rumor that another dark matter experiment, which I won't name, had seen an interesting signal. I contacted the physicist involved. He said the results were preliminary and he had nothing to say.

Smart guy. Very.