Wednesday, November 11, 2009

scientific predictions

Worthwhile article:

The Scientist
Volume 23 | Issue 11 | Page 28


Promises, Promises
By Stuart Blackman
Ill-judged predictions and projections can be embarrassing at best and, at worst, damaging to the authority of science and science policy.

A South Korean postage stamp issued in 2005 depicts a scene that is reminiscent of the iconic human evolution cartoon in which a stooping ape evolves, in six or so steps, into an upright, bipedal Homo sapiens. It shows a paraplegic man climbing slowly out of his wheelchair, standing up straight, and then performing a giant leap of celebration. Placed next to an image of an ovum undergoing the technique of nuclear transfer, the message was clear: Thanks to the groundbreaking publications of Hwang Woo-Suk, therapeutic cloning was a medical miracle that had as good as happened. The trouble is, it hadn’t happened. And nearly 4 years on, it still hasn’t.

South Korea was understandably proud of Hwang’s achievements and, like the rest of the world, excited by his claims and those of researchers worldwide that his human embryonic stem cell (hESC) techniques were set to provide therapies for not only spinal injuries, but Alzheimer’s, Parkinson’s, and a host of other degenerative diseases. The rest is history. By January 2006, it was clear that Hwang’s pioneering papers had been fabricated and that the eleven individualized human stem cell lines he claimed to have established did not exist. Hwang left Seoul National University and was subject to criminal investigation, the stamp was withdrawn from circulation, and the world still awaits approval for the first hESC therapeutic application.

It doesn’t take anything so extreme as scientific fraud to scupper what may have seemed, at the time, to be a well-grounded scientific prediction. At its most enthusiastic, science has always been prone to promise rather more, and sooner, than it has managed to deliver. It can sometimes feel as if cures for diseases are forever 10 years off, while nuclear fusion seems to have been 50 years away from practical reality for about half a century now. It might be easy to look back and laugh at claims that eugenics would spell the end not only of heritable diseases but also of social problems such as vagrancy and crime, yet a 1989 Science editorial’s claim, made during the run-up to the Human Genome Project, that the new genetics could help reduce homelessness by tackling mental illness [1] is perhaps fresh enough to make biologists’ toes curl with embarrassment.

Meanwhile, in bleaker moments, scientific authorities have predicted the end of the world and of civilization as we know them at the hands of pandemics or environmental catastrophe. And yet we are still here, in defiance of Thomas Malthus’s eighteenth-century warnings about overpopulation and ecologist Paul Ehrlich’s prophecy in his 1968 book The Population Bomb that “In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.”

Of course, scientists have a strong incentive to make bold predictions—namely, to obtain funding, influence, and high-profile publications. But while few will be disappointed when worst-case forecasts fail to materialize, unfulfilled predictions—of which we’re seeing more and more—can be a blow to patients, to policy makers, and to the reputation of science itself.

In 1995, for example, an expert panel on gene therapy convened by the U.S. National Institutes of Health’s then-director Harold Varmus [2] concluded: “Expectations of current gene therapy protocols have been oversold. Overzealous representation of clinical gene therapy has obscured the exploratory nature of the initial studies, colored the manner in which findings are portrayed to the scientific press and public, and led to the widely held, but mistaken, perception that clinical gene therapy is already highly successful. Such misrepresentation threatens confidence in the field and will inevitably lead to disappointment in both medical and lay communities.”
Scientists have been making predictions for as long as there have been scientists. Indeed, without speculating about the future, it would be impossible to make decisions about how best to proceed. But there is reason to believe that promises are becoming more central to the scientific process.

Sir Ian Wilmut, leader of the Roslin Institute team that cloned Dolly the sheep, says that a “soundbite” media culture that demands uncomplicated, definitive, and sensational statements plays a significant role. “It’s [the media] who put the most pressure on scientists to make predictions,” he says. And in a radio or TV interview that allows perhaps only 10 or 20 seconds for an answer, “it’s very easy then to inadvertently mislead.”

But it might also pay scientists—financially and politically—to go along with such demands, and to indulge in what Joan Haran, Cesagen Research Fellow at Cardiff University, UK, diplomatically calls “discursive overbidding,” whereby they talk up the potential value of work for which they seek funding, changes in legislation, or public approval.

“Since the late 20th century, scientists no longer quite have that quality that we used to speak of as scientists being disinterested. They are now very interested,” says Hilary Rose, professor emerita of the sociology of science at the University of Bradford, UK and Gresham College London. “Many clearly manage to rise above this, but the basic culture of science has changed.”

Various developments, such as the 1980 Bayh-Dole Act in the United States and the rise of spin-out companies from universities, mean that research has become more intrinsically bound up with the commercial world. Many biotech companies are now led by financial directors rather than scientific directors, says Nik Brown, co-director of the Science and Technology Studies Unit, University of York, UK. The past decade has seen a rise in the number of financial experts appointed to influential positions in biotech companies, for instance. And since the end of the Cold War, he says, the central role of science has become less about security and more about the economy, with science and technology becoming central to many nations’ economic strategy.
Some famous (and infamous) predictions
YEAR | PREDICTION | RIGHT OR WRONG?
1869 | Dmitri Mendeleev’s periodic table left spaces for elements that he predicted would be discovered. Three of these (gallium, scandium, and germanium) were subsequently discovered within his lifetime. | RIGHT
1964 | Physicists predict the existence of the Higgs boson. If CERN’s Large Hadron Collider finds no evidence for the existence of this massive fundamental particle, working models of the material universe might require a fundamental rethink. | PENDING
1965 | Intel cofounder Gordon E. Moore predicts that the number of transistors on a computer chip will double every two years. The industry has so far managed to keep up (despite many predictions over the years about the law’s imminent demise). | RIGHT
1968 | Entomologist Paul Ehrlich predicts that hundreds of millions of people will starve to death in the next two decades. | WRONG
2002 | At the website longbets.org, astronomer Sir Martin Rees, president of the Royal Society, predicts that “By 2020, bioterror or bioerror will lead to one million casualties in a single event.” Also at Long Bets, entrepreneurial engineer Ray Kurzweil bets $10,000 that by 2029 a computer will have passed the Turing Test for machine intelligence. | PENDING
2003 | Cold Spring Harbor Laboratory sponsored GeneSweep, a sweepstakes on the number of human genes. While bids averaged around 60,000 genes, it was eventually won by a bid of 25,947—the lowest of the hundreds received. | WRONG
2007 | The Intergovernmental Panel on Climate Change’s 4th Assessment Report projects that global surface air temperatures will increase by between 1.1 and 6.4°C over preindustrial levels by the end of the century. | PENDING

It’s a changing role for science that finds formal expression in the scientific funding process, says Brian Wynne, professor of science studies at Lancaster University, UK. “Every research proposal these days, whichever field you’re in, has got to include a statement on the impact your research is going to have. And that isn’t just intellectual impact; it’s also economic impact. And that is basically requiring scientists to make promises, and to exaggerate those promises.”

Central too is the desperate competition to get funded and published, which forces scientists to emphasize the potential impact of their work, introducing further temptation to exaggerate. Last year, 8% of papers submitted to Nature were accepted for publication (down from nearly 11% in 1997). In recent years, fewer than 1 in 10 applications for new R01s from the US National Institutes of Health have been successful.

Moreover, at a time when the pressure on scientists to “rhetorically overbid” is increasing, politics is becoming more reliant on science to provide predictions to guide policy.

“There are probably more issues than there were where there is political concern about issues—often global issues—which have a scientific content,” says Sir Martin Rees, astronomer and president of the United Kingdom’s Royal Society. “Climate change is one. Pandemics are another. These are both issues where the science is uncertain, but it’s better to listen to the best scientists than to the man in the street.”

But according to Dan Sarewitz, director of the Consortium for Science, Policy, and Outcomes at Arizona State University, a consequence of this reliance on science is that politicians are able to “fob off responsibility to scientists” when making difficult policy decisions. That politicians are looking to science for certainty regarding complex political issues is illustrated by an address to the Copenhagen Climate Conference earlier this year by the then Danish Prime Minister Anders Fogh Rasmussen, who appealed to scientific delegates for simple, unambiguous accounts of the science. “[Don’t] provide us with too many moving targets, because it is already a very, very complicated process,” he said. “I need fixed targets and certain figures, and not too many considerations on uncertainty and risk and things like that.” Such demands, says Sarewitz, can tempt scientists into providing simplistic and unqualified extrapolations from the current state of knowledge to possible future scenarios.

Another development is that scientists, still reeling from public opposition—at least in Europe—to genetically modified crops and food, increasingly need the public on their side to secure funds and make progress. As British fertility expert Robert Winston told the BBC in 2005: “We tend often to really have rather too much overconfidence. We may exaggerate, simply because [stem cell research, for example] is an area where we need support, where we need the support of the public, and we need to persuade them. And I think we can go about persuading people a bit too vigorously sometimes.”
Of course, cloning technologies might yet help people walk again. But the fact remains that the South Korean stamp was celebrating something that had not yet actually happened. And as the Hwang case testifies, the future has an annoying habit of taking unexpected turns.

As physicist Niels Bohr once jokingly put it, “predictions can be very difficult—especially about the future.” Or as Joan Haran says, when scientists make predictions and promises they are entering “a realm of the imaginary.” So even if those predictions are based on science’s conventional territory of facts and data, they have as much to do with wishful thinking and social and political possibilities as with the facts themselves.

A single unexpected scientific discovery is all it can take to confound the most carefully considered of predictions by throwing open new worlds of possibilities or shutting down others. Wilmut knows this only too well. In 2006, at a public lecture at the Edinburgh International Book Festival, Wilmut predicted that within 5 years scientists would have determined whether they could use cloning to cure motor neuron disease, the subject of his research since moving to the University of Edinburgh. But within only a couple of years, Wilmut had, like many other researchers, switched from cloning techniques to induced pluripotent stem (iPS) cells, whereby somatic cells are reprogrammed, either genetically or biochemically, to become pluripotent. Other scientists continue to pursue cloning techniques, but the shift towards iPS means that, 3 years after Wilmut made his prediction, it is looking less likely that it will ever be proved right or wrong. Wilmut holds his hands up. “Sometimes you’re right, and sometimes you’re wrong,” he says. “You give the best prediction you can at the time.”
How to predict responsibly
Advice from the experts on how to avoid making promises that owe more to Nostradamus than to natural science

1. AVOID SIMPLE TIMELINES
When asked how long it might take for your research to translate into therapies, try to communicate the complexities of the process rather than make a specific prediction. “I’ve come to recognize that these things take even longer than you hope,” says Ian Wilmut. So what would he say if asked about the prospects of tackling motor neuron disease with the iPS system? “I would say, one or two labs have now got nerve cells which are the genuine equivalent of those in a person who inherited the disease; it will perhaps take a couple of years before they have identified the molecular differences between them and healthy cells; it might take a couple of years after that to set up a high-throughput assay; a couple more years after that to run that and identify the first compounds. Which of course then simply gets you to the point where you have to put drugs through animal tests before you can get to patients. So, it’s likely to be at least ten years before there is the possibility of a new drug being used on any scale to treat human patients.”

2. LEARN FROM HISTORY
According to Nik Brown, just heeding the lessons of past predictions and promises—both the successes and the failures—can help scientists avoid what he calls “institutional amnesia,” whereby they deliver serial disappointments.

3. STATE THE CAVEATS
Harold Varmus’s gene therapy report concluded that scientists need to “inform the public about not only the extraordinary promise of gene therapy, but also its current limitations.” It might not be easy when the scientific culture encourages promise-making and hyperbole, but for Brian Wynne, science and scientists need to be more modest about their claims. “If modesty were institutional, politics and science would be completely transformed.” Adds Brown: “A more modest science would probably also be a more reliable science.”

4. REMEMBER WHAT YOU DON’T KNOW
“Scientists know about science, at least their own subdisciplines,” says Dan Sarewitz, “but they often know a lot less about technology and innovation and political context, so it’s not very surprising that they’re often wrong in their predictions.” Hilary Rose says that natural scientists are sometimes inclined to think of complex human social and political behavior in biological terms, which can introduce further error. A problem for ecologist Paul Ehrlich’s predictions in The Population Bomb, for example, was that “he did not know enough about demography,” she says.

By the time a prediction has been proved right or wrong, however, it is already out there influencing the worlds of research and policy, for better or for worse. And it’s a two-way street: add to the mix the influence of legislative changes on research trajectories, and things get even less predictable. It’s natural to assume, for example, that countries with less restrictive policies on human embryonic stem cell research would show more progress in this area. But that optimism has not been rewarded. While hESC therapies have so far proved elusive, and even clinical trials are rare and newsworthy events, adult stem cell research has at least kept pace, with treatments—notably for heart disease and the regeneration of a patient’s collapsed trachea—already emerging from the pipeline.

Hilary Rose says that while there are plenty of reasons to be critical of the former U.S. President George W. Bush’s hostility to hESC research, the restrictions forced scientists to think harder about how to make the most of alternatives. “It almost feels like hurray for George Bush,” she says. Sociologist Christine Hauskeller, Senior Research Fellow at the ESRC Centre for Genomics in Society, University of Exeter, UK, points out that many countries other than the United States sought alternatives to hESCs in the light of ethical objections. (Indeed, iPS was initially developed in Japan.)

It is a research landscape that continues to change in unforeseen ways as UK scientists drop hESC research in favor of the new promise offered by iPS. “In fieldwork in 2007 and 2008, we did not find a single embryonic stem cell laboratory that is not also working on iPS,” says Hauskeller.

Cloning isn’t the only area where scientists make wild predictions, of course. In his book Our Final Hour: A Scientist’s Warning, Sir Martin Rees predicts that “the odds are no better than fifty-fifty that our present civilisation on Earth will survive to the end of the present century.” Plainly, much more than science goes into an assessment of the risks posed by nuclear warfare, bio-terror, bio-error and environmental disaster.

“Of course,” agrees Rees. “I’m writing this book as a member of the human race,” not a representative of the Royal Society. But while his prediction involves judgments outside his area of expertise, it still carries the authority of science, as witnessed by the book’s subtitle. And it’s that very authority that could be undermined should these predictions fall flat.

Haran’s research shows that people are rather trusting of scientists’ visions of the future, while taking the predictions of journalists or of movies and other fictional media with a big pinch of salt. “Because of the high esteem in which scientists are held, it becomes very hard to mount a critique of their promises,” says Haran. And scientists want to keep it that way, it would seem. As an example, scientists complained after a New York Times article in 1980 warned readers not to hope for immediate miracles from research on interferons, arguing that such expressions of doubt by the press would affect their research funding, erode public trust in science, and make further progress impossible. Scientists defending their corner is understandable, says Haran, but it should be recognized that it can be at the expense of healthy skepticism.
The consequences of inflated expectations about what, and when, science can deliver may be felt by individuals, by society, and by science itself. Harold Varmus’s expert panel on gene therapy reported that overselling of the science by scientists and their sponsors “threatened confidence in the integrity of the field and may ultimately hinder progress toward successful application of gene therapy to human disease.” During the ensuing debate, David Valle, a pediatrician at Johns Hopkins Medical Institutions in Baltimore, was quoted as saying that one of his patients had stopped a restricted diet that could save his eyesight on the basis that “gene therapy is right around the corner.”

Predictions can also create a sense of haste and urgency that can impede cool, calm reflection on how to proceed at the policy level. Brown says this can create pressure to legislate before experts properly understand a new research path and its potential. “You’ve got to legislate before your international competitors do,” he says.

Christine Hauskeller cites recent amendments to the United Kingdom’s Human Fertilisation and Embryology Act, which make it legal to create human-animal hybrid embryos for use in stem cell research, following intense lobbying by scientists who argued that egg donations were insufficient to supply research needs. However, now that the law is in place, she says, the development of iPS, combined with unforeseen serious technical problems in making hybrid embryos and a lack of funding for the research, means that no scientists in the UK are actually working on them. “We now have a policy without a product,” she says.

This is not only a waste of financial and legal resources, she says, but it serves to narrow social and scientific possibilities. Indeed, she says, a promissory culture of science and technology can detract from the essence of scientific investigation: “If we already know what scientists must produce, then it’s not science—it’s called engineering.”

According to Brown, entire regulatory bodies have been established in anticipation of promising new research paths, which subsequently fail to deliver. The UK Xenotransplantation Interim Regulatory Authority (UKXIRA), for example, was set up in 1997 to oversee the development of animal-human organ and tissue transplantation. “Meanwhile, the science itself has been collapsing,” he says. “It’s proving to be unstable and unsound, and certainly not delivering what was expected.” UKXIRA was disbanded in 2006, without having granted a single license to conduct transplantation. “It met regularly to talk about…well…to kind of speculate really, but did little more than that,” says Brown.

Hilary Rose believes that an overemphasis on certain research trajectories, and overoptimistic expectations of what they can deliver, can obscure political and social solutions to problems. She cites the Science editorial that looked to the Human Genome Project as a solution to homelessness, an emphasis that might skew spending towards genetics and away from other, proven social services. “Which is going to create more public health—more health for more people—improvements in, for example, housing and nutritional status for people, or genetics? I think genetics might do some wonderful things for a tiny number of extremely sick people, but I don’t think it’s likely to do much good for the public health of the entire population.”

Perhaps the most worrying aspect of science’s promissory culture, for scientists at least, is that unmet promises, as the NIH gene therapy report suggested, might ultimately undermine public confidence. Presently, says Sarewitz, science seems “incredibly robust” to public skepticism. “Science budgets continue to grow, and science in the US is riding a crest of political legitimacy, of popularity, and I think that’s what’s encouraged this continual promise-making,” he says.

However, public opposition to GM in Europe is perhaps an indication that trust in science is not bulletproof. How many expert assurances or warnings must turn out to be conspicuously wrong for the authority of science and scientists to be diminished? “I do very much worry for the soul of science should there be a backlash,” says Sarewitz. “And I can’t see any feedbacks into the system right now that would encourage communities of scientists to be more circumspect in their claims about what the future will look like.”

References
1. D.E. Koshland, Jr., “Sequences and Consequences of the Human Genome,” Science, 246(4927), 1989.
2. S.H. Orkin, A.G. Motulsky, Report and Recommendations of the Panel to Assess the NIH Investment in Research on Gene Therapy, December 7, 1995; available online at http://www.nih.gov/news/panelrep.html