Monday, October 28, 2013



Problems with scientific research
How science goes wrong




Scientific research has changed the world. Now it needs to change itself
Oct 19th 2013 | From the print edition

[[There are a lot of articles with this subject, but due to the deserved reputation of the Economist I am putting this one on the blog. D. G.]]
A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
What a load of rubbish
Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.
One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012—more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.
Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
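A rough back-of-the-envelope calculation (mine, not the article's) shows how quickly those odds shorten: if each of n independent teams tests an effect that does not in fact exist at the conventional 5% significance level, the chance that at least one of them sees a spuriously "significant" result is 1 - 0.95^n.

```python
# Illustrative arithmetic (not from the article): probability that at least
# one of n independent teams testing a non-existent effect at the 5%
# significance level obtains a spuriously "significant" result.
def prob_at_least_one_false_positive(n_teams, alpha=0.05):
    return 1 - (1 - alpha) ** n_teams

for n in (1, 5, 10, 20):
    print(f"{n:>2} teams: {prob_at_least_one_false_positive(n):.0%}")
# 1 team: 5%; 5 teams: 23%; 10 teams: 40%; 20 teams: 64%
```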
Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.
The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.
If it’s broke, fix it
All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards. A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.
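One way they did so, sketched here in my own terms rather than anything the article spells out, is by shrinking the per-test significance threshold in proportion to the number of hypotheses tested, the Bonferroni-style logic behind the familiar genome-wide cutoff.

```python
# A minimal sketch (my illustration, not from the article) of a Bonferroni-style
# correction: when a study tests many hypotheses at once, the per-test
# significance threshold is tightened so that the chance of even one false
# positive across the whole study stays near the nominal level.
def bonferroni_threshold(family_wise_alpha, n_tests):
    return family_wise_alpha / n_tests

# e.g. a genome-wide scan of one million variants
print(bonferroni_threshold(0.05, 1_000_000))  # 5e-08, the usual genome-wide cutoff
```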
Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data also should be open for other researchers to inspect and test.
The most enlightened journals are already becoming less averse to humdrum papers. Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics. But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened—or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.
Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.


Sunday, October 6, 2013





Evolution, Speeded by Computation

‘Probably Approximately Correct’ Explores Nature’s Algorithms

Our daily lives are growing ever more dependent on algorithms, those omnipresent computational procedures that run programs on our laptops, our smartphones, our GPS devices and much else. Algorithms influence our decisions, too: when we choose a movie on Netflix or a book on Amazon, we are presented with recommendations churned out by sophisticated algorithms that take into account our past choices and those of other users determined (by still other algorithms) to be similar to us.
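To give a flavour of how such a recommender can work, here is a toy sketch of user-based collaborative filtering (my illustration; it is not a description of Netflix's or Amazon's actual systems): an unseen item is scored for a user by averaging other users' ratings, weighted by how similarly they have rated things in the past.

```python
# A toy sketch (my illustration, not Netflix's or Amazon's actual algorithms)
# of user-based collaborative filtering: score an unseen item for a user by
# averaging ratings from other users, weighted by similarity of past ratings.
import math

ratings = {                      # user -> {item: rating}
    "alice": {"A": 5, "B": 1, "C": 4},
    "bob":   {"A": 4, "B": 2, "C": 5, "D": 5},
    "carol": {"A": 1, "B": 5, "D": 1},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = math.sqrt(sum(ratings[u][i] ** 2 for i in common))
    norm_v = math.sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    peers = [(similarity(user, other), ratings[other][item])
             for other in ratings if other != user and item in ratings[other]]
    total = sum(w for w, _ in peers)
    return sum(w * r for w, r in peers) / total if total else None

print(predict("alice", "D"))  # leans toward Bob's 5, since Bob rates much like Alice
```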

The importance of these algorithms in the modern world is common knowledge, of course. But in his insightful new book “Probably Approximately Correct,” the Harvard computer scientist Leslie Valiant goes much further: computation, he says, is and has always been “the dominating force on earth within all its life forms.” Nature speaks in algorithms.
Dr. Valiant believes that phenomena like evolution, adaptation and learning are best understood in terms of “ecorithms,” a term he has coined for algorithms that interact with and benefit from their environment. Ecorithms are at play when children learn how to distinguish cats from dogs, when we navigate our way in a new city — but more than that, Dr. Valiant writes, when organisms evolve and when brain circuits are created and maintained.
Here is one way he illustrates this complex idea. Suppose we want to distinguish between two types of flowers by measuring their petals. For each petal we have two numbers, x for length and y for width. The task is to find a way to tell which type of flower a petal with given measurements x and y belongs to.
To achieve this, the algorithm is fed a set of examples meant to “train” it to come up with a good criterion for distinguishing the two flowers. The algorithm does not know the criterion in advance; it must “learn” it using the data that are fed to it.
So it starts with a hypothesis and tests it on the first example. (Say flower No. 1's petals can be described by the formula 2x - 3y > 2, while for flower No. 2 it's 2x - 3y < 2.) If a certain example is misclassified by the hypothesis, the hypothesis is updated by a precise rule and applied to the next example, and we proceed in this way: the learning algorithm literally learns as it works its way along.
A striking mathematical theorem is that if a rule separating the two flowers exists (within the class of criteria we are considering, such as linear inequalities), then our algorithm will find it after a finite number of steps, no matter what the starting hypothesis was.
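Here is a minimal sketch of that procedure in Python (my own illustration of a classic perceptron-style learner, not code from Dr. Valiant's book): the hypothesis is a linear criterion a*x + b*y > c, and whenever a training petal is misclassified the coefficients are nudged by a precise rule; the theorem above guarantees the updates eventually stop if the two flowers really are separable by such a rule.

```python
# A minimal sketch (my illustration, not from the book) of the learning rule
# the review describes: a perceptron that learns a linear criterion
# a*x + b*y > c for telling flower 1 from flower 2 by petal length x and width y.

def train(examples, passes=100):
    """examples: list of ((x, y), label) with label +1 for flower 1, -1 for flower 2."""
    a, b, c = 0.0, 0.0, 0.0           # starting hypothesis; any start will do
    for _ in range(passes):
        mistakes = 0
        for (x, y), label in examples:
            predicted = 1 if a * x + b * y - c > 0 else -1
            if predicted != label:     # misclassified: update by a precise rule
                a += label * x
                b += label * y
                c -= label
                mistakes += 1
        if mistakes == 0:              # every training petal now classified correctly
            break
    return a, b, c

# Toy petal measurements consistent with a rule like 2x - 3y > 2
petals = [((3.0, 1.0), +1), ((4.0, 1.5), +1), ((2.0, 2.0), -1), ((1.0, 1.5), -1)]
print(train(petals))
```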
And Dr. Valiant argues that similar mechanisms are at work in nature. An organism can adapt to a new environment by starting with a hypothesis about the environment, then testing it against new data and, based on the feedback, gradually improving the hypothesis by using an ecorithm, to behave more effectively.
“Probably Approximately Correct,” Dr. Valiant’s winsome title, is his quantitative framework for understanding how these ecorithms work. In nature, there is nothing so neat as our idealized flower algorithm; we cannot really hope to get a precise rule distinguishing between two types of flowers, but can hope only to have one that gives an approximate result with high probability.
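Stated in the standard textbook notation of PAC learning (not a quotation from the book), the guarantee behind the title is that, with probability at least 1 - δ, the learned hypothesis errs on new examples at most a fraction ε of the time.

```latex
% The standard PAC criterion (textbook notation, not quoted from the book):
% with confidence at least 1 - \delta, the learned hypothesis h is
% "approximately correct", i.e. its error on new examples is at most \epsilon.
\[
  \Pr\bigl[\, \mathrm{error}(h) \le \epsilon \,\bigr] \;\ge\; 1 - \delta ,
  \qquad 0 < \epsilon, \delta < 1 .
\]
```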
The evolution of species, as Darwin taught us, relies on natural selection. But Dr. Valiant argues that if all the mutations that drive evolution were simply random and equally distributed, it would proceed at an impossibly slow and inefficient pace.
Darwin’s theory “has the gaping gap that it can make no quantitative predictions as far as the number of generations needed for the evolution of a behavior of a certain complexity,” he writes. “We need to explain how evolution is possible at all, how we got from no life, or from very simple life, to life as complex as we find it on earth today. This is the BIG question.”
[[Shades of Thomas Nagel!! - D.G.]]
Dr. Valiant proposes that natural selection is supplemented by ecorithms, which enable organisms to learn and adapt more efficiently. Not all mutations are realized with equal probability; those that are more beneficial are more likely to occur. In other words, evolution is accelerated by computation.
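To make the contrast concrete, here is a toy simulation of my own (it is emphatically not Dr. Valiant's model): a bit-string "genome" must reach a fixed target, and a search that keeps only beneficial mutations, that is, one guided by feedback from the environment, gets there in a few hundred steps, while purely undirected mutation essentially never does within the same budget.

```python
# A toy contrast (my illustration, not Dr. Valiant's actual model): searching
# for a 50-bit target "genome" by (a) keeping every random mutation versus
# (b) keeping a mutation only when feedback says it is not harmful.
import random

random.seed(0)
N = 50
target = [random.randint(0, 1) for _ in range(N)]

def fitness(genome):
    """Number of positions that already match the target."""
    return sum(g == t for g, t in zip(genome, target))

def evolve(keep_only_beneficial, max_steps=100_000):
    genome = [random.randint(0, 1) for _ in range(N)]
    for step in range(1, max_steps + 1):
        i = random.randrange(N)
        mutant = genome[:]
        mutant[i] ^= 1                 # flip one random bit
        if not keep_only_beneficial or fitness(mutant) >= fitness(genome):
            genome = mutant            # accept the mutation
        if fitness(genome) == N:
            return step
    return None

print("undirected:", evolve(False))    # typically exhausts the budget (None)
print("with feedback:", evolve(True))  # typically a few hundred steps
```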
This is an ambitious proposal, sure to ignite controversy. But what I find so appealing about this discussion, and the book in general, is that Dr. Valiant fearlessly goes to the heart of the “BIG” questions. (I don’t know him, though we share an editor at Basic Books.) He passionately argues his case, but is also first to point out the parts of his theory that are incomplete, eager to anticipate and confront possible counterarguments. This is science at its best, driven not by dogma and blind belief, but by the desire to understand, intellectual integrity and reliance on facts.
Many other topics are discussed, from computational complexity to intelligence (artificial and not), and even an algorithmic perspective on teaching. The book is written in a lively, accessible style and is surprisingly entertaining. It’s funny how your perception of even mundane tasks can change after reading it — you start thinking algorithmically, confirming Dr. Valiant’s maxim that computer science is “more about humans than about computers.”
Edward Frenkel, a professor of mathematics at the University of California, Berkeley, is the author of the new book “Love and Math.”