Sunday, August 2, 2020

Can Philosophy be saved? by Susan Haack

Can Philosophy be saved? by Susan Haack - an extremely worthwhile paper on the limits of scientism and the decadence of contemporary philosophy. I was not able to convert the PDF, so here is the link: https://www.researchgate.net/publication/319990958_The_Real_Question_Can_Philosophy_be_Saved_2017

Monday, July 27, 2020

The Many Faces of Bad Science

CHRISTIE ASCHWANDEN

https://www.wired.com/story/the-many-faces-of-bad-science/

In his new book, psychologist Stuart Ritchie paints a portrait of the modern system of research, and all the ways it gets undermined.

IN 1942, SOCIOLOGIST Robert Merton described the ethos of science in terms of its four key values: The first, universalism, meant the rules for doing research are objective and apply to all scientists, regardless of their status. The second, communality, referred to the idea that findings should be shared and disseminated. The third, disinterestedness, described a system in which science is done for the sake of knowledge, not personal gain. And the final value, organized skepticism, meant that claims should be scrutinized and verified, not taken at face value. For scientists, wrote Merton, these were "moral as well as technical prescriptions."

In his new book, Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth, Stuart Ritchie endorses the above as a model for how science is meant to work. “By following the four Mertonian Norms, we should end up with a scientific literature that we can trust,” he writes. He then proceeds to spend the rest of the book explaining all the ways in which modern science fails to do just this.

Ritchie is a psychologist at King’s College London and the author of a previous book, Intelligence: All That Matters, about IQ testing. In Science Fictions he presents a broad overview of the problems facing science in the 21st century. The book covers everything from the replication crisis to fraud, bias, negligence and hype. Much of his criticism is aimed at his own field of psychology, but he also covers these issues as they occur in other fields such as medicine and biology.

Underlying most of these problems is a common issue: the fact that science, as he readily concedes, is “a social construct.” Its ideals are lofty, but it’s an enterprise conducted by humans, with all their foibles. To begin with, the system of peer-reviewed funding and publication is based on trust. Peer review is meant to look for errors or misinterpretations, but it’s done under the assumption that submitted data are genuine, and that the descriptions of the methods used to obtain them are accurate.

Ritchie recounts how in the 1970s, William Summerlin, a dermatologist at the Memorial Sloan-Kettering Cancer Center, used a black felt-tipped pen to fake a procedure in which he’d purported to graft the skin from a black mouse onto a white one. (He was caught by a lab tech who spotted the ink and rubbed it off with alcohol.) Fraudulent studies like Summerlin’s are not one-off events. A few recent examples that Ritchie cites are a researcher who was caught faking cloned embryos, another found to be misrepresenting results from trachea implant surgeries, and a third who fabricated data in a study purporting to show that door-to-door canvassing could shift people’s opinions on gay marriage. With the rise of digital photography, scientists have manipulated images to make their data comply with their expectations; one survey of the literature found signs of image duplication in about 4 percent of some 20,000 papers examined.

But even when they’re not committing fraud, scientists can easily be influenced by biases. One of the revelations to come from psychology’s reckoning with its replication problem is that standard statistical methods for preventing bias are in fact subject to manipulation, whether intentional or not. The most famous example of this is p-hacking, where researchers conduct their analysis in a way that produces a favorable p-value, a much-abused and misunderstood statistic that reveals something about the likelihood of getting the result you saw if there wasn’t actually a real effect. (Ritchie’s footnote for p-hacking links to my WIRED story about how the phrase has gone mainstream.)
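The mechanics behind p-hacking can be made concrete with a short simulation (a minimal sketch, not from Ritchie's book): under a true null hypothesis a p-value is uniformly distributed on [0, 1], so a researcher who measures many outcomes and reports only the most favorable one will cross the conventional 0.05 threshold far more often than 5 percent of the time.

```python
import random

def simulate_p_hacking(n_outcomes=20, n_sims=2000, alpha=0.05, seed=0):
    """Estimate how often at least one of several independent null
    comparisons comes out 'significant' at the alpha level.

    Rather than simulating full experiments, this uses the fact that
    under the null hypothesis a p-value is uniform on [0, 1].
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Draw one null p-value per measured outcome; a p-hacker
        # reports only the smallest of them.
        if min(rng.random() for _ in range(n_outcomes)) < alpha:
            hits += 1
    return hits / n_sims

# With one outcome the false-positive rate stays near 5 percent;
# with 20 outcomes to choose from it approaches 1 - 0.95**20, about 64 percent.
print(simulate_p_hacking(n_outcomes=1))
print(simulate_p_hacking(n_outcomes=20))
```

The simulation is deliberately simplistic, but it shows why flexible analysis choices alone can manufacture "significant" findings without any fraud.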

An overreliance on p-values helps explain the spread of studies showing “social priming,” where subtle or subconscious cues were said to have large effects on people’s behavior. For instance, one study claimed that when people read words associated with old people (like old or gray), it made them walk more slowly down a hallway afterwards. A functional bullshit meter would have flagged this finding, and many others like it, as suspicious; but when they’re wrapped in the language of science, with an authoritative p-value and the peer-review stamp of approval, they gain a measure of credibility.

Peer review is another process that Ritchie flags as flawed by human bias, as well as perverse incentives and fraud. (Rogue researchers have been caught in self-reviewing scams, as the book points out.) There’s also publication bias, wherein null results—i.e., experiments that end up finding no effect—are left out of journals on the whole. And then there’s media hype, often blamed on journalists even though Ritchie says it rarely starts with us. “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” he writes. Instead, studies have shown that overblown claims in press accounts often stem from those found in official releases from the researchers, their institutions, or the journals that published their results.

Ritchie also calls out scientists who write hype-filled books for the public. He singles out Berkeley neuroscientist Matthew Walker, asserting that Walker’s book, Why We Sleep, blatantly misinterprets the underlying science with claims that “the shorter you sleep, the shorter your life span,” and that sleeping less than six or seven hours per night demolishes your immune system and doubles your risk of cancer. “Both statements go against the evidence,” Ritchie says, pointing to independent researcher Alexey Guzey’s detailed takedown. “Walker could have written a far more cautious book that limited itself to just what the data shows, but perhaps such a book wouldn’t have sold so many copies or been hailed as an intervention that ‘should change science and medicine.’”

Hype-filled science books paper over the intricacies of real scientific practice, Ritchie writes. “By implying that complex phenomena have simple, singular causes and fixes, [they] contribute to an image of science as something it isn’t.” His own book offers a more sober account. Science Fictions presents a highly readable and competent description of the problems facing researchers in the 21st century, and it’s an excellent primer for anyone who wants to understand why and how science is failing to live up to its ideals.

At the same time, while Ritchie outlines some of the solutions that are being proposed, he offers very little about how these are being deployed and the challenges they’re facing. It’s a shame there’s no mention of projects within his own field, like the Psychological Science Accelerator, which facilitates collaboration between labs around the globe to increase the size and diversity of data sets. Ritchie’s field even has a whole organization, the Society for the Improvement of Psychological Science, that was formed to tackle issues like the ones he describes. There are rich stories to be told about the rise of a new cadre of researchers who are tackling these problems head-on, and the conflicts that arise from threats to the status quo—but those stories are beyond the scope of this book.

Ritchie’s ambition here is to convince the reader that science is not living up to its ideals, and in that he succeeds. Yet it’s not just the way scientists do science that needs revamping. The public’s view of science as a badge of unshakable truth could also use updating. This book illustrates the ways in which science is a fallible process for seeking truth.

That process can be difficult. Ritchie makes a point of acknowledging how hard it is to get things right, and the importance of correcting errors when they’re found. He puts his money where his mouth is, too, by offering a monetary reward to readers who alert him to objective errors in the book. It pains me to report there’s one in the book’s very first sentence—its first two words, even.

Here’s how Ritchie starts the preface: “January 31, 2011 was the day the world found out that undergraduate students have psychic powers.” He’s referencing a now-discredited paper by Daryl Bem that purported to show that ESP is real. Surely it would have been more accurate to say the world found this out on January 6, 2011, when The New York Times put out a front-page story about the finding; or maybe it was the day before, when that same story was posted to the newspaper’s website; or, at the very latest, it happened on January 27, 2011, when Bem discussed the work on a nationally televised episode of The Colbert Report.

So when did “the world” find out about Bem’s irreproducible result? It depends on how you define “the world” and “found out.” Did it happen the first time Bem talked to the media about his study? Or when it made the front page? Or was the true public unveiling when the paper was finally published in a journal? The exact day that Bem’s study came to the public’s attention isn’t crucial to the point that Ritchie is making here, yet the uncertainty itself may be illustrative. Even objective truths depend on human decisions and interpretations. Turns out that just as science is a social construct, so too is science criticism.

Thursday, July 2, 2020

'97% Of Climate Scientists Agree' Is 100% Wrong

https://www.forbes.com/sites/alexepstein/2015/01/06/97-of-climate-scientists-agree-is-100-wrong/#738d273c3f9f
Alex Epstein

Opinion
If you've ever expressed the least bit of skepticism about environmentalist calls for making the vast majority of fossil fuel use illegal, you've probably heard the smug response: “97% of climate scientists agree with climate change” — which always carries the implication: Who are you to challenge them?

The answer is: you are a thinking, independent individual--and you don’t go by polls, let alone second-hand accounts of polls; you go by facts, logic and explanation.

Here are two questions to ask anyone who pulls the 97% trick.

1. What exactly do the climate scientists agree on?

Usually, the person will have a very vague answer like "climate change is real."

Which raises the question: What is that supposed to mean? That climate changes? That we have some impact? That we have a large impact? That we have a catastrophically large impact? That we have such a catastrophic impact that we shouldn't use fossil fuels?

What you'll find is that people don't want to define what 97% agree on--because there is nothing remotely in the literature saying 97% agree we should ban most fossil fuel use.

It’s likely that 97% of people making the 97% claim have absolutely no idea where that number comes from.

If you look at the literature, the specific meaning of the 97% claim is: 97 percent of climate scientists agree that there is a global warming trend and that human beings are the main cause--that is, that we are over 50% responsible. The warming is a whopping 0.8 degrees over the past 150 years, a warming that has tapered off to essentially nothing in the last decade and a half.


Even if 97% of climate scientists agreed with this, and even if they were right, it in no way, shape, or form would imply that we should restrict fossil fuels--which are crucial to the livelihood of billions.


Because the actual 97% claim doesn’t even remotely justify their policies, catastrophists like President Obama and John Kerry take what we could generously call creative liberties in repeating this claim.

On his Twitter account, President Obama tweets: “Ninety-seven percent of scientists agree: #climate change is real, man-made and dangerous.” Not only does Obama sloppily equate “scientists” with “climate scientists,” but, more importantly, he adds “dangerous” to the 97% claim, which is not there in the literature.

This is called the fallacy of equivocation: using the same term (“97 percent”) in two different ways to manipulate people.

John Kerry pulled the same stunt when trying to tell the underdeveloped world that it should use fewer fossil fuels:

And let there be no doubt in anybody’s mind that the science is absolutely certain. . . . 97 percent of climate scientists have confirmed that climate change is happening and that human activity is responsible. . . . They agree that, if we continue to go down the same path that we are going down today, the world as we know it will change—and it will change dramatically for the worse.

In Kerry’s mind, 97% of climate scientists said whatever Kerry wants them to have said.

Bottom line: What the 97% of climate scientists allegedly agree on is very mild and in no way justifies restricting the energy that billions need.

But it gets even worse. Because it turns out that 97% didn’t even say that.

Which brings us to the next question:

2. How do we know the 97% agree?

To elaborate, how was that proven?

Almost no one who refers to the 97% has any idea, but the basic way it works is that a researcher reviews a lot of scholarly papers and classifies them by how many agree with a certain position.

Unfortunately, in the case of 97% of climate scientists agreeing that human beings are the main cause of warming, the researchers have engaged in egregious misconduct.

One of the main papers behind the 97 percent claim is authored by John Cook, who runs the popular website SkepticalScience.com, a virtual encyclopedia of arguments trying to defend predictions of catastrophic climate change from all challenges.

Here is Cook’s summary of his paper: “Cook et al. (2013) found that over 97 percent [of papers he surveyed] endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.”

This is a fairly clear statement—97 percent of the papers surveyed endorsed the view that man-made greenhouse gases were the main cause—main in common usage meaning more than 50 percent.

But even a quick scan of the paper reveals that this is not the case. Cook is able to demonstrate only that a relative handful endorse “the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.” Cook calls this “explicit endorsement with quantification” (quantification meaning 50 percent or more). The problem is, only a small percentage of the papers fall into this category; Cook does not say what percentage, but when the study was publicly challenged by economist David Friedman, one observer calculated that only 1.6 percent explicitly stated that man-made greenhouse gases caused at least 50 percent of global warming.

Where did most of the 97 percent come from, then? Cook had created a category called “explicit endorsement without quantification”—that is, papers in which the author, by Cook’s admission, did not say whether 1 percent or 50 percent or 100 percent of the warming was caused by man. He had also created a category called “implicit endorsement,” for papers that imply (but don’t say) that there is some man-made global warming and don’t quantify it. In other words, he created two categories that he labeled as endorsing a view that they most certainly didn’t.

The 97 percent claim is a deliberate misrepresentation designed to intimidate the public—and numerous scientists whose papers were classified by Cook protested:

“Cook survey included 10 of my 122 eligible papers. 5/10 were rated incorrectly. 4/5 were rated as endorse rather than neutral.”

—Dr. Richard Tol

“That is not an accurate representation of my paper . . .”

—Dr. Craig Idso

“Nope . . . it is not an accurate representation.”

—Dr. Nir Shaviv

“Cook et al. (2013) is based on a strawman argument . . .”

—Dr. Nicola Scafetta

Think about how many times you hear that 97 percent or some similar figure thrown around. It’s based on crude manipulation propagated by people whose ideological agenda it serves. It is a license to intimidate.

It’s time to revoke that license.

Alex Epstein is founder of the Center for Industrial Progress and author of The Moral Case for Fossil Fuels.

Wednesday, July 1, 2020

On Behalf Of Environmentalists, I Apologize For The Climate Scare

On behalf of environmentalists everywhere, I would like to formally apologize for the climate scare we created over the last 30 years. Climate change is happening. It’s just not the end of the world. It’s not even our most serious environmental problem. I may seem like a strange person to be saying all of this. I have been a climate activist for 20 years and an environmentalist for 30.

But as an energy expert asked by Congress to provide objective expert testimony, and invited by the Intergovernmental Panel on Climate Change (IPCC) to serve as expert reviewer of its next assessment report, I feel an obligation to apologize for how badly we environmentalists have misled the public.

Here are some facts few people know:

  • Humans are not causing a “sixth mass extinction”

  • The Amazon is not “the lungs of the world”

  • Climate change is not making natural disasters worse

  • Fires have declined 25 percent around the world since 2003

  • The amount of land we use for meat—humankind’s biggest use of land—has declined by an area nearly as large as Alaska

  • The build-up of wood fuel and more houses near forests, not climate change, explain why there are more, and more dangerous, fires in Australia and California

  • Carbon emissions are declining in most rich nations and have been declining in Britain, Germany, and France since the mid-1970s

  • The Netherlands became rich, not poor, while adapting to life below sea level

  • We produce 25 percent more food than we need and food surpluses will continue to rise as the world gets hotter

  • Habitat loss and the direct killing of wild animals are bigger threats to species than climate change

  • Wood fuel is far worse for people and wildlife than fossil fuels

  • Preventing future pandemics requires more not less “industrial” agriculture

I know that the above facts will sound like “climate denialism” to many people. But that just shows the power of climate alarmism.

In reality, the above facts come from the best-available scientific studies, including those conducted by or accepted by the IPCC, the Food and Agriculture Organization of the United Nations (FAO), the International Union for the Conservation of Nature (IUCN) and other leading scientific bodies.

Some people will, when they read this, imagine that I’m some right-wing anti-environmentalist. I’m not. At 17, I lived in Nicaragua to show solidarity with the Sandinista socialist revolution. At 23 I raised money for Guatemalan women’s cooperatives. In my early 20s I lived in the semi-Amazon doing research with small farmers fighting land invasions. At 26 I helped expose poor conditions at Nike factories in Asia.

I became an environmentalist at 16 when I threw a fundraiser for Rainforest Action Network. At 27 I helped save the last unprotected ancient redwoods in California. In my 30s I advocated renewables and successfully helped persuade the Obama administration to invest $90 billion into them. Over the last few years I helped save enough nuclear plants from being replaced by fossil fuels to prevent a sharp increase in emissions.

But until last year, I mostly avoided speaking out against the climate scare. Partly that’s because I was embarrassed. After all, I am as guilty of alarmism as any other environmentalist. For years, I referred to climate change as an “existential” threat to human civilization, and called it a “crisis.”

But mostly I was scared. I remained quiet about the climate disinformation campaign because I was afraid of losing friends and funding. The few times I summoned the courage to defend climate science from those who misrepresent it I suffered harsh consequences. And so I mostly stood by and did next to nothing as my fellow environmentalists terrified the public.

I even stood by as people in the White House and many in the news media tried to destroy the reputation and career of an outstanding scientist, good man, and friend of mine, Roger Pielke, Jr., a lifelong progressive Democrat and environmentalist who testified in favor of carbon regulations. Why did they do that? Because his research proves natural disasters aren’t getting worse.

But then, last year, things spiraled out of control.

Alexandria Ocasio-Cortez said “The world is going to end in 12 years if we don’t address climate change.” Britain’s most high-profile environmental group claimed “Climate Change Kills Children.”

The world’s most influential green journalist, Bill McKibben, called climate change the “greatest challenge humans have ever faced” and said it would “wipe out civilizations.” Mainstream journalists reported, repeatedly, that the Amazon was “the lungs of the world,” and that deforestation was like a nuclear bomb going off.

As a result, half of the people surveyed around the world last year said they thought climate change would make humanity extinct. And in January, one out of five British children told pollsters they were having nightmares about climate change. Whether or not you have children you must see how wrong this is. I admit I may be sensitive because I have a teenage daughter. After we talked about the science she was reassured. But her friends are deeply misinformed and thus, understandably, frightened. I thus decided I had to speak out. I knew that writing a few articles wouldn’t be enough. I needed a book to properly lay out all of the evidence.

And so my formal apology for our fear-mongering comes in the form of my new book, Apocalypse Never: Why Environmental Alarmism Hurts Us All. It is based on two decades of research and three decades of environmental activism. At 400 pages, with 100 of them endnotes, Apocalypse Never covers climate change, deforestation, plastic waste, species extinction, industrialization, meat, nuclear energy, and renewables.

Some highlights from the book:

  • Factories and modern farming are the keys to human liberation and environmental progress

  • The most important thing for saving the environment is producing more food, particularly meat, on less land

  • The most important thing for reducing air pollution and carbon emissions is moving from wood to coal to petroleum to natural gas to uranium

  • 100 percent renewables would require increasing the land used for energy from today’s 0.5 percent to 50 percent

  • We should want cities, farms, and power plants to have higher, not lower, power densities

  • Vegetarianism reduces one’s emissions by less than 4 percent

  • Greenpeace didn’t save the whales, switching from whale oil to petroleum and palm oil did

  • “Free-range” beef would require 20 times more land and produce 300 percent more emissions

  • Greenpeace dogmatism worsened forest fragmentation of the Amazon

  • The colonialist approach to gorilla conservation in the Congo produced a backlash that may have resulted in the killing of 250 elephants

Why were we all so misled?

In the final three chapters of Apocalypse Never I expose the financial, political, and ideological motivations. Environmental groups have accepted hundreds of millions of dollars from fossil fuel interests. Groups motivated by anti-humanist beliefs forced the World Bank to stop trying to end poverty and instead make poverty “sustainable.” And status anxiety, depression, and hostility to modern civilization are behind much of the alarmism.

Once you realize just how badly misinformed we have been, often by people with plainly unsavory or unhealthy motivations, it is hard not to feel duped. Will Apocalypse Never make any difference? There are certainly reasons to doubt it.

The news media have been making apocalyptic pronouncements about climate change since the late 1980s, and do not seem disposed to stop. The ideology behind environmental alarmism—Malthusianism—has been repeatedly debunked for 200 years and yet is more powerful than ever.

But there are also reasons to believe that environmental alarmism will, if not come to an end, have diminishing cultural power. The coronavirus pandemic is an actual crisis that puts the climate “crisis” into perspective. Even if you think we have overreacted, COVID-19 has killed nearly 500,000 people and shattered economies around the globe.

Scientific institutions including the World Health Organisation and IPCC have undermined their credibility through the repeated politicization of science. Their future existence and relevance depends on new leadership and serious reform. Facts still matter, and social media is allowing for a wider range of new and independent voices to outcompete alarmist environmental journalists at legacy publications.

Nations are reverting openly to self-interest and away from Malthusianism and neoliberalism, which is good for nuclear and bad for renewables. The evidence is overwhelming that our high-energy civilization is better for people and nature than the low-energy civilization that climate alarmists would return us to.

The invitations from the IPCC and Congress are signs of a growing openness to new thinking about climate change and the environment. Another has been the response to my book from climate scientists, conservationists, and environmental scholars. “Apocalypse Never is an extremely important book,” writes Richard Rhodes, the Pulitzer-winning author of The Making of the Atomic Bomb. “This may be the most important book on the environment ever written,” says Tom Wigley, one of the fathers of modern climate science.

“We environmentalists condemn those with antithetical views of being ignorant of science and susceptible to confirmation bias,” wrote the former head of The Nature Conservancy, Steve McCormick. “But too often we are guilty of the same. Shellenberger offers ‘tough love:’ a challenge to entrenched orthodoxies and rigid, self-defeating mindsets. Apocalypse Never serves up occasionally stinging, but always well-crafted, evidence-based points of view that will help develop the ‘mental muscle’ we need to envision and design not only a hopeful, but an attainable, future.”

That is all I hoped for in writing it. If you’ve made it this far, I hope you’ll agree that it’s perhaps not as strange as it seems that a lifelong environmentalist, progressive, and climate activist felt the need to speak out against the alarmism.

I further hope that you’ll accept my apology.

 

Michael Shellenberger is a Time Magazine “Hero of the Environment,” and president of Environmental Progress, an independent research and policy organization. He is the author of Apocalypse Never: Why Environmental Alarmism Hurts Us All. Follow him on Twitter @ShellenbergerMD.

Feature image: The author in Maranhão, Brazil in 1995.

Physics Needs Philosophy / Philosophy Needs Physics

Philosophy has always played an essential role in the development of science, physics in particular, and is likely to continue to do so

https://blogs.scientificamerican.com/observations/physics-needs-philosophy-philosophy-needs-physics/

By Carlo Rovelli on July 18, 2018
Contrary to claims about the irrelevance of philosophy for science, philosophy has always had, and still has, far more influence on physics than commonly assumed. A certain current anti-philosophical ideology has had damaging effects on the fertility of science. The recent momentous steps taken by experimental physics are all rebuttals of today's freely speculative attitude in theoretical physics. Empirical results such as the detection of the Higgs particle and gravitational waves, and the failure to detect super-symmetry where many expected it, question the validity of philosophical assumptions common among theoretical physicists, inviting us to engage in a clearer philosophical reflection on scientific method.

Against Philosophy is the title of a chapter of a book by one of the great physicists of the last generation: Steven Weinberg.1 Weinberg argues eloquently that philosophy is more damaging than helpful for physics—it is often a straitjacket that physicists have to free themselves from. Stephen Hawking famously wrote that “philosophy is dead” because the big questions that used to be discussed by philosophers are now in the hands of physicists.2 Neil deGrasse Tyson publicly stated: “…we learn about the expanding universe, … we learn about quantum physics, each of which falls so far out of what you can deduce from your armchair that the whole community of philosophers … was rendered essentially obsolete.”3 I disagree. Philosophy has always played an essential role in the development of science, physics in particular, and is likely to continue to do so.

This is a long-standing debate. An early delightful chapter of the debate was played out in Athens during its classical period. At the time, the golden youth of the city were educated in famous schools. Two stood out: the school of Isocrates, and the Academy, founded by a certain Plato. The rivalry between the two was not just about quality: their approach to education was different. Isocrates offered a high-level practical education, teaching the youth of Athens the skills and knowledge directly required to become politicians, lawyers, judges, architects and so on. The Academy focused on discussing general questions about foundations: What is justice? What would be the best laws? What is beauty? What is matter made of? And Plato had invented a good name for this way of posing problems: “philosophy.”

Isocrates' criticisms of Plato’s approach to education and knowledge were direct and remarkably like the claim by those contemporary scientists who argue that philosophy has no role to play in science: “Those who do philosophy, who determine the proofs and the arguments … and are accustomed to enquiring, but take part in none of their practical functions, … even if they happen to be capable of handling something, they automatically do it worse, whereas those who have no knowledge of the arguments [of philosophy], if they are trained [in concrete sciences] and have correct opinions, are altogether superior for all practical purposes. Hence for sciences, philosophy is entirely useless.”4

As it happened, a brilliant young student in Plato’s school wrote a short work in response to Isocrates’ criticisms: the Protrepticus, a text that became famous in antiquity. The bright young fellow who authored the pamphlet later left Athens, but eventually returned to open his own school, and had quite a career. His name was Aristotle. Two millennia of development of the sciences and philosophy have vindicated and, if anything, strengthened Aristotle’s defense of philosophy against Isocrates’ accusations of futility. His arguments are still relevant and we can take inspiration from them to reply to the current claims that philosophy is useless to physics.

The first of Aristotle’s arguments is the fact that general theory supports and happens to be useful for the development of practice. Today, after a couple of millennia during which both philosophy and science have developed considerably, historical evidence regarding the influence of philosophy on science is overwhelming.

Here are a few examples of this influence, from astronomy and physics. Ancient astronomy—that is, everything we know about the Earth being round, its size, the size of the moon and the sun, the distances to the moon and the sun, the motion of the planets in the sky and the basis from which modern astronomy and modern physics have emerged—is a direct descendent of philosophy. The questions that motivated these developments were posed in the Academy and the Lyceum, motivated by theoretical, rather than practical concerns. Centuries later, Galileo and Newton took great steps ahead but they relied heavily on what had come before.5 They extended previous knowledge, reinterpreting, reframing, and building upon it. Galileo's work would have been inconceivable without Aristotelian physics. Newton was explicit about his debt to ancient philosophy, Democritus in particular, for ideas that arose originally from philosophical motivations, such as the notions of empty space, atomism and natural rectilinear motion. His crucial discussion about the nature of space and time built upon his discussions with (and against) Descartes.

In the 20th century, both major advances in physics were strongly influenced by philosophy. Quantum mechanics springs from Heisenberg’s intuition, grounded in the strongly positivist philosophical atmosphere in which he found himself: one gets knowledge by restricting oneself to what is observable. The abstract of Heisenberg’s 1925 milestone paper on quantum theory is explicit about this: “The aim of this work is to set the basis for a theory of quantum mechanics based exclusively on relations between quantities that are in principle observable.”6 The same distinctly philosophical attitude nourished Einstein’s discovery of special relativity: by restricting ourselves to what is observable, we recognize that the notion of simultaneity is misleading. Einstein explicitly recognized his debt to the philosophical writings of Mach and Poincaré. The philosophical influences on the conception of general relativity were even stronger. Once again, he was explicit in recognizing his debt to the philosophical arguments of Leibniz, Berkeley and Mach. Einstein claimed that even Schopenhauer had had a pervasive influence on him. Schopenhauer’s ideas on time and representation are perhaps not so hard to recognize in the ideas that led Einstein to general relativity.7 Can it really be a coincidence that, in his younger days, the greatest physicist of the twentieth century should have had such a clear focus on philosophy,8 reading Kant’s three Critiques when he was 15?

Why this influence? Because philosophy provides methods leading to novel perspectives and critical thinking. Philosophers have tools and skills that physics needs but that do not belong to the physicist’s training: conceptual analysis, attention to ambiguity, accuracy of expression, the ability to detect gaps in standard arguments, to devise radically new perspectives, to spot conceptual weak points, and to seek out alternative conceptual explanations. Nobody puts this better than Einstein himself: “A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is—in my opinion—the mark of distinction between a mere artisan or specialist and a real seeker after truth.”9 It is sometimes said that scientists do not do anything unless they first get permission from philosophy. If we read what the greatest scientists had to say about the usefulness of philosophy, physicists like Heisenberg, Schrödinger, Bohr and Einstein, we find opinions opposite to those of Hawking and Weinberg.

Here is a second argument due to Aristotle: Those who deny the utility of philosophy are doing philosophy. The point is less trivial than it may sound at first. Weinberg and Hawking have obtained important scientific results. In doing this, they were doing science. In writing things like “philosophy is useless to physics,” or “philosophy is dead,” they were not doing physics. They were reflecting on the best way to develop science. The issue is the methodology of science: a central concern in the philosophy of science is to ask how science is done and how it could be done to be more effective. Good scientists reflect on their own methodology, and it is appropriate that Weinberg and Hawking have done so too. But how? They express a certain idea about the methodology of science. Is this the eternal truth about how science has always worked and should work? Is it the best understanding of science we have at present?

It is neither. In fact, it is not difficult to trace the origins of their ideas: they arise from the background of logical positivism, as corrected by Popper and Kuhn. The currently dominant methodological ideology in theoretical physics relies on their notions of falsifiability and scientific revolution; these are often referred to, and are used to orient research and evaluate scientific work.

Hence, in declaring the uselessness of philosophy, Weinberg, Hawking and other “anti-philosophical” scientists are in fact paying homage to the philosophers of science they have read, or whose ideas they have absorbed from their environment. The imprint is unmistakable. The dismissal of philosophy as an ensemble of pseudo-statements, words that resemble statements but have no proper meaning, of the kind recurrent for instance in the way Neil deGrasse Tyson mocks philosophy, is easily traced to the Vienna Circle’s anti-metaphysical stance.10 Behind these anathemas against “philosophy,” one can almost hear the Vienna Circle’s slogan: “no metaphysics!”

Thus, when Weinberg and Hawking state that philosophy is useless, they are actually stating their adherence to a particular philosophy of science.

In principle, there's nothing wrong with that; but the problem is that it is not a very good philosophy of science. On the one hand, Newton, Maxwell, Boltzmann, Darwin, Lavoisier and so many other major scientists worked within a different methodological perspective, and did pretty good science as well. On the other hand, philosophy of science has advanced since Carnap, Popper and Kuhn, recognizing that the way science effectively works is richer and more subtle than the way it was portrayed in the analysis of these thinkers. Weinberg and Hawking’s error is to mistake a particular, historically circumscribed, limited understanding of science for the eternal logic of science itself.

The weakness of their position is the lack of awareness of its frail historical contingency. They present science as a discipline with an obvious and uncontroversial methodology, as if this had been the same from Bacon to the detection of gravitational waves, or as if it was completely obvious what we should be doing and how we should be doing it when we do science.

Reality is different. Science has repeatedly redefined its own understanding of itself, along with its goals, its methods, and its tools. This flexibility has played a major role in its success. Let us consider a few examples from physics and astronomy. In light of Hipparchus and Ptolemy’s extraordinarily successful predictive theories, the goal of astronomy was to find the right combination of circles to describe the motion of the heavenly bodies around the Earth. Contrary to expectations, it turned out that Earth was itself one of the heavenly bodies. After Copernicus, the goal appeared to be to find the right combination of moving spheres that would reproduce the motion of the planets around the Sun. Contrary to expectations, it turned out that abstract elliptical trajectories were better than spheres. After Newton, it seemed clear that the aim of physics was to find the forces acting on bodies. Contrary to this, it turned out that the world could be better described by dynamical fields rather than bodies. After Faraday and Maxwell, it was clear that physics had to find laws of motion in space, as time passes. Contrary to assumptions, it turned out that space and time are themselves dynamical. After Einstein, it became clear that physics must only search for the deterministic laws of Nature. But it turned out that we can at best give probabilistic laws. And so on. Here are some sliding definitions for what scientists have thought science to be: deduction of general laws from observed phenomena, finding out the ultimate constituents of Nature, accounting for regularities in empirical observations, finding provisional conceptual schemes for making sense of the world. (The last one is the one I like.) Science is not a project with a methodology written in stone, or a fixed conceptual structure. It is our ever-evolving endeavor to better understand the world. 
In the course of its development, it has repeatedly violated its own rules and its own stated methodological assumptions.

A currently common description of what scientists do is collecting data and making sense of them in the form of theories. As time goes by, new data are acquired and theories evolve. In this picture scientists are depicted as rational beings who play this game using their intelligence, a specific language, and a well-established cultural and conceptual structure. The problem with this picture is that conceptual structures evolve as well. Science is not simply an increasing body of empirical information and a sequence of changing theories. It is also the evolution of our own conceptual structure. It is the continuous search for the best conceptual structure for grasping the world, at a given level of knowledge. The modification of the conceptual structure needs to be achieved from within our own thinking, rather as a sailor must rebuild his own boat while sailing, to use the beautiful simile of Otto Neurath so often quoted by Quine.11

This intertwining of learning and conceptual change and this evolution of methodology and objectives have developed historically in a constant dialogue between practical science and philosophical reflection. The views of scientists, whether they like it or not, are impregnated by philosophy.

And here we come back to Aristotle: Philosophy provides guidance on how research should be done. Not because philosophy can offer a final word about the right methodology of science (contrary to the philosophical stance of Weinberg and Hawking), but because the scientists who deny the role of philosophy in the advancement of science are those who think they have already found the final methodology, that they have already exhausted and answered all methodological questions. They are consequently less open to the conceptual flexibility needed to move ahead. They are the ones trapped in the ideology of their time.

One reason for the relative sterility of theoretical physics over the last few decades may well be precisely that the wrong philosophy of science is held dear today by many physicists. Popper and Kuhn, popular among theoretical physicists, have shed light on important aspects of the way good science works, but their picture of science is incomplete and I suspect that, taken prescriptively and uncritically, their insights have ended up misleading research.

Kuhn’s emphasis on discontinuity and incommensurability has misled many theoretical and experimental physicists into undervaluing the formidable cumulative aspects of scientific knowledge. Popper’s emphasis on falsifiability, originally a demarcation criterion, has been flatly misinterpreted as an evaluation criterion. The combination of the two has given rise to disastrous methodological confusion: the idea that past knowledge is irrelevant when searching for new theories, that all unproven ideas are equally interesting and all unmeasured effects are equally likely to occur, and that the work of a theoretician consists in pulling arbitrary possibilities out of the blue and developing them, since anything that has not yet been falsified might in fact be right.

This is the current “why not?” ideology: any new idea deserves to be studied, just because it has not yet been falsified; any idea is equally probable, because a step further ahead on the knowledge trail there may be a Kuhnian discontinuity that was not predictable on the basis of past knowledge; any experiment is equally interesting, provided it tests something as yet untested.

I think that this methodological philosophy has given rise to much useless theoretical work in physics and many useless experimental investments. Arbitrary jumps in the unbounded space of possibilities have never been an effective way to do science. The reason is twofold: first, there are too many possibilities, and the probability of stumbling on a good one by pure chance is negligible; more importantly, nature always surprises us and we, limited critters, are far less creative and imaginative than we may think. When we proudly consider ourselves to be “speculating widely,” we are mostly playing out rearrangements of old tunes: true novelty that works is not something we can just find by guesswork.

The radical conceptual shifts and the most unconventional ideas that have actually worked have indeed always been historically motivated, almost forced, either by the overwhelming weight of new data or by a well-informed analysis of the internal contradictions within existing, successful theories. Science works through continuity, not discontinuity.

Examples of the first case (novelty forced by data) are Kepler’s ellipses and quantum theory. Kepler did not just “come out with the idea” of ellipses: nature had to splash ellipses on his face before he could see them. He was using ellipses as an approximation for the deferent-epicycle motion of Mars and was astonished to find that the approximation worked better than his model.12 Similarly, atomic physicists of the early 20th century struggled long and hard against the idea of discontinuities in the basic laws, doing everything they could to avoid accepting the clear message from spectroscopy, that is, that there was actually discontinuity in the very heart of mechanics. In both instances, the important new idea was forced by data.

Examples of the second case (radical novelty from old theories) are the heliocentric system and general relativity. Neither Copernicus nor Einstein relied significantly on new data. But neither did their ideas come out of the blue. They both started from an insightful analysis of successful, well-established theories: Ptolemaic astronomy in the one case; Newtonian gravity and special relativity in the other. The contradictions and unexplained coincidences they found in these opened the way to a new conceptualization.

It is not fishing out un-falsified theories, and testing them, that brings results. Rather, it is a sophisticated use of induction, building upon a vast and ever growing accumulation of empirical and theoretical knowledge, that provides the hints we need to move ahead. It is by focusing on empirically successful insights that we move ahead. Einstein’s “relativity” was not a “new idea”: it was Einstein’s realization of the extensive validity of Galilean relativity. There was no discontinuity: in fact it was continuity at its best. It was Einstein’s insightful “conservatism” in the face of those who were too ready to discard the relativity of velocity, just because of Maxwell’s equations.

I think this lesson is missed by much contemporary theoretical physics, where plenty of research directions are too quick to discard what we have already found out about Nature.

Three major empirical results have marked recent fundamental physics: gravitational waves, the Higgs boson, and the absence of super-symmetry at the LHC. All three are confirmations of old physics and disconfirmations of widespread speculation. In all three cases, Nature is telling us: do not speculate so freely. So let’s look more closely at these examples.

The detection of gravitational waves, recognized by the most recent Nobel Prize in Physics, has been a radical confirmation of century-old general relativity. The recent nearly simultaneous detection of gravitational and electromagnetic signals from the merger of two neutron stars (GW170817) has improved our knowledge of the ratio between the speeds of propagation of gravity and electromagnetism by something like 14 orders of magnitude in a single stroke.13 One consequence of this momentous increase in our empirical knowledge has been to rule out a great many theories put forward as alternatives to general relativity, ideas that have been studied by a large community of theoreticians over the last decades, confirming instead the century-old general relativity as the best theory of gravity available at present.
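The scale of that bound can be checked with back-of-envelope arithmetic. In this sketch, the ~130 million light-year distance and the ~1.7-second gamma-ray delay are round numbers of the kind quoted in the GW170817 reports; the exact figures are purely illustrative:

```python
# Round numbers of the kind reported for GW170817 (illustrative only):
distance_ly = 1.3e8        # ~130 million light-years to the host galaxy
delay_s = 1.7              # gamma rays arrived ~1.7 s after the gravitational waves
seconds_per_year = 3.156e7

light_travel_time_s = distance_ly * seconds_per_year

# If the whole delay were attributed to a difference in propagation speed,
# the fractional difference between the two speeds would be at most:
bound = delay_s / light_travel_time_s
print(f"|v_gw - c| / c  <~  {bound:.1e}")   # roughly 4e-16
```

A delay of seconds accumulated over a hundred million years of travel: that is how a single observation can tighten a bound by many orders of magnitude at once.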

The well-publicized detection of the Higgs particle at CERN has confirmed the Standard Model as the best current theory for high-energy physics, against scores of later alternatives that have long been receiving much attention.

But CERN's emphasis on the discovery of the Higgs when the Large Hadron Collider became operational has also served to hide the true surprise: the absence of super-symmetric particles where a generation of theoretical physicists had been expecting to find them. Despite rivers of ink and flights of fancy, the minimal super-symmetric model suddenly finds itself in difficulty. So once again, Nature has seriously rebuffed the free speculations of a large community of theoretical physicists who ended up firmly believing them.

Nature's repeated snub of the current methodology in theoretical physics should encourage a certain humility, rather than arrogance, in our philosophical attitude.

Part of the problem is precisely that the dominant ideas of Popper and Kuhn (perhaps not even fully digested) have misled current theoretical investigations. Physicists have been too casual in dismissing the insights of successful established theories. Misled by Kuhn’s insistence on incommensurability across scientific revolutions, they fail to build on what we already know, which is how science has always moved forward. A good example of this is the disregard for general relativity’s background independence in many attempts to incorporate gravity into the rest of fundamental physics.

Similarly, the emphasis on falsifiability has made physicists blind to a fundamental aspect of scientific knowledge: the fact that credibility has degrees and that reliability can be extremely high, even when it is not absolute certainty. This has a doubly negative effect: considering the insights of successful theories as irrelevant for progress in science (because “they could be falsified tomorrow”), and failing to see that a given investigation may have little plausibility even if it has not yet been falsified.

The scientific enterprise is founded on degrees of credibility, which are constantly updated on the basis of new data or new theoretical developments. Bayesian accounts of confirmation have received much recent attention in the philosophy of science, but they are largely ignored in the theoretical physics community, with negative effects, in my opinion.14
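The idea of credibility as a degree that is updated by evidence can be made concrete with a toy application of Bayes’ rule. All numbers here are invented for illustration; this is not a model of any particular theory:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one observation (Bayes' rule)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# A theory starts out as a long shot...
credibility = 0.01
# ...but each successful risky prediction (likely if the theory is true,
# unlikely otherwise) raises its credibility, without ever reaching certainty.
for _ in range(3):
    credibility = update(credibility, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(round(credibility, 3))  # prints 0.88
```

Credibility rises steeply but never reaches 1: a well-confirmed theory remains, in principle, revisable, while an unfalsified theory with no successful predictions simply keeps its low prior.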

What I intend here is not a criticism of Popper and Kuhn, whose writings are articulate and obviously insightful. What I am pointing out is that a simple-minded version of their outlooks has been taken casually by many physicists as the ultimate word on the methodology of science.

Far from being immune from philosophy, current physics is deeply affected by it. But the lack of the philosophical awareness needed to recognize this influence, and the refusal to listen to philosophers who try to remedy it, are a source of weakness for physics.

Here is one last argument from Aristotle: More in need of philosophy are the sciences where perplexities are greater.

Today fundamental physics is in a phase of deep conceptual change, because of the success of general relativity and quantum mechanics and the open “crisis” (in the sense of Kuhn, I would rather say “opportunity”) generated by the current lack of an accepted quantum theory of gravity. This is why some scientists, including myself, working as I do on quantum gravity, are more acutely aware of the importance of philosophy for physics. Here is a list of topics currently discussed in theoretical physics: What is space? What is time? What is the “present”? Is the world deterministic? Do we need to take the observer into account to describe nature? Is physics better formulated in terms of a “reality” or in terms of “what we observe,” or is there a third option? What is the quantum wave function? What exactly does “emergence” mean? Does a theory of the totality of the universe make sense? Does it make sense to think that physical laws themselves might evolve? It is clear to me that input from past and current philosophical thinking cannot be disregarded in addressing these topics.

In loop quantum gravity, my own technical area, Newtonian space and time are reinterpreted as a manifestation of something which is granular, probabilistic and fluctuating in a quantum sense. Space, time, particles and fields get fused into a single entity: a quantum field that does not live in space or time. The variables of this field acquire definiteness only in interactions between subsystems. The fundamental equations of the theory have no explicit space or time variables. Geometry appears only in approximations. Objects exist within approximations. Realism is tempered by a strong dose of relationalism. I think we physicists need to discuss with philosophers, because I think we need help in making sense of all this.

To be fair, some manifestations of anti-philosophical attitudes in scientific circles are also a reaction to anti-scientific attitudes in some areas of philosophy and other humanities. In the post-Heideggerian atmosphere that dominates some philosophy departments, ignorance of science is something to exhibit with pride. Just as the best science listens keenly to philosophy, so the best philosophy listens keenly to science. This has certainly been so in the past: from Aristotle and Plato to Descartes, Hume, Kant, Husserl and Lewis, the best philosophy has always been closely tuned in to science. No great philosopher of the past would ever have thought for a moment of not taking seriously the knowledge of the world offered by the science of their times.

Science is an integral and essential part of our culture. It is far from being capable of answering all the questions we ask, but it is an extremely powerful tool. Our general knowledge is the result of the contributions from vastly different domains, from science to philosophy, all the way to literature and the arts, and our capacity to integrate them.

Those philosophers who discount science, and there are many of them, do a serious disservice to intelligence and civilization. When they claim that entire fields of knowledge are impermeable to science, and that they are the ones who know better, they remind me of two little old men on a park bench: “Aaaah," says one, his voice shaking, "all these scientists who claim they can study consciousness, or the beginning of the universe.” “Ohh," says the other, "how absurd! Of course they can't understand these things. We do!”

Notes

1. Steven Weinberg, Dreams of a Final Theory, Chapter VII (Vintage, 1994).
2. Stephen Hawking, The Grand Design (Bantam, 2012).
3. In https://www.youtube.com/watch?v=ltbADstPdek, at time 1:03.
4. Isocrates, quoted in Iamblichus, Protrepticus, VI 37.22–39.8 (de Gruyter, 1996).
5. C. Rovelli, “Aristotle’s Physics: A Physicist’s Look,” Journal of the American Philosophical Association, 1 (2015), 23–40, arXiv:1312.4057.
6. W. Heisenberg, “Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen,” Zeitschrift für Physik 33 (1925), no. 1, 879–893.
7. D. Howard, “A Peek behind the Veil of Maya: Einstein, Schopenhauer, and the Historical Background of the Conception of Space as a Ground for the Individuation of Physical Systems,” in The Cosmos of Science: Essays of Exploration, John Earman and John D. Norton, eds., Pittsburgh-Konstanz Series in the Philosophy and History of Science, vol. 6 (Pittsburgh: University of Pittsburgh Press; Konstanz: Universitätsverlag, 1997), 87–150.
8. D. Howard, “‘A kind of vessel in which the struggle for eternal truth is played out’: Albert Einstein and the Role of Personality in Science,” in The Natural History of Paradigms: Science and the Process of Intellectual Evolution, John H. Langdon and Mary E. McGann, eds. (Indianapolis: University of Indianapolis Press, 1994), 111–138.
9. A. Einstein, letter to Robert A. Thornton, 7 December 1944, EA 61-574, in The Collected Papers of Albert Einstein (Princeton, NJ: Princeton University Press, 1986–present).
10. R. Carnap, “Überwindung der Metaphysik durch logische Analyse der Sprache,” Erkenntnis, vol. 2, 1932 (English translation: “The Elimination of Metaphysics Through Logical Analysis of Language,” in Sahotra Sarkar, ed., Logical Empiricism at Its Peak: Schlick, Carnap, and Neurath, New York: Garland, 1996, pp. 10–31).
11. W. V. O. Quine, Word and Object (Cambridge, Mass.: MIT Press, 2015).
12. Johannes Kepler, Astronomia Nova, translated by William H. Donahue (Cambridge: Cambridge University Press, 1992).
13. B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), “GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral,” Physical Review Letters 119 (16), 2017; “Multi-messenger Observations of a Binary Neutron Star Merger,” The Astrophysical Journal 848 (2), 2017.
14. The worst episode of this misunderstanding is the confusion between the (strong) common-sense notion of ‘confirmation’ and the (weak) Bayesian notion of ‘confirmation’ that has driven the controversy over Richard Dawid’s work on non-empirical confirmation [R. Dawid, String Theory and the Scientific Method (Cambridge University Press, 2013)]. An attempt to study the actual source of (possibly unjustified) confidence in a theory has been trumpeted by scientists as a proof of validity.

Sunday, March 1, 2020

The evidence for evidence-based therapy is not as clear as we thought


Alexander Williams is programme director of psychology and director of the Psychological Clinic, both at the University of Kansas, Edwards Campus.
John Sakaluk is assistant professor in psychology at the University of Victoria, British Columbia.
Edited by Christian Jarrett

Over the past decade, many scholars have questioned the credibility of research across a variety of scientific fields. Some of these concerns arise from cases of outright fraud or other misconduct. More troubling are difficulties in replicating previous research findings. Replication is cast as a cornerstone of science: we can trust the results originating in one lab only if other labs can follow similar procedures and get similar results. But in many areas of research – including psychology – scientists have found that too often they cannot replicate prior findings.
As psychologists specialising in clinical work (Alexander Williams) and methodology (John Sakaluk), we wondered what these concerns mean for psychotherapy. Over the past 50 years, therapy researchers have increasingly embraced the evidence-based practice movement. Just as medicines are pitted against placebos in research studies, psychologists have used randomised clinical trials to test whether certain therapies (eg, ‘exposure therapy’, or systematically confronting what one fears) benefit people with certain mental-health conditions (eg, a phobia of spiders). The treatment-for-diagnosis combinations that have amassed evidence from these trials are known as empirically supported treatments (ESTs).
We wondered, though: is the credibility of the evidence for ESTs as strong as that designation suggests? Or does the evidence-base for ESTs suffer from the same problems as published research in other areas of science? This is what we (with our coauthors, the US psychologists Robyn Kilshaw and Kathleen T Rhyner) explored in our study published recently in the Journal of Abnormal Psychology.
The Society of Clinical Psychology – or Division 12 of the American Psychological Association – has done the arduous work since the 1990s of establishing a list of more than 70 ESTs. They have continued to update the ESTs listed, and the evidence cited for them, to the present day. We conducted a ‘meta-scientific review’ of these ESTs. Across a variety of statistical metrics, we assessed the credibility of the evidence cited by the Society for every EST on their list. We examined measures related to statistical power, which indicates the plausibility of the reported data given the sample sizes of the experiments. We computed Bayesian indices of evidence that show how probable the results were, assuming the therapies actually helped those receiving them. We even looked at rates of misreported statistics – if a study reports, say, ‘2 + 2 = 5’, we know that there must be a problem with at least some of the numbers. All told, we analysed more than 450 research articles. What we found is a study in contrasts.
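One simple flavour of such an arithmetic consistency check is in the spirit of the GRIM test for reported means of integer-valued measures. This is a sketch of the general idea, not necessarily one of the metrics used in the study: a mean reported from a sample of n integer responses must equal some whole number divided by n.

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Check whether a mean reported to `decimals` places is achievable
    as (sum of integer responses) / n for a sample of size n."""
    # The achievable means are k/n for integer k; find the closest one
    # and compare after rounding to the reported precision.
    k = round(reported_mean * n)
    return round(k / n, decimals) == round(reported_mean, decimals)

# A mean of 5.19 from 28 integer-valued responses is impossible:
print(grim_consistent(5.19, 28))  # False
# ...whereas 5.18 is achievable (145 / 28 = 5.1785... -> 5.18):
print(grim_consistent(5.18, 28))  # True
```

Like the ‘2 + 2 = 5’ example, a failed check does not say which number is wrong, only that the reported numbers cannot all be right.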
Around 20 per cent of ESTs performed well across a majority of our metrics (eg, problem-solving therapy for depression, interpersonal psychotherapy for bulimia nervosa, the aforementioned exposure therapy for specific phobias). This means not only that the therapies have been subjected to clinical trials, but that the evidence produced from these clinical trials seems credible and supports the claim that the EST will help people. We also found a ‘murky middle’: 30 per cent of ESTs had mixed results across metrics, performing neither consistently well nor poorly (eg, cognitive therapy for depression, interpersonal psychotherapy for binge-eating disorder).
That leaves 50 per cent of ESTs with subpar outcomes across most of our metrics (eg, eye-movement desensitisation and reprocessing for PTSD, interpersonal psychotherapy for depression). In other words, although these ESTs seemed to work based on the claims of the clinical trials cited by the Society of Clinical Psychology, we found the evidence from these trials lacked statistical credibility. For these ESTs, the relevant research results are sufficiently ambiguous that we cannot be sure that they really do work better than other forms of therapy.
There is a large, dense body of literature showing that psychotherapy usually helps those who seek it out. Our results don’t challenge that conclusion. What does it mean, though, if the evidence behind the therapies thought to be best supported by research is not as strong as one would hope?
One conclusion we draw is that we might be in need of what we’re calling ‘psychological reversal’. The term, a version of what the US medical scholars Vinay Prasad and Adam Cifu called medical reversal, argues for desisting from the use of psychological practices if they are found to be ineffective, inadvertently harmful or more expensive to employ than equally effective alternatives. If some ESTs lack credible evidence that they are superior to simpler, less costly and time-consuming forms of therapy, shifting resources towards the latter group of treatments will benefit therapy clients and all those bearing the costs of mental-health care.
The other conclusion is a lesson in humility for those who provide therapy (one of the authors of this article among them). For close to a century, psychologists have debated the ‘dodo bird hypothesis’. Deriving its name from the proclamation of the Dodo Bird in Alice in Wonderland (‘Everybody has won and all must have prizes!’), the dodo bird hypothesis suggests that different forms of psychotherapy perform equally well, and that this is because of the common factors of all therapies (eg, they all provide clients with a rationale for the therapy). The existence of ESTs seems to refute the hypothesis, demonstrating that some therapies do work better than others for certain mental-health conditions. We put forward a different possibility: the ‘do not know’ bird hypothesis. Given the problems with credibility we found across many clinical trials, we contend that we currently do not know in many cases if some therapies perform better than others. Of course, this also means we do not know if the majority of therapies are equally effective, and, if such equality exists, we do not know if it owes to common factors. When it comes to comparing psychotherapies, therapists could do worse than to channel every philosophy undergrad: when someone claims that one therapy works better than another, wonder aloud: ‘How do we know?’
Psychotherapy could be on the verge of a renaissance. Research on mental-illness treatment can benefit greatly from the lessons psychology has learned about credibility. For example, investigators can ensure that their studies have sufficient power; that is, enough participants in a clinical trial to reliably detect if a psychotherapy works. They can also practise open science by making their datasets publicly available so that other researchers can verify that a trial’s statistics are reported accurately; and/or preregister their therapy trials, specifying in advance their methods and hypotheses, which makes the research process transparent and helps prevent the burying of negative findings.
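To see what ‘sufficient power’ costs in practice, here is a sketch using a standard textbook normal-approximation formula for a two-group comparison (this is a generic illustration, not the procedure of any particular trial; effect sizes are in Cohen’s d units):

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate participants per arm for a two-group comparison,
    via the normal approximation n ~= 2 * ((z_{alpha/2} + z_{power}) / d)**2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs about 63 participants per arm;
# halving the effect size roughly quadruples that requirement.
print(n_per_arm(0.5))   # 63
print(n_per_arm(0.25))  # 252
```

The inverse-square dependence on effect size is why trials of modest therapy effects need hundreds of participants, and why small trials so often produce the ambiguous evidence described above.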
Ethical therapists can continue to engage in practice that is evidence-based, not eminence-based, rooting their therapies in scientific evidence rather than their own conjecture or that of senior colleagues. They can also continue the routine outcome measurement many already employ: solicit therapy clients’ feedback early and often, be open to surprise about what’s working and what’s not, and adjust accordingly. Clients can ask their therapists upfront if they will offer the opportunity for such mutual assessment of their progress.
Therapy helps the vast majority of those who receive it. Happily – if the discipline embraces reform in research, and cultivates a humble, flexible approach to therapy – it could help even more.