Tuesday, March 16, 2021

Tim Maudlin on knowledge and reality in physics

This is Tim Maudlin’s review of two books in the Boston Review. It is the best essay that I have read in many years. I have been following Maudlin’s writings on the philosophy of physics for a long time and have been greatly instructed thereby, but this general review of the basic underlying philosophical issues is, I think, magnificent. Even if the philosophy of physics does not particularly interest you, there is a great deal to be learned from this essay.

 

http://bostonreview.net/science-nature-philosophy-religion/tim-maudlin-defeat-reason

 

What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics

Adam Becker

Basic Books, $32 (cloth)

The Ashtray: (Or the Man Who Denied Reality)

Errol Morris

University of Chicago Press, $30 (cloth)


People are gullible. Humans can be duped by liars and conned by frauds; manipulated by rhetoric and beguiled by self-regard; browbeaten, cajoled, seduced, intimidated, flattered, wheedled, inveigled, and ensnared. In this respect, humans are unique in the animal kingdom.

Aristotle emphasizes another characteristic. Humans alone, he tells us, have logos: reason. Man, according to the Stoics, is zoön logikon, the reasoning animal. But on reflection, the first set of characteristics arises from the second. It is only because we reason and think and use language that we can be hoodwinked.

Not only can people be led astray, most people are. If the devout Christian is right, then committed Hindus and Jews and Buddhists and atheists are wrong. When so many groups disagree, the majority must be mistaken. And if the majority is misguided on just this one topic, then almost everyone must be mistaken on some issues of great importance. This is a hard lesson to learn, because it is paradoxical to accept one’s own folly. You cannot at the same time believe something and recognize that you are a mug to believe it. If you sincerely judge that it is raining outside, you cannot at the same time be convinced that you are mistaken in your belief. A sucker may be born every minute, but somehow that sucker is never oneself.

The two books under consideration here bring the paradox home, each in its own way. Adam Becker’s What Is Real? chronicles the tragic side of a crowning achievement of reason, quantum physics. The documentarian Errol Morris gives us The Ashtray, a semi-autobiographical tale of the supremely influential The Structure of Scientific Revolutions (1962) by Thomas S. Kuhn. Both are spellbinding intellectual adventures into the limits, fragility, and infirmity of human reason. Becker covers the sweep of history, from the 1925 birth of the “new” quantum physics up through the present day. Morris’s tale is more picaresque. Anecdotes, cameos, interviews, historical digressions, sly sidenotes, and striking illustrations hang off a central spine that recounts critical episodes in the history of analytic philosophy.

Quantum theory first. Becker does not discuss the earliest signs that something was amiss in the theory of light and matter, but the fundamentals are well known. The first hints of particle-like behavior in electromagnetic waves were dropped by Max Planck in his treatment of blackbody radiation, the light given off as a body heats up. In 1905 Albert Einstein took a decisive step with his analysis of the photoelectric effect, the current that flows in certain metals exposed to light. Einstein postulated that the light wave delivers its energy to the metal in small packets or quanta. The energy per packet varies with the color (frequency) of the light, and the number of packets with the brightness (amplitude). Below a critical frequency, no current flows, no matter how bright the light. Above that frequency, some flows no matter how dim.
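
[A standard textbook gloss, not part of Maudlin’s text: writing \(W\) for the metal’s work function, Einstein’s relation reads

\[ E = h\nu, \qquad K_{\max} = h\nu - W, \]

so each quantum carries energy \(h\nu\) and an ejected electron carries away at most \(K_{\max}\). Below the threshold frequency \(\nu_0 = W/h\) no electrons escape, however bright the light; above it, the brightness fixes only how many quanta arrive, and hence the size of the current.]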

Light is not just absorbed by matter; it is also emitted. The emission from atoms occurs at only certain precise frequencies. These constitute atomic spectra, which permit us to determine how much of each element there is in a distant star.

In 1913 Niels Bohr devised the Bohr atom. Electrons orbit the nucleus just like planets orbiting the sun. Only certain orbits—which Bohr gave rules for—are available to the electron, and when an electron jumps from a higher orbit to a lower one, it emits light of a frequency determined by the energies of the orbits. The challenge was figuring out how these quantum jumps happen. Over the next decade, Bohr failed to find any precise electron motions. The spectra and intensities of emitted light never came out right. This is the period of the “old” quantum theory.
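
[A gloss not in the essay: the rule for the emitted light is the Bohr frequency condition. When an electron drops from an orbit of energy \(E_n\) to a lower orbit of energy \(E_m\), the light emitted has frequency

\[ \nu = \frac{E_n - E_m}{h}, \]

which is why each kind of atom emits only a discrete set of spectral lines.]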

Becker’s main historical narrative begins dramatically at the October 1927 Fifth Solvay International Conference in Brussels. In 1925 Werner Heisenberg had invented matrix mechanics. Heisenberg’s mathematical formalism got the predictions that Bohr had been seeking. But the central mathematical objects used in his theory were matrices, rectangular arrays of numbers. The predictions came out with wonderful accuracy, but that still left the old puzzle in place: how does the electron get from one orbit to another? You can stare at a matrix from morning to night, but you will not get a clue.

Bohr took an unexpected approach to this question: instead of asking if the theory was too young to be fully understood, he declared that the theory was complete; you cannot visualize what the electron is doing because the microworld of the electron is not, in principle, visualizable (anschaulich). It is unvisualizable (unanschaulich). In other words, the fault lay not in the theory, it lay in us. Bohr took to calling any visualizable object classical. Quantum theory had passed beyond the bounds of classical physics: there is no further classical story to tell. This became a central tenet of the Copenhagen interpretation of quantum theory.

One can imagine Bohr’s motivation for adopting this extreme conclusion. For over a decade he had sought exact, visualizable electron trajectories and failed. He concluded that his failure was rooted in the impossibility of the task.

But in 1926 Erwin Schrödinger produced a mathematically different theory, wave mechanics. Schrödinger’s mathematics was essentially just the classical mathematics of waves. The atomic system was not designated by a matrix but described by a wavefunction. And waves may not be particles, but they are certainly visualizable objects from everyday life.

Schrödinger’s theory proved easier to use than Heisenberg’s, in part because it is more intuitive. Furthermore, first Schrödinger and then Paul Dirac proved that the two theories are equivalent. In physics any two theories that make precisely the same observable predictions are observably equivalent. And one of the predominant philosophical views of the age—logical positivism—held that any two observably equivalent theories are really one and the same theory. That is, although the two theories may seem to be giving completely different accounts of the world, they are not. The total content of an empirical theory consists in the predictions it makes about the observable. No more and no less.

Logical positivism is a very attractive view for people who do not want to worry about what they cannot observe. It is ultimately a theory about meaning, about the content of a theory. According to the positivists, a theory says no more than its observable consequences.

Logical positivism has been killed many times over by philosophers. But no matter how many stakes are driven through its heart, it arises unbidden in the minds of scientists. For if the content of a theory goes beyond what you can observe, then you can never, in principle, be sure that any theory is right. And that means there can be interminable arguments about which theory is right that cannot be settled by observation.

So the situation in 1926 was rather confused. Matrix mechanics and wave mechanics were, in some sense, thought to be the same theory, differently expressed. But if you use the mathematics to derive a certain matrix yet have no notion of how the physical situation associated with the matrix would appear, how do you get a prediction about what you will observe? And wave mechanics is not much better off. Waves are certainly visualizable, but the world we live in, the world of laboratory experiments, does not present itself as made of waves. It presents itself, if anything, as made of particles. How do we get from waves to recognizable everyday stuff?

This, in a nutshell, is the central conundrum of quantum mechanics: how does the mathematical formalism used to represent a quantum system make contact with the world as given in experience? This is commonly called the measurement problem, although the name is misleading. It might better be called the where-in-the-theory-is-the-world-we-live-in problem.

For Bohr and Heisenberg, the measurement problem is how the unvisualizable can influence the observable (and hence visualizable). For Schrödinger it is how waves can constitute solid objects such as cats. In wave mechanics, the little planetary electron of the old quantum theory gets smeared out into a cloud surrounding the nucleus. If quantum mechanics provides a complete description of the electron—as Bohr insisted—this diffuseness is not merely a reflection of our ignorance about where the electron is, it is a characteristic of the electron itself. As Schrödinger memorably wrote to Albert Einstein, “There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.” This unexpected (but perfectly visualizable) mistiness of the electron was fine by Schrödinger: after all, we have no direct experience of electrons to contradict it. But the dynamics of the theory could not confine the smeariness to microscopic scale. In certain experimental situations, the haziness of the electron would get amplified up to everyday scales. The electron that is nowhere-in-particular gives birth to a cat that is no-state-of-health-in-particular. Schrödinger found this result manifestly absurd: something must have gone wrong somewhere in the physics.
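
[A schematic illustration, not taken from the essay: because Schrödinger’s equation is linear, a microscopic superposition is carried up into any macroscopic system coupled to it,

\[ \big(\alpha\,|\text{decayed}\rangle + \beta\,|\text{intact}\rangle\big)\otimes|\text{cat alive}\rangle \;\longrightarrow\; \alpha\,|\text{decayed}\rangle|\text{cat dead}\rangle + \beta\,|\text{intact}\rangle|\text{cat alive}\rangle, \]

leaving the cat in no-state-of-health-in-particular. That is the absurdity Schrödinger was pointing at.]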

For his part, Bohr insisted—as he had to—that the description of an experimental procedure and its outcome be classical, which is to say visualizable. Otherwise, you could not tell what experiment was done and how it came out. But at some point, if we are probing the microscopic realm, we must reach the unvisualizable. And the interaction between the two must itself be unvisualizable, since one part is. So all one can ask for is a mathematical rule: if an interaction occurs, what are the probabilities of the various possible classical outcomes? There is no more to be sought from quantum theory than these numbers. And matrix mechanics typically does not provide a precise prediction but a set of probabilities for different outcomes. The deterministic world of classical physics has been lost.

Which is all well and good, so long as you know what counts as the point of interaction between a quantum system and a classical one. But this Bohr could never nail down. We are left with the question: under what conditions does such an interaction (a measurement of the quantum state) occur? Do we need a human observer? Some conscious detection device, even if not human? Will a mouse do? Some detection device, even if not conscious? The Copenhagen interpretation never answered.

For Schrödinger, we get a different problem. We can visualize the microworld: it is a wave. But at some point, waves must manage to appear as particles, things located at definite positions in space. And just as the Copenhagenists advert to measurement here, so too does Schrödinger. The sudden change from an electron wavefunction being spread all over space to being located at a point is called “the collapse of the wavefunction.” So for wave mechanics, the measurement problem becomes: When and how does the wavefunction collapse? And the tentative answer is, upon measurement.

By the time of the Fifth Solvay Conference, much of this doctrine had been worked out. And along with Bohr, Heisenberg, and Schrödinger, the conference attracted our other protagonist, Einstein.

Einstein was the great anti-positivist. His position is often called realism, but a better name is perhaps common sense. Einstein believed that there is a real, objective, mind-independent physical world, and that the goal of physics is to describe that world. Mere prediction, no matter how precise, is not enough: explanation is the goal. Further, he said, you do not start out knowing what you can observe and then build the theory to predict certain observations. Rather, it is the theory itself that tells you what you can observe.

So Einstein and Bohr were polar opposites in their approach to physics. Einstein demanded a clear and comprehensible account of what is going on in the physical world—at all scales—in space and time. Bohr thought that the key to quantum mechanics was the realization that no such thing could be had.

Becker sets up the Solvay showdown skillfully. In the conventional story, Einstein, once the radical, has aged into a conservative who cannot abide the idea that God plays dice. Desperate for determinism, he challenges Bohr with a thought experiment designed to show the untenability of Bohr’s contention that you cannot do better—even in principle—than probabilistic predictions. The necessity of probabilism was encoded in the Heisenberg uncertainty relations, which assert that the better one can predict one aspect of a system (e.g., its position), the worse one can predict another (e.g., its momentum). Einstein’s thought experiment comes as a shock, but after a tense night Bohr hits on the solution and refutes Einstein with his own brainchild: the general theory of relativity. A showdown for the ages. Einstein, defeated, drifts into crankhood, never again doing significant physics.
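
[In their usual modern form, not spelled out in the essay, the uncertainty relations read

\[ \Delta x\,\Delta p \ge \frac{\hbar}{2}, \]

so the more sharply a system’s position is specified, the less sharply its momentum can be, and vice versa.]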

Here Becker begins his exposé. He shows that every single detail of the standard account of the Solvay Conference is untrue. Einstein was not concerned with saving determinism. His example was not designed to refute the uncertainty relation. And most critically, Bohr did not win, he lost.

Thus begins the great debunking. None of this is news to historians and philosophers of physics. The true account has been worked out by many people whom Becker cites. But he has done prodigious research and created a powerful narrative.

As we noted, Einstein was not centrally bothered by the indeterminism of quantum mechanics. What vexed him—as he said repeatedly—was the nonlocality, or, in his pungent phrase, the spooky action at a distance (spukhafte Fernwirkung) in quantum mechanics. Einstein put his finger on this right away and never took it off.

Consider the collapse of the wavefunction in Schrödinger’s wave mechanics. If an electron-wave is channeled through a very narrow hole, when it emerges it will spread out in all directions like a circular undulation in water. But a hemispheric screen constructed to catch the electron does not reveal anything spread out: there is a single bright flash, as of a particle hitting the screen. The transition from extended wave to localized particle requires the collapse of the wavefunction. What bothered Einstein was that the sudden appearance of the flash at one spot implied that there could not be a flash at any other spot, no matter how far away. Somehow, all the distant spread-out parts of the wavefunction instantaneously disappear. Faster than light. Spooky action at a distance.

Einstein saw that the phenomena themselves—as distinct from Schrödinger’s theory with its wavefunctions—did not require anything spooky. All you had to believe was that the electron was always in some precise location, of which we are ignorant, and took a humdrum path from the source to the screen, causing a flash. But because quantum mechanics does not specify the location, accepting this picture demands rejecting the completeness of quantum mechanics. The Copenhagen interpretation cannot be the final story.

Bohr never came to grips with this argument. Indeed, it is unclear whether he ever understood it.

But while Einstein won—and would continue to win—all the logical battles, Bohr was decisively winning the propaganda war. The Copenhagen doctrine of the completeness of quantum theory and the inescapability of fundamental chance spread, enforced by Bohr and Heisenberg and the rest of the Copenhagen school. Behind the scenes, the Copenhagenists did not agree with each other, but to the world they presented a unified front. Meanwhile, Einstein and Schrödinger both rejected Bohr, but they also bickered with each other.

Here is Einstein’s own description of Copenhagen: “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.” Philosopher Imre Lakatos gave this later assessment:

In the new, post-1925 quantum theory the ‘anarchist’ position became dominant and modern quantum physics, in its ‘Copenhagen interpretation’, became one of the main standard bearers of philosophical obscurantism. In the new theory Bohr’s notorious ‘complementarity principle’ enthroned [weak] inconsistency as a basic ultimate feature of nature, and merged subjectivist positivism and antilogical dialectic and even ordinary language philosophy into one unholy alliance. After 1925 Bohr and his associates introduced a new and unprecedented lowering of critical standards for scientific theories. This led to a defeat of reason within modern physics and to an anarchist cult of incomprehensible chaos.

Strong words. It is Becker’s burden, and Becker’s triumph, to show that every word is true.

The story has twists and turns: John von Neumann’s purported mathematical proof (1932) that quantum mechanics is complete and one could not add anything more to it and retain its successful predictions; the philosopher Grete Hermann’s detection in 1935 of the fatal flaw in von Neumann’s proof—and the complete disregard of her work; the elaboration of Einstein’s reasoning into the famous Einstein-Podolsky-Rosen (EPR) argument; Bohr’s incomprehensible response to EPR; Schrödinger’s reaction, including his eponymous cat. Surely, one thinks, this mess must have been cleaned up eventually! But it never was. It persists to this day. And we are only through the first third of the book.

The middle third of Becker’s book adopts a somber tone in the stories of three renegades who bucked the system in the 1950s and ’60s, after the Copenhagen mysticism had congealed into an icy command: shut up and calculate! Work on the foundations of quantum theory was effectively forbidden, with one’s career and future at peril. The first renegade was David Bohm, a bright and dutiful Copenhagenist until he met the aging Einstein and recanted. Bohm rediscovered the pilot wave theory that Louis de Broglie had presented at Solvay in 1927. The theory slices through the enigma—wave or particle?—like Alexander’s sword through the Gordian knot: the answer is wave and particle. The wavefunction becomes a pilot wave that guides the particles along their paths. The theory is completely deterministic—no playing dice—and recovers all the predictions of standard quantum mechanics. One would think Einstein would love the theory, but he did not. The dreaded nonlocality had not been exorcized. Indeed, it was even more striking.
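
[A gloss on the theory, in its standard modern presentation rather than as quoted from Becker’s book: the wavefunction \(\psi\) obeys the ordinary Schrödinger equation, and the particle positions \(Q_k\) are guided by it through

\[ \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!\bigg|_{(Q_1,\dots,Q_N)}, \]

a deterministic equation in which each particle’s velocity depends on the instantaneous positions of all the others, however far away. The nonlocality Einstein disliked is written out explicitly.]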

Bohm’s theory put the lie to von Neumann’s impossibility proof by direct counterexample. Contra Bohr, the particles are visualizable even at microscopic scale. In short, the theory demonstrates beyond all doubt that the Copenhagen interpretation is nonsense. But Bohm’s work was ignored and effectively suppressed.

A political leftist, Bohm had refused to testify at the House Un-American Activities Committee. He was dismissed from his job at Princeton and went into exile in Brazil. His U.S. passport was revoked. He eventually found his way to Birkbeck College in London, but never received the recognition that was his due. In a notorious episode, Robert Oppenheimer is reported to have said, “If we cannot disprove Bohm, then we must agree to ignore him.”

The second renegade, Hugh Everett, was a graduate student at Princeton not long after Bohm left in 1952. Also rejecting Copenhagen, Everett took Schrödinger’s evolving wavefunction and removed the collapse. He argued that rather than an incomprehensible smear resulting, as Schrödinger’s neither-alive-nor-dead cat suggested, a multiplication of worlds results. Schrödinger’s cat ends up both dead and alive, as two cats in two equally real physical worlds. Today this approach is called the many-worlds interpretation.

Everett’s thesis advisor, John Wheeler, had great enthusiasm for Everett’s innovation. But he insisted that Everett get the nod of approval from Bohr. Bohr refused, and Wheeler required Everett to bowdlerize his thesis. Everett left academia and did not look back. His work lay in obscurity.

The last and greatest renegade was John Stewart Bell. Spurred by Bohm’s papers, Bell queried whether Einstein’s dreaded spooky action at a distance could be avoided. Copenhagen and the pilot wave theory had both failed this test. Bell proved that the nonlocality is unavoidable. No local theory—the type Einstein had sought—could recover the predictions of quantum mechanics. The predictions of all possible local theories must satisfy the condition called Bell’s inequality. Quantum theory predicts that Bell’s inequality can be violated. All that was left was to ask nature herself. In a series of sophisticated experiments, the answer has been established: Bell’s inequality is violated. The world is not local. No future innovation in physics can make it local again. The spookiness that Einstein spent decades deriding is here to stay.
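
[In its most commonly tested (CHSH) form, which the essay does not spell out, Bell’s inequality says that for any local theory the correlations \(E\) measured at detector settings \(a, a'\) and \(b, b'\) must satisfy

\[ \left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2, \]

whereas quantum mechanics predicts values up to \(2\sqrt{2}\) for suitably chosen settings, and that is what the experiments find.]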

How did the physics community react to this epochal discovery? With a shrug of incomprehension. For decades, discussion of the foundations of quantum theory had been suppressed. Physicists were unaware of the problems and unaware of the solutions. To this day, they commonly claim that Bell’s result proves Bohm’s theory to be impossible and indeterminism to be inevitable, while Bell himself was the staunchest advocate of Bohm’s deterministic theory. Even now, the average physicist has no understanding of what Einstein argued in the EPR paper and what Bell proved.

The last third of What Is Real? could, one hopes, be titled “Slow Convalescence.” Gradually the worst excesses of Bohr’s influence are mitigated as Bell’s work inspires a new generation to look into foundational issues. We meet a new cast of characters, and the overall atmosphere is mildly optimistic. But there is a long way to go, and this very book could prove to be a watershed moment for the physics community if it faces up to its own past and its present. Or, following the fate of Einstein, Bohm, and Everett, Becker could just be ignored. But if you have any interest in the implications of quantum theory, or in the suppression of scientific curiosity, What is Real? is required reading. There is no more reliable, careful, and readable account of the whole history of quantum theory in all its scandalous detail.

 

The subtitle of Errol Morris’s new book is, “Or the Man Who Denied Reality.” That might suggest a biography of Bohr, but the face on the cover is that of Thomas Kuhn. A renowned documentarian known for his dogged pursuit of truth that got one man off death row, Morris had a short-lived stint as Kuhn’s graduate student at Princeton. The cut-glass ashtray of the title was hurled at Morris’s head by Kuhn in a fit of pique. Morris has never forgiven Kuhn. And the ashtray is the least of it. Morris loathed Kuhn’s relativism and abandonment of reason and evidence, and Kuhn loathed Morris back.

Morris’s book is a settling of scores, both personal and philosophical. It is also delightful, digressive, unpredictable, engrossing, amusing, infuriating, and visually stunning.

The tale of The Ashtray is one of serendipity. Kuhn trained at Harvard as a physicist. There he started teaching classes in the history of science, and as a Harvard Junior Fellow decided to switch from physics to the history of science. His first book, The Copernican Revolution (1957), is a splendid work. Rejecting the usual physicist’s tendency to see past scientific work through the lens of present scientific theory, Kuhn brings the reader back into the debates of the time. There are no high theoretical pronouncements, just the patient historical work needed to make the assumptions and commitments of an earlier generation of scientists comprehensible to a modern audience. Had all of his work been of this character, Kuhn would be remembered as a talented historian of science, largely unknown by the general public.

Through a series of random events, Kuhn was asked to write a monograph on the history of scientific revolutions for the Encyclopedia of Unified Science. That book became The Structure of Scientific Revolutions. Kuhn said that The Structure of Scientific Revolutions was just a sketch for a longer book which never got written. Instead it went on, as it was, to become the most widely read and influential work of philosophy in the last half of the twentieth century.

The first three quarters of The Structure of Scientific Revolutions give an insightful account of the everyday life of a scientist doing what Kuhn dubbed normal science. As a trained physicist, Kuhn was on familiar ground and his account rang true. Normal science, according to Kuhn, is designed to solve puzzles. Both the nature of these puzzles and the acceptable means of resolving them are fixed by a set of rules, practices, and examples that Kuhn called a paradigm. Only by reference to the paradigm could a scientist defend the importance of the puzzle she is working on and the legitimacy of her solution. In particular, says Kuhn, it is not in the nature of normal science to question or challenge the paradigm: the paradigm provides the rules by which the game of a particular science is played. But of course, we are not playing the same scientific games as we did two hundred years ago. To get from there to here, various paradigms had to be overthrown and replaced. In Kuhn’s argot, there had to be paradigm shifts. And all of the excitement and controversy surrounding Kuhn turns on the nature and the outcome of these paradigm shifts. Exchanging one paradigm for another constitutes a scientific revolution.

We can ask three critical questions about scientific revolutions: how they are fought, why they are won (or lost), and what their cumulative outcome is. Kuhn’s answers to all of these questions could be read in an unsettling way.

Kuhn explicitly analogized scientific revolutions to political revolutions. The outcome of an attempted political revolution cannot be settled through political means since there is no institutional structure that both sides will submit to. “The parties to a political conflict,” writes Kuhn, “must finally resort to the techniques of mass persuasion, often including force.” Often elusive, Kuhn does not explicitly say that scientists engaged in a conflict over paradigms do the exact same thing, but he does not quite deny it either. (The fate of David Bohm cannot but spring to mind in this context.) The choice of a paradigm, he says, “can never be unequivocally settled by logic and experiment alone.” This repudiation of the rationality of scientific practice struck a chord in the zeitgeist. In the 1960s, it was chic to depict science as no more legitimate or authoritative than any other cultural practice: on this view, it is all just a matter of propaganda and power moves.

But surely, one objects, these scientific revolutions lead to progress. Scientific theories, unlike fashion trends, do not merely change; they get closer to the truth. Here, too, Kuhn is adamant: he remarks near the end that the word truth has never once appeared in his text except in a quote by Francis Bacon. Then comes the coup de grace: truth is just what the winners of the conflict over paradigms say it is. And of course, according to the winners, their own paradigm is true.

To top it all off, Kuhn insists that the psychological effect of adopting a new paradigm is to change the very world you live in. Because different paradigms are incommensurable, the people who adopt them cannot communicate clearly with each other. They do not speak the same language and their very experience of the world is different. Hence there can be no neutral, objective, rational adjudication of their dispute.

So Errol Morris’s clash with Kuhn was preordained. After the ashtray incident, Morris did a stint as a philosophy graduate student at Berkeley, but he ultimately went on to be an investigative reporter and documentary filmmaker best known for The Thin Blue Line (1988). While shooting a movie about a prosecution psychiatrist in Texas known as Dr. Death, Morris came across a death row inmate convicted of a policeman’s murder. Morris became convinced the inmate’s claims of innocence were true. The Thin Blue Line examines the stories people tell, the explicit and implicit falsehoods, the distortions that can seal the fate of an innocent man. Although the film depicts several wildly different accounts of what happened the night of the murder, it is not, Morris insists, another Rashomon (Akira Kurosawa’s 1950 classic). Morris’s film is fact rather than fiction, and there is a unique truth about what happened. It occurred exactly one way. It is one thing to remark how hard truth can be to establish, and quite another to deny that there is any truth at all. Morris found the latter claim manifestly absurd. Indeed, by getting a confession from the real killer on tape, Morris solves the murder.

Whereas Becker’s villain is Bohr and his heroes are Einstein and Bell, Morris has Kuhn get his comeuppance from philosophers Saul Kripke and Hilary Putnam. Morris’s cast of characters reads like a who’s who of modern analytic philosophy: Bertrand Russell, Karl Popper, Ludwig Wittgenstein, Norwood Russell Hanson, and John Earman. For the reader familiar with all these names, there is good sport in seeing them bouncing off each other in Morris’s historical pinball machine. If a few ring a bell, then with application one can learn some ins and outs of twentieth-century Anglophone philosophy. If none do, the book may be heavy going. And whereas Becker’s history is meticulous and his explanations careful and measured, Morris writes more impressionistically, with passion. His account of the philosophical issues is in the ballpark but not right on target.

The central philosophical issue that Morris discusses is the reference of terms: how does a noun such as mass or planet or Albert Einstein pick out or denote something in the world? Without an account of reference, we cannot construct a theory of truth. A true claim correctly describes the object or objects it denotes, so determining truth or falsity requires determining the object under discussion. Analysis of the reference of terms goes back to the very beginning of the strangest and most intellectually shocking philosophical view in the Western tradition. The pre-Socratic philosopher Parmenides defended the thesis that all change and motion is an illusion. Parmenides came to this conclusion by reflecting on claims about nonexistence or, in Greek, tō mē on, that which is not. We all accept as true the claim that Santa Claus does not exist, or, equivalently, Santa Claus is nonexistent. But what, exactly, is this supposedly true claim about? It cannot be about Santa Claus because if it is true, then there is no such thing. Parmenides asserted, “The same things exist for thinking as for being.” In other words, you can only think about existent items because there are no nonexistent items to be the objects of thought. It follows that a nonexistence claim such as “Santa Claus does not exist” cannot be true: if it were true, then Santa Claus would not refer to anything, so the sentence would be meaningless. Parmenides took this result to establish the incoherence of all nonexistence claims. And since to say that things have changed is to say that the nonexistent has come to be, and the nonexistent is meaningless, there can be no change.

Philosophers rose to Parmenides’s challenge by theorizing how a term such as “unicorn” can be meaningful even if it does not refer to anything. Unicorn is just shorthand for a description such as “horse-like animal with a horn growing from its forehead.” And “unicorns do not exist” is true just in case no animal fits that description. Bertrand Russell suggested a similar analysis of everyday proper names: “Santa Claus does not exist” just means there is no jolly, bearded, red-suited, toymaking individual who lives at the North Pole. John Stuart Mill accepted the descriptive account of unicorn but objected to the parallel theory of proper names: a name such as Heisenberg has no associated description or connotation. It is a mere tag that has only a denotation, the man Heisenberg himself. There is no description in virtue of which Werner Heisenberg denotes that very man. So Parmenides’s puzzle still remains for names of nonexistent items such as Santa Claus.

One advantage of the descriptive view is that it works not only for talk of the actual world, but also for talk about mere possibilities. The descriptive view explains not just why it is true to say there are no unicorns, but how under certain conditions there would have been. All you need are conditions that would have produced horse-like animals with horns. So there are two quite different contexts in which the meaning and reference of terms has to be explicated: how they get (or fail to get) referents in the actual world, and how they work when considering merely possible (counterfactual) situations. The difference between indicative propositions about the actual world and counterfactual propositions about mere possibilities is illustrated by these two conditionals: if Lee Harvey Oswald did not shoot John F. Kennedy, then someone else did (indicative and true); and if Oswald had not shot Kennedy, then someone else would have (counterfactual and probably false).

Kuhn implicitly accepts the descriptive view. The meanings of theoretical terms such as “mass” are determined by the theories in which they are deployed. Mass as used by Newton means something different from mass as employed by Einstein because the theories they are embedded in are different. Therefore Newtonians cannot really communicate with Einsteinians, Ptolemaic astronomers cannot really communicate with Copernican astronomers, and so on. This is why, for Kuhn, scientific revolutions cannot be settled by rational means: the disputants necessarily speak different languages.

The descriptive view was demolished by Kripke and Putnam in a series of lectures and papers in the 1970s. Whereas Russell took the descriptive theory and applied it to both general terms like unicorn and proper names like Heisenberg, Kripke took Mill’s view that names have no connotation and applied it to general terms like unicorn and water. This left both Kripke and Putnam with the task of explaining both how scientific terms like mass manage to refer to anything in the actual world, and how they function when used to talk about merely possible situations. These two tasks were addressed in different ways: the first by the causal theory of names, and the second by the theory of rigid designation. Articulating these fine distinctions would be out of synch with the spirit of Morris’s boisterous book, but as a result, conceptually different issues get somewhat muddled together.

One page contains a picture of a pet rock, another a painting called Truth Coming from the Well Armed with Her Whip to Chastise Mankind. Here is a Glyptodon, there a map of bomb damage in London, and last of all a photograph of a school class that contains a young Adolf Hitler and, perhaps, a young Ludwig Wittgenstein. For Morris, Wittgenstein so effectively undermined the philosophical ideals of truth and reason that he seriously pauses to consider which of the two did more damage to mankind.

The question may seem extreme but it springs from a noble place: a firm commitment to the possibility of rationality and evidence. Our beliefs should not be whatever feels comforting but what is most likely to be true. As angry as Morris is about how Kuhn treated him personally, he is much more outraged at the widespread influence of Kuhn’s ideas. He must delve into philosophy to elucidate the refutation of Kuhn’s sophistry. For if, as Kuhn suggests, we all live in worlds of our own manufacture, worlds bent to conform to our beliefs rather than our beliefs being adjusted to conform to the world, then what becomes of truth? All of us living in this post-truth political culture must face that question.

 

Accounts of human gullibility are generally retrospective. We laugh at tulip mania, and shake our heads at the Salem witch trials. But both Becker and Morris are after more dangerous game, delusions that are still in effect. One exposes the intellectual rot in the foundations of physics and the other decries the anti-rationalism sprouting from Kuhn. For Kuhn’s legacy lives on, not in philosophy (where he is widely derided for his excesses) but in other parts of academia and in popular culture.

Becker exposes how Bohr and company succeeded, in some cases by smash-mouth academic politics, including the shameful treatment of Bohm and the denigration of Einstein. But Kuhn wielded no such power. The Structure of Scientific Revolutions succeeded through its own allure. What is the attraction of Kuhn’s account of science? It has its roots far back in time, with the biggest self-deluder of all, Immanuel Kant.

The hand of Kant lies behind both Bohr and Kuhn. In his epic and epically incomprehensible masterpiece The Critique of Pure Reason (1781), Kant pulled off the grandest intellectual hocus-pocus in scholarly history. Kant called it his Copernican revolution in philosophy. According to Copernicus, phenomena that had been attributed to the motion of the stars and other heavenly bodies—the daily cycle of the sun and stars, the erratic motions of the planets—were really the product of the motion of Earth itself. These apparent motions had their source not in the observed but in the observer. Similarly, Kant argued that what have been taken to be features of a mind-independent reality—the structure of space and time, the existence of cause and effect, the law of conservation of energy—are actually imposed upon our experience by the mind itself. We have no justification for thinking that reality is intrinsically spatiotemporal or causally structured. But we are nonetheless eternally destined to experience the world in those terms because those are the intellectual and perceptive structures we must bring to our experience.

Kant’s argumentation for this Parmenidean thesis is famously obscure, and his writing forbiddingly impenetrable. But the moral he wanted to draw, which goes by the name of transcendental idealism, is easily summarized. I just did. And for whatever reason, this conclusion of Kant’s has been attracting people like a siren’s call ever since. Remarkably, many people just want Kant’s conclusion to be true.

Bohr grew up in an atmosphere of neo-Kantianism. And his most prized achievement, the doctrine of complementarity, is an insidious tweak on Kant. Kant had argued that in order to be comprehensible to us—in order to be anschaulich—the world of experience must be given in space and time and governed by deterministic laws of causation. Fundamental quantities must be conserved. Bohr adopted these as the essential properties of the classical world. The world of everyday experience, of lab experiments and their outcomes, must of necessity be classical, said Bohr.

The microphysical world, according to Bohr, is not visualizable, not classical. It does not, and could not, satisfy all of Kant’s requirements. But Bohr hit on his great revelation: although the microscopic world cannot be both pictured in space and time and regarded as governed by deterministic causal laws, it can be either pictured in space and time or treated by means of deterministic causal laws.

Furthermore, which of these two possibilities is realized is up to the observer. By setting up one sort of laboratory situation, the concepts of space and time can be applied to the microsystem, and by setting up an incompatible laboratory situation the concepts of causation and determinism, of energy and momentum, can be applied.

The conversion of a classical both/and into a quantum either/or became Bohr’s great mania. He started to see this complementarity everywhere. In biology, being alive is complementary to having a detailed account of the structure of cells: “Thus the existence of life itself would have to be regarded in biology, both as regards the possibilities of observation and of definition, as no more subject to analysis than the existence of the quantum of action in atomic physics,” Bohr wrote. There was complementarity between the practical and mystical understanding of human life. Complementarity would solve the mind-body problem.

Bohr showed as much obsessive attachment to his brainchild as Kant had to his. When granted the Danish Order of the Elephant in 1947, he chose as the motto on his coat of arms Contraria Sunt Complementa (opposites are complementary). He even appealed to complementarity to account for the obscurity of his own writings. According to Rudolf Peierls, Bohr would often say, “truth and clarity are complementary.” This sentiment is the death of Enlightenment rationality. Descartes, Locke, Berkeley, Spinoza, Leibniz, and Hume all strove for both clarity of expression and for truth. But according to Bohr, necessarily the more you have of one, the less you have of the other. Bohr triumphed through anti-rational aphorisms such as this. As the great physicist Murray Gell-Mann said, after conversations with Putnam, “Bohr brainwashed a generation of physicists.” A vivid illustration of Kuhn’s kinship to Bohr in this respect can be drawn from Morris: “What I hated most about Kuhn’s lectures was the combination of obscurantism and dogmatism. On one hand, he was extremely dogmatic. On the other, it was never really clear about what.” It is no stretch to apply this precise description to Bohr, and not much of one to apply it to The Critique of Pure Reason as well.

When the Copenhagen interpretation got imported to the pragmatic soil of the United States, Bohr’s incomprehensible nonsense was replaced by the more concise “shut up and calculate.” That is the philosophy that dominates physics to this day.

What of Kuhn? He was quite explicit about his relationship to Kant. Late in his life, Kuhn declared, “I am a Kantian with movable categories.” That is, he embraced Kant’s thesis that the mind imposes structure on the experienced world rather than discovering structure in it, but, contrary to Kant, the imposed structure can change. Such a change is a paradigm shift, the ultimately irrational replacement of one experienced reality with another incompatible one. Caught in our own little thought-worlds, deprived of access to objective truth (because there is no objective truth), we can do no better than miscommunicate, misunderstand, and ultimately resort to raw institutional power to resolve our disputes. As appropriated and mangled by Bohr and Kuhn, Kant—despite his own embrace of science and reason—becomes the agent of the anti-Enlightenment, the post-truth Age of Spin and Branding we live in.

Both Becker and Morris, each in his own way, are fighting an uphill battle against these trends. Each wants to reestablish the authority of reason and evidence. But it is the most difficult of all tasks. How do you convince a whole culture that it is deluded? How do you shine light into conceptual blind spots? Each of these books, as different as they are in style, is an attempt to provoke an epiphany and a revolution.

If works like these cannot succeed, then we ought to acknowledge the situation. We should shorten the dignified designation Homo sapiens to the pithier and more accurate Homo sap.



Thursday, March 11, 2021

What Is Life?

 


What Is Life? Its Vast Diversity Defies Easy Definition.

https://www.quantamagazine.org/what-is-life-its-vast-diversity-defies-easy-definition-20210309/

Scientists have struggled to formulate a universal definition of life. Is it possible they don’t need one?

[[State of the art: 123 definitions of life plus controversy about what method should be used for making definitions and even whether the project of making a definition is appropriate at all. Now all bow and follow the science!]]

Scientists’ efforts to develop a good working definition for life have been stymied by the existence of puzzling cases like snowflakes, which have some attributes of life; red blood cells, which lack some attributes; and organisms like tardigrades, which can seem inanimate for long intervals.


Carl Zimmer

Contributing Writer

March 9, 2021


People often feel that they can intuitively recognize whether something is alive, but nature is filled with entities that flout easy categorization as life or non-life — and the challenge may intensify as other planets and moons open up to exploration. In this excerpt from his new book, Life’s Edge: The Search for What It Means to Be Alive, published today, the science writer Carl Zimmer discusses scientists’ frustrated efforts to develop a universal definition of life.

“It is commonly said,” the scientists Frances Westall and André Brack wrote in 2018, “that there are as many definitions of life as there are people trying to define it.”

As an observer of science and of scientists, I find this behavior strange. It is as if astronomers kept coming up with new ways to define stars. I once asked Radu Popa, a microbiologist who started collecting definitions of life in the early 2000s, what he thought of this state of affairs.

“This is intolerable for any science,” he replied. “You can take a science in which there are two or three definitions for one thing. But a science in which the most important object has no definition? That’s absolutely unacceptable. How are we going to discuss it if you believe that the definition of life has something to do with DNA, and I think it has something to do with dynamic systems? We cannot make artificial life because we cannot agree on what life is. We cannot find life on Mars because we cannot agree what life represents.”

With scientists adrift in an ocean of definitions, philosophers rowed out to offer lifelines.

Some tried to soothe the debate, assuring the scientists they could learn to live with the abundance. We have no need to zero in on the One True Definition of Life, they argued, because working definitions are good enough. NASA can come up with whatever definition helps them build the best machine for searching for life on other planets and moons. Physicians can use a different one to map the blurry boundary that sets life apart from death. “Their value does not depend on consensus, but rather on their impact on research,” the philosophers Leonardo Bich and Sara Green argued.


Other philosophers found this way of thinking — known as operationalism — an intellectual cop‐out. Defining life was hard, yes, but that was no excuse not to try. “Operationalism may sometimes be unavoidable in practice,” the philosopher Kelly Smith countered, “but it simply cannot substitute for a proper definition of life.”

Smith and other foes of operationalism complain that such definitions rely on what a group of people generally agree on. But the most important research on life is at its frontier, where it will be hardest to come to an easy agreement. “Any experiment conducted without a clear idea of what it is looking for ultimately settles nothing,” Smith declared.

Smith argued that the best thing to do is to keep searching for a definition of life that everyone can get behind, one that succeeds where others have failed. But Edward Trifonov, a Russian‐born geneticist, wondered if a successful definition already exists but is lying hidden amidst all the past attempts.

In 2011, Trifonov reviewed 123 definitions of life. Each was different, but the same words showed up again and again in many of them. Trifonov analyzed the linguistic structure of the definitions and sorted them into categories. Beneath their variations, Trifonov found an underlying core. He concluded that all the definitions agreed on one thing: life is self‐reproduction with variations. What NASA’s scientists had done in eleven words (“Life is a self‐sustained chemical system capable of undergoing Darwinian evolution”), Trifonov now did with three.

His efforts did not settle matters. All of us — scientists included — keep a personal list of things that we consider to be alive and not alive. If someone puts forward a definition, we check our list to see where it draws that line. A number of scientists looked at Trifonov’s distilled definition and did not like the line’s location. “A computer virus performs self‐reproduction with variations. It is not alive,” declared the biochemist Uwe Meierhenrich.

Some philosophers have suggested that we need to think more carefully about how we give a word like life its meaning. Instead of building definitions first, we should start by thinking about the things we’re trying to define. We can let them speak for themselves.

These philosophers are following in the tradition of Ludwig Wittgenstein. In the 1940s, Wittgenstein argued that everyday conversations are rife with concepts that are very hard to define. How, for example, would you answer the question, “What are games?”

If you tried to answer with a list of necessary and sufficient requirements for a game, you’d fail. Some games have winners and losers, but others are open‐ended. Some games use tokens, others cards, others bowling balls. In some games, players get paid to play. In other games, they pay to play, even going into debt in some cases.

For all this confusion, however, we never get tripped up talking about games. Toy stores are full of games for sale, and yet you never see children staring at them in bafflement. Games are not a mystery, Wittgenstein argued, because they share a kind of family resemblance. “If you look at them you will not see something that is common to all,” he said, “but similarities, relationships, and a whole series of them at that.”

A group of philosophers and scientists at Lund University in Sweden wondered if the question “What is life?” might better be answered the way Wittgenstein answered the question “What are games?” Rather than come up with a rigid list of required traits, they might be able to find family resemblances that could naturally join things together in a category we could call Life.

In the 1990s, some researchers suggested that the meteorite Allan Hills 84001, a fragment of Mars that landed in Antarctica, contained fossils of ancient Martian microbial life. Their argument was largely discounted by other scientists.

In 2019 they set out to find it by carrying out a survey of scientists and other scholars. They put together a list of things including people, chickens, Amazon mollies, bacteria, viruses, snowflakes, and the like. Next to each entry the Lund team provided a set of terms commonly used to talk about living things, such as order, DNA, and metabolism.

The participants in the study checked off all the terms that they believed to apply to each thing. Snowflakes have order, for example, but they don’t have a metabolism. A human red blood cell has a metabolism but it contains no DNA.

The Lund researchers used a statistical technique called cluster analysis to look at the results and group the things together based on family resemblances. We humans fell into a group with chickens, mice, and frogs — in other words, animals with brains. Amazon mollies have brains, too, but the cluster analysis put them in a separate group close to our own. Because they don’t reproduce by themselves, they’re set a little apart from us. Further away, the scientists found a cluster made up of brainless things, such as plants and free‐living bacteria. In a third group was a cluster of red blood cells and other cell‐like things that can’t live on their own.
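
[An editorial sketch, not from Zimmer’s book or the Lund study: the kind of family-resemblance grouping described above can be illustrated in a few lines of Python. The items, traits, and 0/1 judgments below are invented for illustration, and the Lund team’s actual data and method may differ.]

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy trait table: rows are things, columns are traits judged to apply (1) or not (0).
# Columns: order, DNA, metabolism, reproduces on its own, brain.
things = ["human", "chicken", "bacterium", "red blood cell", "virus", "snowflake"]
X = np.array([
    [1, 1, 1, 1, 1],  # human
    [1, 1, 1, 1, 1],  # chicken
    [1, 1, 1, 1, 0],  # bacterium
    [1, 0, 1, 0, 0],  # red blood cell
    [1, 1, 0, 0, 0],  # virus
    [1, 0, 0, 0, 0],  # snowflake
])

# Jaccard distance between trait profiles, then average-linkage hierarchical clustering.
tree = linkage(pdist(X, metric="jaccard"), method="average")

# Cut the tree into a handful of groups: things with similar profiles land together.
for name, label in zip(things, fcluster(tree, t=3, criterion="maxclust")):
    print(f"{name}: cluster {label}")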

Furthest away from us were things that are commonly not considered alive. One cluster included viruses and prions, which are deformed proteins that can force other proteins to take their shape. Another included snowflakes, clay crystals, and other things that don’t replicate in a lifelike way.

The Lund researchers found that they could sort things pretty well into the living and the nonliving without getting tied up in an argument over the perfect definition of life. They propose that we can call something alive if it has a number of properties that are associated with being alive. It doesn’t have to have all those properties, nor does it even need exactly the same set found in any other living thing. Family resemblances are enough.

One philosopher has taken a far more radical stand. Carol Cleland argues that there’s no point in searching for a definition of life or even just a convenient stand‐in for one. It’s actually bad for science, she maintains, because it keeps us from reaching a deeper understanding about what it means to be alive. Cleland’s contempt for definitions is so profound that some of her fellow philosophers have taken issue with her. Kelly Smith has called Cleland’s ideas “dangerous.”

Cleland had a slow evolution into a firebrand. When she enrolled in the University of California, Santa Barbara, she started off studying physics. “I was a klutz in the lab, and my experiments never turned out right,” she later told an interviewer. From physics she turned to geology, and while she liked the wild places that the research took her to, she didn’t like feeling isolated as a woman in the male‐dominated field. She discovered philosophy in her junior year and was soon grappling with deep questions about logic. After graduating college and spending a year working as a software engineer, she went to Brown University to earn a Ph.D. in philosophy.

In graduate school Cleland mulled space and time, cause and effect.

When Cleland finished grad school, she moved on to subjects that were easier to talk about at dinner parties. She worked at Stanford University for a time, contemplating the logic of computer programs. She then became an assistant professor at the University of Colorado, where she remained for the rest of her career.

In Boulder, Cleland turned her attention to the nature of science itself. She examined how some scientists, like physicists, could run experiments over and over again, while others, like geologists, couldn’t replay millions of years of history. It was while she was reflecting about these differences that she learned about a Martian rock in Antarctica that was posing a philosophical conundrum of its own.

[The Martian rock, a meteorite designated Allan Hills 84001, was examined in 1996 by a NASA team led by David McKay. They reported seeing signs of ancient life in it, including microbial fossils, but most scientists dismissed the evidence as too ambiguous to be credible.]

A lot of the arguments over Allan Hills 84001 had less to do with the rock itself than with the right way to do science. Some researchers thought the NASA team had done an admirable job of studying it, but others thought it was ridiculous to conclude from their findings that the meteorite might contain fossils. The planetary scientist Bruce Jakosky, one of Cleland’s colleagues at the University of Colorado, decided to organize a public discussion where the two sides could air their views. But he realized that judging Allan Hills 84001 required more than running some experiments to measure magnetic minerals. It demanded thinking through how we make scientific judgments. He asked Cleland to join the event, to talk about Allan Hills 84001 as a philosopher.

What started as a quick prep for a talk turned into a dive into the philosophy of extraterrestrial life. Cleland concluded that the fight over Allan Hills 84001 sprang from the divide between experimental and historical sciences. The critics made the mistake of treating the meteorite study as experimental science. It was absurd to expect McKay’s team to replay history. They couldn’t fossilize microbes on Mars for 4 billion years and see if they matched Allan Hills 84001. They couldn’t hurl a thousand asteroids at a thousand copies of Mars and see what came our way.

Cleland concluded that the NASA team had carried out good historical science, comparing explanations for the ones that explained their evidence best. “The martian‐life hypothesis is a very good candidate for being the best explanation of the structural and chemical features of the martian meteorite,” she wrote in 1997 in the Planetary Report.

Cleland’s work on the meteorite impressed Jakosky so much that he invited her in 1998 to join one of the teams at NASA’s newly created Astrobiology Institute. In the years that followed, Cleland developed a philosophical argument for what the science of astrobiology should look like. She informed her ideas by spending time with scientists doing different kinds of research that fit under the umbrella of astrobiology. She traveled around the Australian outback with a paleontologist searching for clues to how giant mammals went extinct 40,000 years ago. She went to Spain to learn how geneticists sequence DNA. And she spent a lot of time at scientific meetings, roaming from talk to talk. “I felt like a kid in a candy store,” she once told me.

But sometimes the scientists Cleland spent time with set off her philosophical alarms. “Everybody was working with a definition of life,” she recalled. NASA’s definition, only a few years old at that point, was especially popular.

As a philosopher, Cleland recognized that the scientists were making a mistake. Their error didn’t have to do with determinate attributes or some other fine philosophical point understood only by a few logicians. It was a fundamental blunder that got in the way of the science itself. Cleland laid out the nature of this mistake in a paper, and in 2001 she traveled to Washington, D.C., to deliver it at a meeting of the American Association for the Advancement of Science. She stood up before an audience made up mostly of scientists, and told them it was pointless to try to find a definition of life.


“There was an explosion,” Cleland recalled. “Everyone was yelling at me. It was really amazing. Everyone had their pet definitions and wanted to air them. And here I told them the whole definition project was worthless.”

Fortunately, some people who heard Cleland talk thought she was onto something. She began collaborating with astrobiologists to explore the implications of her ideas. Over the course of two decades she published a series of papers, culminating in a book, The Quest for a Universal Theory of Life.

The trouble that scientists had with defining life had nothing to do with the particulars of life’s hallmarks such as homeostasis or evolution. It had to do with the nature of definitions themselves — something that scientists rarely stopped to consider. “Definitions,” Cleland wrote, “are not the proper tools for answering the scientific question ‘what is life?’”

Definitions serve to organize our concepts. The definition of, say, a bachelor is straightforward: an unmarried man. If you’re a man and you’re unmarried, you are — by definition — a bachelor. Being a man is not enough to make you a bachelor, nor is being unmarried. As for what it means to be a man, well, that can get complicated. And marriage has its own complexity. But we can define “bachelor” without getting bogged down in those messy matters. The word simply links these concepts in a precise way. And because definitions have such a narrow job to do, we can’t revise them through scientific investigation. There is simply no way that we could ever discover that we were wrong about the definition of a bachelor as being an unmarried man.

Life is different. It is not the sort of thing that can be defined simply by linking together concepts. As a result, it’s futile to search for a laundry list of features that will turn out to be the real definition of life. “We don’t want to know what the word life means to us,” Cleland said. “We want to know what life is.” And if we want to satisfy our desire, Cleland argues, we need to give up our search for a definition.

From the book Life’s Edge: The Search for What It Means to Be Alive by Carl Zimmer, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2021 by Carl Zimmer.