Tuesday, May 22, 2018




Self-verifying theories

[[So all the philosophical baloney about the impossibility of knowing that you are consistent etc. is nonsense.....]]

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Self-verifying_theories
Self-verifying theories are consistent first-order systems of arithmetic much weaker than Peano arithmetic that are capable of proving their own consistency. Dan Willard was the first to investigate their properties, and he has described a family of such systems. According to Gödel's incompleteness theorem, these systems cannot contain the theory of Peano arithmetic, and in fact, not even its weak fragment Robinson arithmetic; nonetheless, they can contain strong theorems.
In outline, the key to Willard's construction of his system is to formalise enough of the Gödel machinery to talk about provability internally without being able to formalise diagonalisation. Diagonalisation depends upon being able to prove that multiplication is a total function (and in the earlier versions of the result, addition also). Addition and multiplication are not function symbols of Willard's language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these. Here, one cannot prove the $\Pi_2$ sentence expressing totality of multiplication:
\[ \forall u\, \forall v\, \exists w\; A(u, v, w), \]
where $A$ is the three-place predicate which stands for $u \cdot v = w$. When the operations are expressed in this way, provability of a given sentence can be encoded as an arithmetic sentence describing termination of an analytic tableau. Provability of consistency can then simply be added as an axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to ordinary arithmetic.
We can add any true $\Pi_1$ sentence of arithmetic to the theory and still remain consistent.
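For readers unfamiliar with the notation, here is a standard gloss on the arithmetical hierarchy (my addition, not part of the Wikipedia article): a $\Pi_1$ sentence has only universal unbounded quantifiers, while the totality statement above is $\Pi_2$:

```latex
% Shapes of the two sentence classes used above (phi contains only bounded quantifiers):
\[
  \Pi_1:\ \forall x\, \varphi(x)
  \qquad\qquad
  \Pi_2:\ \forall x\, \exists y\, \varphi(x, y)
\]
% Consistency statements are Pi_1 ("no number codes a proof of a contradiction"),
% which is why adding a true Pi_1 sentence -- or consistency itself, as above --
% is the safe case.
```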

Monday, May 21, 2018





What Does Quantum Physics Actually Tell Us About the World?

https://www.nytimes.com/2018/05/08/books/review/adam-becker-what-is-real.html
May 8, 2018
WHAT IS REAL? 
The Unfinished Quest for the Meaning of Quantum Physics
By Adam Becker
370 pp. Basic Books. $32.
Are atoms real? Of course they are. Everybody believes in atoms, even people who don’t believe in evolution or climate change. If we didn’t have atoms, how could we have atomic bombs? But you can’t see an atom directly. And even though atoms were first conceived and named by ancient Greeks, it was not until the last century that they achieved the status of actual physical entities — real as apples, real as the moon.
The first proof of atoms came from 26-year-old Albert Einstein in 1905, the same year he proposed his theory of special relativity. Before that, the atom served as an increasingly useful hypothetical construct. At the same time, Einstein defined a new entity: a particle of light, the “light quantum,” now called the photon. Until then, everyone considered light to be a kind of wave. It didn’t bother Einstein that no one could observe this new thing. “It is the theory which decides what we can observe,” he said.
Which brings us to quantum theory. The physics of atoms and their ever-smaller constituents and cousins is, as Adam Becker reminds us more than once in his new book, “What Is Real?,” “the most successful theory in all of science.” Its predictions are stunningly accurate, and its power to grasp the unseen ultramicroscopic world has brought us modern marvels. But there is a problem: Quantum theory is, in a profound way, weird. It defies our common-sense intuition about what things are and what they can do.
“Figuring out what quantum physics is saying about the world has been hard,” Becker says, and this understatement motivates his book, a thorough, illuminating exploration of the most consequential controversy raging in modern science.
The debate over the nature of reality has been growing in intensity for more than a half-century; it generates conferences and symposiums and enough argumentation to fill entire journals. Before he died, Richard Feynman, who understood quantum theory as well as anyone, said, “I still get nervous with it...I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” The problem is not with using the theory — making calculations, applying it to engineering tasks — but in understanding what it means. What does it tell us about the world?

From one point of view, quantum physics is just a set of formalisms, a useful tool kit. Want to make better lasers or transistors or television sets? The Schrödinger equation is your friend. The trouble starts only when you step back and ask whether the entities implied by the equation can really exist. Then you encounter problems that can be described in several familiar ways:
Wave-particle duality. Everything there is — all matter and energy, all known forces — behaves sometimes like waves, smooth and continuous, and sometimes like particles, rat-a-tat-tat. Electricity flows through wires, like a fluid, or flies through a vacuum as a volley of individual electrons. Can it be both things at once?
The uncertainty principle. Werner Heisenberg famously discovered that when you measure the position (let’s say) of an electron as precisely as you can, you find yourself more and more in the dark about its momentum. And vice versa. You can pin down one or the other but not both.
The measurement problem. Most of quantum mechanics deals with probabilities rather than certainties. A particle has a probability of appearing in a certain place. An unstable atom has a probability of decaying at a certain instant. But when a physicist goes into the laboratory and performs an experiment, there is a definite outcome. The act of measurement — observation, by someone or something — becomes an inextricable part of the theory.
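For reference, the quantitative content of the uncertainty principle mentioned above is compact enough to state in one line (the standard textbook form, not spelled out in the review):

```latex
% Heisenberg's uncertainty relation: the product of the spreads in position (x)
% and momentum (p) is bounded below by the reduced Planck constant.
\[
  \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
\]
```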


The strange implication is that the reality of the quantum world remains amorphous or indefinite until scientists start measuring. Schrödinger’s cat, as you may have heard, is in a terrifying limbo, neither alive nor dead, until someone opens the box to look. Indeed, Heisenberg said that quantum particles “are not as real; they form a world of potentialities or possibilities rather than one of things or facts.”
This is disturbing to philosophers as well as physicists. It led Einstein to say in 1952, “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.”
So quantum physics — quite unlike any other realm of science — has acquired its own metaphysics, a shadow discipline tagging along like the tail of a comet. You can think of it as an “ideological superstructure” (Heisenberg’s phrase). This field is called quantum foundations, which is inadvertently ironic, because the point is that precisely where you would expect foundations you instead find quicksand.
Competing approaches to quantum foundations are called “interpretations,” and nowadays there are many. The first and still possibly foremost of these is the so-called Copenhagen interpretation. “Copenhagen” is shorthand for Niels Bohr, whose famous institute there served as unofficial world headquarters for quantum theory beginning in the 1920s. In a way, Copenhagen is an anti-interpretation. “It is wrong to think that the task of physics is to find out how nature is,” Bohr said. “Physics concerns what we can say about nature.” Nothing is definite in Bohr’s quantum world until someone observes it. Physics can help us order experience but should not be expected to provide a complete picture of reality. The popular four-word summary of the Copenhagen interpretation is: “Shut up and calculate!”
For much of the 20th century, when quantum physicists were making giant leaps in solid-state and high-energy physics, few of them bothered much about foundations. But the philosophical difficulties were always there, troubling those who cared to worry about them.
Becker sides with the worriers. He leads us through an impressive account of the rise of competing interpretations, grounding them in the human stories, which are naturally messy and full of contingencies. He makes a convincing case that it’s wrong to imagine the Copenhagen interpretation as a single official or even coherent statement. It is, he suggests, a “strange assemblage of claims.”
An American physicist, David Bohm, devised a radical alternative at midcentury, visualizing “pilot waves” that guide every particle, an attempt to eliminate the wave-particle duality. For a long time, he was mainly lambasted or ignored, but variants of the Bohmian interpretation have supporters today. Other interpretations rely on “hidden variables” to account for quantities presumed to exist behind the curtain. Perhaps the most popular lately — certainly the most talked about — is the “many-worlds interpretation”: Every quantum event is a fork in the road, and one way to escape the difficulties is to imagine, mathematically speaking, that each fork creates a new universe.
So in this view, Schrödinger’s cat is alive and well in one universe while in another she goes to her doom. And we, too, should imagine countless versions of ourselves. Everything that can happen does happen, in one universe or another. “The universe is constantly splitting into a stupendous number of branches,” said the theorist Bryce DeWitt, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.”
This is ridiculous, of course. “A heavy load of metaphysical baggage,” John Wheeler called it. How could we ever prove or disprove such a theory? But if you think the many-worlds idea is easily dismissed, plenty of physicists will beg to differ. They will tell you that it could explain, for example, why quantum computers (which admittedly don’t yet quite exist) could be so powerful: They would delegate the work to their alter egos in other universes.
Is any of this real? At the risk of spoiling its suspense, I will tell you that this book does not propose a definite answer to its title question. You weren’t counting on one, were you? The story is far from finished.
When scientists search for meaning in quantum physics, they may be straying into a no-man’s-land between philosophy and religion. But they can’t help themselves. They’re only human. “If you were to watch me by day, you would see me sitting at my desk solving Schrödinger’s equation...exactly like my colleagues,” says Sir Anthony Leggett, a Nobel Prize winner and pioneer in superfluidity. “But occasionally at night, when the full moon is bright, I do what in the physics community is the intellectual equivalent of turning into a werewolf: I question whether quantum mechanics is the complete and ultimate truth about the physical universe.”
James Gleick







Sam Harris and the Myth of Perfectly Rational Thought



https://www.wired.com/story/sam-harris-and-the-myth-of-perfectly-rational-thought/


Robert Wright

Sam Harris, one of the original members of the group dubbed the “New Atheists” (by Wired!) 12 years ago, says he doesn’t like tribalism. During his recent, much-discussed debate with Vox founder Ezra Klein about race and IQ, Harris declared that tribalism “is a problem we must outgrow.”
But apparently Harris doesn’t think he is part of that “we.” After he accused Klein of fomenting a “really indissoluble kind of tribalism” in the form of identity politics, and Klein replied that Harris exhibits his own form of tribalism, Harris said coolly, “I know I’m not thinking tribally in this respect.”
Not only is Harris capable of transcending tribalism—so is his tribe! Reflecting on his debate with Klein, Harris said that his own followers care “massively about following the logic of a conversation” and probe his arguments for signs of weakness, whereas Klein’s followers have more primitive concerns: “Are you making political points that are massaging the outraged parts of our brains? Do you have your hands on our amygdala and are you pushing the right buttons?”
Of the various things that critics of the New Atheists find annoying about them—and here I speak from personal experience—this ranks near the top: the air of rationalist superiority they often exude. Whereas the great mass of humankind remains mired in pernicious forms of illogical thought—chief among them, of course, religion—people like Sam Harris beckon from above: All of us, if we will just transcend our raw emotions and rank superstitions, can be like him, even if precious few of us are now.
We all need role models, and I’m not opposed in principle to Harris’s being mine. But I think his view of himself as someone who can transcend tribalism—and can know for sure that he’s transcending it—may reflect a crude conception of what tribalism is. The psychology of tribalism doesn’t consist just of rage and contempt and comparably conspicuous things. If it did, then many of humankind’s messes—including the mess American politics is in right now—would be easier to clean up.
What makes the psychology of tribalism so stubbornly powerful is that it consists mainly of cognitive biases that easily evade our awareness. Indeed, evading our awareness is something cognitive biases are precision-engineered by natural selection to do. They are designed to convince us that we’re seeing clearly, and thinking rationally, when we’re not. And Harris’s work features plenty of examples of his cognitive biases working as designed, warping his thought without his awareness. He is a case study in the difficulty of transcending tribal psychology, the importance of trying to, and the folly of ever feeling sure we’ve succeeded.
To be clear: I’m not saying Harris’s cognition is any more warped by tribalism than, say, mine or Ezra Klein’s. But somebody’s got to serve as an example of how deluded we all are, and who better than someone who thinks he’s not a good example?
There’s another reason Harris makes a good Exhibit A. This month Bari Weiss, in a now famous (and, on the left, infamous) New York Times piece, celebrated a coalescing group of thinkers dubbed the “Intellectual Dark Web”—people like Harris and Jordan Peterson and Christina Hoff Sommers, people for whom, apparently, the ideal of fearless truth telling trumps tribal allegiance. Andrew Sullivan, writing in support of Weiss and in praise of the IDW, says it consists of “nontribal thinkers.” OK, let’s take a look at one of these thinkers and see how nontribal he is.
Examples of Harris’s tribal psychology date back to the book that put him on the map: The End of Faith. The book exuded his conviction that the reason 9/11 happened—and the reason for terrorism committed by Muslims in general—was simple: the religious beliefs of Muslims. As he has put it: “We are not at war with ‘terrorism.’ We are at war with Islam.”
Believing that the root of terrorism is religion requires ruling out other root causes, so Harris set about doing that. In his book he listed such posited causes as “the Israeli occupation of the West Bank and Gaza…the collusion of Western powers with corrupt dictatorships…the endemic poverty and lack of economic opportunity that now plague the Arab world.”
Then he dismissed them. He wrote that “we can ignore all of these things—or treat them only to place them safely on the shelf—because the world is filled with poor, uneducated, and exploited peoples who do not commit acts of terrorism, indeed who would never commit terrorism of the sort that has become so commonplace among Muslims.”
If you’re tempted to find this argument persuasive, I recommend that you first take a look at a different instance of the same logic. Suppose I said, “We can ignore the claim that smoking causes lung cancer because the world is full of people who smoke and don’t get lung cancer.” You’d spot the fallacy right away: Maybe smoking causes lung cancer under some circumstances but not others; maybe there are multiple causal factors—all necessary, but none sufficient—that, when they coincide, exert decisive causal force.
Or, to put Harris’s fallacy in a form that he would definitely recognize: Religion can’t be a cause of terrorism, because the world is full of religious people who aren’t terrorists.
Harris isn’t stupid. So when he commits a logical error this glaring—and when he rests a good chunk of his world view on the error—it’s hard to escape the conclusion that something has biased his cognition.
As for which cognitive bias to blame: A leading candidate would be “attribution error.” Attribution error leads us to resist attempts to explain the bad behavior of people in the enemy tribe by reference to “situational” factors—poverty, enemy occupation, humiliation, peer group pressure, whatever. We’d rather think our enemies and rivals do bad things because that’s the kind of people they are: bad.
With our friends and allies, attribution error works in the other direction. We try to explain their bad behavior in situational terms, rather than attribute it to “disposition,” to the kind of people they are.
You can see why attribution error is an important ingredient of tribalism. It nourishes our conviction that the other tribe is full of deeply bad, and therefore morally culpable, people, whereas members of our tribe deserve little if any blame for the bad things they do.
This asymmetrical attribution of blame was visible in the defense of Israel that Harris famously mounted during Israel’s 2014 conflict with Gaza, in which some 70 Israelis and 2,300 Palestinians died.
Granted, Harris said, Israeli soldiers may have committed war crimes, but that’s because they have “been brutalized…that is, made brutal by” all the fighting they’ve had to do. And this brutalization “is largely due to the character of their enemies.”
Get the distinction? When Israelis do bad things, it’s because of the circumstances they face—in this case repeated horrific conflict that is caused by the bitter hatred emanating from Palestinians. But when Palestinians do bad things—like bitterly hate Israelis—this isn’t the result of circumstance (the long Israeli occupation of Gaza, say, or the subsequent, impoverishing, economic blockade); rather, it’s a matter of the “character” of the Palestinians.
This is attribution error working as designed. It sustains your conviction that, though your team may do bad things, it’s only the other team that’s actually bad. Your badness is “situational,” theirs is “dispositional.”
After Harris said this, and the predictable blowback ensued, he published an annotated version of his remarks in which he hastened to add that he wasn’t justifying war crimes and hadn’t meant to discount “the degree to which the occupation, along with collateral damage suffered in war, has fueled Palestinian rage.”
That’s progress. “But,” he immediately added, “Palestinian terrorism (and Muslim anti-Semitism) is what has made peaceful coexistence thus far impossible.” In other words: Even when the bad disposition of the enemy tribe is supplemented by situational factors, the buck still stops with the enemy tribe. Even when Harris struggles mightily against his cognitive biases, a more symmetrical allocation of blame remains elusive.
Another cognitive bias—probably the most famous—is confirmation bias, the tendency to embrace, perhaps uncritically, evidence that supports your side of an argument and to either not notice, reject, or forget evidence that undermines it. This bias can assume various forms, and one was exhibited by Harris in his exchange with Ezra Klein over political scientist Charles Murray’s controversial views on race and IQ.
Harris and Klein were discussing the “Flynn effect”—the fact that average IQ scores have tended to grow over the decades. No one knows why, but such factors as nutrition and better education are possibilities, and many of the other possibilities also fall under the heading of “improved living conditions.”
So the Flynn effect would seem to underscore the power of environment. Accordingly, people who see the black-white IQ gap as having no genetic component have cited it as reason to expect that the gap could move toward zero as average black living conditions approach average white living conditions. The gap has indeed narrowed, but people like Murray, who believe a genetic component is likely, have asked why it hasn’t narrowed more.
This is the line Harris pursued in an email exchange with Klein before their debate. He wrote that, in light of the Flynn effect, “the mean IQs of African American children who are second- and third-generation upper middle class should have converged with those of the children of upper-middle-class whites, but (as far as I understand) they haven’t.”
Harris’s expectation of such a convergence may seem reasonable at first, but on reflection you realize that it assumes a lot.
It assumes that when African Americans enter the upper middle class—when their income reaches some specified level—their learning environments are in all relevant respects like the environments of whites at the same income level: Their public schools are as good, their neighborhoods are as safe, their social milieus reward learning just as much, their parents are as well educated, they have no more exposure to performance-impairing drugs like marijuana and no less access to performance-enhancing (for test-taking purposes, at least) drugs like Ritalin. And so on.
Klein alluded to this kink in Harris’s argument in an email to Harris: “We know, for instance, that African American families making $100,000 a year tend to live in neighborhoods with the same income demographics as white families making $30,000 a year.”
Harris was here exhibiting a pretty subtle form of confirmation bias. He had seen a fact that seemed to support his side of the argument—the failure of IQ scores of two groups to fully converge—and had embraced it uncritically; he accepted its superficial support of his position without delving deeper and asking any skeptical questions about the support.
I want to emphasize that Klein may here also be under the influence of confirmation bias. He saw a fact that seemed to threaten his views—the failure of IQ scores to fully converge—and didn’t embrace it, but rather viewed it warily, looking for things that might undermine its significance. And when he found such a thing—the study he cited—he embraced that.
And maybe he embraced it uncritically. For all I know it suffers from flaws that he would have looked for and found had it undermined his views. That’s my point: Cognitive biases are so pervasive and subtle that it’s hubristic to ever claim we’ve escaped them entirely.
In addition to exhibiting one side of confirmation bias—uncritically embracing evidence congenial to your world view—Harris recently exhibited a version of the flip side: straining to reject evidence you find unsettling. He did so in discussing the plight of physicist and popular writer Lawrence Krauss, who was recently suspended by Arizona State University after multiple women accused him of sexual predation.
Krauss is an ally of Harris’s in the sense of being not just an atheist, but a “new” atheist. He considers religion not just confused but pernicious and therefore in urgent need of disrespect and ridicule, which he is good at providing.
After the allegations against Krauss emerged, Harris warned against rushing to judgment. I’m in favor of such warnings, but Harris didn’t stop there. He said the following about the website that had first reported the allegations against Krauss: “Buzzfeed is on the continuum of journalistic integrity and unscrupulousness somewhere toward the unscrupulous side.”
So far as I can tell, this isn’t true in any relevant sense. Yes, Buzzfeed has had the kinds of issues that afflict even the most elite journalistic outlets: a firing over plagiarism, an undue-advertiser-influence incident, a you-didn’t-explicitly-warn-us-that-this-conversation-was-on-the-record complaint. And there was a time when Buzzfeed wasn’t really a journalistic outlet at all, but more of a spawning ground for cheaply viral content—a legacy that lives on as a major part of Buzzfeed’s business model and as a parody site called clickhole.
Still, since 2011, when Buzzfeed got serious about news coverage and hired Ben Smith as editor, the journalistic part of its operation has earned mainstream respect. And its investigative piece about Krauss was as thoroughly sourced as #metoo pieces that have appeared in places like the New York Times and the New Yorker.
But you probably shouldn’t take my word for that. I’ve had my contentious conversations with Krauss, and maybe this tension left me inclined to judge allegations against him too generously. In any event, I suspect that if the Buzzfeed piece were about someone Harris has had tensions with (Ezra Klein, maybe, or me), he might have just read it, found it pretty damning, and left it at that. But it was about Krauss—who is, if Harris will pardon the expression, a member of Harris’s tribe.
Most of these examples of tribal thinking are pretty pedestrian—the kinds of biases we all exhibit, usually with less than catastrophic results. Still, it is these and other such pedestrian distortions of thought and perception that drive America’s political polarization today.
For example: How different is what Harris said about Buzzfeed from Donald Trump talking about “fake news CNN”? It’s certainly different in degree. But is it different in kind? I would submit that it’s not.
When a society is healthy, it is saved from all this by robust communication. Individual people still embrace or reject evidence too hastily, still apportion blame tribally, but civil contact with people of different perspectives can keep the resulting distortions within bounds. There is enough constructive cross-tribal communication—and enough agreement on what the credible sources of information are—to preserve some overlap of, and some fruitful interaction between, world views.
Now, of course, we’re in a technological environment that makes it easy for tribes to not talk to each other and seems to incentivize the ridiculing of one another. Maybe there will be long-term fixes for this. Maybe, for example, we’ll judiciously amend our social media algorithms, or promulgate practices that can help tame cognitive biases.
Meanwhile, the closest thing to a cure may be for all of us to try to remember that natural selection has saddled us with these biases—and also to remember that, however hard we try, we’re probably not entirely escaping them. In this view, the biggest threat to America and to the world may be a simple lack of intellectual humility.
Harris, though, seems to think that the biggest threat to the world is religion. I guess these two views could be reconciled if it turned out that only religious people are lacking in intellectual humility. But there’s reason to believe that’s not the case.




Wednesday, May 16, 2018




To Build Truly Intelligent Machines, Teach Them Cause and Effect
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/

Judea Pearl, a pioneering figure in artificial intelligence, argues that AI has been stuck in a decades-long rut. His prescription for progress? Teach machines to understand the question why.

[[Note the critique of the limitations of current AI - DG.]]





May 15, 2018
Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.
Three decades ago, a prime challenge in artificial intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
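To make the malaria example concrete, here is a minimal sketch (my illustration with invented numbers, not Pearl's code) of the diagnostic computation a tiny Bayesian network supports, worked by direct enumeration:

```python
# Minimal Bayesian-network diagnosis sketch (illustrative numbers, not real epidemiology).
# Network: Malaria -> Fever, Malaria -> Aches. We ask P(malaria | fever, aches).

P_MALARIA = 0.01                      # prior for a traveler returning from an endemic region
P_FEVER = {True: 0.90, False: 0.10}   # P(fever | malaria?)
P_ACHES = {True: 0.80, False: 0.20}   # P(aches | malaria?)

def joint(malaria, fever, aches):
    """P(malaria, fever, aches) = P(m) * P(f|m) * P(a|m) -- the network factorization."""
    pm = P_MALARIA if malaria else 1 - P_MALARIA
    pf = P_FEVER[malaria] if fever else 1 - P_FEVER[malaria]
    pa = P_ACHES[malaria] if aches else 1 - P_ACHES[malaria]
    return pm * pf * pa

# Bayes' rule: P(m | f, a) = P(m, f, a) / sum over m' of P(m', f, a)
num = joint(True, True, True)
den = num + joint(False, True, True)
print(f"P(malaria | fever, aches) = {num / den:.3f}")   # ~0.267
```

Even with a 90 percent chance of fever given malaria, the low prior keeps the posterior modest; that kind of transparent probabilistic bookkeeping is what Bayesian networks made tractable at scale.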
But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.
In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to inquire how the causal relationships would change given some kind of intervention — which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible — a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.
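The hierarchy Pearl has in mind, which the book calls the ladder of causation, can be written roughly as follows (the do-notation is Pearl's; the malaria example is mine):

```latex
% Pearl's three rungs, from weakest to strongest:
\begin{align*}
  &\text{1. Association (seeing):}       && P(\mathrm{fever} \mid \mathrm{malaria}) \\
  &\text{2. Intervention (doing):}       && P(\mathrm{fever} \mid do(\mathrm{malaria})) \\
  &\text{3. Counterfactual (imagining):} && P(\mathrm{fever}_{\mathrm{no\ malaria}} \mid \mathrm{malaria}, \mathrm{fever})
\end{align*}
```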
Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will — and for evil. Quanta Magazine sat down with Pearl at a recent conference in San Diego and later held a follow-up interview with him by phone. An edited and condensed version of those conversations follows.
Why is your new book called “The Book of Why”?
It means to be a summary of the work I’ve been doing the past 25 years about cause and effect, what it means in one’s life, its applications, and how we go about coming up with answers to questions that are inherently causal. Oddly, those questions have been abandoned by science. So I’m here to make up for the neglect of science.


That’s a dramatic thing to say, that science has abandoned cause and effect. Isn’t that exactly what all of science is about?
Of course, but you cannot see this noble aspiration in scientific equations. The language of algebra is symmetric: If X tells us about Y, then Y tells us about X. I’m talking about deterministic relationships. There’s no way to write in mathematics a simple fact — for example, that the upcoming storm causes the barometer to go down, and not the other way around.
Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X. It sounds like a terrible thing to say against science, I know. If I were to say it to my mother, she’d slap me.
But science is more forgiving: Seeing that we lack a calculus for asymmetrical relations, science encourages us to create one. And this is where mathematics comes in. It turned out to be a great thrill for me to see that a simple calculus of causation solves problems that the greatest statisticians of our time deemed to be ill-defined or unsolvable. And all this with the ease and fun of finding a proof in high-school geometry.
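Pearl's barometer example can be simulated directly. Below is a minimal sketch (mine, not from the interview) of a structural causal model in which observing a low barometer is evidence of a storm, while forcing the barometer low by intervention tells you nothing:

```python
import random

# Structural causal model: storm -> barometer reading.
# Compare observation P(storm | low barometer) with intervention P(storm | do(low barometer)).

def sample(do_barometer=None):
    storm = random.random() < 0.3                                   # exogenous cause
    barometer_low = storm if random.random() < 0.9 else not storm   # noisy effect
    if do_barometer is not None:
        barometer_low = do_barometer        # intervention overwrites the reading
    return storm, barometer_low

random.seed(0)
N = 100_000

obs = [sample() for _ in range(N)]
p_storm_given_low = sum(s for s, b in obs if b) / sum(1 for _, b in obs if b)

interv = [sample(do_barometer=True) for _ in range(N)]
p_storm_given_do_low = sum(s for s, _ in interv) / N

print(f"P(storm | barometer low)     ~ {p_storm_given_low:.2f}")    # ~0.79, well above the prior
print(f"P(storm | do(barometer low)) ~ {p_storm_given_do_low:.2f}") # ~0.30, just the prior
```

The asymmetry lives in the model's structure: the intervention overwrites the barometer after the storm variable is drawn, severing the arrow from cause to effect, which is exactly the distinction that conditioning alone cannot express.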
You made your name in AI a few decades ago by teaching machines how to reason probabilistically. Explain what was going on in AI at the time.
The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.
Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.
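The complexity gain Pearl describes can be made concrete with standard textbook accounting (my summary, not his words): a full joint distribution over $n$ binary variables needs $2^n - 1$ independent numbers, while a network in which each variable has at most $k$ parents needs at most $n \cdot 2^k$:

```latex
% Bayesian-network factorization: each variable depends only on its parents pa(x_i).
\[
  P(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),
  \qquad
  \underbrace{2^{n} - 1}_{\text{full joint}}
  \;\;\longrightarrow\;\;
  \underbrace{n \cdot 2^{k}}_{\text{at most } k \text{ parents per node}}
\]
```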
Yet in your new book you describe yourself as an apostate in the AI community today. In what sense?
In the sense that as soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem. All they want is to predict well and to diagnose well.
I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.
I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.
People are excited about the possibilities for AI. You’re not?
As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.
No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition. If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say “I should have done better,” as you and I do. And we thus lose an important channel of communication.
What are the prospects for having machines that share our intuition about cause and effect?
We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.
The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.
Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models. 
When you share these ideas with people working in AI today, how do they react?
AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.
Are you suggesting there’s a trend developing away from machine learning?
Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?
That was the last thing I wanted to ask you.
I’m glad you didn’t ask me about free will.
In that case, what do you think about free will?
We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.
In what way?
You have the sensation of free will; evolution has equipped us with this sensation. Evidently, it serves some computational function.
Will it be obvious when robots have free will?
I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.
Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?
It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.
So how will we know when AI is capable of committing evil?
When it is obvious for us that there are software components that the robot ignores, consistently ignores. When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.


Tuesday, May 15, 2018


How a new black hole paradox has set the physics world ablaze.
Alice and Bob Meet the Wall of Fire

[Illustration by NASA/JPL-Caltech: a galaxy with a supermassive black hole shooting out jets of radio waves.]
Alice and Bob, beloved characters of various thought experiments in quantum mechanics, are at a crossroads. The adventurous, rather reckless Alice jumps into a very large black hole, leaving a presumably forlorn Bob outside the event horizon — a black hole’s point of no return, beyond which nothing, not even light, can escape.
Conventionally, physicists have assumed that if the black hole is large enough, Alice won’t notice anything unusual as she crosses the horizon. In this scenario, colorfully dubbed “No Drama,” the gravitational forces won’t become extreme until she approaches a point inside the black hole called the singularity. There, the gravitational pull will be so much stronger on her feet than on her head that Alice will be “spaghettified.”
Now a new hypothesis is giving poor Alice even more drama than she bargained for. If this alternative is correct, as the unsuspecting Alice crosses the event horizon, she will encounter a massive wall of fire that will incinerate her on the spot. As unfair as this seems for Alice, the scenario would also mean that at least one of three cherished notions in theoretical physics must be wrong.
When Alice’s fiery fate was proposed this summer, it set off heated debates among physicists, many of whom were highly skeptical. “My initial reaction was, ‘You’ve got to be kidding,’” admitted Raphael Bousso, a physicist at the University of California, Berkeley. He thought a forceful counterargument would quickly emerge and put the matter to rest. Instead, after a flurry of papers debating the subject, he and his colleagues realized that this had the makings of a mighty fine paradox.
The ‘Menu From Hell’
Paradoxes in physics have a way of clarifying key issues. At the heart of this particular puzzle lies a conflict between three fundamental postulates beloved by many physicists. The first, based on the equivalence principle of general relativity, leads to the No Drama scenario: Because Alice is in free fall as she crosses the horizon, and there is no difference between free fall and inertial motion, she shouldn’t feel extreme effects of gravity. The second postulate is unitarity, the assumption, in keeping with a fundamental tenet of quantum mechanics, that information that falls into a black hole is not irretrievably lost. Lastly, there is what might be best described as “normality,” namely, that physics works as expected far away from a black hole even if it breaks down at some point within the black hole — either at the singularity or at the event horizon.
Together, these concepts make up what Bousso ruefully calls “the menu from hell.” To resolve the paradox, one of the three must be sacrificed, and nobody can agree on which one should get the ax.
Physicists don’t lightly abandon time-honored postulates. That’s why so many find the notion of a wall of fire downright noxious. “It is odious,” John Preskill of the California Institute of Technology declared earlier this month at an informal workshop organized by Stanford University’s Leonard Susskind. For two days, 50 or so physicists engaged in a spirited brainstorming session, tossing out all manner of crazy ideas to try to resolve the paradox, punctuated by the rapid-fire tap-tap-tap of equations being scrawled on a blackboard. But despite the collective angst, even the firewall’s fiercest detractors have yet to find a satisfactory solution to the conundrum.

According to Joseph Polchinski, a string theorist at the University of California, Santa Barbara, the simplest solution is that the equivalence principle breaks down at the event horizon, thereby giving rise to a firewall. Polchinski is a co-author of the paper that started it all, along with Ahmed Almheiri, Donald Marolf and James Sully — a group often referred to as “AMPS.” Even Polchinski thinks the idea is a little crazy. It’s a testament to the knottiness of the problem that a firewall is the least radical potential solution.
If there is an error in the firewall argument, the mistake is not obvious. That’s the hallmark of a good scientific paradox. And it comes at a time when theorists are hungry for a new challenge: The Large Hadron Collider has failed to turn up any data hinting at exotic physics beyond the Standard Model. “In the absence of data, theorists thrive on paradox,” Polchinski quipped.
If AMPS is wrong, according to Susskind, it is wrong in a really interesting way that will push physics forward, hopefully toward a robust theory of quantum gravity. Black holes are interesting to physicists, after all, because both general relativity and quantum mechanics can apply, unlike in the rest of the universe, where objects are governed by quantum mechanics at the subatomic scale and by general relativity on the macroscale. The two “rule books” work well enough in their respective regimes, but physicists would love to combine them to shed light on anomalies like black holes and, by extension, the origins of the universe.
An Entangled Paradox
The issues are complicated and subtle — if they were simple, there would be no paradox — but a large part of the AMPS argument hinges on the notion of monogamous quantum entanglement: You can only have one kind of entanglement at a time. AMPS argues that two different kinds of entanglement are needed in order for all three postulates on the “menu from hell” to be true. Since the rules of quantum mechanics don’t allow you to have both entanglements, one of the three postulates must be sacrificed.
Entanglement — which Albert Einstein ridiculed as “spooky action at a distance” — is a well-known feature of quantum mechanics (in the thought experiment, Alice and Bob represent an entangled particle pair). When subatomic particles collide, they can become invisibly connected, though they may be physically separated. Even at a distance, they are inextricably interlinked and act like a single object. So knowledge about one partner can instantly reveal knowledge about the other. The catch is that you can only have one entanglement at a time.
Under classical physics, as Preskill explained on Caltech’s Quantum Frontiers blog, Alice and Bob can both have copies of the same newspaper, which gives them access to the same information. Sharing this bond of sorts makes them “strongly correlated.” A third person, “Carrie,” can also buy a copy of that newspaper, which gives her equal access to the information it contains, thereby forging a correlation with Bob without weakening his correlation with Alice. In fact, any number of people can buy a copy of that same newspaper and become strongly correlated with one another.

[Illustration courtesy of John Preskill: with quantum correlations, Bob can be highly entangled with Alice or with Carrie, but not both.]
But with quantum correlations, that is not the case. For Bob and Alice to be maximally entangled, their respective newspapers must have the same orientation, whether right side up, upside down or sideways. So long as the orientation is the same, Alice and Bob will have access to the same information. “Because there is just one way to read a classical newspaper and lots of ways to read a quantum newspaper, the quantum correlations are stronger than the classical ones,” Preskill said. That makes it impossible for Bob to become as strongly entangled with Carrie as he is with Alice without sacrificing some of his entanglement with Alice.
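Monogamy can even be checked numerically. Here is a minimal sketch (my illustration, not from the article) using the standard Wootters concurrence, which scores two-qubit entanglement from 0 (none) to 1 (maximal): when Alice and Bob share a maximally entangled pair, Bob's joint state with Carrie shows no entanglement at all, and the three-qubit GHZ state buys its global correlations by zeroing out every pairwise concurrence.

```python
import numpy as np

# Monogamy of entanglement, checked numerically.
# Qubits: A = Alice (0), B = Bob (1), C = Carrie (2).

SY2 = np.kron(np.array([[0, -1j], [1j, 0]]), np.array([[0, -1j], [1j, 0]]))  # sigma_y (x) sigma_y

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (0 = no entanglement, 1 = maximal)."""
    R = rho @ SY2 @ rho.conj() @ SY2
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced_pair(psi, drop):
    """Reduced density matrix of a 3-qubit pure state after tracing out qubit `drop`."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
    return np.trace(rho, axis1=drop, axis2=drop + 3).reshape(4, 4)

# Bell pair between Alice and Bob, Carrie in a product state: (|00> + |11>)/sqrt(2) (x) |0>
bell_c = np.zeros(8)
bell_c[0b000] = bell_c[0b110] = 1 / np.sqrt(2)
print(f"AB concurrence: {concurrence(reduced_pair(bell_c, 2)):.2f}")  # 1.00 (maximal)
print(f"BC concurrence: {concurrence(reduced_pair(bell_c, 0)):.2f}")  # 0.00 (none)

# GHZ state (|000> + |111>)/sqrt(2): globally entangled, yet every *pair* scores zero.
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
print(f"AB concurrence in GHZ: {concurrence(reduced_pair(ghz, 2)):.2f}")  # 0.00
```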
This is problematic because there is more than one kind of entanglement associated with a black hole, and under the AMPS hypothesis, the two come into conflict. There is an entanglement between Alice, the in-falling observer, and Bob, the outside observer, which is needed to preserve No Drama. But there is also a second entanglement that emerged from another famous paradox in physics, one related to the question of whether information is lost in a black hole. In the 1970s, Stephen Hawking realized that black holes aren’t completely black. While nothing might seem amiss to Alice as she crosses the event horizon, from Bob’s perspective, the horizon would appear to be glowing like a lump of coal — a phenomenon now known as Hawking radiation.

[Illustration courtesy of Joseph Polchinski: the entanglement of particles in the No Drama scenario. Bob, outside the event horizon (dotted lines), is entangled with Alice just inside the event horizon, at point (b). Over time Alice (b’) drifts toward the singularity (squiggly line) while Bob (b”) remains outside the black hole.]
This radiation results from virtual particle pairs popping out of the quantum vacuum near a black hole. Normally they would collide and annihilate into energy, but sometimes one of the pair is sucked into the black hole while the other escapes to the outside world. The mass of the black hole, which must decrease slightly to counter this effect and ensure that energy is still conserved, gradually winks out of existence. How fast it evaporates depends on the black hole’s size: The bigger it is, the more slowly it evaporates.
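The size dependence has a standard quantitative form (textbook Hawking-radiation accounting, not given in the article): the evaporation time grows as the cube of the mass,

```latex
% Evaporation time of a Schwarzschild black hole of mass M (photon emission only):
\[
  t_{\mathrm{evap}} \;=\; \frac{5120\,\pi\, G^{2} M^{3}}{\hbar\, c^{4}}
\]
```

so a stellar-mass black hole outlives the current age of the universe by dozens of orders of magnitude, which is why evaporation matters here as a thought experiment rather than an observation.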
Hawking assumed that once the radiation evaporated altogether, any information about the black hole’s contents contained in that radiation would be lost. “Not only does God play dice, but he sometimes confuses us by throwing them where they can’t be seen,” he famously declared. He and the Caltech physicist Kip Thorne even made a bet with a dubious Preskill in the 1990s about whether or not information is lost in a black hole. Preskill insisted that information must be conserved; Hawking and Thorne believed that information would be lost. Physicists eventually realized that it is possible to preserve the information at a cost: As the black hole evaporates, the Hawking radiation must become increasingly entangled with the area outside the event horizon. So when Bob observes that radiation, he can extract the information.
But what happens if Bob were to compare his information with Alice’s after she has passed beyond the event horizon? “That would be disastrous,” Bousso explained, “because Bob, the outside observer, is seeing the same information in the Hawking radiation, and if they could talk about it, that would be quantum Xeroxing, which is strictly forbidden in quantum mechanics.”
Physicists, led by Susskind, declared that the discrepancy between these two viewpoints of the black hole is fine so long as it is impossible for Alice and Bob to share their respective information. This concept, called complementarity, simply holds that there is no direct contradiction because no single observer can ever be both inside and outside the event horizon. If Alice crosses the event horizon, sees a star inside that radius and wants to tell Bob about it, general relativity has ways of preventing her from doing so.
Susskind’s argument that information could be recovered without resorting to quantum Xeroxing proved convincing enough that Hawking conceded his bet with Preskill in 2004, presenting the latter with a baseball encyclopedia from which, he said, “information can be retrieved at will.” But perhaps Thorne, who refused to concede, was right to be stubborn.

[Illustration courtesy of Joseph Polchinski: Hawking radiation is the result of virtual particle pairs popping into existence near the event horizon, with one partner falling in and the other escaping. The black hole’s mass decreases as a result and is emitted as radiation.]
Bousso thought complementarity would come to the rescue yet again to resolve the firewall paradox. He soon realized that it was insufficient. Complementarity is a theoretical concept developed to address a specific problem, namely, reconciling the two viewpoints of observers inside and outside the event horizon. But the firewall is just the tiniest bit outside the event horizon, giving Alice and Bob the same viewpoint, so complementarity won’t resolve the paradox.
Toward Quantum Gravity
If they wish to get rid of the firewall and preserve No Drama, physicists need to find a new theoretical insight tailored to this unique situation or concede that perhaps Hawking was right all along, and information is indeed lost, meaning Preskill might have to return his encyclopedia. So it was surprising to find Preskill suggesting that his colleagues at the Stanford workshop at least reconsider the possibility of information loss. Although we don’t know how to make sense of quantum mechanics without unitarity, “that doesn’t mean it can’t be done,” he said. “Look in the mirror and ask yourself: Would I bet my life on unitarity?”
Polchinski argues persuasively that you need Alice and Bob to be entangled to preserve No Drama, and you need the Hawking radiation to be entangled with the area outside the event horizon to conserve quantum information. But you can’t have both. If you sacrifice the entanglement of the Hawking radiation with the area outside the event horizon, you lose information. If you sacrifice the entanglement of Alice and Bob, you get a firewall.
[Video: David Kaplan explores black hole physics and the problem of quantum gravity in this In Theory video. David Kaplan, Petr Stepanek and MK12 for Quanta Magazine; music by Steven Gutheinz.]
That consequence arises from the fact that entanglement between the area outside the event horizon and the Hawking radiation must increase as the black hole evaporates. When roughly half the mass has radiated away, the black hole is maximally entangled and essentially experiences a mid-life crisis. Preskill explained: “It’s as if the singularity, which we expected to find deep inside the black hole, has crept right up to the event horizon when the black hole is old.” And the result of this collision between the singularity and the event horizon is the dreaded firewall.
The mental image of a singularity migrating from deep within a black hole to the event horizon provoked at least one exasperated outburst during the Stanford workshop, a reaction Bousso finds understandable. “We should be upset,” he said. “This is a terrible blow to general relativity.”
Yet for all his skepticism about firewalls, he is thrilled to be part of the debate. “This is probably the most exciting thing that’s happened to me since I entered physics,” he said. “It’s certainly the nicest paradox that’s come my way, and I’m excited to be working on it.”
Alice’s death by firewall seems destined to join the ranks of classic thought experiments in physics. The more physicists learn about quantum gravity, the more different it appears to be from our current picture of how the universe works, forcing them to sacrifice one cherished belief after another on the altar of scientific progress. Now they must choose to sacrifice either unitarity or No Drama, or undertake a radical modification of quantum field theory. Or maybe it’s all just a horrible mistake. Any way you slice it, physicists are bound to learn something new.