Sunday, September 15, 2019


A Famous Argument Against Free Will Has Been Debunked
[[Very technical article, but for those who know of the research it is an excellent example of how a supposedly definitive proof against free will can be based on a pure mistake.]]
For decades, a landmark brain study fed speculation about whether we control our own actions. It seems to have made a classic mistake.
 Sep 10, 2019

The death of free will began with thousands of finger taps. In 1964, two German scientists monitored the electrical activity of a dozen people’s brains. Each day for several months, volunteers came into the scientists’ lab at the University of Freiburg to get wires fixed to their scalp from a showerhead-like contraption overhead. The participants sat in a chair, tucked neatly in a metal tollbooth, with only one task: to flex a finger on their right hand at whatever irregular intervals pleased them, over and over, up to 500 times a visit.

The purpose of this experiment was to search for signals in the participants’ brains that preceded each finger tap. At the time, researchers knew how to measure brain activity that occurred in response to events out in the world—when a person hears a song, for instance, or looks at a photograph—but no one had figured out how to isolate the signs of someone’s brain actually initiating an action.

The experiment’s results came in squiggly, dotted lines, a representation of changing brain waves. In the milliseconds leading up to the finger taps, the lines showed an almost undetectably faint uptick: a wave that rose for about a second, like a drumroll of firing neurons, then ended in an abrupt crash. This flurry of neuronal activity, which the scientists called the Bereitschaftspotential, or readiness potential, was like a gift of infinitesimal time travel. For the first time, they could see the brain readying itself to create a voluntary movement.

This momentous discovery was the beginning of a lot of trouble in neuroscience. Twenty years later, the American physiologist Benjamin Libet used the Bereitschaftspotential to make the case not only that the brain shows signs of a decision before a person acts, but that, incredibly, the brain’s wheels start turning before the person even consciously intends to do something. Suddenly, people’s choices—even a basic finger tap—appeared to be determined by something outside of their own perceived volition.

As a philosophical question, whether humans have control over their own actions had been fought over for centuries before Libet walked into a lab. But Libet introduced a genuine neurological argument against free will. His finding set off a new surge of debate in science and philosophy circles. And over time, the implications have been spun into cultural lore.

Today, the notion that our brains make choices before we are even aware of them pops up in cocktail-party conversation or in a review of Black Mirror. It’s covered by mainstream journalism outlets, including This American Life, Radiolab, and this magazine. Libet’s work is frequently brought up by popular intellectuals such as Sam Harris and Yuval Noah Harari to argue that science has proved humans are not the authors of their actions.

It would be quite an achievement for a brain signal 100 times smaller than major brain waves to solve the problem of free will. But the story of the Bereitschaftspotential has one more twist: It might be something else entirely.


The Bereitschaftspotential was never meant to get entangled in free-will debates. If anything, it was pursued to show that the brain has a will of sorts. The two German scientists who discovered it, a young neurologist named Hans Helmut Kornhuber and his doctoral student Lüder Deecke, had grown frustrated with their era’s scientific approach to the brain as a passive machine that merely produces thoughts and actions in response to the outside world. Over lunch in 1964, the pair decided that they would figure out how the brain works to spontaneously generate an action. “Kornhuber and I believed in free will,” says Deecke, who is now 81 and lives in Vienna.

To pull off their experiment, the duo had to come up with tricks to circumvent limited technology. They had a state-of-the-art computer to measure their participants’ brain waves, but it worked only after it detected a finger tap. So to collect data on what happened in the brain beforehand, the two researchers realized that they could record their participants’ brain activity separately on tape, then play the reels backwards into the computer. This inventive technique, dubbed “reverse-averaging,” revealed the Bereitschaftspotential.

The discovery garnered widespread attention. The Nobel laureate John Eccles and the prominent philosopher of science Karl Popper compared the study’s ingenuity to Galileo’s use of sliding balls for uncovering the laws of motion of the universe. With a handful of electrodes and a tape recorder, Kornhuber and Deecke had begun to do the same for the brain.

What the Bereitschaftspotential actually meant, however, was anyone’s guess. Its rising pattern appeared to reflect the dominoes of neural activity falling one by one on a track toward a person doing something. Scientists explained the Bereitschaftspotential as the electrophysiological sign of planning and initiating an action. Baked into that idea was the implicit assumption that the Bereitschaftspotential causes that action. The assumption was so natural, in fact, no one second-guessed it—or tested it.

Libet, a researcher at the University of California at San Francisco, questioned the Bereitschaftspotential in a different way. Why does it take half a second or so between deciding to tap a finger and actually doing it? He repeated Kornhuber and Deecke’s experiment, but asked his participants to watch a clocklike apparatus so that they could remember the moment they made a decision. The results showed that while the Bereitschaftspotential started to rise about 500 milliseconds before the participants performed an action, they reported their decision to take that action only about 150 milliseconds beforehand. “The brain evidently ‘decides’ to initiate the act” before a person is even aware that decision has taken place, Libet concluded.

To many scientists, it seemed implausible that our conscious awareness of a decision is only an illusory afterthought. Researchers questioned Libet’s experimental design, including the precision of the tools used to measure brain waves and the accuracy with which people could actually recall their decision time. But flaws were hard to pin down. And Libet, who died in 2007, had as many defenders as critics. In the decades since his experiment, study after study has replicated his finding using more modern technology such as fMRI.

But one aspect of Libet’s results sneaked by largely unchallenged: the possibility that what he was seeing was accurate, but that his conclusions were based on an unsound premise. What if the Bereitschaftspotential didn’t cause actions in the first place? A few notable studies did suggest this, but they failed to provide any clue to what the Bereitschaftspotential could be instead. To dismantle such a powerful idea, someone had to offer a real alternative.


In 2010, Aaron Schurger had an epiphany. As a researcher at the National Institute of Health and Medical Research in Paris, Schurger studied fluctuations in neuronal activity, the churning hum in the brain that emerges from the spontaneous flickering of hundreds of thousands of interconnected neurons. This ongoing electrophysiological noise rises and falls in slow tides, like the surface of the ocean—or, for that matter, like anything that results from many moving parts. “Just about every natural phenomenon that I can think of behaves this way. For example, the stock market’s financial time series or the weather,” Schurger says.

From a bird’s-eye view, all these cases of noisy data look like any other noise, devoid of pattern. But it occurred to Schurger that if someone lined them up by their peaks (thunderstorms, market records) and reverse-averaged them in the manner of Kornhuber and Deecke’s innovative approach, the results’ visual representations would look like climbing trends (intensifying weather, rising stocks). There would be no purpose behind these apparent trends—no prior plan to cause a storm or bolster the market. Really, the pattern would simply reflect how various factors had happened to coincide.
“I thought, Wait a minute,” Schurger says. If he applied the same method to the spontaneous brain noise he studied, what shape would he get?  “I looked at my screen, and I saw something that looked like the Bereitschaftspotential.” Perhaps, Schurger realized, the Bereitschaftspotential’s rising pattern wasn’t a mark of a brain’s brewing intention at all, but something much more circumstantial.
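Schurger's intuition is easy to reproduce in a few lines of code: generate structureless, autocorrelated noise, align many traces at their peaks, and average in the spirit of Kornhuber and Deecke's reverse-averaging. The sketch below is a toy illustration with invented parameters, not an analysis of real EEG, but it shows how a ramp emerges from nothing:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_noise(n, window=50):
    """Autocorrelated noise: white noise blurred with a moving average."""
    white = rng.standard_normal(n + window)
    kernel = np.ones(window) / window
    return np.convolve(white, kernel, mode="valid")[:n]

# Generate many independent noise traces, align each at its peak, and keep
# the stretch just before the peak -- reverse-averaging applied to noise.
pre = 500  # samples kept before each aligned peak
epochs = []
for _ in range(1000):
    trace = smooth_noise(3000)
    peak = int(np.argmax(trace))
    if peak >= pre:
        epochs.append(trace[peak - pre:peak])

avg = np.mean(epochs, axis=0)

# The average climbs toward the alignment point, even though no single
# trace contains any plan or trend: a readiness-potential-like ramp
# produced purely by how the traces were lined up.
print(f"start of averaged window: {avg[0]:+.3f}, end: {avg[-1]:+.3f}")
```

The "trend" in the average is an artifact of selecting and aligning the traces by their peaks, which is exactly the circumstantial explanation Schurger had in mind.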

Two years later, Schurger and his colleagues Jacobo Sitt and Stanislas Dehaene proposed an explanation. Neuroscientists know that for people to make any type of decision, our neurons need to gather evidence for each option. The decision is reached when one group of neurons accumulates evidence past a certain threshold. Sometimes, this evidence comes from sensory information from the outside world: If you’re watching snow fall, your brain will weigh the number of falling snowflakes against the few caught in the wind, and quickly settle on the fact that the snow is moving downward.
But Libet’s experiment, Schurger pointed out, provided its subjects with no such external cues. To decide when to tap their fingers, the participants simply acted whenever the moment struck them. Those spontaneous moments, Schurger reasoned, must have coincided with the haphazard ebb and flow of the participants’ brain activity. They would have been more likely to tap their fingers when their motor system happened to be closer to a threshold for movement initiation.

This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices.
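The accumulator account can also be sketched directly. The model below is in the spirit of the Schurger, Sitt, and Dehaene proposal: activity drifts toward a movement threshold, leaks back, and is buffeted by noise, and the "decision to move" is simply the first threshold crossing. The parameter values are illustrative, not the paper's fitted ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Leaky stochastic accumulator: dx = (drift - leak*x) dt + noise dW.
# A "decision to move" is the first time x crosses the threshold.
n_trials, dt, t_max = 200, 0.002, 30.0
drift, leak, noise, threshold = 0.1, 0.5, 0.4, 1.0

x = np.zeros(n_trials)
first_cross = np.full(n_trials, np.nan)
for step in range(int(t_max / dt)):
    x += (drift - leak * x) * dt + noise * np.sqrt(dt) * rng.standard_normal(n_trials)
    newly = (x >= threshold) & np.isnan(first_cross)
    first_cross[newly] = step * dt

times = first_cross[~np.isnan(first_cross)]

# With no external evidence, the moment of "deciding" is set by when the
# background noise happens to carry the accumulator over the threshold,
# so crossing times scatter widely across trials.
print(f"{times.size} of {n_trials} trials crossed; "
      f"times span {times.min():.1f}s to {times.max():.1f}s")
```

Averaging such trials backward from each crossing would again produce a slow rise, for the same selection-and-alignment reason as in the noise demonstration.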

Other recent studies support the idea of the Bereitschaftspotential as a symmetry-breaking signal. In a study of monkeys tasked with choosing between two equal options, a separate team of researchers saw that a monkey’s upcoming choice correlated with its intrinsic brain activity before the monkey was even presented with options.

In a new study under review for publication in the Proceedings of the National Academy of Sciences, Schurger and two Princeton researchers repeated a version of Libet’s experiment. To avoid unintentionally cherry-picking brain noise, they included a control condition in which people didn’t move at all. An artificial-intelligence classifier allowed them to find at what point brain activity in the two conditions diverged. If Libet was right, that should have happened at 500 milliseconds before the movement. But the algorithm couldn’t tell any difference until only about 150 milliseconds before the movement, the same moment at which people reported making decisions in Libet’s original experiment.

In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.


When Schurger first proposed the neural-noise explanation, in 2012, the paper didn’t get much outside attention, but it did create a buzz in neuroscience. Schurger received awards for overturning a long-standing idea. “It showed the Bereitschaftspotential may not be what we thought it was. That maybe it’s in some sense artifactual, related to how we analyze our data,” says Uri Maoz, a computational neuroscientist at Chapman University.

For a paradigm shift, the work met minimal resistance. Schurger appeared to have unearthed a classic scientific mistake, so subtle that no one had noticed it and no amount of replication studies could have solved it, unless they started testing for causality. Now, researchers who questioned Libet and those who supported him are both shifting away from basing their experiments on the Bereitschaftspotential. (The few people I found still holding the traditional view confessed that they had not read Schurger’s 2012 paper.)

“It’s opened my mind,” says Patrick Haggard, a neuroscientist at University College London who collaborated with Libet and reproduced the original experiments.

It’s still possible that Schurger is wrong. Researchers broadly accept that he has deflated Libet’s model of the Bereitschaftspotential, but the inferential nature of brain modeling leaves the door cracked for an entirely different explanation in the future. And unfortunately for popular-science conversation, Schurger’s groundbreaking work does not solve the pesky question of free will any more than Libet’s did. If anything, Schurger has only deepened the question.

Is everything we do determined by the cause-and-effect chain of genes, environment, and the cells that make up our brain, or can we freely form intentions that influence our actions in the world? The topic is immensely complicated, and Schurger’s valiant debunking underscores the need for more precise and better-informed questions.

“Philosophers have been debating free will for millennia, and they have been making progress. But neuroscientists barged in like an elephant into a china shop and claimed to have solved it in one fell swoop,” Maoz says. In an attempt to get everyone on the same page, he is heading the first intensive research collaboration between neuroscientists and philosophers, backed by $7 million from two private foundations, the John Templeton Foundation and the Fetzer Institute. At an inaugural conference in March, attendees discussed plans for designing philosophically informed experiments, and unanimously agreed on the need to pin down the various meanings of “free will.”

In that, they join Libet himself. While he remained firm on his interpretation of his study, he thought his experiment was not enough to prove total determinism—the idea that all events are set in place by previous ones, including our own mental functions. “Given the issue is so fundamentally important to our view of who we are, a claim that our free will is illusory should be based on fairly direct evidence,” he wrote in a 2004 book. “Such evidence is not available.”

Bahar Gholipour is a New York–based tech and science journalist who covers the brain, neuroscience and psychology, genetics and AI.

Monday, September 9, 2019




Bacterial Clones Show Surprising Individuality

Quanta Magazine
September 4, 2019

Genetically identical bacteria should all be the same, but in fact, the cells are stubbornly varied individuals. That heterogeneity may be an important adaptation.

Massed at the starting line, the crowd of runners all looked identical. But this wasn’t your standard 5K. Instead, researchers wanted to test both speed and navigational ability as competitors wound their way through a maze, choosing the right direction at every intersection. At the end of the course, the postdocs Mehdi Salek and Francesco Carrara would be waiting to identify each of the finishers. The postdocs wouldn’t have any medals or a commemorative T-shirt for the winners, however, because their racers weren’t human. They were Escherichia coli bacteria.
That there could be individual winners at all is a notion that has shaken the foundations of microbiology in recent years. Working in the lab of Roman Stocker at the Swiss Federal Institute of Technology Zurich (ETH Zurich), a team of microbiologists and engineers invented this unique endurance event. The cells at the starting line of Stocker’s microbial marathon were genetically identical, which implied, according to decades of biological dogma, that their resulting physiology and behavior should also be more or less the same, as long as all the cells experienced identical environmental conditions. At the DNA level, every E. coli cell had a roughly equal encoded ability to swim and steer through the course. A pack of cells that started the race at the same time would in theory all finish around the same time.
But that’s not what Salek and Carrara found. Instead, some bacteria raced through the maze substantially more quickly than others, largely because of varying aptitude for moving toward higher concentrations of food, a process called chemotaxis. What appeared to Salek and Carrara as a mass of indistinguishable cells at the beginning was actually a conglomerate of unique individuals.
“Bacteria can be genetically identical but phenotypically different,” Carrara said.
This bacterial individuality — known more technically as phenotypic heterogeneity — upends decades of traditional thinking about microbes. Although scientists knew that, for example, antibiotics didn’t always kill every last microbe in a colony of identical clones, both the cause of these differences and the resulting implications remained shrouded in mystery.
[[Stop a moment and take that in. Wasn’t that supposed to be the absolute proof of evolutionary change in bacteria: grow a culture from a single bacterium, and if you later find that an antibiotic kills only some of the culture, there must have been a genetic change that rendered some of the bacteria immune. So I used to think. But you see here that it is not true.]]

Now advances in microscopy and microfluidics (the technology Stocker’s lab used to build the bacterial maze) have begun to lift the veil on an important evolutionary process.
“This has been a relatively overlooked phenomenon,” said Hesper Rego, a microbiologist at the Yale School of Medicine. “The idea that microbial populations could evolve heterogeneity and control it using genetics is a really powerful concept.”
From Populations to Individuals
Ever since the days of Robert Koch and Louis Pasteur in the 1870s, microbiologists have typically studied groups of bacteria rather than individuals. Much of this was out of necessity: The technology didn’t exist to allow scientists to do much more with single cells than peer at them through a microscope. Besides, if the bacteria were all identical, then there seemingly wasn’t a need to study every cell. An individual cell deposited on a plate of nutrient-rich jelly would divide and divide until it formed a visible colony of cells, all clones of the original cell. All the bacteria in this colony could be expected to show the same behaviors, physiology and physical appearance — the same phenotype — when placed in identical environments. By and large, they did.
The development of antibiotics in the 1940s revealed a curious anomaly, however. In many cases, antibiotics didn’t annihilate all the bacteria, even in groups of cells that were fully susceptible to the killing power of antibiotics. The surviving cells were considered “persistent.” They just hunkered down and waited out the chemical barrage of penicillin or similar drugs. Initially, scientists thought that persisters might come from a genetically distinctive subpopulation that grew more slowly even before the antibiotic treatments. But when microbiologists looked for genes that could predict which cells would become persisters, they were disappointed.
“There was no such [distinct persistent] subpopulation,” said Laurence van Melderen, a microbiologist at the Free University of Brussels in Belgium. “In every population, you will find some persisters if you look for them.” For scientists, this posed a major quandary: How could identical bacteria have such radically different behaviors?
By the late 1970s, researchers had identified one possible answer. Scientists at the University of California, Berkeley showed that random chance alone could lead to different behaviors even in genetically identical cells. Bacteria with whiplike flagella can swim in a straight line (known as “running”) or lurch in random directions. Swimming cells spend much of their time tumbling about, actively sampling their environment. But to move toward higher concentrations of nutrients and away from toxins and predators, bacteria must use a direct run. When they can no longer sense a gradient, they return to tumbling.
Berkeley microbiologists studying E. coli found that each cell stopped swimming and started tumbling at a different concentration of various chemical attractants, including aspartate and L-serine. Even after considering random statistical variations and any influence from unlikely spontaneous mutations during the experiment, the researchers couldn’t account for the cells’ marked and persistent individual differences in running and tumbling. That mystery, according to Thierry Emonet, a biophysicist at Yale, was “a big deal.”
The study appeared during the heyday of the idea that a single gene made a single protein, which would subsequently elicit a consistent behavior when all the cells were in the same environment. After a century of experimentation on batches of bacteria, scientists were accustomed to slight collective deviations in “identical” traits, but their data still tended to cluster tightly around a mean. The Berkeley scientists, in contrast, found that sensitivity to the attractants was smeared out over a broad range of concentrations, not clustered around a single mean. Their paper challenged the general assumption by showing substantial cell-to-cell variation in swimming behavior among the individual bacteria. No longer could phenotypic heterogeneity be shrugged off as a quirk of the bacterial response to antibiotics.
Although the researchers knew that this individuality resulted both from how tightly each cell regulated tumbling and from its response to L-serine, quantifying this variation in specific cells was more challenging. In 2002, glowing E. coli changed all of that.

The cloned E. coli bacteria growing in this laboratory culture glow with colors from two fluorescent proteins they express. Their colors differ because even though the cells are genetically identical, they are functionally individuals: Stochastic noise in their gene expression makes them produce different amounts of the proteins.
The biophysicist Michael Elowitz, now at the California Institute of Technology, inserted two fluorescent genes — one yellow, one cyan — into specimens of E. coli. The fluorescent genes were under the control of the exact same machinery, so prevailing wisdom held that the bacteria would glow a uniform green, a constant mixture of the yellow and cyan.
Yet they didn’t. Elowitz and his colleagues found that the ratio of yellow and cyan fluorescence varied from cell to cell, proving that gene expression varied among cells in the same environment. The team described that variation precisely in a 2002 Science paper. This work, van Melderen says, sparked a renaissance in the study of phenotypic heterogeneity.
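The logic of the dual-reporter experiment can be sketched with a toy model. This is not Elowitz's quantitative noise decomposition; it simply assumes each cell has a shared "extrinsic" factor scaling both genes, while each reporter's protein count also fluctuates independently ("intrinsic" noise, modeled here as Poisson counts with made-up magnitudes):

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 10_000

# Extrinsic noise: cell-wide factors (size, ribosomes, global machinery)
# that affect both reporters in a given cell equally.
extrinsic = rng.lognormal(mean=0.0, sigma=0.3, size=n_cells)

# Intrinsic noise: each reporter's expression fluctuates independently,
# modeled as Poisson protein counts around the same mean burden.
mean_expression = 100.0
yfp = rng.poisson(mean_expression * extrinsic)
cfp = rng.poisson(mean_expression * extrinsic)

ratio = yfp / np.maximum(cfp, 1)

# Identical genes under identical regulation, yet the yellow/cyan ratio
# scatters around 1 -- so individual cells show different colors.
print(f"mean ratio {ratio.mean():.2f}, std {ratio.std():.2f}")
```

In the ratio, the shared extrinsic factor cancels, so the remaining spread is exactly the cell-internal randomness that makes clonal cells individuals.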
Selection of Diversity
Advances in microscopy and microfluidics allowed researchers to build rapidly on Elowitz’s 2002 discovery. Two particular cellular behaviors — chemotaxis, or navigation along a chemical gradient, and the microbial stress response — figured prominently in their experiments. That’s because both of these responses, which are easily measured in a lab, allow cells to respond to a changing environment, according to Jessica Lee, a microbiology fellow at Global Viral who studied bacterial individuality as a postdoc in the lab of Chris Marx at the University of Idaho.
Take chemotaxis. If bacteria are moving toward something they like, they swim more and tumble less. But the point at which they make this switch varies from individual to individual, as Berkeley scientists discovered 40 years ago. Subsequent experiments revealed the existence of a family of chemotaxis proteins, such as one called CheY; the more copies of these proteins bacteria carried, the more likely they were to tumble instead of swim. Even without any environmental pressures affecting protein production, some bacteria may randomly have more molecules like CheY at any given time. Lee, Emonet and other researchers hypothesize that this innate variability lets a population of bacteria hedge its bets about the optimal amount of chemotaxis proteins for dealing with inevitable environmental changes.
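A crude one-dimensional run-and-tumble race makes the bet-hedging point concrete. Everything below is invented for illustration (the course, the tumble-rate range standing in for CheY copy number, the gradient bias); it is not the Stocker lab's maze, but it shows how identical rules plus per-cell parameter variation yield very different finish times:

```python
import numpy as np

rng = np.random.default_rng(3)

def time_to_finish(tumble_rate, course_length=200, max_steps=20_000):
    """1-D run-and-tumble toward food at x = course_length.
    A cell heading up the gradient tumbles at a fraction of its base
    rate; heading down, it tumbles more often (chemotactic bias)."""
    x, direction = 0, 1
    for step in range(max_steps):
        x += direction
        if x >= course_length:
            return step
        rate = tumble_rate * (0.5 if direction > 0 else 2.0)
        if rng.random() < min(rate, 1.0):
            direction = int(rng.choice([-1, 1]))
    return None

# Genetically "identical" cells, but each carries a different amount of
# tumble-promoting protein -- a stand-in for CheY copy-number variation.
tumble_rates = rng.uniform(0.02, 0.4, size=100)
finish_times = [time_to_finish(r) for r in tumble_rates]
finishers = [t for t in finish_times if t is not None]

print(f"{len(finishers)} of 100 cells finished; "
      f"fastest {min(finishers)} steps, slowest {max(finishers)} steps")
```

No cell has a "winner" genotype: the spread in finish times comes entirely from nongenetic variation in a single behavioral parameter.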
Lee spent several years studying this bet-hedging behavior in the plant-dwelling bacterium Methylobacterium extorquens. Plants release oxygen as a byproduct of photosynthesis, but some plants also release methanol (wood alcohol). As its name suggests, M. extorquens can use this methanol as food, but the first step involves transforming the chemical into formaldehyde — the pungent chemical that works as a preservative because it is toxic to bacteria. M. extorquens bacteria protect themselves by breaking down the formaldehyde into a less toxic metabolite as quickly as possible. That’s how the bacteria are essentially able to “not pickle themselves,” Lee quipped.

Because methanol isn’t always available and the metabolic machinery for thoroughly breaking down formaldehyde costs a lot of energy to produce, M. extorquens mostly doesn’t bother making the needed enzymes until the alcohol is actually present. But then the bacteria face a dilemma. When they start to break down methanol, the essential enzymes aren’t yet being produced at full capacity, so the soaring buildup of formaldehyde can kill the cells. Managing formaldehyde concentrations is life or death.
What Lee and Marx found, however, was that individual cells had different sensitivities to formaldehyde concentrations. As the scientists described in a paper posted earlier this year on the preprint server biorxiv.org, some bacteria continued to grow in the face of formaldehyde concentrations that killed most of their compatriots, even though all the cells were genetic clones.
“The only way we could explain it was that possibly the bacteria we thought were completely identical were in fact behaving in a not identical way,” Lee said. Something in the physiology of this formaldehyde-tolerant subpopulation — the scientists still don’t know what — allows it to survive and thrive in the presence of a deadly chemical. It’s the perfect example of a bet-hedging strategy, Lee says.
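The bet-hedging payoff Lee describes can be captured in a deterministic toy model. The numbers below are invented (they are not Lee and Marx's measurements): the population doubles each generation, except for a slow-growing tolerant minority, and a periodic formaldehyde pulse kills every non-tolerant cell:

```python
def grow(hedge_fraction, generations=100, pulse_every=10):
    """Toy bet-hedging model with illustrative parameters. Fast cells
    double each generation; the tolerant fraction grows at 1.5x but
    survives periodic toxin pulses that kill every fast cell."""
    pop = 1000.0
    for gen in range(1, generations + 1):
        tolerant = pop * hedge_fraction
        fast = pop - tolerant
        if gen % pulse_every == 0:
            fast = 0.0                           # pulse: fast growers die
        pop = min(2.0 * fast + 1.5 * tolerant,   # growth after the pulse
                  1e12)                          # crude carrying capacity
    return pop

no_hedge = grow(0.0)   # everyone grows fast -- wiped out by the first pulse
hedged = grow(0.1)     # 10% tolerant -- slower, but someone always survives

print(f"no hedging: {no_hedge:.0f} cells; 10% hedging: {hedged:.0f} cells")
```

Sacrificing some average growth to keep a tolerant minority is what lets the clonal population persist through conditions that would otherwise exterminate it.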

[[So the next time someone tells you that the emergence of resistance to antibiotics in hospitals proves evolution, you will know what to answer…]]
But this heterogeneity might have a significance that goes beyond improving the odds of survival for some members of a bacterial community. Scientists have also discovered hints that bacterial individuality could have contributed to the evolution of multicellular organisms.
For example, experiments by the biophysicist Teun Vissers at the University of Edinburgh revealed that E. coli clones vary in their ability to stick to surfaces. The bet-hedging explanation for these differences is that because some cells may survive when others get washed away, the bacterial community as a whole benefits.
Yet the microbial ecologist Martin Ackermann at ETH Zurich highlights an additional hypothesis: His own work with Salmonella and other organisms has shown that when groups of identical cells diversify, they can divide up some of their tasks and start to specialize in certain processes.
“A benefit emerges through some interaction between the subpopulations. I think division of labor is a much more precise term” for the situation, Ackermann said. Evolutionary theorists often cite the division of labor and subsequent specialization of tasks among collections of single-celled organisms as a likely major factor driving the emergence of multicellularity.
The crucial question is: What is making these bacteria into distinct individuals if it isn’t their genetics? What is the source of this variation? Researchers are still searching for answers, but it is clear that this individuality isn’t simply the result of noise in the system. Random factors may figure into it, but specific mechanisms also somehow seem to be impressing cell-to-cell differences across bacterial populations.
Rego’s work on the tuberculosis bacterium Mycobacterium tuberculosis and a related species showed how some differences can arise during cell division. When a bacterium divides, it doesn’t produce two identical daughter cells. Instead, as the cell grows and elongates during the prelude to division, it must synthesize additional cellular material. Because this material tends to be concentrated on one side of the original cell, one daughter cell inherits newer parts than the other. This lopsidedness is especially pronounced in bacteria like M. tuberculosis. Rego was able to find a gene responsible for nearly all of this asymmetry, and when she manipulated it to make the two daughter cells more even, she eliminated nearly all the heterogeneity in the bacteria’s responses. This result suggests that the bacteria’s individuality is an adaptive advantage.
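How lopsided inheritance generates heterogeneity can be sketched with a toy lineage model. This is illustrative only, not Rego's measurements on mycobacteria: each cell carries some "old" material, makes a unit of new material before dividing, and hands a fixed (possibly unequal) share of the total to each daughter:

```python
import statistics

def grow_population(asymmetry, generations=8):
    """Toy asymmetric-division model. `asymmetry` is the share of a
    dividing cell's material that goes to one daughter (0.5 = even).
    Returns the per-cell amounts of inherited material."""
    cells = [1.0]  # amount of old material in the founding cell
    for _ in range(generations):
        next_gen = []
        for old in cells:
            total = old + 1.0  # new material synthesized before division
            next_gen.append(total * asymmetry)
            next_gen.append(total * (1.0 - asymmetry))
        cells = next_gen
    return cells

uneven = grow_population(asymmetry=0.8)  # lopsided, mycobacteria-like
even = grow_population(asymmetry=0.5)    # engineered to divide evenly

# Lopsided inheritance spreads the clonal population out; perfectly even
# division collapses every cell onto the same composition.
print(f"uneven division: spread {statistics.pstdev(uneven):.2f}")
print(f"even division:   spread {statistics.pstdev(even):.2f}")
```

This mirrors Rego's manipulation: forcing divisions to be even removes the inherited differences, and with them the population's heterogeneity.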
These recent advances in understanding the origins and functions of bacterial individuality still don’t completely explain the paradox that such nongenetic benefits can be maintained over billions of years of evolution. The secret to the maintenance of this heterogeneity, scientists suspect, is not in the traits themselves but rather in how these traits are regulated at the cellular level. Many genes essential to life are tightly controlled, since too little or too much activity means certain death. Natural selection may be indifferent to the regulation of other traits and may even allow for greater survival of populations that have higher variability. Phenotypic heterogeneity seems to fall into this second category. Having some organisms grow more slowly may seem to be a biological dead end, but if these same cells can weather an antibiotic storm, tolerance for a wider variation in growth rates may be a good thing. “In biology, you never have a single cell doing something. You have a group of cells,” Emonet said. “The diversity will affect the average performance of the group.”
Back in Zurich, in Salek and Carrara’s microbial racecourse, these advantages can be seen in those bacteria that race across the finish line and those that barely make it out of the starting gate. Far from being billions of identical clones, bacteria can display remarkable differences, even when they all share the same DNA. And it’s only by watching these microscopic dramas unfold over time that scientists have come to understand the diversity inherent in even the most identical populations.
“It’s changed our view of microorganisms,” van Melderen said. “Bacteria and other microorganisms are probably not as simple as we used to think. This phenotypic heterogeneity adds a level of complexity to every process.”


Wednesday, September 4, 2019

In early February 1906 the volcano Mt. Vesuvius in Naples, Italy, began erupting, and throughout the following weeks Clemens's writings and dictations compared what was happening around him to the Vesuvius eruption. Three days after meeting with Tchaykovsky, he dictated the following passage for his autobiography on March 30, 1906:
Three days ago a neighbor brought the celebrated Russian revolutionist, Tchaykoffsky, to call upon me. He is grizzled, and shows age -- as to exteriors -- but he has a Vesuvius, inside, which is a strong and active volcano yet. He is so full of belief in the ultimate and almost immediate triumph of the revolution and the destruction of the fiendish autocracy, that he almost made me believe and hope with him. He has come over here expecting to arouse a conflagration of noble sympathy in our vast nation of eighty millions of happy and enthusiastic freemen. But honesty obliged me to pour some cold water down his crater. I told him what I believed to be true: that the McKinleys and the Roosevelts and the multimillionaire disciples of Jay Gould -- that man who in his brief life rotted the commercial morals of this nation and left them stinking when he died -- have quite completely transformed our people from a nation with pretty high and respectable ideals to just the opposite of that; that our people have no ideals now that are worthy of consideration; that our Christianity which we have always been so proud of -- not to say so vain of -- is now nothing but a shell, a sham, a hypocrisy; that we have lost our ancient sympathy with oppressed peoples struggling for life and liberty; that when we are not coldly indifferent to such things we sneer at them, and that the sneer is about the only expression the newspapers and the nation deal in with regard to such things; that his mass meetings would not be attended by people entitled to call themselves representative Americans, even if they may call themselves Americans at all; that his audiences will be composed of foreigners who have suffered so recently that they have not yet had time to become Americanized and their hearts turned to stone in their breasts; that these audiences will be drawn from the ranks of the poor, not those of the rich; that they will give and give freely, but they will give from their poverty and the money result will not be large.
I said that when our windy and flamboyant President conceived the idea, a year ago, of advertising himself to the world as the new Angel of Peace, and set himself the task of bringing about the peace between Russia and Japan and had the misfortune to accomplish his misbegotten purpose, no one in all this nation except Doctor Seaman and myself uttered a public protest against this folly of follies. That at that time I believed that that fatal peace had postponed the Russian nation's imminent liberation from its age-long chains indefinitely -- probably for centuries; that I believed at that time that Roosevelt had given the Russian revolution its death-blow, and that I am of that opinion yet.
I will mention here, in parenthesis, that I came across Doctor Seaman last night for the first time in my life, and found that his opinion also remains to-day as he expressed it at the time that that infamous peace was consummated.
Tchaykoffsky said that my talk depressed him profoundly, and that he hoped I was wrong.
I said I hoped the same.
He said, "Why, from this very nation of yours came a mighty contribution only two or three months ago, and it made us all glad in Russia. You raised two millions of dollars in a breath -- in a moment, as it were -- and sent that contribution, that most noble and generous contribution, to suffering Russia. Does not that modify your opinion?"
"No," I said, "it doesn't. That money came not from Americans, it came from Jews; much of it from rich Jews, but the most of it from Russian and Polish Jews on the East Side -- that is to say, it came from the very poor. The Jew has always been benevolent. Suffering can always move a Jew's heart and tax his pocket to the limit. He will be at your mass meetings. But if you find any Americans there put them in a glass case and exhibit them. It will be worth fifty cents a head to go and look at that show and try to believe in it."
From Autobiography of Mark Twain: Volume 1.

The real danger today is not that computers are smarter than us, but that we think computers are smarter than us
GARY SMITH

AUGUST 30, 2019

In 1997, Deep Blue defeated Garry Kasparov, the reigning world chess champion. In 2011, Watson defeated Ken Jennings and Brad Rutter, the world’s best Jeopardy players. In 2016, AlphaGo defeated Lee Sedol, one of the world’s best Go players, and in 2017 it defeated Ke Jie, the world’s top-ranked player. Also in 2017, DeepMind unleashed AlphaZero, which trounced the world-champion computer programs at chess, Go, and shogi.
If humans are no longer worthy opponents, then perhaps computers have moved so far beyond our intelligence that we should rely on their superior intelligence to make our important decisions.

Nope.

Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don’t understand to make decisions for us.

Consider the challenges identified by Stanford computer science professor Terry Winograd, which have come to be known as Winograd schemas. For example, what does the word “it” refer to in this sentence?

I can’t cut that tree down with that axe; it is too [thick/small].

If the bracketed word is “thick,” then it refers to the tree; if the bracketed word is “small,” then it refers to the axe. Sentences like these are understood immediately by humans but are very difficult for computers because they do not have the real-world experience to place words in context.
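To make the difficulty concrete, here is a minimal sketch (not any real resolver, just a hypothetical "nearest noun" heuristic) showing how a system without real-world knowledge goes wrong on this very pair:

```python
# A naive "most recent noun" heuristic for resolving "it" in the
# Winograd pair above. It picks the noun closest to the pronoun,
# so it answers "axe" regardless of the adjective -- right for
# "small", wrong for "thick".

def resolve_it_naive(sentence):
    """Resolve 'it' to the nearest preceding noun (a toy heuristic)."""
    nouns = {"tree", "axe"}
    words = [w.strip(";,.").lower() for w in sentence.split()]
    it_pos = words.index("it")
    for w in reversed(words[:it_pos]):
        if w in nouns:
            return w
    return None

thick = "I can't cut that tree down with that axe; it is too thick."
small = "I can't cut that tree down with that axe; it is too small."

print(resolve_it_naive(thick))  # axe -- wrong: a human knows "thick" fits the tree
print(resolve_it_naive(small))  # axe -- right, but only by accident
```

Any surface rule of this kind gets one variant wrong, because telling them apart requires knowing that trees are thick and axes are small.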

Paraphrasing Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, how can machines take over the world when they can’t even figure out what “it” refers to in a simple sentence?

When we see a tree, we know it is a tree. We might compare it to other trees and think about the similarities and differences between fruit trees and maple trees. We might recollect the smells wafting from some trees. We would not be surprised to see a squirrel run up a pine or a bird fly out of a dogwood. We might remember planting a tree and watching it grow year by year. We might remember cutting down a tree or watching a tree being cut down.

A computer does none of this. It can spellcheck the word “tree,” count the number of times the word is used in a story, and retrieve sentences that contain the word. But computers do not understand what trees are in any relevant sense. They are like Nigel Richards, who memorized the French Scrabble dictionary and has won the French-language Scrabble World Championship twice, even though he doesn’t know the meaning of the French words he spells.

To demonstrate the dangers of relying on computer algorithms to make real-world decisions, consider an investigation of risk factors for fatal heart attacks.

I made up some household spending data for 1,000 imaginary people, of whom half had suffered heart attacks and half had not. For each such person, I used a random number generator to create fictitious data in 100 spending categories.

These data were entirely random. There were no real people, no real spending, and no real heart attacks. It was just a bunch of random numbers. But the thing about random numbers is that coincidental patterns inevitably appear.

In 10 flips of a fair coin, there is a 46% chance of a streak of four or more heads in a row or four or more tails in a row. If that does not happen, heads and tails might alternate several times in a row. Or there might be two heads and a tail, followed by two more heads and a tail. In any event, some pattern will appear and it will be absolutely meaningless.
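The 46% figure is easy to check by simulation. This short sketch (a Monte Carlo estimate, not the article's own calculation) flips ten fair coins many times and counts how often a run of four or more identical outcomes appears:

```python
# Monte Carlo check of the claim that 10 flips of a fair coin give
# roughly a 46% chance of four or more heads in a row or four or
# more tails in a row.
import random

def has_run_of_4(flips):
    """Return True if the sequence contains a run of length >= 4."""
    run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        if run >= 4:
            return True
    return False

random.seed(0)
trials = 100_000
hits = sum(
    has_run_of_4([random.randint(0, 1) for _ in range(10)])
    for _ in range(trials)
)
print(f"estimated probability: {hits / trials:.3f}")  # close to 0.46
```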

In the same way, some coincidental patterns were bound to turn up in my random spending numbers. As it turned out, by luck alone, the imaginary people who had not suffered heart attacks “spent” more money on small appliances and also on household paper products.
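The experiment described above is easy to recreate in spirit. The sketch below makes two assumptions the article does not spell out (normally distributed "spending" and a plain two-sample t statistic): it generates purely random data for 1,000 imaginary people and then watches a handful of categories come up "significant" by luck alone:

```python
# Purely random "spending" data for 1,000 imaginary people in 100
# categories, half labeled as heart-attack cases. With 100 separate
# comparisons at the 5% level, roughly five categories are expected
# to show a "significant" group difference by chance.
import random
import statistics

random.seed(42)
n_people, n_categories = 1000, 100
# First 500 people are the "heart attack" group, last 500 the controls.
spending = [[random.gauss(100, 15) for _ in range(n_categories)]
            for _ in range(n_people)]

spurious = []
for c in range(n_categories):
    cases = [spending[i][c] for i in range(500)]
    controls = [spending[i][c] for i in range(500, 1000)]
    # Two-sample t statistic (equal group sizes of 500).
    diff = statistics.mean(cases) - statistics.mean(controls)
    se = ((statistics.variance(cases) + statistics.variance(controls)) / 500) ** 0.5
    t = diff / se
    if abs(t) > 1.96:  # nominally "significant at the 5% level"
        spurious.append(c)

print(f"{len(spurious)} categories look 'significant' by chance: {spurious}")
```

None of these "findings" mean anything; they are the statistical noise the article warns about.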

When we see these results, we should scoff and recognize that the patterns are meaningless coincidences. How could small appliances and household paper products prevent heart attacks?

A computer, by contrast, would take the results seriously because a computer has no idea what heart attacks, small appliances, and household paper products are. If the computer algorithm is hidden inside a black box, where we do not know how the result was attained, we would not have an opportunity to scoff.

Nonetheless, businesses and governments all over the world now trust computers to make decisions based on coincidental statistical patterns just like these. One company, for example, decided that it would make more online sales if it changed the background color of the web page shown to British customers from blue to teal. Why? Because the company had tried several different colors in nearly 100 countries, and any given color was certain to fare better in some countries than in others, even if random numbers had been analyzed instead of sales numbers. The change was made, and sales went down.
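The web-page color story is the same multiple-comparisons trap in miniature. In this hypothetical sketch (invented colors and country names, "sales" that are pure random numbers), every color still "wins" somewhere:

```python
# Try several background colors across 100 countries where "sales"
# are nothing but random noise. Some color is certain to look like
# a winner in some country, even though no color is really better.
import random

random.seed(7)
colors = ["blue", "teal", "green", "orange", "red"]
countries = [f"country_{i}" for i in range(100)]

# For each country, pick the color with the highest random "sales".
best = {
    country: max(colors, key=lambda c: random.gauss(1000, 50))
    for country in countries
}

wins = {c: sum(1 for b in best.values() if b == c) for c in colors}
print(wins)  # each color "wins" in roughly 20 countries, by luck alone
print("Countries where teal 'won':",
      [k for k, v in best.items() if v == "teal"][:5])
```

Picking the winner after the fact and rolling it out guarantees disappointment, since the "effect" was never real.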

Many marketing decisions, medical diagnoses, and stock trades are now made by computers. Loan applications and job applications are evaluated by computers. Election campaigns are run by computers, including Hillary Clinton’s disastrous 2016 presidential campaign. If the algorithms are hidden inside black boxes, with no human supervision, then it is up to the computers to decide whether the discovered patterns make sense, and they are utterly incapable of doing so because they do not understand anything about the real world.

Computers are not intelligent in any meaningful sense of the word, and it is hazardous to rely on them to make important decisions for us. The real danger today is not that computers are smarter than us, but that we think computers are smarter than us.


Tuesday, September 3, 2019


MACHINES ARE NOT REALLY LEARNING


Bill Gates? Talking about AI again? Let’s recall a back-to-the-future comment he made to college students in 2004: “If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.” At the financial website The Motley Fool, Rex Moore exhumed Gates’ fifteen-year-old quote, newsworthy because it trumpets the already endlessly broadcast trope about the success of machine learning on the internet. Says Moore, “Fast-forward to today, and of course someone has figured it out. This special kind of artificial intelligence is called machine learning.”

Exciting. Except for one thing. Machine learning has been around since AI’s inception in the 1950s. Alan Turing, who essentially inaugurated “AI” (though he didn’t coin the phrase) in his much-read 1950 paper “Computing Machinery and Intelligence,” speculated that computer programs might improve their performance (output) by crunching more and more data (input). In an earlier 1948 report, “Intelligent Machinery,” he had called such systems “unorganized machines,” a not-so-subtle admission that organized machines—that is, computer programs—do only what they’re programmed to do.
At issue was his now well-known conversational test of intelligence, the eponymous Turing test. How to pass it? Said Turing: figure out how to program computers to learn from experience, like people. They had to be a bit more disorganized, like British geniuses. In 1943, earlier even than Turing, neuroscientist Warren McCulloch and logician Walter Pitts proposed simple input-output models of neurons. In the next decade, Frank Rosenblatt at Cornell generalized the model to adjust over time, to “learn.” The model entities were called “perceptrons.”

Researchers likened them to neurons, which they weren’t, though they did mimic human neurons’ behavior in a simple sense. Multiple incoming signals, meeting a threshold, resulted in a discrete output. They “fired” after excitation, like synapses more or less, like brain cells. Only, they didn’t reproduce the fantastic complexity of real neurons. And they didn’t really do much computation, either. Call them inspiring.

Still, perceptrons were the world’s first neural networks so they take their place in the annals of computer science. Simple networks with one layer, they could be made to perform simple logical operations by outputting discrete values after meeting a mathematical threshold set for their input. Cool.
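For the curious, a Rosenblatt-style perceptron really is that small. The sketch below (a textbook reconstruction, not Rosenblatt's own formulation) learns the linearly separable AND function with the classic threshold-and-update rule:

```python
# A minimal single-layer perceptron: two weights, a bias, a hard
# threshold output, and the classic error-driven update rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train on (inputs, target) pairs; returns learned weights and bias."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print(f"{x1} AND {x2} -> {out} (want {target})")
```

AND works because a single line can separate the 1 case from the 0 cases; no amount of training lets this one-layer model learn something like XOR, which is where the critique below comes in.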

Not really. The late MIT AI pioneer Marvin Minsky (with Seymour Papert, in their 1969 book Perceptrons) pointed out, in a devastating critique, how powerless the little perceptrons were for addressing any interesting problems in AI. So-called connectionist (the old term for machine learning) approaches to AI were effectively abandoned in favor of logical and rule-based approaches (which also failed, but later). Turing had worried that machines would be too rigid. Too organized. But the early machine learning attempts were hopeless.

Fast forward not to 2019 but to 1986. Machine learning research received a major shot of steroids: backpropagation. No more bare perceptrons. Input to a neural network now propagates through new, hidden layers in more complicated networks, and the error at the output “propagates back” through the network, adjusting the learning weights until they settle on an optimal value (actually, a local optimal value, but this is TMI). Backpropagation-powered artificial neural networks (ANNs) could learn, in a sense.
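The mechanics of "propagating back" fit in a few lines. This sketch (made-up weights and input, chosen only for illustration) builds a tiny 2-2-1 sigmoid network, computes the backpropagated gradients by hand, and checks one of them against a finite-difference estimate:

```python
# Backpropagation on a tiny 2-2-1 sigmoid network: the output error
# is propagated back through the hidden layer to produce weight
# gradients, and one gradient is verified numerically.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy parameters: W1 is the 2x2 hidden layer, W2 the 1x2 output layer.
W1 = [[0.15, 0.20], [0.25, 0.30]]
W2 = [0.40, 0.45]
x, target = [0.05, 0.10], 0.99

def forward(W1, W2, x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1])
    return h, y

def loss(W1, W2, x, target):
    _, y = forward(W1, W2, x)
    return 0.5 * (y - target) ** 2

# Backward pass: delta terms flow from the output toward the input.
h, y = forward(W1, W2, x)
delta_out = (y - target) * y * (1 - y)                     # dL/d(output net)
grad_W2 = [delta_out * h[j] for j in range(2)]             # output-layer grads
delta_h = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
grad_W1 = [[delta_h[j] * x[i] for i in range(2)] for j in range(2)]

# Sanity check: finite-difference estimate for one hidden weight.
eps = 1e-6
W1[0][0] += eps
up = loss(W1, W2, x, target)
W1[0][0] -= 2 * eps
down = loss(W1, W2, x, target)
W1[0][0] += eps
numeric = (up - down) / (2 * eps)
print(f"backprop grad: {grad_W1[0][0]:.8f}  numeric grad: {numeric:.8f}")
```

Training is just this gradient computation repeated, with each weight nudged a small step against its gradient.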

American (’merica!) psychologists David Rumelhart and James McClelland showed how the new systems could simulate one aspect of a toddler’s education: forming the past tenses of English verbs, like start, walk, or run. Simply adding “ed” won’t cut it (unless you’re fine with “runned” or “goed”), so the system had to ferret out the irregular endings and then perform well enough on the task to convince researchers that it had converged on the rule, which it more or less did. Machine learning was resuscitated, brought back to life, released from detention. Only, as promising as backpropagation seemed, the modern era of machine learning hadn’t arrived by the late 1980s.
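The baseline the network had to beat is the blanket rule itself. A toy sketch (hypothetical verb list, not the researchers' data) shows why just tacking on "-ed" produces exactly the over-regularized forms mentioned above:

```python
# The past-tense task as a naive baseline: the blanket "add -ed"
# rule gives "goed" for "go", which is the kind of over-regularized
# form a learning system has to discover exceptions to.
IRREGULAR = {"go": "went", "run": "ran", "sing": "sang"}

def naive_past(verb):
    """The blanket rule: just tack on '-ed'."""
    return verb + "ed"

def learned_past(verb):
    """Apply a memorized exception if one exists, else the rule."""
    return IRREGULAR.get(verb, verb + "ed")

for verb in ["start", "walk", "go", "sing"]:
    print(f"{verb}: naive={naive_past(verb)!r}, learned={learned_past(verb)!r}")
```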

It arrived shortly after Bill Gates’ comments to college kids in 2004. No coincidence, it showed up when the web took off, on the heels of terabytes of user-generated content from so-called web 2.0 companies like Google and Facebook. “Big data” entered the lexicon as volumes of unstructured (i.e., text and image) data powered machine learning approaches to AI. AI itself was just then experiencing another of its notorious winters, this one following the 2001 NASDAQ crash known as the bursting of the “dot-com bubble,” or simply as the debacle. No one trusted the web after 2001 (almost no one), because it had failed to deliver on its promises, tanked on Wall Street, and was still stuck in what we now snigger and sniff at: the “web 1.0” ideas that launched web sites for companies and tried to sell advertising—we are so different now!

It turns out, anyway, that with loads of data, machine learning can perform admirably on well-defined tasks like spam detection, and somewhat admirably on other tasks like language translation (Google Translate is still worse than you might assume). It can also personalize content in newsfeeds, or recommendations on Netflix, Spotify, Amazon, and the like. Image recognition is getting better. Cars, especially in Silicon Valley, can even sort of drive themselves. All thanks to big data and machine learning.

The learning algorithms in vogue today are still ANNs. They are now convolutional (“convoluted,” in at least a wordplay nod to Turing’s disorganization). But they are organized: typically, layers stacked upon other layers. No matter. Fast forward, along with our Motley Fool reporter, to 2019, and we have arrived at Deep Learning. We can trace a line from the lowly perceptron (poor thing) to backpropagation to Deep Learning. This is progress, no doubt, but probably not in the sense intended by Rex Moore’s bold font.

Take a deep breath, look around at the best AI around. Chances are it’s Deep Learning, true. Chances are, too, that it does some one thing. It recognizes human faces, say, but not fish or bicycles. It personalizes your newsfeeds for you, say, but you keep wondering why you’re seeing all the same stuff (or maybe not, if you’re online too much). It drives your car, if you’re rich and friends with Elon Musk anyway, but you deliberately avoid pondering what would happen if the system goes into collision avoidance mode when a deer walks onto the road, followed by a much smaller toddler on a tricycle.

What gives? Letdown alert. Spoiler coming. Deep Learning could be called Narrow Learning, if the redundancy didn’t discourage it, because machine learning is always narrow. It is always what pundits dismissively call Narrow AI. For that matter, “learning” for a machine isn’t learning for you or me. We (what’s-the-word?) forget, for instance, which should make one suspicious that machines that learn but can’t forget are ever really learning. Also, we tend to learn by incorporating information into a deeper web of knowledge that sits front and center in the mystery of mind. Knowledge is different from information because it gets linked together into a general picture of the world.

General. Not narrow. Get off the computer. Go outside (I’ll join you). Go talk to a neighbor or a friend, or pop over to the coffee shop and chat up the barista. You’ve just done something that Deep Learning can’t do. Worse, it can’t even learn to do it, because that’s not a narrow, well-defined problem to solve. (Now pat yourself on the back. You’re amazing!)

Bill Gates could have been channeling Deep Learning, anticipating it just up ahead, when he spoke in 2004. More likely, he was talking about machines that can really learn. That would be worth 10 Microsofts (time makes fools of us all. He should have said “Googles”). But we don’t have learning systems that really learn. Wait: check that. We have seven billion of them, give or take. Our question is not how to make more out of computer code, but how to make computer code do something more, for us.