Sunday, November 10, 2019

‘Noise’ in the Brain Encodes Surprisingly Important Signals
Activity in the visual cortex and other sensory areas is dominated by signals about body movements, down to little tics and twitches. Scientists are now rethinking how they study and conceive of perception.

[[Everything is much, much more complicated than we thought.]]

New research shows that perception can be deeply informed by movement: The visual system may constantly process information about even the smallest gestures.
Lenka Šimečková for Quanta Magazine



November 7, 2019


At every moment, neurons whisper, shout, sputter and sing, filling the brain with a dizzying cacophony of voices. Yet many of those voices don’t seem to be saying anything meaningful at all. They register as habitual echoes of noise, not signal; as static, not discourse.
Since scientists first became capable of recording from single neurons 60 years ago, they’ve known that brain activity is highly variable, even when there’s no obvious reason it should be. An animal’s neural response to a repeated stimulus changes considerably from trial to trial, fluctuating in a way that seems almost random. Even in the total absence of a stimulus, “you would just record spontaneous activity, and you would see that it seemed to have a mind of its own,” according to David McCormick, a neuroscientist at the University of Oregon.
“This gave rise to a view of the brain as being somehow either very [noisy], or using some type of high-level statistics to get over this noisiness,” he said.
But over the past decade, that view has changed. It’s become apparent that this purported randomness and variability relates not just to messiness in the brain’s neural mechanics, but also to behavioral states like arousal and stress — states that seem to affect perception and decision-making as well. There’s more to all the noise, scientists realized, than they had assumed.
Now, by analyzing both the neural activity and the behavior of mice in unprecedented detail, researchers have revealed a surprising explanation for much of that variability: Throughout the brain, even in low-level sensory areas like the visual cortex, neurons encode information about far more than their immediately relevant task. They also babble about whatever other behaviors the animal happens to be engaging in, even trivial ones — the twitch of a whisker, the flick of a hind leg.
Those simple gestures aren’t just present in the neural activity. They dominate it.
The findings are changing how scientists interpret brain activity, and how they design experiments to study it.
An Elegant but Outdated Story
Until about a decade ago, most neuroscientists used anesthetized animals in their experiments. This practice enabled them to make incredible strides toward a better understanding of the brain, but it also “led to a highly distorted view of neuronal processing,” said David Kleinfeld, a neurophysicist at the University of California, San Diego. It was a particular drawback in vision research, because the levels of anesthesia involved were often high enough to undercut confidence that the groggy (or unconscious) animals were subjectively seeing anything at all. At the very least, the anesthesia stripped away any real framework or context that might influence the visual process.
Even in the dark, the neurons of the visual cortex continue to chatter.
As a result, a certain picture of how vision works emerged, one in which signals from the eyes moved through passive sets of neurological filters that created increasingly specialized and complex representations of the environment. Only later in the process did that visual representation get integrated with information from other senses and other brain areas. “It’s tempting to see primary sensory areas as cameras that give an unadulterated view of what’s happening in the world,” said Anne Churchland, a neuroscientist at Cold Spring Harbor Laboratory in New York — but however elegant this model of vision might be, a mountain of evidence has proved it to be far too simplistic.
The brain’s immense interconnectivity fills it with feedback loops that let higher cortical areas talk to lower ones. Over decades of study, researchers have gradually found each region of the brain to be less specialized than labels might suggest: The visual cortex of people who are blind or visually impaired, for instance, can process auditory and tactile information. The somatosensory cortex, and not the motor cortex, was recently found to play a significant role in the learning of motor-based skills. And broader forces like attention, expectation or motivation can affect how people perceive.
But things got even stranger in 2010. New experimental methods, developed just a few years earlier, made it possible to record from the neurons of mice running on a treadmill or ball. The neuroscientists Michael Stryker and Cris Niell, then both at the University of California, San Francisco, initially decided to use those techniques to compare the visual activity in sleeping, anesthetized mice with that of mice which were either running or stationary. (Niell has since moved to the University of Oregon.)

Cris Niell, a neuroscientist at the University of Oregon, uses a home-built imaging system and spherical treadmill to measure the neural activity of mice as they run.
Courtesy of University of Oregon
They quickly realized, though, that there was a more interesting comparison to make. “We went into it wanting to study what’s different in awake versus anesthetized,” Niell said, “but what was surprising to us was how different various aspects of the awake state were when the animal was moving versus not.” They found that when a mouse ran, its responses to visual stimuli got larger: The neurons’ firing rates doubled — which was astonishing because in prior research, the firing rates hadn’t gone up that much even in humans and monkeys required to complete tasks that demanded intense visual attention. These effects tapered off as soon as the mouse stopped running.
“That was pretty surprising, because people had thought of primary visual cortex as being a purely sensory area,” Churchland said.
At the time, in keeping with conclusions drawn from previous work on attention and motivation, Stryker and Niell thought that the result might reflect some general behavioral shift in the mice, a transition from an inactive mode to an active one as the animals started to move. Studies in other labs over the next few years confirmed that general arousal alters neural responses considerably. In 2015, for instance, McCormick and his colleagues reported that a mouse’s engagement with a task predicted how well it would perform. That same year, the neuroscientist Jessica Cardin and her team at Yale University began to disentangle the effects of both running and arousal on activity in the visual cortex.
But all these experiments were limited both in how many neurons they could record from at once and in how many behavioral variables they could account for. Experiments that investigated changes in brain activity based exclusively on whether the mouse was running or still, or whether its pupils were dilated as a proxy for arousal, could explain only bits and pieces of neural variability. A more complete explanation was still wanting.
Now, by taking a more global approach to both animal behavior and brain activity, a handful of research groups have provided exactly that.
Finding Sense in the Swirling Kaleidoscope
Kenneth Harris and Matteo Carandini, neuroscientists at University College London, started with a different goal: to characterize the structure of the spontaneous activity in the visual cortex that occurs even when the rodent gets no visual stimulation. They and other members of their joint team at the university’s Cortexlab recorded from 10,000 neurons at once in mice that were free to act as they wanted — to run, sniff, groom themselves, glance around, move their whiskers, flatten their ears and so on — in the dark.

Marius Pachitariu, a neuroscientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Virginia.
Matt Staley/Janelia Research Campus
The researchers found that even though the animals couldn’t see anything, the activity in their visual cortex was both extensive and shockingly multidimensional, meaning that it was encoding a great deal of information. Not only were the neurons chatting, but “there were many conversations going on at the same time,” wrote Marius Pachitariu, a neuroscientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Virginia, and a former postdoctoral researcher in the Cortexlab.
At first, the scientists weren’t sure what to make of it. So they tried to explain the “conversations” by relating the brain activity to exactly what the mice were doing at each moment. They took videos of the face of each mouse, analyzing its motion frame by frame not just for single facets of behavior like running speed or pupil diameter, but for anything at all that might explain the neural variability, down to the tiniest tics and twitches.
Those minor behaviors turned out to account for at least one-third of the ongoing activity in the mice’s visual cortex — activity that had previously all been chalked up to mere noise. It was roughly comparable to the activity that an actual visual input would typically cause. “We have this part of the brain called the visual cortex, and you would think that what it does is vision,” Harris said. “And it does do that. But at least as much of its activity has nothing to do with vision.”
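The article doesn't spell out the analysis, but the underlying idea (reduce the face video to a few dominant motion components, then ask how much of the neural variance a linear readout of those components can explain) can be sketched with synthetic data. Everything below, from the array shapes to the ridge penalty, is an illustrative assumption rather than the published pipeline.

```python
# Minimal sketch, with made-up data, of predicting neural activity from
# facial motion: extract the top motion components of a face video via SVD,
# then fit a ridge regression from those components to the recorded neurons.
import numpy as np

rng = np.random.default_rng(0)
T, P, N, K = 2000, 400, 100, 10   # timepoints, face pixels, neurons, components

motion = rng.standard_normal((T, P))             # frame-to-frame "motion energy"
U, S, Vt = np.linalg.svd(motion, full_matrices=False)
behavior = U[:, :K] * S[:K]                       # top-K motion components over time

W_true = rng.standard_normal((K, N))
neural = behavior @ W_true + rng.standard_normal((T, N))  # behavior-driven + noise

lam = 1.0                                         # ridge penalty (arbitrary choice)
W = np.linalg.solve(behavior.T @ behavior + lam * np.eye(K),
                    behavior.T @ neural)
pred = behavior @ W

# With synthetic data the printed fraction depends entirely on the scales
# chosen above; in the real experiment it came out to roughly one-third.
explained = 1 - np.var(neural - pred) / np.var(neural)
print(f"fraction of neural variance explained by face motion: {explained:.2f}")
```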
“If we look at the mouse as a whole,” McCormick said, “all of a sudden, that general activity, that swirling kaleidoscope of activity in the brain, starts to make sense.” (He and his lab reported similar findings in a recent preprint.) The activity didn’t just reflect the general state of the mouse’s alertness or arousal, or the fact that the animal was moving. The visual cortex knew exactly what the animal was doing, down to the details of its individual movements.

When researchers studied the spontaneous activity of more than 10,000 neurons in the visual cortex of mice, they were surprised to find it to be rich with information about the animals’ seemingly irrelevant movements. In these images from the experiment, neurons flash when they send signals. Each panel monitors a different depth of tissue in the cortex.
Courtesy of Marius Pachitariu and Carsen Stringer
In fact, this wasn’t unique to the visual cortex. “Everywhere in the brain, it’s the same story. The movement signals are just really unmistakable,” said Matt Smear, a systems neuroscientist at the University of Oregon who did not participate in the study. It cements the idea that “certain intuitive notions about the brain are probably wrong.”
Even more striking, the same neurons that encoded sensory or other functional information were the ones explicitly encoding these motor signals. “All of a sudden we’re saying, ‘Wait — maybe the brain isn’t noisy. Maybe it’s actually much more precise than we thought,’” McCormick said.
The Cortexlab’s findings, which were published in Science in April, demonstrated that neuroscientists need to rethink how they interpret animals’ neural responses. (Niell pointed out that a significant amount of variation observed in human functional MRI studies can also be explained by random fidgets, rather than noise or anything related to the task under investigation.) “For instance, every time the mouse was running, we saw this signal in the neurons right before the mouse would start,” said Carsen Stringer, a postdoctoral researcher at Janelia who did her doctoral work in the Cortexlab. “And we thought, ‘Maybe that’s just the mouse thinking to run.’ But really, it’s the mouse whisking [rhythmically moving its whiskers back and forth] right before it’s running.”
But then what are these signals doing, and why do they matter?
Perception as Action
A system in which each neuron channels information about multiple activities at once might seem unworkably convoluted, but the Cortexlab team found that the brain can cope with all that data more easily than we might think. Their analysis revealed that when a stimulus is shown, the incoming information simply gets added on top of the movement-related signals that were already present. In a single neuron, those signals appear jumbled together, impossible to tell apart. But because different neurons weight the stimulus and the ongoing behaviors differently, recording from enough neurons at once makes it possible to tease vision and movement apart.
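A toy simulation makes the point concrete. The setup below is an assumption for illustration, not the paper's analysis: each simulated neuron sums a stimulus signal and a movement signal with its own private weights, so the two are confounded in any single neuron, yet a simple population readout recovers both.

```python
# Toy illustration of demixing: stimulus and movement superpose linearly in
# every neuron, but the population as a whole still separates them.
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 200                       # timepoints, neurons
stim = rng.standard_normal(T)         # visual stimulus strength over time
move = rng.standard_normal(T)         # e.g. whisking intensity over time

w_stim = rng.standard_normal(N)       # per-neuron stimulus weights
w_move = rng.standard_normal(N)       # per-neuron movement weights
rates = (np.outer(stim, w_stim) + np.outer(move, w_move)
         + 0.5 * rng.standard_normal((T, N)))   # mixed signals plus noise

# Recover both latent signals from the population by least squares.
# (In a real analysis the weights would themselves be estimated from
# training data; here they are assumed known to keep the sketch short.)
W = np.column_stack([w_stim, w_move])            # (N, 2)
latents, *_ = np.linalg.lstsq(W, rates.T, rcond=None)
stim_hat, move_hat = latents

print("corr(stim, stim_hat) =", np.corrcoef(stim, stim_hat)[0, 1].round(3))
print("corr(move, move_hat) =", np.corrcoef(move, move_hat)[0, 1].round(3))
```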
The movement signals therefore aren’t hurting the animal’s ability to process sensory information about the outside world. But scientists still need to explore exactly how those signals might help the brain work better. At its core, this discovery reflects the fact that fundamentally, the brain evolved for action — that animals have brains to let them move around, and that “perception isn’t just the external input,” Stringer said. “It’s modulated at least to some extent by what you’re doing at any given time.”

Carsen Stringer, a neuroscientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Virginia.
Matt Staley/Janelia Research Campus
Sensory information represents only a small part of what’s needed to truly perceive the environment. “You need to take into account movement, your body relative to the world, in order to figure out what’s actually out there,” Niell said.
“We used to think that the brain analyzed all these things separately and then somehow bound them together,” McCormick said. “Well, we’re starting to learn that the brain does that mixing of multisensory and movement binding [earlier] than we previously imagined.”
It’s necessary to know how the body is moving to contextualize and interpret incoming sensory information. If you’re running, the visual world flies by, and the visual cortex needs to know that this is driven by your movement. If you’re circling around a monument, the visual cortex needs to know that you didn’t see 20 different statues, but the same statue from 20 different angles. “Where is the stability among this storm of variance?” McCormick said. “That’s why I think that this recent work is very interesting and very important, because we’re starting to see where the stability is.”
“Our brains aren’t just thinking in our heads. Our brains are interacting with our bodies and the way that we move through the world,” Niell said. “You think, ‘Oh, I’m just thinking,’ or ‘I’m just seeing.’ You don’t think about the fact that your body is playing a role in that.” So it makes sense that a mouse might need to integrate movement signals early on (though exactly how, say, the movement of a whisker helps with vision remains unclear).
In fact, it may go beyond that. The integration might help facilitate what’s known as active sensing — whereby an animal actively coordinates its movement with the information it wants to sense and find. Smear is currently studying this in olfaction: He and his colleagues have found that mice synchronize many of their movements to their sniffing rhythm (their primary means of receiving odor information) with surprising precision.
Even more intriguing, such coordination might help with learning.
A Greater Purpose
Harris, Stringer and their colleagues posit that this integration of sensory and motor information creates a mental scaffolding within which reinforcement learning can happen: If a particular action and stimulus in combination correlates with a noteworthy outcome — say, receiving a reward, or finding oneself in danger — this kind of dual neural coding could help the animal predict that outcome next time and act accordingly.

In each of these brain scans from a mouse, a single movement or behavior — such as licking, whisker twitching or pupil dilation — accounts for large variations in neural activity throughout the animal’s cortex. Even areas of the brain with specialized functions process neural signals about a variety of physical behaviors or states.
Courtesy of Anne Churchland
Churchland suggests that movement signals might help the animal learn in even more concrete ways. In September, Churchland, Simon Musall, a postdoctoral fellow in her laboratory, and their colleagues published the results of an experiment in which they monitored brain activity in mice that were performing a task: The animals had to grab a handle to start a trial, and lick one way or another to report a decision. Even though they were focused on their goal, their neural activity continued to erupt into a chorus of voices dedicated to trivial movements seemingly unrelated to the task at hand. “Most of the activity we found in the brain had nothing to do with the decision,” Churchland said. “It was reflecting the movements the animal was making.”
According to Niell, who was not involved in the study, “what was really striking was that even when an animal was doing that kind of a task, which we think of as purely vision, all of these [unrelated] movements … come in to dominate the signal.”
Churchland and her team also found that as each mouse was trained, its movements locked in more on the task. At first, for instance, the mouse would move its whiskers randomly, but as it learned, it would whisk at specific times — when the stimulus was presented, and when the reward was delivered — even though the act of whisking itself had nothing to do with the reward or the training involved in the task.
Churchland speculates that animals may use these types of signals to help them make decisions — that “maybe for them, this is part of the decision-making process,” she said. “Maybe for animals, as for humans, part of what it means to think and make decisions is to move.” She likened it to the ritual a baseball player might perform before stepping up to bat. “It makes me wonder if these fidgets … serve a greater purpose.”
Our brains aren’t just thinking in our heads. Our brains are interacting with our bodies and the way that we move through the world.
Cris Niell, University of Oregon
“People tend to think of movements as being separate from cognition — as interfering with cognition, even,” Churchland said. “We think that, given this work, it might be time to consider an alternative point of view, that at least for some subjects, movement is really a part of the cognition.”
Granted, at this point, “some subjects” mostly means rodents. Scientists are now conducting other experiments to test whether this kind of integration happens as pervasively in primates (including humans) and in a similar way.
Nevertheless, researchers agree that the work heralds a shift in how they conduct their experiments on perception — namely, it demonstrates that they need to start paying more attention to behavior, too.
A Surrender of Control
Until now, neuroscientists have taken a more reductionist route. Much of what we know about neural activity came first from recordings of anesthetized animals, and later from animals moving about in a constricted way. The experiments themselves have also been limited. Niell describes it as the “eye exam” model. “When you go to the optometrist, you sit there and say: horizontal, vertical, better, worse, E, A, T,” he said. But that kind of abstract exercise may not be representative of what we typically do in life. “Our brains did not evolve for us to just sit there and passively watch something without doing anything about it.”
Even the new work by Harris’ and Churchland’s teams involved keeping the mouse’s head still to enable readings from the brain. “If the brain is dominated by movement signals when the animal can’t move [its head], then what’s it going to look like when the animal can move?” Smear said.
Scientists are now advocating for additional approaches that get closer to studying animals performing a natural behavior, one that’s intuitive without training. Of course, other challenges come with that: It’s more difficult to determine cause and effect in less controlled settings, for instance.
Even so, Niell has begun studying mice that use their eyesight to catch and eat crickets. “It’s something that the mouse’s brain is wired up to do,” he said, “and also a task where they are moving, and so therefore they have to integrate their movement with what’s out there.” He and his colleagues have now found that certain types of already-discovered neurons serve precise behavioral roles in the capture of prey.
“What we think of as being weird or unusual signals,” Niell said, “might start to make sense when you actually let an animal do what it would normally do, and not train the mice to be like little humans.”
McCormick agreed. “We had a very impoverished view of the brain,” he said. “I wouldn’t say we have a perfect view now, but … we have a richer view that needs to grow.”
Editor’s note (added Nov. 8, 2019): The described work conducted in the Cortexlab and by Churchland’s group received funding from the Simons Foundation, which also funds this editorially independent magazine.


How can we test a Theory of Everything?
Sabine Hossenfelder
[[Extremely clear and informative.]]
That’s a question I get a lot in my public lectures. In the past decade, physicists have put forward some speculations that cannot be experimentally ruled out, ever, because you can always move predictions to energies higher than what we have tested so far. Supersymmetry is an example of a theory that is untestable in this particular way. After I explain this, I am frequently asked if it is possible to test a theory of everything, or whether such theories are just entirely unscientific.

It’s a good question. But before we get to the answer, I have to tell you exactly what physicists mean by “theory of everything”, so we’re on the same page. For all we currently know, the world is held together by four fundamental forces: the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces, such as the Van der Waals forces that hold molecules together or the forces exerted by muscles, derive from those four fundamental forces.

The electromagnetic force and the strong and the weak nuclear force are combined in the standard model of particle physics. These forces have in common that they have quantum properties. But the gravitational force stands apart from the three other forces because it does not have quantum properties. That’s a problem, as I have explained in an earlier video. A theory that solves the problem of the missing quantum behavior of gravity is called “quantum gravity”. That’s not the same as a theory of everything.

If you combine the three forces in the standard model to only one force from which you can derive the standard model, that is called a “Grand Unified Theory” or GUT for short. That’s not a theory of everything either.

If you have a theory from which you can derive gravity and the three forces of the standard model, that’s called a “Theory of Everything” or TOE for short. So, a theory of everything is both a theory of quantum gravity and a grand unified theory.

The name is somewhat misleading. Such a theory of everything would of course *not* explain everything. That’s because for most purposes it would be entirely impractical to use it. It would be impractical for the same reason it’s impractical to use the standard model to explain chemical reactions, not to mention human behavior. The description of large objects in terms of their fundamental constituents does not actually give us much insight into what the large objects do. A theory of everything, therefore, may explain everything in principle, but still not do so in practice.

The other problem with the name “theory of everything” is that we can never be sure we won’t, at some point in the future, discover something that the theory does not explain. Maybe there is indeed a fifth fundamental force? Who knows.

So, what physicists call a theory of everything should really be called “a theory of everything we know so far, at least in principle.”

The best known example of a theory of everything is string theory. There are a few other approaches. Alain Connes, for example, has an approach based on non-commutative geometry. Asymptotically safe gravity may include a grand unification and therefore counts as a theory of everything. Though, for reasons I don’t quite understand, physicists do not normally discuss asymptotically safe gravity as a candidate for a theory of everything. If you know why, please leave a comment.

These are the large programs. Then there are a few small programs, like Garrett Lisi’s E8 theory, or Xiao-Gang Wen’s idea that the world is really made of qubits, or Felix Finster’s causal fermion systems.

So, are these theories testable?

Yes, they are testable. The reason is that any theory which solves the problem with quantum gravity must make predictions that deviate from general relativity. And those predictions, this is really important, cannot be arbitrarily moved to higher and higher energies. We know that because combining general relativity with the standard model, without quantizing gravity, just stops working near an energy known as the Planck energy.
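For orientation (the numbers are standard, though not quoted in the text): the Planck energy is built out of Planck’s constant, the speed of light and Newton’s constant,

$$
E_{\mathrm{Planck}} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2\times10^{19}\ \mathrm{GeV} \approx 2\ \mathrm{GJ},
$$

about fifteen orders of magnitude above the collision energies of the Large Hadron Collider. Predictions of quantum gravity cannot be pushed above this scale, because that is where the unquantized combination of theories itself stops working.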

These approaches to a theory of everything normally also make other predictions. For example they often come with a story about what happened in the early universe, which can have consequences that are still observable today. In some cases they result in subtle symmetry violations that can be measurable in particle physics experiments. The details about this differ from one theory to the next.

But what you really wanted to know, I guess, is whether these tests are practically possible any time soon. I do think it is realistically possible that we will be able to see these deviations from general relativity in the next 50 years or so. About the other tests that rely on models for the early universe or symmetry violations, I’m not so sure, because for these it is again possible to move the predictions and then claim that we need bigger and better experiments to see them.

Is there any good reason to think that such a theory of everything is correct in the first place? No. There is good reason to think that we need a theory of quantum gravity, because without that the current theories are just inconsistent. But there is no reason to think that the forces of the standard model have to be unified, or that all the forces ultimately derive from one common explanation. It would be nice, but maybe that’s just not how the universe works.


Tuesday, November 5, 2019

Stanford professor who changed America with just one study was also a liar


[[I thought I couldn't be shocked at revelations of scientists cheating, but I was wrong. How many other studies are also corrupt? Studies that changed forever how we understand something?]]
Stanford psychology and law professor David Rosenhan could transfix an audience in a crowded lecture hall with just a few words.
“What is abnormality?” he would ask undergraduate students, his deep and resonant golden voice building and booming. “What are we here for? Some things will be black … Others will be white. But be prepared for shades of gray.”
Rosenhan would know. His own life, as I would later find out, was filled with shades of gray.
He wasn’t particularly attractive — the word often used to describe him was “balding” — but there was something magnetic, even seductive, about him, especially in front of a crowd.
His students called it a gift, describing his ability to “rivet a group of two to three hundred students with dynamic lectures that are full of feeling and poetry.” One student recalled how Rosenhan opened one of his lectures while sitting on a student’s lap — as a way to test the class’ reaction to abnormal behavior.
His research work was also groundbreaking. In 1973, Rosenhan published the paper “On Being Sane in Insane Places” in the prestigious journal Science, and it was a sensation. The study, in which eight healthy volunteers went undercover as “pseudopatients” in 12 psychiatric hospitals across the country, discovered harrowing conditions that led to national outrage. His findings helped expedite the widespread closure of psychiatric institutions across the country, changing mental-health care in the US forever.
Fifty years later, I tried to find out how Rosenhan had convinced his subjects to go undercover as psychiatric patients and discovered a whole lot more. Yes, Rosenhan had charm. He had charisma. He had chutzpah to spare. And, as I eventually uncovered, he was also not what he appeared to be.
I stumbled across Rosenhan and his study six years ago while on a book tour for my memoir “Brain on Fire,” which chronicled my experiences with a dangerous misdiagnosis, when doctors believed that my autoimmune disorder was a serious mental illness. After my talk, a psychologist and researcher suggested that I could be considered a “modern-day pseudopatient” from Rosenhan’s famous study.
Professor Rosenhan distinguished himself at Stanford with his 1973 paper “On Being Sane in Insane Places” in the prestigious journal Science, research that helped spur the closure of mental hospitals nationwide.
Alamy Stock Photo
Reading the study for the first time that night in my hotel room, I was struck by its opening words: “If sanity and insanity exist, how shall we know them?” Psychiatry had been struggling throughout its history to answer this question, and Rosenhan’s paper, with its rigorous data collection, exposed the deep limitations in our attempt to answer it.
Rosenhan’s eight healthy pseudopatients allegedly each followed the same script to gain admittance to psychiatric hospitals around the country. They each told doctors that they heard voices that said, “Thud, empty, hollow.” Based on this one symptom alone, the study claimed, all of the pseudopatients were diagnosed with a mental illness — mostly schizophrenia.
And once they were labeled with a mental illness, it became impossible to prove otherwise. All eight were kept hospitalized for an average of 19 days — with the longest staying an unimaginable 52. They each left “against medical advice,” meaning the doctors believed that they were too sick to leave. A total of 2,100 pills — serious psychiatric drugs — were reportedly prescribed to these otherwise healthy individuals.
At the time, the collective American imagination was deeply suspicious of psychiatry and its institutions. It was the era of Ken Kesey’s “One Flew Over the Cuckoo’s Nest” and movies like “Shock Corridor” and “The Snake Pit.” Rosenhan — who was both an insider who studied abnormal psychology, and an outsider who was a psychologist rather than a psychiatrist — was the perfect person to pull back the curtain on psychiatry’s secrets.
His paper had an outsized impact and sparked further movements in the mental-health world — helping to debunk Freudian psychoanalysis, medicalizing psychiatry and pushing for mental-health patients’ rights, to name just a few. His conclusions were “like a sword plunged into the heart of psychiatry,” an article in the Journal of Nervous and Mental Diseases observed three decades later.
When I read his paper, I recognized my own experience with misdiagnosis in those nine pages. I saw the power of labels, the feeling of depersonalization as a psychiatric patient, the hopelessness.
I wanted to learn more about the study, about the participants who, at the urging of the charismatic Rosenhan, would put their lives on the line to volunteer for such a treacherous assignment.
The 1973 edition of Science that featured Rosenhan’s groundbreaking work.
So I was surprised to find that so little had been written about his study beyond the piece in Science. None of the pseudopatients had gone public and Rosenhan, sadly, had died in 2012. Instead, I started talking to the people who’d been closest to him during his life and career to try and understand the professor who had accomplished such a coup.
“He had a twinkle,” Florence Keller, a close friend recalled.
“If [a] party were dead, he would walk in and all of a sudden the party would come alive,” his son Jack Rosenhan recounted.
“I think he always made people feel special,” his research assistant Nancy Horn said.
Disappointed that I could never meet him myself, I was thrilled when Keller introduced me to a treasure trove of documents he had left behind — including many never-before-seen documents: his unpublished book, diary entries and reams of correspondence.
The first pseudopatient — “David Lurie” in his notes — was very clearly Rosenhan himself.
“It all started out as a dare,” Rosenhan told a local newspaper. “I was teaching psychology at Swarthmore College, and my students were saying that the course was too conceptual and abstract. So I said, ‘OK, if you really want to know what mental patients are like, become mental patients.’ ”
Soon after that, Rosenhan went undercover for nine days at Haverford State Hospital in Haverford, Pa., in February 1969. His diary and book describe a host of indignities: soiled bathrooms without doors, inedible food, sheer boredom and ennui, rank disregard by the staff and doctors. Rosenhan even witnessed an attendant sexually assault one of the more disturbed patients. The only time when Rosenhan was truly “seen” as a human by the staff was when an attendant mistook him for a doctor.
The experience was harrowing. After nine days he pushed for a release and made sure that his undergraduate students — who were planning to follow him as undercover patients into the hospital — would not be allowed to go. Colleagues described a shaken, changed man after his experience.
I dug deeper. If his own students were forbidden from pursuing the experiment after this dismaying event, who were the others who had willingly followed in Rosenhan’s footsteps? Why did they put their mental health — even their lives — on the line for this experiment?
The further I explored, the greater my concerns. With the exception of one paper defending “On Being Sane in Insane Places,” Rosenhan never again published any studies on psychiatric hospitalization, even though this subject made him an international success.
He had also landed a lucrative book deal and had even written eight chapters, well over a hundred pages. But then Rosenhan suddenly refused to turn over the manuscript. Seven years later, his publisher sued him to recover his advance. Why would he have given up on the subject that made him famous?
I also started to uncover serious inconsistencies between the documents I had found and the paper Rosenhan published in Science. For example, Rosenhan’s medical record from his undercover stay at Haverford revealed that he had not, as he wrote in his published paper, exhibited only the one symptom of “thud, empty, hollow.” Instead, he had told doctors that he put a “copper pot” up to his ears to drown out the noises and that he had been suicidal. This was a far more severe — and legitimately concerning — description of his illness than the one he portrayed in his paper.
Meanwhile, I looked for the seven other pseudopatients and spent the next months of my life chasing ghosts. I hunted down rumors, pursuing one dead end after the next. I even hired a private detective, who got no further than I had.
After years of searching, I found only one pseudopatient who participated in the study and whose experience matched that of Rosenhan: Bill Underwood, who’d been a Stanford graduate student at the time.
The only other participant I discovered, Harry Lando, had a vastly different take. Lando had summed up his 19-day hospitalization at the US Public Health Service Hospital in San Francisco in one word: “positive.”
Even though he too was misdiagnosed with schizophrenia, Lando felt it was a healing environment that helped people get better.
“The hospital seemed to have a calming effect. Someone might come in agitated and then fairly quickly they would tend to calm down. It was a benign environment,” Lando, now a psychology professor at the University of Minnesota, recalled in an interview.
But instead of incorporating Lando into the study, Rosenhan dropped him from it.
Lando felt it was pretty obvious what had happened, and I agree: His data — the overall positive experience of his hospitalization — didn’t match Rosenhan’s thesis that institutions are uncaring, ineffective and even harmful places, and so they were discarded.
“Rosenhan was interested in diagnosis, and that’s fine, but you’ve got to respect and accept the data, even if the data are not supportive of your preconceptions,” Lando told me.
Rosenhan, I began to realize, may have been the ultimate unreliable narrator. And I believe it’s possible some of the other pseudopatients he mentioned in his study never existed at all.
As a result, I am now seriously questioning a study I had once admired and had originally planned to celebrate. In my new book “The Great Pretender” (Grand Central Publishing), out this week, I paint the picture of a brilliant but flawed psychologist who is likely also a fabulist.
It wasn’t what I intended, and I feel conflicted about my findings. I have so enjoyed dropping into Rosenhan’s world and getting to know his mind and his loved ones — but I have no doubt that his creation, one that touches all of our lives, is flimsy at best. And it’s time for the world to see the study for what it really is.
"The Great Pretender"
It’s not the first time a paper published by an esteemed journal has been called into serious question, or even exposed as an outright lie. There was Dutch social psychologist Diederik Stapel, once renowned for finding a correlation between filthier train platforms and racist views at a Utrecht station, who is now infamous for inventing data.
Philip Zimbardo, the architect of the famous prison study, which took place in Stanford’s basement in 1971, has also come under fire. Zimbardo and his researchers recruited students and assigned them roles as “inmates” or “guards.” Guards abused inmates; inmates reacted as real prisoners. A 2018 Medium piece tracked down the original participants in that study and exposed serious issues — including the fact that Zimbardo had coached the guards into behaving aggressively.
Psychologist Peter Gray told me that he sees the work of researchers such as Zimbardo and Rosenhan as prime examples of studies that “fit our biases … There is a kind of desire to expose the problems of society but in the process cut corners or even make up data.”
This may explain Rosenhan. He saw real problems in society: The country was warehousing very sick people in horror houses pretending to be hospitals, our diagnostic systems were flawed and psychiatrists in many ways had too much power — and very little substance. He saw how psychiatric labels degraded people and how doctors see patients through the prism of their mental illness. All of this was true. In many ways, it is still true.
But the problem is that scientific research needs to be sound. We cannot build progress on a rotten foundation.
In disregarding Lando’s data and inventing other facts, Rosenhan missed an opportunity to create something three-dimensional, something a bit messier but more honest. Instead, he helped perpetuate a dangerous half-truth.
And today, what we have is a mental-health crisis of epic proportions. Over 100,000 people with serious mental illnesses live on the streets, while we are chronically short of safe housing and hospital beds for the sickest among us.
Had Rosenhan been more measured in his treatment of the hospitals, had he included Lando’s data, there’s a chance a different dialogue, less extreme in its certainty, would have emerged from his study and maybe, just maybe, we’d be in a better place.
Susannah Cahalan is the author of “The Great Pretender” about famed psychology professor David Rosenhan, who she discovered while on a book tour for her memoir, “Brain on Fire.” The book chronicled her experiences with doctors who believed her autoimmune disorder was a serious mental illness.

Sunday, November 3, 2019


What’s Everything Made of?

The author is an assistant professor of philosophy at the California Institute of Technology, interested in the foundations of quantum mechanics, classical field theory, and quantum field theory.
Edited by Nigel Warburton
[[An excellent review article describing how little is settled concerning the composition of matter.]]
Long before philosophy and physics split into separate career paths, the natural philosophers of Ancient Greece speculated about the basic components from which all else is made. Plato entertained a theory on which everything on Earth is made from four fundamental particles. There are stable cube-shaped particles of earth, pointy and painful tetrahedron-shaped particles of fire, somewhat less pointy octahedron-shaped particles of air, and reasonably round icosahedron-shaped particles of water. Like the particles of contemporary physics, these particles could, Plato thought, be created and destroyed. For example, an eight-sided air particle could be created by combining two four-sided fire particles (as one might imagine occurring when a campfire dies out).
Our understanding of nature has come a long way since Plato. We have learned that much of our world is made of the various atoms compiled in the periodic table of elements. We have also learned that atoms themselves are built from more fundamental pieces.
Today, philosophers who are interested in figuring out what everything is made of look to contemporary physics for answers. But, finding answers in physics is not simply a matter of reading textbooks. Physicists deftly shift between different pictures of reality as it suits the task at hand. The textbooks are written to teach you how to use the mathematical tools of physics most effectively, not to tell you what things the equations are describing. It takes hard work to distil a story about what’s really happening in nature from the mathematics. This kind of research is considered ‘philosophy of physics’ when done by philosophers and ‘foundations of physics’ when done by physicists.
Physicists have developed an improvement on the periodic table called ‘the standard model’. The standard model is missing something very important (gravity) and it might turn out that the pieces it describes are made of yet more fundamental things (such as vibrating strings). That being said, the standard model is not going anywhere. Like Isaac Newton’s theory of gravity or James Clerk Maxwell’s theory of electrodynamics, we expect that the standard model will remain an important part of physics no matter what happens next.
Unfortunately, it’s not immediately clear what replaces the atoms of the periodic table in the standard model. Are the fundamental building blocks of reality quantum particles, quantum fields, or some combination of the two? Before tackling this difficult question, let us consider the debate between particles and fields in the context of a classical (non-quantum) theory: Maxwell’s theory of electrodynamics.

Albert Einstein was led to his 1905 special theory of relativity by engaging in foundational research on electrodynamics. After developing special relativity, Einstein entered into a debate with Walther Ritz about the right way to formulate and understand classical electrodynamics. According to this theory, two electrons placed near one another will fly apart in opposite directions. They both have negative charge, and they will thus repel one another.
Ritz thought of this as an interaction directly between the two electrons – each one pushing the other, even though they are not touching. This interaction acts across the gap in space separating the two electrons. It also acts across a gap in time. Being precise, each electron responds to the other’s past behaviour (not its current state).
Einstein, who was averse to such action-at-a-distance, understood this interaction differently. For him, there are more players on the scene than just the particles. There are also fields. Each electron produces an electromagnetic field that extends throughout space. The electrons move away from one another not because they are directly interacting with each other across a gap, but because each one is feeling a force from the other’s field.
Do electrons feel forces from their own electromagnetic fields? Either answer leads to trouble. First, suppose the answer is yes. The electromagnetic field of an electron gets stronger as you get closer to the electron. If you think of the electron as a little ball, each piece of that ball would feel an enormous outward force from the very strong electromagnetic field at its location. It should explode. Henri Poincaré conjectured that there might be some other forces resisting this self-repulsion and holding the electron together – now called ‘Poincaré stresses’. If you think of the electron as point-size, the problem is worse. The field and the force would be infinite at the electron’s location.
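To put numbers on this (a standard textbook calculation, not specific to the essay): the field of a point charge $e$ at distance $r$ has magnitude

$$
|\mathbf{E}(r)| = \frac{1}{4\pi\varepsilon_0}\,\frac{e}{r^2},
$$

which grows without bound as $r \to 0$. Correspondingly, the energy stored in the field of a spherical shell of charge with radius $a$,

$$
U = \frac{e^2}{8\pi\varepsilon_0\, a},
$$

diverges as the shell is shrunk down to a point.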
If the electron does not interact with itself, how can we explain the energy loss?
So, let us instead suppose that the electron does not feel the field it produces. The problem here is that there is evidence that the electron is aware of its field. Charged particles such as electrons produce electromagnetic waves when they are accelerated. That takes energy. Indeed, we can observe electrons lose energy as they produce these waves. If electrons interact with their own fields, we can correctly calculate the rate at which they lose energy by examining the way these waves interact with the electron as they pass through it. But, if electrons don’t interact with their own fields, then it’s not clear why they would lose any energy at all.
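For reference, the rate of that energy loss is given by the Larmor formula of classical electrodynamics: a charge $e$ undergoing acceleration $a$ radiates power

$$
P = \frac{e^2 a^2}{6\pi\varepsilon_0 c^3}.
$$

This radiated power is measured in the laboratory, so any formulation of the theory has to account for it somehow.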
In Ritz’s all-particles no-fields proposal, the electron will not interact with its own field because there is no such field for it to interact with. Each electron feels forces only from other particles. But, if the electron does not interact with itself, how can we explain the energy loss? Whether you believe, like Einstein, that there are both particles and fields, or you believe, like Ritz, that there are only particles, you face a problem of self-interaction.
Ritz and Einstein staked out two sides of a three-sided debate. There is a third option: perhaps there are no particles, just fields. In 1844, Michael Faraday explored this option in an unpublished manuscript and a short published ‘speculation’. One could imagine describing the physics of hard, solid bodies of various shapes and sizes colliding and bouncing off one another. However, when two charged particles (such as electrons) interact by electric attraction or repulsion, they do not actually touch one another. Each just reacts to the other’s electromagnetic field. The sizes and shapes of the particles are thus irrelevant to the interaction, except in so much as they change the fields surrounding the particles. So, Faraday asked: ‘What real reason, then, is there for supposing that there is any such nucleus in a particle of matter?’ That is, why should we think that there is a hard core at the centre of a particle’s electromagnetic field? In modern terms, Faraday has been interpreted as proposing that we eliminate the particles and keep only the electromagnetic fields.
On 8 August, at the 2019 International Congress on Logic, Methodology and Philosophy of Science and Technology in Prague, I joined four other philosophers of physics for a debate – tersely titled ‘Particles, Fields, or Both?’ Mathias Frisch of the Leibniz University Hannover opened our session with a presentation of the debate between Einstein and Ritz (see his Aeon essay, ‘Why Things Happen’). Then, the remaining three speakers defended opposing views – updated versions of the positions held by Einstein, Ritz, and Faraday.
Our second speaker, Mario Hubert of Caltech, sought to rescue Einstein’s picture of point-size particles and fields from the problem of self-interaction. He discussed the current status of multiple ideas about how this might be done. One such idea came from Paul Dirac, a mathematical wizard who made tremendous contributions to early quantum physics. Dirac’s name appears in the part of the standard model that describes electrons.
In a 1938 paper, Dirac took a step back from quantum physics to study the problem of self-interaction in classical electrodynamics. He proposed a modification to the laws of electrodynamics, changing the way that fields exert forces on particles. For a point-size particle, his new equation eliminates any interaction of the particle with its own electromagnetic field, and includes a new term to mimic the kind of self-interaction that we actually observe – the kind that causes a particle to lose energy when it makes waves. However, the equation that Dirac proposed has some strange features. One oddity is ‘pre-acceleration’: a particle that you’re going to hit with a force might start moving before you hit it.
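In its non-relativistic limit, the equation Dirac proposed reduces to the Abraham-Lorentz equation (quoted here from standard textbooks for orientation), in which the self-force term depends on the rate of change of the acceleration:

$$
m\,\dot{\mathbf{v}} = \mathbf{F}_{\text{ext}} + m\,\tau\,\ddot{\mathbf{v}},
\qquad
\tau = \frac{e^2}{6\pi\varepsilon_0\, m c^3} \approx 6\times10^{-24}\ \mathrm{s}\ \text{for an electron}.
$$

The solutions that avoid runaway self-acceleration are exactly the ones that pre-accelerate: the acceleration at time $t$ depends on the external force at times up to roughly $\tau$ later.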
In the 1930s and ’40s, a different strategy was pursued by four notable physicists: Max Born (known for ‘the Born rule’ that tells you how to calculate probabilities in quantum physics), Leopold Infeld (who coauthored a popular book on modern physics with Einstein: The Evolution of Physics), Fritz Bopp (who was part of the German nuclear research programme during the Second World War and, after the war, cosigned a manifesto opposing nuclear weapons and advocating nuclear energy in West Germany), and Boris Podolsky (a coauthor of the paper that spurred Erwin Schrödinger to coin the term ‘entanglement’ and introduce his enigmatic cat). These physicists proposed ways of changing the laws that specify how particles produce electromagnetic fields so that the fields produced by point particles never become infinitely strong.
When you change these laws, you change a lot. As Hubert explained in his presentation, we don’t fully understand the consequences of these changes. In particular, it is not yet clear whether the Born-Infeld and Bopp-Podolsky proposals will be able to solve the self-interaction problem and make accurate predictions about the motions of particles.
You might feel that all of this talk of classical physics has gotten us very far off topic. Aren’t we supposed to be trying to understand what the standard model of quantum physics tells us about what everything is made of?
As in a time-travel movie, the future can influence the past
The part of the standard model that describes electrons and the electromagnetic field is called ‘quantum electrodynamics’, as it is the quantum version of classical electrodynamics. The foundations of the two subjects are closely linked. Here’s how Richard Feynman motivates a discussion of the modifications to classical electrodynamics made by Dirac, Born, Infeld, Bopp, and Podolsky in a chapter of his legendary lectures at Caltech:
There are difficulties associated with the ideas of Maxwell’s theory which are not solved by and not directly associated with quantum mechanics. You may say, ‘Perhaps there’s no use worrying about these difficulties. Since the quantum mechanics is going to change the laws of electrodynamics, we should wait to see what difficulties there are after the modification.’ However, when electromagnetism is joined to quantum mechanics, the difficulties remain. So it will not be a waste of our time now to look at what these difficulties are.
Indeed, Feynman thought these issues were of central importance. In the lecture that he gave upon receiving the Nobel Prize in 1965 for his work on quantum electrodynamics, he chose to spend much of his time discussing classical electrodynamics. In collaboration with his graduate advisor, John Wheeler (advisor to a number of other important figures, including Hugh Everett III, the inventor of the Many-Worlds interpretation of quantum mechanics, and Kip Thorne, a corecipient of the 2017 Nobel Prize for gravitational-wave detection), Feynman had proposed a radical reimagining of classical electrodynamics.
Wheeler and Feynman – like Ritz – do away with the electromagnetic field and keep only the particles. As I mentioned earlier, Ritz’s field-free theory has particles interact across gaps in space and time so that each particle responds to the past states of the others. In the Wheeler-Feynman theory, particles respond to both the past and the future behaviour of one another. As in a time-travel movie, the future can influence the past. That’s a wild idea, but it seems to work. In appropriate circumstances, this revision yields accurate predictions about the motions of particles without any true self-interaction.
In a talk titled ‘Why Field Theories are not Theories of Fields’, the third speaker in our debate, Dustin Lazarovici of the University of Lausanne, took the side of Ritz, Wheeler, and Feynman. In the action-at-a-distance theories put forward by these physicists, you can’t tell what a particle will do at a particular moment just by looking at what the other particles are doing at that moment. You also need to look at what they were doing in the past (and perhaps what they will do in the future). Lazarovici argued that the electromagnetic field is merely a useful mathematical bookkeeping device that encodes this information about the past and future, not a real thing out there in the world.
Lazarovici then moved from classical to quantum electrodynamics. Like many other philosophers of physics, he believes that standard formulations of quantum electrodynamics are unsatisfactory – in part because they don’t give a clear picture of what is happening in nature. As the final speaker, I took the remaining side of the debate, Faraday’s: there are no particles, only fields.
I was driven to this all-fields picture not by studying the self-interaction problem, but by two other considerations. First, I have found this picture helpful in understanding a property of the electron called ‘spin’. The standard lore in quantum physics is that the electron behaves in many ways like a spinning body but is not really spinning. It has spin but does not spin.
If you think of electrons as a field, then you can think of photons the same way
If the electron is point-size, of course it does not make sense to think of it as actually spinning. If the electron is instead thought of as a very small ball, there are concerns that it would have to rotate faster than the speed of light to account for the features that led us to use the word ‘spin’. This worry about faster-than-light rotation made the physicists who discovered spin in the 1920s uncomfortable about publishing their results.
If the electron is a sufficiently widely spread-out lump of energy and charge in the Dirac field, there is no need for faster-than-light motion. We can study the way that the energy and charge move to see if they flow in a circular way about some central axis – to see if the electron spins. It does.
The second consideration that led me to an all-fields picture was the realisation that we don’t have a way of treating the photon as a particle in quantum electrodynamics. Dirac invented an equation that describes the quantum behaviour of a single electron. But we have no similar equation for the photon.
If you think of electrons as particles, you’ll have to think of photons differently – either eliminating them (Lazarovici’s story) or treating them as a field (Hubert’s story). On the other hand, if you think of electrons as a field, then you can think of photons the same way. I see this consistency as a virtue of the all-fields picture.
As things stand, the three-sided debate between Einstein, Ritz and Faraday remains unresolved. We’ve certainly made progress, but we don’t have a definitive answer. It is not yet clear what classical and quantum electrodynamics are telling us about reality. Is everything made of particles, fields or both?
This question is not front and centre in contemporary physics research. Theoretical physicists generally think that we have a good-enough understanding of quantum electrodynamics to be getting on with, and now we need to work on developing new theories and finding ways to test them through experiments and observations.
That might be the path forward. However, sometimes progress in physics requires first backing up to reexamine, reinterpret and revise the theories that we already have. To do this kind of research, we need scholars who blend the roles of physicist and philosopher, as was done thousands of years ago in Ancient Greece.