Sunday, September 5, 2021

The Cosmological Principle seems to be false

 

New Evidence against the Standard Model of Cosmology

Sabine Hossenfelder


http://backreaction.blogspot.com/2021/09/new-evidence-against-standard-model-of.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+blogspot%2Fermku+%28Backreaction%29

 

[[I remember when I read Steven Weinberg's The First Three Minutes in the 70s, where he was careful to say that everything we know about the universe as a whole depends on the Cosmological Principle, and Stephen Hawking said we believe it on the grounds of humility [!]. And now there is significant evidence that it is false! Keep that in mind when you read the next “discovery” reported in the NYT.]]

 

Physicists believe they understand quite well how the universe works on large scales. There’s dark matter and there’s dark energy, and there’s the expansion of the universe that allows matter to cool and clump and form galaxies. The key assumption to this model for the universe is the cosmological principle, according to which the universe is approximately the same everywhere. But increasingly more observations show that the universe just isn’t the same everywhere. What are those observations? Why are they a problem? And what does it mean? That’s what we’ll talk about today.

 

Let’s begin with the cosmological principle, the idea that the universe looks the same everywhere. Well. Of course the universe does not look the same everywhere. There’s more matter under your feet than above your head and more matter in the Milky Way than in intergalactic space, and so on. Physicists have noticed that too, so the cosmological principle more precisely says that matter in the universe is equally distributed when you average over sufficiently large distances.

 

To see what this means, forget about matter for a moment and suppose you have a row of detectors and they measure, say, temperature. Each detector gives you a somewhat different temperature, but you can average over those detectors by taking a few of them at a time, let’s say 5, calculating the average value from the readings of those five detectors, and replacing the values of the individual detectors with that average. You can then ask how far away this averaged distribution is from one that’s the same everywhere. In this example it’s pretty close.

 

But suppose you have a different distribution, for example this one. If you average over sets of 5 detectors again, the result still does not look the same everywhere. Now, if you average over all detectors, then of course the average is the same everywhere. So if you want to know how close a distribution is to being uniform, you average it over increasingly large distances and ask beyond what distance it becomes very similar to just being the same everywhere.
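To make the averaging procedure concrete, here is a minimal sketch in Python (mine, not from the video; the toy detector readings and window sizes are invented for illustration). It coarse-grains a row of readings over increasingly large windows and reports how far the result is from being the same everywhere:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detector readings: a roughly uniform background plus one big clump.
readings = rng.normal(loc=20.0, scale=0.5, size=100)
readings[40:60] += 5.0  # an over-dense region

def coarse_grain(values, window):
    """Replace each block of `window` consecutive values with the block average."""
    n = len(values) // window * window            # drop any ragged tail
    return values[:n].reshape(-1, window).mean(axis=1)

for window in (1, 5, 20, 50):
    averaged = coarse_grain(readings, window)
    # Relative spread around the mean; zero would mean "the same everywhere".
    deviation = averaged.std() / averaged.mean()
    print(f"averaging window = {window:3d}  ->  relative deviation = {deviation:.3f}")
```

The clump only washes out once the averaging window is comparable to its size, which is exactly the question cosmologists ask about the real matter distribution.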

 

In cosmology we don’t want to average over temperatures, but we want to average over the density of matter. On short scales, which for cosmologists means something like the size of the Milky Way, matter is clearly not uniformly distributed. If we average over the whole universe, then the average is uniform, but that’s uninteresting. What we want to know is, if we average over increasingly large distances, at what distance does the distribution of matter become uniform to good accuracy?

 

Yes, good question. One can calculate this distance using the concordance model, which is the currently accepted standard model of cosmology. It’s also often called ΛCDM, where Λ is the cosmological constant and CDM stands for cold dark matter. The distance at which the cosmological principle should be a good approximation to the real distribution of matter was calculated from the concordance model in a 2010 paper by Hunt and Sarkar.

 

They found that the deviations from a uniform distribution fall below one part in a hundred once the averaging distance reaches about 200-300 Mpc. 300 Megaparsecs are about 1 billion light years. And just to give you a sense of scale, our distance to the next closest large galaxy, Andromeda, is about two and a half million light years. A billion light years is huge. But from that distance on at the latest, the cosmological principle should be fulfilled to good accuracy – if the concordance model is correct.
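For the record, the conversion behind that round number uses 1 parsec ≈ 3.26 light years:

$$300\ \text{Mpc} \;=\; 3\times 10^{8}\ \text{pc} \times 3.26\ \tfrac{\text{ly}}{\text{pc}} \;\approx\; 9.8\times 10^{8}\ \text{ly} \;\approx\; 1\ \text{billion light years}.$$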

 

One problem with the cosmological principle is that astrophysicists have on occasion assumed it is already valid on shorter distances, down to about 100 Megaparsecs. This is an unjustified assumption, but it has for example entered the analysis of supernovae data from which the existence of dark energy was inferred. And yes, that’s what the Nobel Prize in physics was awarded for in 2011.

 

Two years ago, I told you about a paper by Subir Sarkar and his colleagues which showed that if one analyses the supernovae data correctly, without assuming that the cosmological principle holds on too-short distances, then the evidence for dark energy disappears. That paper has been almost entirely ignored by other scientists. Check out my earlier video for more about that.

 

Today I want to tell you about another problem with the cosmological principle. As I said, one can calculate from the standard model of cosmology the scale above which it should be valid. Beyond that scale, the universe should look pretty much the same everywhere. This means in particular that there shouldn’t be any clumps of matter on scales larger than about a billion light years. But. Astrophysicists keep on finding those.

 

Already in 1991 they found the Clowes-Campusano quasar group, a collection of 34 quasars about 9.5 billion light years away from us that extends over 2 billion light years, clearly too large to be compatible with the prediction from the concordance model.

 

Since 2003, astrophysicists have known of the “Great Wall,” a collection of galaxies about a billion light years away from us that extends over 1.5 billion light years. That, too, is larger than it should be.

 

Then there’s the “Huge Quasar Group,” which is… huge. It spans a whopping 4 billion light years. And just this July, Alexia Lopez discovered the “Giant Arc,” a collection of galaxies, galaxy clusters, gas and dust that spans 3 billion light years.

 

Theoretically, these structures shouldn’t exist. It can happen that such clumps appear coincidentally in the concordance model. That’s because this model uses an initial distribution of matter in the early universe with random fluctuations. So it could happen that you end up with a big clump somewhere just by chance. But you can calculate the probability for that to happen. The Giant Arc alone has a probability of less than one in a hundred thousand of having come about by chance. And that doesn’t factor in all the other big structures.

 

What does it mean? It means the evidence is mounting that the cosmological principle is a bad assumption on which to build a model for the entire universe, and it probably has to go. It increasingly looks like we live in a region of the universe that happens to have a significantly lower density than the average in the visible universe. This underdense region we live in has been called the “local hole”, and it has a diameter of at least 600 million light years. This is the finding of a recent paper by a group of astrophysicists from Durham in the UK.

 

They also point out that if we live in a local hole, then the local value of the Hubble rate must be corrected down. This would be good news because, currently, measurements of the local value of the Hubble rate are in conflict with the value inferred from the early universe. And that discrepancy has been one of the biggest headaches in cosmology in the past years. Giving up the cosmological principle could solve that problem.

 

However, the finding in that paper from the Durham group is only a mild tension with the concordance model, at about 3 sigma, which is not highly statistically significant. But Sarkar and his group had another paper recently in which they do a consistency check on the concordance model and find a conflict at 4.9 sigma, that is, a less-than-one-in-a-million chance of it being a coincidence.

 

This works as follows. If we measure the temperature of the cosmic microwave background, it appears hotter in the direction in which we move against it. This gives rise to the so-called CMB dipole. You can measure this dipole. You can also measure the dipole by inferring our motion from observations of quasars. If the concordance model were right, the direction and magnitude of the two dipoles should be the same. But they are not. You see this in this figure from Sarkar’s paper. The star is the location of the CMB dipole, the triangle that of the quasar dipole. In the figure you see how far away from the CMB expectation the quasar result is.
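For reference, the “4.9 sigma, less than one in a million” statement above is just the tail probability of a Gaussian; whether the quoted number is one-sided or two-sided is my guess, since the post doesn’t say. A minimal check with SciPy:

```python
from scipy.stats import norm

print(f"one-sided tail beyond 4.9 sigma : {norm.sf(4.9):.1e}")      # ~5e-07
print(f"two-sided tail beyond 4.9 sigma : {2 * norm.sf(4.9):.1e}")  # ~1e-06
print(f"two-sided tail beyond 3 sigma   : {2 * norm.sf(3.0):.1e}")  # ~3e-03, the 'mild tension'
```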

 

These recent developments make me think that in the next ten years or so, we will see a major paradigm shift in cosmology, where the current standard model will be replaced with another one. Just what the new model will be, and if it will still have dark energy, I don’t know.


Thursday, August 26, 2021

The extreme weakness of fMRI brain analysis

 

Seventy Teams of Scientists Analysed the Same Brain Data, and It Went Badly

What the latest fMRI “crisis” means for the rest of science


https://medium.com/the-spike/seventy-teams-of-scientists-analysed-the-same-brain-data-and-it-went-badly-e0d96c23dbf4 

Mark Humphries


To the outside observer, it can seem that fMRI research careens from one public crisis to another. Detecting brain activity in a dead salmon. Impossibly high correlations between brain activity and behaviour. Serious flaws in fMRI analysis software leading to claims of tens of thousands of papers being (partially) wrong. Finding wildly different active brain regions from the same set of fMRI scans by just varying the parameters in standard analysis pipelines. “Oof” says The Rest of Science, “glad that’s not us.”

And now a paper in Nature shows that a big group of experts all looking at the same brain imaging data agree on almost nothing.

But within it is a warning for all of neuroscience, and beyond.

The group behind the Nature paper set a simple challenge: they asked teams of volunteers to each take the same set of fMRI scans from 108 people doing a decision-making task, and use them to test nine hypotheses of how brain activity would change during the task. Their goal was simply to test how many teams agreed on which hypotheses had significant evidence and which did not. The Neuroimaging Analysis Replication Study (NARPS) was born.

The task was simple too, cutting down on the complexity of the analysis. Lying in the scanner, you’d be shown the two potential outcomes of a coin-flip: if it comes up heads, you’d lose $X; if tails, you’d win $Y. Your decision is whether to accept or reject that gamble; accept it and the (virtual) coin is flipped, and your winnings adjusted accordingly. The clever bit is that the difference between the loss and win amounts is varied on every trial, testing your tolerance for losing. And if you’re like most people, you have a strong aversion to losing, so you will only regularly accept gambles where you could win at least twice as much as you lose.
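As a toy version of that behavioural rule, here is a sketch of my own using a standard loss-aversion weighting with a factor of 2, the factor the “win at least twice as much as you lose” remark suggests; the dollar amounts are invented:

```python
# A loss-averse chooser values a 50/50 gamble as 0.5*win - 0.5*lambda*loss and
# accepts it only when that value is non-negative, i.e. when win >= lambda*loss.
LOSS_AVERSION = 2.0  # "lambda": how much more a loss hurts than an equal win pleases

def accepts(win, loss, loss_aversion=LOSS_AVERSION):
    subjective_value = 0.5 * win - 0.5 * loss_aversion * loss
    return subjective_value >= 0

for win, loss in [(10, 10), (15, 10), (20, 10), (30, 10)]:
    verdict = "accept" if accepts(win, loss) else "reject"
    print(f"win ${win}, lose ${loss}: {verdict}")
```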

From this simple task sprung those nine hypotheses, equally simple. Eight about how activity in a broad region of the brain should go up or down in response to wins or losses; one a comparison of changes within a brain region during wins and losses. And pretty broad regions of the brain too — a big chunk of the prefrontal cortex, the whole striatum, and the whole amygdala. Simple task, simple hypotheses, unmissably big chunks of brain — simple to get the same answer, right? Wrong.

Seventy teams stepped up to take the data and test the nine hypotheses. Of the nine, only one (Hypothesis 5) was reported as significant by more than 80% of the teams. Three were reported as significant by only about 5% of the teams, about as much as we’d expect by chance using classical statistics, so could be charitably interpreted as showing the hypotheses were not true. Which left five hypotheses in limbo, with between 20% and 35% of teams reporting a significant effect for each. Nine hypotheses: one agreed as correct; three rejected; five in limbo. Not a great scorecard for 70 teams looking at the same data.
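The “about as much as we’d expect by chance” remark can be made concrete with a quick calculation (a sketch under the simplifying assumptions that the 70 teams are independent and each uses the conventional 5% false-positive threshold):

```python
from scipy.stats import binom

n_teams = 70
alpha = 0.05  # conventional false-positive rate per team

# If a hypothesis is truly null, how many of 70 independent teams would we
# still expect to call it "significant" purely by chance?
print(f"expected false positives: {n_teams * alpha:.1f} of {n_teams} teams ({100 * alpha:.0f}%)")

# Probability that at least 4 of the 70 teams report significance by chance alone.
print(f"P(4 or more by chance)  : {binom.sf(3, n_teams, alpha):.2f}")
```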

Even worse were the predictions of how many teams would support each hypothesis. Whether made by the teams themselves, or by a group of experts not taking part, the predictions were wildly over-optimistic. The worst offender (hypothesis 2) was supported by the results of only about 25% of the teams, but its predicted support was about 75%. So not only did the teams not agree on what was true, they also couldn’t predict what was true and what was not.

What then was it about the analysis pipelines used by the teams that led to big disagreements over which hypotheses were supported and which were not? The NARPS group could find little that systematically differed between them. One detectable effect was how smooth the teams made their brain maps — the more they were smoothed by averaging close-together brain bits, the more likely a team was to find significant evidence for a hypothesis. But this smoothing effect only accounted for (roughly) 4% of the variance in the outcomes, leaving 96% unaccounted for.

Whatever it was that differed between the teams, it came after the stage where each team built their initial statistical map of the brain’s activity: maps of which tiny cubes of brain — each voxel — passed some test of significance. Those initial statistical maps correlated quite well across teams. So the NARPS people took a consensus of these maps across the groups, and claimed clear support for four of the hypotheses (numbers 2, 4, 5 and 6). Great: so all we need do to provide robust answers for every fMRI study is have 70 teams create maps from the same data then merge them together to find the answer. Let’s all watch the science funders line up behind that idea.

(And then some wag will run a study that tests if different teams will get the same answers from the same merged map, and around we go again).

Sarcasm aside, that is not the answer. Because the results from that consensus map did not agree with the actual results of the teams. The teams found hypotheses 1 and 3 to be significant about as often as 2, 4 and 5, but hypotheses 1 and 3 were not well supported by the consensus map. So polling the teams provided different answers from the consensus of their maps. Which then are the supported hypotheses? At the end, we’re still none the wiser.

Some take pleasure in fMRI’s problems, and would add this NARPS paper to a long list of reasons not to take fMRI research seriously. But that would be folly.

Some of fMRI’s crises are more hype than substance. Finding activity in the brain of a dead salmon was not to show fMRI was broken, but was a teaching tool — an example of what could go wrong if for some reason you didn’t make the essential corrections for noise when analysing fMRI data, corrections that are built into neuroimaging analysis pipelines precisely so that you don’t find brain activity in a dead animal or, in one anecdotal case relayed to me, outside the skull. Those absurdly high “voodoo” correlations arise from double-dipping: first select out the most active voxels, and then correlate stuff only with them. The wrong thing to do, but fMRI research is hardly the only discipline that does double-dipping. And the much-ballyhooed software error turned out to maybe affect some of the results in a few hundred studies; but nonetheless was a warning to all to take care.
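The “voodoo correlation” problem is easy to reproduce with pure noise, as in this sketch (mine; the numbers of subjects, voxels and the selection threshold are invented for illustration). Select the voxels that correlate best with behaviour, report the correlation of only those voxels, and you get impressive-looking numbers from data that contain no signal at all:

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_voxels = 20, 5000
behaviour = rng.normal(size=n_subjects)              # e.g. a questionnaire score
activity = rng.normal(size=(n_subjects, n_voxels))   # pure-noise "brain activity"

# Pearson correlation of every voxel with the behavioural score.
a = (activity - activity.mean(axis=0)) / activity.std(axis=0)
b = (behaviour - behaviour.mean()) / behaviour.std()
r = (a * b[:, None]).mean(axis=0)

# Double dip: keep only the best-correlated voxels, then report their correlation.
selected = np.abs(r) > 0.5
print(f"voxels passing the threshold : {selected.sum()}")
print(f"mean |r| of selected voxels  : {np.abs(r[selected]).mean():.2f}")  # inflated
print(f"mean |r| over all voxels     : {np.abs(r).mean():.2f}")           # ~0.18, what noise gives
```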

Everyone finds it inherently fascinating to see the activity deep within a living human brain. So fMRI studies are endlessly in the public eye, the media plastering coloured doodles of brains into their breathless reporting. But fMRI is a young field, so its growing pains are public too. Another “crisis” just broke — that when you re-scan the same person, the map of brain activity you get will likely differ quite a lot from the original scan. One crisis at a time please, fMRI. But this public coverage is not in proportion to its problems: there’s nothing special about its problems.

The fMRI analysis pipeline is fiercely complex. This is common knowledge. And because it’s common knowledge, many fMRI researchers look closely at the robustness of how fMRI data is analysed — at errors in correcting the maps of brain activity, at what happens if we don’t correct, at the robustness of results to choices of how the analysis is set up, and at the robustness of results to having different scientists trying to obtain them. Rather than crises, one could equally interpret the above list — dead salmon, voodoo correlations and all — as a sign that fMRI is tackling its inevitable problems head on. And it just happens that they have to do it in public.

The NARPS paper ends with the warning that “although the present investigation was limited to the analysis of a single fMRI dataset, it seems highly likely that similar variability will be present for other fields of research in which the data are high-dimensional and the analysis workflows are complex and varied”.

The Rest of Science: “Do they mean us?”

Yes, they mean you. These crises should give any of us working on data from complex pipelines pause for serious thought. There is nothing unique to fMRI about the issues they raise. Other areas of neuroscience are just as bad. We can do issues of poor data collection: studies using too few subjects plague other areas of neuroscience just as much as neuroimaging. We can do absurdly high correlations too; for one thing if you use a tiny number of subjects then the correlations have to be absurdly high to pass as “significant”; for another most studies of neuron “function” are as double-dipped as fMRI studies, only analysing neurons that already passed some threshold for being tuned to the stimulus or movement studied. We can do dead salmon: without corrections for signal bleed (from the neuropil), calcium imaging can find neural activity outside of a neuron’s body. We can even do a version of this NARPS study, reaching wildly different conclusions about neural activity by varying the analysis pipelines applied to the same data-set. And the dark art of spike-sorting is, well, a dark art, with all that entails about the reliability of the findings that stem from the spikes (one solution might be: don’t sort them).
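One concrete version of the “tiny number of subjects” point from the paragraph above: the smallest Pearson correlation that clears p < 0.05 grows quickly as the sample shrinks. A minimal sketch (standard two-tailed t-test for a correlation; the sample sizes are arbitrary):

```python
from scipy.stats import t as t_dist

def critical_r(n, alpha=0.05):
    """Smallest |Pearson r| reaching two-tailed significance with n subjects."""
    df = n - 2
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    return t_crit / (t_crit**2 + df) ** 0.5

for n in (8, 12, 20, 50, 100):
    print(f"n = {n:3d}  ->  |r| must exceed {critical_r(n):.2f}")
```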

I come neither to praise fMRI nor to bury it. It’s a miraculous technology, but it comes with deep limitations for anyone interested in how neurons do what they do — it records blood flow, slowly, at the resolution of millions of neurons. But all the above are crises of technique, of analysis, of statistics. They are likely common to many fields, and we should be so lucky that our field’s issues are not played out as publicly as those of fMRI. Indeed, in striving to sort out its own house, fMRI research has gone where others should follow.



Thursday, July 29, 2021

Climate is very poorly understood




[[It cannot be emphasized enough that climate is an enormously complicated result of very many factors, many of which are not well understood, let alone their combined effects. Here is a very recent example.]]

A Soil-Science Revolution Upends Plans to Fight Climate Change


A centuries-old concept in soil science has recently been thrown out. Yet it remains a key ingredient in everything from climate models to advanced carbon-capture projects.

One teaspoon of healthy soil contains more bacteria, fungi and other microbes than there are humans on Earth. Those hungry organisms can make soil a difficult place to store carbon over long periods of time.



Gabriel Popkin

Contributing Writer


Quanta Magazine


July 27, 2021







The hope was that the soil might save us. With civilization continuing to pump ever-increasing amounts of carbon dioxide into the atmosphere, perhaps plants — nature’s carbon scrubbers — might be able to package up some of that excess carbon and bury it underground for centuries or longer.

That hope has fueled increasingly ambitious climate change–mitigation plans. Researchers at the Salk Institute, for example, hope to bioengineer plants whose roots will churn out huge amounts of a carbon-rich, cork-like substance called suberin. Even after the plant dies, the thinking goes, the carbon in the suberin should stay buried for centuries. This Harnessing Plants Initiative is perhaps the brightest star in a crowded firmament of climate change solutions based on the brown stuff beneath our feet.

Such plans depend critically on the existence of large, stable, carbon-rich molecules that can last hundreds or thousands of years underground. Such molecules, collectively called humus, have long been a keystone of soil science; major agricultural practices and sophisticated climate models are built on them.

But over the past 10 years or so, soil science has undergone a quiet revolution, akin to what would happen if, in physics, relativity or quantum mechanics were overthrown. Except in this case, almost nobody has heard about it — including many who hope soils can rescue the climate. “There are a lot of people who are interested in sequestration who haven’t caught up yet,” said Margaret Torn, a soil scientist at Lawrence Berkeley National Laboratory.

A new generation of soil studies powered by modern microscopes and imaging technologies has revealed that whatever humus is, it is not the long-lasting substance scientists believed it to be. Soil researchers have concluded that even the largest, most complex molecules can be quickly devoured by soil’s abundant and voracious microbes. The magic molecule you can just stick in the soil and expect to stay there may not exist.





Artificially colored scanning electron micrograph images of soils from the island of Hawai’i.

Thiago Inagaki, in collaboration with Lena Kourkoutis, Angela Possinger and Johannes Lehmann

“I have The Nature and Properties of Soils in front of me — the standard textbook,” said Gregg Sanford, a soil researcher at the University of Wisconsin, Madison. “The theory of soil organic carbon accumulation that’s in that textbook has been proven mostly false … and we’re still teaching it.”

The consequences go far beyond carbon sequestration strategies. Major climate models such as those produced by the Intergovernmental Panel on Climate Change are based on this outdated understanding of soil. Several recent studies indicate that those models are underestimating the total amount of carbon that will be released from soil in a warming climate. In addition, computer models that predict the greenhouse gas impacts of farming practices — predictions that are being used in carbon markets — are probably overly optimistic about soil’s ability to trap and hold on to carbon.

It may still be possible to store carbon underground long term. Indeed, radioactive dating measurements suggest that some amount of carbon can stay in the soil for centuries. But until soil scientists build a new paradigm to replace the old — a process now underway — no one will fully understand why.
The Death of Humus

Soil doesn’t give up its secrets easily. Its constituents are tiny, varied and outrageously numerous. At a bare minimum, it consists of minerals, decaying organic matter, air, water, and enormously complex ecosystems of microorganisms. One teaspoon of healthy soil contains more bacteria, fungi and other microbes than there are humans on Earth.


The fine hairs surrounding roots are covered in hungry bacteria; soils slightly further away from the roots may have an order of magnitude fewer microbes.

Courtesy of Jennifer Pett-Ridge and Erin Nuccio

The German biologist Franz Karl Achard was an early pioneer in making sense of the chaos. In a seminal 1786 study, he used alkalis to extract molecules made of long carbon chains from peat soils. Over the centuries, scientists came to believe that such long chains, collectively called humus, constituted a large pool of soil carbon that resists decomposition and pretty much just sits there. A smaller fraction consisting of shorter molecules was thought to feed microbes, which respired carbon dioxide to the atmosphere.

This view was occasionally challenged, but by the mid-20th century, the humus paradigm was “the only game in town,” said Johannes Lehmann, a soil scientist at Cornell University. Farmers were instructed to adopt practices that were supposed to build humus. Indeed, the existence of humus is probably one of the few soil science facts that many non-scientists could recite.

What helped break humus’s hold on soil science was physics. In the second half of the 20th century, powerful new microscopes and techniques such as nuclear magnetic resonance and X-ray spectroscopy allowed soil scientists for the first time to peer directly into soil and see what was there, rather than pull things out and then look at them.

What they found — or, more specifically, what they didn’t find — was shocking: there were few or no long “recalcitrant” carbon molecules — the kind that don’t break down. Almost everything seemed to be small and, in principle, digestible.

“We don’t see any molecules in soil that are so recalcitrant that they can’t be broken down,” said Jennifer Pett-Ridge, a soil scientist at Lawrence Livermore National Laboratory. “Microbes will learn to break anything down — even really nasty chemicals.”

Lehmann, whose studies using advanced microscopy and spectroscopy were among the first to reveal the absence of humus, has become the concept’s debunker-in-chief. A 2015 Nature paper he co-authored states that “the available evidence does not support the formation of large-molecular-size and persistent ‘humic substances’ in soils.” In 2019, he gave a talk with a slide containing a mock death announcement for “our friend, the concept of Humus.”

Over the past decade or so, most soil scientists have come to accept this view. Yes, soil is enormously varied. And it contains a lot of carbon. But there’s no carbon in soil that can’t, in principle, be broken down by microorganisms and released into the atmosphere. The latest edition of The Nature and Properties of Soils, published in 2016, cites Lehmann’s 2015 paper and acknowledges that “our understanding of the nature and genesis of soil humus has advanced greatly since the turn of the century, requiring that some long-accepted concepts be revised or abandoned.”

Old ideas, however, can be very recalcitrant. Few outside the field of soil science have heard of humus’s demise.
Buried Promises

At the same time that soil scientists were rediscovering what exactly soil is, climate researchers were revealing that increasing amounts of carbon dioxide in the atmosphere were rapidly warming the climate, with potentially catastrophic consequences.

Thoughts soon turned to using soil as a giant carbon sink. Soils contain enormous amounts of carbon — more carbon than in Earth’s atmosphere and all its vegetation combined. And while certain practices such as plowing can stir up that carbon — farming, over human history, has released an estimated 133 billion metric tons of carbon into the atmosphere — soils can also take up carbon, as plants die and their roots decompose.


Farming practices such as plowing can reduce the amount of carbon stored in soil.

Scientists began to suggest that we might be able to coax large volumes of atmospheric carbon back into the soil to dampen or even reverse the damage of climate change.

In practice, this has proved difficult. An early idea to increase carbon stores — planting crops without tilling the soil — has mostly fallen flat. When farmers skipped the tilling and instead drilled seeds into the ground, carbon stores grew in upper soil layers, but they disappeared from lower layers. Most experts now believe that the practice redistributes carbon within the soil rather than increases it, though it can improve other factors such as water quality and soil health.

Efforts like the Harnessing Plants Initiative represent something like soil carbon sequestration 2.0: a more direct intervention to essentially jam a bunch of carbon into the ground.

The initiative emerged when a team of scientists at the Salk Institute came up with an idea: Create plants whose roots produce an excess of carbon-rich molecules. By their calculations, if grown widely, such plants might sequester up to 20% of the excess carbon dioxide that humans add to the atmosphere every year.

The Salk scientists zeroed in on a complex, cork-like molecule called suberin, which is produced by many plant roots. Studies from the 1990s and 2000s had hinted that suberin and similar molecules could resist decomposition in soil.




With flashy marketing, the Harnessing Plants Initiative gained attention. An initial round of fundraising in 2019 brought in over $35 million. Last year, the multibillionaire Jeff Bezos contributed $30 million from his “Earth Fund.”

But as the project gained momentum, it attracted doubters. One group of researchers noted in 2016 that no one had actually observed the suberin decomposition process. When those authors did the relevant experiment, they found that much of the suberin decayed quickly.

In 2019, Joanne Chory, a plant geneticist and one of the Harnessing Plant Initiative’s project leaders, described the project at a TED conference. Asmeret Asefaw Berhe, a soil scientist at the University of California, Merced, who spoke at the same conference, pointed out to Chory that according to modern soil science, suberin, like any carbon-containing compound, should break down in soil. (Berhe, who has been nominated to lead the U.S. Department of Energy’s Office of Science, declined an interview request.)

Around the same time, Hanna Poffenbarger, a soil researcher at the University of Kentucky, made a similar comment after hearing Wolfgang Busch, the other project leader, speak at a workshop. “You should really get some soil scientists on board, because the assumption that we can breed for more recalcitrant roots — that may not be valid,” Poffenbarger recalls telling Busch.

Questions about the project surfaced publicly earlier this year, when Jonathan Sanderman, a soil scientist at the Woodwell Climate Research Center in Woods Hole, Massachusetts, tweeted, “I thought the soil biogeochem community had moved on from the idea that there is a magical recalcitrant plant compound. Am I missing some important new literature on suberin?” Another soil scientist responded, “Nope, the literature suggests that suberin will be broken down just like every other organic plant component. I’ve never understood why the @salkinstitute has based their Harnessing Plant Initiative on this premise.”

Busch, in an interview, acknowledged that “there is no unbreakable biomolecule.” But, citing published papers on suberin’s resistance to decomposition, he said, “We are still very optimistic when it comes to suberin.”

He also noted a second initiative Salk researchers are pursuing in parallel to enhancing suberin. They are trying to design plants with longer roots that could deposit carbon deeper in soil. Independent experts such as Sanderman agree that carbon tends to stick around longer in deeper soil layers, putting that solution on potentially firmer conceptual ground.

Chory and Busch have also launched collaborations with Berhe and Poffenbarger, respectively. Poffenbarger, for example, will analyze how soil samples containing suberin-rich plant roots change under different environmental conditions. But even those studies won’t answer questions about how long suberin sticks around, Poffenbarger said — important if the goal is to keep carbon out of the atmosphere long enough to make a dent in global warming.

Beyond the Salk project, momentum and money are flowing toward other climate projects that would rely on long-term carbon sequestration and storage in soils. In an April speech to Congress, for example, President Biden suggested paying farmers to plant cover crops, which are grown not for harvest but to nurture the soil in between plantings of cash crops. Evidence suggests that when cover crop roots break down, some of their carbon stays in the soil — although as with suberin, how long it lasts is an open question.
Not Enough Bugs in the Code

Recalcitrant carbon may also be warping climate prediction.

In the 1960s, scientists began writing large, complex computer programs to predict the global climate’s future. Because soil both takes up and releases carbon dioxide, climate models attempted to take into account soil’s interactions with the atmosphere. But the global climate is fantastically complex, and to enable the programs to run on the machines of the time, simplifications were necessary. For soil, scientists made a big one: They ignored microbes in the soil entirely. Instead, they basically divided soil carbon into short-term and long-term pools, in accordance with the humus paradigm.
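To illustrate what such a pool model looks like, here is a generic two-pool, first-order-decay sketch of my own; it is not the code of any particular climate model, and the pool split, input rate and turnover times are invented:

```python
# Generic two-pool soil-carbon model: plant inputs are split between a "fast"
# pool and a "slow" (humus-like) pool, and each pool decays at a fixed
# first-order rate. Note that microbes are not represented at all.
dt = 1.0                         # time step in years
years = 200
inputs = 1.0                     # plant carbon entering the soil per year (arbitrary units)
fraction_to_slow = 0.2           # share routed to the long-lived pool (invented)
tau_fast, tau_slow = 5.0, 500.0  # turnover times in years (invented)

fast, slow = 0.0, 0.0
for _ in range(int(years / dt)):
    fast += dt * (inputs * (1 - fraction_to_slow) - fast / tau_fast)
    slow += dt * (inputs * fraction_to_slow - slow / tau_slow)

print(f"fast pool after {years} years: {fast:.1f}")
print(f"slow pool after {years} years: {slow:.1f}")
# Carbon routed to the slow pool is effectively parked for centuries, which is
# exactly the humus-style assumption the newer soil science calls into question.
```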

More recent generations of models, including ones that the Intergovernmental Panel on Climate Change uses for its widely read reports, are essentially palimpsests built on earlier ones, said Torn. They still assume soil carbon exists in long-term and short-term pools. As a consequence, these models may be overestimating how much carbon will stick around in soils and underestimating how much carbon dioxide they will emit.

Last summer, a study published in Nature examined how much carbon dioxide was released when researchers artificially warmed the soil in a Panamanian rainforest to mimic the long-term effects of climate change. They found that the warmed soil released 55% more carbon than nearby unwarmed areas — a much larger release than predicted by most climate models. The researchers think that microbes in the soil grow more active at the warmer temperatures, leading to the increase.



The study was especially disheartening because most of the world’s soil carbon is in the tropics and the northern boreal zone. Despite this, leading soil models are calibrated to the results of soil studies in temperate regions such as the U.S. and Europe, where most studies have historically been done. “We’re doing pretty bad in high latitudes and the tropics,” said Lehmann.

Even temperate climate models need improvement. Torn and colleagues reported earlier this year that, contrary to predictions, deep soil layers in a California forest released roughly a third of their carbon when warmed for five years.

Ultimately, Torn said, models need to represent soil as something closer to what it actually is: a complex, three-dimensional environment governed by a hyper-diverse community of carbon-gobbling bacteria, fungi and other microscopic beings. But even smaller steps would be welcome. Just adding microbes as a single class would be major progress for most models, she said.
Fertile Ground

If the humus paradigm is coming to an end, the question becomes: What will replace it?

One important and long-overlooked factor appears to be the three-dimensional structure of the soil environment. Scientists describe soil as a world unto itself, with the equivalent of continents, oceans and mountain ranges. This complex microgeography determines where microbes such as bacteria and fungi can go and where they can’t; what food they can gain access to and what is off limits.

A soil bacterium “may be only 10 microns away from a big chunk of organic matter that I’m sure they would love to degrade, but it’s on the other side of a cluster of minerals,” said Pett-Ridge. “It’s literally as if it’s on the other side of the planet.”



Another related, and poorly understood, ingredient in a new soil paradigm is the fate of carbon within the soil. Researchers now believe that almost all organic material that enters soil will get digested by microbes. “Now it’s really clear that soil organic matter is just this loose assemblage of plant matter in varying degrees of degradation,” said Sanderman. Some will then be respired into the atmosphere as carbon dioxide. What remains could be eaten by another microbe — and a third, and so on. Or it could bind to a bit of clay or get trapped inside a soil aggregate: a porous clump of particles that, from a microbe’s point of view, could be as large as a city and as impenetrable as a fortress. Studies of carbon isotopes have shown that a lot of carbon can stick around in soil for centuries or even longer. If humus isn’t doing the stabilizing, perhaps minerals and aggregates are.

Before soil science settles on a new theory, there will doubtless be more surprises. One may have been delivered recently by a group of researchers at Princeton University who constructed a simplified artificial soil using microfluidic devices — essentially, tiny plastic channels for moving around bits of fluid and cells. The researchers found that carbon they put inside an aggregate made of bits of clay was protected from bacteria. But when they added a digestive enzyme, the carbon was freed from the aggregate and quickly gobbled up. “To our surprise, no one had drawn this connection between enzymes, bacteria and trapped carbon,” said Howard Stone, an engineer who led the study.


Lehmann is pushing to replace the old dichotomy of stable and unstable carbon with a “soil continuum model” of carbon in progressive stages of decomposition. But this model and others like it are far from complete, and at this point, more conceptual than mathematically predictive.

Researchers agree that soil science is in the midst of a classic paradigm shift. What nobody knows is exactly where the field will land — what will be written in the next edition of the textbook. “We’re going through a conceptual revolution,” said Mark Bradford, a soil scientist at Yale University. “We haven’t really got a new cathedral yet. We have a whole bunch of churches that have popped up.”

Correction: July 28, 2021

This article was revised to credit a team at Salk with the idea of using suberin-enriched plants to sequester carbon in soil.

Wednesday, July 14, 2021

The “Feminist Methodology” Muddle



Susan Haack



[I]f I should ever attack that excessively difficult question, “What is for the true interest of society?” I should feel I stood in need of a great deal of help from the science of legitimate inference.—C. S. Peirce 


Should scientists and philosophers use “feminist methodology”? No; for more reasons than I can spell out here, but first and foremost because their business is figuring things out, not promoting social justice.

“Methodology” is a much overworked and underspecified word; but “feminist methodology” is especially vague, ambiguous, and ill-defined. Even a brief survey of syllabi to be found online for courses on feminist methodology confirms this: one syllabus I found said that the students are to “design a feminist methodology” for their work themselves; and another that in the course “we” (i.e., presumably, the professor and the students) will try to answer the questions, “what counts as a feminist method?” and “who gets to say?”

Presumably, “feminist methodology” means something like “methodology informed by feminist values.” But this raises a whole raft of problems. In the first place, feminism is hardly monolithic, so we can expect there to be competing understandings of what values qualify as feminist. For a humanist, individualist feminist such as myself, a recognition of every woman’s full humanity and of each woman’s unique individuality will have priority; for many academic feminists today, apparently, it is what they take to be the shared oppression of women-as-a-class that matters. In the second place: however those feminist values are construed, though they may have some bearing on some issues in the social sciences and a few in the life and medical sciences, they are essentially irrelevant to physical cosmology, the theory of magnetism, quantum chemistry, molecular biology, etc., etc., and their relevance to philosophy seems even more limited. 

In any case, the idea that we should conduct scientific and philosophical work in such a way as to advance the interests of women faces an insuperable hurdle even within the limited sphere where it’s relevant: such advice could be followed only if we already knew what women’s interests really are, and what would really advance those interests; and to know this, obviously, we’d need serious philosophical and scientific work independent of any feminist agenda. So to urge that science and philosophy use feminist methodology is, in effect, to urge the deliberate politicization of inquiry, the deliberate blurring of the line between honest investigation and disguised advocacy; which both corrupts inquiry—which, as we should know from the awful examples of “Nazi physics” and “Soviet biology” is bound to be a disaster—and leaves advocacy without the firm factual basis it needs.

We can’t overcome the problem of limited scope by appealing to supposed “women’s ways of knowing” anything and everything, such as reliance on emotion rather than reason, or on the subjective rather than the objective—which just reintroduces old, sexist stereotypes under the guise of “feminist values”; nor can we avoid it by pointing to supposedly sexist metaphors in science or philosophy of science—which is, frankly, silly. And, of course, we can’t overcome the hurdle of identifying women’s interests and understanding what advances them by appeal to “feminist philosophy” or “feminist science,” or avoid the danger of transmuting inquiry into advocacy by suggesting that we are doing no more than detecting and correcting sexist biases in philosophical or scientific work. 

Am I saying that there have never been biases of this sort? No; I daresay there have. And such bias is, of course, regrettable—damaging not only to science and to philosophy, but also to women’s interests. Still, I very much doubt that sexist bias is the commonest form, or the most seriously damaging to inquiry—confirmation bias and bias in favor of an accepted theory are probably both commoner and more serious. And in any case the best way to avoid deleterious bias is simply to seek out as much evidence as possible, and to assess as honestly as possible where it points. 

Am I saying that advocacy is a bad thing? No, of course not; it’s often needed, and it’s fine in its proper place—in law, in politics, etc. The law relies on cross-examination and advocacy on each side; but the purpose of a trial is to arrive, within a reasonable time, at a verdict—a verdict warranted to the required degree by the evidence presented. Unlike a trial, however, scientific and philosophical work isn’t constrained by the desire for a prompt decision, but takes the time it takes; and often enough, the best “verdict” we can give is “as yet, we just don’t know.” 

Am I saying that I don’t care about social justice? No; though I do think the way the phrase combines highly nebulous content with strongly favorable connotation is potentially dangerous. Still, a society where everyone is free and no one oppressed is certainly desirable—unclear as it is how such a society might look in the specific, or how we might bring such a situation about. But I have to say that the idea that, at this point in time, women in the developed Western world are an oppressed class strikes me as a grave exaggeration—and a dangerous one, for several reasons. Rather as over-broad definitions of sexual harassment trivialize the serious offenses, this idea trivializes the real oppression that some classes of people are suffering: the Rohingya of Myanmar, for example, the Uighurs in China, the ordinary people of Venezuela or Syria, not to mention the Saudi women who have only very recently been permitted some of the many freedoms we take for granted in the West. At the same time, it encourages women in the developed Western world to be preoccupied with slights—“micro-aggressions” in today’s catchphrase—at the expense of getting on with their lives and with productive work. Moreover, by conveying the false impression that the sciences and philosophy are pervasively riddled with sexist bias, it probably encourages some women who might otherwise have made a real contribution to these fields, and found satisfaction in doing so, to choose other, and perhaps less rewarding, occupations instead.

The anonymous author of the Wikipedia entry on feminist method speaks of “a sense of despair and anger that knowledge, both academic and popular, [is] based on men’s lives, male ways of thinking, and directed towards the problems articulated by men.” I think it’s long past time we put such factitious anger and such factitious despair behind us, and long past time we moved beyond thinking in terms of male and female ways of thinking to a fuller appreciation of the richness, variety, and potential of human intelligence, regardless of sex or any other irrelevant consideration. 


Sunday, June 27, 2021

Science not uber alles


Science Should Not Try to Absorb Religion and Other Ways of Knowing

John Horgan

https://www.scientificamerican.com/article/science-should-not-try-to-absorb-religion-and-other-ways-of-knowing/ 

 

An edgy biography of Stephen Hawking has me reminiscing about science’s good old days. Or were they bad? I can’t decide. I’m talking about the 1990s, when scientific hubris ran rampant. As journalist Charles Seife recalls in Hawking Hawking: The Selling of a Scientific Celebrity, Hawking and other physicists convinced us that they were on the verge of a “theory of everything” that would solve the riddle of existence. It would reveal why there is something rather than nothing, and why that something is the way it is.

In this column, I’ll look at an equally ambitious and closely related claim, that science will absorb other ways of seeing the world, including the arts, humanities and religion. Nonscientific modes of knowledge won’t necessarily vanish, but they will become consistent with science, our supreme source of truth. The most eloquent advocate of this perspective is biologist Edward Wilson, one of our greatest scientist-writers.

In his 1998 bestseller Consilience: The Unity of Knowledge, Wilson prophesies that science will soon yield such a compelling, complete theory of nature, including human nature, that “the humanities, ranging from philosophy and history to moral reasoning, comparative religion, and interpretation of the arts, will draw closer to the sciences and partly fuse with them.” Wilson calls this unification of knowledge “consilience,” an old-fashioned term for coming together or converging. Consilience will resolve our age-old identity crisis, helping us understand once and for all “who we are and why we are here,” as Wilson puts it.


Dismissing philosophers’ warnings against deriving “ought” from “is,” Wilson insists that we can deduce moral principles from science. Science can illuminate our moral impulses and emotions, such as our love for those who share our genes, as well as giving us moral guidance. This linkage of science to ethics is crucial, because Wilson wants us to share his desire to preserve nature in all its wild variety, a goal that he views as an ethical imperative.

At first glance you might wonder: Who could possibly object to this vision? Wouldn’t we all love to agree on a comprehensive worldview, consistent with science, that tells us how to behave individually and collectively? And in fact, many scholars share Wilson’s hope for a merger of science with alternative ways of engaging with reality. Some enthusiasts have formed the Consilience Project, dedicated to “developing a body of social theory and analysis that explains and seeks solutions to the unique challenges we face today.” Last year, poet-novelist Clint Margrave wrote an eloquent defense of consilience for Quillette, noting that he has “often drawn inspiration from science.”

Another consilience booster is psychologist and megapundit Steven Pinker, who praised Wilson’s “excellent” book in 1998 and calls for consilience between science and the humanities in his 2018 bestseller Enlightenment Now. The major difference between Wilson and Pinker is stylistic. Whereas Wilson holds out an olive branch to “postmodern” humanities scholars who challenge science’s objectivity and authority, Pinker scolds them. Pinker accuses postmodernists of “defiant obscurantism, self-refuting relativism and suffocating political correctness.”

The enduring appeal of consilience makes it worth revisiting. Consilience raises two big questions: (1) Is it feasible? (2) Is it desirable? Feasibility first. As Wilson points out, physics has been an especially potent unifier, establishing over the past few centuries that the heavens and earth are made of the same stuff ruled by the same forces. Now physicists seek a single theory that fuses general relativity, which describes gravity, with quantum field theory, which accounts for electromagnetism and the nuclear forces. This is Hawking’s theory of everything and Steven Weinberg’s “final theory.”

Writing in 1998, Wilson clearly expected physicists to find a theory of everything soon, but today they seem farther than ever from that goal. Worse, they still cannot agree on what quantum mechanics means. As science writer Philip Ball points out in his 2018 book Beyond Weird: Why Everything You Thought You Knew about Quantum Physics Is Different, there are more interpretations of quantum mechanics now than ever.


The same is true of scientific attempts to bridge the explanatory chasm between matter and mind. In the 1990s, it still seemed possible that researchers would discover how physical processes in the brain and other systems generate consciousness. Since then, mind-body studies have undergone a paradigm explosion, with theorists espousing a bewildering variety of models, involving quantum mechanics, information theory and Bayesian mathematics.  Some researchers suggest that consciousness pervades all matter, a view called panpsychism; others insist that the so-called hard problem of consciousness is a pseudoproblem because consciousness is an “illusion.”

There are schisms even within Wilson’s own field of evolutionary biology. In Consilience and elsewhere, Wilson suggests that natural selection promotes traits at the level of tribes and other groups; in this way, evolution might have bequeathed us a propensity for religion, war and other social behaviors. Other prominent Darwinians, notably Richard Dawkins and Robert Trivers, reject group selection, arguing that natural selection operates only at the level of individual organisms and even individual genes.

If scientists cannot achieve consilience even within specific fields, what hope is there for consilience between, say, quantum chromodynamics and queer theory? (Actually, in her fascinating 2007 book Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, physicist-philosopher Karen Barad finds resonances between physics and gender politics; but Barad’s book represents the kind of postmodern analysis deplored by Wilson and Pinker.) If consilience entails convergence toward a consensus, science is moving away from consilience.



So, consilience doesn’t look feasible, at least not at the moment. Next question: Is consilience desirable? Although I’ve always doubted whether it could happen, I once thought consilience should happen. If humanity can agree on a single, rational worldview, maybe we can do a better job solving our shared problems, like climate change, inequality, pandemics and militarism. We could also get rid of bad ideas, such as the notion that God likes some of us more than others; or that racial and sexual inequality and war are inevitable consequences of our biology.

I also saw theoretical diversity, or pluralism, as philosophers call it, as a symptom of failure; the abundance of “solutions” to the mind-body problem, like the abundance of treatments for cancer, means that none works very well. But increasingly, I see pluralism as a valuable, even necessary counterweight to our yearning for certitude. Pluralism is especially important when it comes to our ideas about who we are, can be and should be. If we settle on a single self-conception, we risk limiting our freedom to reinvent ourselves, to discover new ways to flourish.


Wilson acknowledges that consilience is a reductionistic enterprise, which will eliminate many ways of seeing the world. Consider how he treats mystical visions, in which we seem to glimpse truths normally hidden behind the surface of things. To my mind, these experiences rub our faces in the unutterable weirdness of existence, which transcends all our knowledge and forms of expression. As William James says in The Varieties of Religious Experience, mystical experiences should “forbid a premature closing of our accounts with reality.”

Wilson disagrees. He thinks mystical experiences are reducible to physiological processes. In Consilience, he focuses on Peruvian shaman-artist Pablo Amaringo, whose paintings depict fantastical, jungly visions induced by ayahuasca, a hallucinogenic tea (which I happen to have taken) brewed from two Amazonian plants. Wilson attributes the snakes that slither through Amaringo’s paintings to natural selection, which instilled an adaptive fear of snakes in our ancestors; it should not be surprising that snakes populate many religious myths, such as the biblical story of Eden.

Moreover, ayahuasca contains psychotropic compounds, including the potent psychedelic dimethyltryptamine, like those that induce dreams, which stem from, in Wilson’s words, the “editing of information in the memory banks of the brain” that occurs while we sleep. These nightly neural discharges are “arbitrary in content,” that is, meaningless; but the brain desperately tries to assemble them into “coherent narratives,” which we experience as dreams.

In this way, Wilson “explains” Amaringo’s visions in terms of evolutionary biology, psychology and neurochemistry. This is a spectacular example of what Paul Feyerabend, my favorite philosopher and a fierce advocate for pluralism, calls “the tyranny of truth.” Wilson imposes his materialistic, secular worldview on the shaman, and he strips ayahuasca visions of any genuine spiritual significance. While he exalts biological diversity, Wilson shows little respect for the diversity of human beliefs.

Wilson is a gracious, courtly man in person as well on the page. But his consilience project stems from excessive faith in science, or scientism. (Both Wilson and Pinker embrace the term scientism, and they no doubt think that the phrase “excessive faith in science” is oxymoronic.) Given the failure to achieve consilience within physics and biology—not to mention the replication crisis and other problems—scientists should stop indulging in fantasies about conquering all human culture and attaining something akin to omniscience. Scientists, in short, should be more humble.


Ironically, Wilson himself questioned the desirability of final knowledge early in his career. At the end of his 1975 masterpiece Sociobiology, Wilson anticipates the themes of Consilience, predicting that evolutionary theory plus genetics will soon absorb the social sciences and humanities. But Wilson doesn’t exult at this prospect. When we can explain ourselves in “mechanistic terms,” he warns, “the result might be hard to accept”; we might find ourselves, as Camus put it, “divested of illusions.”

Wilson needn’t have worried. Scientific omniscience looks less likely than ever, and humans are far too diverse, creative and contrary to settle for a single worldview of any kind. Inspired by mysticism and the arts, as well as by science, we will keep arguing about who we are and reinventing ourselves forever. Is consilience a bad idea, which we’d be better off without? I wouldn’t go that far. Like utopia, another byproduct of our yearning for perfection, consilience, the dream of total knowledge, can serve as a useful goad to the imagination, as long as we see it as an unreachable ideal. Let’s just hope we never think we’ve reached it.

This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.

Further Reading:

The Delusion of Scientific Omniscience


The End of Science (updated 2015 edition)

Mind-Body Problems: Science, Subjectivity and Who We Really Are

I just talked about consilience with science journalist Philip Ball on my podcast “Mind-Body Problems.”

I brood over the limits of knowledge in my new book Pay Attention: Sex, Death, and Science.




