Monday, September 13, 2021

Recommended reading for the remaining fans of Richard Dawkins

 

Pseudogenes Aren’t Nonfunctional Relics that Refute Intelligent Design

Casey Luskin

September 9, 2021, 6:36 AM


https://evolutionnews.org/2021/09/pseudogenes-arent-nonfunctional-relics-that-refute-intelligent-design/


[[If you need an example of the tone and accuracy of the debate about evolution, here is a good example. I highly recommend Dr. Luskin’s work in general.]]



We’ve been discussing a video in which Richard Dawkins claims that the evidence for common ancestry refutes intelligent design (see here, here, and here). We first saw that contrary to Dawkins, the genetic data does not yield “a perfect hierarchy” or “perfect family tree.” Then we saw that a treelike data structure does not necessarily refute intelligent design. But Dawkins isn’t done. At the end of his answer in the video, Dawkins raises the issue of “pseudogenes,” which he claims “don’t do anything but are vestigial relicts of genes that once did something.” Dawkins says elsewhere that pseudogenes “are never transcribed or translated. They might as well not exist, as far as the animal’s welfare is concerned.” These claims represent a classic but false “junk DNA” argument against intelligent design. 

Functions of Pseudogenes 

Pseudogenes can yield functional RNA transcripts, functional proteins, or perform a function without producing any transcript. A 2012 paper in Science Signaling noted that although “pseudogenes have long been dismissed as junk DNA,” recent advances have established that “the DNA of a pseudogene, the RNA transcribed from a pseudogene, or the protein translated from a pseudogene can have multiple, diverse functions and that these functions can affect not only their parental genes but also unrelated genes.” The paper concludes that “pseudogenes have emerged as a previously unappreciated class of sophisticated modulators of gene expression.” 

A 2011 paper in the journal RNA concurs:

Pseudogenes have long been labeled as ‘junk’ DNA, failed copies of genes that arise during the evolution of genomes. However, recent results are challenging this moniker; indeed, some pseudogenes appear to harbor the potential to regulate their protein-coding cousins. 

Likewise, a 2012 paper in RNA Biology states that “pseudogenes were long considered as junk genomic DNA” but “pseudogene regulation is widespread in eukaryotes.” Because pseudogenes may only function in specific tissues and/or only during particular stages of development, their true functions may be difficult to detect. The RNA Biology paper concludes that “the study of functional pseudogenes is just at the beginning” and predicts “more and more functional pseudogenes will be discovered as novel biological technologies are developed in the future.” 

When we do carefully study pseudogenes, we often find function. One paper in Annual Review of Genetics observed: “pseudogenes that have been suitably investigated often exhibit functional roles.” A 2020 paper in Nature Reviews Genetics cautioned that pseudogene function is “Prematurely Dismissed” due to “dogma.” It notes that there are many instances where DNA that was dismissed as pseudogene junk was later found to be functional: “with a growing number of instances of pseudogene-annotated regions later found to exhibit biological function, there is an emerging risk that these regions of the genome are prematurely dismissed as pseudogenic and therefore regarded as void of function.” Indeed, the literature is full of papers reporting function in what have been wrongly labeled “pseudogenes.”

Fingers in Ears?

At the end of the video, Dawkins says: “I find it extremely hard to imagine how any creationist who actually bothered to listen to that could possibly doubt the fact of evolution. But they don’t listen…they simply stick their fingers in their ear and say la la la.” It’s safe to say that Dawkins was wrong about many things in this video, but I’m not here to make any accusations about fingers and ears. I will say that the best resolution to these kinds of questions is to listen to the data, keep an open mind, and think critically. When we’re willing to do this, a lot of exciting new scientific possibilities open up — ones that don’t necessarily include traditional neo-Darwinian views of common ancestry or a “perfect hierarchy” in the tree of life, and ones that readily point toward intelligent design.


Sunday, September 5, 2021

More evidence that more DNA is useful

 

The Complex Truth About ‘Junk DNA’

Genomes hold immense quantities of noncoding DNA. Some of it is essential for life, some seems useless, and some has its own agenda.

 

The 98% of the human genome that does not encode proteins is sometimes called junk DNA, but the reality is more complicated than that name implies.

Samuel Velasco/Quanta Magazine

Jake Buehler

Contributing Writer


September 1, 2021

 

https://www.quantamagazine.org/the-complex-truth-about-junk-dna-20210901/?utm_source=Quanta+Magazine&utm_campaign=a34a5832b8-RSS_Daily_Biology&utm_medium=email&utm_term=0_f0cb61321c-a34a5832b8-389846569&mc_cid=a34a5832b8&mc_eid=61275b7d81

Imagine the human genome as a string stretching out for the length of a football field, with all the genes that encode proteins clustered at the end near your feet. Take two big steps forward; all the protein information is now behind you.

The human genome has three billion base pairs in its DNA, but only about 2% of them encode proteins. The rest seems like pointless bloat, a profusion of sequence duplications and genomic dead ends often labeled “junk DNA.” This stunningly thriftless allocation of genetic material isn’t limited to humans: Even many bacteria seem to devote 20% of their genome to noncoding filler.

Many mysteries still surround the issue of what noncoding DNA is, and whether it really is worthless junk or something more. Portions of it, at least, have turned out to be vitally important biologically. But even beyond the question of its functionality (or lack of it), researchers are beginning to appreciate how noncoding DNA can be a genetic resource for cells and a nursery where new genes can evolve.

“Slowly, slowly, slowly, the terminology of ‘junk DNA’ [has] started to die,” said Cristina Sisu, a geneticist at Brunel University London.

Scientists casually referred to “junk DNA” as far back as the 1960s, but they took up the term more formally in 1972, when the geneticist and evolutionary biologist Susumu Ohno used it to argue that large genomes would inevitably harbor sequences, passively accumulated over many millennia, that did not encode any proteins. Soon thereafter, researchers acquired hard evidence of how plentiful this junk is in genomes, how varied its origins are, and how much of it is transcribed into RNA despite lacking the blueprints for proteins.

Technological advances in sequencing, particularly in the past two decades, have done a lot to shift how scientists think about noncoding DNA and RNA, Sisu said. Although these noncoding sequences don’t carry protein information, they are sometimes shaped by evolution to different ends. As a result, the functions of the various classes of “junk” — insofar as they have functions — are getting clearer.

Cells use some of their noncoding DNA to create a diverse menagerie of RNA molecules that regulate or assist with protein production in various ways. The catalog of these molecules keeps expanding, with small nuclear RNAs, microRNAs, small interfering RNAs and many more. Some are short segments, typically less than two dozen base pairs long, while others are an order of magnitude longer. Some exist as double strands or fold back on themselves in hairpin loops. But all of them can bind selectively to a target, such as a messenger RNA transcript, to either promote or inhibit its translation into protein.

These RNAs can have substantial effects on an organism’s well-being. Experimental shutdowns of certain microRNAs in mice, for instance, have induced disorders ranging from tremors to liver dysfunction.

By far the biggest category of noncoding DNA in the genomes of humans and many other organisms consists of transposons, segments of DNA that can change their location within a genome. These “jumping genes” have a propensity to make many copies of themselves — sometimes hundreds of thousands — throughout the genome, says Seth Cheetham, a geneticist at the University of Queensland in Australia. Most prolific are the retrotransposons, which spread efficiently by making RNA copies of themselves that convert back into DNA at another place in the genome. About half of the human genome is made up of transposons; in some maize plants, that figure climbs to about 90%.

Noncoding DNA also shows up within the genes of humans and other eukaryotes (organisms with complex cells) in the intron sequences that interrupt the protein-encoding exon sequences. When genes are transcribed, the exon RNA gets spliced together into mRNAs, while much of the intron RNA is discarded. But some of the intron RNA can get turned into small RNAs that are involved in protein production. Why eukaryotes have introns is an open question, but researchers suspect that introns help accelerate gene evolution by making it easier for exons to be reshuffled into new combinations.

A large and variable portion of the noncoding DNA in genomes consists of highly repeated sequences of assorted lengths. The telomeres capping the ends of chromosomes, for example, consist largely of these. It seems likely that the repeats help to maintain the integrity of chromosomes (the shortening of telomeres through the loss of repeats is linked to aging). But many of the repeats in cells serve no known purpose, and they can be gained and lost during evolution, seemingly without ill effects.


One category of noncoding DNA that intrigues many scientists these days is the pseudogenes, which are usually viewed as the remnants of working genes that were accidentally duplicated and then degraded through mutation. As long as one copy of the original gene works, natural selection may exert little pressure to keep the redundant copy intact.

Akin to broken genes, pseudogenes might seem like quintessential genomic junk. But Cheetham warns that some pseudogenes may not be “pseudo” at all. Many of them, he says, were presumed to be defective copies of recognized genes and labeled as pseudogenes without experimental evidence that they weren’t functional.

Pseudogenes can also evolve new functions. “Sometimes they can actually control the activity of the gene from which they were copied,” Cheetham said, if their RNA is similar enough to that of the working gene to interact with it. Sisu notes that the discovery in 2010 that the PTENP1 pseudogene had found a second life as an RNA regulating tumor growth convinced many researchers to look more closely at pseudogene junk.

Because dynamic noncoding sequences can produce so many genomic changes, the sequences can be both the engine for the evolution of new genes and the raw material for it. Researchers have found an example of this in the ERVW-1 gene, which encodes a protein essential to the development of the placenta in Old World monkeys, apes and humans. The gene arose from a retroviral infection in an ancestral primate about 25 million years ago, hitching a ride on a retrotransposon into the animal’s genome. The retrotransposon “basically co-opted this element, jumping around the genome, and actually turned that into something that’s really crucial for the way that humans develop,” Cheetham said.

But how much of this DNA therefore qualifies as true “junk” in the sense that it serves no useful purpose for a cell? This is hotly debated. In 2012, the Encyclopedia of DNA Elements (Encode) research project announced its findings that about 80% of the human genome seemed to be transcribed or otherwise biochemically active and might therefore be functional. However, this conclusion was widely disputed by scientists who pointed out that DNA can be transcribed for many reasons that have nothing to do with biological utility.

Alexander Palazzo of the University of Toronto and T. Ryan Gregory of the University of Guelph have described several lines of evidence — including evolutionary considerations and genome size  — that strongly suggest “eukaryotic genomes are filled with junk DNA that is transcribed at a low level.” Dan Graur of the University of Houston has argued that because of mutations, less than a quarter of the human genome can have an evolutionarily preserved function. Those ideas are still consistent with the evidence that the “selfish” activities of transposons, for example, can be consequential for the evolution of their hosts.

Cheetham thinks that dogma about “junk DNA” has weighed down inquiry into the question of how much of it deserves that description. “It’s basically discouraged people from even finding out whether there is a function or not,” he said. On the other hand, because of improved sequencing and other methods, “we’re in a golden age of understanding noncoding DNA and noncoding RNA,” said Zhaolei Zhang, a geneticist at the University of Toronto who studies the role of the sequences in some diseases.

In the future, researchers may be less and less inclined to describe any of the noncoding sequences as junk because there are so many other more precise ways of labeling them now. For Sisu, the field’s best way forward is to keep an open mind when assessing the eccentricities of noncoding DNA and RNA and their biological importance. People should “take a step back and realize that one person’s trash is another person’s treasure,” she said.


The Cosmological Principle seems to be false

 

New Evidence against the Standard Model of Cosmology

Sabine Hossenfelder


http://backreaction.blogspot.com/2021/09/new-evidence-against-standard-model-of.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+blogspot%2Fermku+%28Backreaction%29

 

[[I remember when I read Steven Weinberg's The First Three Minutes in the 70s, where he was careful to say that everything we know about the universe as a whole depends upon the Cosmological Principle, and Stephen Hawking said we believe it on the grounds of humility [!]. And now there is significant evidence that it is false! Keep that in mind when you read the next “discovery” reported in the NYT.]]

 

Physicists believe they understand quite well how the universe works on large scales. There’s dark matter and there’s dark energy, and there’s the expansion of the universe that allows matter to cool and clump and form galaxies. The key assumption of this model of the universe is the cosmological principle, according to which the universe is approximately the same everywhere. But a growing number of observations show that the universe just isn’t the same everywhere. What are those observations? Why are they a problem? And what does it mean? That’s what we’ll talk about today.

 

Let’s begin with the cosmological principle, the idea that the universe looks the same everywhere. Well. Of course the universe does not look the same everywhere. There’s more matter under your feet than above your head and more matter in the Milky Way than in intergalactic space, and so on. Physicists have noticed that too, so the cosmological principle more precisely says that matter in the universe is equally distributed when you average over sufficiently large distances.

 

To see what this means, forget about matter for a moment and suppose you have a row of detectors and they measure, say, temperature. Each detector gives you a somewhat different temperature, but you can average over those detectors by taking a few of them at a time, let’s say five, calculating the average value from the readings of those five detectors, and replacing the values of the individual detectors with their average value. You can then ask how far away this averaged distribution is from one that’s the same everywhere. In this example it’s pretty close.

 

But suppose you have a different, clumpier distribution. If you average over sets of five detectors again, the result still does not look the same everywhere. Now, if you average over all detectors, then of course the average is the same everywhere. So if you want to know how close a distribution is to being uniform, you average it over increasingly large distances and ask from what distance on it’s very similar to just being the same everywhere.
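[[A minimal Python sketch of the coarse-graining idea described above (my own illustration, not from the video): average the readings over blocks of increasing size and find the smallest block size at which every block mean lies within 1% of the global mean. The detector values and the 1% tolerance are arbitrary assumptions.]]

```python
import numpy as np

def block_average(values, block):
    """Replace each run of `block` readings with its mean."""
    n = len(values) // block * block            # drop any ragged tail
    return values[:n].reshape(-1, block).mean(axis=1)

def uniformity_scale(values, tolerance=0.01):
    """Smallest block size at which every block mean lies within
    `tolerance` (relative) of the global mean, or None if none does."""
    mean = values.mean()
    for block in range(1, len(values) // 2 + 1):
        averaged = block_average(values, block)
        if np.all(np.abs(averaged - mean) <= tolerance * abs(mean)):
            return block
    return None

rng = np.random.default_rng(0)
readings = 20.0 + 0.5 * rng.standard_normal(1000)   # noisy but on-average-uniform "temperatures"
print(uniformity_scale(readings))                   # first averaging scale that looks uniform to 1%
```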

 

In cosmology we don’t want to average over temperatures; we want to average over the density of matter. On short scales, which for cosmologists is something like the size of the Milky Way, matter clearly is not uniformly distributed. If we average over the whole universe, then the average is uniform, but that’s uninteresting. What we want to know is: if we average over increasingly large distances, at what distance does the distribution of matter become uniform to good accuracy?

 

Yes, good question. One can calculate this distance using the concordance model, which is the currently accepted standard model of cosmology. It’s also often called ΛCDM, where Λ is the cosmological constant and CDM stands for cold dark matter. The distance at which the cosmological principle should be a good approximation to the real distribution of matter was calculated from the concordance model in a 2010 paper by Hunt and Sarkar.

 

They found that the deviations from a uniform distribution fall below one part in a hundred from an averaging distance of about 200-300 Mpc on. 300 megaparsecs is about one billion light years. And just to give you a sense of scale, our distance to the next closest galaxy, Andromeda, is about two and a half million light years. A billion light years is huge. But from that distance on at the latest, the cosmological principle should be fulfilled to good accuracy – if the concordance model is correct.
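[[A quick check of that conversion, my own arithmetic, using roughly 3.26 million light years per megaparsec:]]

```python
# Rough unit conversion for the scale quoted above.
LY_PER_MPC = 3.26e6       # light years per megaparsec (approximate)
print(300 * LY_PER_MPC)   # ≈ 9.8e8 light years, i.e. roughly one billion
```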

 

One problem with the cosmological principle is that astrophysicists have on occasion assumed it is valid already on shorter distances, down to about 100 Megaparsec. This is an unjustified assumption, but it has for example entered the analysis of supernovae data from which the existence of dark energy was inferred. And yes, that’s what the Nobel Prize in physics was awarded for in 2011.

 

Two years ago, I told you about a paper by Subir Sarkar and his colleagues, that showed if one analyses the supernovae data correctly, without assuming that the cosmological principle holds on too short distances, then the evidence for dark energy disappears. That paper has been almost entirely ignored by other scientists. Check out my earlier video for more about that.

 

Today I want to tell you about another problem with the cosmological principle. As I said, one can calculate the scale from which on it should be valid from the standard model of cosmology. Beyond that scale, the universe should look pretty much the same everywhere. This means in particular there shouldn’t be any clumps of matter on scales larger than about a billion light years. But. Astrophysicists keep on finding those.

 

Already in 1991, they found the Clowes-Campusano quasar group, a collection of 34 quasars about 9.5 billion light years away from us that extends over two billion light years, clearly too large to be compatible with the prediction from the concordance model.

 

Since 2003, astrophysicists have known of the “Great Wall,” a collection of galaxies about a billion light years away from us that extends over 1.5 billion light years. That, too, is larger than it should be.

 

Then there’s the “Huge quasar group,” which is… huge. It spans a whopping four billion light years. And just in July, Alexia Lopez discovered the “Giant Arc,” a collection of galaxies, galaxy clusters, gas and dust that spans three billion light years.

 

Theoretically, these structures shouldn’t exist. It can happen that such clumps appear coincidentally in the concordance model. That’s because this model uses an initial distribution of matter in the early universe with random fluctuations. So it could happen that you end up with a big clump somewhere just by chance. But you can calculate the probability for that to happen. The Giant Arc alone has a probability of less than one in a hundred thousand of having come about by chance. And that doesn’t factor in all the other big structures.

 

What does it mean? It means the evidence is mounting that the cosmological principle is a bad assumption to develop a model for the entire universe and it probably has to go. It increasingly looks like we live in a region in the universe that happens to have a significantly lower density than the average in the visible universe. This area of underdensity which we live in has been called the “local hole”, and it has a diameter of at least 600 million light years. This is the finding of a recent paper by a group of astrophysicists from Durham in the UK.

 

They also point out that if we live in a local hole then this means that the local value of the Hubble rate must be corrected down. This would be good news because currently measurements for the local value of the Hubble rate are in conflict with the value from the early universe. And that discrepancy has been one of the biggest headaches in cosmology in the past years. Giving up the cosmological principle could solve that problem.

 

However, the finding in that paper from the Durham group is only a mild tension with the concordance model, at about three sigma, which is not highly statistically significant. But Sarkar and his group had another paper recently in which they do a consistency check on the concordance model and find a conflict at 4.9 sigma, that is, a less than one in a million chance of it being a coincidence.
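[[For readers who want to translate those sigma values into probabilities, a small check of my own, assuming Gaussian statistics and a two-sided test; the papers' exact statistics may differ:]]

```python
from scipy.stats import norm

for sigma in (3.0, 4.9):
    p = 2 * norm.sf(sigma)                    # two-sided Gaussian tail probability
    print(f"{sigma} sigma -> p ≈ {p:.1e}")
# 3.0 sigma -> p ≈ 2.7e-03  (a "mild tension")
# 4.9 sigma -> p ≈ 9.6e-07  (less than one in a million)
```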

 

This works as follows. If we measure the temperature of the cosmic microwave background, it appears hotter in the direction in which we move relative to it. This gives rise to the so-called CMB dipole. You can measure this dipole. You can also measure the dipole by inferring our motion from observations of quasars. If the concordance model were right, the direction and magnitude of the two dipoles should be the same. But they are not. You see this in this figure from Sarkar’s paper. The star is the location of the CMB dipole, the triangle that of the quasar dipole. In this figure you see how far away from the CMB expectation the quasar result is.
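[[For context, the textbook first-order relation behind the dipole is dT ≈ (v/c) T0 cos(theta); plugging in the measured dipole amplitude of about 3.36 mK gives the usual inferred solar-system velocity. This is standard background, not a result from Sarkar's paper.]]

```python
# Standard first-order Doppler relation: dT ≈ (v/c) * T0, with dT the dipole amplitude.
C_KM_S = 299_792      # speed of light, km/s
T0_K   = 2.725        # mean CMB temperature, K
DT_K   = 3.36e-3      # measured CMB dipole amplitude, K (≈ 3.36 mK)

v = C_KM_S * DT_K / T0_K
print(round(v))       # ≈ 370 km/s, the usual inferred solar-system velocity
```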

 

These recent developments make me think that in the next ten years or so, we will see a major paradigm shift in cosmology, where the current standard model will be replaced with another one. Just what the new model will be, and if it will still have dark energy, I don’t know.


Thursday, August 26, 2021

The extreme weakness of fMRI brain analysis

 

Seventy Teams of Scientists Analysed the Same Brain Data, and It Went Badly

What the latest fMRI “crisis” means for the rest of science


https://medium.com/the-spike/seventy-teams-of-scientists-analysed-the-same-brain-data-and-it-went-badly-e0d96c23dbf4 

Mark Humphries


To the outside observer, it can seem that fMRI research careens from one public crisis to another. Detecting brain activity in a dead salmon. Impossibly high correlations between brain activity and behaviour. Serious flaws in fMRI analysis software leading to claims of tens of thousands of papers being (partially) wrong. Finding wildly different active brain regions from the same set of fMRI scans by just varying the parameters in standard analysis pipelines. “Oof” says The Rest of Science, “glad that’s not us.”

And now a paper in Nature shows that a big group of experts all looking at the same brain imaging data agree on almost nothing.

But within it is a warning for all of neuroscience, and beyond.

The group behind the Nature paper set a simple challenge: they asked teams of volunteers to each take the same set of fMRI scans from 108 people doing a decision-making task, and use them to test nine hypotheses of how brain activity would change during the task. Their goal was simply to test how many teams agreed on which hypotheses had significant evidence and which did not. The Neuroimaging Analysis Replication Study (NARPS) was born.

The task was simple too, cutting down on the complexity of the analysis. Lying in the scanner, you’d be shown the two potential outcomes of a coin-flip: if it comes up heads, you’d lose $X; if tails, you’d win $Y. Your decision is whether to accept or reject that gamble; accept it and the (virtual) coin is flipped, and your winnings adjusted accordingly. The clever bit is that the difference between the loss and win amount is varied on every trial, testing your tolerance for losing. And if you’re like most people, you have a strong aversion to losing, so will only regularly accept gambles where you could win at least twice as much as you lose.

From this simple task sprung those nine hypotheses, equally simple. Eight about how activity in a broad region of the brain should go up or down in response to wins or losses; one a comparison of changes within a brain region during wins and losses. And pretty broad regions of the brain too — a big chunk of the prefrontal cortex, the whole striatum, and the whole amygdala. Simple task, simple hypotheses, unmissably big chunks of brain — simple to get the same answer, right? Wrong.

Seventy teams stepped up to take the data and test the nine hypotheses. Of the nine, only one (Hypothesis 5) was reported as significant by more than 80% of the teams. Three were reported as significant by only about 5% of the teams, about as much as we’d expect by chance using classical statistics, so could be charitably interpreted as showing the hypotheses were not true. Which left five hypotheses in limbo, with between 20% and 35% of teams reporting a significant effect for each. Nine hypotheses: one agreed as correct; three rejected; five in limbo. Not a great scorecard for 70 teams looking at the same data.

Even worse were the predictions of how many teams would support each hypothesis. Whether made by the teams themselves, or by a group of experts not taking part, the predictions were wildly over-optimistic. The worst offender (hypothesis 2) was supported by the results of only about 25% of the teams, but its predicted support was about 75%. So not only did the teams not agree on what was true, they also couldn’t predict what was true and was not.

What then was it about the analysis pipelines used by the teams that led to big disagreements in which hypotheses were supported and which were not? The NARPS group could find little that systematically differed between them. One detectable effect was how smooth the teams made their brain maps — the more they were smoothed by averaging close-together brain bits, the more likely the team would find significant evidence for a hypothesis. But this smoothing effect only accounted for (roughly) 4% of variance in the outcomes, leaving 96% unaccounted for.

Whatever it was that differed between the teams, it came after the stage where each team built their initial statistical map of the brain’s activity, maps of which tiny cube of brain — each voxel — passed some test of significance. We know this because the teams’ initial statistical maps of brain activity correlated quite well with each other. So the NARPS people took a consensus of these maps across the groups, and claimed clear support for four of the hypotheses (numbers 2, 4, 5 and 6). Great: so all we need do to provide robust answers for every fMRI study is have 70 teams create maps from the same data then merge them together to find the answer. Let’s all watch the science funders line up behind that idea.

(And then some wag will run a study that tests if different teams will get the same answers from the same merged map, and around we go again).

Sarcasm aside, that is not the answer. Because the results from that consensus map did not agree with the actual results of the teams. The teams found hypotheses 1 and 3 to be significant equally as often as 2, 4 and 5, but hypothesis 1 and 3 were not well supported by the consensus map. So the polling of the teams provided different answers to the consensus of their maps. Which then are the supported hypotheses? At the end, we’re still none the wiser.

Some take pleasure in fMRI’s problems, and would add this NARPS paper to a long list of reasons not to take fMRI research seriously. But that would be folly.

Some of fMRI’s crises are more hype than substance. Finding activity in the brain of a dead salmon was not to show fMRI was broken, but was a teaching tool — an example of what could go wrong if for some reason you didn’t make the essential corrections for noise when analysing fMRI data, corrections that are built into neuroimaging analysis pipelines precisely so that you don’t find brain activity in a dead animal or, in one anecdotal case relayed to me, outside the skull. Those absurdly high “voodoo” correlations arise from double-dipping: first select out the most active voxels, and then correlate stuff only with them. The wrong thing to do, but fMRI research is hardly the only discipline that does double-dipping. And the much-ballyhooed software error turned out to maybe affect some of the results in a few hundred studies; but nonetheless was a warning to all to take care.
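[[A toy simulation of the double-dipping problem, my own sketch, not from any of the papers mentioned; the subject and voxel counts are arbitrary. With pure noise, selecting the voxels most correlated with behaviour and then reporting the correlation of those same voxels yields impressively large values.]]

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 5000
activity  = rng.standard_normal((n_subjects, n_voxels))   # fake voxel activity, pure noise
behaviour = rng.standard_normal(n_subjects)                # fake behavioural scores, pure noise

# correlation of every voxel with behaviour
r = np.array([np.corrcoef(activity[:, v], behaviour)[0, 1] for v in range(n_voxels)])

top = np.argsort(np.abs(r))[-10:]    # step 1: select the "most active" voxels...
print(np.abs(r[top]).mean())         # step 2: ...report their correlation: roughly 0.6-0.7,
                                     # even though there is no true effect anywhere
```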

Everyone finds it inherently fascinating to see the activity deep within a living human brain. So fMRI studies are endlessly in the public eye, the media plastering coloured doodles of brains into their breathless reporting. But fMRI is a young field, so its growing pains are public too. Another “crisis” just broke — that when you re-scan the same person, the map of brain activity you get will likely differ quite a lot from the original scan. One crisis at a time please, fMRI. But this public coverage is not in proportion to its problems: there’s nothing special about its problems.

The fMRI analysis pipeline is fiercely complex. This is common knowledge. And because it’s common knowledge, many fMRI researchers look closely at the robustness of how fMRI data is analysed — at errors in correcting the maps of brain activity, about what happens if we don’t correct, about robustness of results to choices of how the analysis is setup, about robustness of results to having different scientists trying to obtain them. Rather than crises, one could equally interpret the above list — dead salmon, voodoo correlations and all — as a sign that fMRI is tackling its inevitable problems head on. And it just happens that they have to do it in public.

The NARPS paper ends with the warning that “although the present investigation was limited to the analysis of a single fMRI dataset, it seems highly likely that similar variability will be present for other fields of research in which the data are high-dimensional and the analysis workflows are complex and varied”.

The Rest of Science: “Do they mean us?”

Yes, they mean you. These crises should give any of us working on data from complex pipelines pause for serious thought. There is nothing unique to fMRI about the issues they raise. Other areas of neuroscience are just as bad. We can do issues of poor data collection: studies using too few subjects plague other areas of neuroscience just as much as neuroimaging. We can do absurdly high correlations too; for one thing if you use a tiny number of subjects then the correlations have to be absurdly high to pass as “significant”; for another most studies of neuron “function” are as double-dipped as fMRI studies, only analysing neurons that already passed some threshold for being tuned to the stimulus or movement studied. We can do dead salmon: without corrections for signal bleed (from the neuropil), calcium imaging can find neural activity outside of a neuron’s body. We can even do a version of this NARPS study, reaching wildly different conclusions about neural activity by varying the analysis pipelines applied to the same data-set. And the dark art of spike-sorting is, well, a dark art, with all that entails about the reliability of the findings that stem from the spikes (one solution might be: don’t sort them).
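[[On the point about tiny samples: the smallest Pearson correlation that clears p < 0.05 grows sharply as the number of subjects shrinks. A quick calculation of my own, two-sided test:]]

```python
import numpy as np
from scipy.stats import t

def critical_r(n, alpha=0.05):
    """Smallest |Pearson r| reaching two-sided p < alpha with n subjects."""
    tcrit = t.ppf(1 - alpha / 2, df=n - 2)
    return tcrit / np.sqrt(n - 2 + tcrit**2)

for n in (5, 10, 20, 100):
    print(n, round(critical_r(n), 2))   # 5 -> 0.88, 10 -> 0.63, 20 -> 0.44, 100 -> 0.2
```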

I come neither to praise fMRI nor to bury it. It’s a miraculous technology, but it comes with deep limitations for anyone interested in how neurons do what they do — it records blood flow, slowly, at the resolution of millions of neurons. But all the above are crises of technique, of analysis, of statistics. They are likely common to many fields, and we should be so lucky that our field’s issues are not played out as publicly as those of fMRI. Indeed, in striving to sort out its own house, where fMRI research has gone, others should follow.



Thursday, July 29, 2021

Climate is very poorly understood




[[It cannot be emphasized enough that climate is an enormously complicated result of very many factors, many of which are not well understood, let alone their combined effects. Here is a very recent example.]]

A Soil-Science Revolution Upends Plans to Fight Climate Change

https://mail.google.com/mail/u/0/?zx=8nj49iv5nqad#search/quanta/FMfcgzGkZZpQrbrmcDWLGmgCHFnVpxNq

A centuries-old concept in soil science has recently been thrown out. Yet it remains a key ingredient in everything from climate models to advanced carbon-capture projects.

One teaspoon of healthy soil contains more bacteria, fungi and other microbes than there are humans on Earth. Those hungry organisms can make soil a difficult place to store carbon over long periods of time.



Gabriel Popkin

Contributing Writer


Quanta Magazine


July 27, 2021







The hope was that the soil might save us. With civilization continuing to pump ever-increasing amounts of carbon dioxide into the atmosphere, perhaps plants — nature’s carbon scrubbers — might be able to package up some of that excess carbon and bury it underground for centuries or longer.

That hope has fueled increasingly ambitious climate change–mitigation plans. Researchers at the Salk Institute, for example, hope to bioengineer plants whose roots will churn out huge amounts of a carbon-rich, cork-like substance called suberin. Even after the plant dies, the thinking goes, the carbon in the suberin should stay buried for centuries. This Harnessing Plants Initiative is perhaps the brightest star in a crowded firmament of climate change solutions based on the brown stuff beneath our feet.

Such plans depend critically on the existence of large, stable, carbon-rich molecules that can last hundreds or thousands of years underground. Such molecules, collectively called humus, have long been a keystone of soil science; major agricultural practices and sophisticated climate models are built on them.

But over the past 10 years or so, soil science has undergone a quiet revolution, akin to what would happen if, in physics, relativity or quantum mechanics were overthrown. Except in this case, almost nobody has heard about it — including many who hope soils can rescue the climate. “There are a lot of people who are interested in sequestration who haven’t caught up yet,” said Margaret Torn, a soil scientist at Lawrence Berkeley National Laboratory.

A new generation of soil studies powered by modern microscopes and imaging technologies has revealed that whatever humus is, it is not the long-lasting substance scientists believed it to be. Soil researchers have concluded that even the largest, most complex molecules can be quickly devoured by soil’s abundant and voracious microbes. The magic molecule you can just stick in the soil and expect to stay there may not exist.





Artificially colored scanning electron micrograph images of soils from the island of Hawai’i.

Thiago Inagaki, in collaboration with Lena Kourkoutis, Angela Possinger and Johannes Lehmann

“I have The Nature and Properties of Soils in front of me — the standard textbook,” said Gregg Sanford, a soil researcher at the University of Wisconsin, Madison. “The theory of soil organic carbon accumulation that’s in that textbook has been proven mostly false … and we’re still teaching it.”

The consequences go far beyond carbon sequestration strategies. Major climate models such as those produced by the Intergovernmental Panel on Climate Change are based on this outdated understanding of soil. Several recent studies indicate that those models are underestimating the total amount of carbon that will be released from soil in a warming climate. In addition, computer models that predict the greenhouse gas impacts of farming practices — predictions that are being used in carbon markets — are probably overly optimistic about soil’s ability to trap and hold on to carbon.

It may still be possible to store carbon underground long term. Indeed, radioactive dating measurements suggest that some amount of carbon can stay in the soil for centuries. But until soil scientists build a new paradigm to replace the old — a process now underway — no one will fully understand why.

The Death of Humus

Soil doesn’t give up its secrets easily. Its constituents are tiny, varied and outrageously numerous. At a bare minimum, it consists of minerals, decaying organic matter, air, water, and enormously complex ecosystems of microorganisms. One teaspoon of healthy soil contains more bacteria, fungi and other microbes than there are humans on Earth.


The fine hairs surrounding roots are covered in hungry bacteria; soils slightly further away from the roots may have an order of magnitude fewer microbes.

Courtesy of Jennifer Pett-Ridge and Erin Nuccio

The German biologist Franz Karl Achard was an early pioneer in making sense of the chaos. In a seminal 1786 study, he used alkalis to extract molecules made of long carbon chains from peat soils. Over the centuries, scientists came to believe that such long chains, collectively called humus, constituted a large pool of soil carbon that resists decomposition and pretty much just sits there. A smaller fraction consisting of shorter molecules was thought to feed microbes, which respired carbon dioxide to the atmosphere.

This view was occasionally challenged, but by the mid-20th century, the humus paradigm was “the only game in town,” said Johannes Lehmann, a soil scientist at Cornell University. Farmers were instructed to adopt practices that were supposed to build humus. Indeed, the existence of humus is probably one of the few soil science facts that many non-scientists could recite.

What helped break humus’s hold on soil science was physics. In the second half of the 20th century, powerful new microscopes and techniques such as nuclear magnetic resonance and X-ray spectroscopy allowed soil scientists for the first time to peer directly into soil and see what was there, rather than pull things out and then look at them.

What they found — or, more specifically, what they didn’t find — was shocking: there were few or no long “recalcitrant” carbon molecules — the kind that don’t break down. Almost everything seemed to be small and, in principle, digestible.

“We don’t see any molecules in soil that are so recalcitrant that they can’t be broken down,” said Jennifer Pett-Ridge, a soil scientist at Lawrence Livermore National Laboratory. “Microbes will learn to break anything down — even really nasty chemicals.”

Lehmann, whose studies using advanced microscopy and spectroscopy were among the first to reveal the absence of humus, has become the concept’s debunker-in-chief. A 2015 Nature paper he co-authored states that “the available evidence does not support the formation of large-molecular-size and persistent ‘humic substances’ in soils.” In 2019, he gave a talk with a slide containing a mock death announcement for “our friend, the concept of Humus.”

Over the past decade or so, most soil scientists have come to accept this view. Yes, soil is enormously varied. And it contains a lot of carbon. But there’s no carbon in soil that can’t, in principle, be broken down by microorganisms and released into the atmosphere. The latest edition of The Nature and Properties of Soils, published in 2016, cites Lehmann’s 2015 paper and acknowledges that “our understanding of the nature and genesis of soil humus has advanced greatly since the turn of the century, requiring that some long-accepted concepts be revised or abandoned.”

Old ideas, however, can be very recalcitrant. Few outside the field of soil science have heard of humus’s demise.

Buried Promises

At the same time that soil scientists were rediscovering what exactly soil is, climate researchers were revealing that increasing amounts of carbon dioxide in the atmosphere were rapidly warming the climate, with potentially catastrophic consequences.

Thoughts soon turned to using soil as a giant carbon sink. Soils contain enormous amounts of carbon — more carbon than in Earth’s atmosphere and all its vegetation combined. And while certain practices such as plowing can stir up that carbon — farming, over human history, has released an estimated 133 billion metric tons of carbon into the atmosphere — soils can also take up carbon, as plants die and their roots decompose.


Farming practices such as plowing can reduce the amount of carbon stored in soil.

Scientists began to suggest that we might be able to coax large volumes of atmospheric carbon back into the soil to dampen or even reverse the damage of climate change.

In practice, this has proved difficult. An early idea to increase carbon stores — planting crops without tilling the soil — has mostly fallen flat. When farmers skipped the tilling and instead drilled seeds into the ground, carbon stores grew in upper soil layers, but they disappeared from lower layers. Most experts now believe that the practice redistributes carbon within the soil rather than increases it, though it can improve other factors such as water quality and soil health.

Efforts like the Harnessing Plants Initiative represent something like soil carbon sequestration 2.0: a more direct intervention to essentially jam a bunch of carbon into the ground.

The initiative emerged when a team of scientists at the Salk Institute came up with an idea: Create plants whose roots produce an excess of carbon-rich molecules. By their calculations, if grown widely, such plants might sequester up to 20% of the excess carbon dioxide that humans add to the atmosphere every year.

The Salk scientists zeroed in on a complex, cork-like molecule called suberin, which is produced by many plant roots. Studies from the 1990s and 2000s had hinted that suberin and similar molecules could resist decomposition in soil.




With flashy marketing, the Harnessing Plants Initiative gained attention. An initial round of fundraising in 2019 brought in over $35 million. Last year, the multibillionaire Jeff Bezos contributed $30 million from his “Earth Fund.”

But as the project gained momentum, it attracted doubters. One group of researchers noted in 2016 that no one had actually observed the suberin decomposition process. When those authors did the relevant experiment, they found that much of the suberin decayed quickly.

In 2019, Joanne Chory, a plant geneticist and one of the Harnessing Plant Initiative’s project leaders, described the project at a TED conference. Asmeret Asefaw Berhe, a soil scientist at the University of California, Merced, who spoke at the same conference, pointed out to Chory that according to modern soil science, suberin, like any carbon-containing compound, should break down in soil. (Berhe, who has been nominated to lead the U.S. Department of Energy’s Office of Science, declined an interview request.)

Around the same time, Hanna Poffenbarger, a soil researcher at the University of Kentucky, made a similar comment after hearing Wolfgang Busch, the other project leader, speak at a workshop. “You should really get some soil scientists on board, because the assumption that we can breed for more recalcitrant roots — that may not be valid,” Poffenbarger recalls telling Busch.

Questions about the project surfaced publicly earlier this year, when Jonathan Sanderman, a soil scientist at the Woodwell Climate Research Center in Woods Hole, Massachusetts, tweeted, “I thought the soil biogeochem community had moved on from the idea that there is a magical recalcitrant plant compound. Am I missing some important new literature on suberin?” Another soil scientist responded, “Nope, the literature suggests that suberin will be broken down just like every other organic plant component. I’ve never understood why the @salkinstitute has based their Harnessing Plant Initiative on this premise.”

Busch, in an interview, acknowledged that “there is no unbreakable biomolecule.” But, citing published papers on suberin’s resistance to decomposition, he said, “We are still very optimistic when it comes to suberin.”

He also noted a second initiative Salk researchers are pursuing in parallel to enhancing suberin. They are trying to design plants with longer roots that could deposit carbon deeper in soil. Independent experts such as Sanderman agree that carbon tends to stick around longer in deeper soil layers, putting that solution on potentially firmer conceptual ground.

Chory and Busch have also launched collaborations with Berhe and Poffenbarger, respectively. Poffenbarger, for example, will analyze how soil samples containing suberin-rich plant roots change under different environmental conditions. But even those studies won’t answer questions about how long suberin sticks around, Poffenbarger said — important if the goal is to keep carbon out of the atmosphere long enough to make a dent in global warming.

Beyond the Salk project, momentum and money are flowing toward other climate projects that would rely on long-term carbon sequestration and storage in soils. In an April speech to Congress, for example, President Biden suggested paying farmers to plant cover crops, which are grown not for harvest but to nurture the soil in between plantings of cash crops. Evidence suggests that when cover crop roots break down, some of their carbon stays in the soil — although as with suberin, how long it lasts is an open question.

Not Enough Bugs in the Code

Recalcitrant carbon may also be warping climate prediction.

In the 1960s, scientists began writing large, complex computer programs to predict the global climate’s future. Because soil both takes up and releases carbon dioxide, climate models attempted to take into account soil’s interactions with the atmosphere. But the global climate is fantastically complex, and to enable the programs to run on the machines of the time, simplifications were necessary. For soil, scientists made a big one: They ignored microbes in the soil entirely. Instead, they basically divided soil carbon into short-term and long-term pools, in accordance with the humus paradigm.

More recent generations of models, including ones that the Intergovernmental Panel on Climate Change uses for its widely read reports, are essentially palimpsests built on earlier ones, said Torn. They still assume soil carbon exists in long-term and short-term pools. As a consequence, these models may be overestimating how much carbon will stick around in soils and underestimating how much carbon dioxide they will emit.

Last summer, a study published in Nature examined how much carbon dioxide was released when researchers artificially warmed the soil in a Panamanian rainforest to mimic the long-term effects of climate change. They found that the warmed soil released 55% more carbon than nearby unwarmed areas — a much larger release than predicted by most climate models. The researchers think that microbes in the soil grow more active at the warmer temperatures, leading to the increase.



The study was especially disheartening because most of the world’s soil carbon is in the tropics and the northern boreal zone. Despite this, leading soil models are calibrated to the results of soil studies in temperate regions such as the U.S. and Europe, where most studies have historically been done. “We’re doing pretty bad in high latitudes and the tropics,” said Lehmann.

Even temperate climate models need improvement. Torn and colleagues reported earlier this year that, contrary to predictions, deep soil layers in a California forest released roughly a third of their carbon when warmed for five years.

Ultimately, Torn said, models need to represent soil as something closer to what it actually is: a complex, three-dimensional environment governed by a hyper-diverse community of carbon-gobbling bacteria, fungi and other microscopic beings. But even smaller steps would be welcome. Just adding microbes as a single class would be major progress for most models, she said.

Fertile Ground

If the humus paradigm is coming to an end, the question becomes: What will replace it?

One important and long-overlooked factor appears to be the three-dimensional structure of the soil environment. Scientists describe soil as a world unto itself, with the equivalent of continents, oceans and mountain ranges. This complex microgeography determines where microbes such as bacteria and fungi can go and where they can’t; what food they can gain access to and what is off limits.

A soil bacterium “may be only 10 microns away from a big chunk of organic matter that I’m sure they would love to degrade, but it’s on the other side of a cluster of minerals,” said Pett-Ridge. “It’s literally as if it’s on the other side of the planet.”



Another related, and poorly understood, ingredient in a new soil paradigm is the fate of carbon within the soil. Researchers now believe that almost all organic material that enters soil will get digested by microbes. “Now it’s really clear that soil organic matter is just this loose assemblage of plant matter in varying degrees of degradation,” said Sanderman. Some will then be respired into the atmosphere as carbon dioxide. What remains could be eaten by another microbe — and a third, and so on. Or it could bind to a bit of clay or get trapped inside a soil aggregate: a porous clump of particles that, from a microbe’s point of view, could be as large as a city and as impenetrable as a fortress. Studies of carbon isotopes have shown that a lot of carbon can stick around in soil for centuries or even longer. If humus isn’t doing the stabilizing, perhaps minerals and aggregates are.

Before soil science settles on a new theory, there will doubtless be more surprises. One may have been delivered recently by a group of researchers at Princeton University who constructed a simplified artificial soil using microfluidic devices — essentially, tiny plastic channels for moving around bits of fluid and cells. The researchers found that carbon they put inside an aggregate made of bits of clay was protected from bacteria. But when they added a digestive enzyme, the carbon was freed from the aggregate and quickly gobbled up. “To our surprise, no one had drawn this connection between enzymes, bacteria and trapped carbon,” said Howard Stone, an engineer who led the study.


Lehmann is pushing to replace the old dichotomy of stable and unstable carbon with a “soil continuum model” of carbon in progressive stages of decomposition. But this model and others like it are far from complete, and at this point, more conceptual than mathematically predictive.

Researchers agree that soil science is in the midst of a classic paradigm shift. What nobody knows is exactly where the field will land — what will be written in the next edition of the textbook. “We’re going through a conceptual revolution,” said Mark Bradford, a soil scientist at Yale University. “We haven’t really got a new cathedral yet. We have a whole bunch of churches that have popped up.”

Correction: July 28, 2021

This article was revised to credit a team at Salk with the idea of using suberin-enriched plants to sequester carbon in soil.