Tuesday, August 28, 2018
Dads Pass On More Than Genetics in Their Sperm
Seminal research reveals that sperm change their cargo as they travel the reproductive tract—and the differences can have consequences for fertility
Katherine J. Wu
https://www.smithsonianmag.com/science-nature/dads-pass-more-genetics-their-sperm-180969760/
smithsonian.com
July 26, 2018
Eat poorly, and your body will remember—and possibly pass the consequences on to your kids. In the past several years, mounting evidence has shown that sperm can take note of a father’s lifestyle decisions and transfer this baggage to offspring. Today, in two complementary studies, scientists tell us how.
As sperm traverse the male reproductive system, they jettison and acquire non-genetic cargo that fundamentally alters sperm before ejaculation. These modifications not only communicate the father’s current state of wellbeing, but can also have drastic consequences on the viability of future offspring.
Each year, over 76,000 children are born as a result of assisted reproduction techniques, the majority of which involve some type of in vitro fertilization (IVF). These procedures unite egg and sperm outside the human body, then transfer the resulting fertilized egg—the embryo—into a woman’s uterus. Multiple variations on IVF exist, but in some cases that involve male infertility—for instance, sperm that struggle to swim—sperm must be surgically extracted from the testes or epididymis, a lengthy, convoluted duct that cradles each testis.
After sperm are produced in the testes, they embark on a harrowing journey through the winding epididymis—which, in a human male, is about six meters long when unfurled—on their way to storage. Sperm wander the epididymis for about two weeks; only at the end of this path are they fully motile. Thus, while “mature” sperm can essentially be dumped on a waiting egg and be reasonably expected to achieve fertilization, sperm plucked from the testes and epididymis must be injected directly into the egg with a very fine needle. No matter the source of the sperm, these techniques have birthed healthy infants in four decades of successful procedures.
But scientists know genes are not the whole package. Over the course of a single lifetime, our genomes stay as they were originally written. However, how, when and why genetic instructions are followed can drastically differ without altering the manual itself—much like fiddling with the volume on a speaker without touching the wiring within. This phenomenon, called “epigenetics,” helps explain why genetically identical individuals in similar environments, such as twins or laboratory mice, can still look and act in very different ways. And things like diet or stress are capable of cranking our genes’ volume up and down.
One of the most powerful members of the epigenetic toolkit is a class of molecules called small RNAs. Small RNAs can conceal genetic information from the cellular machinery that carries out their instructions, effectively ghosting genes out of existence.
The legacy of a dad’s behavior can even live on in his child if his epigenetic elements enter an embryo. For instance, mice born to fathers that experience stress can inherit the behavioral consequences of traumatic memories. Additionally, mouse dads with less-than-desirable diets can pass a wonky metabolism onto their kids.
Upasna Sharma and Colin Conine, both working under Oliver Rando, a professor of biochemistry at the University of Massachusetts Medical School, were some of the researchers to report such findings in 2016. In their work, Sharma and Conine noted that, in mice, while immature testicular sperm contain DNA identical to that of mature sperm, immature sperm relay different epigenetic information. It turns out that sperm small RNAs undergo post-testes turnover, picking up intel on the father’s physical health (or lack thereof) after they’re manufactured, but before they exit the body. However, the exact pit stop at which these additional small RNAs hitch a ride remained unknown.
To solve the mystery, Sharma, who led the first of the two new studies, decided to track the composition of small RNAs within mouse sperm as they fled the testes and cruised through the epididymis. She and her colleagues isolated sperm of several different ages from mice, including those about to emerge from the testes, those entering the early part of the epididymis and those in the late part of the epididymis. Sharma was surprised to find that many small RNAs seemed to be discarded or destroyed upon entering the early epididymis; then, the newly vacated sperm reacquired epigenetic intel that reflected the father’s state of being, boasting a full set by the time they left the late epididymis.
There was only one possible source for the small RNA reacquisition: the cells of the epididymis—which meant that cells outside of the sperm were transmitting information into future generations.
“[The epididymis] is the least studied organ in the body,” says Rando, who was senior author on both papers. “And it turns out this tube that no one ever thinks about plays a central role in reproduction.”
To confirm that the epididymis was the culprit, Sharma’s team added a chemical marker to a set of small RNAs in the epididymis and tracked their migration. As they suspected, tiny shipments of RNAs popped off of cells in the epididymis and fused with the sperm. Each stealthy swimmer then bore these epigenetic elements all the way to its final union with the egg.
It seemed that sperm at different points along the reproductive tract had the same genetics, but not the same epigenetics. Was this difference big enough to matter? Colin Conine, who led the second of the two new studies, next tested if using immature sperm would have noticeable effects on the offspring of mice. He and his colleagues extracted sperm from the testes, early epididymis and late epididymis and injected them into eggs. All three types of sperm were able to fertilize eggs. However, when Conine transferred the resulting embryos into mouse surrogates, none derived from early epididymal sperm—the intermediate stage devoid of most small RNAs—implanted in the uterus. The least and most mature sperm of the bunch were winners—but somehow, those in the middle were burning out, even though all their genes were intact.
This was baffling to all involved. “This intermediate broken stage was really stunning,” says Rando.
At first, the researchers wondered if they had somehow isolated junky sperm doomed to be cleared from the early epididymis before reaching the ejaculate. But this didn’t seem to be the case: all three types of sperm could fertilize eggs. The only other explanation was that the defect was temporary. If this was the case, then perhaps, if fed the right small RNAs, the early epididymal sperm could be rescued.
In her work, Sharma had noted that while the epigenetic cargo of testicular sperm and late epididymal sperm differ vastly, they had a few groups in common—but these small RNAs were evicted from sperm as they entered the epididymis, then reacquired from the cells along the meandering duct. Though bookended by success, the early epididymal flop was the only stage that lacked these elements—and the only stage incapable of generating an implantable embryo.
To test if these particular small RNAs were the key to fertility, the researchers pulled small RNAs out of the late epididymis and injected them into embryos fertilized with early epididymal sperm. To their amazement, these embryos not only implanted, but also yielded mouse pups—indistinguishable from embryos fertilized by late epididymal sperm. The early epididymal sperm was defective, but not irreversibly so. This hinted that the deficiency wasn’t a fluke, but a normal part of the journey through the epididymal labyrinth. In other words, on the path to maturation, males were breaking sperm, then repairing the damage.
“It’s very bizarre to see them lose [viability] and gain it back,” says Sharma. And the utility of this back-and-forth remains entirely enigmatic. But whatever the reason, it’s clear that sperm vary enormously along the length of the reproductive tract.
Mollie Manier, a professor who studies sperm genetics at George Washington University and was not affiliated with the study, praised the rigorous nature of this “very exciting” research. “These papers really add to our understanding of [how] dads can pass non-genetic information onto their kids,” she explains. According to Heidi Fisher, a professor who studies sperm at the University of Maryland and also did not participate in the research, these “elegantly designed” experiments may also shed light on how problems with the epididymis could cause otherwise unexplained cases of male infertility.
In their future work, Rando’s group will continue to study the mouse pups generated from sperm of various ages, keeping a close lookout for any long-term issues in their health. The team also hopes to pinpoint which small RNAs are directly responsible for successful implantation—and why sperm enter this bewildering period of incompetence.
“There’s a lot of inheritance that we haven’t yet explained,” says Conine. “But animals are not just their DNA.” However, Conine cautions that different doesn’t always mean worse. Testicular and epididymal sperm from humans have helped, and continue to help, thousands around the world conceive children.
This comes with a small caveat. It wasn’t until 1978 that the first baby was successfully born of an IVF procedure—and though thousands have followed since, this generation is still young. As of yet, there’s no reason to suspect any negative consequences of in vitro versus natural conception; as this population ages, researchers will continue to keep close tabs. Since the majority of IVF procedures are performed with mature sperm that have cleared the late epididymis, Rando is not concerned.
And, in the unlikely case that there are repercussions to using testicular or epididymal sperm in these procedures, Rando remains hopeful that future work will enable scientists to restore the necessary information immature sperm might lack. Someday, addressing epigenetics may be key to enhancing assisted reproduction technology—and ensuring that sperm are as mature as they come.
Sunday, August 26, 2018
Photographer behind viral image of starving polar bear raises questions about climate change narrative
The narrative behind the viral photo of a polar bear starving, reportedly thanks to climate change, has been called into question by the National Geographic photographer who took it in the first place.
In an article for the August issue of National Geographic titled “Starving-Polar-Bear Photographer Recalls What Went Wrong,” Cristina Mittermeier talks about the intended message of the image versus the message that was received.
“We had lost control of the narrative,” she said.
“Photographer Paul Nicklen and I are on a mission to capture images that communicate the urgency of climate change. Documenting its effects on wildlife hasn’t been easy,” she wrote in the article. “With this image, we thought we had found a way to help people imagine what the future of climate change might look like. We were, perhaps, naive. The picture went viral — and people took it literally.”
The image she is referencing shows an emaciated polar bear with hardly any fur covering its bony frame. In a video that was also taken of the bear, it can be seen slowly moving through the terrain, rummaging through an empty can.
Mittermeier goes on to say that it was the language put out by the publication that led to the message being misconstrued.
“The first line of the National Geographic video said, ‘This is what climate change looks like’ — with ‘climate change’ then highlighted in the brand’s distinctive yellow. In retrospect, National Geographic went too far with the caption.”
She estimated that 2.5 billion people saw the footage: “It became the most viewed video on National Geographic’s website — ever,” she said.
From there, social media and news outlets erupted over the message that was being portrayed.
Some experts suggested a number of reasons besides climate change that could’ve led to the animal’s condition, including age, illness or even injury.
Mittermeier admits that she couldn’t “say that this bear was starving because of climate change.”
“Perhaps we made a mistake in not telling the full story — that we were looking for a picture that foretold the future and that we didn’t know what had happened to this particular polar bear.”
The photographer says that her image became another example of “environmentalist exaggeration,” but added that her intentions were “clear” and that if she had the opportunity to share “a scene like this one” again, she would.
Sunday, August 19, 2018
Book Recommendations:
There Is a God: How the World's Most Notorious Atheist Changed His Mind
https://www.amazon.com/s/ref=nb_sb_ss_i_1_11?url=search-alias%3Dstripbooks&field-keywords=antony+flew+there+is+a+god&sprefix=antony+flew%2Caps%2C326&crid=3ESIV8WAO30EK
This book is very well written for the general reader. In addition to giving the reasons for his change of mind, Flew describes the arguments he used in the past to defend atheism and their weaknesses. This covers some of the continuing reasons presented today to defend atheism.
Lost in Math: How Beauty Leads Physics Astray
https://www.amazon.com/s/ref=nb_sb_ss_i_2_12?url=search-alias%3Dstripbooks&field-keywords=hossenfelder+lost+in+math&sprefix=hossenfelder%2Cstripbooks%2C403&crid=7JF0Q1N04YL0&rh=n%3A283155%2Ck%3Ahossenfelder+lost+in+math
With a Ph.D. in theoretical physics and published research, the author is clearly qualified to criticize her own field. She also has strong philosophical understanding [though I did disagree with her a few times]. She also applies some of the sociological critique of science to current theories. For the most part the book is written in non-technical language and is very engaging. Skipping the few technical parts will not hurt the overall effect of the book.
The Character of Consciousness
https://www.amazon.com/s/ref=nb_sb_ss_i_6_8?url=search-alias%3Dstripbooks&field-keywords=chalmers+consciousness&sprefix=chalmers%2Cstripbooks%2C1663&crid=38R3WNVE8RQTZ&rh=n%3A283155%2Ck%3Achalmers+consciousness
For those interested in the mind-body problem, this is the most complete exposition of the anti-materialist point of view. Chalmers is one of the very best contemporary philosophers. His writings are characterized by a complete survey of the field, detailed answers to his critics, and extreme clarity. The book is written for philosophers, but skimming it can give a sense of the issues, and it is a great resource to quote in debate.
Monday, August 13, 2018
Biblical critics on "Across the Jordan"
I wrote about this problem briefly here: https://www.dovidgottlieb.com/comments/Who_Wrote_The_Bible.htm
Now Rabbi Zvi Lampel has done a much more complete job - reproduced below with permission.
Eyver HaYarden
The first verse Bible critics (such as Spinoza) invoke to allegedly prove that the Torah was written after Moshe passed away is the first verse of Devarim: These are the words that Moshe spoke...b’Eyver HaYarden. Now, they reason, Moshe would not have referred to the eastern side of the Jordan as “the other side” of it, or the Transjordan, because that is where he was! (I suspect the critics were using a translation that, in order to be helpful, translated Eyver HaYarden as the Transjordan, which refers specifically to the eastern side.) Only someone stationed on the western side of the Jordan, they reason, would refer to the eastern side, where Moshe was, as the other side of the Jordan. So it must have been written by someone after the Hebrews entered Canaan proper, and since Moshe never entered the land, he could not have authored that narrative.
Now, if this were solid reasoning, based on a tad of biblical scholarship, it might serve as support for Chazal. They condemn the idea that Moshe, rather than Hashem, authored the Torah. Hashem above, being Eretz-Yisroel-proper centric, could refer to Moshe’s position on the eastern side of the Jordan as “the other side of the Jordan” even though that was the side Moshe was on.
But it is not solid reasoning, and it demonstrates a lack of biblical scholarship.
The reasoning is loose, because the Hebrews had been living in Canaan and Egypt for centuries. They could be expected to have long labeled the east side of the Jordan as “the other side,” because both Canaan and Egypt are to the Jordan’s west, and they would likely maintain that name even when temporarily situated on that eastern side. After all, one refers to Chutz LaAretz regardless of whether he is in Israel or not, and one refers to the Lower East Side as such regardless of where he lives.
On literary grounds, Devarim 3:20 demonstrates the silliness of the argument. There, Moshe--who is of course on the eastern side of the Jordan--nevertheless refers to the 2-1/2 tribes on that same east side of the Jordan as dwelling b’Eyver HaYarden. And a mere four verses later (3:25) he relates beseeching Hashem, Let me pass and see the good land in the Eyver HaYarden. So Eyver HaYarden was used by the same person in the same place to describe either side of the Jordan.
Indeed, there are several other passages where one stationed to the east of the Jordan is still quoted as referring to it as the Eyver HaYarden, and vice versa. Likewise in narratives, Eyver HaYarden is used for either side. For there was an Eyver HaYarden (Kaydmah) Mizrachah, and an Eyver HaYarden Maaravah.
Examples:
Moshe, on the eastern side of the Jordan, refers to it as Eyver HaYarden: Bamidbar 32:19 (specifying Eyver HaYarden Mizrachah), Bamidbar 34:15 (Eyver HaYarden Kaydmah Mizrachah, although this may be the narrative), Devarim 1:8, and of course Devarim 3:20, noted above.
As noted above, in Devarim 3:25, Moshe, standing on the eastern side of the Jordan, refers to the western side as Eyver HaYarden. In sefer Yehoshua, Yehoshua, on the western side of the Jordan, calls the eastern side of the Jordan Eyver HaYarden (Yehoshua 1:14), and then in 9:1 refers to the western side by that name.
The narrative also calls the western side of the Jordan Eyver HaYarden: Breishis 50:10 (where Yosef’s family travelled west from Egypt to the Eyver HaYarden of Canaan to bury him. Will the critics claim the narrator must have lived on the eastern side to have called it the Eyver HaYarden?!), and of course Devarim 1:1 does the same, as does Devarim 11:30 (which may either be the narrative or Moshe speaking).
Zvi Lampel
Sunday, July 15, 2018
[[From my look at her writing she is a scientist who is exceptionally well informed in philosophy.]]
http://backreaction.blogspot.com/
Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples for such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.
Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up on neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.
As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.
But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.
Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was made clear already in a 1994 paper by Anderson and Castano.
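The prior-dependence described above is easy to demonstrate numerically. The following sketch is my own illustration, not taken from the post or from the Anderson and Castano paper: it uses a Monte Carlo estimate of how often two parameters nearly cancel, under two different priors, neither of which the data can justify.

```python
import random

def cancellation_prob(sampler, eps=1e-3, trials=200_000):
    """Monte Carlo estimate of P(|a + b| < eps) when (a, b) is drawn from `sampler`."""
    hits = sum(1 for _ in range(trials) if abs(sum(sampler())) < eps)
    return hits / trials

def uniform_prior():
    # Independent parameters, uniform on [-1, 1]: the implicit
    # "almost uniform over an interval of size one" assumption.
    return random.uniform(-1, 1), random.uniform(-1, 1)

def correlated_prior():
    # A prior in which the second parameter tracks the negative of the
    # first, so a near-cancellation is typical rather than a coincidence.
    a = random.uniform(-1, 1)
    return a, -a + random.gauss(0, 1e-3)

print(cancellation_prob(uniform_prior))     # small: the cancellation looks "finely tuned"
print(cancellation_prob(correlated_prior))  # large: the same observation looks generic
```

Under the uniform prior the near-cancellation is rare, so a naturalness argument would call it finely tuned; under the correlated prior the very same observation is typical. Since we only ever observe one set of parameters, nothing in the data tells us which prior to use, which is exactly the problem the post describes.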
The standard model of particle physics, or the mass of the Higgs-boson more specifically, is unnatural in the above described way, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.
The Large Hadron Collider (LHC), like several previous experiments, has not found any evidence for supersymmetric particles. This means that, according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently have no reason to think that a larger particle collider would produce so-far unknown particles.
In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example of the unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, now that the LHC has ruled it out, but I could have used other examples.
A lot of physicists, for example, believe that experiments have ruled out hidden-variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden-variable models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and with modified gravity). Yes, the devil’s in the details.
What is remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you explain to them why they’re wrong. Examples like these leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.
I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.
Naturalness is an interesting case to keep an eye on. That’s because the LHC now has delivered data that shows the idea was wrong – none of the predictions for supersymmetric particles, or extra dimensions, or tiny black holes, and so on, came true. One possible way for particle physicists to deal with the situation is to amend criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.
That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much resonance.
I was therefore excited to see that James Wells has a new paper on the arXiv:
Naturalness, Extra-Empirical Theory Assessments, and the Implications of Skepticism
James D. Wells
arXiv:1806.07289 [physics.hist-ph]
In his paper, Wells lays out the problems with the missing probability distribution in several simple examples. And in contrast to me, Wells isn’t a no-one; he’s a well-known US-American particle physicist and a professor at the University of Michigan.
So, now that a man has said it, I hope physicists will listen.
Wednesday, July 11, 2018
Slime Molds Remember — but Do They Learn?
Evidence mounts that organisms without nervous systems can in some sense learn and solve problems, but researchers disagree about whether this is “primitive cognition.”
https://www.quantamagazine.org/slime-molds-remember-but-do-they-learn-20180709/
[[Absolutely fascinating, and shows how little we understand.]]
Despite its single-celled simplicity and lack of a nervous system, the slime mold Physarum polycephalum may be capable of an elementary form of learning, according to some suggestive experimental results.
Audrey Dussutour, CNRS
July 9, 2018
Slime molds are among the world’s strangest organisms. Long mistaken for fungi, they are now classed as a type of amoeba. As single-celled organisms, they have neither neurons nor brains. Yet for about a decade, scientists have debated whether slime molds have the capacity to learn about their environments and adjust their behavior accordingly.
For Audrey Dussutour, a biologist at France’s National Center for Scientific Research and a team leader at the Research Center on Animal Cognition at Université Paul Sabatier in Toulouse, that debate is over. Her group not only taught slime molds to ignore noxious substances that they would normally avoid, but demonstrated that the organisms could remember this behavior after a year of physiologically disruptive enforced sleep. But do these results prove that slime molds — and perhaps a wide range of other organisms that lack brains — can exhibit a form of primitive cognition?
Slime molds are relatively easy
to study, as protozoa go. They are macroscopic organisms that can be easily
manipulated and observed. There are more than 900 species of slime mold; some
live as single-celled organisms most of the time, but come together in a swarm
to forage and procreate when food is short. Others, so-called plasmodial slime
molds, always live as one huge cell containing thousands of nuclei. Most
importantly, slime molds can be taught new tricks; depending on the species,
they may not like caffeine, salt or strong light, but they can learn that no-go
areas marked with these are not as bad as they seem, a process known as
habituation.
“By classical definitions of habituation,
this primitive unicellular organism is learning, just as animals with brains
do,” said Chris Reid, a behavioral
biologist at Macquarie University in Australia. “As slime molds don’t have any
neurons, the mechanisms of the learning process must be completely different;
however, the outcome and functional significance are the same.”
For Dussutour, “that such
organisms have the capacity to learn has considerable implications beyond
recognizing learning in nonneural systems.” She believes that slime molds may
help scientists to understand when and where in the tree of life the earliest
manifestations of learning evolved.
Even more intriguingly, and
perhaps controversially, research by Dussutour and others suggests that slime
molds can transfer their acquired memories from cell to cell, said František Baluška, a plant
cell biologist at the University of Bonn. “This is extremely exciting for our
understanding of much larger organisms such as animals, humans and plants.”
A History of Habituation
Studies of the behavior of
primitive organisms go all the way back to the late 1800s, when Charles Darwin
and his son Francis proposed that in plants, the very tips of their roots (a
small region called the root apex) could act as their brains. Herbert Spencer
Jennings, an influential zoologist and early geneticist, made the same argument
in his seminal 1906 book Behavior of the Lower Organisms.
However, the notion that
single-celled organisms can learn something and retain their memory of it at
the cellular level is new and controversial. Traditionally, scientists have
directly linked the phenomenon of learning to the existence of a nervous
system. A number of people, Dussutour said, thought that her research “was a
terrible waste of time and that I would reach a dead end.”
Audrey Dussutour, a biologist who
studies animal cognition and the plasticity of organisms at France’s National
Center for Scientific Research, holds a dish of cultured slime mold. She
believes that such organisms might clarify how learning first evolved.
She
started studying the slimy blobs by putting herself “in the position of the
slime mold,” she said — wondering what it would need to learn about its
environment to survive and thrive. Slime molds crawl slowly, and they can
easily find themselves stuck in environments that are too dry, salty or acidic.
Dussutour wondered if slime molds could get used to uncomfortable conditions,
and she came up with a way to test their habituation abilities.
Habituation is not just
adaptation; it’s considered to be the simplest form of learning. It refers to
how an organism responds when it encounters the same conditions repeatedly, and
whether it can filter out a stimulus that it has realized is irrelevant. For
humans, a classic example of habituation is that we stop noticing the sensation
of our clothes against our skin moments after we put them on. We can similarly
stop noticing many unpleasant smells or background sounds, especially if they
are unchanging, when they are unimportant to our survival. For us and for other
animals, this form of learning is made possible by the networks of neurons in
our nervous systems that detect and process the stimuli and mediate our
responses. But how could habituation happen in unicellular organisms without
neurons?
Starting in 2015, Dussutour and
her team obtained samples of slime molds from colleagues at Hakodate University
in Japan and tested their ability to habituate. The researchers set up pieces
of slime mold in the lab and placed dishes of oatmeal, one of the organism’s
favorite foods, a short distance away. To reach the oatmeal, the slime molds
had to grow across gelatin bridges laced with either caffeine or quinine,
harmless but bitter chemicals that the organisms are known to avoid.
“In the first experiment, the
slime molds took 10 hours to cross the bridge and they really tried not to
touch it,” Dussutour said. After two days, the slime molds began to ignore the
bitter substance, and after six days each group stopped responding to the deterrent.
The habituation that the slime
molds had learned was specific to the substance: Slime molds that had
habituated to caffeine were still reluctant to cross a bridge containing
quinine, and vice versa. This showed that the organisms had learned to
recognize a particular stimulus and to adjust their response to it, and not to
push across bridges indiscriminately.
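The stimulus-specific habituation described above can be illustrated with a toy model (purely illustrative, and not a model of any real slime-mold chemistry): the avoidance response to a repeatedly encountered stimulus decays with each exposure, while the response to a stimulus the organism has never met stays at full strength.

```python
# Toy model of habituation: repeated exposure to one stimulus weakens the
# avoidance response to it, while the response to a novel stimulus stays
# strong. (Hypothetical sketch -- not the mechanism in Physarum.)

def habituate(responses, stimulus, decay=0.5):
    """Weaken the stored avoidance response to `stimulus` on each exposure."""
    responses[stimulus] = responses.get(stimulus, 1.0) * decay
    return responses[stimulus]

responses = {}  # stimulus -> current avoidance strength (1.0 = full avoidance)
for day in range(6):  # six days of caffeine exposure, as in the experiment
    habituate(responses, "caffeine")

print(responses["caffeine"])        # avoidance of caffeine has faded
print(responses.get("quinine", 1.0))  # quinine is novel: still fully avoided
```

The point of the sketch is the specificity: because each stimulus carries its own state, habituating to caffeine leaves the quinine response untouched, matching the crossed-bridge result.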
In experiments conducted by
Dussutour’s team, disks of yellow slime mold (at bottom) can eat plates of
oatmeal (at top) — but only if they cross gelatinous bridges (at center) laced
with noxious but harmless compounds. Here, the middle slime mold sample has
learned to disregard the chemicals, a process called habituation.
Finally,
the scientists let the slime molds rest for two days in situations where they
were exposed to neither quinine nor caffeine, and then tested them with the
noxious bridges again. “We saw that they recover — as they show avoidance
again,” Dussutour said. The slime molds had gone back to their original
behavior.
Of course, organisms can adapt to
environmental changes in ways that don’t necessarily imply learning. But
Dussutour’s work suggests that the slime molds can sometimes pick up these
behaviors through a form of communication, not just through experience.
In a follow-up study, her team showed that “naïve,”
non-habituated slime molds can directly acquire a learned behavior from
habituated ones via cell fusion.
Unlike complex multicellular
organisms, slime molds can be cut into many pieces; once they’re put back
together, they fuse and make a single giant slime mold, with veinlike tubes
filled with fast-flowing cytoplasm forming between pieces as they connect.
Dussutour cut her slime molds into more than 4,000 pieces and trained half of
them with salt — another substance that the organisms dislike, though not as
strongly as quinine and caffeine. The team fused the assorted pieces in various
combinations, mixing slime molds habituated to salt with non-habituated ones.
They then tested the new entities.
“We showed that when there was
one habituated slime mold in the entity that we were forming, the entity was
showing habituation,” she said. “So one slime mold would transfer this
habituated response to the other.” The researchers then separated the different
molds again after three hours — the time it took for all the veins of cytoplasm
to form properly — and both parts still showed habituation. The organism had
learned.
Hints of Primitive Cognition
But Dussutour wanted to push
further and see whether that habituating memory could be recalled in the long
term. So she and her team put the blobs to sleep for a year by drying them up
in a controlled manner. In March, they woke up the blobs — which found
themselves surrounded by salt. The non-habituated slime molds died, perhaps
from osmotic shock because they could not cope with how rapidly moisture leaked
out of their cells. “We lost a lot of slime molds like that,” Dussutour said.
“But habituated ones survived.” They also quickly started extending out
across their salty surroundings to hunt for food.
What that means, according to
Dussutour, who described this unpublished work at a scientific meeting in April
at the University of Bremen in Germany, is that a slime mold can learn — and it
can keep that knowledge during dormancy, despite the extensive physical and
biochemical changes in the cells that accompany that transformation. Being able
to remember where to find food is a useful skill for a slime mold to have in
the wild, because its environment can be treacherous. “It’s very good it can
habituate, otherwise it’d be stuck,” Dussutour said.
More fundamentally, she said,
this result also means that there is such a thing as “primitive cognition,” a
form of cognition that is not restricted to organisms with a brain.
Scientists have no idea what
mechanism underpins this kind of cognition. Baluška thinks that a number of
processes and molecules might be involved, and that they may vary among simple
organisms. In the case of slime molds, their cytoskeleton may form smart, complex
networks able to process sensory information. “They feed this information up to
the nuclei,” he said.
It’s not just slime molds that
may be able to learn. Researchers are investigating other nonneural organisms,
such as plants, to discover whether they can display the most basic form of
learning. For example, in 2014 Monica Gagliano and
her colleagues at the University of Western Australia and the University of
Firenze in Italy published a paper that
caused a media frenzy, on experiments with Mimosa pudica plants. Mimosa plants
are famously sensitive to being touched or otherwise physically disturbed: They
immediately curl up their delicate leaves as a defense mechanism. Gagliano
built a mechanism that would abruptly drop the plants by about a foot without
harming them. At first, the plants would retract and curl their leaves when
they were dropped. But after a while, the plants stopped reacting — they
seemingly “learned” that no defensive response was necessary.
Slime molds are highly efficient
at exploring their environment and making use of the resources they find there.
Researchers have harnessed this ability to solve mazes and other problems under
controlled conditions.
Traditionally,
simple organisms without brains or neurons were thought to be capable of simple
stimulus-response behavior at most. Research into the behavior of protozoa such
as the slime mold Physarum polycephalum (especially the work
of Toshiyuki Nakagaki at
Hokkaido University in Japan) suggests that these seemingly simple organisms are capable of complex decision-making and
problem-solving within their environments. Nakagaki and his
colleagues have shown, for example, that slime molds are capable of solving maze problems and laying out distribution networks as
efficient as ones designed by humans (in one famous result, slime molds recreated
the Tokyo rail system).
Chris Reid and his
colleague Simon Garnier, who heads
the Swarm Lab at the New Jersey Institute of Technology, are working on the
mechanism behind how a slime mold transfers information between all of its
parts to act as a kind of collective that mimics the capabilities of a brain
full of neurons. Each tiny part of the slime mold contracts and expands over
the course of about one minute, but the contraction rate is linked to the
quality of the local environment. Attractive stimuli cause faster pulsations,
while negative stimuli cause the pulsations to slow. Each pulsing part also
influences the pulsing frequency of its neighbors, not unlike the way the
firing rates of linked neurons influence one another. Using computer vision
techniques and experiments that might be likened to a slime mold version of an
MRI brain scan, the researchers are examining how the slime mold uses this
mechanism to transfer information around its giant unicellular body and make
complex decisions between conflicting stimuli.
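The coupling just described — each patch pulsing at a rate set by local conditions, and each patch nudging its neighbors' rates — can be sketched as a ring of coupled oscillators. This is a hypothetical illustration of the principle, not the Swarm Lab's actual model:

```python
# Toy sketch of coupled pulsation in a slime mold. Each patch has a pulse
# rate set by local stimulus quality; each step nudges a patch's rate
# toward the mean of its two neighbors. A patch near food pulses faster
# and gradually drags its neighbors' rates up -- a crude analog of local
# information spreading through the whole unicellular body.
# (Hypothetical illustration; not the model used by Reid and Garnier.)

def step(rates, coupling=0.2):
    """One update: each patch moves toward the average of its neighbors."""
    n = len(rates)
    return [r + coupling * ((rates[(i - 1) % n] + rates[(i + 1) % n]) / 2 - r)
            for i, r in enumerate(rates)]

rates = [1.0] * 10   # baseline pulses per minute for 10 patches in a ring
rates[0] = 2.0       # patch 0 senses an attractive stimulus and speeds up

for _ in range(50):
    rates = step(rates)

# After many steps, the elevated rate has diffused around the ring and
# every patch pulses faster than baseline:
print(min(rates), max(rates))
```

Because each update is a convex combination of neighboring rates, the information injected at one patch spreads without any central controller — loosely analogous to how coupled firing rates propagate signals in a network of neurons.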
Fighting to Keep Brains Special
But some mainstream biologists
and neuroscientists are critical of the results. “Neuroscientists are objecting
to the ‘devaluing’ of the specialness of the brain,” said Michael Levin, a biologist at Tufts University.
“Brains are great, but we have to remember where they came from. Neurons
evolved from nonneural cells, they did not magically appear.”
Some biologists also object “to
the idea that cells can have goals, memories and so on, because it sounds like
magic,” he added. But we have to remember, he said, that work on control
theory, cybernetics, artificial intelligence and machine learning over the last
century or so has shown that mechanistic systems can have goals and make
decisions. “Computer science long ago learned that information processing is
substrate-independent,” Levin said. “It’s not about what you’re made of, it’s
about how you compute.”
It all depends on how one defines
learning, according to John Smythies, the director of the Laboratory for
Integrative Neuroscience at the University of California, San Diego. He is not
persuaded that Dussutour’s experiment with slime molds staying habituated to
salt after extended dormancy shows much. “‘Learning’ implies behavior and dying
is not that!” he said.
To Fred Keijzer, a cognitive scientist at the
University of Groningen in the Netherlands, the question of whether these
interesting behaviors show that slime molds can learn is similar to the debate
over whether Pluto is a planet: The answer depends as much on how the concept
of learning is cast as on the empirical evidence. Still, he said, “I do not see
any clear-cut scientific reasons for denying the option that nonneural
organisms can actually learn.”
Baluška said that many
researchers also fiercely disagree about whether plants can have memories,
learning and cognition. Plants are still considered to be “zombielike automata
rather than full-blown living organisms,” he said.
But the common perception is
slowly changing. “In plants, we started the plant neurobiology initiative in
2005, and although still not accepted by the mainstream, we already changed it
so much that terms like plant signaling, communication and behavior are
more or less accepted now,” he said.
The debate is arguably not a war
about the science, but about words. “Most neuroscientists I have talked to
about slime mold intelligence are quite happy to accept that the experiments
are valid and show similar functional outcomes to the same experiments
performed on animals with brains,” Reid said. What they seem to take issue with
is the use of terms traditionally reserved for psychology and neuroscience and
almost universally associated with brains, such as learning, memory and
intelligence. “Slime mold researchers insist that functionally equivalent
behavior observed in the slime mold should use the same descriptive terms as
for brained animals, while classical neuroscientists insist that the very
definition of learning and intelligence requires a neuron-based system.”
Baluška said that as a result,
it’s not that easy to get grants for primitive-cognition studies. “The most
important issue is that grant agencies and funding bodies will start to support
such project proposals. Until now, the mainstream science, despite a few
exceptions, is rather reluctant in this respect, which is a real pity.”
To gain mainstream recognition,
researchers of primitive cognition will have to demonstrate habituation to a
broad range of stimuli, and — most importantly — determine the exact mechanisms
by which habituation is achieved and how it can be transferred between single
cells, Reid said. “This mechanism must be quite different to that observed in
brains, but the similarities in functional outcomes make the comparison
extremely interesting.”