Thursday, January 30, 2020

Wise Oysters, Galloping Sea Stars, and More: Biological Marvels Keep Coming
January 28, 2020, 12:34 PM


Strong theories in science require fewer auxiliary hypotheses when new discoveries come to light. Design advocates can gain confidence when discoveries continue to illustrate the core principles of intelligent design, like irreducible complexity, meaningful information, and hierarchical design, while undermining the blind, gradualistic principles of Darwinian evolution. Here are some recent illustrations.
“Pearls of Wisdom”
That’s the headline on news from the Okinawa Institute of Science and Technology, where the only thing said about evolution is that “From a genetic and evolutionary perspective, scientists have known little about the source of these pearls” in the Japanese pearl oyster, Pinctada fucata. By implication, don’t look for pearls of wisdom from evolutionary theory. The research published in Evolutionary Applications only concerns genetic variations within the species and the geographic distributions of isolated populations. If it helps conserve these oysters with their magnificent mother-of-pearl nacre — the envy of materials scientists — well, it’s wise to keep jewelry makers in business. Design scores as evolution fumbles.
Flight Feathers
Another level of design has been uncovered in bird feathers. In Science Magazine, Matloff et al. discuss “How flight feathers stick together to form a continuous morphing wing.” Pigeon and dove wing feathers spread out from their folded position into beautiful fans, as most people know. But how do birds prevent gaps from opening up between individual wing feathers? The team found a combination of factors at work. 
Birds can dynamically alter the shape of their wings during flight, although how this is accomplished is poorly understood. Matloff et al. found that two mechanisms control the movement of the individual feathers. Whenever the skeleton moves, the feathers are redistributed passively through compliance of the elastic connective tissue at the feather base. To prevent the feathers from spreading too far apart, hook-shaped microstructures on adjacent feathers form a directional fastener that locks adjacent feathers.
Notice that the muscles, bones, and connective tissue inside the skin work in synergy with the exterior hooks on the wings. Using a robot mimic, the team found that (1) the muscles for each feather keep the angle just right to spread them into a fan arrangement, and (2) the barbules snap together quickly to create a lightweight, flexible surface without breaks. The barbules can quickly detach like the hook-and-loop materials we are all familiar with.
This clarifies the function of the thousands of fastening barbules on the underlapping flight feathers; they lock probabilistically with the tens to hundreds of hooked rami of the overlapping flight feather and form a feather-separation end stop. The emergent properties of the interfeather fastener are not only probabilistic like bur fruit hooks, which inspired Velcro, but also highly directional like gecko feet setae — a combination that has not been observed before.
Rapid opening and closing of wings makes a bit of noise, much as Velcro does, explaining the din when a flock of geese takes off. Interestingly, the researchers found that night flyers like owls, which need silent wings as they hunt, “lack the lobate cilia and hooked rami in regions of feather overlap and instead have modified barbules with elongated, thin, velvety pennulae” that produce relatively little noise. Otherwise, this amazingly complex mechanism works at scales all the way from the tiny 40-gram Cassin’s kingbird to the 9,000-gram California condor. What’s an evolutionist going to say about this ingenious mechanism? Once upon a time, a dinosaur leaped out of a tree and… died.
Distributed Running
Sea stars, seen in time-lapse videos, appear to “run” across the sea floor, bouncing as they go.
Scientists at the University of Southern California wondered how the echinoderms do it without a brain or centralized nervous system. The undersides of sea stars are covered with hundreds of “tube feet,” each of which can move autonomously. How do they engage in coordinated motion?
The answer, from researchers at the USC Viterbi School of Engineering, was recently published in the Journal of the Royal Society Interface: sea star[s] “couple a global directionality command from a ‘dominant arm’ with individual, localized responses to stimuli to achieve coordinated locomotion.” In other words, once the sea star provides an instruction on which way to move, the individual feet figure out how to achieve this on their own, without further communication.
That would be a cool strategy for robots, the engineers figure. In fact, they built a model based on sea star motion, and show both the animal and robot movement side by side in the video above. No other animal movement seems to use this strategy. 
“In the case of the sea star, the nervous system seems to rely on the physics of the interaction between the body and the environment to control locomotion. All of the tube feet are attached structurally to the sea star and thus, to each other.”
In this way, there is a mechanism for “information” to be communicated mechanically between tube feet. 
Even though one of the team members was a “professor of ecology and evolutionary biology,” he seemed to rely more on the engineers than on Darwin. 
Understanding how a distributed nervous system, like that of a sea star, achieves complex, coordinated motions could lead to advancements in areas such as robotics. In robotics systems, it is relatively straightforward to program a robot to perform repetitive tasks. However, in more complex situations where customization is required, robots face difficulties. How can robots be engineered to apply the same benefits to a more complex problem or environment?
The answer might lie in the sea star model, [Eva] Kanso said. “Using the example of a sea star, we can design controllers so that learning can happen hierarchically. There is a decentralized component for both decision-making and for communicating to a global authority. This could be useful for designing control algorithms for systems with multiple actuators, where we are delegating a lot of the control to the physics of the system — mechanical coupling — versus the input or intervention of a central controller.”
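As a deliberately crude sketch of that hierarchical idea (my own toy illustration in Python, not the controller published by the USC team), one can give many simulated tube feet a single global heading command and let the shared body average their independent, noisy pushes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (my own construction, not the paper's controller): one
# global heading command plus independent, noisy local responses from
# many "tube feet", coupled only through the shared body they push on.
n_feet = 100
heading = np.array([1.0, 0.0])  # command from the "dominant arm"
position = np.zeros(2)

for step in range(50):
    # Each foot pushes roughly along the global heading with its own
    # local variation; no foot-to-foot messaging is simulated.
    pushes = heading + 0.5 * rng.standard_normal((n_feet, 2))
    # The rigid body averages the pushes: mechanical coupling does the
    # coordination that explicit communication would otherwise do.
    position += 0.1 * pushes.mean(axis=0)

print("net displacement:", position)   # ends up close to the heading
print("commanded heading:", heading)
```

Even with substantial per-foot noise, the averaged motion tracks the commanded direction, which is the spirit of delegating control to the physics of the system rather than to a central controller.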
Once again, the search to understand a design in nature propels further research that can aid in the design of products for human flourishing.
Quickies:
Grasshoppers don’t faint when they leap. Why? Arizona State wants to know how the insects keep their heads while taking off and landing in all kinds of different orientations. Gravity should be making the blood slosh around, causing dizziness and disorientation, but it doesn’t. Apparently it has something to do with the distribution of air sacs that automatically adjust to gravity, keeping the hemolymph (insect blood) from rapidly moving about in the head and body. “Thus, similar to vertebrates, grasshoppers have mechanisms to adjust to gravitational effects on their blood,” they say.
Cows know more than their blank stares indicate. Articles from Fox News and the New York Post had fun with a “shocking study” about “cowmoooonication” published in Nature’s open-access journal Scientific Reports. Experiments with 13 Holstein heifers seem to indicate that they all know each other’s names, and can learn where food is located, and more, from each other’s “individual moos.” They regularly share “cues in certain situations and express different emotions, including excitement, arousal, engagement and distress.” Other scientists are praising young researcher Ali Green, whose 333 recordings and voice-analysis studies of moooosic are like “building a Google translate for cows.”
Design appears everywhere scientists look when they take their Darwin glasses off. For quality research that actually does some good for people, join the Uprising.
Photo credit: Japanese pearl oyster, Pinctada fucata.


Wednesday, January 29, 2020

The Real Butterfly Effect


If a butterfly flaps its wings in China today, it may cause a tornado in America next week. Most of you will be familiar with this “Butterfly Effect,” which is frequently used to illustrate a typical behavior of chaotic systems: even the smallest disturbances can grow and have big consequences.
The name “Butterfly Effect” was popularized by James Gleick in his 1987 book “Chaos” and is usually attributed to the meteorologist Edward Lorenz. But I recently learned that this is not what Lorenz actually meant by the Butterfly Effect.

I learned this from a paper by Tim Palmer, Andreas Döring, and Gregory Seregin called “The Real Butterfly Effect,” which led me to dig up Lorenz’s original paper from 1969.

Lorenz, in this paper, does not write about butterfly wings. He instead refers to a sea gull’s wings, but then attributes that to a meteorologist whose name he can’t recall. The reference to a butterfly seems to have come from a talk that Lorenz gave in 1972, which was titled “Does the Flap of a Butterfly’s Wings in Brazil set off a Tornado in Texas?”

The title of this talk was actually suggested by the session chair, a meteorologist by the name of Phil Merilees. In any case, it was the butterfly that stuck, not the sea gull. And what was the butterfly talk about? It was a summary of Lorenz’s 1969 paper. So what’s in that paper?

In that paper, Lorenz made a much stronger claim than that a chaotic system is sensitive to the initial conditions. The usual butterfly effect says that any small inaccuracy in the knowledge that you have about the initial state of the system will eventually blow up and make a large difference. But if you did precisely know the initial state, then you could precisely predict the outcome, and if only you had good enough data you could make predictions as far ahead as you like. It’s chaos, alright, but it’s still deterministic.
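To see this usual, weaker effect concretely, here is a minimal Python sketch (my own illustration, assuming NumPy and SciPy are installed; these are the familiar Lorenz-63 equations, not the model of the 1969 paper). Two trajectories starting one part in a billion apart diverge until they differ by the size of the attractor, yet each run is fully determined by its initial condition:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The classic Lorenz-63 system (sigma=10, rho=28, beta=8/3), the
# textbook example of deterministic chaos. This illustrates the
# *usual* butterfly effect, not the stronger 1969 claim.
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4001)

# Two initial states that differ by one part in a billion.
s0 = np.array([1.0, 1.0, 1.0])
s1 = s0 + np.array([1e-9, 0.0, 0.0])

sol0 = solve_ivp(lorenz, t_span, s0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol1 = solve_ivp(lorenz, t_span, s1, t_eval=t_eval, rtol=1e-10, atol=1e-12)

# The separation grows roughly exponentially until it saturates at
# the size of the attractor: the tiny error blows up, but nothing
# here is random -- the dynamics are fully deterministic.
separation = np.linalg.norm(sol0.y - sol1.y, axis=0)
for i in range(0, len(t_eval), 500):
    print(f"t = {t_eval[i]:5.1f}   |delta| = {separation[i]:.3e}")
```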

Now, in the 1969 paper, Lorenz looks at a system that has an even worse behavior. He talks about weather, so the system he considers is the Earth, but that doesn’t really matter, it could be anything. He says, let us divide up the system into pieces of equal size. In each piece we put a detector that makes a measurement of some quantity. That quantity is what you need as input to make a prediction. Say, air pressure and temperature. He further assumes that these measurements are arbitrarily accurate. Clearly unrealistic, but that’s just to make a point.

How well can you make predictions using the data from your measurements? You have data on that finite grid. But that does not mean you can generally make a good prediction on the scale of that grid, because errors will creep into your prediction from scales smaller than the grid. You expect that to happen of course because that’s chaos; the non-linearity couples all the different scales together and the error on the small scales doesn’t stay on the small scales.

But you can try to combat this error by making the grid smaller and putting in more measurement devices. For example, Lorenz says, if you have a typical grid of some thousand kilometers, you can make a prediction that’s good for, say, 5 days. After these 5 days, the errors from smaller distances screw you up. So then you go and decrease your grid length by a factor of two.

Now you have many more measurements and much more data. But, and here comes the important point: Lorenz says this may only increase the time for which you can make a good prediction by half of the original time. So now you have 5 days plus 2 and a half days. Then you can go and make your grid finer again. And again you will gain half of the time. So now you have 5 days plus 2 and half plus 1 and a quarter. And so on.

Most of you will know that if you sum up this series all the way to infinity it will converge to a finite value, in this case that’s 10 days. This means that even if you have an arbitrarily fine grid and you know the initial condition precisely, you will only be able to make predictions for a finite amount of time.
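Written out, the grid-halving argument is just a geometric series; with the numbers from Lorenz’s example the prediction horizon converges to a hard ceiling:

```latex
T_{\max} \;=\; 5 + \tfrac{5}{2} + \tfrac{5}{4} + \tfrac{5}{8} + \dots
         \;=\; \sum_{n=0}^{\infty} 5 \left(\tfrac{1}{2}\right)^{n}
         \;=\; \frac{5}{1-\tfrac{1}{2}} \;=\; 10 \ \text{days}
```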

And this is the real butterfly effect: that a chaotic system may be deterministic and yet still be unpredictable beyond a finite amount of time.

This of course raises the question whether there actually is any system that has such properties. There are differential equations which have such a behavior. But whether the real butterfly effect occurs for any equation that describes nature is unclear. The Navier-Stokes equation, which Lorenz was talking about, may or may not suffer from the “real” butterfly effect. No one knows. This is presently one of the big unsolved problems in mathematics.

However, the Navier-Stokes equation, and really any other equation for macroscopic systems, is strictly speaking only an approximation. On the most fundamental level it’s all particle physics and, ultimately, quantum mechanics. And the equations of quantum mechanics do not have butterfly effects because they are linear. Then again, no one would use quantum mechanics to predict the weather, so that’s a rather theoretical answer.

The brief summary is that even in a deterministic system predictions may only be possible for a finite amount of time and that is what Lorenz really meant by “Butterfly Effect.”


Glial Brain Cells, Long in Neurons’ Shadow, Reveal Hidden Powers
Elena Renken

[[Gee - scientists are still studying C. elegans, with its 302 neurons, and have not explained all of its function. And our brains have approximately 80 billion neurons, so understanding the human brain seems to be just a little beyond our present reach. But there are another 80 billion glial cells in the brain, and they too have a wide variety of functions. So now it is 160 billion to 302. Maybe it will take a little longer than we thought to figure out how our brains work....]]

The sting of a paper cut or the throb of a dog bite is perceived through the skin, where cells react to mechanical forces and send an electrical message to the brain. These signals were believed to originate in the naked endings of neurons that extend into the skin. But a few months ago, scientists came to the surprising realization that some of the cells essential for sensing this type of pain aren’t neurons at all. They belong to a previously overlooked type of specialized glial cell that intertwines with nerve endings to form a mesh in the outer layers of the skin. The information the glial cells send to neurons is what initiates the “ouch”: When researchers stimulated only the glial cells, mice pulled back their paws or guarded them while licking or shaking — responses specific to pain.


This discovery is only one of many recent findings showing that glia, the motley collection of cells in the nervous system that aren’t neurons, are far more important than researchers expected. Glia were long presumed to be housekeepers that only nourished, protected and swept up after the neurons, whose more obvious role of channeling electric signals through the brain and body kept them in the spotlight for centuries. But over the last couple of decades, research into glia has increased dramatically.
“In the human brain, glial cells are as abundant as neurons are. Yet we know orders of magnitude less about what they do than we know about the neurons,” said Shai Shaham, a professor of cell biology at the Rockefeller University who focuses on glia. As more scientists turn their attention to glia, findings have been piling up to reveal a family of diverse cells that are unexpectedly crucial to vital processes.
It turns out that glia perform a staggering number of functions. They help process memories. Some serve as immune system agents and ward off infection, while some communicate with neurons. Others are essential to brain development. Far from being mere valets to neurons, glia often take leading roles in protecting the brain’s health and directing its development. “Pick any question in the nervous system, and glial cells will be involved,” Shaham said.
More Than Just ‘Glue’
Glia take many forms to perform their specialized functions: Some are sheathlike, while others are spindly, bushy or star-shaped. Many tangle around neurons and form a network so dense that individual cells are hard to distinguish. To some early observers, they didn’t even look like cells — they were considered a supportive matrix within the skull. This prompted the 19th-century researcher Rudolf Virchow to dub this non-neuronal material “neuroglia,” drawing on the Greek word for glue.



In this magnified image of brain tissue, neurons (blue) are surrounded by large numbers of glial cells, including astrocytes (red) and oligodendrocytes (green).

One reason glia were given such short shrift was that when researchers first began staining nervous system tissue, their methods revealed the convoluted shapes of neurons but rendered only select glia visible. Santiago Ramón y Cajal, who is credited with the discovery of neurons and widely regarded as the founder of neuroscience, illustrated one subtype of glia but lumped the rest together as “the third element.” His focus on neurons set the stage for the burgeoning field of neuroscience but shoved the glia behind the curtains.
In addition, some glia are challenging to study because their fates are so entwined with those of neurons that it’s hard to learn about them separately. If researchers try to learn about the glia’s functions by knocking them out and observing the effects, the neurons they support will die along with them.
But the revolution in cell biology techniques in recent decades has generated an arsenal of tools offering greater access to glia, Shaham said. Advances in live imaging, fluorescent labeling and genetic manipulation are revealing the breadth of glia’s forms and functions.
Microglia Reveal Their Versatility
Several cell types are contained within the umbrella category of glia, with varied functions that are still coming to light. Oligodendrocytes and Schwann cells wrap around nerve fibers and insulate them in fatty myelin sheaths, which help to confine the electrical signals moving through neurons and speed their passage. Astrocytes, with their complex branching shapes, direct the flow of fluid in the brain, reshape the synaptic connections between neurons, and recycle the released neurotransmitter molecules that enable neurons to communicate, among other jobs.

The highly versatile microglia seem to serve a variety of functions in the brain, such as removing cellular debris and determining which synapses between neurons are unnecessary.

But the cells that have been the subjects of an especially strong spike in interest over the last decade or so are the ones called microglia.
Microglia were originally defined in four papers published in 1919 by Pío del Río-Hortega, but the study of them then stalled for decades, until finally picking up in the 1980s. Microglia research is now growing exponentially, said Amanda Sierra, a group leader at the Achucarro Basque Center for Neuroscience. The work is exposing how microglia respond to brain trauma and other injuries, how they suppress inflammation, and how they behave in the presence of neurodegenerative diseases. The cells “really are at the edge between immunology and neuroscience,” Sierra said.
Guy Brown, a professor of biochemistry at the University of Cambridge, was first drawn to microglia by their star shapes and dynamic movements, but it was their behavior that held his attention. In recent years, microglia have been found to mimic the macrophages of the immune system by engulfing threats to the brain such as cellular debris and microbes. Microglia also seem to go after obsolete synapses. “If you live-image them, you can see them eating neurons,” Brown said.
Some of these active functions are shared with other types of glia as well. Astrocytes and Schwann cells, for example, may also prune synaptic connections. But despite the commonalities among different subsets of glia, researchers are starting to realize that there’s little to unify glial cells as a group. In fact, in a 2017 article, scientists argued for discarding the general term “glia” altogether. “They don’t have an enormous amount in common, different glial cells,” Brown said. “I don’t think there’s much future to glia as a label.”


Ben Barres, a neuroscientist who championed glia research and passed away in 2017, considered deeper investigations of glia essential to the advance of neurobiology as a field. Others have taken up that cause as well. To them, the historical emphasis on neurons made sense at one time: “They are the ones who process the information from the outside world into our memories, our thinking, our processing,” Sierra said. “They are us.” But now the importance of glia is clear.
Neurons and glia cannot function independently: Their interactions are vital to the survival of the nervous system and the memories, thoughts and emotions it generates. But the nature of their partnership is still mysterious, notes Staci Bilbo, a professor of psychology and neuroscience at Duke University. Glia are gaining a reputation for the complexity long attributed to neurons, but it’s still unclear whether one cell type primarily directs the other. “The big unknown in the field is: Who is driving the response?” she said.


Monday, January 6, 2020

How a Flawed Experiment "Proved" That Free Will Doesn't Exist

Steve Taylor, Scientific American, December 6, 2019
In the second half of the 19th century, scientific discoveries—in particular, Darwin’s theory of evolution—meant that Christian beliefs were no longer feasible as a way of explaining the world. The authority of the Bible as an explanatory text was fatally damaged. The new findings of science could be utilized to provide an alternative conceptual system to make sense of the world—a system that insisted that nothing existed apart from basic particles of matter, and that all phenomena could be explained in terms of the organization and the interaction of these particles.
One of the most fervent of late 19th century materialists, T.H. Huxley, described human beings as “conscious automata” with no free will. As he explained in 1874, “Volitions do not enter into the chain of causation…. The feeling that we call volition is not the cause of a voluntary act, but the symbol of that state of the brain which is the immediate cause."
This was a very early formulation of an idea that has become commonplace amongst modern scientists and philosophers who hold similar materialist views: that free will is an illusion. According to Daniel Wegner, for instance, “The experience of willing an act arises from interpreting one’s thought as the cause of the act.” In other words, our sense of making choices or decisions is just an awareness of what the brain has already decided for us. When we become aware of the brain’s actions, we think about them and falsely conclude that our intentions have caused them. You could compare it to a king who believes he is making all his own decisions, but is constantly being manipulated by his advisors and officials, who whisper in his ear and plant ideas in his head. 
Many people believe that evidence for a lack of free will was found when, in the 1980s, scientist Benjamin Libet conducted experiments that seemed to show that the brain “registers” the decision to make movements before a person consciously decides to move. In Libet’s experiments, participants were asked to perform a simple task such as pressing a button or flexing their wrist. Sitting in front of a timer, they were asked to note the moment at which they were consciously aware of the decision to move, while EEG electrodes attached to their head monitored their brain activity.
Libet showed consistently that there was unconscious brain activity associated with the action—a change in EEG signals that Libet called “readiness potential”—for an average of half a second before the participants were aware of the decision to move. This experiment appears to offer evidence of Wegner’s view that decisions are first made by the brain, and there is a delay before we become conscious of them—at which point we attribute our own conscious intention to the act.         
However, if we look more closely, Libet’s experiment is full of problematic issues. For example, it relies on the participants’ own recording of when they feel the intention to move. One issue here is that there may be a delay between the impulse to act and their recording of it—after all, this means shifting their attention from their own intention to the clock. In addition, it is debatable whether people are able to accurately record the moment of their decision to move. Our subjective awareness of decisions is very unreliable. If you try the experiment yourself—and you can do it right now, just by holding out your own arm, and deciding at some point to flex your wrist—you’ll become aware that it’s difficult to pinpoint the moment at which you make the decision. 
An even more serious issue with the experiment is that it is by no means clear that the electrical activity of the “readiness potential” is related to the decision to move, and to the actual movement. Some researchers have suggested that the readiness potential could just relate to the act of paying attention to the wrist or a button, rather than to the decision to move. Others have suggested that it only reflects the expectation of some kind of movement, rather than being related to a specific moment of decision. In a modified version of Libet’s experiment (in which participants were asked to press one of two buttons in response to images on a computer screen), participants showed “readiness potential” even before the images came up on the screen, suggesting that it was not related to deciding which button to press.
Still others have suggested that the area of the brain where the "readiness potential" occurs—the supplementary motor area, or SMA—is usually associated with imagining movements rather than actually performing them. The experience of willing is usually associated with other areas of the brain (the parietal areas). And finally, in another modified version of Libet’s experiment, participants showed readiness potential even when they made a decision not to move, which again casts doubt on the assumption that the readiness potential is actually registering the brain’s “decision” to move. 
A further, more subtle, issue has been suggested by psychiatrist and philosopher Iain McGilchrist. Libet's experiment seems to assume that the act of volition consists of clear-cut decisions, made by a conscious, rational mind. But McGilchrist points out that decisions are often made in a more fuzzy, ambiguous way. They can be made on a partly intuitive, impulsive level, without clear conscious awareness. But this doesn't necessarily mean that you haven't made the decision.
As McGilchrist puts it, Libet’s apparent findings are only problematic "if one imagines that, for me to decide something, I have to have willed it with the conscious part of my mind. Perhaps my unconscious is every bit as much 'me.'" Why shouldn't your will be associated with deeper, less conscious areas of your mind (which are still you)? You might sense this if, while trying Libet’s experiment, you find your wrist just seeming to move of its own accord. You feel that you have somehow made the decision, even if not wholly consciously. 
Because of issues such as these—and others that I don’t have space to mention—it seems strange that such a flawed experiment has become so influential, and has been (mis)used so frequently as evidence against the idea of free will. You might ask: why are so many intellectuals so intent on proving that they have no free will? (As the philosopher Alfred North Whitehead pointed out ironically, “Scientists animated by the purpose of proving themselves purposeless constitute an interesting subject for study.”)

This is probably because the nonexistence of free will seems a logical extension of some of the primary assumptions of the materialist paradigm—such as the idea that our sense of self is an illusion, and that consciousness and mental activity are reducible to neurological activity. However, as I suggest in my book Spiritual Science, it is entirely possible that these assumptions are false. The mind may be more than just a shadow of the brain, and free will may not be an illusion but an invaluable human attribute, which can be cultivated and whose development makes our lives more meaningful and purposeful.