Monday, January 6, 2020


How a Flawed Experiment "Proved" That Free Will Doesn't Exist - Scientific American Blog Network

Steve Taylor December 6, 2019
In the second half of the 19th century, scientific discoveries—in particular, Darwin’s theory of evolution—meant that Christian beliefs were no longer feasible as a way of explaining the world. The authority of the Bible as an explanatory text was fatally damaged. The new findings of science could be utilized to provide an alternative conceptual system to make sense of the world—a system that insisted that nothing existed apart from basic particles of matter, and that all phenomena could be explained in terms of the organization and the interaction of these particles.
One of the most fervent of late 19th century materialists, T.H. Huxley, described human beings as “conscious automata” with no free will. As he explained in 1874, “Volitions do not enter into the chain of causation…. The feeling that we call volition is not the cause of a voluntary act, but the symbol of that state of the brain which is the immediate cause."
This was a very early formulation of an idea that has become commonplace amongst modern scientists and philosophers who hold similar materialist views: that free will is an illusion. According to Daniel Wegner, for instance, “The experience of willing an act arises from interpreting one’s thought as the cause of the act.” In other words, our sense of making choices or decisions is just an awareness of what the brain has already decided for us. When we become aware of the brain’s actions, we think about them and falsely conclude that our intentions have caused them. You could compare it to a king who believes he is making all his own decisions, but is constantly being manipulated by his advisors and officials, who whisper in his ear and plant ideas in his head. 
Many people believe that evidence for a lack of free will was found when, in the 1980s, scientist Benjamin Libet conducted experiments that seemed to show that the brain “registers” the decision to make movements before a person consciously decides to move. In Libet’s experiments, participants were asked to perform a simple task such as pressing a button or flexing their wrist. Sitting in front of a timer, they were asked to note the moment at which they were consciously aware of the decision to move, while EEG electrodes attached to their head monitored their brain activity.
Libet showed consistently that there was unconscious brain activity associated with the action—a change in EEG signals that Libet called “readiness potential”—for an average of half a second before the participants were aware of the decision to move. This experiment appears to offer evidence of Wegner’s view that decisions are first made by the brain, and there is a delay before we become conscious of them—at which point we attribute our own conscious intention to the act.         
However, if we look more closely, Libet’s experiment is full of problematic issues. For example, it relies on the participants’ own recording of when they feel the intention to move. One issue here is that there may be a delay between the impulse to act and their recording of it—after all, this means shifting their attention from their own intention to the clock. In addition, it is debatable whether people are able to accurately record the moment of their decision to move. Our subjective awareness of decisions is very unreliable. If you try the experiment yourself—and you can do it right now, just by holding out your own arm, and deciding at some point to flex your wrist—you’ll become aware that it’s difficult to pinpoint the moment at which you make the decision. 
An even more serious issue with the experiment is that it is by no means clear that the electrical activity of the “readiness potential” is related to the decision to move, and to the actual movement. Some researchers have suggested that the readiness potential could just relate to the act of paying attention to the wrist or a button, rather than to the decision to move. Others have suggested that it only reflects the expectation of some kind of movement, rather than being related to a specific moment. In a modified version of Libet’s experiment (in which participants were asked to press one of two buttons in response to images on a computer screen), participants showed “readiness potential” even before the images came up on the screen, suggesting that it was not related to deciding which button to press. 
Still others have suggested that the area of the brain where the "readiness potential" occurs—the supplementary motor area, or SMA—is usually associated with imagining movements rather than actually performing them. The experience of willing is usually associated with other areas of the brain (the parietal areas). And finally, in another modified version of Libet’s experiment, participants showed readiness potential even when they made a decision not to move, which again casts doubt on the assumption that the readiness potential is actually registering the brain’s “decision” to move. 
A further, more subtle, issue has been suggested by psychiatrist and philosopher Iain McGilchrist. Libet's experiment seems to assume that the act of volition consists of clear-cut decisions, made by a conscious, rational mind. But McGilchrist points out that decisions are often made in a more fuzzy, ambiguous way. They can be made on a partly intuitive, impulsive level, without clear conscious awareness. But this doesn't necessarily mean that you haven't made the decision.
As McGilchrist puts it, Libet’s apparent findings are only problematic "if one imagines that, for me to decide something, I have to have willed it with the conscious part of my mind. Perhaps my unconscious is every bit as much 'me.'" Why shouldn't your will be associated with deeper, less conscious areas of your mind (which are still you)? You might sense this if, while trying Libet’s experiment, you find your wrist just seeming to move of its own accord. You feel that you have somehow made the decision, even if not wholly consciously. 
Because of issues such as these—and others that I don’t have space to mention—it seems strange that such a flawed experiment has become so influential, and has been (mis)used so frequently as evidence against the idea of free will. You might ask: why are so many intellectuals so intent on proving that they have no free will? (As the philosopher Alfred North Whitehead pointed out ironically, “Scientists animated by the purpose of proving themselves purposeless constitute an interesting subject for study.”)

This is probably because the nonexistence of free will seems a logical extension of some of the primary assumptions of the materialist paradigm—such as the idea that our sense of self is an illusion, and that consciousness and mental activity are reducible to neurological activity. However, as I suggest in my book Spiritual Science, it is entirely possible that these assumptions are false. The mind may be more than just a shadow of the brain, and free will may not be an illusion but an invaluable human attribute, which can be cultivated and whose development makes our lives more meaningful and purposeful.   


Monday, December 30, 2019


How did the universe begin?

Sabine Hossenfelder

[[Very very revealing and useful article.]]



The year is almost over and a new one is about to begin. So today I want to talk about the beginning of everything, the whole universe. How do scientists think it all started?

We know that the universe expands, and as the universe expands, the matter and energy in it dilute. So when the universe was younger, matter and energy were much denser. Because they were denser, the universe had a higher temperature. And a higher temperature means that, on average, particles collided at higher energies.

Now you can ask, what do we know about particles colliding at high energies? Well, the highest collision energies between particles that we have experimentally tested are those produced at the Large Hadron Collider. These are energies of about a tera-electron volt, or TeV for short, which, if you convert it into a temperature, comes out to about 10^16 Kelvin. In words, that’s ten million billion Kelvin, which sounds awkward and is the reason no one quotes such temperatures in Kelvin.
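The energy-to-temperature conversion here is just T = E/k_B. A minimal sketch of the arithmetic, using the standard SI values of the constants (the script is my own illustration, not from the text):

```python
# Convert a collision energy of 1 TeV into an equivalent temperature, T = E / k_B.
eV = 1.602176634e-19   # joules per electron volt (exact SI value)
k_B = 1.380649e-23     # Boltzmann constant in J/K (exact SI value)

E = 1e12 * eV          # 1 TeV expressed in joules
T = E / k_B            # equivalent temperature in kelvin
print(f"1 TeV corresponds to roughly {T:.2e} K")  # ≈ 1.16e16 K
```

That 1.16 × 10^16 K is the “ten million billion Kelvin” quoted above.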

So, up to a temperature of about a TeV, we understand the physics of the early universe and we can reliably tell what happened. Before that, we have only speculation.

The simplest way to speculate about the early universe is just to extrapolate the known theories back to even higher temperatures, assuming that the theories do not change. What happens then is that you eventually reach energy densities so high that the quantum fluctuations of space and time become relevant. To calculate what happens then, we would need a theory of quantum gravity, which we do not have.
So, in brief, the scientific answer is that we have no idea how the universe began.

But that’s a boring answer and one you cannot publish, so it’s not how the currently most popular theories for the beginning of the universe work. The currently most popular theories assume that the electromagnetic interaction must have been unified with the strong and the weak nuclear force at high energies. They also assume that an additional field exists, which is the so-called inflaton field.

The purpose of the inflaton is to cause the universe to expand very rapidly early on, in a period which is called “inflation”. The inflaton field then has to create all the other matter in the universe and basically disappear because we don’t see it today. In these theories, our universe was born from a quantum fluctuation of the inflaton field and this birth event is called the “Big Bang”.

Actually, if you believe this idea, the quantum fluctuations still go on outside of our universe, so there are constantly other universes being created.

How scientific is this idea?
Well, we have zero evidence that the forces were ever unified, and we have equally good evidence, namely none, that the inflaton field exists. The idea that the early universe underwent a phase of rapid expansion fits some data, but the evidence is not overwhelming, and in any case, what the cause of this rapid expansion would have been – an inflaton field or something else – the data don’t tell us.

So, that the universe began from a quantum fluctuation is one story. Another story has it that the universe was not born once but is born over and over again in what is called a “cyclic” model. In cyclic models, the Big Bang is replaced by an infinite sequence of Big Bounces.

There are several types of cyclic models. One is called the Ekpyrotic Universe. The idea of the Ekpyrotic Universe was originally borrowed from string theory and had it that higher-dimensional membranes collided and our universe was created from that collision.

Another idea of a cyclic universe is due to Roger Penrose and is called Conformal Cyclic Cosmology. Penrose’s idea is basically that when the universe gets very old, it loses all sense of scale, so really there is no point in distinguishing the large from the small anymore, and you can then glue together the end of one universe with the beginning of a new one.

Yet another theory has it that new universes are born inside black holes.
You can speculate about this because no one has any idea what goes on inside black holes anyway.

An idea that sounds similar but is actually very different is that the universe started from a black hole in 4 dimensions of space. This is a speculation that was put forward by Niayesh Afshordi some years ago.

 Then there is the possibility that the universe didn’t really “begin” but that before a certain time there was only space without any time. This is called the “no-boundary proposal” and it goes back to Jim Hartle and Stephen Hawking. A very similar disappearance of time was more recently found in calculations based on loop quantum cosmology where the researchers referred to it as “Asymptotic Silence”.

Then we have String Gas Cosmology, in which the early universe lingered in an almost steady state for an infinite amount of time before beginning to expand, and so on.

So, as you see, physicists have many ideas about how the universe began.
The trouble is that not a single one of those ideas is backed up by evidence. And they may never be backed up by evidence, because the further back in time you try to look, the fewer data we have. While some of those speculations for the early universe result in predictions, confirming those predictions would not allow us to conclude that the theory must have been correct because there are many different theories that could give rise to the same prediction.

This is a way in which our scientific endeavors are fundamentally limited. Physicists may simply have produced a lot of mathematical stories about how it all began, but these aren’t any better than traditional tales of creation.


Sunday, December 15, 2019


Cognitive biases prevent science from working properly.
by Sabine Hossenfelder

Today I want to talk about a topic that is much, much more important than anything I have previously talked about. And that’s how cognitive biases prevent science from working properly.


Cognitive biases have received some attention in recent years, thanks to books like “Thinking Fast and Slow,” “You Are Not So Smart,” or “Blind Spot.” Unfortunately, this knowledge has not been put into action in scientific research. Scientists do correct for biases in statistical analysis of data and they do correct for biases in their measurement devices, but they still do not correct for biases in the most important apparatus that they use: Their own brain.

Before I tell you what problems this creates, a brief reminder of what a cognitive bias is. A cognitive bias is a thinking shortcut which the human brain uses to make faster decisions.

Cognitive biases work much like optical illusions. Take this example of an optical illusion. If your brain works normally, then the square labelled A looks much darker than the square labelled B.
[Example of optical illusion. Image: Wikipedia]
But if you compare the actual color of the pixels, you see that these squares have exactly the same color.
[Example of optical illusion. Image: Wikipedia]
The reason that we intuitively misjudge the color of these squares is that the image suggests it is really showing a three-dimensional scene where part of the floor is covered by a shadow. Your brain factors in the shadow and calculates back to the original color, correctly telling you that the actual color of square B must have been lighter than that of square A.

So, if someone asked you to judge the color in a natural scene, your answer would be correct. But if your task was to evaluate the color of pixels on the screen, you would give a wrong answer – unless you know of your bias and therefore do not rely on your intuition.

Cognitive biases work the same way and can be prevented the same way: by not relying on intuition. Cognitive biases are corrections that your brain applies to input to make your life easier. We all have them, and in every-day life, they are usually beneficial.

Perhaps the best-known cognitive bias is attentional bias. It means that the more often you hear about something, the more important you think it is. This normally makes a lot of sense. Say, if many people you meet are talking about the flu, chances are the flu is making the rounds and you are well advised to pay attention to what they’re saying and get a flu shot.

But attentional bias can draw your attention to false or irrelevant information, for example if the prevalence of a message is artificially amplified by social media, causing you to misjudge its relevance for your own life. A case where this frequently happens is terrorism. It receives a lot of media coverage and has people hugely worried, but if you look at the numbers, for most of us terrorism is very unlikely to directly affect our lives.

And this attentional bias also affects scientific judgement. If a research topic receives a lot of media coverage, or scientists hear a lot about it from their colleagues, those researchers who do not correct for attentional bias are likely to overrate the scientific relevance of the topic.

There are many other biases that affect scientific research. Take for example loss aversion. This is more commonly known as “throwing good money after bad”. It means that if we have invested time or money into something, we are reluctant to let go of it and continue to invest in it even if it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising.

But the most problematic cognitive bias in science is social reinforcement, also known as group think. This is what happens in almost-closed, like-minded communities, where people reassure each other that they are doing the right thing. They will develop a common narrative that is overly optimistic about their own research, and they will dismiss opinions from people outside their own community. Group think makes it basically impossible for researchers to identify their own mistakes and therefore stands in the way of the self-correction that is so essential for science.

A bias closely linked to social reinforcement is the shared information bias. This bias has the consequence that we are more likely to pay attention to information that is shared by many people we know than to information held by only a few people. You can see right away how this is problematic for science: how many people know of a certain fact tells you nothing about whether that fact is correct or not. And whether some information is widely shared should not be a factor in evaluating its correctness.

Now, there are lots of studies showing that we all have these cognitive biases, and also that intelligence does not make it less likely to have them. It should be obvious, then, that we should organize scientific research so that scientists can avoid, or at least alleviate, their biases. Unfortunately, the way that research is currently organized has exactly the opposite effect: it makes cognitive biases worse.

For example, it is presently very difficult for a scientist to change their research topic, because getting a research grant requires that you document expertise. Likewise, no one will hire you to work on a topic you do not already have experience with.

Superficially this seems like a good strategy for investing money in science, because you reward people for bringing expertise. But if you think about the long-term consequences, it is a bad investment strategy. Because now, not only do researchers face a psychological hurdle to leaving behind a topic they have invested time in, they would also cause themselves financial trouble. As a consequence, researchers are basically forced to continue claiming that their research direction is promising and to continue working on topics that lead nowhere.

Another problem with the current organization of research is that it rewards scientists for exaggerating how exciting their research is and for working on popular topics, which makes social reinforcement worse and adds to the shared information bias.

I know this all sounds very negative, but there is good news too: Once you are aware that these cognitive biases exist and you know the problems that they can cause, it is easy to think of ways to work against them.

For example, researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing. Also, researchers should always list the shortcomings of their research topics, in lectures and papers, so that those shortcomings stay in the collective consciousness. Similarly, conferences should always have speakers from competing programs, and scientists should be encouraged to offer criticism of their community and not be shunned for it. These are all little improvements that every scientist can make individually, and once you start thinking about it, it’s not hard to come up with further ideas.

And always keep in mind: cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; they are nothing to be ashamed of, but they are something that affects our objective evaluation of reality.

The reason this is so, so important to me, is that science drives innovation and if science does not work properly, progress in our societies will slow down. But cognitive bias in science is a problem we can solve, and that we should solve. Now you know how.

Sunday, December 8, 2019

This Is Why Scientists Will Never Exactly Solve General Relativity



[[It is worth looking at the original for the excellent graphics.]]

Ethan Siegel 

[[Ah the wonder of the exact mathematical sciences!!]]


In theory, Einstein's equations are deterministic as well, so you can imagine something similar would occur: if you could only know the mass, position, and momentum of each particle in the Universe, you could compute anything as far into the future as you were willing to look. But whereas you can write down the equations that would govern how these particles would behave in a Newtonian Universe, we can't practically achieve even that step in a Universe governed by General Relativity. Here's why.

But in General Relativity, the challenge is much greater. Even if you knew those same pieces of information — positions, masses, and momenta of each particle — plus the particular relativistic reference frame in which they were valid, that wouldn't be enough to determine how things evolve. The structure of Einstein's greatest theory is too complex even for that.
In General Relativity, it isn't the net force acting on an object that determines how it moves and accelerates, but rather the curvature of space (and spacetime) itself. This immediately poses a problem, because the entity that determines the curvature of space is all of the matter and energy present within the Universe, which includes a lot more than merely the positions and momenta of the massive particles we have.
In General Relativity, unlike Newtonian gravity, the interaction of any mass you consider also plays a role: the fact that it also has energy means that it also deforms the fabric of spacetime. When you have any two massive objects moving and/or accelerating relative to one another in space, it causes the emission of gravitational radiation, too. That radiation isn't instantaneous, but only propagates outwards at the speed of light. This is an enormously difficult factor to account for.

Perhaps the most demonstrative example is to imagine the simplest Universe possible: one that was empty, with no matter or energy, and that never changed with time. That's completely plausible, and is the special case that gives us plain old special relativity and flat, Euclidean space. It's the simplest, most uninteresting case possible.

Instead of flat, Euclidean space, we find that space is curved, no matter how far away you get from the mass. We find that the closer you get, the faster the space beneath you "flows" towards the location of that point mass. We find that there's a specific distance at which you'll cross the event horizon: the point-of-no-return, where you cannot escape even if you were to move arbitrarily close to the speed of light.
This spacetime is much more complicated than empty space, and all we did was add one mass. This was the first exact, non-trivial solution ever discovered in General Relativity: the Schwarzschild solution, which corresponds to a non-rotating black hole.
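That point-of-no-return for the Schwarzschild solution is given by the well-known formula r_s = 2GM/c². A quick sketch using standard constants (the solar-mass example is my own illustration, not from the article):

```python
# Schwarzschild radius r_s = 2GM/c^2: the event-horizon distance
# for a non-rotating mass M. Constants are standard CODATA-style values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

r_s = 2 * G * M_sun / c**2
print(f"Schwarzschild radius of a solar mass: {r_s:.0f} m")  # ≈ 2954 m
```

Collapse the Sun's mass inside roughly 3 km and you get a black hole; the exact solution only exists because this single-mass spacetime is so symmetric.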

Since then, a number of other exact solutions have been found, including:
- perfect fluid solutions, where the energy, momentum, pressure, and shear stress of the fluid determine your spacetime,
- electrovacuum solutions, where gravitational, electric and magnetic fields can exist (but not masses, electric charges or currents),
- scalar field solutions, including a cosmological constant, dark energy, inflationary spacetimes, and quintessence models,
- solutions with one point mass that rotates (Kerr), has charge (Reissner-Nordstrom), or rotates and has charge (Kerr-Newman),
- or a fluid solution with a point mass (e.g., Schwarzschild-de Sitter space).
You might notice that these solutions are also extraordinarily simple, and don't include the most basic gravitational system we consider all the time: a Universe where two masses are gravitationally bound together.

Instead, all we can do is make assumptions and either tease out some higher-order approximate terms (the post-Newtonian expansion) or examine the specific form of a problem and attempt to solve it numerically. Advances in the science of numerical relativity, particularly in the 1990s and later, are what enabled astrophysicists to calculate and determine templates for a variety of gravitational wave signatures in the Universe, including approximate solutions for two merging black holes. Whenever LIGO or Virgo makes a detection, this is the theoretical work that makes it possible.

We can extract how the behavior of a solvable system differs from Newtonian gravity and then apply those corrections to a more complicated system that perhaps we cannot solve.
Or we can develop novel numerical methods for solving problems that are entirely intractable from a theoretical point of view; so long as the gravitational fields are relatively weak (i.e., we aren't too close to too large a mass), this is a plausible approach.

- the curvature of space is continuously changing,
- every mass has its own self-energy that also changes spacetime's curvature,
- objects moving through curved space interact with it and emit gravitational radiation,
- all the gravitational signals generated only move at the speed of light,
- and the object's velocity relative to any other object results in a relativistic (length contraction and time dilation) transformation that must be accounted for.
When you take all of these into account, most spacetimes you can imagine, even relatively simple ones, lead to equations so complex that we cannot find an exact solution to Einstein's equations.

We cannot even write down the Einstein field equations that describe most spacetimes or most Universes we can imagine. Most of the ones we can write down cannot be solved. And most of the ones that can be solved cannot be solved by me, you, or anyone. But still, we can make approximations that allow us to extract some meaningful predictions and descriptions. In the grand scheme of the cosmos, that's as close as anyone's ever gotten to figuring it all out, but there's still much farther to go. May we never give up until we get there.


Thursday, December 5, 2019


Small Wonders: Design in Tiny Creatures
December 2, 2019, 5:52 AM
[[Yes – everything is much much more complicated than we thought.]]
Miniature designs often require more foresight and delicate engineering than large designs. For example, think of how difficult it would be to design a nano air vehicle (NAV) that could flip over and land feet up on a glass ceiling. Yet we hardly notice when a fly does that. Scientists who look more closely at these things often stand in awe of what animals do. Here are some small wonders that deserve our admiration and respect.
The Fly
Scientists from the U.S. and India slowed down and magnified how flies could land on a ceiling. In their paper “Flies land upside down on a ceiling using rapid visually mediated rotational maneuvers,” published in the AAAS open-access journal Science Advances, they share what they learned.
Flies and other insects routinely land upside down on a ceiling. These inverted landing maneuvers are among the most remarkable aerobatic feats, yet the full range of these behaviors and their underlying sensorimotor processes remain largely unknown. Here, we report that successful inverted landing in flies involves a serial sequence of well-coordinated behavioral modules, consisting of an initial upward acceleration followed by rapid body rotation and leg extension, before terminating with a leg-assisted body swing pivoted around legs firmly attached to the ceiling. Statistical analyses suggest that rotational maneuvers are triggered when flies’ relative retinal expansion velocity reaches a threshold. Also, flies exhibit highly variable pitch and roll rates, which are strongly correlated to and likely mediated by multiple sensory cues. When flying with higher forward or lower upward velocities, flies decrease the pitch rate but increase the degree of leg-assisted swing, thereby leveraging the transfer of body linear momentum. [Emphasis added.]
Penn State researchers, who participated in the study, call this “arguably the most difficult and least-understood aerobatic maneuver conducted by flying insects.” Lead author Bo Cheng said, “Ultimately, we want to replicate that in engineering, but we have to understand it first.” The team was astonished to see how the fly could achieve four “perfectly timed maneuvers” to land upside down in the blink of an eye: acceleration, cartwheel, leg extension, and whole-body swing assisted by the legs.
The fly’s maneuvers “exhibited remarkably high angular velocity,” the scientists found, as they watched how the small insect “cartwheels” around its forelegs. Its body comes well equipped to handle the strain. “This process relies heavily on the adhesion from cushion-like pads on their feet (called pulvilli), which ensures a firm grip, and the viscoelasticity of the compliant leg joints, which damps out impact upon contact.” The research team was apparently too fascinated with the aerodynamics to speculate about evolution.
A fly is also well-equipped for stable flying. Michael Dickinson has been studying insect flight for years in his specialized lab at Caltech. His team published another “remarkable” paper in Current Biology, reporting that “Flies Regulate Wing Motion via Active Control of a Dual-Function Gyroscope.” Fruit flies are members of Diptera (“two wings”) because their hindwings are shriveled into small organs called halteres, which have been considered vestigial flight wings. Some have thought they function as gyroscopes. Dickinson decided to test that idea:
Flies execute their remarkable aerial maneuvers using a set of wing steering muscles, which are activated at specific phases of the stroke cycle. The activation phase of these muscles — which determines their biomechanical output — arises via feedback from mechanoreceptors at the base of the wings and structures unique to flies called halteres. Evolved from the hindwings, the tiny halteres oscillate at the same frequency as the wings, although they serve no aerodynamic function and are thought to act as gyroscopes. Like the wings, halteres possess minute control muscles whose activity is modified by descending visual input, raising the possibility that flies control wing motion by adjusting the motor output of their halteres, although this hypothesis has never been directly tested.
Evolutionists who have treated halteres as useless vestigial organs are now going to have to explain even more function than previously thought.
Our results suggest that rather than acting solely as a gyroscope to detect body rotation, halteres also function as an adjustable clock to set the spike timing of wing motor neurons, a specialized capability that evolved from the generic flight circuitry of their four-winged ancestors. In addition to demonstrating how the efferent control loop of a sensory structure regulates wing motion, our results provide insight into the selective scenario that gave rise to the evolution of halteres.
But if the halteres serve useful timing and control functions now, who is to say they were not original equipment? After all, dipterans in general are among the most versatile flyers in the insect world. If something works, as Paul Nelson has pointed out, it’s not happening by accident. “Although the haltere is commonly described as a gyroscope,” Dickinson’s team says, “the structure is better interpreted as a multifunctional sensory organ.” Compared with other insects with four wings, flies have this advantage: “the wing mechanoreceptors can never provide as clean a clock signal as the mechanoreceptors on a haltere.” At best, the benefit can be seen as subfunctionalization of working hindwings. That would represent an example of devolution, not the evolution of new functional traits: like a driver who, running low on gas, strips out the trunk to get better mileage.
Rapid Antics
A new insect land speed record has been set by an ant. New Scientist writes, “Desert ant runs so fast it covers 100 times its body length per second.” Reporter Michael Marshall doesn’t say if the ant cries “Ouch!” at every footstep on the hot Sahara sand, but this ant looks like a blur as it runs, imitating the Road Runner of cartoon fame. The ant’s trick is to synchronize all six legs and take up to 47 steps per second. Hunting for heat-exhausted insects in the daytime, the Saharan silver ant has another adaptation: its body is coated with silvery hairs that beat the heat.
Nature’s coverage includes a video showing the ant’s running technique slowed down by a factor of 44 — and that is still almost too quick to follow. Galloping at 85 centimeters per second, the ant practically flies, with all its feet off the ground at some points in its gait. Touching down with three feet on the ground at a time also gives it stability, like a tripod, that helps keep the ant from sinking into the sand.
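A quick sanity check ties these figures together. The article does not state the ant’s body length, so the value below is an assumption back-computed from the reported speed and relative speed:

```python
# Checking the reported figures for the Saharan silver ant.
# Body length (~0.85 cm) is an ASSUMPTION inferred from the
# reported 85 cm/s and "100 body lengths per second" claims.

speed_cm_s = 85.0       # reported top speed
body_length_cm = 0.85   # assumed body length
steps_per_s = 47        # reported stride frequency

# Relative speed: how many body lengths the ant covers per second.
body_lengths_per_s = speed_cm_s / body_length_cm   # 100.0

# Average distance covered per step at top speed.
stride_length_cm = speed_cm_s / steps_per_s        # about 1.81 cm

print(body_lengths_per_s)                 # 100.0
print(round(stride_length_cm, 2))         # 1.81
```

So at full gallop each of the 47 steps per second carries the ant roughly two body lengths — consistent with the video’s impression that the animal is airborne for part of each stride.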
Burrow Masters
NASA’s engineers are trying to solve a problem with their newest Mars lander, InSight. Its “mole,” an instrument designed to burrow 16 feet into the Martian soil to measure the planet’s internal heat flow, is stuck at 14 inches. It was equipped with a self-hammering mechanism for digging, but the soil is proving harder than expected, JPL says. Perhaps they should have mimicked earthworms instead. How do soft, squishy animals manage to loosen the soil so effectively?
Helen Briggs of BBC News reports that “The first global atlas of earthworms has been compiled, based on surveys at 7,000 sites in 56 countries.” The atlas of global earthworm diversity, published by the AAAS in Science, begins by explaining why this is important. “Earthworms are key components of soil ecological communities, performing vital functions in decomposition and nutrient cycling through ecosystems.”
Separately, Liu et al. in Current Biology investigated how “Earthworms Coordinate Soil Biota to Improve Multiple Ecosystem Functions.” Their key concept was “multifunctionality” of soils, which refers to “aggregated measures of the ability of ecosystems to simultaneously provide multiple ecosystem functions.” Their experiments and observations showed that worms make their vital contribution primarily by “shifting the functional composition toward a soil community favoring the bacterial energy channel and strengthening the biotic associations of soil microbial and microfaunal communities.” Less important were their effects on soil structure and pH. In other words, earthworms coordinate with the soil biota to maximize the ecosystem’s functions.
One cubic meter of soil can contain 150 individual earthworms, the BBC says. How do soft, flexible earthworms squeeze through hard soils, then accomplish so much multifunctional good with small brains and no eyes? These papers don’t get into that, but suffice it to say, without them, Earth’s soil would likely be as inhospitable as that on Mars.
A Dynamic Planet 
At many levels, our privileged planet was designed with the foresight to promote habitability. Environments on a dynamic planet are likely to change. When the habitat changes, organisms must be flexible enough to adapt. Intelligent design theory can support diversification, the “lawn” of life branching at the tips, instead of Darwin’s tree with a single root. The silver Sahara ant, for instance, could have diversified from other ants once the Sahara dried up from its former riparian habitat (as evidenced by river channels detectable under the sand). It would only require modifications or exaggerations of existing traits: body hairs, legs, and behaviors. 
There are some 6,000 species of earthworms, ranging from species just a few centimeters long to giants as long as 3 meters; these also could have diversified based on their local environments. A fly’s hindwings could shrink and degrade if the wings subfunctionalized, moving from multiple purposes to focus on the most important for its needs. This is not too different from blind cave fish that, having lost eyes, compensate with exaggerated senses of touch and smell.
None of these considerations affect the argument from design. Wings, legs, and the ability to burrow do not happen by accident. We can marvel at the foresight built into these creatures that become champions at particular traits in their respective family contests.