Thursday, January 30, 2020


Wise Oysters, Galloping Sea Stars, and More: Biological Marvels Keep Coming
January 28, 2020, 12:34 PM


Strong theories in science require fewer auxiliary hypotheses when new discoveries come to light. Design advocates can gain confidence when discoveries continue to illustrate the core principles of intelligent design, like irreducible complexity, meaningful information, and hierarchical design, while undermining the blind, gradualistic principles of Darwinian evolution. Here are some recent illustrations.
“Pearls of Wisdom”
That’s the headline on news from the Okinawa Institute of Science and Technology, where the only thing said about evolution is that “From a genetic and evolutionary perspective, scientists have known little about the source of these pearls” in the Japanese pearl oyster, Pinctada fucata. By implication, don’t look for pearls of wisdom from evolutionary theory. The research published in Evolutionary Applications only concerns genetic variations within the species and the geographic distributions of isolated populations. If it helps conserve these oysters with their magnificent mother-of-pearl nacre — the envy of materials scientists — well, it’s wise to keep jewelry makers in business. Design scores as evolution fumbles.
Flight Feathers
Another level of design has been uncovered in bird feathers. In Science Magazine, Matloff et al. discuss “How flight feathers stick together to form a continuous morphing wing.” Pigeon and dove wing feathers spread out from their folded position into beautiful fans, as most people know. But how do birds prevent gaps from opening up between individual wing feathers? The team found a combination of factors at work. 
Birds can dynamically alter the shape of their wings during flight, although how this is accomplished is poorly understood. Matloff et al. found that two mechanisms control the movement of the individual feathers. Whenever the skeleton moves, the feathers are redistributed passively through compliance of the elastic connective tissue at the feather base. To prevent the feathers from spreading too far apart, hook-shaped microstructures on adjacent feathers form a directional fastener that locks adjacent feathers.
Notice that the muscles, bones, and connective tissue inside the skin work in synergy with the exterior hooks on the wings. Using a robot mimic, the team found that (1) the muscles for each feather keep the angle just right to spread them into a fan arrangement, and (2) the barbules snap together quickly to create a lightweight, flexible surface without breaks. The barbules can quickly detach like the hook-and-loop materials we are all familiar with.
This clarifies the function of the thousands of fastening barbules on the underlapping flight feathers; they lock probabilistically with the tens to hundreds of hooked rami of the overlapping flight feather and form a feather-separation end stop. The emergent properties of the interfeather fastener are not only probabilistic like bur fruit hooks, which inspired Velcro, but also highly directional like gecko feet setae — a combination that has not been observed before.
Rapid opening and closing of the wings makes a bit of noise, much as Velcro does, explaining the din when a flock of geese takes off. Interestingly, the researchers found that night flyers like owls, which need silent wings as they hunt, “lack the lobate cilia and hooked rami in regions of feather overlap and instead have modified barbules with elongated, thin, velvety pennulae” that produce relatively little noise. Otherwise, this amazingly complex mechanism works at scales all the way from the 40-gram Cassin’s kingbird to the 9000-gram California condor. What’s an evolutionist going to say about this ingenious mechanism? Once upon a time, a dinosaur leaped out of a tree and… died.
Distributed Running
Sea stars, seen in time-lapse videos, appear to “run” across the sea floor, bouncing as they go:
Scientists at the University of Southern California wondered how the echinoderms do it without a brain or centralized nervous system. The undersides of sea stars are composed of hundreds of “tube feet” which can move autonomously. How do they engage in coordinated motion? 
The answer, from researchers at the USC Viterbi School of Engineering, was recently published in the Journal of the Royal Society Interface: sea star[s] couple a global directionality command from a “dominant arm” with individual, localized responses to stimuli to achieve coordinated locomotion. In other words, once the sea star provides an instruction on which way to move, the individual feet figure out how to achieve this on their own, without further communication.
That would be a cool strategy for robots, the engineers figure. In fact, they built a model based on sea star motion, and show both the animal and robot movement side by side in the video above. No other animal movement seems to use this strategy. 
“In the case of the sea star, the nervous system seems to rely on the physics of the interaction between the body and the environment to control locomotion. All of the tube feet are attached structurally to the sea star and thus, to each other.”
In this way, there is a mechanism for “information” to be communicated mechanically between tube feet. 
Even though one of the team members was a “professor of ecology and evolutionary biology,” he seemed to rely more on the engineers than on Darwin. 
Understanding how a distributed nervous system, like that of a sea star, achieves complex, coordinated motions could lead to advancements in areas such as robotics. In robotics systems, it is relatively straightforward to program a robot to perform repetitive tasks. However, in more complex situations where customization is required, robots face difficulties. How can robots be engineered to apply the same benefits to a more complex problem or environment?
The answer might lie in the sea star model, [Eva] Kanso said. “Using the example of a sea star, we can design controllers so that learning can happen hierarchically. There is a decentralized component for both decision-making and for communicating to a global authority. This could be useful for designing control algorithms for systems with multiple actuators, where we are delegating a lot of the control to the physics of the system — mechanical coupling — versus the input or intervention of a central controller.”
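The control scheme Kanso describes is easy to picture in a few lines of code. Below is a minimal sketch in Python, assuming a toy one-dimensional world; the class names, the 0.1 gain, and the noise level are made-up illustrations, not the USC team’s actual model. It only shows the division of labor: one global heading command, purely local foot responses, and mechanical coupling doing the rest.

import random

class TubeFoot:
    # Each tube foot acts only on local information: it hears the broadcast
    # heading and adds its own noisy push. It never talks to the other feet.
    def push(self, heading):
        return 0.1 * heading + random.uniform(-0.02, 0.02)

class SeaStar:
    def __init__(self, n_feet=100):
        self.position = 0.0
        self.feet = [TubeFoot() for _ in range(n_feet)]

    def move(self, heading, steps=50):
        for _ in range(steps):
            # Global command: only a direction (the "dominant arm") is shared.
            pushes = [foot.push(heading) for foot in self.feet]
            # Mechanical coupling: the shared body averages the pushes, which is
            # the only way the feet "communicate" with one another.
            self.position += sum(pushes) / len(pushes)
        return self.position

star = SeaStar()
print(star.move(heading=1.0))  # the body drifts in the commanded direction

The point of the sketch is that no central controller computes individual foot motions; it only broadcasts a direction, and the physics of the shared body does the coordination.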
Once again, the search to understand a design in nature propels further research that can aid in the design of products for human flourishing.
Quickies:
Grasshoppers don’t faint when they leap. Why? Arizona State wants to know how the insects keep their heads while taking off and landing in all kinds of different orientations. Gravity should be making the blood slosh around, causing dizziness and disorientation, but it doesn’t. Apparently it has something to do with the distribution of air sacs that automatically adjust to gravity, keeping the hemolymph (insect blood) from rapidly moving about in the head and body. “Thus, similar to vertebrates, grasshoppers have mechanisms to adjust to gravitational effects on their blood,” they say.
Cows know more than their blank stares indicate. Articles from Fox News and the New York Post had fun with a “shocking study” about “cowmoooonication” published in Nature’s open-access journal Scientific Reports. Experiments with 13 Holstein heifers seem to indicate that they all know each other’s names, and can learn where food is located, and more, from each other’s “individual moos.” They regularly share “cues in certain situations and express different emotions, including excitement, arousal, engagement and distress.” Other scientists are praising young researcher Ali Green, whose 333 recordings and voice-analysis studies of moooosic are like “building a Google translate for cows.”
Design appears everywhere scientists look when they take their Darwin glasses off. For quality research that actually does some good for people, join the Uprising.
Photo credit: Japanese pearl oyster, Pinctada fucata.


Wednesday, January 29, 2020


The Real Butterfly Effect


If a butterfly flaps its wings in China today, it may cause a tornado in America next week. Most of you will be familiar with this “Butterfly Effect,” which is frequently used to illustrate a typical behavior of chaotic systems: even the smallest disturbances can grow and have big consequences.
The name “Butterfly Effect” was popularized by James Gleick in his 1987 book “Chaos” and is usually attributed to the meteorologist Edward Lorenz. But I recently learned that this is not what Lorenz actually meant by the Butterfly Effect.

I learned this from a paper by Tim Palmer, Andreas Döring, and Gregory Seregin called “The Real Butterfly Effect,” which led me to dig up Lorenz’s original paper from 1969.

Lorenz, in this paper, does not write about butterfly wings. He instead refers to a sea gull’s wings, but then attributes that to a meteorologist whose name he can’t recall. The reference to a butterfly seems to have come from a talk that Lorenz gave in 1972, which was titled “Does the Flap of a Butterfly’s Wings in Brazil set off a Tornado in Texas?”

The title of this talk was actually suggested by the session chair, a meteorologist by the name of Phil Merilees. In any case, it was the butterfly that stuck instead of the sea gull. And what was the butterfly talk about? It was a summary of Lorenz’s 1969 paper. So what’s in that paper?

In that paper, Lorenz made a much stronger claim than that a chaotic system is sensitive to the initial conditions. The usual butterfly effect says that any small inaccuracy in the knowledge that you have about the initial state of the system will eventually blow up and make a large difference. But if you did precisely know the initial state, then you could precisely predict the outcome, and if only you had good enough data you could make predictions as far ahead as you like. It’s chaos, alright, but it’s still deterministic.

Now, in the 1969 paper, Lorenz looks at a system that has an even worse behavior. He talks about weather, so the system he considers is the Earth, but that doesn’t really matter, it could be anything. He says, let us divide up the system into pieces of equal size. In each piece we put a detector that makes a measurement of some quantity. That quantity is what you need as input to make a prediction. Say, air pressure and temperature. He further assumes that these measurements are arbitrarily accurate. Clearly unrealistic, but that’s just to make a point.

How well can you make predictions using the data from your measurements? You have data on that finite grid. But that does not mean you can generally make a good prediction on the scale of that grid, because errors will creep into your prediction from scales smaller than the grid. You expect that to happen of course because that’s chaos; the non-linearity couples all the different scales together and the error on the small scales doesn’t stay on the small scales.

But you can try to combat this error by making the grid smaller and putting in more measurement devices. For example, Lorenz says, if you have a typical grid of some thousand kilometers, you can make a prediction that’s good for, say, 5 days. After these 5 days, the errors from smaller distances screw you up. So then you go and decrease your grid length by a factor of two.

Now you have many more measurements and much more data. But, and here comes the important point: Lorenz says this may only increase the time for which you can make a good prediction by half of the original time. So now you have 5 days plus 2 and a half days. Then you can go and make your grid finer again. And again you will gain half of the time. So now you have 5 days plus 2 and half plus 1 and a quarter. And so on.

Most of you will know that if you sum up this series all the way to infinity it will converge to a finite value, in this case that’s 10 days. This means that even if you have an arbitrarily fine grid and you know the initial condition precisely, you will only be able to make predictions for a finite amount of time.
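Written out, the argument is a simple geometric series, using the illustrative 5-day figure from above:

5 + \frac{5}{2} + \frac{5}{4} + \frac{5}{8} + \dots \;=\; 5 \sum_{n=0}^{\infty} \frac{1}{2^n} \;=\; \frac{5}{1 - \tfrac{1}{2}} \;=\; 10 \ \text{days}.

However often you halve the grid, the total predictability horizon never exceeds this finite limit.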

And this is the real butterfly effect: that a chaotic system may be deterministic and yet still be non-predictable beyond a finite amount of time.

This of course raises the question whether there actually is any system that has such properties. There are differential equations which have such a behavior. But whether the real butterfly effect occurs for any equation that describes nature is unclear. The Navier-Stokes equation, which Lorenz was talking about, may or may not suffer from the “real” butterfly effect. No one knows. This is presently one of the big unsolved problems in mathematics.

However, the Navier-Stokes equation, and really any other equation for macroscopic systems, is strictly speaking only an approximation. On the most fundamental level it’s all particle physics and, ultimately, quantum mechanics. And the equations of quantum mechanics do not have butterfly effects because they are linear. Then again, no one would use quantum mechanics to predict the weather, so that’s a rather theoretical answer.

The brief summary is that even in a deterministic system predictions may only be possible for a finite amount of time and that is what Lorenz really meant by “Butterfly Effect.”



Glial Brain Cells, Long in Neurons’ Shadow, Reveal Hidden Powers
Elena Renken

[[Gee - scientists are still studying C. elegans with 302 neurons and have not explained all of its function. And our brains have approximately 80b [80,000,000,000] neurons, so understanding the human brain seems to be just a little beyond our present reach. But there are another 80b glial cells in the brain and they too have a wide variety of functions. So now it is 160b to 302. Maybe it will take a little longer than we thought to figure out how our brains work....]]

The sting of a paper cut or the throb of a dog bite is perceived through the skin, where cells react to mechanical forces and send an electrical message to the brain. These signals were believed to originate in the naked endings of neurons that extend into the skin. But a few months ago, scientists came to the surprising realization that some of the cells essential for sensing this type of pain aren’t neurons at all. It’s a previously overlooked type of specialized glial cell that intertwines with nerve endings to form a mesh in the outer layers of the skin. The information the glial cells send to neurons is what initiates the “ouch”: When researchers stimulated only the glial cells, mice pulled back their paws or guarded them while licking or shaking — responses specific to pain.


This discovery is only one of many recent findings showing that glia, the motley collection of cells in the nervous system that aren’t neurons, are far more important than researchers expected. Glia were long presumed to be housekeepers that only nourished, protected and swept up after the neurons, whose more obvious role of channeling electric signals through the brain and body kept them in the spotlight for centuries. But over the last couple of decades, research into glia has increased dramatically.
“In the human brain, glial cells are as abundant as neurons are. Yet we know orders of magnitude less about what they do than we know about the neurons,” said Shai Shaham, a professor of cell biology at the Rockefeller University who focuses on glia. As more scientists turn their attention to glia, findings have been piling up to reveal a family of diverse cells that are unexpectedly crucial to vital processes.
It turns out that glia perform a staggering number of functions. They help process memories. Some serve as immune system agents and ward off infection, while some communicate with neurons. Others are essential to brain development. Far from being mere valets to neurons, glia often take leading roles in protecting the brain’s health and directing its development. “Pick any question in the nervous system, and glial cells will be involved,” Shaham said.
More Than Just ‘Glue’
Glia take many forms to perform their specialized functions: Some are sheathlike, while others are spindly, bushy or star-shaped. Many tangle around neurons and form a network so dense that individual cells are hard to distinguish. To some early observers, they didn’t even look like cells — they were considered a supportive matrix within the skull. This prompted the 19th-century researcher Rudolf Virchow to dub this non-neuronal material “neuroglia,” drawing on the Greek word for glue.



In this magnified image of brain tissue, neurons (blue) are surrounded by large numbers of glial cells, including astrocytes (red) and oligodendrocytes (green).

One reason glia were given such short shrift was that when researchers first began staining nervous system tissue, their methods revealed the convoluted shapes of neurons but rendered only select glia visible. Santiago Ramón y Cajal, who is credited with the discovery of neurons and widely regarded as the founder of neuroscience, illustrated one subtype of glia but lumped the rest together as “the third element.” His focus on neurons set the stage for the burgeoning field of neuroscience but shoved the glia behind the curtains.
In addition, some glia are challenging to study because their fates are so entwined with those of neurons that it’s hard to learn about them separately. If researchers try to learn about the glia’s functions by knocking them out and observing the effects, the neurons they support will die along with them.
But the revolution in cell biology techniques in recent decades has generated an arsenal of tools offering greater access to glia, Shaham said. Advances in live imaging, fluorescent labeling and genetic manipulation are revealing the breadth of glia’s forms and functions.
Microglia Reveal Their Versatility
Several cell types are contained within the umbrella category of glia, with varied functions that are still coming to light. Oligodendrocytes and Schwann cells wrap around nerve fibers and insulate them in fatty myelin sheaths, which help to confine the electrical signals moving through neurons and speed their passage. Astrocytes, with their complex branching shapes, direct the flow of fluid in the brain, reshape the synaptic connections between neurons, and recycle the released neurotransmitter molecules that enable neurons to communicate, among other jobs.

The highly versatile microglia seem to serve a variety of functions in the brain, such as removing cellular debris and determining which synapses between neurons are unnecessary.

But the cells that have been the subjects of an especially strong spike in interest over the last decade or so are the ones called microglia.
Microglia were originally defined in four papers published in 1919 by Pío del Río-Hortega, but the study of them then stalled for decades, until finally picking up in the 1980s. Microglia research is now growing exponentially, said Amanda Sierra, a group leader at the Achucarro Basque Center for Neuroscience. The work is exposing how microglia respond to brain trauma and other injuries, how they suppress inflammation, and how they behave in the presence of neurodegenerative diseases. The cells “really are at the edge between immunology and neuroscience,” Sierra said.
Guy Brown, a professor of biochemistry at the University of Cambridge, was first drawn to microglia by their star shapes and dynamic movements, but it was their behavior that held his attention. In recent years, microglia have been found to mimic the macrophages of the immune system by engulfing threats to the brain such as cellular debris and microbes. Microglia also seem to go after obsolete synapses. “If you live-image them, you can see them eating neurons,” Brown said.
Some of these active functions are shared with other types of glia as well. Astrocytes and Schwann cells, for example, may also prune synaptic connections. But despite the commonalities among different subsets of glia, researchers are starting to realize that there’s little to unify glial cells as a group. In fact, in a 2017 article, scientists argued for discarding the general term “glia” altogether. “They don’t have an enormous amount in common, different glial cells,” Brown said. “I don’t think there’s much future to glia as a label.”


Ben Barres, a neuroscientist who championed glia research and passed away in 2017, considered deeper investigations of glia essential to the advance of neurobiology as a field. Others have taken up that cause as well. To them, the historical emphasis on neurons made sense at one time: “They are the ones who process the information from the outside world into our memories, our thinking, our processing,” Sierra said. “They are us.” But now the importance of glia is clear.
Neurons and glia cannot function independently: Their interactions are vital to the survival of the nervous system and the memories, thoughts and emotions it generates. But the nature of their partnership is still mysterious, notes Staci Bilbo, a professor of psychology and neuroscience at Duke University. Glia are gaining a reputation for the complexity long attributed to neurons, but it’s still unclear whether one cell type primarily directs the other. “The big unknown in the field is: Who is driving the response?” she said.


Monday, January 6, 2020


How a Flawed Experiment "Proved" That Free Will Doesn't Exist - Scientific American Blog Network

Steve Taylor December 6, 2019
In the second half of the 19th century, scientific discoveries—in particular, Darwin’s theory of evolution—meant that Christian beliefs were no longer feasible as a way of explaining the world. The authority of the Bible as an explanatory text was fatally damaged. The new findings of science could be utilized to provide an alternative conceptual system to make sense of the world—a system that insisted that nothing existed apart from basic particles of matter, and that all phenomena could be explained in terms of the organization and the interaction of these particles.
One of the most fervent of late 19th century materialists, T.H. Huxley, described human beings as “conscious automata” with no free will. As he explained in 1874, “Volitions do not enter into the chain of causation…. The feeling that we call volition is not the cause of a voluntary act, but the symbol of that state of the brain which is the immediate cause."
This was a very early formulation of an idea that has become commonplace amongst modern scientists and philosophers who hold similar materialist views: that free will is an illusion. According to Daniel Wegner, for instance, “The experience of willing an act arises from interpreting one’s thought as the cause of the act.” In other words, our sense of making choices or decisions is just an awareness of what the brain has already decided for us. When we become aware of the brain’s actions, we think about them and falsely conclude that our intentions have caused them. You could compare it to a king who believes he is making all his own decisions, but is constantly being manipulated by his advisors and officials, who whisper in his ear and plant ideas in his head. 
Many people believe that evidence for a lack of free will was found when, in the 1980s, scientist Benjamin Libet conducted experiments that seemed to show that the brain “registers” the decision to make movements before a person consciously decides to move. In Libet’s experiments, participants were asked to perform a simple task such as pressing a button or flexing their wrist. Sitting in front of a timer, they were asked to note the moment at which they were consciously aware of the decision to move, while EEG electrodes attached to their head monitored their brain activity.
Libet showed consistently that there was unconscious brain activity associated with the action—a change in EEG signals that Libet called “readiness potential”—for an average of half a second before the participants were aware of the decision to move. This experiment appears to offer evidence of Wegner’s view that decisions are first made by the brain, and there is a delay before we become conscious of them—at which point we attribute our own conscious intention to the act.         
However, if we look more closely, Libet’s experiment is full of problematic issues. For example, it relies on the participants’ own recording of when they feel the intention to move. One issue here is that there may be a delay between the impulse to act and their recording of it—after all, this means shifting their attention from their own intention to the clock. In addition, it is debatable whether people are able to accurately record the moment of their decision to move. Our subjective awareness of decisions is very unreliable. If you try the experiment yourself—and you can do it right now, just by holding out your own arm, and deciding at some point to flex your wrist—you’ll become aware that it’s difficult to pinpoint the moment at which you make the decision. 
An even more serious issue with the experiment is that it is by no means clear that the electrical activity of the “readiness potential” is related to the decision to move, and to the actual movement. Some researchers have suggested that the readiness potential could just relate to the act of paying attention to the wrist or a button, rather than the decision to move. Others have suggested that it only reflects the expectation of some kind of movement, rather than being related to a specific moment. In a modified version of Libet’s experiment (in which participants were asked to press one of two buttons in response to images on a computer screen), participants showed “readiness potential” even before the images came up on the screen, suggesting that it was not related to deciding which button to press.
Still others have suggested that the area of the brain where the "readiness potential" occurs—the supplementary motor area, or SMA—is usually associated with imagining movements rather than actually performing them. The experience of willing is usually associated with other areas of the brain (the parietal areas). And finally, in another modified version of Libet’s experiment, participants showed readiness potential even when they made a decision not to move, which again casts doubt on the assumption that the readiness potential is actually registering the brain’s “decision” to move. 
A further, more subtle, issue has been suggested by psychiatrist and philosopher Iain McGilchrist. Libet's experiment seems to assume that the act of volition consists of clear-cut decisions, made by a conscious, rational mind. But McGilchrist points out that decisions are often made in a more fuzzy, ambiguous way. They can be made on a partly intuitive, impulsive level, without clear conscious awareness. But this doesn't necessarily mean that you haven't made the decision.
As McGilchrist puts it, Libet’s apparent findings are only problematic "if one imagines that, for me to decide something, I have to have willed it with the conscious part of my mind. Perhaps my unconscious is every bit as much 'me.'" Why shouldn't your will be associated with deeper, less conscious areas of your mind (which are still you)? You might sense this if, while trying Libet’s experiment, you find your wrist just seeming to move of its own accord. You feel that you have somehow made the decision, even if not wholly consciously. 
Because of issues such as these—and others that I don’t have space to mention—it seems strange that such a flawed experiment has become so influential, and has been (mis)used so frequently as evidence against the idea of free will. You might ask: why are so many intellectuals so intent on proving that they have no free will? (As the philosopher Alfred North Whitehead pointed out ironically, “Scientists animated by the purpose of proving themselves purposeless constitute an interesting subject for study.”)

This is probably because the nonexistence of free will seems a logical extension of some of the primary assumptions of the materialist paradigm—such as the idea that our sense of self is an illusion, and that consciousness and mental activity are reducible to neurological activity. However, as I suggest in my book Spiritual Science, it is entirely possible that these assumptions are false. The mind may be more than just a shadow of the brain, and free will may not be an illusion but an invaluable human attribute, which can be cultivated and whose development makes our lives more meaningful and purposeful.   


Monday, December 30, 2019


How did the universe begin?

Sabine Hossenfelder

[[Very very revealing and useful article.]]



The year is almost over and a new one is about to begin. So today I want to talk about the beginning of everything, the whole universe. What do scientists think about how it all started?

We know that the universe expands, and as the universe expands, the matter and energy in it dilute. So when the universe was younger, matter and energy were much denser. Because everything was denser, the temperature was higher. And a higher temperature means that, on average, particles collided at higher energies.

Now you can ask, what do we know about particles colliding at high energies? Well, the highest collision energies between particles that we have experimentally tested are those produced at the Large Hadron Collider. These are energies of about a tera-electron-volt, or TeV for short, which, if you convert it into a temperature, comes out to about 10^16 Kelvin. In words, that’s ten million billion Kelvin, which sounds awkward and is the reason no one quotes such temperatures in Kelvin.
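For completeness, that temperature follows from dividing the collision energy by the Boltzmann constant:

T \;=\; \frac{E}{k_B} \;\approx\; \frac{10^{12}\ \mathrm{eV}}{8.6\times 10^{-5}\ \mathrm{eV/K}} \;\approx\; 1.2\times 10^{16}\ \mathrm{K}.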

So, up to a temperature of about a TeV, we understand the physics of the early universe and we can reliably tell what happened. Before that, we have only speculation.

The simplest way to speculate about the early universe is just to extrapolate the known theories back to even higher temperatures, assuming that the theories do not change. What happens then is that you eventually reach energy densities so high that the quantum fluctuations of space and time become relevant. To calculate what happens then, we would need a theory of quantum gravity, which we do not have.
So, in brief, the scientific answer is that we have no idea how the universe began.

But that’s a boring answer and one you cannot publish, so it’s not how the currently most popular theories for the beginning of the universe work. The currently most popular theories assume that the electromagnetic interaction must have been unified with the strong and the weak nuclear force at high energies. They also assume that an additional field exists, which is the so-called inflaton field.

The purpose of the inflaton is to cause the universe to expand very rapidly early on, in a period which is called “inflation”. The inflaton field then has to create all the other matter in the universe and basically disappear because we don’t see it today. In these theories, our universe was born from a quantum fluctuation of the inflaton field and this birth event is called the “Big Bang”.

Actually, if you believe this idea, the quantum fluctuations still go on outside of our universe, so there are constantly other universes being created.

How scientific is this idea?
Well, we have zero evidence that the forces were ever unified, and we have equally good evidence, namely none, that the inflaton field exists. The idea that the early universe underwent a phase of rapid expansion fits some data, but the evidence is not overwhelming, and in any case, what the cause of this rapid expansion would have been – an inflaton field or something else – the data don’t tell us.

So, that the universe began from a quantum fluctuation is one story. Another story has it that the universe was not born once but is born over and over again in what is called a “cyclic” model. In cyclic models, the Big Bang is replaced by an infinite sequence of Big Bounces.

There are several types of cyclic models. One is called the Ekpyrotic Universe. The idea of the Ekpyrotic Universe was originally borrowed from string theory and had it that higher-dimensional membranes collided and our universe was created from that collision.

Another idea of a cyclic universe is due to Roger Penrose and is called Conformal Cyclic Cosmology. Penrose’s idea is basically that when the universe gets very old, it loses all sense of scale, so really there is no point in distinguishing the large from the small anymore, and you can then glue together the end of one universe with the beginning of a new one.

Yet another theory has it that new universes are born inside black holes.
You can speculate about this because no one has any idea what goes on inside black holes anyway.

An idea that sounds similar but is actually very different is that the universe started from a black hole in 4 dimensions of space. This is a speculation that was put forward by Niayesh Afshordi some years ago.

 Then there is the possibility that the universe didn’t really “begin” but that before a certain time there was only space without any time. This is called the “no-boundary proposal” and it goes back to Jim Hartle and Stephen Hawking. A very similar disappearance of time was more recently found in calculations based on loop quantum cosmology where the researchers referred to it as “Asymptotic Silence”.

Then we have String Gas Cosmology, in which the early universe lingered in an almost steady state for an infinite amount of time before beginning to expand…

So, as you see, physicists have many ideas about how the universe began.
The trouble is that not a single one of those ideas is backed up by evidence. And they may never be backed up by evidence, because the further back in time you try to look, the fewer data we have. While some of those speculations for the early universe result in predictions, confirming those predictions would not allow us to conclude that the theory must have been correct because there are many different theories that could give rise to the same prediction.

This is a way in which our scientific endeavors are fundamentally limited. Physicists may simply have produced a lot of mathematical stories about how it all began, but these aren’t any better than traditional tales of creation.


Sunday, December 15, 2019


Cognitive biases prevent science from working properly.
by Sabine Hossenfelder

Today I want to talk about a topic that is much, much more important than anything I have previously talked about. And that’s how cognitive biases prevent science from working properly.


Cognitive biases have received some attention in recent years, thanks to books like “Thinking Fast and Slow,” “You Are Not So Smart,” or “Blind Spot.” Unfortunately, this knowledge has not been put into action in scientific research. Scientists do correct for biases in statistical analysis of data and they do correct for biases in their measurement devices, but they still do not correct for biases in the most important apparatus that they use: Their own brain.

Before I tell you what problems this creates, a brief reminder of what a cognitive bias is. A cognitive bias is a thinking shortcut which the human brain uses to make faster decisions.

Cognitive biases work much like optical illusions. Take this example of an optical illusion. If your brain works normally, then the square labelled A looks much darker than the square labelled B.
[Example of optical illusion. Image: Wikipedia]
But if you compare the actual color of the pixels, you see that these squares have exactly the same color.
[Example of optical illusion. Image: Wikipedia]
The reason that we intuitively misjudge the color of these squares is that the image suggests it is really showing a three-dimensional scene where part of the floor is covered by a shadow. Your brain factors in the shadow and calculates back to the original color, correctly telling you that the actual color of square B must have been lighter than that of square A.

So, if someone asked you to judge the color in a natural scene, your answer would be correct. But if your task was to evaluate the color of pixels on the screen, you would give a wrong answer – unless you know of your bias and therefore do not rely on your intuition.
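If you want to check the pixel claim yourself, a few lines of Python will do it. This is only a sketch: the file name and the two coordinate pairs are placeholders that you would have to read off your own copy of the checker-shadow image.

from PIL import Image

# "illusion.png" and the (x, y) coordinates below are placeholders; point them
# at wherever squares A and B sit in your copy of the image.
img = Image.open("illusion.png").convert("RGB")

color_a = img.getpixel((120, 150))  # a point inside square A
color_b = img.getpixel((200, 230))  # a point inside square B

print("Square A:", color_a)
print("Square B:", color_b)
print("Same color:", color_a == color_b)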

Cognitive biases work the same way and can be prevented the same way: by not relying on intuition. Cognitive biases are corrections that your brain applies to input to make your life easier. We all have them, and in every-day life, they are usually beneficial.

Maybe the best-known cognitive bias is attentional bias. It means that the more often you hear about something, the more important you think it is. This normally makes a lot of sense. Say, if many people you meet are talking about the flu, chances are the flu’s making the rounds and you are well-advised to pay attention to what they’re saying and get a flu shot.

But attentional bias can draw your attention to false or irrelevant information, for example if the prevalence of a message is artificially amplified by social media, causing you to misjudge its relevance for your own life. A case where this frequently happens is terrorism. It receives a lot of media coverage and has people hugely worried, but if you look at the numbers, for most of us terrorism is very unlikely to directly affect our lives.

And this attentional bias also affects scientific judgement. If a research topic receives a lot of media coverage, or scientists hear a lot about it from their colleagues, those researchers who do not correct for attentional bias are likely to overrate the scientific relevance of the topic.

There are many other biases that affect scientific research. Take for example loss aversion. This is more commonly known as “throwing good money after bad”. It means that if we have invested time or money into something, we are reluctant to let go of it and continue to invest in it even if it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising.

But the most problematic cognitive bias in science is social reinforcement, also known as group think. This is what happens in almost-closed, likeminded communities where people reassure each other that they are doing the right thing. They will develop a common narrative that is overly optimistic about their own research, and they will dismiss opinions from people outside their own community. Group think makes it basically impossible for researchers to identify their own mistakes and therefore stands in the way of the self-correction that is so essential for science.

A bias closely linked to social reinforcement is the shared information bias. This bias has the consequence that we are more likely to pay attention to information that is shared by many people we know, rather than to information held by only a few people. You can see right away how this is problematic for science: how many people know of a certain fact tells you nothing about whether that fact is correct or not. And whether some information is widely shared should not be a factor in evaluating its correctness.

Now, there are lots of studies showing that we all have these cognitive biases and also that intelligence does not make it less likely to have them. It should be obvious, then, that we should organize scientific research so that scientists can avoid or at least alleviate their biases. Unfortunately, the way that research is currently organized has exactly the opposite effect: it makes cognitive biases worse.

For example, it is presently very difficult for a scientist to change their research topic, because getting a research grant requires that you document expertise. Likewise, no one will hire you to work on a topic you do not already have experience with.

Superficially this seems like a good strategy for investing money in science, because you reward people for bringing expertise. But if you think about the long-term consequences, it is a bad investment strategy. Because now, not only do researchers face a psychological hurdle to leaving behind a topic they have invested time in, they would also cause themselves financial trouble. As a consequence, researchers are basically forced to continue to claim that their research direction is promising and to continue working on topics that lead nowhere.

Another problem with the current organization of research is that it rewards scientists for exaggerating how exciting their research is and for working on popular topics, which makes social reinforcement worse and adds to the shared information bias.

I know this all sounds very negative, but there is good news too: Once you are aware that these cognitive biases exist and you know the problems that they can cause, it is easy to think of ways to work against them.

For example, researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing. Also, researchers should always list the shortcomings of their research topics, in lectures and papers, so that the shortcomings stay in the collective consciousness. Similarly, conferences should always have speakers from competing programs, and scientists should be encouraged to offer criticism of their community and not be avoided for it. These are all little improvements that every scientist can make individually, and once you start thinking about it, it’s not hard to come up with further ideas.

And always keep in mind: cognitive biases, like seeing optical illusions, are a sign of a normally functioning brain. We all have them; it’s nothing to be ashamed of, but it is something that affects our objective evaluation of reality.

The reason this is so, so important to me, is that science drives innovation and if science does not work properly, progress in our societies will slow down. But cognitive bias in science is a problem we can solve, and that we should solve. Now you know how.