Thursday, December 10, 2020

Fine-tuning debate - Hossenfelder vs. Miller and Meyer

 



I have often posted Hossenfelder’s blog posts because I find her discussions usually well argued and clearly presented. But in this case [and a few others] I don't think she has settled the issue. So here is her critique of the fine-tuning argument, followed by the reply from Brian Miller and Stephen Meyer.

 

Sorry, the universe wasn’t made for you


Sabine Hossenfelder

http://backreaction.blogspot.com/2016/09/sorry-universe-wasnt-made-for-you.html

Last month, game reviewers were all over No Man’s Sky, a new space adventure launched to much press attention. Unlike previous video games, this one calculates players’ environments from scratch rather than revealing hand-crafted landscapes and creatures. The calculations populate No Man’s Sky’s virtual universe with about 10^19 planets, all with different flora and fauna – at least that’s what we’re told, not that anyone actually checked. That seems an enormous number, but it is still fewer than the number of planets in the actual universe, estimated at roughly 10^24.

Users’ expectations of No Man’s Sky were high – and were sorely disappointed. All the different planets, it turns out, still get a little repetitive with their limited set of options and features. It’s hard to code a universe as surprising as reality and run it on processors that occupy only a tiny fraction of that reality.

 

Theoretical physicists, meanwhile, have the opposite problem: The fictive universes they calculate are more surprising than they’d like them to be.

 

Having failed in their quest for a theory of everything, many theoretical physicists working on quantum gravity now accept that a unique theory can’t be derived from first principles. Instead, they believe, additional requirements must be used to select the theory that actually describes the universe we observe. That, of course, is what we’ve always done to develop theories – the additional requirement being empirical adequacy.

 

The new twist is that many of these physicists think the missing observational input is the existence of life in our universe. I hope you just raised an eyebrow or two because physicists don’t normally have much business with “life.” And indeed, they usually only speak about preconditions of life, such as atoms and molecules. But that the sought-after theory must be rich enough to give rise to complex structures has become the most popular selection principle.

 

Known as the “anthropic principle,” this argument allows physicists to discard all theories that can’t produce sentient observers, on the rationale that we don’t inhabit a universe that lacks them. One could of course instead just discard all theories with parameters that don’t match the measured values, but that would be so last century.

 

The anthropic principle is often brought up in combination with the multiverse, but logically it’s a separate argument. The anthropic principle – that our theories must be compatible with the existence of life in our universe – is an observational requirement that can lead to constraints on the parameters of a theory. This requirement must be fulfilled whether or not universes for different parameters actually exist. In the multiverse, however, the anthropic principle is supposedly the only criterion by which to select the theory for our universe – at least in terms of probability, so that we are likely to find ourselves here. Hence the two are often discussed together.

 

Anthropic selection had a promising start with Weinberg’s prescient estimate for the cosmological constant. But the anthropic principle hasn’t solved the problem it was meant to solve, because it does not single out one unique theory either. This has been known for at least a decade, but the myth that our universe is “fine-tuned for life” still hasn’t died.

 

The general argument against the success of anthropic selection is that all evidence for the finetuning of our theories explores only a tiny space of all possible combinations of parameters. A typical argument for finetuning goes like this: If parameter X was only a tiny bit larger or smaller than the observed value, then atoms couldn’t exist or all stars would collapse or something similarly detrimental to the formation of large molecules. Hence, parameter X must have a certain value to high precision. However, these arguments for finetuning – of which there exist many – don’t take into account simultaneous changes in several parameters and are therefore inconclusive.
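
To make the point concrete, here is a minimal sketch (a toy Python example with a completely made-up “viability” condition – nothing in it is real physics): scanning one parameter at a time around the observed values suggests severe tuning, while varying two parameters together reveals a whole ridge of viable combinations.

```python
# Toy illustration of why one-parameter-at-a-time scans can overstate
# fine-tuning. The "viability" region is a hypothetical diagonal band in a
# two-parameter space -- a stand-in for "atoms/stars can exist", not physics.

def viable(x, y):
    """Made-up viability condition: true inside a narrow diagonal band."""
    return abs(x - y) < 0.1

x0, y0 = 1.0, 1.0  # the 'observed' parameter values

# 1D scan: vary x alone. Viability is lost for small changes,
# which looks like fine-tuning of x.
print([(round(x0 + d, 2), viable(x0 + d, y0)) for d in (-0.5, -0.2, 0.0, 0.2, 0.5)])
# [(0.5, False), (0.8, False), (1.0, True), (1.2, False), (1.5, False)]

# 2D scan: vary x and y together. A whole line of combinations stays viable,
# so the apparent tuning was an artifact of holding y fixed.
print([(round(x0 + d, 2), round(y0 + d, 2), viable(x0 + d, y0 + d))
       for d in (-0.5, -0.2, 0.0, 0.2, 0.5)])
# all True
```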

 

Importantly, besides this general argument there also exist explicit counterexamples. In the 2006 paper A Universe Without Weak Interactions, Harnik, Kribs, and Perez discussed a universe that seems capable of complex chemistry and yet has fundamental particles entirely different from our own. More recently, Abraham Loeb from Harvard argued that primitive forms of life might already have been possible in the early universe, under circumstances very different from today’s. And a recent paper (ht Jacob Aron) adds another example:

 

 

Stellar Helium Burning in Other Universes: A solution to the triple alpha fine-tuning problem

By Fred C. Adams and Evan Grohs

arXiv:1608.04690 [astro-ph.CO]

 

In this work the authors show that some combinations of fundamental constants would actually make it easier for stars to form carbon, an element often assumed to be essential for the development of life.

 

This is a fun paper because it extends the work of Fred Hoyle, who was the first to use the anthropic principle to make a prediction (though some historians question whether that was his actual motivation). He understood that it’s difficult for stars to form heavy elements because the chain is broken in its first steps by beryllium. Beryllium has atomic number 4, but the isotope created in stellar nuclear fusion from helium (atomic number 2) – beryllium-8 – is unstable and therefore can’t be used to build even heavier nuclei.

 

Hoyle suggested that the chain of nuclear fusion sidesteps beryllium and instead goes from three helium nuclei straight to carbon (atomic number 6). In this reaction, known as the triple-alpha process (because helium nuclei are also referred to as alpha particles), the chances of the merger happening are slim – unless the helium merger hits a resonance of the carbon nucleus. Which it does, if the parameters are “just right.” Hoyle hence concluded that such a resonance must exist, and its existence was later confirmed experimentally.
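
For reference, the triple-alpha chain Hoyle reasoned about can be summarized as follows (a standard textbook rendering, not from the post itself; note that the Hoyle state mostly falls apart back into alpha particles, and only a small fraction decays radiatively to stable carbon):

```latex
\begin{align*}
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} &\rightleftharpoons {}^{8}\mathrm{Be}
  && \text{(unstable, lifetime $\sim 10^{-16}$ s)} \\
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} &\rightarrow {}^{12}\mathrm{C}^{*}
  && \text{(the Hoyle resonance, $\approx 7.65$ MeV above the ${}^{12}\mathrm{C}$ ground state)} \\
{}^{12}\mathrm{C}^{*} &\rightarrow {}^{12}\mathrm{C} + 2\gamma
  && \text{(radiative decay to stable carbon)}
\end{align*}
```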

 

Adams and Grohs now point out that there are altogether different sets of parameters for which beryllium is simply stable, so the carbon resonance doesn’t have to be finely tuned. In their paper they do not deal with the fundamental constants that we normally use in the standard model – they instead discuss nuclear structure, whose constants are derived from the standard-model constants but are quite complicated functions thereof (if known at all). Still, they have essentially invented a fictional universe that seems at least as capable of producing life as ours.

 

This study is hence another demonstration that a chemistry complex enough to support life can arise under circumstances that are not anything like the ones we experience today.

 

I find it amusing that many physicists believe the evolution of complexity is the exception rather than the rule. Maybe it’s because they mostly deal with simple systems – systems at or near equilibrium, with few particles, or with many particles of the same type – systems that the existing math can deal with.

 

It makes me wonder how many more fictional universes physicists will invent and write papers about before they bury the idea that anthropic selection can single out a unique theory. Fewer, I hope, than there are planets in No Man’s Sky.



Physicist Sabine Hossenfelder Challenges the Evidence for Cosmological Fine-Tuning

Brian Miller and Stephen C. Meyer

 

https://evolutionnews.org/2020/10/physicist-sabine-hossenfelder-challenges-the-evidence-for-cosmological-fine-tuning/

 

 

October 16, 2020, 6:40 AM

Many have argued that some of the most compelling evidence for design in nature is the required fine-tuning of the universe to support life (see here, here, here). Namely, a life-permitting universe requires that the values of various parameters be set within a narrow range of possible values. Examples include the strength of the fundamental forces of nature (e.g., gravity), the masses of particles (e.g., electrons), and the universe’s initial conditions (e.g., initial energy density). The conclusion of fine-tuning has been accepted by a veritable who’s who of leading physicists including John Barrow, Bernard Carr, Paul Davies, and George Ellis.

Yet some scientists have strongly opposed this conclusion, due less to the scientific evidence than to its philosophical implications, specifically how it points to the universe having a creator. One of the leading proponents of fine-tuning is theoretical physicist Luke Barnes, and he has responded in detail to critics such as Victor Stenger and Sean Carroll (see here, here, here). More recently, theoretical physicist Sabine Hossenfelder entered the debate with a series of tweets, blog posts, and a journal article (see here, here, here) where she reiterates the assertion that any claims of the universe being fine-tuned for life are unscientific and fruitless.

Common Errors

Hossenfelder’s arguments represent common errors committed by critics, so they deserve special attention. Her weakest argument is that the assumption of fine-tuning has led to inaccurate predictions related to the discovery of new fundamental particles. This assertion is without merit since a few inaccurate predictions related to one set of parameters in no way challenge the generally accepted evidence of fine-tuning for a completely different set. Another weak argument is that the analyses of individual parameters, such as the mass of an electron, “don’t take into account simultaneous changes in several parameters and are therefore inconclusive.” This criticism completely overlooks Luke Barnes’s careful studies of the effect of altering multiple parameters at the same time. His results only reinforce the fine-tuning conclusion. 

A more substantive argument is that some details of the universe might be less restrictive than originally assumed. Hossenfelder cites a paper by Harnik, Kribs, and Perez that asserts that a universe without a weak force could still support life. Yet such claims have not withstood careful scrutiny. For instance, a paper by Louis Clavelli and Raymond White demonstrates that the authors of the initial paper only considered some of the consequences of removing the weak force. They ignored other consequences that would likely have precluded any possibility of the universe hosting complex life:

  • Core-collapse supernovae, which are essential for the production and distribution of sufficient oxygen in a life-permitting universe, would not occur.

  • Galaxy and star formation would be delayed until dangerously close to the epoch when the universe began to expand rapidly due to the cosmological constant/dark energy. After that point, planets would never form, precluding the possibility of life.

  • Radioactive decay of heavy elements would not occur, so planetary cores would cool more quickly. The rapid cooling would shut down volcanic activity, which is essential for maintaining a stable greenhouse effect on Earth-like planets, so habitable temperatures would be far less probable.

Her Strongest Objection

Hossenfelder’s strongest argument is that many fine-tuning parameters cannot in fact be quantified. On this basis, she contests the reality of fine-tuning as a feature of nature that has to be explained. To support her claim, she points out that physicists typically calculate the degree of fine-tuning associated with a parameter by assuming that all possible values of a physical constant within a given range are equally probable. She then argues that physicists have no way of knowing whether this assumption is true.

Perhaps, she suggests, some universe-generating mechanism exists that produces universes with, for example, certain gravitational force constants more frequently than universes with other gravitational force constants. Taking such biasing into account would clearly change the calculated degree of fine-tuning (or the probability) associated with any given range of values that correspond to a life-permitting universe. Thus she argues that the possibility of such biasing in the generation of universes implies that we cannot make accurate assessments of fine-tuning — and, therefore, that we cannot be sure that the universe actually is fine-tuned for life.
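
A minimal numerical sketch of the measure problem she raises (with entirely made-up ranges and windows – none of these numbers come from the articles): the same life-permitting window looks very differently “tuned” depending on the prior assumed over parameter values.

```python
import math

# The 'degree of fine-tuning' is often quantified as the probability mass a
# prior assigns to the life-permitting window; it depends on the chosen prior.

lo, hi = 1e-3, 1e3   # hypothetical allowed range for a dimensionless constant
a, b = 0.9, 1.1      # hypothetical life-permitting window

# Uniform prior on [lo, hi]: every value equally likely.
p_uniform = (b - a) / (hi - lo)

# Log-uniform prior: every order of magnitude equally likely.
p_log = math.log(b / a) / math.log(hi / lo)

print(f"uniform prior:     P(life-permitting) = {p_uniform:.2e}")  # ~2.0e-04
print(f"log-uniform prior: P(life-permitting) = {p_log:.2e}")      # ~1.5e-02
```

With these made-up numbers the two priors disagree by a factor of roughly seventy about how “finely tuned” the very same window is, which is exactly the ambiguity Hossenfelder points to.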

Nevertheless, Hossenfelder’s objection has an obvious problem. The allowable ranges of many physical constants and other parameters are incredibly narrow within the vast array of other possible values. For instance, the ratio of the strengths of the electromagnetic force to gravity must be accurate to 1 part in 10^40 (a number greater than a trillion trillion trillion). Consequently, any universe-generating mechanism capable of favoring the production of those specific and tiny ranges would itself need to be finely tuned in order to produce life with high probability. In other words, her universe-generating mechanism would require fine-tuning to ensure the biasing that would allow her to explain away fine-tuning in our universe. In summary, Hossenfelder’s criticisms represent no serious challenge to the fine-tuning argument.
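
As a sanity check on the scale of that number (a standard back-of-the-envelope calculation, not taken from Miller and Meyer’s article), one common way such a figure is obtained is the ratio of the electric to the gravitational attraction between an electron and a proton, which comes out of order 10^39 to 10^40:

```python
# Electrostatic vs. gravitational attraction for an electron-proton pair,
# using standard CODATA-style values for the constants.

k_e = 8.9875e9      # Coulomb constant, N m^2 / C^2
G = 6.6743e-11      # gravitational constant, N m^2 / kg^2
e = 1.6022e-19      # elementary charge, C
m_e = 9.1094e-31    # electron mass, kg
m_p = 1.6726e-27    # proton mass, kg

# Both forces fall off as 1/r^2, so the ratio is independent of distance.
ratio = (k_e * e**2) / (G * m_e * m_p)
print(f"F_electric / F_gravity ~ {ratio:.1e}")  # ~2.3e39
```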