Tuesday, December 15, 2020

The End of the World as We Know It? [[due to DECLINING population!]]

 


The End of the World as We Know It?

written by Glenn T. Stanton


https://quillette.com/2020/12/11/the-end-of-the-world-as-we-know-it/ 


How is the world going to end? Polls consistently show that most people believe the cause will be environmental. “Climate anxiety” has reached such a fever pitch among young people across the globe that the Lancet recently issued a special “call to action” to address the problem. Clinicians have even created “climate anxiety scales” to measure the runaway angst spreading through our children, and the rest of us.

 

But what if the best, emerging science is actually telling us quite firmly that such fears are not only deeply misplaced, but that the most realistic cause of our collective human demise is likely the precise opposite of what most assume?

This is the conclusion of a very interesting body of highly sophisticated and inter-disciplinary research. The greatest threat to humanity’s future is certainly not too many people consuming too many limited natural resources, but rather too few people giving birth to the new humans who will continue the creative work of making the world a better, more hospitable place through technological innovation. Data released this summer indicates the beginning of the end of humanity can be glimpsed from where we now stand. That end is a dramatic population bust that will nosedive toward an empty planet. New research places the beginning of that turn at about 30 years from today.

This means that Thomas Robert Malthus, and his many influential disciples, had it precisely wrong. More people are not only not the problem, but a growing population is the very answer to a more humane future in which more people are living better, healthier, longer lives than they ever have in our race’s tumultuously dynamic history.

 

We are not killing the planet

 

Pop voices like those of Congresswoman Alexandria Ocasio-Cortez, Swedish teenage activist Greta Thunberg, and countless Hollywood celebrities have warned that unless drastic action is taken at once, we face irrevocable global catastrophe. The Climate Clock in Manhattan’s Union Square pegs Earth’s deadline at a little more than seven years from today. But this is not science. The most sophisticated examination of the Earth’s supposed eco-deadline was published just this August in the journal Nature Ecology & Evolution. Drawing upon 36 meta-analyses involving more than 4,600 individual studies spanning the last 45 years, nine ecologists working from universities in Germany, France, Ireland, and Finland explain that the empirical data simply do not permit the determination of any kind of environmental doomsday date, or “thresholds” as scientists call them.

 

These scholars state frankly: “We lack systematic quantitative evidence as to whether empirical data allow definition of such thresholds” and “our results thus question the pervasive presence of threshold concepts” found in environmental politics and policy today. They explain that natural bio-systems are so dynamic—ever evolving and adapting over the long-term—that determining longevity timeframes is impossible. Talk of a ticking eco-clock is simply dogma. Two major books published in 2020 serve as carefully researched and copiously documented critiques of environmental scaremongering. Both are written by pedigreed progressive environmentalists concerned about the irrationally wild rhetoric of late.

The first is Apocalypse Never: Why Environmental Alarmism Hurts Us All by Michael Shellenberger, whom TIME magazine has lauded as a “hero of the environment.” Shellenberger explains that not only is the world not going to end due to climate catastrophe, but in very important ways, the environment is getting markedly better and healthier. He adds that technology, commerce, and industry are doing more to fix the Earth’s problems than Greenpeace and other activists. As an environmentalist, he is strongly pro-people and pro-technology, explaining counter-intuitively that the scientific “evidence is overwhelming that our high-energy civilization is better for people and nature than the low-energy civilization that climate alarmists would return us to.” He is right.

 

The other major environmentalist challenging eco-doom is Bjørn Lomborg of the Copenhagen Consensus Center, a think tank that seeks global solutions to humanity’s most pressing problems. The Guardian feted Lomborg as “one of the 50 people who could save the planet.” In his book False Alarm, he explains how “climate change panic” is not only unfounded, but wasting trillions of dollars globally, hurting the poor and failing to fix the very problems it warns us about. Lomborg explains ironically that “the rhetoric on climate change has become more extreme and less moored to the actual science” at the very time that “climate scientists have painstakingly increased knowledge about climate change, and we have more—and more reliable—data than ever before.”

 

Lomborg holds that while “global warming is real… it is not the end of the world.” “It is a manageable problem,” he adds. He is increasingly dismayed that we live in a world “where almost half the population believes climate change will extinguish humanity” at the precise moment when “the science shows us that fears of a climate apocalypse are unfounded.” Demonstrating this is not difficult. Simply consider what we all need to live: air, water, abundant food, and protection from nature. Each of these is improving in dramatic ways precisely because of technology and growth. The scholars at Our World in Data and the Oxford Martin School at the University of Oxford demonstrate this.

The world’s air is getting cleaner overall, and markedly so.

At the very time that population and industry have both grown dramatically across the globe, not only is the problem not getting worse, but human death rates from air pollution have declined by nearly half since just 1990. And it is not people driving less or living near fewer factories that is saving lives. Counterintuitively, air pollution deaths are more than 100 times higher in non-industrial societies where cooking over wood- or coal-burning fires is a regular part of daily life. And as the world develops, such cooking declines. This means growth and technology are literally helping people breathe easier. And ozone pollution, or smog, has been declining rapidly throughout the world, even in high-income, heavy-manufacturing Asia-Pacific regions.

 

Water is humanity’s second most immediate life need. The number of people around the world with improved access to clean drinking water increased 68 percent from 1990 to 2015, even as the population itself has expanded. That is astounding. Roughly 290,000 people have gained access to improved drinking water every single day across the globe over the last 25 years and that number is only increasing of late.

Food is our third greatest survival need. Contrary to grim Malthusian predictions, the United Nations explains that humanity now produces more than enough food to feed everyone on the planet. In fact, the Journal of Sustainable Agriculture revealed back in 2012 that “we already grow enough food for 10 billion people.” This is a 25 percent bounty over our current global population, a surplus which we will never need. And, as we will see in the next section, our world population is soon to top out at just 9.73 billion people and then start declining precipitously into the coming century. While we must do a better job politically at distributing that bounty, our food supply is not only more plentiful, but of better nutritional quality thanks to technology. It’s why malnutrition is declining dramatically across the world.

 

And the number of people around the world living in extreme poverty is dropping, even as we grow in number—a direct refutation of ubiquitous Malthusian projections.

The Earth is actually doing better at providing what is needed to sustain human life as a consequence of human ingenuity, industry, and technology. And what about the Earth itself? Let’s look at two important measures.

First, is it becoming more hospitable to human thriving, or less? A major 2019 study in the journal Global Environmental Change drawing from “one of the most complete natural disaster loss databases” reveals “a clear decreasing in both human and economic vulnerability” to “the seven most common climate-related hazards” by up to 80 to 90 percent over the last four decades. These hazards include all forms of flooding, drought, and deaths related to extreme wind, cold or heat. The trend lines are dramatic.

 

The scholars at Our World in Data add that this also holds for other natural disasters such as earthquakes, volcano activity, wildfire, and landslides. “This decline is even more impressive,” they explain, “when we consider the rate of population growth over this period” revealing a greater than 10-fold decline in nature-related human deaths worldwide over the last century.

This means the Earth is becoming a much safer place for humans to live precisely because we are adapting to it better. That is the opposite of catastrophe by most people’s honest math.

Second, is the Earth itself being more widely exploited or getting a break? The 2018 United Nations List of Protected Areas report (Table 1, p. 41) demonstrates that the total number of protected sites in the world has increased 2,489 percent since 1962 and the total protected terrestrial and aquatic area grew by 1,834 percent. The proportion of land used for all agriculture (crops and grazing) per person across the globe has plummeted dramatically over the last 100 years as technology allows us to grow more food than we can consume on less land per capita than ever before.

And this is true across all continents.

As stewards of the planet, we still have much work to do in improving the environment. But note the key word: improve. The empirical data persuasively indicate the most significant trend lines are moving in the right directions in profound ways for billions of people around the globe, and the reason is technology and human progress. These truths are the exact opposite of an eco-Armageddon.

 

What does the likeliest end of humanity look like?

 

So does this mean there are no concerns about humanity’s future? New research published this summer has many of the world’s leading scientists extremely concerned, much more so than when 2020 began. A major demographic study published in the Lancet in July provides a glimpse of humanity’s end if things continue as they are. This work was conducted by 24 leading demographers and funded by the Bill & Melinda Gates Foundation. What concerns these scholars is certainly not too many people, as nearly everyone assumes, but a relatively near future of far too few.

 

Demographers have long been concerned about this. The “news” here is how much more dire the Gates research is. Using a more sophisticated analysis than the United Nations and other leading global think tanks have employed to date, it reveals that the world’s population shortfall will be markedly more dramatic, and arrive sooner, than anyone anticipated. The BBC described it as a “jaw-dropping global crash.” And none of these demographers sees this as a good thing. Quite the opposite. No fewer than 23 leading nations—including Japan, Spain, South Korea, and Italy—will see their populations cut in half by 2100. China’s will drop by a stunning 48 percent, knocking it out of contention as the world’s economic superpower. This precipitous decline will not be caused by disease, famine, or any kind of natural disaster. The missing population will simply never have been born. Their would-be parents are simply forgetting to have them.

 

Imagine any of these countries getting a military intelligence report that a foreign enemy was set to reduce their population by more than half over the next 60 years. But in this case, the dramatic act of war is self-inflicted by each country’s growing cohort of non-parents. Another 34 countries will see dramatic population declines of 25 to 50 percent by 2100. Beyond this, the projected fertility rates in 183 of 195 countries will not be high enough to maintain current populations by the century’s end. That is called negative population growth, and once it starts, it probably won’t stop. These scholars predict that sub-Saharan and North Africa, as well as the Middle East, will be the only super-regions fertile enough to maintain their populations without dramatic immigration policies.

 

To say the geopolitical and economic consequences of this fact will be profound is an understatement. The Gates research further darkens the already bleak picture painted last year by two Canadian researchers, Darrell Bricker and John Ibbitson, in their insightful and carefully documented book, Empty Planet: The Shock of Global Population Decline. They warn:

 

The great defining event of the twenty-first century—one of the great defining events in human history—will occur in three decades, give or take, when the global population starts to decline. Once that decline begins, it will never end. We do not face the challenge of a population bomb, but of a population bust—a relentless, generation-after-generation culling of the human herd. [emphasis added]

 

The Gates scholars agree with the Empty Planet scenario, marking 2064 as humanity’s demographic high-water mark at just 9.73 billion human souls, short of the long-predicted 10 billion. Academic demographers are not given to hyperbole. The unsustainability at work here is extreme. The Gates team explains:

 

  • The number of global citizens under five years of age will fall from 681 million in 2017 to 401 million in 2100, a 41 percent drop.

  • The number of over 80-year-olds will soar from 141 million in 2017 to 866 million in 2100, a whopping 514 percent increase.

Imagine these are your company’s future customer projections. You don’t get to the future with numbers like this. Putting this in very stark, recent historical perspective, there were 25 worldwide births for every person turning 80 in 1950, a healthy demographic dividend. In 2017, that ratio shrank to 7:1. Not so healthy. These 24 Gates demographers explain, “in 2100 we forecasted one birth for every person turning 80 years old.” (See it for yourself at p.1297.)
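The percentages and the birth-to-octogenarian ratios above follow directly from the figures quoted from the Lancet study. A quick, purely illustrative arithmetic check (the small helper function is hypothetical, not from the study) confirms them:

```python
# Quick check of the demographic shifts quoted above (figures as cited in the text).
under_5_2017, under_5_2100 = 681e6, 401e6   # children under five
over_80_2017, over_80_2100 = 141e6, 866e6   # people over eighty

def pct_change(start, end):
    """Percentage change from start to end."""
    return (end - start) / start * 100

print(f"Under-5 cohort: {pct_change(under_5_2017, under_5_2100):+.0f}%")   # about -41%
print(f"Over-80 cohort: {pct_change(over_80_2017, over_80_2100):+.0f}%")   # about +514%

# Births per person turning 80, as cited: 25:1 in 1950, 7:1 in 2017, ~1:1 forecast for 2100.
for year, ratio in [(1950, 25), (2017, 7), (2100, 1)]:
    print(f"{year}: roughly {ratio} birth(s) for every person turning 80")
```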

 

This is what the end of humanity looks like. Professor Christopher Murray, director of the Institute for Health Metrics and Evaluation at the University of Washington’s School of Medicine and head of the Gates study, told the BBC, “I find people laugh it off… they can’t imagine it could be true, they think women will just decide to have more kids. If you can’t [find a solution] then eventually the species disappears.” And the solutions that developed countries have tried of late are not working.

 

The twilight of economic and technological growth

 

Few scholars have appreciated the full consequences of this implosion like Professor Charles Jones of Stanford University’s King Center on Global Development. In October, he published a persuasive paper entitled ‘The End of Economic Growth? Unintended Consequences of a Declining Population,’ in which he asks what happens to global economic and technological growth not just when population growth slows or goes to zero, but when it actually turns negative. Elaborating upon Bricker and Ibbitson’s work, he contends that we must consider what he calls “an Empty Planet result” in which “knowledge and living standards stagnate for a population that gradually vanishes.”

 

Like Shellenberger, Jones is “pro-people” for empirical reasons. He explained to me that contrary to nearly all demographic predictions, “we simultaneously have many more people and much higher living standards” precisely because “people are a crucial input into the production of the new ideas responsible for economic growth.” Jones calls our attention to the groundbreaking work of his mentor, economist Paul Romer, on Endogenous Growth Theory, which explains why more people are not only a good thing but essential to the improvements in human thriving and the better world documented above.

 

Their concern goes well beyond the worry that fewer babies will mean fewer taxpayers to support tomorrow’s mushrooming non-working elderly. Endogenous Growth Theory is more subtle and elegant than that, and it actually explains how our world has developed. In a 2019 paper in the Scandinavian Journal of Economics, Jones calls Endogenous Growth Theory “truly beautiful,” a superlative seldom employed by nerdy economist types. It earned Romer the 2018 Nobel Prize in Economics.

 

Thomas Malthus saw new people as zero-sum consumers of our precious limited resources. Thus, fewer people are better. Romer’s Endogenous Growth Theory demonstrates precisely why Malthus was so spectacularly wrong. He failed to appreciate that humanity’s power as innovators is positively and exponentially greater than our collective drag as consumers. Romer recognized why a rapidly growing human population has actually produced unimagined abundance rather than the devastating scarcity that breeds fear and drives the need to control. Human ingenuity and innovation are far richer blessings to the world than our appetites are a curse; indeed, our appetites drive our ingenuity.

And this is not just happy talk. The data bear it out. More people are the answer to a better world for everyone. This is why our global political moment is so critical. Policies that favor difference and competing ideas are where growth happens. That is precisely what good science and democracy require. Death happens when competing ideas are shut down in favor of strictly enforced homogeneity. Endogenous growth requires the dynamic competition of heterodox ideas so that they can be aired, challenged, and refined by others. Current “progressive thought” is really a new fundamentalism that is contrary to growth. It is fear-based and leads to death. This is precisely what we are seeing today.

The magic of what Romer and Jones describe is found in the codification of human knowledge and the non-rivalry of ideas. Natural resources are what economists call “rival.” You and I cannot eat the same potato or drink the same glass of water simultaneously. We must either compete for it or produce twice as much. But the idea of how to find and store more potatoes or water is non-rival. It can be written down and shared all around the world by people at the same time without diminishing its full power. So, as Jones explains, “because knowledge is non-rival, growth in the aggregate stock of knowledge at the rate of population growth will cause income per person to grow.” [p. 878, emphasis in original]
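Jones’s point about non-rival knowledge can be made concrete with a toy simulation. The sketch below is a minimal, illustrative take on an idea-based growth model in the spirit of Romer and Jones, not the actual model in either economist’s papers: the functional form, the parameter values, and the assumption that income per person simply tracks the knowledge stock are simplifying assumptions made here for illustration.

```python
import numpy as np

# Toy, illustrative sketch of an idea-based ("semi-endogenous") growth model in the
# spirit of Romer and Jones. Functional forms and parameter values are assumptions
# chosen for illustration only, not the calibration used in Jones's paper.
#
# Knowledge accumulates in proportion to the number of people searching for ideas:
#     dA/dt = theta * L * A**phi      (phi < 1: later ideas are harder to find)
# Population L grows (or shrinks) at a constant rate n, and income per person is
# taken to be proportional to the knowledge stock A, because ideas are non-rival.

def simulate(n, years=300, theta=0.02, phi=0.5, L0=1.0, A0=1.0, dt=1.0):
    L, A = L0, A0
    for _ in range(years):
        A += theta * L * A**phi * dt   # everyone draws on the same non-rival stock A
        L *= np.exp(n * dt)            # exponential population growth or decline
    return A

growing   = simulate(n=+0.01)          # 1% annual population growth
shrinking = simulate(n=-0.01)          # "Empty Planet": 1% annual population decline

print(f"Income index after 300 years, growing population:   {growing:10.1f}")
print(f"Income index after 300 years, shrinking population: {shrinking:10.1f}")
```

Run this and the growing-population economy ends up with an income index far higher than the shrinking-population one, whose living standards level off as the flow of new ideas dries up; that is the qualitative shape of Jones’s “Empty Planet result.”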

 

Oral rehydration therapy is one of Romer’s favorite examples of the power of codified ideas. Dehydration from diarrhea has long been the primary driver of child mortality—deadlier than AIDS, malaria, and measles combined. As Jones explains, some medical workers discovered that “dissolving a few inexpensive minerals, salts, and a little sugar in water in just the right proportions produces a solution” that prevents death from dehydration. That relatively simple recipe could be written down, shared, and used by billions at the same time. It has since saved untold lives. Objects are rival. Ideas are non-rival and thus exponentially powerful. And humans are the globe’s only inhabitants that produce ideas. And when growing groups of people cooperate around and share these ideas, stunning things happen. This is Endogenous Growth Theory, and it explains the wonder of the modern world in which we have more wealth and food at a time when we have the most people. Malthus and his disciples said the opposite would happen.

 

Romer entitled his 2018 Nobel acceptance talk in Stockholm “On the Possibility of Progress,” as an obvious challenge to Malthus, and at an efficient 30 minutes, his lecture is worth watching. He spoke of how his work—and that of Yale’s William Nordhaus, his co-recipient—demonstrates “the benefit of other people.” Our scientific, industrial, and tech revolutions, and their dramatic improvements to human flourishing, were, he explains, “driven by a process of more discoveries, leading to the production of more food, which led to more people, who in turn developed more and more discoveries” which have improved the lives of billions. As Romer explains, “This is not just exponential growth. This is exponential growth in the rate of exponential growth…”

 

He went on to explain that this “combinatorial explosion” of more people cooperating around ever-growing, world-changing, life-improving ideas makes it “immediately obvious that the discovery of new ideas from an almost infinite set of possibilities could offset the scarce resources implied by the Malthusian analysis.” And it obviously has. If the eco-doomsayers could choose to live at any time in human history, they would undoubtedly choose today if their dream is physical safety and a long, prosperous, and contemplative life with an abundance of essential resources and a substantially improving ecosystem.

As Romer explained to his Nobel audience on that lovely winter evening in Stockholm, Endogenous Growth Theory is the beautiful explanation of why, “on balance, it is better to have more people” rather than fewer. Limiting our population is not a progressive idea. The most sophisticated, cross-disciplinary science emerging from academia appears to tell us that the ancient Mosaic wisdom of the Judeo-Christian tradition, to “be fruitful and multiply and fill the earth,” is exactly the correct progressive prescription for the continuation of human well-being. And failing to do this is what the end of the world actually looks like.

 

 Glenn T. Stanton is the director of global family formation studies at Focus on the Family in Colorado Springs, CO. His latest book is The Myth of the Dying Church: How Christianity is Actually Thriving in America and the World. You can follow him on Twitter @GlennStanton.



Monday, December 14, 2020

Why American Children Stopped Believing in God

 

Why American Children Stopped Believing in God

By CAMERON HILDITCH

https://www.nationalreview.com/2020/12/why-american-children-stopped-believing-in-god/

December 13, 2020 6:30 AM


The time has come for religious parents to take their children back from the state.

In a report released earlier this year from the American Enterprise Institute, Lyman Stone tracked the history of religious belief, behavior, and association in the United States since the Founding. It’s a magisterial work, and I encourage readers to download the report here and peruse it for themselves.

Stone’s research helps us to understand the decline of religious faith in America over the past 60 years. Secularization is, to be sure, a hugely overdetermined development in American history, and just about everyone has a theory about how it’s happened and why. Religious conservatives would probably cite the loosening of the country’s morals that began in the ’60s and ’70s. Secular progressives might mutter something about the onward march of “Science” and “Reason” over time. But the data seem to show that the main driver of secularization in the United States has been the acceleration of government spending on education and government control over the curricular content taught in schools.

Here our secular progressive might raise his head again, perhaps feeling a bit smug about this finding. “See!”, he says. “Children used to be deprived of education and the life of the mind! They were stuck in the doldrums of ignorance and squalor before the benevolent hand of the state reached down and lifted them up into the world of literacy and critical thought. All that was needed was a little education to free them from hokey superstitions.”

It’s a simple theory, befitting the minds of those who have historically espoused it. But it’s falsified by the data. On this point, Stone cites the seminal work of Raphael Franck and Laurence Iannaccone, who meticulously tracked religious behavior over time. According to Franck and Iannaccone, “higher educational attainment did not predict lower religiosity: More and less educated people are similarly religious.” Nor did they “find that industrialized, urban life reduces religiosity: A more urban and industrialized population was associated with greater religiosity.” The link between intellectual progression/modernization and secularization is non-existent. As Stone summarizes:

Theories that religion has declined because urbanization is hostile to religiosity — or because modern, educated people are inherently skeptical of religion — get no support in the actual historic record.

It turns out that religiosity is usually determined very early in life. All the data suggest that, by and large, kids brought up in religious households stay religious and kids who aren’t, don’t. Consequently, childhood religiosity has been, and remains, the most important indicator of America’s religious trajectory. The story of religious decline in America is not the story of adults consciously rejecting the faith of their forefathers: It’s the story of each generation receiving a more secular upbringing than the generation preceding it. What accounts for this secularization of childhood over time? Taxpayer dollars.

Childhood religiosity was heavily affected by government spending on education and, to a lesser degree, government spending on old-age pensions. Thus, while more educated people were not less religious, societies that spent more public money on education were less religious. It is not educational attainment per se that reduces religiosity, but government control of education and, to a lesser extent, government support for retirement.

Researchers originally tried to explain the relationship between government control of education and secularization by putting it down to the state’s increasing willingness to care for the needs and wants of its citizens in a comprehensive way — a task traditionally carried out by religious institutions. Once people are no longer beholden to a church/synagogue/mosque for their material well-being — or so the theory goes — they see little reason to stay.

But this theory just doesn’t account for the data we have. As Stone observes, it’s belied by the fact “that the vast majority of declining religiosity can be attributed to changes in educational policy, rather than welfare generally.”


So how do we explain this link between education policy and religious belief given that academic attainment itself isn’t a factor? It’s quite simple, really. Children learn more at school than reading, writing, and arithmetic. They imbibe a whole set of implied assumptions about what’s important in life. By excluding religious instruction from public schools, the government-run education system tacitly teaches students that religious commitments are not a first-order priority in life. Faith in God becomes a sort of optional weekend hobby akin to playing tennis or video games. Christ and Moses are treated by teachers and administrators like weapons or drugs — confiscated upon discovery.

In this way, the hierarchy of values communicated both explicitly and implicitly to students in American high schools excludes religious claims from the outset. College, career, and popularity become the existential targets toward which the arrow of each student’s soul is aimed by bow-wielding commissars across the country. In a context such as this, secularization becomes ineluctable. The New Testament itself says that religious belief is shaped more by the places we look for praise and validation than by naked ratiocinations: “How can you believe, when you receive glory from one another, and you’re not looking for the glory which comes from the one and only God?” (John 5:44). But the secular public high school dispenses validation and praise according to different criteria than any of the major faiths. This is why government control of education has resulted in religious decline. As Stone writes:

. . . the content of education matters. Evidence that education reduces religiosity is fairly weak: American religiosity rose considerably from 1800 until the 1970s, despite rapidly rising educational attainment. But the evidence that specifically secular education might reduce religiosity is more compelling. Indeed, statistically, most researchers who have explored long-run change in religiosity find that education-related variables, which I have argued are a proxy for secular education, can explain nearly the totality of change in religiosity.

That last point bears repeating. Most researchers have found that “education-related variables . . . can explain nearly the totality of change in religiosity.” For religious conservatives who care about the fate of American culture, it cannot be emphasized enough that education is the whole ball game. All other policy areas amount to little more than tinkering around the edges. How we got to a place where this is the case is a sad story in and of itself (and one that I told in part here). Nevertheless, it remains the case that public schools often are not a smooth fit for conservative families, especially religious ones. Even worse than that, we can now see signs that the ideology imposed upon government-educated children is changing. What used to be the state-imposed orthodoxy of benign agnosticism is being replaced by a full-blown intersectional pseudo-religion with its own priests, prophets, saints, and martyrs.

The time has come for religious parents to take their children back from the state. It simply will not do anymore for faithful Americans to drop their sons and daughters off at the curbside every morning for the government to collect as if they were taking out the trash. As I’ve written before, a broader reconsideration of public schooling will not be cheap. It will require, among other things, the establishment of charitable private education co-operatives if we’re to heed the dictates of the world’s great faiths by keeping the interests of the poor at the forefront of our minds. But the only real road to religious revival is the one that begins with each parent’s first step out of the public school’s doors.

CAMERON HILDITCH is a William F. Buckley Fellow in Political Journalism at National Review Institute. @cameronhilditch


Thursday, December 10, 2020

Fine-tuning debate - Hossenfelder vs. Miller and Meyer

 

Fine-tuning debate - Hossenfelder vs. Miller and Meyer


I have posted Hossenfelder’s blog posts often because I find her discussions usually well argued and clearly presented. But in this case [and a few others] I don't think she has settled the issue. So here is her critique of the fine-tuning argument and the reply by Brian Miller and Stephen Meyer. 

 

Sorry, the universe wasn’t made for you


Sabine Hossenfelder

http://backreaction.blogspot.com/2016/09/sorry-universe-wasnt-made-for-you.html

Last month, game reviewers were all over No Man’s Sky, a new space adventure launched to much press attention. Unlike previous video games, this one calculates players’ environments from scratch rather than revealing hand-crafted landscapes and creatures. The calculations populate No Man’s Sky’s virtual universe with about 10¹⁹ planets, all with different flora and fauna – at least that’s what we’re told, not like anyone actually checked. That seems a ginormous number, but it is still fewer than the number of planets in the actual universe, estimated at roughly 10²⁴.

Users’ expectations of No Man’s Sky were high – and were sorely disappointed. All the different planets, it turns out, still get a little repetitive with their limited set of options and features. It’s hard to code a universe as surprising as reality and run it on processors that occupy only a tiny fraction of that reality.

 

Theoretical physicists, meanwhile, have the opposite problem: The fictive universes they calculate are more surprising than they’d like them to be.

 

Having failed in their quest for a theory of everything, many theoretical physicists working on quantum gravity now accept that a unique theory can’t be derived from first principles. Instead, they believe, additional requirements must be used to select the theory that actually describes the universe we observe. That, of course, is what we’ve always done to develop theories – the additional requirement being empirical adequacy.

 

The new twist is that many of these physicists think the missing observational input is the existence of life in our universe. I hope you just raised an eyebrow or two because physicists don’t normally have much business with “life.” And indeed, they usually only speak about preconditions of life, such as atoms and molecules. But that the sought-after theory must be rich enough to give rise to complex structures has become the most popular selection principle.

 

Known as the “anthropic principle,” this argument allows physicists to discard all theories that can’t produce sentient observers, on the rationale that we don’t inhabit a universe that lacks them. One could of course instead just discard all theories with parameters that don’t match the measured values, but that would be so last century.

 

The anthropic principle is often brought up in combination with the multiverse, but logically it’s a separate argument. The anthropic principle – that our theories must be compatible with the existence of life in our universe – is an observational requirement that can lead to constraints on the parameters of a theory. This requirement must be fulfilled whether or not universes for different parameters actually exist. In the multiverse, however, the anthropic principle is supposedly the only criterion by which to select the theory for our universe, at least in terms of probability so that we are likely to find ourselves here. Hence the two are often discussed together.

 

Anthropic selection had a promising start with Weinberg’s prescient estimate of the cosmological constant. But the anthropic principle hasn’t solved the problem it was meant to solve, because it does not single out one unique theory either. This has been known for at least a decade, but the myth that our universe is “fine-tuned for life” still hasn’t died.

 

The general argument against the success of anthropic selection is that all evidence for the fine-tuning of our theories explores only a tiny space of all possible combinations of parameters. A typical argument for fine-tuning goes like this: If parameter X was only a tiny bit larger or smaller than the observed value, then atoms couldn’t exist, or all stars would collapse, or something similarly detrimental to the formation of large molecules would happen. Hence, parameter X must have a certain value to high precision. However, these arguments for fine-tuning – of which there are many – don’t take into account simultaneous changes in several parameters and are therefore inconclusive.
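Her point about simultaneous parameter changes can be illustrated with a deliberately artificial toy calculation. The “viability” criterion below is invented purely for the example and has nothing to do with real physics; it just shows how a constraint that looks like razor-thin tuning of one parameter can correspond to an extended band of allowed values once a second parameter is varied at the same time.

```python
import numpy as np

# Invented toy model, not real physics: suppose "viability" depends only on the
# ratio of two parameters x and y, which must be close to 1.
def viable(x, y):
    return np.abs(x / y - 1.0) < 0.01

rng = np.random.default_rng(0)
x0, y0 = 1.0, 1.0                          # the "observed" parameter values
xs = rng.uniform(0.1, 10.0, 200_000)       # wide scan of x
ys = rng.uniform(0.1, 10.0, 200_000)       # wide scan of y

# One-parameter argument: hold y fixed and vary x alone. Only a ~0.2% sliver of
# the scanned range is viable, which looks like extreme fine-tuning of x.
frac_x_only = viable(xs, y0).mean()

# Two-parameter scan: vary x and y together. Compensating changes keep x/y near 1
# along an entire diagonal band, so the viable region is not a single special point.
frac_x_and_y = viable(xs, ys).mean()

print(f"Viable fraction, varying x alone:         {frac_x_only:.3%}")
print(f"Viable fraction, varying x and y jointly: {frac_x_and_y:.3%}")
```

In this contrived setup only the ratio of the two parameters matters, so scanning one parameter at a time overstates how special the observed values are; handling such correlated changes is exactly what the single-parameter arguments described above leave out.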

 

Importantly, besides this general argument there also exist explicit counterexamples. In the 2006 paper A Universe Without Weak Interactions, Harnik, Kribs, and Perez discussed a universe that seems capable of complex chemistry and yet has fundamental particles entirely different from our own. More recently, Abraham Loeb from Harvard argued that primitive forms of life might have been possible already in the early universe under circumstances very different from today’s. And a recent paper (ht Jacob Aron) adds another example:

 

 

Stellar Helium Burning in Other Universes: A solution to the triple alpha fine-tuning problem

By Fred C. Adams and Evan Grohs

arXiv:1608.04690 [astro-ph.CO]

 

In this work the authors show that some combinations of fundamental constants would actually make it easier for stars to form Carbon, an element often assumed to be essential for the development of life.

 

This is a fun paper because it extends on the work by Fred Hoyle, who was the first to use the anthropic principle to make a prediction (though some historians question whether that was his actual motivation). He understood that it’s difficult for stars to form heavy elements because the chain is broken in the first steps by Beryllium. Beryllium has atomic number 4, but the version that’s created in stellar nuclear fusion from Helium (with atomic number 2) is unstable and therefore can’t be used to build even heavier nuclei.

 

Hoyle suggested that the chain of nuclear fusion avoids Beryllium and instead goes from three Helium nuclei straight to carbon (with atomic number 6). Known as the triple-alpha process (because Helium nuclei are also referred to as alpha-particles), the chances of this happening are slim – unless the Helium merger hits a resonance of the Carbon nucleus. Which it does if the parameters are “just right.” Hoyle hence concluded that such a resonance must exist, and that was later experimentally confirmed.

 

Adams and Grohs now point out that there are altogether different sets of parameters for which Beryllium is simply stable, so that the Carbon resonance doesn’t have to be finely tuned. In their paper, they do not deal with the fundamental constants that we normally use in the standard model – they instead discuss nuclear structure, whose constants are derived from the standard model constants but are quite complicated functions thereof (if known at all). Still, they have basically invented a fictional universe that seems at least as capable of producing life as ours.

 

This study is hence another demonstration that a chemistry complex enough to support life can arise under circumstances that are not anything like the ones we experience today.

 

I find it amusing that many physicists believe the evolution of complexity is the exception rather than the rule. Maybe it’s because they mostly deal with simple systems, at or close to equilibrium, with few particles, or with many particles of the same type – systems that the existing math can deal with.

 

It makes me wonder how many more fictional universes physicists will invent and write papers about before they bury the idea that anthropic selection can single out a unique theory. Fewer, I hope, than there are planets in No Man’s Sky.



Physicist Sabine Hossenfelder Challenges the Evidence for Cosmological Fine-Tuning

Brian Miller and Stephen C. Meyer

 

https://evolutionnews.org/2020/10/physicist-sabine-hossenfelder-challenges-the-evidence-for-cosmological-fine-tuning/

 

 

October 16, 2020, 6:40 AM

Many have argued that some of the most compelling evidence for design in nature is the required fine-tuning of the universe to support life (see here, here, here). Namely, a life-permitting universe requires that the values of various parameters be set within a narrow range of possible values. Examples include the strength of the fundamental forces of nature (e.g., gravity), the masses of particles (e.g., electrons), and the universe’s initial conditions (e.g., initial energy density). The conclusion of fine-tuning has been accepted by a veritable who’s who of leading physicists including John Barrow, Bernard Carr, Paul Davies, and George Ellis.

Yet some scientists have strongly opposed this conclusion, due less to the scientific evidence than to its philosophical implications, specifically how it points to the universe having a creator. One of the leading proponents of fine-tuning is theoretical physicist Luke Barnes, and he has responded in detail to critics such as Victor Stenger and Sean Carroll (see here, here, here). More recently, theoretical physicist Sabine Hossenfelder entered the debate with a series of tweets, blog posts, and a journal article (see here, here, here) where she reiterates the assertion that any claims of the universe being fine-tuned for life are unscientific and fruitless.

Common Errors

Hossenfelder’s arguments represent common errors committed by critics, so they deserve special attention. Her weakest argument is that the assumption of fine-tuning has led to inaccurate predictions related to the discovery of new fundamental particles. This assertion is without merit since a few inaccurate predictions related to one set of parameters in no way challenge the generally accepted evidence of fine-tuning for a completely different set. Another weak argument is that the analyses of individual parameters, such as the mass of an electron, “don’t take into account simultaneous changes in several parameters and are therefore inconclusive.” This criticism completely overlooks Luke Barnes’s careful studies of the effect of altering multiple parameters at the same time. His results only reinforce the fine-tuning conclusion. 

A more substantive argument is that some details of the universe might be less restrictive than originally assumed. Hossenfelder cites a paper by Harnik, Kribs, and Perez that asserts that a universe without a weak force could still support life. Yet such claims have not withstood careful scrutiny. For instance, a paper by Louis Clavelli and Raymond White demonstrates that the authors of the initial paper only considered some of the consequences of removing the weak force. They ignored other consequences that would likely have precluded any possibility of the universe hosting complex life:

  • Core collapse supernovae would not occur, which are essential for the production and distribution of sufficient oxygen for a life-permitting universe. 

  • The time required for galaxy and star formation would delay their genesis until a time dangerously close to the age when the universe started to expand rapidly due to the cosmological constant/dark energy. After that point, planets would never form, thus precluding the possibility of life. 

  • Radioactive decay of heavy elements would not occur, so planets’ cores would cool more quickly. The rapid cooling would result in a lack of volcanic activity. This activity is essential for maintaining a stable greenhouse effect on Earth-like planets, so habitable temperatures would be far less probable.   

Her Strongest Objection

Hossenfelder’s strongest argument is that many fine-tuning parameters cannot in fact be quantified. On this basis, she contests the reality of fine-tuning as a feature of nature that has to be explained. To support her claim, she points out that many physicists calculate the degree of fine-tuning associated with different parameters by assuming that all possible values of different physical constants, for example, within a given range are equally probable. She then argues that physicists have no way of knowing whether or not this assumption is true.   

Perhaps, she suggests, some universe-generating mechanism exists that produces universes with, for example, certain gravitational force constants more frequently than universes with other gravitational force constants. Taking such biasing into account would clearly change the calculated degree of fine-tuning (or the probability) associated with any given range of values that correspond to a life-permitting universe. Thus she argues that the possibility of such biasing in the generation of universes implies that we cannot make accurate assessments of fine-tuning — and, therefore, that we cannot be sure that the universe actually is fine-tuned for life.

Nevertheless, Hossenfelder’s objection has an obvious problem. The allowable ranges of many physical constants and other parameters are incredibly narrow within the vast array of other possible values. For instance, the ratio of the strengths of the electromagnetic force to gravity must be accurate to 1 part in 10⁴⁰ (a number greater than a trillion trillion trillion). Consequently, any universe-generating mechanism capable of favoring the production of those specific and tiny ranges would itself need to be finely tuned in order to produce life with high probability. In other words, her universe-generating mechanism would require fine-tuning to ensure the biasing that would allow her to explain away fine-tuning in our universe. In summary, Hossenfelder’s criticisms represent no serious challenge to the fine-tuning argument. 

 


Wednesday, November 25, 2020

The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare

Astrobiology (Ahead of Print), Research Article, Open Access

https://www.liebertpub.com/doi/10.1089/ast.2019.2149

The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare

Andrew E. Snyder-Beattie, Anders Sandberg, K. Eric Drexler, and Michael B. Bonsall

Published Online: 19 Nov 2020. https://doi.org/10.1089/ast.2019.2149

Abstract

It is unknown how abundant extraterrestrial life is, or whether such life might be complex or intelligent. On Earth, the emergence of complex intelligent life required a preceding series of evolutionary transitions such as abiogenesis, eukaryogenesis, and the evolution of sexual reproduction, multicellularity, and intelligence itself. Some of these transitions could have been extraordinarily improbable, even in conducive environments. The emergence of intelligent life late in Earth's lifetime is thought to be evidence for a handful of rare evolutionary transitions, but the timing of other evolutionary transitions in the fossil record is yet to be analyzed in a similar framework. Using a simplified Bayesian model that combines uninformative priors and the timing of evolutionary transitions, we demonstrate that expected evolutionary transition times likely exceed the lifetime of Earth, perhaps by many orders of magnitude. Our results corroborate the original argument suggested by Brandon Carter that intelligent life in the Universe is exceptionally rare, assuming that intelligent life elsewhere requires analogous evolutionary transitions. Arriving at the opposite conclusion would require exceptionally conservative priors, evidence for much earlier transitions, multiple instances of transitions, or an alternative model that can explain why evolutionary transitions took hundreds of millions of years without appealing to rare chance events. Although the model is simple, it provides an initial basis for evaluating how varying biological assumptions and fossil record data impact the probability of evolving intelligent life, and also provides a number of testable predictions, such as that some biological paradoxes will remain unresolved and that planets orbiting M dwarf stars are uninhabitable.

7. Conclusions

It took approximately 4.5 billion years for a series of evolutionary transitions resulting in intelligent life to unfold on Earth. In another billion years, the increasing luminosity of the Sun will make Earth uninhabitable for complex life. Intelligence therefore emerged late in Earth's lifetime. Together with the dispersed timing of key evolutionary transitions and plausible priors, one can conclude that the expected transition times likely exceed the lifetime of Earth, perhaps by many orders of magnitude. In turn, this suggests that intelligent life is likely to be exceptionally rare. Arriving at an alternative conclusion would require either exceptionally conservative priors, finding additional instances of evolutionary transitions, or adopting an alternative model that can explain why evolutionary transitions took so long on Earth without appealing to rare stochastic occurrences. The model provides a number of other testable predictions, including that M dwarf stars are uninhabitable, that many biological paradoxes will remain unsolved without allowing for extremely unlikely events, and that, counterintuitively, we might be slightly more likely to find simple life on Mars.
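The flavor of the Bayesian argument in the abstract and conclusions above can be captured with a drastically simplified, single-transition sketch. To be clear, this is not the paper’s actual model (which treats multiple sequential transitions); the habitable window, the observed transition time, and the prior range below are all illustrative assumptions chosen only to show the shape of the reasoning.

```python
import numpy as np

# Drastically simplified, single-transition sketch of the style of argument above,
# not the paper's multi-transition model. All numbers are illustrative assumptions.
#
# Treat one evolutionary transition as an exponential waiting time with unknown
# rate lam. We observe that it happened at time t_obs within Earth's habitable
# window T -- an anthropic selection effect, since we only exist in histories
# where it happened in time.

T, t_obs = 5.0, 0.5                    # habitable window and observed time, in Gyr

# Uninformative (log-uniform) prior over the transition rate lam.
log_lam = np.linspace(np.log(1e-6), np.log(1e3), 20_000)
lam = np.exp(log_lam)
dlog = log_lam[1] - log_lam[0]

# Likelihood of the transition occurring at t_obs *given* that it occurred within T:
#     f(t_obs | lam, t < T) = lam * exp(-lam * t_obs) / (1 - exp(-lam * T))
likelihood = lam * np.exp(-lam * t_obs) / (1.0 - np.exp(-lam * T))

posterior = likelihood / (likelihood.sum() * dlog)        # density over log(lam)
expected_wait = ((1.0 / lam) * posterior).sum() * dlog    # posterior mean of 1/lam

print(f"Posterior expected (unconditioned) waiting time: ~{expected_wait:,.0f} Gyr")
print(f"versus a habitable window of only {T} Gyr")
# Even very slow rates are fully compatible with the anthropically selected
# observation, so the expected waiting time far exceeds the habitable window.
# The answer is sensitive to the prior's lower cutoff, which mirrors the paper's
# caveat that only "exceptionally conservative priors" would change the verdict.
```

The toy version formalizes the intuition that a single early-looking transition on one planet tells us very little about how fast such transitions typically occur, and that plausible uninformative priors push the expected waiting time far beyond Earth’s lifetime.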





Intelligent Life Really Can't Exist Anywhere Else

Hell, our own evolution on Earth was pure luck.

https://www.popularmechanics.com/science/a34771475/does-intelligent-life-exist-elsewhere/

 

BY CAROLINE DELBERT

NOV 24, 2020


In newly published research from Oxford University's Future of Humanity Institute, scientists study the likelihood of key times for evolution of life on Earth and conclude that it would be virtually impossible for that life to evolve the same way somewhere else.

Life has come a very long way in a very short time on Earth, relatively speaking—and scientists say that represents even more improbable luck for intelligent life that is rare to begin with.

For decades, scientists and even philosophers have chased many explanations for the Fermi paradox. How, in an infinitely big universe, can we be the only intelligent life we’ve ever encountered? Even on Earth itself, they wonder, how are we the only species that ever has evolved advanced intelligence?

 

 

There are countless naturally occurring but extremely lucky ways in which Earth is special, sheltered, protected, and encouraged to have evolved life. And some key moments of emerging life seem much more likely than others, based on what really did happen.


“The fact that eukaryotic life took over a billion years to emerge from prokaryotic precursors suggests it is a far less probable event than the development of multicellular life, which is thought to have originated independently over 40 times,” the researchers explain. They continue:

“The early emergence of abiogenesis is one example that is frequently cited as evidence that simple life must be fairly common throughout the Universe. By using the timing of evolutionary transitions to estimate the rates of transition, we can derive information about the likelihood of a given transition even if it occurred only once in Earth's history.”

In this paper, researchers from Oxford University’s illustrious Future of Humanity Institute continue to wonder how all this can be and what it means. The researchers include mathematical ecologists, who do a kind of forensic mathematics of Earth’s history.

In this case, they’ve used a Bayesian model of factors related to evolutionary transitions, which are the key points where life on Earth has turned from ooze to eukaryotes, for example, and from fission and other asexual reproduction to sexual reproduction, which greatly accelerates the rate of mutation and development of species by mixing DNA as a matter of course.

 

Most of these “evolutionary transitions” are poorly understood and have not been well studied by the scientists of likelihoods. And using their model, these scientists say that Earth’s series of winning Goldilocks lottery tickets would, on average, be expected to take far longer to come up than they actually did.

There’s an iconic scene in the 2001 movie Ocean’s Eleven where George Clooney explains the series of escalating improbabilities of his planned crime. After several hugely unlikely outcomes, he says, “Then it's a piece of cake: just three more guards with Uzis, and the most elaborate vault door conceived by man.” The unlikely hurdles to the rapid flourishing of complex life on Earth stack up in much the same way.

First, we win the lottery for surface temperature and protection from spaceborne dangers. Second, we win the lottery for the presence of building blocks of life. Third, we win the lottery for the right location for the right building blocks. That’s before anything like the most primitive single cell has even emerged.

Using some information we do know, like the age of Earth and the expected end of its habitable lifetime due to the expanding heat radius of our sun, these researchers have turned evolutionary transitions into a series of existential scratch-off tickets. Read the whole fascinating study here.