http://backreaction.blogspot.com/
Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples of such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.
Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.
As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least the means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems and because we can derive their likely distribution using the laws of nature discovered so far, together with initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.
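To make the contrast concrete, here is a minimal toy sketch in Python, with entirely made-up distributions standing in for an observed catalog of planetary systems. The only point is that, once you have many samples, the probability of a coincidence is a simple empirical frequency:

import random

random.seed(0)

def fake_system(n_planets=10):
    """One made-up system: semi-major axes log-uniform between 0.1 and 100 AU,
    inclinations scattered about 3 degrees around the system plane."""
    axes = [10 ** random.uniform(-1, 2) for _ in range(n_planets)]
    inclinations = [random.gauss(0.0, 3.0) for _ in range(n_planets)]
    return axes, inclinations

# A fake catalog standing in for real observations of many systems.
catalog = [fake_system() for _ in range(50_000)]

# Fraction of systems with all ten planets in "almost the same orbit" (axes within 10%)
same_orbit = sum(max(a) / min(a) < 1.1 for a, _ in catalog) / len(catalog)
# Fraction of systems with all ten planets in "almost the same plane" (within 10 degrees)
same_plane = sum(max(i) - min(i) < 10.0 for _, i in catalog) / len(catalog)

print(f"all in the same orbit: {same_orbit:.4f}   all in the same plane: {same_plane:.2f}")

With these made-up numbers, the same-orbit coincidence never shows up in the sample, while the same-plane one is perfectly common – which is all that “unlikely” and “likely” mean here.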
But this isn’t how arguments from naturalness are used in theory development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of the parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.
Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was already made clear in a 1994 paper by Anderson and Castano.
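To see how much hangs on the unspecified distribution, here is a minimal sketch with arbitrary toy priors (chosen for illustration only, not taken from the Anderson and Castano paper). The probability that two independent parameters nearly cancel changes by orders of magnitude depending on which distribution one assumes for them:

import random

random.seed(1)
N = 500_000
EPS = 1e-4  # a cancellation |a - b| < EPS counts as a "conspiracy"

def frac_cancelling(draw):
    """Monte Carlo estimate of P(|a - b| < EPS) for a, b drawn independently from draw()."""
    return sum(abs(draw() - draw()) < EPS for _ in range(N)) / N

priors = [
    ("uniform on [0, 1]", lambda: random.uniform(0.0, 1.0)),       # the implicit "interval of size one"
    ("log-uniform on [1e-8, 1]", lambda: 10 ** random.uniform(-8, 0)),
    ("Gaussian of width 1e-4", lambda: random.gauss(0.5, 1e-4)),
]

for name, draw in priors:
    print(f"{name:26s}  P(|a - b| < {EPS}) ~ {frac_cancelling(draw):.4f}")

The tacit uniform prior makes a cancellation to four digits look like a one-in-five-thousand fluke; the other two priors make it entirely unremarkable. Without data on how the parameters are distributed, none of these numbers is more correct than the others.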
The Standard Model of particle physics, or more specifically the mass of the Higgs boson, is unnatural in the sense described above, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.
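For concreteness, here is the usual back-of-the-envelope version of that unnaturalness, with rounded textbook numbers rather than anything computed in this post. Quantum corrections to the Higgs mass squared are generically of the order of the cutoff scale squared, so with a Planck-scale cutoff the bare term and the corrections have to cancel to roughly one part in 10^34:

# Rounded, back-of-the-envelope numbers illustrating the standard argument
m_h = 125.0        # observed Higgs mass in GeV
cutoff = 1.2e19    # Planck scale in GeV, taken here as the cutoff of the Standard Model

# Quantum corrections to the Higgs mass squared are generically of order cutoff**2,
# so the bare mass squared and the corrections must cancel down to m_h**2.
required_cancellation = m_h**2 / cutoff**2
print(f"bare term and corrections must cancel to about 1 part in {1 / required_cancellation:.1e}")
# prints roughly 9e+33, i.e. a cancellation over more than 30 decimal places

It is this degree of cancellation between two a priori unrelated contributions that the naturalness criterion declares improbable, and that supersymmetry was supposed to remove.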
The Large Hadron Collider (LHC), like several previous experiments, has not found any evidence for supersymmetric particles. This means that, according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently do not have reason to think that a larger particle collider would produce so-far unknown particles.
In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example of the unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, with the LHC now ruling it out, but I could have used other examples.
A lot of physicists, for example, believe that experiments have ruled out hidden-variable explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden-variable models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain with both dark matter and modified gravity). Yes, the devil’s in the details.
What is remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you have explained to them why they’re wrong. Examples like these leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.
I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.
Naturalness is an interesting case to keep an eye on. That’s because the LHC has now delivered data showing the idea was wrong – none of the predictions for supersymmetric particles, extra dimensions, tiny black holes, and so on came true. One possible way for particle physicists to deal with the situation is to amend the criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.
That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for more than ten years without much of a response.
I was therefore excited to see that James Wells has a new paper on the arXiv
Naturalness, Extra-Empirical Theory Assessments, and the Implications of Skepticism
James D. Wells
arXiv:1806.07289 [physics.hist-ph]
In his paper, Wells lays out the problems caused by the missing probability distribution with several simple examples. And in contrast to me, Wells isn’t a nobody; he’s a well-known American particle physicist and a professor at the University of Michigan.
So, now that a man has said it, I hope physicists will listen.