Sunday, May 27, 2018

Is nature continuous or discrete? How the atomist error was born

[[A little difficult, but an excellent example of how deep mistakes can go, and how subtle our world is - or I should say "looks" under this latest comment.]]

Thomas Nail is associate professor of philosophy at the University of Denver. His latest book is Lucretius I: An Ontology of Motion (2018).
The modern idea that nature is discrete originated in Ancient Greek atomism. Leucippus, Democritus and Epicurus all argued that nature was composed of what they called ἄτομος (átomos) or ‘indivisible individuals’. Nature was, for them, the totality of discrete atoms in motion. There was no creator god, no immortality of the soul, and nothing static (except for the immutable internal nature of the atoms themselves). Nature was atomic matter in motion and complex composition – no more, no less.
Despite its historical influence, however, atomism was eventually all but wiped out by Platonism, Aristotelianism and the Christian tradition that followed throughout the Middle Ages. Plato told his followers to destroy Democritus’ books whenever they found them, and later the Christian tradition made good on this demand. Today, nothing but a few short letters from Epicurus remain.
Atomism was not finished, however. It reemerged in 1417, when an Italian book-hunter named Poggio Bracciolini discovered a copy of an ancient poem in a remote monastery: De Rerum Natura (On the Nature of Things), written by Lucretius (c. 99-55 BCE), a Roman poet heavily influenced by Epicurus. This book-length philosophical poem in epic verse puts forward the most detailed and systematic account of ancient materialism that we’ve been fortunate enough to inherit. In it, Lucretius advances a breathtakingly bold theory on foundational issues in everything from physics to ethics, aesthetics, history, meteorology and religion. Against the wishes and best efforts of the Christian church, Bracciolini managed to get it into print, and it soon circulated across Europe.
This book was one of the most important sources of inspiration for the scientific revolution of the 16th and 17th centuries. Nearly every Renaissance and Enlightenment intellectual read it and became an atomist to some degree (they often made allowances for God and the soul). Indeed, this is the reason why, to make a long and important story very short, science and philosophy even today still tend to look for and assume a fundamental discreteness in nature. Thanks in no small part to Lucretius’ influence, the search for discreteness became part of our historical DNA. The interpretive method and orientation of modern science in the West literally owe their philosophical foundations to ancient atomism via Lucretius’ little book on nature. Lucretius, as Stephen Greenblatt says in his book The Swerve (2011), is ‘how the world became modern’.
There is a problem, however. If this story is true, then modern Western thought is based on a complete misreading of Lucretius’ poem. It was not a wilful misreading, of course, but one in which readers committed the simple error of projecting what little they knew second-hand about Greek atomism (mostly from the testimonia of its enemies) onto Lucretius’ text. They assumed a closer relationship between Lucretius’ work and that of his predecessors than actually exists. Crucially, they inserted the words ‘atom’ and ‘particle’ into the translated text, even though Lucretius never used them. Not even once! A rather odd omission for a so-called ‘atomist’ to make, no? Lucretius could easily have used the Latin words atomus (smallest particle) or particula (particle), but he went out of his way not to. Despite his best efforts, however, the two very different Latin terms he did use, corpora (matters) and rerum (things), were routinely translated and interpreted as synonymous with discrete ‘atoms’.
Further, the moderns either translated out or ignored altogether the nearly ubiquitous language of continuum and folding used throughout his book, in phrases such as ‘solida primordia simplicitate’ (simplex continuum). As a rare breed of scholar interested in both classical texts and quantum physics, the existence of this material continuum in the original Latin struck me quite profoundly. I have tried to show all of this in my recent translation and commentary, Lucretius I: An Ontology of Motion (2018), but here is the punchline: this simple but systematic and ubiquitous interpretive error constitutes what might well be the single biggest mistake in the history of modern science and philosophy.
This mistake sent modern science and philosophy on a 500-year quest for what Sean Carroll in his 2012 book called the ‘particle at the end of the universe’. It gave birth to the laudable virtues of various naturalisms and materialisms, but also to less praiseworthy mechanistic reductionisms, patriarchal rationalisms, and the overt domination of nature by humans, none of which can be found in Lucretius’ original Latin writings. What’s more, even when confronted with apparently continuous phenomena such as gravity, electric and magnetic fields, and eventually space-time, Isaac Newton, James Maxwell and even Albert Einstein fell back on the idea of an atomistic ‘aether’ to explain them. All the way back to the ancients, aether was thought to be a subtle fluid-like substance composed of insensibly tiny particles. Today, we no longer believe in the aether or read Lucretius as an authoritative scientific text. Yet in our own way, we still confront the same problem of continuity vs discreteness originally bequeathed to us by the moderns: in quantum physics.
Theoretical physics today is at a critical turning point. General relativity and quantum field theory are the two biggest parts of what physicists now call ‘the standard model’, which has enjoyed incredible predictive success. The problem, however, is that they have not yet been unified as two aspects of one overarching theory. Most physicists think that such unification is only a matter of time, even though the current theoretical frontrunners (string theory and loop quantum gravity) have yet to produce experimental confirmations.
Quantum gravity is of enormous importance. According to its proponents, it stands poised to show the world that the ultimate fabric of nature (space-time) is not continuous at all, but granular, and fundamentally discrete. The atomist legacy might finally be secured, despite its origins in an interpretive error.
There is just one nagging problem: quantum field theory claims that all discrete quanta of energy (particles) are merely the excitations or fluctuations in completely continuous quantum fields. Fields are not fundamentally granular. For quantum field theory, everything might be made of granules, but all granules are made of folded-up continuous fields that we simply measure as granular. This is what physicists call ‘perturbation theory’: the discrete measure of that which is infinitely continuous and so ‘perturbs one’s complete discrete measurement’, as Frank Close puts it in The Infinity Puzzle (2011). Physicists also have a name for the sub-granular movement of this continuous field: ‘vacuum fluctuations’. Quantum fields are nothing but matter in constant motion (energy and momentum). They are therefore never ‘nothing’, but more like a completely positive void (the flux of the vacuum itself) or an undulating ocean (appropriately called ‘the Dirac sea’) in which all discrete things are its folded-up bubbles washed ashore, as Carlo Rovelli puts it in Reality Is Not What it Seems (2016). Discrete particles, in other words, are folds in continuous fields.
The answer to the central question at the heart of modern science, ‘Is nature continuous or discrete?’ is as radical as it is simple. Space-time is not continuous because it is made of quantum granules, but quantum granules are not discrete because they are folds of infinitely continuous vibrating fields. Nature is thus not simply continuous, but an enfolded continuum.
This brings us right back to Lucretius and our original error. Working at once within and against the atomist tradition, Lucretius put forward the first materialist philosophy of an infinitely continuous nature in constant flux and motion. Things, for Lucretius, are nothing but folds (duplex), pleats (plex), bubbles or pores (foramina) in a single continuous fabric (textum) woven by its own undulations. Nature is infinitely turbulent or perturbing, but it also washes ashore, like the birth of Venus, in meta-stable forms – as Lucretius writes in the opening lines of De Rerum Natura: ‘Without you [Venus] nothing emerges into the sunlit shores of light.’ It has taken 2,000 years, but perhaps Lucretius has finally become our contemporary.

Two recent critical views of Biblical Criticism 1/2/19

David Stern studied in Yeshivat Har Etzion, took Bible courses at the Herzog Teachers College, and majored in Jewish Studies at Yeshiva University. He taught Bible in a number of different contexts, and is currently pursuing a career in medicine.


. . . all hypotheses are working proposals until confirmed in detail, and . . . many must be discarded while others will require drastic overhauling in the face of new evidence. There is a grave temptation to hold on to a hypothesis that has served well in the past, and the more serious temptation to bend data to fit, or to dismiss what cannot be accommodated into the system. The commitment must always be to observable or discoverable data, and not to a hypothesis, which is always expendable.1

In the 19th century, scholars of the Bible posited the Documentary Hypothesis. According to this theory, the Torah is a composite of literary works, or sources, rather than the work of a single author. Proponents of this theory, the "source critics," identify these sources by highlighting sections of the Torah that display different writing styles, ideological assumptions, word choices, particularly with regard to Divine names, and any number of other differences. Source critics attribute the sources to authors coming from different time periods and ideological backgrounds, and have named them "J" (for passages that use the Tetragrammaton), "E" (for passages that use Elohim), "P" (Priestly) and "D" (Deuteronomist). Until recently, this theory was considered the unshakable bedrock upon which any academic Bible study was to be built.

The mid-1980s and the early 1990s witnessed a resurgence of biblical scholars challenging, revising, and even rejecting the Documentary Hypothesis. First and foremost, scholars relinquished claims to a scientific methodology. In Empirical Models for Biblical Criticism, 2 Jeffrey Tigay insists that "The degree of subjectivity which such hypothetical [source critical] procedures permit is notorious." In fact, he characterizes these procedures as "reading between the lines." Moreover, Edward Greenstein maintains that source critical analysis is analogous to the blind men and the elephant: "Each of five blind men approaches a different part of an elephant's anatomy. Perceiving only part of the elephant, each man draws a different conclusion as to the identity of what he encounters."3 According to the preceding remarks, not only are source critical methods subjective, but they also account for only a fraction of the total evidence. Especially when analyzing a literary corpus "as bulky and complex as an elephant,"4 a system which fails to consider all the evidence, and wherein "scholars shape the data into the configurations of their own imagination,"5 hardly warrants the label scientific.

While surveying many conflicting proposals for the nature of the hypothetical sources, Gerhard Larsson gives a more specific account of the methodological shortcomings. He says that
 . . . there is no sound objective method for recognizing the different sources, there is also no real consensus about the character and extent of sources like J and E, [and] no unity concerning limits between original sources and the insertions made by redactors.6

Rather, as Greenstein says, "each scholar defines and adapts the evidence according to his own point of view."7 Such an approach not only yields results which are, as Tigay highlights, "hypothetical (witness the term 'documentary hypothesis'),"8 but, as David Noel Freedman declares, allows and encourages, "the pages of our literature [to be] filled with endless arguments between scholars who simply reiterate their prejudices."9

The lack of a sound and rigorous methodology leads scholars to produce varying and even contradictory theories, which ultimately undermine the enterprise as a whole. In addition to Wellhausen's four sources J, E, P, and D, some scholars speculate about sources labeled Lay (L), Nomadic (N), Kenite (K), Southern or Seir (S) and the "foundational source" Grundlage (G). Not only do scholars multiply the number of sources, some, applying the same methodology, fragment J, E, P, and D into further subdivisions, and view these documents as products of "schools" which "shaped and reshaped these documents by further additions."10 After summarizing the different opinions,11 Pauline Viviano says,
The more "sources" one finds, the more tenuous the evidence for the existence of continuous documents becomes, and the less likely that four unified documents ever existed. Even for those able to avoid skepticism and confusion in the face of the ever increasing number of sources, the only logical conclusion seems to be to  move away from [Wellhausen's] Documentary Hypothesis toward a position closer to the Fragmentary Hypothesis.12

In addition to being a victim of its own ambition, the Documentary Hypothesis suffered many challenges, from the time of its inception through contemporary scholarship. Scholars have contested and even refuted the arguments from Divine names, doublets, contradictions, late words, late morphology, Aramaisms, and every other aspect of the Documentary Hypothesis.13 As a result, some scholars denounce source criticism in toto,14 while others posit alternate hypotheses. However, one wonders whether these hypotheses will share the same fate as the ones they have just disproved.

These problems have brought source criticism to a sad state. In Greenstein's words, "Many contemporary Biblicists are experiencing a crisis in faith . . . . The objective truths of the past we increasingly understand as the creations of our own vision."15 He continues, "all scholarship relies on theories and methods that come and go, and . . . modern critical approaches are no more or less than our own midrash."16 This "crisis," or "breakdown" to use Jon Levenson's characterization, has encouraged droves of scholars to study the Bible synchronically, a method which effectively renders source criticism irrelevant.

 Among other advantages, the synchronic method of biblical study encourages scholars to detect textual phenomena which, upon reflection, seem obvious, but have not been recognized until recently. Levenson explains these recent detections as follows:
Many scholars whose deans think they are studying the Hebrew Bible are, instead, concentrating on Syro-Palestinian archeology, the historical grammar of Biblical Hebrew, Northwest Semitic epigraphy, or the like – all of which are essential, but no combination of which produces a Biblical scholar. The context often supplants the text and, far worse, blinds the interpreters to features of the text that their method has not predisposed them to see.17

This statement could not be truer when referring to source criticism, and to this end Larsson says, albeit in a harsher tone: "Source criticism obscures the analysis. Only when the text is considered as a whole do the special features and structures of the final version emerge."18

The rediscovery of the Bible's special features and structures has proven to be extremely rewarding in its own right, and, in addition, it has recurrently forced scholars to revise and even reject source critical theories. Larsson states this quite clearly: "Many scholars have found that when the different [patriarchal] cycles are studied in depth it is no longer possible to support the traditional documentary hypothesis."19 Even the Flood narrative, traditionally explained as two independent strands (J and P) woven together, has been unified by scholars who perceive a literary structure integrating the various sections of the story.20 In fact, a statistical analysis of linguistic features in Genesis led by Yehuda Radday and Haim Shore demonstrates that
. . . with all due respect to the illustrious documentarians past and present, there is massive evidence that the pre-biblical triplicity of Genesis, which their line of thought postulates to have been worked over by a late and gifted editor into a trinity, is actually a unity.21

NOTES
1. D. Freedman, Divine Commitment and Human Obligation: Selected Writings of David Noel Freedman. (Grand Rapids, Michigan: Eerdmans, 1997) p. 160.
2. J. Tigay, Empirical Models for Biblical Criticism. (Philadelphia: University of Pennsylvania Press, 1985) p. 2. He says this despite the fact that his book attempts to demonstrate that other features of source criticism are methodologically sound.
3. E. Greenstein, "Formation of the Biblical Narrative Corpus," AJS Review 15,1 (1990) p. 164. 
4. Ibid. 
5. E. Greenstein, "Biblical Studies in a State," in The State of Jewish Studies (Detroit: Wayne State University, 1990) p. 30. 
6. G. Larsson, "Documentary Hypothesis and Chronological Structure of the Old Testament," Zeitschrift fur Die Alttestamentliche Wissenschaft 97 (1985) p. 319. 
7. Greenstein, "Biblical Studies in a State," p. 31. 
8. Tigay, p. 2. 
9. Freedman, p. 153. 
10. P. Viviano, An Introduction to Biblical Criticisms and their Application. (Louisville, Kentucky: Westminster/John Knox Press, 1993) p. 43. 
11. Ibid. pp. 43-44. 
12. Ibid p. 44. 
13. See Viviano, especially note 29; L. Walker, A Tribute to Gleason Archer. (Chicago: Moody Press, 1986); R. Whybray, Journal for the Study of the Old Testament Supplement 53. (England: Sheffield, 1987); and many others.
14. See, for example, C. Brichto, The Names of God: Poetic Readings in Biblical Beginnings. (New York: Oxford University Press, 1998) p. ix.
15. Greenstein, "Biblical Studies in a State," p. 36. 
16. Ibid p. 37. 
17. J. Levenson, The State of Jewish Studies (Detroit: Wayne State University, 1990) p. 51. 
18. Larsson, p. 322. 
19. Ibid. 
20. J. Emerton, "An Examination of Some Attempts to Defend the Unity of the Flood Narrative in Genesis," Vetus Testamentum 38 (1988) pp. 1-21.
21. J. Wenham, "Genesis: An Authorship Study and Current Pentateuchal Criticism," Journal for the Study of the Old Testament 42 (1988) p. 10.  

Excerpts from the introduction and conclusion of Inconsistency in the Torah: Ancient Literary Convention and the Limits of Source Criticism by Joshua A. Berman. Emphasis is mine throughout.


On a Sunday morning in May 2013, nearly one hundred scholars from around the world awaited the beginning of the proceedings of a conference titled "Convergence and Divergence in Pentateuchal Theory: Bridging the Academic Cultures of Israel, North America, and Europe."
The conference opened with a report of the group’s accomplishments over that time. Speaking on behalf of the conveners, Bernard M. Levinson explained that the discipline is in a state of fragmentary discourse, where scholars talk past each other, and mean different things even when they use the same terms. As he put it, “scholars tend to operate from such different premises, employing such divergent methods, and reaching such inconsistent results, that meaningful progress has become impossible. The models continue to proliferate, but the communication seems only to diminish.”1

A colleague sitting next to me commented that he was not surprised to hear this description of gridlock and crisis. As he put it, this should have been the expected result of bringing together so many accomplished and senior members of the same field. If you are a scholar whose entire output has consisted of studies predicated on, say, source criticism, it is probably quite difficult for you to imagine that perhaps sources, classically conceived, do not exist. The American novelist Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it”; and we may apply Sinclair’s observation to the world of academic publishing and say, “It is difficult to get a scholar to understand something, when his entire scholarly oeuvre depends on his not understanding it.” Put differently, perhaps this deadlock stems from what Thomas Kuhn explains in his The Structure of Scientific Revolutions: paradigms do not shift overnight. When scholars have worked with a given paradigm for a long time, he writes, the problems of the paradigm are never quickly acknowledged. The old paradigm will not be discarded until another paradigm is proposed that is demonstrably more compelling.3 We stand today in diachronic study of the Bible at a midpoint in this process. Problems have been identified with the reigning paradigms, yet no alternatives have been proposed that are demonstrably better. In this intellectual climate, it is to be expected that different scholars will stick to their different academic guns, so to speak. In this volume I offer no panacea to the questions and issues raised concerning the formation of the Torah. Instead, I offer a contribution to a recent and growing movement within historical-critical scholarship on the Torah.

The root of the problem heretofore, according to this movement, is that scholars have rooted their compositional theories for the growth of the biblical text entirely in their own intuition of what constitutes literary unity. For those of us working in this new movement, the time has come to root compositional theory in the so-called empirical findings of the writings of the ancient Near East. We must canvass and analyze documented examples of compositional growth and editing across a wide field of ancient Near Eastern texts, both within ancient Israel and outside it.4 How did these scribes go about editing and revising revered texts? What editorial trends do we see when we compare earlier versions of a text to later ones? For these scholars it is an axiom that the Hebrew Bible, and with it the Torah, is a product of an ancient Near Eastern milieu, which deeply influences not only its content, but also its poetics and process of composition. The turn by these scholars toward empirical models for compositional theory has met with resistance in some quarters, because essentially, these scholars claim that the only way to right the ship is by jettisoning many sacred cows of compositional theory. Whereas other scholars have examined the editorial practices of ancient scribes, I seek here to question our own notions of consistency and unity in a text, in light of what we discover from the writings of the ancient Near East. Scholars have long known that this corpus can surprise us with the seeming “inconsistencies” that it yields.

A foundational staple of early Pentateuchal criticism maintained that the disparity of divine names found in the Torah was itself proof positive of composite authorship, and a key to determining and delimiting its sources.5 This axiom had to be walked back, however, in light of evidence that the ancients were quite comfortable referring to the same deity by multiple names, even within a single passage. Witness what we find in Tablet IV of the Ugaritic Ba’al Cycle: “Baʽlu’s enemies grasp hold of (the trees of ) the forest, Haddu’s adversaries (grasp hold of ) the flanks of the mountain(s). Mighty Baʽlu speaks up: Enemies of Haddu, why do you shake with fear?” (4, vi.36–vii.38). In like fashion, the alternation in address between singular and plural pronouns, sometimes referred to as Numeruswechsel, was thought to designate various sources or strata in the Hebrew Bible. However, the phenomenon is also found in the Sefire treaty, that is, in a literary setting where we cannot propose diachronic composition. In Stele III, the suzerain commands the vassal to hand over fugitives, warning him that if he fails to do so, “You (pl.) shall have been unfaithful to all the gods of the treaty” (III:4; cf. similarly, ll. 16 and 23). However, further on, the suzerain demands freedom of passage in the vassal’s territory, and warns him that if he fails to do so, “you (s.) shall be unfaithful to this treaty” (cf. similarly ll. 14, 20, and 27).

These examples serve as a warning flag for scholars looking to parse the text on the basis of their own notions of literary unity. The ancient text is a minefield of literary phenomena that are culturally dependent. The diachronic scholar who treads there based solely on his own modern notions of literary unity risks serious interpretive missteps. Passages such as that from the Baal Cycle above, or from the Sefire treaty, can be safely assumed to have been written by a single hand. The rhetoric we find in these comparative materials can offer a control. Of course, the presence of these phenomena elsewhere does not prove that the Torah must be read this way as well. Even if we assume that the passage from the Baal Cycle cited above was composed by one hand, this does not mandate that the presence of two (or even three) divine names in Genesis must all stem from the same authorial hand—but it should, at the very least, place a check on the confidence that a modern exegete can have when approaching the biblical text and encountering literary phenomena that seem inconsistent. Perhaps the most prudent lesson from such examples is that we must attain competency as readers before we engage the text—and this we can do only by canvassing the available cognate materials.
The evidence that I adduce in the first two parts of the book leads me to Part III, Renewing Pentateuchal Criticism, in which I critique current methodology and seek a new path forward. The Jerusalem conference to which I alluded earlier was subtitled: “Bridging the Academic Cultures of Israel, North America, and Europe.” There is a fundamentally correct understanding in this framing of the current state of the field: scholars do not work in a vacuum. Rather, they ply their trade within specific academic cultures of first assumptions. In his introductory comments at the Jerusalem conference, Bernard Levinson noted that American and Israeli scholars contend that “the current proliferation of European hypotheses and multiple layers of redactional development is theory driven and self-generated without adequate consideration of comparative literary evidence.” However, the Jerusalem conference devoted no time to laying bare these cultural axioms. What is it, for example, about the academic culture of German-speaking lands that leads scholars there to rally around a certain set of methodological presumptions? An awareness of our cultural presuppositions and of the intellectual heritage to which we are heirs is essential if we are to be self-critical about our own work.

I seek to address these issues in chapter 11, “A Critical Intellectual History of the Historical-Critical Paradigm in Biblical Studies.” My goal is to understand the origins of the intellectual commitments that shape the discipline today, and its reluctant disposition toward empirical models of textual growth. I examine how theorists over three centuries have entertained the most fundamental questions: what is the goal of the historical-critical study of the Hebrew Bible? What is the probative value of evidence internal to the text itself, relative to evidence from external sources? What is the role of intuition in the scholar’s work? What is the role of methodological control? The axioms that governed nineteenth-century German scholarship were at a great remove from those that governed earlier historical-critical scholarship, in the thought of critics such as Spinoza. These axioms were based in intellectual currents that were particular to the nineteenth century, and especially so in Germany. From there, I offer a brief summary of the claims of contemporary scholars who are looking toward empirical models to reconstruct the textual development of Hebrew scriptures. I conclude by demonstrating how this vein of scholarship undermines an array of nineteenth-century intellectual assumptions, but would have been quite at home in the earlier periods of the discipline’s history, and call for a return to Spinozan hermeneutics.

I continue my critique of current historical-critical method in chapter 12, “The Abuses of Negation, Bisection, and Suppression in the Dating of Biblical Texts: The Rescue of Moses (Exodus 2:1–10).” Here I maintain that the scholarly aim to clearly delineate and definitively date layers in a text unwittingly leads to three malpractices of historical-critical method. Handling complex evidence in a reductive fashion, scholars routinely engage in what I call, respectively, the undue negation of evidence, the suppression of evidence, and the forced bisection of a text. Scholarship on the account of the rescue of Moses serves as an illustration. The division of the Genesis flood account is one of the most celebrated achievements of modern biblical criticism. In chapter 13, “Source Criticism and Its Biases: The Flood Narrative of Genesis 6–9,” I take a critical look at the source-critical paradigm and examine its hermeneutics. Here, too, we will see that historical-critical scholarship applies a series of double standards that all work in concert to support the source-critical aims and results. Moreover, it consistently suppresses evidence adduced from cognate materials (particularly from the Mesopotamian version of the flood story contained in Tablet XI of the Gilgamesh Epic) that threatens its validity by simply ignoring it, or otherwise negating the validity of that evidence through unwarranted means. In the Conclusion, I offer a new path forward for historical-critical study of the Hebrew Bible, calling for three imperatives. First, I suggest an epistemological shift that soberly acknowledges the limits of what we may determine, both in terms of the dates of the texts we study, and of the prehistory of those texts.

A New Path Forward
The methodological impasse gripping the field, its extreme fragmentation and seemingly unbridgeable diversity, should give us pause and encourage us to explore some of the fundamental assumptions that have girded diachronic study for two centuries. To renew the field of pentateuchal criticism—and indeed, the historical-critical paradigm in biblical studies more broadly—I believe that historical-critical scholars will need to adopt three new priorities in their work. The first is an epistemological shift toward modesty in our goals and toward accepting contingency in our results. . . . As I demonstrated in chapter 11, historical criticism is caught in a vicious loop. The holy grail of historical criticism of the Hebrew Bible is the attainment of answers to four fundamental questions: who wrote this text? When was it written? What are the historical circumstances that occasioned its composition? What were the stages of the text’s development? Because these questions are fundamental to the discipline’s self-identity, scholars take it as axiomatic that they possess the capacity to provide reliable answers to these questions. The possibility that we may not have this capacity is not widely entertained, for if that really were the case, the very enterprise of understanding the biblical texts in historical context is threatened. This in turn leads to scholarship biased toward producing results that answer these questions. In chapter 12, I showed how scholarship on the question of the dating of the account of the rescue of Moses in Exodus 2:1–10 handles complex evidence in a reductive fashion so that the passage may be firmly dated. And in chapter 13 I highlighted the many ways in which source-critical scholars display a proclivity and predisposition to parse evidence in an unfounded way, but one that serves the goal of discovering within the Genesis flood narrative two original sources.

 We are committed to two methodological callings. As biblicists, we are called to examine the texts we work with in their historical and social settings. But no less, as scientific investigators, we are called to put forth arguments only to the degree that they are supported by the evidence. It must be starkly admitted that these two callings stand in fundamental tension, especially when we are dealing with the texts of the Pentateuch, where the events recorded have scant attestation outside of the Hebrew Bible. Out of a healthy commitment to examining texts within a specific historical setting we too often compromise on the second calling: to offer a specific historical setting for a text only when the evidence for it is strong and unambiguous. Responsible scholarship mandates a proper ordering of first questions. The classical questions that address historical context are vital ones. But the sine qua non of any critical quest must begin with the frank and sober question: what are the limits of what we may know? What will be the controls in place that check our conclusions? The warning of the eminent historian Arnaldo Momigliano is in order: “The most dangerous type of researcher in any historical field is the man who, because he is intelligent enough to ask a good question, believes that he is good enough to give a satisfactory answer.”

This will require biblicists to think differently about their work. To illustrate just how difficult—and yet necessary—this is, I draw attention to a similar change of mindset now underway in a distant branch of the academy. No field of academic study today is in as much turmoil as the field of economics. The financial collapse of 2008 was predicted by only a handful of doomsday prophets, who were largely ignored. The Nobel laureate in economics, Paul Krugman, asks how it is that the entire guild of economists—himself included—got it so wrong. Krugman concludes: “As I see it the economics profession went astray because economists, as a group, mistook beauty for truth. The central cause of the profession’s failure,” he goes on, “was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.” Krugman details how the neoclassical belief in markets had an allure because it allowed scholars to do macroeconomics with clarity, completeness, and beauty. The approach seemed to explain so many things. Krugman’s analysis of what happened to an entire guild of economists should give us pause for reflection as biblicists. Could it be that we, too, fall victim to the allure of mistaking beauty for truth? By positing the date of a text and the stages of its composition, we create an elegant narrative of the text’s history and of the evolution of religious ideas in ancient Israel. But could we, too, be mistaking beauty for truth—the truth that dating biblical texts and recovering their stages of growth is harder than we would like to admit? Or the truth that we actually have limited access to the minds and hearts of the scribes of ancient Israel, and cannot know the full range of motivations that drove them to compose the texts they did?
Krugman writes, “if the economics profession is to redeem itself, it will have to reconcile itself to a less alluring vision,” and that “what’s almost certain is that economists will have to learn to live with messiness.” That constituted an incredibly bitter pill for economists to swallow. Possessed of elegant models that claimed to predict economic performance in the future, economists became indispensable figures, necessary to the financial prosperity of those who would hire their services. To admit that the economic world is “messy” is, essentially, to concede defeat. If, at best, economists can describe only the past, then the field of economics loses its lofty status and is reduced to merely a subsection of the history department.

Perhaps we, too, “will have to learn to live with messiness” and avoid the pitfall of mistaking beauty for truth. Perhaps we, too, may have to settle for the realization that we cannot work back from a received text and reconstruct its compositional history with clarity. Finally, this epistemological shift will necessitate a rethinking of the habits of biblicists with regard to the relationship between dating a text and uncovering its meaning. There exists a pervasive but mistaken assumption that if an idea is particularly relevant to one historical era, it must have originated in that era. The covenant curses of Leviticus 26 and Deuteronomy 28 warn of exile; but this does not ensure that these texts were composed following the exile.
After all, the Bible claims that Israel’s origins are outside the land of Canaan, and that the land is a divine gift, denied to the Amorites because of their wickedness (Gen 15:16; Deut 9:5). Genesis tells us that Adam himself had been exiled from the Garden of Eden (Gen 3:23–24). The notion of exile is integral to the warp and woof of many Pentateuchal texts. Another example: the boundaries of the promised land in Gen 15:18–21 correspond to those of the empire of David and Solomon, as portrayed in 1 Kgs 4:21. For some, this suggests that the border list of Gen 15:18–21 originates from this period. But, in theory, writers from either earlier or later periods could also have yearned for stronger borders and greater hegemony than was available in their own day. Moreover, the impulse to date a text based on an idea it presents overlooks the fact that these texts were copied and handed down across many generations and many historical circumstances. These texts endured precisely because they were seen as transcending the original setting of their composition and offering insights into the human condition and the condition of the people Israel. The inheritors of these texts deemed them relevant long after the original historical and social conditions of their composition were forgotten. Phenomenologists of religion such as Moshe Idel and Mircea Eliade have taught us that we need to be open to the possibility that the intuitions of a religious text can be understood as timeless.

Tuesday, May 22, 2018

Self-verifying theories

[[So all the philosophical baloney about the impossibility of knowing that you are consistent etc. is nonsense.....]]

From Wikipedia, the free encyclopedia
Self-verifying theories are consistent first-order systems of arithmetic much weaker than Peano arithmetic that are capable of proving their own consistency. Dan Willard was the first to investigate their properties, and he has described a family of such systems. According to Gödel's incompleteness theorem, these systems cannot contain the theory of Peano arithmetic, and in fact, not even its weak fragment Robinson arithmetic; nonetheless, they can contain strong theorems.
In outline, the key to Willard's construction of his system is to formalise enough of the Gödel machinery to talk about provability internally without being able to formalise diagonalisation. Diagonalisation depends upon being able to prove that multiplication is a total function (and in the earlier versions of the result, addition also). Addition and multiplication are not function symbols of Willard's language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these. Here, one cannot prove the Π₂ sentence expressing totality of multiplication:

∀x ∀y ∃z multiply(x, y, z),

where multiply is the three-place predicate which stands for z = x·y. When the operations are expressed in this way, provability of a given sentence can be encoded as an arithmetic sentence describing termination of an analytic tableau. Provability of consistency can then simply be added as an axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to ordinary arithmetic.
We can add any true Π₁ sentence of arithmetic to the theory and still remain consistent.
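For context on why that last claim fits with self-verification (a standard fact of arithmetic, not stated in the excerpt above): a Π₁ sentence asserts that a decidable property holds of every number, and a theory's consistency statement has exactly this shape, since it says that no number codes a proof of a contradiction. In sketch:

```latex
% A \Pi_1 sentence: universal quantifiers over a decidable matrix.
\forall x \, \varphi(x), \qquad \varphi \text{ decidable}

% Consistency of a theory T is \Pi_1: no n codes a T-proof of a contradiction.
\mathrm{Con}(T) \;:\equiv\; \forall n \, \neg\,\mathrm{Prf}_T\!\left(n, \ulcorner 0 = 1 \urcorner\right)
```

Since Con(T) is Π₁, adding it as an axiom is an instance of the general observation that true Π₁ sentences can be adjoined without destroying consistency.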

Monday, May 21, 2018

What Does Quantum Physics Actually Tell Us About the World?
May 8, 2018
What Is Real?
The Unfinished Quest for the Meaning of Quantum Physics
By Adam Becker
370 pp. Basic Books. $32.
Are atoms real? Of course they are. Everybody believes in atoms, even people who don’t believe in evolution or climate change. If we didn’t have atoms, how could we have atomic bombs? But you can’t see an atom directly. And even though atoms were first conceived and named by ancient Greeks, it was not until the last century that they achieved the status of actual physical entities — real as apples, real as the moon.
The first proof of atoms came from 26-year-old Albert Einstein in 1905, the same year he proposed his theory of special relativity. Before that, the atom served as an increasingly useful hypothetical construct. At the same time, Einstein defined a new entity: a particle of light, the “light quantum,” now called the photon. Until then, everyone considered light to be a kind of wave. It didn’t bother Einstein that no one could observe this new thing. “It is the theory which decides what we can observe,” he said.
Which brings us to quantum theory. The physics of atoms and their ever-smaller constituents and cousins is, as Adam Becker reminds us more than once in his new book, “What Is Real?,” “the most successful theory in all of science.” Its predictions are stunningly accurate, and its power to grasp the unseen ultramicroscopic world has brought us modern marvels. But there is a problem: Quantum theory is, in a profound way, weird. It defies our common-sense intuition about what things are and what they can do.
“Figuring out what quantum physics is saying about the world has been hard,” Becker says, and this understatement motivates his book, a thorough, illuminating exploration of the most consequential controversy raging in modern science.
The debate over the nature of reality has been growing in intensity for more than a half-century; it generates conferences and symposiums and enough argumentation to fill entire journals. Before he died, Richard Feynman, who understood quantum theory as well as anyone, said, “I still get nervous with it...I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” The problem is not with using the theory — making calculations, applying it to engineering tasks — but in understanding what it means. What does it tell us about the world?

From one point of view, quantum physics is just a set of formalisms, a useful tool kit. Want to make better lasers or transistors or television sets? The Schrödinger equation is your friend. The trouble starts only when you step back and ask whether the entities implied by the equation can really exist. Then you encounter problems that can be described in several familiar ways:
Wave-particle duality. Everything there is — all matter and energy, all known forces — behaves sometimes like waves, smooth and continuous, and sometimes like particles, rat-a-tat-tat. Electricity flows through wires, like a fluid, or flies through a vacuum as a volley of individual electrons. Can it be both things at once?
The uncertainty principle. Werner Heisenberg famously discovered that when you measure the position (let’s say) of an electron as precisely as you can, you find yourself more and more in the dark about its momentum. And vice versa. You can pin down one or the other but not both.
The measurement problem. Most of quantum mechanics deals with probabilities rather than certainties. A particle has a probability of appearing in a certain place. An unstable atom has a probability of decaying at a certain instant. But when a physicist goes into the laboratory and performs an experiment, there is a definite outcome. The act of measurement — observation, by someone or something — becomes an inextricable part of the theory.
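The two quantitative claims lurking behind the second and third puzzles can be written down compactly. These are the standard textbook formulas, supplied here for reference rather than drawn from the review itself:

```latex
% Heisenberg's uncertainty relation: the spreads in position and momentum
% of a quantum state cannot both be made arbitrarily small.
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

% Born rule: the wavefunction \psi yields only probabilities; the chance of
% finding the particle near position x is the squared amplitude.
P(x) = |\psi(x)|^{2}
```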

The strange implication is that the reality of the quantum world remains amorphous or indefinite until scientists start measuring. Schrödinger’s cat, as you may have heard, is in a terrifying limbo, neither alive nor dead, until someone opens the box to look. Indeed, Heisenberg said that quantum particles “are not as real; they form a world of potentialities or possibilities rather than one of things or facts.”
This is disturbing to philosophers as well as physicists. It led Einstein to say in 1952, “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.”
So quantum physics — quite unlike any other realm of science — has acquired its own metaphysics, a shadow discipline tagging along like the tail of a comet. You can think of it as an “ideological superstructure” (Heisenberg’s phrase). This field is called quantum foundations, which is inadvertently ironic, because the point is that precisely where you would expect foundations you instead find quicksand.
Competing approaches to quantum foundations are called “interpretations,” and nowadays there are many. The first and still possibly foremost of these is the so-called Copenhagen interpretation. “Copenhagen” is shorthand for Niels Bohr, whose famous institute there served as unofficial world headquarters for quantum theory beginning in the 1920s. In a way, the Copenhagen interpretation is an anti-interpretation. “It is wrong to think that the task of physics is to find out how nature is,” Bohr said. “Physics concerns what we can say about nature.” Nothing is definite in Bohr’s quantum world until someone observes it. Physics can help us order experience but should not be expected to provide a complete picture of reality. The popular four-word summary of the Copenhagen interpretation is: “Shut up and calculate!”
For much of the 20th century, when quantum physicists were making giant leaps in solid-state and high-energy physics, few of them bothered much about foundations. But the philosophical difficulties were always there, troubling those who cared to worry about them.
Becker sides with the worriers. He leads us through an impressive account of the rise of competing interpretations, grounding them in the human stories, which are naturally messy and full of contingencies. He makes a convincing case that it’s wrong to imagine the Copenhagen interpretation as a single official or even coherent statement. It is, he suggests, a “strange assemblage of claims.”
An American physicist, David Bohm, devised a radical alternative at midcentury, visualizing “pilot waves” that guide every particle, an attempt to eliminate the wave-particle duality. For a long time, he was mainly lambasted or ignored, but variants of the Bohmian interpretation have supporters today. Other interpretations rely on “hidden variables” to account for quantities presumed to exist behind the curtain. Perhaps the most popular lately — certainly the most talked about — is the “many-worlds interpretation”: Every quantum event is a fork in the road, and one way to escape the difficulties is to imagine, mathematically speaking, that each fork creates a new universe.
So in this view, Schrödinger’s cat is alive and well in one universe while in another she goes to her doom. And we, too, should imagine countless versions of ourselves. Everything that can happen does happen, in one universe or another. “The universe is constantly splitting into a stupendous number of branches,” said the theorist Bryce DeWitt, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.”
This is ridiculous, of course. “A heavy load of metaphysical baggage,” John Wheeler called it. How could we ever prove or disprove such a theory? But if you think the many-worlds idea is easily dismissed, plenty of physicists will beg to differ. They will tell you that it could explain, for example, why quantum computers (which admittedly don’t yet quite exist) could be so powerful: They would delegate the work to their alter egos in other universes.
Is any of this real? At the risk of spoiling its suspense, I will tell you that this book does not propose a definite answer to its title question. You weren’t counting on one, were you? The story is far from finished.
When scientists search for meaning in quantum physics, they may be straying into a no-man’s-land between philosophy and religion. But they can’t help themselves. They’re only human. “If you were to watch me by day, you would see me sitting at my desk solving Schrödinger’s equation...exactly like my colleagues,” says Sir Anthony Leggett, a Nobel Prize winner and pioneer in superfluidity. “But occasionally at night, when the full moon is bright, I do what in the physics community is the intellectual equivalent of turning into a werewolf: I question whether quantum mechanics is the complete and ultimate truth about the physical universe.”
James Gleick

Sam Harris and the Myth of Perfectly Rational Thought

Robert Wright

Sam Harris, one of the original members of the group dubbed the “New Atheists” (by Wired!) 12 years ago, says he doesn’t like tribalism. During his recent, much-discussed debate with Vox founder Ezra Klein about race and IQ, Harris declared that tribalism “is a problem we must outgrow.”
But apparently Harris doesn’t think he is part of that “we.” After he accused Klein of fomenting a “really indissoluble kind of tribalism” in the form of identity politics, and Klein replied that Harris exhibits his own form of tribalism, Harris said coolly, “I know I’m not thinking tribally in this respect.”
Not only is Harris capable of transcending tribalism—so is his tribe! Reflecting on his debate with Klein, Harris said that his own followers care “massively about following the logic of a conversation” and probe his arguments for signs of weakness, whereas Klein’s followers have more primitive concerns: “Are you making political points that are massaging the outraged parts of our brains? Do you have your hands on our amygdala and are you pushing the right buttons?”
Of the various things that critics of the New Atheists find annoying about them—and here I speak from personal experience—this ranks near the top: the air of rationalist superiority they often exude. Whereas the great mass of humankind remains mired in pernicious forms of illogical thought—chief among them, of course, religion—people like Sam Harris beckon from above: All of us, if we will just transcend our raw emotions and rank superstitions, can be like him, even if precious few of us are now.
We all need role models, and I’m not opposed in principle to Harris’s being mine. But I think his view of himself as someone who can transcend tribalism—and can know for sure that he’s transcending it—may reflect a crude conception of what tribalism is. The psychology of tribalism doesn’t consist just of rage and contempt and comparably conspicuous things. If it did, then many of humankind’s messes—including the mess American politics is in right now—would be easier to clean up.
What makes the psychology of tribalism so stubbornly powerful is that it consists mainly of cognitive biases that easily evade our awareness. Indeed, evading our awareness is something cognitive biases are precision-engineered by natural selection to do. They are designed to convince us that we’re seeing clearly, and thinking rationally, when we’re not. And Harris’s work features plenty of examples of his cognitive biases working as designed, warping his thought without his awareness. He is a case study in the difficulty of transcending tribal psychology, the importance of trying to, and the folly of ever feeling sure we’ve succeeded.
To be clear: I’m not saying Harris’s cognition is any more warped by tribalism than, say, mine or Ezra Klein’s. But somebody’s got to serve as an example of how deluded we all are, and who better than someone who thinks he’s not a good example?
There’s another reason Harris makes a good Exhibit A. This month Bari Weiss, in a now famous (and, on the left, infamous) New York Times piece, celebrated a coalescing group of thinkers dubbed the “Intellectual Dark Web”—people like Harris and Jordan Peterson and Christina Hoff Sommers, people for whom, apparently, the ideal of fearless truth telling trumps tribal allegiance. Andrew Sullivan, writing in support of Weiss and in praise of the IDW, says it consists of “nontribal thinkers.” OK, let’s take a look at one of these thinkers and see how nontribal he is.
Examples of Harris’s tribal psychology date back to the book that put him on the map: The End of Faith. The book exuded his conviction that the reason 9/11 happened—and the reason for terrorism committed by Muslims in general—was simple: the religious beliefs of Muslims. As he has put it: “We are not at war with ‘terrorism.’ We are at war with Islam.”
Believing that the root of terrorism is religion requires ruling out other root causes, so Harris set about doing that. In his book he listed such posited causes as “the Israeli occupation of the West Bank and Gaza…the collusion of Western powers with corrupt dictatorships…the endemic poverty and lack of economic opportunity that now plague the Arab world.”
Then he dismissed them. He wrote that “we can ignore all of these things—or treat them only to place them safely on the shelf—because the world is filled with poor, uneducated, and exploited peoples who do not commit acts of terrorism, indeed who would never commit terrorism of the sort that has become so commonplace among Muslims.”
If you’re tempted to find this argument persuasive, I recommend that you first take a look at a different instance of the same logic. Suppose I said, “We can ignore the claim that smoking causes lung cancer because the world is full of people who smoke and don’t get lung cancer.” You’d spot the fallacy right away: Maybe smoking causes lung cancer under some circumstances but not others; maybe there are multiple causal factors—all necessary, but none sufficient—that, when they coincide, exert decisive causal force.
Or, to put Harris’s fallacy in a form that he would definitely recognize: Religion can’t be a cause of terrorism, because the world is full of religious people who aren’t terrorists.
Harris isn’t stupid. So when he commits a logical error this glaring—and when he rests a good chunk of his world view on the error—it’s hard to escape the conclusion that something has biased his cognition.
As for which cognitive bias to blame: A leading candidate would be “attribution error.” Attribution error leads us to resist attempts to explain the bad behavior of people in the enemy tribe by reference to “situational” factors—poverty, enemy occupation, humiliation, peer group pressure, whatever. We’d rather think our enemies and rivals do bad things because that’s the kind of people they are: bad.
With our friends and allies, attribution error works in the other direction. We try to explain their bad behavior in situational terms, rather than attribute it to “disposition,” to the kind of people they are.
You can see why attribution error is an important ingredient of tribalism. It nourishes our conviction that the other tribe is full of deeply bad, and therefore morally culpable, people, whereas members of our tribe deserve little if any blame for the bad things they do.
This asymmetrical attribution of blame was visible in the defense of Israel that Harris famously mounted during Israel’s 2014 conflict with Gaza, in which some 70 Israelis and 2,300 Palestinians died.
Granted, Harris said, Israeli soldiers may have committed war crimes, but that’s because they have “been brutalized…that is, made brutal by” all the fighting they’ve had to do. And this brutalization “is largely due to the character of their enemies.”
Get the distinction? When Israelis do bad things, it’s because of the circumstances they face—in this case repeated horrific conflict that is caused by the bitter hatred emanating from Palestinians. But when Palestinians do bad things—like bitterly hate Israelis—this isn’t the result of circumstance (the long Israeli occupation of Gaza, say, or the subsequent, impoverishing, economic blockade); rather, it’s a matter of the “character” of the Palestinians.
This is attribution error working as designed. It sustains your conviction that, though your team may do bad things, it’s only the other team that’s actually bad. Your badness is “situational,” theirs is “dispositional.”
After Harris said this, and the predictable blowback ensued, he published an annotated version of his remarks in which he hastened to add that he wasn’t justifying war crimes and hadn’t meant to discount “the degree to which the occupation, along with collateral damage suffered in war, has fueled Palestinian rage.”
That’s progress. “But,” he immediately added, “Palestinian terrorism (and Muslim anti-Semitism) is what has made peaceful coexistence thus far impossible.” In other words: Even when the bad disposition of the enemy tribe is supplemented by situational factors, the buck still stops with the enemy tribe. Even when Harris struggles mightily against his cognitive biases, a more symmetrical allocation of blame remains elusive.
Another cognitive bias—probably the most famous—is confirmation bias, the tendency to embrace, perhaps uncritically, evidence that supports your side of an argument and to either not notice, reject, or forget evidence that undermines it. This bias can assume various forms, and one was exhibited by Harris in his exchange with Ezra Klein over political scientist Charles Murray’s controversial views on race and IQ.
Harris and Klein were discussing the “Flynn effect”—the fact that average IQ scores have tended to grow over the decades. No one knows why, but such factors as nutrition and better education are possibilities, and many of the other possibilities also fall under the heading of “improved living conditions.”
So the Flynn effect would seem to underscore the power of environment. Accordingly, people who see the black-white IQ gap as having no genetic component have cited it as reason to expect that the gap could move toward zero as average black living conditions approach average white living conditions. The gap has indeed narrowed, but people like Murray, who believe a genetic component is likely, have asked why it hasn’t narrowed more.
This is the line Harris pursued in an email exchange with Klein before their debate. He wrote that, in light of the Flynn effect, “the mean IQs of African American children who are second- and third-generation upper middle class should have converged with those of the children of upper-middle-class whites, but (as far as I understand) they haven’t.”
Harris’s expectation of such a convergence may seem reasonable at first, but on reflection you realize that it assumes a lot.
It assumes that when African Americans enter the upper middle class—when their income reaches some specified level—their learning environments are in all relevant respects like the environments of whites at the same income level: Their public schools are as good, their neighborhoods are as safe, their social milieus reward learning just as much, their parents are as well educated, they have no more exposure to performance-impairing drugs like marijuana and no less access to performance-enhancing (for test-taking purposes, at least) drugs like ritalin. And so on.
Klein alluded to this kink in Harris’s argument in an email to Harris: “We know, for instance, that African American families making $100,000 a year tend to live in neighborhoods with the same income demographics as white families making $30,000 a year.”
Harris was here exhibiting a pretty subtle form of confirmation bias. He had seen a fact that seemed to support his side of the argument—the failure of IQ scores of two groups to fully converge—and had embraced it uncritically; he accepted its superficial support of his position without delving deeper and asking any skeptical questions about the support.
I want to emphasize that Klein may here also be under the influence of confirmation bias. He saw a fact that seemed to threaten his views—the failure of IQ scores to fully converge—and didn’t embrace it, but rather viewed it warily, looking for things that might undermine its significance. And when he found such a thing—the study he cited—he embraced that.
And maybe he embraced it uncritically. For all I know it suffers from flaws that he would have looked for and found had it undermined his views. That’s my point: Cognitive biases are so pervasive and subtle that it’s hubristic to ever claim we’ve escaped them entirely.
In addition to exhibiting one side of confirmation bias—uncritically embracing evidence congenial to your world view—Harris recently exhibited a version of the flip side: straining to reject evidence you find unsettling. He did so in discussing the plight of physicist and popular writer Lawrence Krauss, who was recently suspended by Arizona State University after multiple women accused him of sexual predation.
Krauss is an ally of Harris’s in the sense of being not just an atheist, but a “new” atheist. He considers religion not just confused but pernicious and therefore in urgent need of disrespect and ridicule, which he is good at providing.
After the allegations against Krauss emerged, Harris warned against rushing to judgment. I’m in favor of such warnings, but Harris didn’t stop there. He said the following about the website that had first reported the allegations against Krauss: “Buzzfeed is on the continuum of journalistic integrity and unscrupulousness somewhere toward the unscrupulous side.”
So far as I can tell, this isn’t true in any relevant sense. Yes, Buzzfeed has had the kinds of issues that afflict even the most elite journalistic outlets: a firing over plagiarism, an undue-advertiser-influence incident, a you-didn’t-explicitly-warn-us-that-this-conversation-was-on-the-record complaint. And there was a time when Buzzfeed wasn’t really a journalistic outlet at all, but more of a spawning ground for cheaply viral content—a legacy that lives on as a major part of Buzzfeed’s business model and as a parody site called ClickHole.
Still, since 2011, when Buzzfeed got serious about news coverage and hired Ben Smith as editor, the journalistic part of its operation has earned mainstream respect. And its investigative piece about Krauss was as thoroughly sourced as #metoo pieces that have appeared in places like the New York Times and the New Yorker.
But you probably shouldn’t take my word for that. I’ve had my contentious conversations with Krauss, and maybe this tension left me inclined to judge allegations against him too generously. In any event, I suspect that if the Buzzfeed piece were about someone Harris has had tensions with (Ezra Klein, maybe, or me), he might have just read it, found it pretty damning, and left it at that. But it was about Krauss—who is, if Harris will pardon the expression, a member of Harris’s tribe.
Most of these examples of tribal thinking are pretty pedestrian—the kinds of biases we all exhibit, usually with less than catastrophic results. Still, it is these and other such pedestrian distortions of thought and perception that drive America’s political polarization today.
For example: How different is what Harris said about Buzzfeed from Donald Trump talking about “fake news CNN”? It’s certainly different in degree. But is it different in kind? I would submit that it’s not.
When a society is healthy, it is saved from all this by robust communication. Individual people still embrace or reject evidence too hastily, still apportion blame tribally, but civil contact with people of different perspectives can keep the resulting distortions within bounds. There is enough constructive cross-tribal communication—and enough agreement on what the credible sources of information are—to preserve some overlap of, and some fruitful interaction between, world views.
Now, of course, we’re in a technological environment that makes it easy for tribes to not talk to each other and seems to incentivize the ridiculing of one another. Maybe there will be long-term fixes for this. Maybe, for example, we’ll judiciously amend our social media algorithms, or promulgate practices that can help tame cognitive biases.
Meanwhile, the closest thing to a cure may be for all of us to try to remember that natural selection has saddled us with these biases—and also to remember that, however hard we try, we’re probably not entirely escaping them. In this view, the biggest threat to America and to the world may be a simple lack of intellectual humility.
Harris, though, seems to think that the biggest threat to the world is religion. I guess these two views could be reconciled if it turned out that only religious people are lacking in intellectual humility. But there’s reason to believe that’s not the case.