Wednesday, July 12, 2017

It’s hard to imagine how the game got here, and it’s even harder to imagine what happens next, let alone a scenario in which four white pawns and a white king could play to a draw, or even win this game. The Penrose Chess Puzzle: can you find the solution that results in either a white win or a draw? Scientists at the newly formed Penrose Institute say it’s not only possible, but that human players see the solution almost instantly, while chess computers consistently fail to find the right move.

“We plugged it into Fritz, the standard practice computer for chess players, which did three-quarters of a billion calculations, 20 moves ahead,” explained James Tagg, co-founder and director of the Penrose Institute, which was founded this week to understand human consciousness through physics. “It says that one side or the other wins. But,” Tagg continued, “the answer that it gives is wrong.”

Tagg and his co-founder, the mathematical physicist Sir Roger Penrose, who proved that black holes contain a singularity, cooked up the puzzle to prove a point: human brains think differently. (Those who figure out the puzzle can send their answers to the Penrose Institute to be entered to win the professor’s latest book.)

Humans can look at a problem like this strange chessboard configuration, said Tagg, and understand it. “What a computer does is brute-force calculation, which is different. This is set up, rather exquisitely, to show the difference,” he added. They forced the computer out of its comfort zone, at least in part, by making an unusual choice: the third bishop. “All those bishops can move in lots of different ways, so you get computation explosion. To calculate it out would suck up more computing power than is available on Earth,” claimed Tagg.

Tagg told us that there is, in fact, a natural way to reach this board configuration. (We tried to work it out ourselves, but lacked an extra black bishop, so we tagged one to keep track.) Sir Roger Penrose’s brother is, according to Tagg, a very strong chess player. “He assures me that it’s a position you can get to, but I have not played it through. Question is, is there a rational game that gets you there?” Those who can figure out that second puzzle and get the answer to Penrose could also receive a free copy of Professor Penrose’s book.

Chess computers fail at Penrose’s puzzle, Tagg and Penrose believe, because they rely on a database of endgames, and this board is not in the computer’s playbook. “We’re forcing the chess machine to actually think about the position, as opposed to cheat and just regurgitate a pre-programmed answer, which computers are perfect at,” said Tagg.

So far, Tagg and the Penrose Institute haven’t heard from any artificial intelligence experts refuting their claims. “I’m quite surprised,” said Tagg. Mashable has contacted several AI experts for comment and will update this post with their responses.

Aside from the fun of solving this puzzle (Tagg said hundreds already have, and claim to have done so in seconds), it poses a deeper question: are we executing some fiendishly clever algorithm in our brains that cuts through the chaff? Is it just a higher level of computation, one that computers can still aspire to, or is it something unique to brain-matter-based thought? Tagg said the Penrose Institute falls into the latter camp. Penrose and Tagg don’t think you can simply call the brain a machine. “It sits in the skull, made of gray matter, and we don’t understand how it works. As for simply calling it a clever computer, this sort of puzzle shows that it clearly is not,” he said.

You can send your Chess Puzzle solution to the Penrose Institute here:
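Tagg’s “computation explosion” is easy to quantify. A full-width game-tree search that considers every legal reply visits on the order of b^d positions for branching factor b and search depth d (counted in plies, i.e., half-moves). The sketch below is our own back-of-the-envelope illustration of that growth, not anything published by the Penrose Institute:

```python
# Why brute-force game-tree search explodes: a full-width search with
# branching factor b, examined to depth d plies, visits
# 1 + b + b**2 + ... + b**d positions in the worst case.

def game_tree_nodes(branching_factor: int, depth: int) -> int:
    """Positions visited by a naive full-width search to `depth` plies."""
    return sum(branching_factor ** d for d in range(depth + 1))

# A typical mid-game chess position offers roughly 35 legal moves;
# extra long-range pieces (like the puzzle's three bishops) push b higher.
for plies in (4, 8, 12):
    print(plies, game_tree_nodes(35, plies))
```

At just 12 plies (six full moves) the count already exceeds 10^18 positions, which is why real engines survive only by pruning aggressively with heuristics; Tagg’s claim, in effect, is that this position defeats those heuristics.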

Monday, July 10, 2017

Israel Finkelstein - drastic weaknesses - note in particular that the reviewers do not focus solely on the newest book, but also cite many refutations of his prior work

Divided Kingdom, United Critics
Two archaeologists independently review Israel Finkelstein's The Forgotten Kingdom in the July/August 2014 issue of BAR
Reviews by William G. Dever and Aaron Burke  

Israel Finkelstein, The Forgotten Kingdom: The Archaeology and History of Northern Israel, Society of Biblical Literature Ancient Near East Monographs 5 (Atlanta: Society of Biblical Literature, 2013), 210 pp., $39.95 (hardcover), $24.95 (paperback)

Learning More About Israel? Or Israel?
Review by William G. Dever

William G. Dever
It is impossible to summarize Israel Finkelstein’s latest book, The Forgotten Kingdom, in a brief review because its numerous errors, misrepresentations, over-simplifications and contradictions make it too unwieldy. Specialists will know these flaws, since all of Finkelstein’s pivotal views have been published elsewhere. Here I can only alert unwary BAR readers that this book is not really about sound historical scholarship: It is all about theater. Finkelstein is a magician, conjuring a “lost kingdom” by sleight-of-hand, intending to convince readers that the illusion is real and expecting that they will go away marveling at how clever the magician is. Finkelstein was once an innovative scholar, pioneering new methods; now he has become a showman. A tragic waste of talent, energy and charm—and a detriment to our discipline.
This book is such a good read, so drama-filled, so clever that it took me—a specialist—a bit of time to see through it. For example, Finkelstein reconstructs an early Israelite “sanctuary” at Shiloh (where he excavated) to give it the necessary prominence in Israel’s formative period (pp. 23–27; 49, 50). He makes three arguments: (1) Although he now admits that he found no archaeological evidence (as he claimed in his original excavation report on Shiloh), the Bible’s “cultural memory” nevertheless requires that there must have been such a cult-place there. (2) In Iron I (1200–1000 B.C.E.—Israel’s formative period) there “was not a single house” at Shiloh, only public buildings. (3) Shiloh was later destroyed, just as implied in the Hebrew Bible.
What are the facts?
(1) All of Finkelstein’s evidence of a “cult-place” at Shiloh is circumstantial; he himself admits this.
(2) He interprets the well-known Iron I pillar-courtyard House A at Shiloh that was originally excavated by a Danish expedition and later reinvestigated by himself as a public building only because it contains “too many” storejars (as many as 20). Yet his own Iron I house excavated at Megiddo (o/K/10)—published explicitly as an “ordinary private house”—contained more than 40 large storejars. And he can presume the absence of other private houses at Shiloh only because few other areas have been excavated, and where they have been excavated other houses are known.
(3) As for reliance on the Hebrew Bible’s “cultural memory” (the latest fad in Biblical studies), Finkelstein has famously rejected this “cultural memory” as unreliable. And he has castigated other archaeologists for invoking it! Yet here (and elsewhere) he does not hesitate to appeal to Biblical tradition when it suits his purposes. As for evidence of Shiloh’s destruction c. 1050 B.C.E. at the hands of the Philistines, Finkelstein cites not archaeological evidence, but only Jeremiah 7:12–14; 26:6–9, which he admits refers rather to Shiloh’s destruction by the Assyrians in the late eighth century B.C.E. (impossible even then, since the site was deserted). He himself is driven to fall back on the Bible’s “cultural memory.”
In sum, Finkelstein’s “early cult center at Shiloh” is a fantasy, a product of his imagination.

Fundamental to Finkelstein’s entire reconstruction of the history of the northern kingdom of Israel is a “Saulide polity,” from its center at Gibeon all the way up to the Jezreel Valley (pp. 37–43, 52–61).
Again, what are the facts? (1) The stratigraphy and chronology of James Pritchard’s excavations at Gibeon (Tell el-Jib) are notoriously flawed, so much so that the scant evidence for the Iron I period cannot be dated within a margin of less than a century; it cannot be used for any historical reconstruction (there are not even any stratum numbers in the excavation reports). (2) Finkelstein’s only evidence for an administrative center at Gibeon is the supposed plan of a casemate wall (Fig. 11:2, drawn up by a student). Yet even a casual glance reveals nothing but a short stretch of a broken single wall abutted by two ephemeral wall fragments. (3) With regard to the claim that Saul ruled from a sort of capital at Gibeon, the only Biblical references are to his having visited there once, and being remembered in David’s day as having slaughtered its inhabitants, who were not even Israelites (cf. 2 Samuel 21:1ff.).1
Finkelstein has simply invented out of whole cloth a “Saulide polity at Gibeon.”
The real point of this book is to argue that the Biblical “United Monarchy” of David and Solomon in the tenth century B.C.E. is a later fiction, concocted by the southern, Judahite-biased Biblical writers. The real “Israelite state,” according to Finkelstein, is the northern kingdom of Israel, and even this arose only in the ninth century B.C.E., that is, in the days of the Omrides (as Finkelstein has claimed for some 20 years).
Just look at what Finkelstein’s “Saulide polity” would actually imply, using his own assumptions and assertions: (1) Saul was a Judahite, a southerner of the tribe of Benjamin. (2) Gibeon was a southern site, only 6 miles from Jerusalem. (3) Saul is rightly regarded as a historical figure, a king, dated correctly to the early tenth century B.C.E. at latest. (4) Saul’s effective rule extended through Samaria (!) up to the Jezreel Valley in the north. Thus there was a tenth-century kingdom that embraced both Judah and Samaria, ruled from a Judahite capital. In other words, Finkelstein’s “Saulide polity” is in fact much the same as the Bible’s “United Monarchy”—established even before the time of David and Solomon. The irony is that this time Finkelstein has been so clever that he has outwitted himself. With his imagined “Saulide polity,” his oft-repeated claim that the northern kingdom of Israel is a late development, independent from Judah, unrecognized until he discovered it, falls apart. So does this book.

The other pillar on which Finkelstein’s rediscovered northern kingdom rests is his vaunted “low chronology,” in which he down-dates the previously accepted dates for the origins of Israel by as much as a hundred years. Yet this, too, is regarded by most mainstream archaeologists as without substantial foundation. Since first proposing it some 20 years ago, Finkelstein has tirelessly championed his “low chronology.” Here he presents it without so much as a single reference to its numerous critiques, some of them devastating (e.g., Kletter 2004; Ben-Tor and Ben-Ami 1998; Dever 1997; Mazar 2007; Stager 2003; and others).2
In numerous publications over 20 years, Finkelstein has relentlessly reworked the stratigraphy and chronology of site after site, not only in Israel and the West Bank, but even in Jordan, in order to defend his “low chronology.”
In fact, there has never been any unequivocal empirical evidence in support of the “low chronology.” Only some carbon-14 dates offer any evidence at all, and many other dates support the conventional chronology (as at Tel Rehov, which Finkelstein never cites here). At best, the low chronology is a possibility for a 40-year, not a 100-year, adjustment. Even this is not probable, and it is certainly not proven.
Yet on this flimsy foundation Finkelstein rests his entire elaborate reconstruction, with far-reaching implications for southern Levantine and Israelite history. Set that scheme aside, and Finkelstein’s claim to have discovered a “lost kingdom” disappears in smoke—a book without any rationale.
What’s going on here? It took me a while to figure it out. What Finkelstein is doing is gradually distancing himself from the extremes of his low chronology—without ever admitting he is doing so—and counting on the likelihood that readers will not check his “facts.” Even he now realizes that a Judahite state did exist in the tenth century B.C.E. and that it could have extended its rule to the north. He cannot bring himself to admit that David and Solomon were real tenth-century kings since he is on record as denying the existence of any Judahite state before the eighth century B.C.E. (or lately, the ninth century). So he does an end run around the impasse by distracting attention to their predecessor Saul as king!
Ever since the discoveries at Khirbet Qeiyafa a decade ago, where Judahite state-formation is clear by the early tenth centurya (and Finkelstein accepts this early date), his “low chronology” has been progressively undermined. It should be abandoned.3
In this book Finkelstein has not “discovered a lost kingdom”; he has invented it. The careful reader will nevertheless gain some insights into Israel—Israel Finkelstein, that is.
Minimizing David, Maximizing Labayu
Review by Aaron Burke

Aaron Burke
Israel Finkelstein’s The Forgotten Kingdom invites us to reconsider the archaeology and history of the northern kingdom of Israel in an effort to integrate the most recent textual and archaeological approaches. It is an ambitious work that few would attempt—or be capable of. Such an effort is commendable; this might be the first book-length treatment to try to give us an archaeological and Biblical text-critical synthesis of ancient Israel’s history.
Prior works on the archaeology of ancient Israel, which are now more than 20 years old, largely sought to catalog archaeological finds without rewriting or recasting the accepted historical narrative of the Bible for the Iron Age. Such approaches are unsatisfying for a lack of rigorous engagement with a wide range of methodologies common in Biblical studies. As he explains in the introduction, Finkelstein instead seeks to use archaeology to provide a sense of the historical development of the northern kingdom of Israel (the “forgotten” kingdom of his title), which lost much of its identity to Judah in the Biblical account. To do this, Finkelstein musters not only a wide geographical scope of data, but he also takes a longer historical perspective. He draws heavily on his own personal accomplishments, which are highlighted as “the personal perspective”—not only his excavations and surveys but more recently his “Exact Sciences” research initiative.
At the outset Finkelstein describes what we might call the background of power in the northern highlands. This is an important starting point for Finkelstein’s entire premise, namely the independent character of the northern kingdom of Israel. The underlying goal is to articulate an evolutionary trajectory for Iron Age political organization in the northern highlands that is independent of the traditional understanding of the northern kingdom’s relationship to a United Monarchy of Israel, as depicted in the Biblical narrative, which is centered instead on Jerusalem. Finkelstein reconstructs a so-called Shechem polity during the Late Bronze Age, which is intended to reveal a long historical trajectory of political and socio-economic developments that evolves into the Iron Age kingdom of Israel, long before the appearance of the southern kingdom of Judah.

THE AMARNA LETTERS, a collection of more than 300 Late Bronze Age cuneiform tablets discovered at el-Amarna in Egypt, record the royal correspondence of mid-14th-century pharaohs Amenophis III and his son, Akhenaten, with local rulers of various Canaanite city-states. The letters frequently mention “the land of Shechem” and a character named Labayu, who led an insurgency against Egypt. Israel Finkelstein believes that Labayu ruled Shechem and its territory, which was a powerful polity of the northern highlands long before the southern kingdom of Judah existed. Reviewer Aaron Burke points out that the Amarna letters never explicitly identify Labayu as the ruler of Shechem. Thus this “central tenet” of Finkelstein’s argument collapses.
The Egyptian 14th-century B.C.E. Amarna letters, from a time when Egypt controlled Canaan, play a central role in Finkelstein’s discussion. The problem, however, as Finkelstein recognizes, is that although “the land of Shechem” appears prominently in the Amarna correspondence, it is impossible to identify it with any certainty as the central and strongest polity in the northern highlands, particularly since neither Labayu, the leader of an insurgency against Egyptian rule, nor his sons are ever identified as rulers of Shechem and its territory. Consequently, this central tenet of Finkelstein’s argument in support of the prominence of a polity in the northern highlands (and all of Finkelstein’s speculation therefrom) does not follow from the evidence he adduces. Indeed, it seems that Finkelstein’s Labayu tradition here serves only as a surrogate to support the origin of the Iron Age northern kingdom of Israel. Finkelstein simply uses Labayu to replace the role of the United Monarchy in the tenth century as the origin of Israel (and Judah). This reconstruction, while entirely plausible, does not replace the explanatory framework provided by the Davidic tradition in the Bible. The fact is that we know next to nothing about the northern highlands in the early Iron IIA, as Finkelstein’s own review makes plain.
The textual record for this early period is very limited; therefore, archaeological evidence and a wider regional perspective are essential to the reconstruction Finkelstein is attempting here. Nevertheless, his treatment remains entirely textual. Only passing references are made, for example, to pre-Iron Age settlements at Shechem, Jerusalem and Hebron, each of which experienced a developmental trajectory comparable to Shiloh’s, as revealed in their archaeological records. Only Shiloh, which is never identified as a Bronze Age polity and is important only as a cult center in later Biblical tradition, receives any substantive discussion by Finkelstein.4
From the Iron Age I onward, Finkelstein’s discussion centers largely on an acceptance of the lowest version of his famous “low chronology.” For those who may not recognize immediately what is at stake here, to put it simply, if one shifts the dates of archaeological phenomena later in time, one will be required to identify different sets of events to which these phenomena correlate. Consequently, if one cannot accept these dates, there is little basis to accept most of the nuances of Finkelstein’s ensuing analysis. Unfortunately, his chronology continues to rest exclusively on Megiddo, his own site, with very limited acknowledgment of the results of his discussion with Amihai Mazar, Bronk Ramsey and others from sites such as Rehov, Khirbet en-Nahas and Tel Dor. This is strange since the book’s framework otherwise concedes a less-than-fully-realized “low chronology.” Indeed, Finkelstein’s positions in this book are sometimes tortuously maintained in the face of contravening data.5
I conclude by turning to Finkelstein’s persistent minimizing of the role played in the northern kingdom’s development by a historical David and a United Monarchy, however short-lived it may have been. Finkelstein must do this, however, to create a lacuna that can then be filled by the northern kingdom of Israel, despite the absence of evidence for any such process occurring in the northern highlands until the ninth century B.C.E. Since Finkelstein is unable to demonstrate an unequivocal basis for the indigenous origins of political power in the northern highlands, his central argument fails.

Even if one adopts a more limited view of David’s accomplishments than the Bible gives him, he remains a foundational figure of the United Monarchy. Finkelstein’s analysis, both textual and archaeological, cannot be reconciled with a founding Biblical figure (David) whose existence is already corroborated by extra-Biblical inscriptional data, that is, the Tel Dan inscription.b This inscription evidences not only David’s existence but also the dynasty he established.
Concededly, a clear fingerprint of David’s patrimonial kingdom may be elusive, contrary to earlier scholarly expectations, but how is this any less the case for what is lacking in the north to illustrate the legacy of political legitimacy in the northern highlands, as Finkelstein asserts? Finkelstein makes allowances in his reconstruction of an early Iron IIA polity in the north, but he is unwilling to do that for the southern highlands. This is a major weakness of his work.
Although Finkelstein’s greatest career achievement may be the demonstration of the value of hard sciences to traditional Biblical archaeology, in this book he wades deep into the morass of traditional text-critical studies of the Bible only to demonstrate how unsatisfying the results can be. The book is replete with speculative reconstructions that depend on a series of assumptions about chronology and Biblical history that cannot be substantiated. Thus his book lacks an articulated methodology. His effort here to integrate Biblical text-studies with archaeology only reveals both how difficult such an enterprise is and fundamentally how uncertain the results will be. Most of all, it reminds us that we must take a more evenhanded approach to the application of the assumptions and the allowances we make when attempting historical reconstructions of the Biblical periods.
William G. Dever is professor emeritus of Near Eastern Archaeology and Anthropology at the University of Arizona. Prior to that he served for four years as director of the American School of Oriental Research in Jerusalem (now the Albright Institute of Archaeological Research). A world-renowned archaeologist, Professor Dever has dug at numerous sites in Jordan and Israel. He served as director of the major excavations at Gezer from 1966 to 1971. His most recent book, The Lives of Ordinary People in Ancient Israel, was published in 2012 (Eerdmans).
Aaron Burke is associate professor of the archaeology of the Levant and ancient Israel at the University of California, Los Angeles. He is the codirector of the Jaffa Cultural Heritage Project, which coordinates the research and preservation of the archaeological site of Jaffa, and is the author of “Walled up to Heaven”: The Evolution of the Middle Bronze Age Fortification Strategies in the Levant (Eisenbrauns, 2008).

Dever Notes
a. See Yosef Garfinkel, Michael Hasel and Martin Klingbeil, “An Ending and a Beginning,” BAR, November/December 2013; Christopher A. Rollston, “What’s the Oldest Hebrew Inscription?” and Gerard Leval, “Ancient Inscription Refers to Birth of Israelite Monarchy,” BAR, May/June 2012; Hershel Shanks, “Newly Discovered: A Fortified City from King David’s Time,” BAR, January/February 2009.
1. As for Finkelstein’s additional claim that Gibeon ceased to be a district center when it “paid tribute,” but was destroyed in the mid- to late tenth century raid of Pharaoh Shoshenq, there is no archaeological evidence whatsoever for such a “destruction.” And while Gibeon is mentioned in the Shoshenq list (no. 23), there is no reference there (or in the Bible) to either tribute or a destruction (the mention of tribute in 1 Kings 14:25–26 refers only to Jerusalem).
2. See the following works, none cited by Finkelstein: Amnon Ben-Tor and Doron Ben-Ami, “Hazor and the Archaeology of the Tenth Century B.C.E.,” Israel Exploration Journal 48 (1998), pp. 1–37; William G. Dever, “Archaeology, Urbanism and the Rise of the Israelite State,” in Walter E. Aufrecht, Neil A. Mirau and Steven W. Gauley, eds., Aspects of Urbanism in Antiquity, From Mesopotamia to Crete (Sheffield: Sheffield Academic Press, 1997), pp. 172–193; Lawrence E. Stager, “The Patrimonial Kingdom of Solomon,” in William G. Dever and Seymour Gitin, eds., Symbiosis, Symbolism and the Power of the Past: Canaan, Ancient Israel and Their Neighbors from the Late Bronze Age Through Roman Palaestina (Winona Lake, IN: Eisenbrauns, 2003), pp. 63–74; Raz Kletter, “Chronology and United Monarchy: A Methodological Review,” Zeitschrift des Deutschen Palästina-Vereins 120 (2004), pp. 13–54; Amihai Mazar, “The Spade and the Text: The Interaction Between Archaeology and Israelite History,” in H.G.M. Williamson, ed., Understanding the History of Ancient Israel (Oxford: Oxford Univ. Press, 2007), pp. 143–171.
3. Among the book’s many other distortions, I can list here only a few: (1) Finkelstein claims carbon-14 dates have corrected the dates of Ramses III (p. 24). Actually they are exactly the same. (2) Finkelstein claims Shechem was destroyed at the end of the Late Bronze Age (p. 22). The excavators have emphasized that it was not. (3) Finkelstein claims that Tell Keisan, Tel Kinrot, Tel Reḥov, Yokneam and Dor were all “Canaanite city-states” (p. 30). But “city-state” is never defined, and at least two that are so claimed are Phoenician, one is probably Aramaic, and none would actually qualify as a city-state. (4) Finkelstein claims that there are dozens, even hundreds, of carbon-14 dates supporting the “low chronology” (p. 33); in the latest Megiddo report (Megiddo IV), there are three published for the pivotal Stratum VA/IVB, and if anything they support the conventional chronology. (5) Finkelstein claims that Jerusalem in the tenth century B.C.E. was a poor village with no monumental architecture (p. 43). Even Finkelstein’s colleague Nadav Na’aman disagrees with him, as nearly all archaeologists do. (6) Finkelstein radically challenges conventional dates by putting the Iron I/IIA transition in the second half of the tenth century B.C.E. (p. 64). That’s scarcely later than most, and even earlier than Amihai Mazar’s “modified” conventional chronology. (7) Finkelstein claims that Hazor X was destroyed in the late ninth century (840–800 B.C.E.), as confirmed by carbon-14 dates (pp. 75, 122). But no evidence is cited for this, and excavator Amnon Ben-Tor disagrees. (8) Finkelstein claims that those scholars who see Jerusalem as an early state capital are “desperate,” Bible-based people (p. 80). That tells us who is really desperate. (9) For the view that the Field III city gate at Gezer dates to the ninth century B.C.E., Finkelstein cites me (William G. Dever et al., “Further Excavations at Gezer, 1967–1971,” Biblical Archaeologist 34 [1971], p. 103).
I never said anything of the sort—quite the opposite. (10) Finkelstein says that Megiddo in the ninth century B.C.E. was “set aside for breeding and training horses” for chariotry (pp. 113; 133–135). Some of his own staff members (and others) dispute the famous “stables” in Megiddo IV. (11) Finkelstein claims that Tel Masos near Beersheba was the center of a far-flung “desert polity” in the tenth century B.C.E. (p. 126). But the relevant Stratum II follows a massive destruction of the walled town, and the scant remains consist of only a few tattered houses. There hardly seems any point in continuing. Finkelstein simply does not care much about facts, as many have long since concluded.

Burke Notes
b. “‘David’ Found at Dan,” BAR, March/April 1994; Yosef Garfinkel, “The Birth & Death of Biblical Minimalism,” BAR, May/June 2011.
4. While Finkelstein acknowledges that Hazor, a major site in his analysis of the Iron Age northern kingdom, was “probably the most important city-state in the north” (p. 21), neither its Late Bronze Age nor its Iron I phases are discussed, presumably because they would complicate the highland-centered interpretive framework he offers. The weakness of this analysis is the mistaken assumption that chapter one establishes a Braudelian longue durée perspective (as explicitly stated, but only in the concluding chapter), when in fact the analysis does not meet those criteria.
5. For example, at the start of one particular paragraph we are told that the transition from Iron I to Iron IIA “should probably be fixed … in the beginning of the second half of the tenth” century (i.e., 950 B.C.E.; p. 63). This is, however, 30 years earlier than Finkelstein’s low chronology start date of 920 B.C.E.; that is, it is half the distance between the start date for Iron IIA in the so-called Low Chronology (920 B.C.E.) and that of the Modified Conventional Chronology (980 B.C.E.). (Keep in mind that such seemingly small decadal shifts in the chronology are what we are fundamentally talking about, whether in connection with the shortening of David and Solomon’s reigns as given in the Biblical tradition, to less than the 40 years assigned to each, or in the shifting of the start dates of Iron IIA later.) However, at the end of the same paragraph we are asked to accept that Finkelstein would place the transition between 940 and 930 B.C.E. (a figure seemingly grabbed out of thin air), conceding 10 to 20 years on the 920 date for no explicitly stated reason (p. 94). Attentive readers will wonder what they are missing, given that three different dates are suggested for the start of the Iron IIA (i.e., 950, 940/930 and 920). The answer would be a litany of relevant publications that are not discussed.

“Divided Kingdom, United Critics” was originally published in the July/August 2014 issue of Biblical Archaeology Review.

Tuesday, June 6, 2017

Critical Failure
Exclusive test data show that many colleges fail to improve students’ critical-thinking skills. Freshmen and seniors at about 200 colleges across the U.S. take a little-known test every year to measure how much better they get at thinking. At more than half of schools, at least a third of seniors were unable to make a cohesive argument, assess the quality of evidence in a document or interpret data in a table, our analysis of the latest results found. At some of the most prestigious universities, test results indicate the average graduate shows little or no improvement in critical thinking over four years. Instead, some of the biggest gains occur at smaller colleges where students are less accomplished on arrival. We examine the questions the test raises about the purpose of a college degree.

Monday, April 3, 2017

Minding matter [[against materialism]]
The closer you look, the more the materialist position in physics appears to rest on shaky metaphysical ground. [[Not easy reading but a good introduction to one facet of the weakness of materialism.]]

Adam Frank is professor of astronomy at the University of Rochester in New York and the co-founder of NPR’s blog 13.7: Cosmos & Culture, where he is also a regular contributor. He is the author of several books, the latest being About Time: Cosmology and Culture at the Twilight of the Big Bang (2011). Edited by Corey S Powell.

If consciousness is not a purely material problem, how can we best make sense of it?

Materialism holds the high ground these days in debates over that most ultimate of scientific questions: the nature of consciousness. When tackling the problem of mind and brain, many prominent researchers advocate for a universe fully reducible to matter. ‘Of course you are nothing but the activity of your neurons,’ they proclaim. That position seems reasonable and sober in light of neuroscience’s advances, with brilliant images of brains lighting up like Christmas trees while test subjects eat apples, watch movies or dream. And aren’t all the underlying physical laws already known? From this seemingly hard-nosed vantage, the problem of consciousness seems to be just one of wiring, as the American physicist Michio Kaku argued in The Future of the Mind (2014).

In the very public version of the debate over consciousness, those who advocate that understanding the mind might require something other than a ‘nothing but matter’ position are often painted as victims of wishful thinking, imprecise reasoning or, worst of all, an adherence to a mystical ‘woo’. It’s hard not to feel the intuitional weight of today’s metaphysical sobriety.
Like Pickett’s Charge up the hill at Gettysburg, who wants to argue with the superior position of those armed with ever more precise fMRIs, EEGs and the other material artefacts of the materialist position? There is, however, a significant weakness hiding in the imposing-looking materialist redoubt. It is as simple as it is undeniable: after more than a century of profound explorations into the subatomic world, our best theory for how matter behaves still tells us very little about what matter is. Materialists appeal to physics to explain the mind, but in modern physics the particles that make up a brain remain, in many ways, as mysterious as consciousness itself. When I was a young physics student I once asked a professor: ‘What’s an electron?’ His answer stunned me. ‘An electron,’ he said, ‘is that to which we attribute the properties of the electron.’ That vague, circular response was a long way from the dream that drove me into physics, a dream of theories that perfectly described reality. Like almost every student over the past 100 years, I was shocked by quantum mechanics, the physics of the micro-world. In place of a clear vision of little bits of matter that explain all the big things around us, quantum physics gives us a powerful yet seemingly paradoxical calculus. With its emphasis on probability waves, essential uncertainties and experimenters disturbing the reality they seek to measure, quantum mechanics made imagining the stuff of the world as classical bits of matter (or miniature billiard balls) all but impossible. Like most physicists, I learned how to ignore the weirdness of quantum physics. ‘Shut up and calculate!’ (the dictum of the American physicist David Mermin) works fine if you are trying to get 100 per cent on your Advanced Quantum Theory homework or building a laser.
But behind quantum mechanics’ unequaled calculational precision lie profound, stubbornly persistent questions about what those quantum rules imply about the nature of reality – including our place in it. Those questions are well-known in the physics community, but perhaps our habit of shutting up has been a little too successful. A century of agnosticism about the true nature of matter hasn’t found its way deeply enough into other fields, where materialism still appears to be the most sensible way of dealing with the world and, most of all, with the mind. Some neuroscientists think that they’re being precise and grounded by holding tightly to materialist credentials. Molecular biologists, geneticists, and many other types of researchers – as well as the nonscientist public – have been similarly drawn to materialism’s seeming finality. But this conviction is out of step with what we physicists know about the material world – or rather, what we don’t know. Albert Einstein and Max Planck introduced the idea of the quantum at the beginning of the 20th century, sweeping away the old classical view of reality. We have never managed to come up with a definitive new reality to take its place. The interpretation of quantum physics remains as up for grabs as ever. As a mathematical description of solar cells and digital circuits, quantum mechanics works just fine. But if one wants to apply the materialist position to a concept as subtle and profound as consciousness, something more must clearly be asked for. The closer you look, the more it appears that the materialist (or ‘physicalist’) position is not the safe harbor of metaphysical sobriety that many desire. For physicists, the ambiguity over matter boils down to what we call the measurement problem, and its relationship to an entity known as the wave function.
Back in the good old days of Newtonian physics, the behaviour of particles was determined by a straightforward mathematical law that reads F = ma. You applied a force F to a particle of mass m, and the particle moved with acceleration a. It was easy to picture this in your head. Particle? Check. Force? Check. Acceleration? Yup. Off you go. The equation F = ma gave you two things that matter most to the Newtonian picture of the world: a particle’s location and its velocity. This is what physicists call a particle’s state. Newton’s laws gave you the particle’s state for any time and to any precision you need. If the state of every particle is described by such a simple equation, and if large systems are just big combinations of particles, then the whole world should behave in a fully predictable way. Many materialists still carry the baggage of that old classical picture. It’s why physics is still widely regarded as the ultimate source of answers to questions about the world, both outside and inside our heads. In Isaac Newton’s physics, position and velocity were indeed clearly defined and clearly imagined properties of a particle. Measurements of the particle’s state changed nothing in principle. The equation F = ma was true whether you were looking at the particle or not. All of that fell apart as scientists began probing at the scale of atoms early last century. In a burst of creativity, physicists devised a new set of rules known as quantum mechanics. A critical piece of the new physics was embodied in Schrödinger’s equation. Like Newton’s F = ma, the Schrödinger equation represents mathematical machinery for doing physics; it describes how the state of a particle is changing. But to account for all the new phenomena physicists were finding (ones Newton knew nothing about), the Austrian physicist Erwin Schrödinger had to formulate a very different kind of equation. 
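The Newtonian ‘state’ described above can be made concrete in a few lines. This is my own minimal sketch (the function name and the sample numbers are invented for illustration), not anything from the article:

```python
# A Newtonian "state" is just (position, velocity); F = m*a updates it
# deterministically, whether or not anyone is looking.
def newton_step(x, v, F, m, dt):
    """One Euler step: return the new (position, velocity) after time dt."""
    a = F / m                        # acceleration from F = m*a
    return x + v * dt, v + a * dt    # position and velocity updates

# A 1 kg particle at rest, pushed with a 2 N force for 0.1 s:
x, v = newton_step(x=0.0, v=0.0, F=2.0, m=1.0, dt=0.1)
# The state is exact and fully predictable at every step.
```

Iterate that step and you get the clockwork world the paragraph describes: a definite state at every time, to any precision you need.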
When calculations are done with the Schrödinger equation, what’s left is not the Newtonian state of exact position and velocity. Instead, you get what is called the wave function (physicists refer to it as psi after the Greek symbol Ψ used to denote it). Unlike the Newtonian state, which can be clearly imagined in a commonsense way, the wave function is an epistemological and ontological mess. The wave function does not give you a specific measurement of location and velocity for a particle; it gives you only probabilities at the root level of reality. Psi appears to tell you that, at any moment, the particle has many positions and many velocities. In effect, the bits of matter from Newtonian physics are smeared out into sets of potentials or possibilities. It’s not just position and velocity that get smeared out. The wave function treats all properties of the particle (electric charge, energy, spin, etc) the same way. They all become probabilities holding many possible values at the same time. Taken at face value, it’s as if the particle doesn’t have definite properties at all. This is what the German physicist Werner Heisenberg, one of the founders of quantum mechanics, meant when he advised people not to think of atoms as ‘things’. Even at this basic level, the quantum perspective adds a lot of blur to any materialist convictions of what the world is built from. Then things get weirder still. According to the standard way of treating the quantum calculus, the act of making a measurement on the particle kills off all pieces of the wave function, except the one your instruments register. The wave function is said to collapse as all the smeared-out, potential positions or velocities vanish in the act of measurement.
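To make the contrast with the Newtonian state concrete, here is a small sketch of my own (not from the article) of a discretised wave function: complex amplitudes over many positions at once, with squared magnitudes (the Born rule) yielding only probabilities:

```python
# The wave function psi assigns a complex amplitude to each possible
# position; the Born rule turns |psi_i|^2 into probabilities, never a
# single definite Newtonian position.
def born_probabilities(psi):
    """Return the probability |psi_i|^2 / norm for each amplitude."""
    norm = sum(abs(a) ** 2 for a in psi)
    return [abs(a) ** 2 / norm for a in psi]

# A particle 'smeared' over three positions at once:
probs = born_probabilities([1 + 0j, 0 + 1j, 1 + 1j])
# probs sums to 1, but no single position is picked out before measurement.
```

A ‘measurement’ in the standard treatment would then discard all but one of those entries, which is exactly the collapse the paragraph above describes.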
It’s as if the Schrödinger equation, which does such a great job of describing the smeared-out particle before the measurement is made, suddenly gets a pink slip. You can see how this throws a monkey wrench into a simple, physics-based view of an objective materialist world. How can there be one mathematical rule for the external objective world before a measurement is made, and another that jumps in after the measurement occurs? For a hundred years now, physicists and philosophers have been beating the crap out of each other (and themselves) trying to figure out how to interpret the wave function and its associated measurement problem. What exactly is quantum mechanics telling us about the world? What does the wave function describe? What really happens when a measurement occurs? Above all, what is matter? There are today no definitive answers to these questions. There is not even a consensus about what the answers should look like. Rather, there are multiple interpretations of quantum theory, each of which corresponds to a very different way of regarding matter and everything made of it – which, of course, means everything. The earliest interpretation to gain force, the Copenhagen interpretation, is associated with Danish physicist Niels Bohr and other founders of quantum theory. In their view, it was meaningless to speak of the properties of atoms in-and-of-themselves. Quantum mechanics was a theory that spoke only to our knowledge of the world. The measurement problem associated with the Schrödinger equation highlighted this barrier between epistemology and ontology by making explicit the role of the observer (that is: us) in gaining knowledge. Not all researchers were so willing to give up on the ideal of objective access to a perfectly objective world, however. Some pinned their hopes on the discovery of hidden variables – a set of deterministic rules lurking beneath the probabilities of quantum mechanics. Others took a more extreme view.
In the many-worlds interpretation espoused by the American physicist Hugh Everett, the authority of the wave function and its governing Schrödinger equation was taken as absolute. Measurements didn’t suspend the equation or collapse the wave function; they merely made the Universe split off into many (perhaps infinite) parallel versions of itself. Thus, for every experimentalist who measures an electron over here, a parallel universe is created in which her parallel copy finds the electron over there. The many-worlds interpretation is one that many materialists favor, but it comes with a steep price. Here is an even more important point: as yet there is no way to experimentally distinguish between these widely varying interpretations. Which one you choose is mainly a matter of philosophical temperament. As the American theorist Christopher Fuchs puts it, on one side there are the psi-ontologists who want the wave function to describe the objective world ‘out there’. On the other side, there are the psi-epistemologists who see the wave function as a description of our knowledge and its limits. Right now, there is almost no way to settle the dispute scientifically (although a standard form of hidden variables does seem to have been ruled out). This arbitrariness of deciding which interpretation to hold completely undermines the strict materialist position. The question here is not if some famous materialist’s choice of the many-worlds interpretation is the correct one, any more than whether the silliness of The Tao of Physics and its quantum Buddhism is correct. The real problem is that, in each case, proponents are free to single out one interpretation over others because … well … they like it. Everyone, on all sides, is in the same boat. There can be no appeal to the authority of ‘what quantum mechanics says’, because quantum mechanics doesn’t say much of anything with regard to its own interpretation.
Each interpretation of quantum mechanics has its own philosophical and scientific advantages, but they all come with their own price. One way or another, they force adherents to take a giant step away from the kind of ‘naive realism’, the vision of little bits of deterministic matter, that was possible with the Newtonian world view; switching to a quantum ‘fields’ view doesn’t solve the problem. It was easy to think that the mathematical objects involved with Newtonian mechanics referred to real things out there in some intuitive way. But those subscribing to psi-ontology – sometimes called wave function realism – must now navigate a labyrinth of challenges in holding their views. The Wave Function (2013), edited by the philosophers Alyssa Ney and David Z Albert, describes many of these options, which can get pretty weird. Reading through the dense analyses quickly dispels any hope that materialism offers a simple, concrete reference point for the problem of consciousness. The attraction of the many-worlds interpretation, for instance, is its ability to keep the reality in the mathematical physics. In this view, yes, the wave function is real and, yes, it describes a world of matter that obeys mathematical rules, whether someone is watching or not. The price you pay for this position is an infinite number of parallel universes that are infinitely splitting off into an infinity of other parallel universes that then split off into … well, you get the picture. There is a big price to pay for the psi-epistemologist positions too. Physics from this perspective is no longer a description of the world in-and-of itself. Instead, it’s a description of the rules for our interaction with the world.
As the American theorist Joseph Eberly says: ‘It’s not the electron’s wave function, it’s your wave function.’ A particularly cogent new version of the psi-epistemological position, called Quantum Bayesianism or QBism, raises this perspective to a higher level of specificity by taking the probabilities in quantum mechanics at face value. According to Fuchs, the leading proponent of QBism, the irreducible probabilities in quantum mechanics tell us that it’s really a theory about making bets on the world’s behaviour (via our measurements) and then updating our knowledge after those measurements are done. In this way, QBism points explicitly to our failure to include the observing subject that lies at the root of quantum weirdness. As Mermin wrote in the journal Nature: ‘QBism attributes the muddle at the foundations of quantum mechanics to our unacknowledged removal of the scientist from the science.’ Putting the perceiving subject back into physics would seem to undermine the whole materialist perspective. A theory of mind that depends on matter that depends on mind could not yield the solid ground so many materialists yearn for. It is easy to see how we got here. Materialism is an attractive philosophy – at least, it was before quantum mechanics altered our thinking about matter. ‘I refute it thus,’ said the 18th-century writer Samuel Johnson, kicking a large rock in refutation of the arguments against materialism he’d just endured. Johnson’s stony drop-kick is the essence of a hard-headed (and broken-footed) materialist vision of the world. It provides an account of exactly what the world is made of: bits of stuff called matter. And since matter has properties that are independent and external to anything having to do with us, we can use that stuff to build a fully objective account of a fully objective world. This ball-and-stick vision of reality seems to inspire much of materialism’s public confidence about cracking the mystery of the human mind.
Today, though, it is hard to reconcile that confidence with the multiple interpretations of quantum mechanics. Newtonian mechanics might be fine for explaining the activity of the brain. It can handle things such as blood flow through capillaries and chemical diffusion across synapses, but the ground of materialism becomes far more shaky when we attempt to grapple with the more profound mystery of the mind, meaning the weirdness of being an experiencing subject. In this domain, there is no avoiding the scientific and philosophical complications that come with quantum mechanics. First, the differences between the psi-ontological and psi-epistemological positions are so fundamental that, without knowing which one is correct, it’s impossible to know what quantum mechanics is intrinsically referring to. Imagine for a moment that something like the QBist interpretation of quantum mechanics were true. If this emphasis on the observing subject were the correct lesson to learn from quantum physics, then the perfect, objective access to the world that lies at the heart of materialism would lose a lot of wind. Put another way: if QBism or other Copenhagen-like views are correct, there could be enormous surprises waiting for us in our exploration of subject and object, and these would have to be included in any account of mind. On the other hand, old-school materialism – being a particular form of psi-ontology – would by necessity be blind to these kinds of additions. A second and related point is that, in the absence of experimental evidence, we are left with an irreducible democracy of possibilities. At a 2011 quantum theory meeting, three researchers polled participants on exactly this question, asking: ‘What is your favourite interpretation of quantum mechanics?’ (Six different models got votes, along with some preferences for ‘other’ and ‘no preference’.)
As useful as this exercise might be for gauging researchers’ inclinations, holding a referendum for which interpretation should become ‘official’ at the next meeting of the American Physical Society (or the American Philosophical Society) won’t get us any closer to the answers we seek. Nor will stomping our feet, making loud proclamations, or name-dropping our favourite Nobel-prizewinning physicists. Given these difficulties, one must ask why certain weird alternatives suggested by quantum interpretations are widely preferred over others within the research community. Why does the infinity of parallel universes in the many-worlds interpretation get associated with the sober, hard-nosed position, while including the perceiving subject gets condemned as crossing over to the shores of anti-science at best, or mysticism at worst? It is in this sense that the unfinished business of quantum mechanics levels the playing field. The high ground of materialism deflates when followed to its quantum mechanical roots, because it then demands the acceptance of metaphysical possibilities that seem no more ‘reasonable’ than other alternatives. Some consciousness researchers might think that they are being hard-nosed and concrete when they appeal to the authority of physics. When pressed on this issue, though, we physicists are often left looking at our feet, smiling sheepishly and mumbling something about ‘it’s complicated’. We know that matter remains mysterious just as mind remains mysterious, and we don’t know what the connections between those mysteries should be. Classifying consciousness as a material problem is tantamount to saying that consciousness, too, remains fundamentally unexplained.
Rather than sweeping away the mystery of mind by attributing it to the mechanisms of matter, we can begin to move forward by acknowledging where the multiple interpretations of quantum mechanics leave us. It’s been more than 20 years since the Australian philosopher David Chalmers introduced the idea of a ‘hard problem of consciousness’. Following work by the American philosopher Thomas Nagel, Chalmers pointed to the vividness – the intrinsic presence – of the perceiving subject’s experience as a problem no explanatory account of consciousness seems capable of embracing. Chalmers’s position struck a nerve with many philosophers, articulating the sense that there was fundamentally something more occurring in consciousness than just computing with meat. But what is that ‘more’? Some consciousness researchers see the hard problem as real but inherently unsolvable; others posit a range of options for its account. Those solutions include possibilities that overly project mind into matter. Consciousness might, for example, be an example of the emergence of a new entity in the Universe not contained in the laws of particles. There is also the more radical possibility that some rudimentary form of consciousness must be added to the list of things, such as mass or electric charge, that the world is built of. Regardless of the direction ‘more’ might take, the unresolved democracy of quantum interpretations means that our current understanding of matter alone is unlikely to explain the nature of mind. It seems just as likely that the opposite will be the case. While the materialists might continue to wish for the high ground of sobriety and hard-headedness, they should remember the American poet Richard Wilbur’s warning: ‘Kick at the rock, Sam Johnson, break your bones: / But cloudy, cloudy is the stuff of stones.’

Thursday, February 23, 2017

Global warming: thoughtful observations on reporting
Richard Muller, Prof. of Physics, UC Berkeley, author of "Physics for Future Presidents"
What are some widely cited studies in the news that are false? Whenever I see the latest headline-grabbing article citing a certain study as evidence that doing something will cause you to be more rich or have a higher risk of cancer, I am always skeptical about whether they've really taken the steps to find a cause and effect, or if they are only looking for correlation. I'm looking for good examples of studies that people still talk about that have been clearly disproven and how. That 97% of all climate scientists accept that climate change is real, large, and a threat to the future of humanity. That 97% basically concur with the vast majority of claims made by Vice President Al Gore in his Nobel Peace Prize winning film, An Inconvenient Truth. The question asked in typical surveys is neither of those. It is this: “Do you believe that humans are affecting climate?” My answer would be yes. Humans are responsible for about a 1 degree C rise in the average temperature in the last 100 years. So I would be included as one of the 97% who believe. Yet the observed changes that are scientifically established, in my vast survey of the science, are confined to temperature rise and the resulting small (4-inch) rise in sea level. (The huge “sea level rise” seen in Florida is actually subsidence of the land mass, and is not related to global warming.) There is no significant change in the rate of storms, or of violent storms, including hurricanes and tornadoes. The temperature variability is not increasing. There is no scientifically significant increase in floods or droughts. Even the widely reported warming of Alaska (“the canary in the mine”) doesn’t match the pattern of carbon dioxide increase; and it may have an explanation in terms of changes in the northern Pacific and Atlantic currents.
Moreover, the standard climate models have done a very poor job of predicting the temperature rise in Antarctica, so we must be cautious about the danger of confirmation bias. My friend Will Happer believes that humans do affect the climate, particularly in cities where concrete and energy use cause what is called the “urban heat island effect”. So he would be included in the 97% who believe that humans affect climate, even though he is usually included among the more intense skeptics of the IPCC. He also feels that humans cause a small amount of global warming (he isn’t convinced it is as large as 1 degree), but he does not think it is heading towards a disaster; he has concluded that the increase in carbon dioxide is good for food production, and has helped mitigate global hunger. Yet he would be included in the 97%. The problem is not with the survey, which asked a very general question. The problem is that many writers (and scientists!) look at that number and mischaracterize it. The 97% number is typically interpreted to mean that 97% accept the conclusions presented in An Inconvenient Truth by former Vice President Al Gore. That’s certainly not true; even many scientists who are deeply concerned by the small global warming (such as me) reject over 70% of the claims made by Mr. Gore in that movie (as did a judge in the UK; see the following link: Gore climate film's nine 'errors'). The pollsters aren’t to blame. Well, some of them are; they too can do a good poll and then misrepresent what it means. The real problem is that many people who fear global warming (include me) feel that it is necessary to exaggerate the meaning of the polls in order to get action from the public (don’t include me). There is another way to misrepresent the results of the polls. Yes, 97% of those polled believe that there is human caused climate change. How did they reach that decision? Was it based on a careful reading of the IPCC report? 
Was it based on their knowledge of the potential systematic uncertainties inherent in the data? Or was it based on their fear that opponents of action are anti-science, so we scientists have to get together and support each other? There is a real danger in people with Ph.D.s joining a consensus that they haven’t vetted professionally. I like to ask scientists who “believe” in global warming what they think of the data. Do they believe hurricanes are increasing? Almost never do I get the answer “Yes, I looked at that, and they are.” Of course they don’t say that, because if they did I would show them the actual data! Do they say, “I’ve looked at the temperature record, and I agree that the variability is going up”? No. Sometimes they will say, “There was a paper by Jim Hansen that showed the variability was increasing.” To which I reply, “I’ve written to Jim Hansen about that paper, and he agrees with me that it shows no such thing. He even expressed surprise that his paper has been so misinterpreted.” A really good question would be: “Have you studied climate change enough that you would put your scientific credentials on the line that most of what is said in An Inconvenient Truth is based on accurate scientific results?” My guess is that a large majority of the climate scientists would answer no to that question, and that the true percentage of scientists who support the statement I made in the opening paragraph of this comment would be under 30%. That is an unscientific guesstimate, based on my experience in asking many scientists about the claims of Al Gore.
Gone: Kahneman on priming. Reconstruction of a Train Wreck: How Priming Research Went off the Rails. February 2, 2017 | Kahneman, Priming, R-Index, Statistical Power, Thinking Fast and Slow. Authors: Ulrich Schimmack, Moritz Heene, and Kamini Kesavan. Abstract: We computed the R-Index for studies cited in Chapter 4 of Kahneman’s book “Thinking Fast and Slow.” This chapter focuses on priming studies, starting with John Bargh’s study that led to Kahneman’s open email. The results are eye-opening and jaw-dropping. The chapter cites 12 articles, and 11 of the 12 articles have an R-Index below 50. The combined analysis of 31 studies reported in the 12 articles shows 100% significant results with average (median) observed power of 57% and an inflation rate of 43%. The R-Index is 14. This result confirms Kahneman’s prediction that priming research is a train wreck, and readers of his book “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness. Introduction: In 2011, Nobel Laureate Daniel Kahneman published a popular book, “Thinking Fast and Slow”, about important findings in social psychology. In the same year, questions about the trustworthiness of social psychology were raised. A Dutch social psychologist had fabricated data. Eventually over 50 of his articles would be retracted. Another social psychologist published results that appeared to demonstrate the ability to foresee random future events (Bem, 2011). Few researchers believed these results, and statistical analysis suggested that the results were not trustworthy (Francis, 2012; Schimmack, 2012). Psychologists started to openly question the credibility of published results. In the beginning of 2012, Doyen and colleagues published a failure to replicate a prominent study by John Bargh that was featured in Daniel Kahneman’s book.
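The abstract's figures hang together arithmetically. As I read the post's method, the inflation rate is the gap between the reported success rate and the median observed power, and the R-Index subtracts that inflation from the power again; treat the formula below as my paraphrase of that description:

```python
# R-Index, as described in the post: inflation = success rate - median
# observed power; R-Index = median observed power - inflation.
def r_index(success_rate, median_power):
    inflation = success_rate - median_power
    return median_power - inflation

# The chapter-4 numbers quoted above: 100% significant results and 57%
# median observed power give an inflation rate of 43 and an R-Index of 14.
ri = r_index(success_rate=100, median_power=57)
```

An unbiased literature would have a success rate close to its median power (inflation near zero), so a low R-Index signals heavy selection for significance.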
A few months later, Daniel Kahneman distanced himself from Bargh’s research in an open email addressed to John Bargh (Young, 2012): “As all of you know, of course, questions have been raised about the robustness of priming results…. your field is now the poster child for doubts about the integrity of psychological research… people have now attached a question mark to the field, and it is your responsibility to remove it… all I have personally at stake is that I recently wrote a book that emphasizes priming research as a new approach to the study of associative memory…Count me as a general believer… My reason for writing this letter is that I see a train wreck looming.” Five years later, Kahneman’s concerns have been largely confirmed. Major studies in social priming research have failed to replicate and the replicability of results in social psychology is estimated to be only 25% (OSC, 2015). Looking back, it is difficult to understand the uncritical acceptance of social priming as a fact. In “Thinking Fast and Slow” Kahneman wrote “disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true.” Yet, Kahneman could have seen the train wreck coming. In 1971, he co-authored an article about scientists’ “exaggerated confidence in the validity of conclusions based on small samples” (Tversky & Kahneman, 1971, p. 105). Yet, many of the studies described in Kahneman’s book had small samples. For example, Bargh’s priming study used only 30 undergraduate students to demonstrate the effect. From Daniel Kahneman: I accept the basic conclusions of this blog. To be clear, I do so (1) without expressing an opinion about the statistical techniques it employed and (2) without stating an opinion about the validity and replicability of the individual studies I cited. What the blog gets absolutely right is that I placed too much faith in underpowered studies.
As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message. My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published. I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own. What impressed me was the unanimity and coherence of the results reported by many laboratories. I concluded that priming effects are easy for skilled experimenters to induce, and that they are robust. However, I now understand that my reasoning was flawed and that I should have known better. Unanimity of underpowered studies provides compelling evidence for the existence of a severe file-drawer problem (and/or p-hacking). 
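Kahneman's point about unanimity can be checked with back-of-envelope arithmetic. This is my own illustration, using the 57% median observed power and 31 studies quoted in the post above:

```python
# If each of the 31 studies really had about 57% power, the chance that
# every single one reaches significance is astronomically small -- so a
# perfectly unanimous literature implies a file drawer and/or p-hacking.
power = 0.57
n_studies = 31
p_all_significant = power ** n_studies  # roughly 3e-08
```

In other words, a literature like this should contain a dozen or so non-significant results; their complete absence is itself the evidence that something is missing from the published record.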
The argument is inescapable: studies that are underpowered for the detection of plausible effects must occasionally return non-significant results even when the research hypothesis is true – the absence of these results is evidence that something is amiss in the published record. Furthermore, the existence of a substantial file-drawer effect undermines the two main tools that psychologists use to accumulate evidence for a broad hypothesis: meta-analysis and conceptual replication. Clearly, the experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it. This was simply an error: I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through. When questions were later raised about the robustness of priming results, I hoped that the authors of this research would rally to bolster their case with stronger evidence, but this did not happen.

I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested. I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert’s phrase. I would be happy to see each of them replicated in a large sample. The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.

Dr. R (February 14, 2017 at 8:57 pm)

Dear Daniel Kahneman, Thank you for your response to my blog. Science relies on trust, and we all knew that non-significant results were not published, but we had no idea how weak the published results were. Nobody expected a train wreck of this magnitude. Hindsight (like my bias analysis of old studies) is 20/20. The real challenge is how the field and individuals respond to the evidence of a major crisis. I hope more senior psychologists will follow your example and work towards improving our science. Although we have fewer answers today than we thought we had five years ago, we still have many important questions that deserve a scientific answer.

Jeff Bowers

Dear Daniel Kahneman, There is another reason to be sceptical of many of the social priming studies. You wrote: “I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions.” However, there is an important constraint on subliminal priming that needs to be taken into account: subliminal priming effects are very short-lived, on the order of seconds. So any claim that a masked prime affects behavior for an extended period of time seems at odds with these more basic findings. Perhaps social priming is more powerful than basic cognitive findings suggest, but it does raise questions. Here is a link to an old paper showing that masked *repetition* priming is short-lived. Presumably semantic effects will be even more transient.

Hal Pashler (February 15, 2017 at 4:00 pm)

Good point, Jeff. One might ask if this is something specific to repetition priming, but associative semantic priming is also fleeting.
In our JEP:G paper failing to replicate money priming, we noted: “For example, Becker, Moscovitch, Behrmann, and Joordens (1997) found that lexical decision priming effects disappeared if the prime and target were separated by more than 15 seconds, and similar findings were reported by Meyer, Schvaneveldt, and Ruddy (1972). In brief, classic priming effects are small and transient even if the prime and measure are strongly associated (e.g., NURSE-DOCTOR), whereas money priming effects are [purportedly] large and relatively long-lasting even when the prime and measure are seemingly unrelated (e.g., a sentence related to money and the desire to be alone).”