Thursday, September 22, 2016

2,000-year-old accurate Biblical scroll

Modern Technology Unlocks Secrets of a Damaged Biblical Scroll
http://www.nytimes.com/2016/09/22/science/ancient-sea-scrolls-bible.html?smprod=nytcore-ipad&smid=nytcore-ipad-share&_r=0

A composite image of the completed virtual unwrapping of the En-Gedi scroll. SEALES ET AL., SCI. ADV. 2016; 2: E1601247

By NICHOLAS WADE, September 21, 2016

Nearly half a century ago, archaeologists found a charred ancient scroll in the ark of a synagogue on the western shore of the Dead Sea. The lump of carbonized parchment could not be opened or read. Its curators did nothing but conserve it, hoping that new technology might one day emerge to make the scroll legible.

Just such a technology has now been perfected by computer scientists at the University of Kentucky. Working with biblical scholars in Jerusalem, they have used a computer to unfurl a digital image of the scroll. It turns out to hold a fragment identical to the Masoretic text of the Hebrew Bible and, at nearly 2,000 years old, is the earliest instance of the text.

The writing retrieved by the computer from the digital image of the unopened scroll is amazingly clear and legible, in contrast to the scroll’s blackened and beaten-up exterior. “Never in our wildest dreams did we think anything would come of it,” said Pnina Shor, the head of the Dead Sea Scrolls Project at the Israel Antiquities Authority.

Scholars say this remarkable new technique may make it possible to read other scrolls too brittle to be unrolled.

The scroll’s content, the first two chapters of the Book of Leviticus, has consonants — early Hebrew texts didn’t specify vowels — that are identical to those of the Masoretic text, the authoritative version of the Hebrew Bible and the one often used as the basis for translations of the Old Testament in Protestant Bibles.
The Dead Sea scrolls, those found at Qumran and elsewhere around the Dead Sea, contain versions quite similar to the Masoretic text but with many small differences. The text in the scroll found at the En-Gedi excavation site in Israel decades ago has none, according to Emanuel Tov, an expert on the Dead Sea scrolls at the Hebrew University of Jerusalem.

“We have never found something as striking as this,” Dr. Tov said. “This is the earliest evidence of the exact form of the medieval text,” he said, referring to the Masoretic text.

The experts say this new method may make it possible to read other ancient scrolls, including several Dead Sea scrolls and about 300 carbonized ones from Herculaneum, which were destroyed by the volcanic eruption of Mount Vesuvius in A.D. 79.

The date of the En-Gedi scroll is the subject of conflicting evidence. A carbon-14 measurement indicates that the scroll was copied around A.D. 300. But the style of the ancient script suggests a date nearer to A.D. 100. “We may safely date this scroll” to between A.D. 50 and 100, wrote Ada Yardeni, an expert on Hebrew paleography, in an article in the journal Textus. Dr. Tov said he was “inclined toward a first-century date, based on paleography.”

The feat of recovering the text was made possible by software programs developed by W. Brent Seales, a computer scientist at the University of Kentucky. Inspired by the hope of reading the many charred and unopenable scrolls found at Herculaneum, near Pompeii in Italy, Dr. Seales has been working for the last 13 years on ways to read the text inside an ancient scroll.

Methods like CT scans can pick out blobs of ink inside a charred scroll, but the jumble of letters is unreadable unless each letter can be assigned to the surface on which it is written. Dr. Seales realized that the writing surface of the scroll had first to be reconstructed and the letters then stuck back to it.
He succeeded in 2009 in working out the physical structure of the ruffled layers of papyrus in a Herculaneum scroll. He has since developed a method, called virtual unwrapping, to model the surface of an ancient scroll in the form of a mesh of tiny triangles. Each triangle can be resized by the computer until the virtual surface makes the best fit to the internal structure of the scroll, as revealed by the scanning method. The blobs of ink are assigned to their right place on the structure, and the computer then unfolds the whole 3-D structure into a 2-D sheet.

The suite of software programs, called Volume Cartography, will become open source when Dr. Seales’s current government grant ends, he said.

The En-Gedi scroll was brought to Dr. Seales’s attention by Dr. Shor. A colleague, Sefi Porat, who had helped excavate the En-Gedi synagogue in 1970, was preparing a final publication of the findings.

The scroll from En-Gedi rendered from the micro-CT scan. B. SEALES

Dr. Porat asked Dr. Shor to scan the scroll and other artifacts as part of a project to create images of all Dead Sea scroll material, and showed her a box full of lumps of charcoal. “I said, ‘There is nothing we can do because our system isn’t geared toward these chunks,’ ” she said. But because she was submitting other objects for a high-resolution scan, she put one of the lumps in with other items.

Dr. Shor had the lump scanned by a commercially available, X-ray based, micro-computed tomography machine, of the kind used for fine-resolution scanning of biological tissues. Knowing of Dr. Seales’s work, she sent him the scan and asked him to analyze it. Both were surprised when, after several refinements, an image emerged with clear and legible script.

“We were amazed at the quality of the images — much of the text is as readable as that of unharmed Dead Sea scrolls,” said Michael Segal, a biblical scholar at the Hebrew University of Jerusalem, who helped analyze the text.
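The virtual-unwrapping pipeline described above — fit a surface to the coiled sheet inside the scan, attach the ink to that surface, then flatten it — can be illustrated with a toy 2-D sketch. This is a hypothetical simplification, not the Volume Cartography software: it stands in a cross-section of the scroll as an Archimedean spiral, uses a synthetic ink signal in place of a real CT volume, and “unrolls” the sheet by arc length. All names and parameters here are assumptions for illustration only.

```python
import numpy as np

def spiral_points(turns=4, samples_per_turn=200, pitch=1.0):
    """Sample points along an Archimedean spiral r = pitch * theta / (2*pi),
    a toy stand-in for the coiled cross-section of a rolled scroll."""
    theta = np.linspace(0.1, 2 * np.pi * turns, turns * samples_per_turn)
    r = pitch * theta / (2 * np.pi)
    points = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    return points, theta

def unwrap(points, intensity):
    """Flatten the curled sheet: map each surface point to its cumulative
    arc length along the spiral, keeping the ink intensity sampled there."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])          # distance along sheet
    return arc, intensity                                  # a 1-D "unrolled" scroll

# Synthetic "ink": dark blobs at a few angular positions along the sheet.
pts, theta = spiral_points()
ink = (np.sin(5 * theta) > 0.95).astype(float)  # stand-in for a CT ink signal
flat_x, flat_ink = unwrap(pts, ink)             # positions and ink on the flat sheet
```

The real method does the analogous thing in three dimensions: the triangle mesh plays the role of the spiral curve, and the best-fit step replaces the hand-chosen geometry here.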
The surviving content of the En-Gedi scroll, the first two chapters of Leviticus, is part of a listing of the various sacrifices that were performed in biblical times at the temple in Jerusalem. Although some text has previously been identified in ancient artifacts, “the En-Gedi manuscript represents the first severely damaged, ink-based scroll to be unrolled and identified noninvasively,” Dr. Seales and his colleagues write in the Thursday issue of Science.

Richard Janko, a classical scholar at the University of Michigan, said the carbonized scrolls from Herculaneum were a small section of a much larger library at a grand villa probably owned by Julius Caesar’s father-in-law, Lucius Calpurnius Piso. Much of the villa is still unexcavated, and its library could contain long-lost works of Latin and Greek literature. Successful reading of even a single scroll from Herculaneum with Dr. Seales’s method would spur excavation of the rest of Piso’s villa, Dr. Janko said.

Both Dr. Tov and Dr. Segal said that scholars might come to consider the En-Gedi manuscript as a Dead Sea scroll, especially if the early date indicated by paleography is confirmed. “It doesn’t tell us what was the original text, only that the Masoretic text is a very ancient text in all of its details,” Dr. Segal said. “And we now have evidence that this text was being used from a very early date by Jews in the land of Israel.”

Thursday, September 15, 2016

Neither intelligence nor education can stop you from forming prejudiced opinions – but an inquisitive attitude may help you make wiser judgements.
http://www.bbc.com/future/story/20160907-how-curiosity-can-protect-the-mind-from-bias
By Tom Stafford, 8 September 2016

Ask a left-wing Brit what they believe about the safety of nuclear power, and you can guess their answer. Ask a right-wing American about the risks posed by climate change, and you can also make a better guess than if you didn’t know their political affiliation. Issues like these feel like they should be informed by science, not our political tribes, but sadly, that’s not what happens. Psychology has long shown that education and intelligence won’t stop your politics from shaping your broader worldview, even if those beliefs do not match the hard evidence. Instead, your ability to weigh up the facts may depend on a less well-recognised trait – curiosity.

The political lens

There is now a mountain of evidence to show that politics doesn’t just help predict people’s views on some scientific issues; it also affects how they interpret new information. This is why it is a mistake to think that you can somehow ‘correct’ people’s views on an issue by giving them more facts, since study after study has shown that people have a tendency to selectively reject facts that don’t fit with their existing views. This leads to the odd situation that people who are most extreme in their anti-science views – for example, sceptics of the risks of climate change – are more scientifically informed than those who hold anti-science views but less strongly.

But smarter people shouldn’t be susceptible to prejudice swaying their opinions, right? Wrong.
Other research shows that people with the most education, highest mathematical abilities, and the strongest tendencies to be reflective about their beliefs are the most likely to resist information which should contradict their prejudices. This undermines the simplistic assumption that prejudices are the result of too much gut instinct and not enough deep thought. Rather, people who have the facility for deeper thought about an issue can use those cognitive powers to justify what they already believe and find reasons to dismiss apparently contrary evidence.

It’s a messy picture, and at first looks like a depressing one for those who care about science and reason. A glimmer of hope can be found in new research from a collaborative team of philosophers, film-makers and psychologists led by Dan Kahan of Yale University. Kahan and his team were interested in politically biased information processing, but also in studying the audience for scientific documentaries and using this research to help film-makers.

They developed two scales. The first measured a person’s scientific background, a fairly standard set of questions asking about knowledge of basic scientific facts and methods, as well as quantitative judgement and reasoning. The second scale was more innovative. The idea of this scale was to measure something related but independent – a person’s curiosity about scientific issues, not how much they already knew.

This second scale was also innovative in how it measured scientific curiosity. As well as asking some questions, the researchers also gave people choices about what material to read as part of a survey about reactions to news. If an individual chose to read science stories rather than sport or politics, their science curiosity score was marked up.

Armed with their scales, the team then set out to see how they predicted people’s opinions on public issues which should be informed by science.
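The two-part measure described above — self-report questions plus a behavioural credit for choosing to read a science story — can be sketched as a simple scoring rule. This is a hypothetical illustration only: the item scale, the weights, and the bonus size are my assumptions, not Kahan’s actual instrument.

```python
def curiosity_score(item_responses, chosen_story_topic):
    """Combine self-reported interest with a behavioural story-choice measure.

    item_responses: answers to interest questions, each on a 0-3 scale.
    chosen_story_topic: topic of the news story the respondent opted to read.
    Returns a score between 0 and 1 (weights are illustrative assumptions).
    """
    base = sum(item_responses) / (3 * len(item_responses))  # normalise to 0..1
    bonus = 0.25 if chosen_story_topic == "science" else 0.0  # behavioural credit
    return min(1.0, base + bonus)

print(curiosity_score([3, 2, 3], "science"))  # high self-report + science pick -> 1.0
print(curiosity_score([0, 0, 0], "sports"))   # no interest, no science pick  -> 0.0
```

The design point the behavioural bonus captures is that stated interest and revealed interest are measured separately, so the scale is not purely self-report.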
With the scientific knowledge scale, the results were depressingly predictable. The left-wing participants – liberal Democrats – tended to judge issues such as global warming or fracking as significant risks to human health, safety or prosperity. The right-wing participants – conservative Republicans – were less likely to judge the issues as significant risks. What’s more, the liberals with more scientific background were most concerned about the risks, while the conservatives with more scientific background were least concerned. That’s right – higher levels of scientific education result in a greater polarisation between the groups, not less.

So much for scientific background, but scientific curiosity showed a different pattern. Differences between liberals and conservatives still remained – on average there was still a noticeable gap in their estimates of the risks – but their opinions were at least heading in the same direction. For fracking, for example, more scientific curiosity was associated with more concern, for both liberals and conservatives.

The team confirmed this using an experiment which gave participants a choice of science stories, either in line with their existing beliefs, or surprising to them. Those participants who were high in scientific curiosity defied the predictions and selected stories which contradicted their existing beliefs – this held true whether they were liberal or conservative. And, in case you are wondering, the results hold for issues in which political liberalism is associated with the anti-science beliefs, such as attitudes to GMOs or vaccinations.

So, curiosity might just save us from using science to confirm our identity as members of a political tribe. It also shows that to promote a greater understanding of public issues, it is as important for educators to convey their excitement about science and the pleasures of finding out stuff, as it is to teach people some basic curriculum of facts.

Monday, September 5, 2016

The Tyranny of Simple Explanations
http://www.theatlantic.com/science/archive/2016/08/occams-razor/495332/

The history of science has been distorted by a longstanding conviction that correct theories about nature are always the most elegant ones.

Imagine you’re a scientist with a set of results that are equally well predicted by two different theories. Which theory do you choose? This, it’s often said, is just where you need a hypothetical tool fashioned by the 14th-century English Franciscan friar William of Ockham, one of the most important thinkers of the Middle Ages. Called Ockham’s razor (more commonly spelled Occam’s razor), it advises you to seek the more economical solution: In layman’s terms, the simplest explanation is usually the best one.

Occam’s razor is often stated as an injunction not to make more assumptions than you absolutely need. What William actually wrote (in his Summa Logicae, 1323) is close enough, and has a pleasing economy of its own: “It is futile to do with more what can be done with fewer.” Isaac Newton more or less restated Ockham’s idea as the first rule of philosophical reasoning in his great work Principia Mathematica (1687): “We are to admit no more causes of natural things, than such as are both true and sufficient to explain their appearances.” In other words, keep your theories and hypotheses as simple as they can be while still accounting for the observed facts.

This sounds like good sense: Why make things more complicated than they need be? You gain nothing by complicating an explanation without some corresponding increase in its explanatory power. That’s why most scientific theories are intentional simplifications: They ignore some effects not because they don’t happen, but because they’re thought to have a negligible effect on the outcome. Applied this way, simplicity is a practical virtue, allowing a clearer view of what’s most important in a phenomenon.
Yet there is absolutely no reason to believe that the simplest theory is the one most likely to be true. That, though, is what Francis Crick was driving at when he warned that Occam’s razor (which he equated with advocating “simplicity and elegance”) might not be well suited to biology, where things can get very messy. While it’s true that “simple, elegant” theories have sometimes turned out to be wrong (a classic example being Alfred Kempe’s flawed 1879 proof of the “four-color theorem” in mathematics), it’s also true that simpler but less accurate theories can be more useful than complicated ones for clarifying the bare bones of an explanation. There’s no easy equation between simplicity and truth, and Crick’s caution about Occam’s razor just perpetuates misconceptions about its meaning and value.

The worst misuses, however, fixate on the idea that the razor can adjudicate between rival theories. I have found no single instance where it has served this purpose to settle a scientific debate. Worse still, the history of science is often distorted in attempts to argue that it has.

Take the debate between the ancient geocentric view of the universe—in which the sun and planets move around a central Earth—and Nicolaus Copernicus’s heliocentric theory, with the Sun at the center and the Earth and other planets moving around it. In order to get the mistaken geocentric theory to work, ancient philosophers had to embellish circular planetary orbits with smaller circular motions called epicycles. These could account, for example, for the way the planets sometimes seem, from the perspective of the Earth, to be executing backwards loops along their path. It is often claimed that, by the 16th century, this Ptolemaic model of the universe had become so laden with these epicycles that it was on the point of falling apart. Then along came the Polish astronomer with his heliocentric universe, and no more epicycles were needed.
The two theories explained the same astronomical observations, but Copernicus’s was simpler, and so Occam’s razor tells us to prefer it. This is wrong for many reasons.

First, Copernicus didn’t do away with epicycles. Largely because planetary orbits are in fact elliptical, not circular, he still needed them (and other tinkering, such as a slightly off-center Sun) to make the scheme work. It isn’t even clear that he used fewer epicycles than the geocentric model did. In an introductory tract called the Commentariolus, published around 1514, he said he could explain the motions of the heavens with “just” 34 epicycles. Many later commentators took this to mean that the geocentric model must have needed many more than 34, but there’s no actual evidence for that. And the historian of astronomy Owen Gingerich has dismissed the common assumption that the Ptolemaic model was so epicycle-heavy that it was close to collapse. He argues that a relatively simple design was probably still in use in Copernicus’s time.

So the reasons for preferring Copernican theory are not so clear. It certainly looked nicer: Ignoring the epicycles and other modifications, you could draw it as a pleasing system of concentric circles, as Copernicus did. But this didn’t make it simpler. In fact, some of the justifications Copernicus gives are more mystical than scientific: In his main work on the heliocentric theory, De revolutionibus orbium coelestium, he maintained that it was proper for the sun to sit at the centre “as if resting on a kingly throne,” governing the stars like a wise ruler.

If Occam’s razor doesn’t favor Copernican theory over Ptolemy, what does it say for the cosmological model that replaced Copernicus’s: the elliptical planetary orbits of the 17th-century German astronomer Johannes Kepler? By making the orbits ellipses, Kepler got rid of all those unnecessary epicycles.
Yet his model wasn’t explaining the same data as Copernicus with a more economical theory; because Kepler had access to the improved astronomical observations of his mentor Tycho Brahe, his model gave a more accurate explanation. Kepler wasn’t any longer just trying to figure out the arrangement of the cosmos. He was also starting to seek a physical mechanism to explain it—the first step towards Newton’s law of gravity.

The point here is that, as a tool for distinguishing between rival theories, Occam’s razor is only relevant if the two theories predict identical results but one is simpler than the other—which is to say, it makes fewer assumptions. This is a situation rarely if ever encountered in science. Much more often, theories are distinguished not by making fewer assumptions but different ones. It’s then not obvious how to weigh them up. From a 17th-century perspective, it’s not even clear that Kepler’s single ellipses are “simpler” than Copernican epicycles. Circular orbits seemed a more aesthetically pleasing and divine basis for the universe, so Kepler adopted ellipses only with hesitation. (Mindful of this, even Galileo refused to accept Kepler’s ellipses.)

It’s been said also that Darwinian evolution, by allowing for a single origin of life from which all other organisms descended, was a simplification of what it replaced. But Darwin was not the first to propose evolution from a common ancestor (his grandfather Erasmus was one of those predecessors), and his theory had to assume a much longer history of the Earth than did those which supposed divine creation. Sure, a supernatural creator might seem like a pretty complex assumption today, but it wouldn’t have looked that way in the devout Victorian age. Even today, whether or not the “God hypothesis” simplifies matters remains contentious.
The fact that our universe sports physical constants, such as the strength of fundamental forces, that seem oddly fine-tuned to enable life to exist, is one of the most profound puzzles in cosmology. An increasingly popular answer among cosmologists is to suggest that ours is just one of a vast, perhaps infinite, number of universes with different constants, and ours looks fine-tuned purely because we’re here to see it. There are theories that lend some credence to this view, but it rather lacks the economy demanded by Occam’s razor, and it is hardly surprising if some people decide that a single divine creation, with life as part of the plan, is more parsimonious.

What’s more, scientific models that differ in their assumptions typically make slightly different predictions, too. It is these predictions, not criteria of “simplicity,” that are of greatest use for evaluating rival theories. The judgement may then depend on where you look: Different theories may have predictive strengths in different areas.

Another popular example advanced in favor of Occam’s razor is the replacement of the phlogiston theory of chemistry—the idea that a substance called phlogiston was released when things burn in air—by the chemist Antoine Lavoisier’s theory of oxygen in the late 18th century. However, it’s far from obvious that, at the time, the notion that reaction with oxygen in air, rather than expulsion of phlogiston, was either simpler or more consistent with the observed “facts” about combustion. As the historian of science Hasok Chang has argued, by the standards of its times, “the old concept of phlogiston was no more mistaken and no less productive than Lavoisier’s concept of oxygen”. But as with so many scientific ideas that have fallen by the wayside, it has been deemed necessary not just to discard it but to vilify and ridicule it so as to paint a triumphant picture of progress from ignorance to enlightenment.
I can think of only one instance in science where rival “theories” contend to explain exactly the same set of facts on the basis of easily enumerable and comparable assumptions. These are not “theories” in the usual sense, but interpretations: namely, interpretations of quantum mechanics, the theory generally needed to describe how objects behave at the scale of atoms and subatomic particles.

Quantum mechanics works exceedingly well as a mathematical theory for predicting phenomena, but there is still no agreement on what it tells us about the fundamental fabric of reality. The theory predicts not what will happen in a quantum experiment or observation, but only what the probabilities of the various outcomes are. Yet in practice we see just a single outcome. How then do we get from calculating probabilities to anticipating definite, unique observations?

One answer is that there is a process called “collapse of the wavefunction,” through which, from all the outcomes allowed by quantum theory, just one emerges at the size scales that humans can perceive. But it’s not at all clear how this putative collapse occurs. Some say it’s just a convenient fiction that describes the subjective updating of our knowledge when we make a measurement—rather like the way all 52 probabilities for the top card of a shuffled pack collapse to just one when we look. Others think wavefunction collapse might be a real physical process, a bit like radioactive decay, which can be triggered by the act of looking with human-scale instruments. Either way, there’s no prescription for it in quantum theory; it needs to be added “by hand.”

In what looks like a more economical interpretation, first proposed by the physicist Hugh Everett III in 1957, there is no collapse at all. Instead, all the possible outcomes are realized—but they happen in different universes, which “split” when a measurement is made.
This is the Many Worlds Interpretation (MWI) of quantum mechanics. We only see one outcome, because we ourselves split too, and each copy can only perceive events in one world.

It’s a testament to scientists’ confusion about Occam’s razor that it has been invoked both to defend and to attack the MWI. Some consider this ceaseless, bewildering proliferation of universes to be the antithesis of William of Ockham’s principle of economy. “As far as economy of thought is concerned … there never was anything in the history of thought so bluntly contrary to Ockham’s rule than Everett’s many worlds,” the quantum theorist Roland Omnès writes in The Interpretation of Quantum Mechanics. Others who favor the MWI wave off such criticisms by saying that Occam’s razor was never a binding criterion anyway. And still other advocates, like Sean Carroll, a cosmologist at the California Institute of Technology, point out that Occam’s razor is meant only to apply to the assumptions of a theory, not the predictions. Because the Many Worlds Interpretation accounts for all the observations without the added assumption of collapse of the wavefunction, says Carroll, the MWI is preferable—according to Occam’s razor—to the alternatives.

But this is all just special pleading. Occam’s razor was never meant for paring nature down to some beautiful, parsimonious core of truth. Because science is so difficult and messy, the allure of a philosophical tool for clearing a path or pruning the thickets is obvious. In their readiness to find spurious applications of Occam’s razor in the history of science, or to enlist, dismiss, or reshape the razor at will to shore up their preferences, scientists reveal their seduction by this vision. But they should resist it. The value of keeping assumptions to a minimum is cognitive, not ontological: It helps you to think. A theory is not “better” if it is simpler—but it might well be more useful, and that counts for much more.
ABOUT THE AUTHOR
PHILIP BALL is a writer based in London. His work has appeared in Nature, The New York Times, and Chemistry World. He is the author of The Water Kingdom: A Secret History of China and Critical Mass: How One Thing Leads to Another.