Monday, December 5, 2016

Insight: The 'right' to be spared from guilt
By George Will. Published Dec. 5, 2016

The word "inappropriate" is increasingly used inappropriately. It is useful to describe departures from good manners and other social norms, such as wearing white after Labor Day and using the salad fork with the entree. But the adjective has become a splatter of verbal fudge, a weasel word falsely suggesting measured seriousness. Its misty imprecision does not disguise but advertises the user's moral obtuseness.

A French court has demonstrated how "inappropriate" can be an all-purpose device of intellectual evasion and moral cowardice. The court said it is inappropriate to do something that might disturb people who killed their unborn babies for reasons that were, shall we say, inappropriate.

Prenatal genetic testing enables pregnant women to be apprised of a variety of problems with their unborn babies, including Down syndrome. It is a congenital condition resulting from a chromosomal defect that causes varying degrees of mental disability and some physical abnormalities, such as low muscle tone, small stature, flatness of the back of the head and an upward slant to the eyes. Within living memory, Down syndrome people were called Mongoloids. Now they are included in the category called "special needs" people. What they most need is nothing special. It is for people to understand their aptitudes, and therefore to quit killing them in utero.

Down syndrome, although not common, is among the most common congenital anomalies, at 49.7 per 100,000 births. In approximately 90 percent of instances when prenatal genetic testing reveals Down syndrome, the baby is aborted. Cleft lips or palates, which occur in 72.6 per 100,000 births, also can be diagnosed in utero and sometimes are the reason a baby is aborted. 
In 2014, in conjunction with World Down Syndrome Day (March 21), the Global Down Syndrome Foundation prepared a two-minute video titled "Dear Future Mom" to assuage the anxieties of pregnant women who have learned that they are carrying a Down syndrome baby. More than 7 million people have seen the video online in which one such woman says, "I'm scared: What kind of life will my child have?" Down syndrome children from many nations tell the woman that her child will hug, speak, go to school, tell you he loves you and "can be happy, just like I am - and you'll be happy, too." The French state is not happy about this. The court has ruled that the video is - wait for it - "inappropriate" for French television. The court upheld a ruling in which the French Broadcasting Council had banned the video as a commercial. The court said the video's depiction of happy Down syndrome children was "likely to disturb the conscience of women who had lawfully made different personal life choices." So, what happens on campuses does not stay on campuses. There, in many nations, sensitivity bureaucracies have been enforcing the relatively new entitlement to be shielded from whatever might disturb, even inappropriate jokes. And now this rapidly metastasizing right has come to this: A video that accurately communicates a truthful proposition - that Down syndrome people can be happy and give happiness - should be suppressed because some people might become ambivalent, or morally queasy, about having chosen to extinguish such lives because . . . This is why the video giving facts about Down syndrome people is so subversive of the flaccid consensus among those who say aborting a baby is of no more moral significance than removing a tumor from a stomach. Pictures persuade. Today's improved prenatal sonograms make graphic the fact that the moving fingers and beating heart are not mere "fetal material." They are a baby. 
Toymaker Fisher-Price, children's apparel manufacturer OshKosh, McDonald's and Target have featured Down syndrome children in ads that the French court would probably ban from television. The court has said, in effect, that the lives of Down syndrome people - and by inescapable implication, the lives of many other disabled people - matter less than the serenity of people who have acted on one or more of three vicious principles: That the lives of the disabled are not worth living. Or that the lives of the disabled are of negligible value next to the desire of parents to have a child who has no special, meaning inconvenient, needs. Or that government should suppress the voices of Down syndrome children in order to guarantee other people's right not to be disturbed by reminders that they have made lethal choices on the basis of one or both of the first two inappropriate principles.

Monday, November 14, 2016

Prince or Pauper? Researchers Find Functional Pseudogene in Fruit Fly
Evolution News & Views, November 14, 2016

Suppose we introduced you to a friend and said he works as a pseudoscientist. You would be immediately suspicious of his white lab coat and apparent command of scientific language in subsequent conversation. After all, he just pretends to be a scientist. He's fake. He's false. He is bogus, sham, phony, mock, ersatz, quasi-, spurious, deceptive, misleading, assumed, contrived, affected, insincere, and all the other negative synonyms we associate with the prefix pseudo. But then suppose we corrected the description and said that, actually, he is a "pseudo-pseudoscientist." The double negative suddenly opens the possibility that he really is a scientist. He's faking his fakery, contriving his contrivance, mocking insincerity for some reason. Maybe he's a psychologist studying the effects of perceived pretentiousness, using you as his lab rat. Maybe he's a real MD playing a doctor on a fictional TV show, leading us to believe he is "just an actor." Think of the guards in Mark Twain's The Prince and the Pauper who quickly escort the shabbily dressed prince off the palace grounds without noticing the royal seal in his pocket. Have scientists too quickly dismissed pseudogenes as broken genes, worthless transcripts of DNA without function? Could at least some of them be "pseudo-pseudogenes"? A surprising paper in Nature actually uses that term: "Olfactory receptor pseudo-pseudogenes." Researchers in Switzerland found a case in a species of fruit fly that defies the pseudogene paradigm. Pseudogenes are often suspected of being broken genes when a premature termination codon (PTC) is found in the DNA sequence. Obviously, such a gene could not be translated into a functional protein, right? Translation would stop before the messenger RNA is complete. Often, that is the case. What good is that? 
These scientists found something interesting about an olfactory receptor gene in Drosophila sechellia, "an insect endemic to the Seychelles that feeds almost exclusively on the ripe fruit of Morinda citrifolia." They looked at its Ir75a locus, a gene that encodes an olfactory receptor for acetic acid in its more famous cousin D. melanogaster. Finding a PTC in this species' Ir75a gene, they initially thought it was a broken gene -- a pseudogene. The abstract begins with the usual evolutionary rhetoric about pseudogenes: Pseudogenes are generally considered to be non-functional DNA sequences that arise through nonsense or frame-shift mutations of protein-coding genes. Although certain pseudogene-derived RNAs have regulatory roles, and some pseudogene fragments are translated, no clear functions for pseudogene-derived proteins are known. Olfactory receptor families contain many pseudogenes, which reflect low selection pressures on loci no longer relevant to the fitness of a species. [Emphasis added.] That's their setup for the surprise announcement. This pseudogene might just be a "pseudo-pseudogene"! It might be a prince masquerading as a pauper. What started them on their paradigm-breaking find was noticing that this apparent pseudogene is fixed in the population, suggesting it has a function. Taking a closer look, they found that the translation machinery is able to "read through" the premature stop codon, the PTC. How? They're not sure, but they found something else interesting: the read-through operation works efficiently only in neurons, not other types of cells. That opens up a whole new way of looking at pseudogenes: some of them might be tissue-specific regulators. It is not yet clear how the D. sechellia Ir75a PTC is read through. It cannot be because of insertion of the alternative amino acid selenocysteine (which is incorporated at UGA). Moreover, no suppressor tRNAs are known in D. 
melanogaster and ribosomal frame-shifting is also unlikely because there is no change in the reading frame after the PTC. We suggest that read-through is due to PTC recognition by a near-cognate tRNA that allows insertion of an amino acid instead of translation termination. Although the trans-acting factors regulating read-through are unclear, the neuronal specificity of this process is reminiscent of RNA editing and micro-exon splicing, in which key responsible regulatory proteins are neuronally enriched. We therefore speculate that tissue-specific expression differences in tRNA populations underlie neuron-specific read-through. We might be tempted to dismiss this as a rare case of evolutionary tinkering. The gene broke, but natural selection found a way to tinker with it and get it to work. Perhaps. But further experimentation with D. melanogaster suggests that "pseudogenization" has a logical function: it works to tune odor sensitivity. The part of the gene downstream from the PTC apparently affects the type of receptor produced. What's more, this kind of regulation might not be rare. Read-through is detected only in neurons and is independent of the type of termination codon, but depends on the sequence downstream of the PTC. Furthermore, although the intact Drosophila melanogaster Ir75a orthologue detects acetic acid -- a chemical cue important for locating fermenting food found only at trace levels in Morinda fruit -- D. sechellia Ir75a has evolved distinct odour-tuning properties through amino-acid changes in its ligand-binding domain. We identify functional PTC-containing loci within different olfactory receptor repertoires and species, suggesting that such 'pseudo-pseudogenes' could represent a widespread phenomenon. Experiments showed that the Ir75a 'pseudo-pseudogene' actually yields a functional odor receptor, but not for acetic acid as in D. melanogaster. 
Instead, it makes a receptor tuned for similar acidic odorants unique to food sources available on the Seychelles. The tissue-specific read-through capabilities of this gene provide the fly with a way to detect food sources it needs in its environment. Perhaps nothing beyond chance mutation or neutral drift is needed to explain this. On the other hand, the research team may have stumbled onto an important function for pseudogenes. Our efforts to understand the molecular basis of the loss of olfactory sensitivity to acetic acid in D. sechellia led us to discover a notable and, to our knowledge, unprecedented evolutionary trajectory of a presumed pseudogene. Efficient read-through of a PTC in D. sechellia Ir75a permits production of a full-length receptor protein, in which reduction in acetic acid sensitivity and gain of responses to other acids is due to lineage-specific amino acid substitutions in the LBD pocket. The PTC does not noticeably influence the activity of D. sechellia Ir75a, suggesting that it is selectively neutral from an evolutionary standpoint. We propose that it became fixed through genetic drift, given D. sechellia's persistent low effective population size. They can call it an "evolutionary trajectory" if they wish. Another way of looking at this is as a design feature. The premature stop codon, or PTC, may be more elegant than a stop sign. It may be a switch, telling the translation machinery to pay attention to the downstream code if -- and only if -- translation is taking place inside a neuronal cell. In non-neuronal cells, the PTC might indeed say "stop," delivering the transcript to the trash. In neurons, though, environmental cues may trigger pre-existing routines to fine-tune the sensitivity to odorants available in food sources. A design perspective could accelerate discoveries along this line. 
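The read-through mechanism described above lends itself to a simple illustration. The sketch below simulates tissue-specific stop-codon read-through; the miniature codon table, the choice of tryptophan as the substituted amino acid, and the tissue flag are illustrative assumptions, not data from the Nature paper:

```python
# Toy model of tissue-specific read-through of a premature termination
# codon (PTC). In "neuron" tissue a near-cognate tRNA is assumed to decode
# the stop codon as an amino acid; everywhere else translation terminates.

CODON_TABLE = {
    "AUG": "M", "UUU": "F", "GCU": "A", "GGC": "G",
    "UGA": "*",  # stop codon (the PTC in this example)
}

def translate(mrna, tissue="other", readthrough_aa="W"):
    """Translate an mRNA codon by codon, with optional PTC read-through."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "X")
        if aa == "*":
            if tissue == "neuron":
                # near-cognate tRNA inserts an amino acid; translation continues
                protein.append(readthrough_aa)
                continue
            break  # normal termination: a truncated product
        protein.append(aa)
    return "".join(protein)

# One transcript, two outcomes: AUG UUU GCU UGA GGC UUU
transcript = "AUGUUUGCUUGAGGCUUU"
print(translate(transcript, tissue="other"))   # MFA (stops at the PTC)
print(translate(transcript, tissue="neuron"))  # MFAWGF (full-length product)
```

In the fly's case it is the full-length read-through product that works as an odor receptor; in non-neuronal tissue the PTC behaves like an ordinary stop sign.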
We've seen the tendency to dismiss things as evolutionary castoffs when their functions were not understood, only to find higher levels of organization at work. Introns are spliced out of messenger RNAs; they must be junk. Methyl groups interfere with translation; they must be mistakes. Retrotransposons must be parasites. Pseudogenes must be broken genes. Maybe not. If scientists had expected design, maybe they would have hit upon today's paradigms about epigenetics, alternative splicing and gene regulation sooner. Intelligent design theory doesn't require everything to be designed. It does, however, prevent a "premature stop" to dismissing things as not designed.
Sensitivity training at Harvard - in case you wondered about the impact of the liberal agenda of overcoming vicious social attitudes....

Harvard’s Rank and File
By Maia Silber, On Campus, Nov. 14, 2016

CAMBRIDGE, Mass. — Two men sit in the dining hall, leaning over trays filled with stacks of pancakes and glasses of blue Gatorade.

“She’s a solid 10. I’m banging her.”

“Hey! I called her.”

“We can flip a coin.”

When The Harvard Crimson reported that Harvard’s men’s soccer team circulated a sexually explicit “scouting report” evaluating female recruits, my friends and I were appalled, but not surprised. Nor were we surprised when the paper reported that the men’s cross-country team produced a similar document. We’d heard it before — in the dining hall, on the street, in the back of lecture halls — Harvard men rating and degrading Harvard women. After all, before he created Facebook in his Harvard dorm, Mark Zuckerberg made “facemash” — a site where Harvard students could deem their peers hot, or not. It may seem shocking that students at one of America’s most elite universities, in one of its most progressive states, would behave so crudely. But in fact those publicly shared scouting reports show Harvard students engaging in an activity at which we excel: rating and categorizing one another. Like most adolescents, we’re eager to define our identities, and determine our place on campus and in the world. In high school, many of us were known as “the kid who got into Harvard.” Here, we can all claim that title, so we sort ourselves into groups even more exclusive than the roughly 5 percent of applicants our school admits. By the time my family dropped me off in Harvard Square, I had already submitted applications for limited-enrollment freshman seminars and pre-orientation programs for students interested in the arts and social justice. 
At convocation, as Harvard’s president delivered a speech about the importance of forming a community, I worried that everyone had already found their friends for the next four years. I soon found that the students who competed for academic honors and leadership positions during the day staged different contests at night. On Friday and Saturday evenings, young women dressed in bandage skirts and heels line up outside the clubhouses on Mt. Auburn Street. Shivering in the cold, they wait for the nod of a bouncer. On Sunday mornings, young men brag about their conquests. Now, as a senior, I stand behind the tables. While my friends and I don’t evaluate the appearance of female “compers” or candidates for leadership positions, we toss out superficial judgments about our fellow students all too easily: “He really dropped the ball on that project.” “She never smiles.” “She just doesn’t seem committed.” We gossip under the guise of meritocracy. Harvard’s competitiveness does not cause men to degrade women. Men — even, apparently, presidents — need no excuse to do that. Yet when we regularly evaluate one another’s fitness to join our organizations, attend our parties and become our friends, we give misogyny a vocabulary. We give it a place on our campus, and in our culture. It’s not just Harvard, either. We are the generation of the Buzzfeed listicle, the Yelp rating, the Tinder swipe and the Facebook like. Surely, the Paleolithic man ranked women on the walls of his cave, but the 21st-century man makes his lists for all the world to see. Each entry in the soccer team’s 2012 scouting report included, in addition to a nickname and a numerical value, a paragraphs-long assessment and a photograph culled from social media. The cross-country team designed spreadsheets, some of which allowed individual men to add comments about the women’s physical appearance. This “locker room talk” was not idle chatter, but a project that required time, effort and a certain kind of skill. 
We’ve honed that skill for years. Maia Silber is a senior at Harvard University.

Sunday, October 9, 2016

Dark Matter – An Update
October 9, 2016

In Genesis and Genes, I devoted some space to the notion of Dark Matter. I recently read an article in Nature about developments in this area, and I’d like to update my readers about this fascinating subject. What follows is an excerpt from Genesis and Genes (for the purpose of this post, I have omitted the endnotes that appear in the book). I will then comment on the article in Nature.

&&&

Nobody – including astronomers and cosmologists – knows what the universe is made of. Visible matter – the kind of stuff that people and planets are made of – is outweighed by a factor of 6 or 7 by invisible, cold dark matter. To put it another way, something like 95% of the universe is made up of stuff we can’t detect, except that it seems to exert a gravitational pull. Here is how one distinguished astronomer and author, James Kaler, puts it:

Our Galaxy, its stars revolving around the center under the influence of their combined gravity, is spinning too fast for what we see. Galaxies in clusters orbit around the clusters’ centers under the influence of their mutual gravities, but again, they move faster than expected. There must be something out there with enough of a gravitational hold to do the job, to speed things up, but it is completely unseen. Dark matter… We have no idea what constitutes it. Rather, there are many ideas, but none that can be proven.

A popular history of astronomy weighs in with this:

Over 90 per cent of our Universe is invisible – filled with particles of mysterious dark matter. And astronomers have no idea what it is. Theoretical physicists working on the kinds of particles produced in the Big Bang say that dark matter cannot be anything ordinary – it has to be something very exotic.

I don’t wish to labour the point, but I must. 
The public is subjected to absolute statements about our knowledge of the universe and its history so frequently that the average person is simply inured to the fact that there remain basic questions about our cosmic abode. To wit, we do not know what it is made of. Consider this. The most ambitious project in astronomy in the early 21st century is the SKA, or Square Kilometre Array, a network of radio telescopes that is gargantuan in every respect: complexity, size and cost. An article in TIME magazine about the instrument begins by asking the project manager what it is that astronomers wish to discover with this machine: For someone whose job title could read Man Most Likely to Blow Your Mind, Bernie Fanaroff looks pretty conventional… Consider the fact, says Fanaroff, that we have no idea what 96% of the universe is made of. Cosmologists have known for some time that only 4% of the universe is stuff like dust, gas and basic elements. Dark matter, says Fanaroff, accounts for 23% to 30%; dark energy makes up the rest. (Dark, Fanaroff explains, is the scientific term for “nobody knows what it is.”) That’s not an exaggeration – nobody knows anything significant about what makes up 96% of the universe. And this is acknowledged even by those who pretend to be able to answer ultimate questions in naturalistic terms. Lawrence Krauss is a world-famous physicist and an ardent atheist. His latest book, A Universe from Nothing: Why There Is Something Rather than Nothing (Free Press, 2012) was reviewed in the January 2012 issue of Nature, the world’s most respected science journal. Nature appointed Caleb Scharf, an astrobiologist at Columbia University, to aggrandise Krauss’s ideas about the universe popping out of absolutely nothing, but even he could not hide the gigantic lacuna in Krauss’s thesis: He notes that a number of vital empirical discoveries are, ominously, missing from our cosmic model. Dark matter is one. 
Despite decades of astrophysical evidence for its presence, and plausible options for its origins, physicists still cannot say much about it. We don’t know what this major mass component of the Universe is, which is a bit of a predicament. We even have difficulty accounting for every speck of normal matter in our local Universe. It is crucial to appreciate that dark matter is not something that was initially discovered in a laboratory, and whose existence was then used to explain some phenomenon. It is also not an entity whose existence was implied by some cosmological theory, and then applied to the problem of energetic stars. Dark matter is entirely hypothetical. Its existence was postulated to explain how the stars in spiral galaxies can orbit at such breakneck speeds without being flung off into the void. In other words, when astronomers tallied up all the mass in the universe, they came face to face with a phenomenon which they could not explain using known physical laws: those laws would indicate that stars in spiral galaxies should indeed be flying off in all directions. Since they aren’t, there must be something out there to prevent them from doing so. What that something is remains anybody’s guess, as Professor Kaler pointed out above. Many astronomers believe that there is matter out there; matter which for whatever reason, we cannot see. This is why they refer to this hypothetical entity as dark matter. They appear to have considerable fun in speculating on the nature of this hypothetical matter: is it made up of MACHOs (Massive Compact Halo Objects)? Or is it WIMPs (Weakly Interacting Massive Particles)? But since the whole exercise is built on speculation as to what could possibly be acting as a brake on those wayward stars, other scientists do not believe that dark matter even exists. And there is nothing to contradict their view. All you have to do is propose a plausible mechanism to restrain energetic stars from flying off into the cosmic sunset. 
[END OF QUOTATION FROM GENESIS AND GENES.]

&&&

A recent article in Nature, written by Jeff Hecht and cleverly entitled Dark Matter: What’s the Matter? provides a welcome update in this regard.[1] Hecht begins by introducing the subject: Most of the Universe is missing. The motion of the stars and galaxies allows astronomers to weigh it, and when they do, they see a major discrepancy in cosmological accounting. For every gram of ordinary matter that emits and absorbs light, the Universe contains around five grams of matter that responds to gravity, but is invisible to light. Physicists call this stuff dark matter, and as the search to identify it is now in its fourth decade, things are starting to get a little desperate. A little later, Hecht discusses a new attempt to crack the problem, one that has both supporters and detractors within the scientific community. Hecht is not optimistic about the latest approach: It looks unlikely that primordial black holes are the mysterious dark matter. And as time passes without a confirmed detection, even the most heavily backed theories are beginning to look less likely. A series of experiments have systematically searched for, and failed to find, the theoretical candidates for dark matter — one by one, the possibilities are being reduced. A raft of experiments designed to finally detect, or refute, the remaining candidates are now underway, each with vastly different approaches to the problem. As more options are crossed off the list, physicists may have to explore new ideas and reconsider alternative theories… — or accept that nature may have hidden dark matter just out of our reach. When Genesis and Genes was written, MACHOs – Massive Compact Halo Objects – were still considered candidates for Dark Matter. No longer: Decades of research have narrowed down the possibilities. Early favourites included not only black holes, but also other massive compact halo objects (MACHOs) made of ordinary matter. 
A series of studies, however, gradually ruled out most of the possibilities… But in the view of theoretical physicist John Ellis of King’s College London, “MACHOs are dead.” The other candidate for Dark Matter I mentioned in Genesis and Genes was WIMPs – Weakly Interacting Massive Particles. WIMPs still hold some promise for resolving the Dark Matter conundrum: Although MACHOs have fallen by the wayside, another candidate has hung around. A decade ago, physicists were largely convinced that dark matter was made up of weakly interacting massive particles (WIMPs)… WIMPs remain the leading candidate for dark matter. “Supersymmetry is beautiful mathematically,” says physicist Oliver Buchmueller of Imperial College London. “With just one weakly interacting particle, we can explain all the dark matter we see in the Universe.” Indeed, so well does the lightest of these hypothetical particles fit the bill for dark matter that it has been called “the WIMP miracle”, says physicist Leslie Rosenberg of the University of Washington in Seattle. But only in theory: But supersymmetrical particles have proved maddeningly elusive. Physicists at CERN, Europe’s particle-physics laboratory, are searching for WIMPs with the Large Hadron Collider (LHC) by smashing protons or atomic nuclei together to recreate the conditions of the early Universe… The longer the puzzle goes unsolved, the more twitchy the scientific community will become. “People are a little nervous,” says Rosenberg. Hecht goes on to discuss the difficult – and rather exotic – ways in which scientists use particle colliders to try to detect recalcitrant particles: Researchers won’t see dark matter directly. Instead, they look for signs that energy and momentum in collisions have gone missing when they should have been conserved. Ellis compares searching for evidence of dark matter to watching billiard balls roll away after the cue ball hits them on the break shot. 
If the balls on one side of the group were invisible, and only the balls rolling away on the opposite side could be seen, the path and nature of the unseen balls can still be deduced, he says. Physicists are using the paths of the particles they can see to identify the paths of the dark matter that they can’t. So far, nothing has come up. Dark Matter is a fascinating scientific problem. For informed consumers of science, a number of issues are important in this context:

1. We don’t know what 95% of the universe is made of! That’s astonishing. Members of the public should be aware that when peremptory remarks about the universe are made by scientists, or in magazine articles, or in documentaries, they hide enormous assumptions about how much we really know. As I explain in Genesis and Genes, Dark Matter (and Dark Energy) may one day turn out to be made of exotic particles; then again, it is quite possible that the scientific picture of our universe is seriously wrong, a possibility freely acknowledged by astronomers such as James Kaler and physicists like Mordechai Milgrom. Don’t be duped by those who insist that matter and energy form the fundamental substrate of our universe. This view originates in an ideology – scientism – and not in evidence from Nature itself. The only reasonable response to knowing how little we know about the universe is humility.

2. It is worth bearing in mind the similar situation that pertained in biology before the Junk DNA paradigm collapsed (see my previous post, Francis Collins Does Teshuva). In that context, many biologists dismissed about 95% of the human genome as junk, because they did not know what it did. This turned out to be a spectacular failure, delaying by several decades the onset of the age of epigenetics. In my view, physicists and astronomers are generally more open to the possibility of paradigm shifts than are biologists. 
They are also more likely to admit, in public, that major lacunae remain in our knowledge of the physical world.

3. All the methods that have been devised to detect Dark Matter rely on complicated statistical analyses to infer particles of Dark Matter. This is not a simple matter of observation, and lends itself to different interpretations. Here, too, the history of science would indicate that healthy scepticism be maintained when certain results are proclaimed.

REFERENCES:
[1] Retrieved 7th October 2016.
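The rotation-curve argument that runs through both the book excerpt and Hecht's article can be made concrete with a back-of-the-envelope Newtonian calculation. All the numbers below (the radius, the visible mass, the observed speed) are rough illustrative values chosen for this sketch, not figures from the article:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def keplerian_speed(enclosed_mass_kg, radius_m):
    """Circular orbital speed implied by the mass enclosed within radius_m."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

def implied_mass(observed_speed_ms, radius_m):
    """Mass that must lie within radius_m to sustain the observed speed."""
    return observed_speed_ms ** 2 * radius_m / G

# Rough illustrative values for a star orbiting far out in a spiral galaxy:
radius = 1.5e21        # ~50 kpc, in metres
m_visible = 1.2e41     # visible (luminous) mass interior to that radius, kg
v_observed = 2.2e5     # observed orbital speed, ~220 km/s

v_expected = keplerian_speed(m_visible, radius)
m_needed = implied_mass(v_observed, radius)

print(f"speed expected from visible mass: {v_expected / 1e3:.0f} km/s")   # ~73 km/s
print(f"mass required for observed speed: {m_needed / m_visible:.0f}x the visible mass")  # ~9x
```

With only the visible mass, the star should orbit at roughly a third of its observed speed; the several-fold extra mass the calculation demands, which no telescope sees, is exactly the gap the dark-matter hypothesis was invented to fill.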

Thursday, September 22, 2016

2,000-year-old accurate Biblical scroll

Modern Technology Unlocks Secrets of a Damaged Biblical Scroll
By NICHOLAS WADE, September 21, 2016

[Image: A composite image of the completed virtual unwrapping of the En-Gedi scroll. Seales et al., Sci. Adv. 2016; 2: e1601247]

Nearly half a century ago, archaeologists found a charred ancient scroll in the ark of a synagogue on the western shore of the Dead Sea. The lump of carbonized parchment could not be opened or read. Its curators did nothing but conserve it, hoping that new technology might one day emerge to make the scroll legible. Just such a technology has now been perfected by computer scientists at the University of Kentucky. Working with biblical scholars in Jerusalem, they have used a computer to unfurl a digital image of the scroll. It turns out to hold a fragment identical to the Masoretic text of the Hebrew Bible and, at nearly 2,000 years old, is the earliest instance of the text. The writing retrieved by the computer from the digital image of the unopened scroll is amazingly clear and legible, in contrast to the scroll’s blackened and beaten-up exterior. “Never in our wildest dreams did we think anything would come of it,” said Pnina Shor, the head of the Dead Sea Scrolls Project at the Israel Antiquities Authority. Scholars say this remarkable new technique may make it possible to read other scrolls too brittle to be unrolled. The scroll’s content, the first two chapters of the Book of Leviticus, has consonants — early Hebrew texts didn’t specify vowels — that are identical to those of the Masoretic text, the authoritative version of the Hebrew Bible and the one often used as the basis for translations of the Old Testament in Protestant Bibles. The Dead Sea scrolls, those found at Qumran and elsewhere around the Dead Sea, contain versions quite similar to the Masoretic text but with many small differences. 
The text in the scroll found at the En-Gedi excavation site in Israel decades ago has none, according to Emanuel Tov, an expert on the Dead Sea scrolls at the Hebrew University of Jerusalem. “We have never found something as striking as this,” Dr. Tov said. “This is the earliest evidence of the exact form of the medieval text,” he said, referring to the Masoretic text. The experts say this new method may make it possible to read other ancient scrolls, including several Dead Sea scrolls and about 300 carbonized ones from Herculaneum, which were destroyed by the volcanic eruption of Mount Vesuvius in A.D. 79. The date of the En-Gedi scroll is the subject of conflicting evidence. A carbon-14 measurement indicates that the scroll was copied around A.D. 300. But the style of the ancient script suggests a date nearer to A.D. 100. “We may safely date this scroll” to between A.D. 50 and 100, wrote Ada Yardeni, an expert on Hebrew paleography, in an article in the journal Textus. Dr. Tov said he was “inclined toward a first-century date, based on paleography.” The feat of recovering the text was made possible by software programs developed by W. Brent Seales, a computer scientist at the University of Kentucky. Inspired by the hope of reading the many charred and unopenable scrolls found at Herculaneum, near Pompeii in Italy, Dr. Seales has been working for the last 13 years on ways to read the text inside an ancient scroll. Methods like CT scans can pick out blobs of ink inside a charred scroll, but the jumble of letters is unreadable unless each letter can be assigned to the surface on which it is written. Dr. Seales realized that the writing surface of the scroll had first to be reconstructed and the letters then stuck back to it. He succeeded in 2009 in working out the physical structure of the ruffled layers of papyrus in a Herculaneum scroll. 
He has since developed a method, called virtual unwrapping, to model the surface of an ancient scroll in the form of a mesh of tiny triangles. Each triangle can be resized by the computer until the virtual surface makes the best fit to the internal structure of the scroll, as revealed by the scanning method. The blobs of ink are assigned to their right place on the structure, and the computer then unfolds the whole 3-D structure into a 2-D sheet. The suite of software programs, called Volume Cartography, will become open source when Dr. Seales’s current government grant ends, he said. The En-Gedi scroll was brought to Dr. Seales’s attention by Dr. Shor. A colleague, Sefi Porat, who had helped excavate the En-Gedi synagogue in 1970, was preparing a final publication of the findings. [Image: The scroll from En-Gedi rendered from the micro-CT scan. Credit: B. Seales] He asked Dr. Shor to scan the scroll and other artifacts as part of a project to create images of all Dead Sea scroll material, and showed her a box full of lumps of charcoal. “I said, ‘There is nothing we can do because our system isn’t geared toward these chunks,’ ” she said. But because she was submitting other objects for a high-resolution scan, she put one of the lumps in with other items. Dr. Shor had the lump scanned by a commercially available, X-ray based, micro-computed tomography machine, of the kind used for fine-resolution scanning of biological tissues. Knowing of Dr. Seales’s work, she sent him the scan and asked him to analyze it. Both were surprised when, after several refinements, an image emerged with clear and legible script. “We were amazed at the quality of the images — much of the text is as readable as that of unharmed Dead Sea scrolls,” said Michael Segal, a biblical scholar at the Hebrew University of Jerusalem, who helped analyze the text. 
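The unwrapping pipeline described above (fit a surface to the scan, pin the ink blobs to it, flatten the surface out) can be sketched in miniature. The following is a toy 2-D analogue in plain Python, not the actual Volume Cartography code: the rolled scroll is modeled as an Archimedean spiral cross-section, the "ink" is a density sampled along it, and unrolling amounts to mapping each sample to its cumulative arc length, i.e. its position on the flattened page.

```python
import math

def spiral_points(a=0.5, turns=3, n=2000):
    """Points along a rolled-up scroll cross-section, r = a * theta.
    Returns (x, y, theta) triples tracing the spiral outward."""
    pts = []
    for i in range(n):
        theta = 0.1 + (2 * math.pi * turns - 0.1) * i / (n - 1)
        r = a * theta
        pts.append((r * math.cos(theta), r * math.sin(theta), theta))
    return pts

def arc_length(pts):
    """Cumulative arc length along the curve: this is the flat coordinate
    each surface point gets after 'unrolling'."""
    s = [0.0]
    for (x0, y0, _), (x1, y1, _) in zip(pts, pts[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    return s

pts = spiral_points()
# Fake "ink blobs" detected on the surface by the scan.
ink = [1.0 if math.sin(5 * t) > 0.9 else 0.0 for _, _, t in pts]
s = arc_length(pts)

# Each (flat coordinate, ink density) pair is a pixel of the virtually
# unrolled page; layers that were wound on top of each other in the rolled
# scroll now lie at distinct positions along s.
flat_page = list(zip(s, ink))
```

The real method does this in 3-D, with a triangle mesh fitted to the micro-CT volume, but the core mapping is the same idea: assign each bit of ink to a point on the reconstructed writing surface, then give that point a coordinate on a flat sheet.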
The surviving content of the En-Gedi scroll, the first two chapters of Leviticus, is part of a listing of the various sacrifices that were performed in biblical times at the temple in Jerusalem. Although some text has previously been identified in ancient artifacts, “the En-Gedi manuscript represents the first severely damaged, ink-based scroll to be unrolled and identified noninvasively,” Dr. Seales and his colleagues write in the Thursday issue of Science. Richard Janko, a classical scholar at the University of Michigan, said the carbonized scrolls from Herculaneum were a small section of a much larger library at a grand villa probably owned by Julius Caesar’s father-in-law, Lucius Calpurnius Piso. Much of the villa is still unexcavated, and its library could contain long-lost works of Latin and Greek literature. Successful reading of even a single scroll from Herculaneum with Dr. Seales’s method would spur excavation of the rest of Piso’s villa, Dr. Janko said. Both Dr. Tov and Dr. Segal said that scholars might come to consider the En-Gedi manuscript as a Dead Sea scroll, especially if the early date indicated by paleography is confirmed. “It doesn’t tell us what was the original text, only that the Masoretic text is a very ancient text in all of its details,” Dr. Segal said. “And we now have evidence that this text was being used from a very early date by Jews in the land of Israel.”

Thursday, September 15, 2016

Neither intelligence nor education can stop you from forming prejudiced opinions – but an inquisitive attitude may help you make wiser judgements. By Tom Stafford, 8 September 2016

Ask a left-wing Brit what they believe about the safety of nuclear power, and you can guess their answer. Ask a right-wing American about the risks posed by climate change, and you can also make a better guess than if you didn’t know their political affiliation. Issues like these feel like they should be informed by science, not our political tribes, but sadly, that’s not what happens. Psychology has long shown that education and intelligence won’t stop your politics from shaping your broader worldview, even if those beliefs do not match the hard evidence. Instead, your ability to weigh up the facts may depend on a less well-recognised trait – curiosity.

The political lens

There is now a mountain of evidence to show that politics doesn’t just help predict people’s views on some scientific issues; it also affects how they interpret new information. This is why it is a mistake to think that you can somehow ‘correct’ people’s views on an issue by giving them more facts, since study after study has shown that people have a tendency to selectively reject facts that don’t fit with their existing views. This leads to the odd situation that people who are most extreme in their anti-science views – for example skeptics of the risks of climate change – are more scientifically informed than those who hold anti-science views but less strongly. But smarter people shouldn’t be susceptible to prejudice swaying their opinions, right? Wrong. 
Other research shows that people with the most education, highest mathematical abilities, and the strongest tendencies to be reflective about their beliefs are the most likely to resist information which should contradict their prejudices. This undermines the simplistic assumption that prejudices are the result of too much gut instinct and not enough deep thought. Rather, people who have the facility for deeper thought about an issue can use those cognitive powers to justify what they already believe and find reasons to dismiss apparently contrary evidence. It’s a messy picture, and at first looks like a depressing one for those who care about science and reason. A glimmer of hope can be found in new research from a collaborative team of philosophers, film-makers and psychologists led by Dan Kahan of Yale University. Kahan and his team were interested in politically biased information processing, but also in studying the audience for scientific documentaries and using this research to help film-makers. They developed two scales. The first measured a person’s scientific background, a fairly standard set of questions asking about knowledge of basic scientific facts and methods, as well as quantitative judgement and reasoning. The second scale was more innovative. The idea of this scale was to measure something related but independent – a person’s curiosity about scientific issues, not how much they already knew. This second scale was also innovative in how they measured scientific curiosity. As well as asking some questions, they also gave people choices about what material to read as part of a survey about reactions to news. If an individual chose to read about science stories rather than sports or politics, their science curiosity score was marked up. Armed with their scales, the team then set out to see how they predicted people’s opinions on public issues which should be informed by science. 
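The two-scale design described above can be sketched in a few lines. This is a hypothetical illustration of the scoring logic only; every item, topic name, and weight here is invented, not taken from Kahan's actual instruments. The key design point it shows is that the curiosity scale mixes self-report with behaviour: it is marked up when a respondent *chooses* a science story over sports or politics.

```python
def knowledge_score(correct_answers):
    """Standard scale: count of correct factual/reasoning items (0/1 each)."""
    return sum(correct_answers)

def curiosity_score(self_report_items, chosen_topics):
    """Curiosity scale: self-report items plus a behavioural component,
    incremented each time the respondent picks a science story to read
    in the news-reaction task."""
    score = sum(self_report_items)
    score += sum(1 for topic in chosen_topics if topic == "science")
    return score

# Two respondents with identical self-reports but different reading choices
# end up with different curiosity scores.
low = curiosity_score([1, 0, 1], ["sports", "politics"])
high = curiosity_score([1, 0, 1], ["science", "science"])
```

The design choice matters because stated interest is easy to inflate; what people actually choose to read is a harder-to-fake signal of curiosity.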
With the scientific knowledge scale the results were depressingly predictable. The left-wing participants – liberal Democrats – tended to judge issues such as global warming or fracking as significant risks to human health, safety or prosperity. The right-wing participants – conservative Republicans – were less likely to judge the issues as significant risks. What’s more, the liberals with more scientific background were most concerned about the risks, while the conservatives with more scientific background were least concerned. That’s right – higher levels of scientific education result in greater polarisation between the groups, not less. So much for scientific background, but scientific curiosity showed a different pattern. Differences between liberals and conservatives still remained – on average there was still a noticeable gap in their estimates of the risks – but their opinions were at least heading in the same direction. For fracking, for example, more scientific curiosity was associated with more concern, for both liberals and conservatives. The team confirmed this using an experiment which gave participants a choice of science stories, either in line with their existing beliefs, or surprising to them. Those participants who were high in scientific curiosity defied the predictions and selected stories which contradicted their existing beliefs – this held true whether they were liberal or conservative. And, in case you are wondering, the results hold for issues in which political liberalism is associated with the anti-science beliefs, such as attitudes to GMOs or vaccinations. So, curiosity might just save us from using science to confirm our identity as members of a political tribe. It also shows that to promote a greater understanding of public issues, it is as important for educators to try to convey their excitement about science and the pleasures of finding out stuff as it is to teach people some basic curriculum of facts.

Monday, September 5, 2016

The Tyranny of Simple Explanations

The history of science has been distorted by a longstanding conviction that correct theories about nature are always the most elegant ones.

Imagine you’re a scientist with a set of results that are equally well predicted by two different theories. Which theory do you choose? This, it’s often said, is just where you need a hypothetical tool fashioned by the 14th-century English Franciscan friar William of Ockham, one of the most important thinkers of the Middle Ages. Called Ockham’s razor (more commonly spelled Occam’s razor), it advises you to seek the more economical solution: In layman’s terms, the simplest explanation is usually the best one. Occam’s razor is often stated as an injunction not to make more assumptions than you absolutely need. What William actually wrote (in his Summa Logicae, 1323) is close enough, and has a pleasing economy of its own: “It is futile to do with more what can be done with fewer.” Isaac Newton more or less restated Ockham’s idea as the first rule of philosophical reasoning in his great work Principia Mathematica (1687): “We are to admit no more causes of natural things, than such as are both true and sufficient to explain their appearances.” In other words, keep your theories and hypotheses as simple as they can be while still accounting for the observed facts. This sounds like good sense: Why make things more complicated than they need be? You gain nothing by complicating an explanation without some corresponding increase in its explanatory power. That’s why most scientific theories are intentional simplifications: They ignore some effects not because they don’t happen, but because they’re thought to have a negligible effect on the outcome. Applied this way, simplicity is a practical virtue, allowing a clearer view of what’s most important in a phenomenon. But it is quite another thing to claim that the simpler of two rival theories is the more likely to be true. There’s absolutely no reason to believe that. 
But it’s what Francis Crick was driving at when he warned that Occam’s razor (which he equated with advocating “simplicity and elegance”) might not be well suited to biology, where things can get very messy. While it’s true that “simple, elegant” theories have sometimes turned out to be wrong (a classical example being Alfred Kempe’s flawed 1879 proof of the “four-color theorem” in mathematics), it’s also true that simpler but less accurate theories can be more useful than complicated ones for clarifying the bare bones of an explanation. There’s no easy equation between simplicity and truth, and Crick’s caution about Occam’s razor just perpetuates misconceptions about its meaning and value. The worst misuses, however, fixate on the idea that the razor can adjudicate between rival theories. I have found no single instance where it has served this purpose to settle a scientific debate. Worse still, the history of science is often distorted in attempts to argue that it has. Take the debate between the ancient geocentric view of the universe—in which the sun and planets move around a central Earth—and Nicolaus Copernicus’s heliocentric theory, with the Sun at the center and the Earth and other planets moving around it. In order to get the mistaken geocentric theory to work, ancient philosophers had to embellish circular planetary orbits with smaller circular motions called epicycles. These could account, for example, for the way the planets sometimes seem, from the perspective of the Earth, to be executing backwards loops along their path. It is often claimed that, by the 16th century, this Ptolemaic model of the universe had become so laden with these epicycles that it was on the point of falling apart. Then along came the Polish astronomer with his heliocentric universe, and no more epicycles were needed. The two theories explained the same astronomical observations, but Copernicus’s was simpler, and so Occam’s razor tells us to prefer it. 
This is wrong for many reasons. First, Copernicus didn’t do away with epicycles. Largely because planetary orbits are in fact elliptical, not circular, he still needed them (and other tinkering, such as a slightly off-center Sun) to make the scheme work. It isn’t even clear that he used fewer epicycles than the geocentric model did. In an introductory tract called the Commentariolus, published around 1514, he said he could explain the motions of the heavens with “just” 34 epicycles. Many later commentators took this to mean that the geocentric model must have needed many more than 34, but there’s no actual evidence for that. And the historian of astronomy Owen Gingerich has dismissed the common assumption that the Ptolemaic model was so epicycle-heavy that it was close to collapse. He argues that a relatively simple design was probably still in use in Copernicus’s time. So the reasons for preferring Copernican theory are not so clear. It certainly looked nicer: Ignoring the epicycles and other modifications, you could draw it as a pleasing system of concentric circles, as Copernicus did. But this didn’t make it simpler. In fact, some of the justifications Copernicus gives are more mystical than scientific: In his main work on the heliocentric theory, De revolutionibus orbium coelestium, he maintained that it was proper for the sun to sit at the centre “as if resting on a kingly throne,” governing the stars like a wise ruler. If Occam’s razor doesn’t favor Copernican theory over Ptolemy’s, what does it say for the cosmological model that replaced Copernicus’s: the elliptical planetary orbits of 17th-century German astronomer Johannes Kepler? By making the orbits ellipses, Kepler got rid of all those unnecessary epicycles. 
Yet his model wasn’t explaining the same data as Copernicus with a more economical theory; because Kepler had access to the improved astronomical observations of his mentor Tycho Brahe, his model gave a more accurate explanation. Kepler was no longer just trying to figure out the arrangement of the cosmos. He was also starting to seek a physical mechanism to explain it—the first step towards Newton’s law of gravity. The point here is that, as a tool for distinguishing between rival theories, Occam’s razor is only relevant if the two theories predict identical results but one is simpler than the other—which is to say, it makes fewer assumptions. This is a situation rarely if ever encountered in science. Much more often, theories are distinguished not by making fewer assumptions but different ones. It’s then not obvious how to weigh them up. From a 17th-century perspective, it’s not even clear that Kepler’s single ellipses are “simpler” than Copernican epicycles. Circular orbits seemed a more aesthetically pleasing and divine basis for the universe, so Kepler adopted ellipses only with hesitation. (Mindful of this, even Galileo refused to accept Kepler’s ellipses.) It’s been said also that Darwinian evolution, by allowing for a single origin of life from which all other organisms descended, was a simplification of what it replaced. But Darwin was not the first to propose evolution from a common ancestor (his grandfather Erasmus was one of those predecessors), and his theory had to assume a much longer history of the Earth than did those which supposed divine creation. Sure, a supernatural creator might seem like a pretty complex assumption today, but it wouldn’t have looked that way in the devout Victorian age. Even today, whether or not the “God hypothesis” simplifies matters remains contentious. 
The fact that our universe sports physical constants, such as the strength of fundamental forces, that seem oddly fine-tuned to enable life to exist, is one of the most profound puzzles in cosmology. An increasingly popular answer among cosmologists is to suggest that ours is just one of a vast, perhaps infinite, number of universes with different constants, and ours looks fine-tuned purely because we’re here to see it. There are theories that lend some credence to this view, but it rather lacks the economy demanded by Occam’s razor, and it is hardly surprising if some people decide that a single divine creation, with life as part of the plan, is more parsimonious. What’s more, scientific models that differ in their assumptions typically make slightly different predictions, too. It is these predictions, not criteria of “simplicity,” that are of greatest use for evaluating rival theories. The judgement may then depend on where you look: Different theories may have predictive strengths in different areas. Another popular example advanced in favor of Occam’s razor is the replacement of the phlogiston theory of chemistry—the idea that a substance called phlogiston was released when things burn in air—by the chemist Antoine Lavoisier’s theory of oxygen in the late 18th century. However, it’s far from obvious that, at the time, the notion that burning involves reaction with oxygen in air, rather than expulsion of phlogiston, was either simpler or more consistent with the observed “facts” about combustion. As the historian of science Hasok Chang has argued, by the standards of its times, “the old concept of phlogiston was no more mistaken and no less productive than Lavoisier’s concept of oxygen”. But as with so many scientific ideas that have fallen by the wayside, it has been deemed necessary not just to discard it but to vilify and ridicule it so as to paint a triumphant picture of progress from ignorance to enlightenment. 
I can think of only one instance in science where rival “theories” contend to explain exactly the same set of facts on the basis of easily enumerable and comparable assumptions. These are not “theories” in the usual sense, but interpretations: namely, interpretations of quantum mechanics, the theory generally needed to describe how objects behave at the scale of atoms and subatomic particles. Quantum mechanics works exceedingly well as a mathematical theory for predicting phenomena, but there is still no agreement on what it tells us about the fundamental fabric of reality. The theory predicts not what will happen in a quantum experiment or observation, but only what the probabilities of the various outcomes are. Yet in practice we see just a single outcome. How then do we get from calculating probabilities to anticipating definite, unique observations? One answer is that there is a process called “collapse of the wavefunction,” through which, from all the outcomes allowed by quantum theory, just one emerges at the size scales that humans can perceive. But it’s not at all clear how this putative collapse occurs. Some say it’s just a convenient fiction that describes the subjective updating of our knowledge when we make a measurement—rather like the way all 52 probabilities for the top card of a shuffled pack collapse to just one when we look. Others think wavefunction collapse might be a real physical process, a bit like radioactive decay, which can be triggered by the act of looking with human-scale instruments. Either way, there’s no prescription for it in quantum theory; it needs to be added “by hand.” In what looks like a more economical interpretation, first proposed by the physicist Hugh Everett III in 1957, there is no collapse at all. Instead, all the possible outcomes are realized—but they happen in different universes, which “split” when a measurement is made. 
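The setup the passage describes can be shown in a few lines of plain Python. This is a neutral toy illustration of the Born rule (the theory hands us amplitudes, the rule turns them into outcome probabilities, and a single run of the experiment yields just one definite outcome, like turning over the top card of a shuffled pack); it argues for no interpretation, and the amplitude values are made up.

```python
import random

random.seed(0)  # reproducible "measurement" for illustration

# A three-outcome quantum state, given as (made-up) complex amplitudes.
amplitudes = [complex(3, 4), complex(1, -2), complex(2, 0)]
norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
state = [a / norm for a in amplitudes]      # normalized state vector

# Born rule: the probability of outcome i is |c_i|^2.
probs = [abs(c) ** 2 for c in state]

# One measurement: the spread of probabilities gives way to a single
# definite result. Whether that is "collapse", mere updating of knowledge,
# or world-splitting is exactly what the interpretations dispute.
outcome = random.choices(range(len(probs)), weights=probs, k=1)[0]
```

Note that the code is silent on the mechanism: it samples an outcome, which is all any interpretation must ultimately reproduce.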
This is the Many Worlds Interpretation (MWI) of quantum mechanics. We only see one outcome, because we ourselves split too, and each copy can only perceive events in one world. It’s a testament to scientists’ confusion about Occam’s razor that it has been invoked both to defend and to attack the MWI. Some consider this ceaseless, bewildering proliferation of universes to be the antithesis of William of Ockham’s principle of economy. “As far as economy of thought is concerned … there never was anything in the history of thought so bluntly contrary to Ockham’s rule than Everett’s many worlds,” the quantum theorist Roland Omnès writes in The Interpretation of Quantum Mechanics. Others who favor the MWI wave off such criticisms by saying that Occam’s razor was never a binding criterion anyway. And still other advocates, like Sean Carroll, a cosmologist at the California Institute of Technology, point out that Occam’s razor is meant only to apply to the assumptions of a theory, not the predictions. Because the Many Worlds Interpretation accounts for all the observations without the added assumption of collapse of the wavefunction, says Carroll, the MWI is preferable—according to Occam’s razor—to the alternatives. But this is all just special pleading. Occam’s razor was never meant for paring nature down to some beautiful, parsimonious core of truth. Because science is so difficult and messy, the allure of a philosophical tool for clearing a path or pruning the thickets is obvious. In their readiness to find spurious applications of Occam’s razor in the history of science, or to enlist, dismiss, or reshape the razor at will to shore up their preferences, scientists reveal their seduction by this vision. But they should resist it. The value of keeping assumptions to a minimum is cognitive, not ontological: It helps you to think. A theory is not “better” if it is simpler—but it might well be more useful, and that counts for much more. 
ABOUT THE AUTHOR • PHILIP BALL is a writer based in London. His work has appeared in Nature, The New York Times, and Chemistry World. He is the author of The Water Kingdom: A Secret History of China and Critical Mass: How One Thing Leads to Another.