Thursday, February 23, 2017

Global warming: thoughtful observations on reporting
Richard Muller, Prof. of Physics, UC Berkeley, author of "Physics for Future Presidents"

Q: What are some widely cited studies in the news that are false? Whenever I see the latest headline-grabbing article citing a certain study as evidence that doing something will make you richer or give you a higher risk of cancer, I am always skeptical about whether they've really taken the steps to find a cause and effect, or whether they are only looking at correlation. I'm looking for good examples of studies that people still talk about that have been clearly disproven, and how.

A: That 97% of all climate scientists accept that climate change is real, large, and a threat to the future of humanity. That 97% basically concur with the vast majority of claims made by Vice President Al Gore in his Nobel Peace Prize winning film, An Inconvenient Truth.

The question asked in typical surveys is neither of those. It is this: "Do you believe that humans are affecting climate?" My answer would be yes. Humans are responsible for about a 1 degree C rise in the average temperature in the last 100 years. So I would be included as one of the 97% who believe.

Yet the observed changes that are scientifically established, in my vast survey of the science, are confined to temperature rise and the resulting small (4-inch) rise in sea level. (The huge "sea level rise" seen in Florida is actually subsidence of the land mass, and is not related to global warming.) There is no significant change in the rate of storms, or of violent storms, including hurricanes and tornadoes. The temperature variability is not increasing. There is no scientifically significant increase in floods or droughts. Even the widely reported warming of Alaska ("the canary in the mine") doesn't match the pattern of carbon dioxide increase; it may have an explanation in terms of changes in the northern Pacific and Atlantic currents.
Moreover, the standard climate models have done a very poor job of predicting the temperature rise in Antarctica, so we must be cautious about the danger of confirmation bias.

My friend Will Happer believes that humans do affect the climate, particularly in cities where concrete and energy use cause what is called the "urban heat island effect". So he would be included in the 97% who believe that humans affect climate, even though he is usually included among the more intense skeptics of the IPCC. He also feels that humans cause a small amount of global warming (he isn't convinced it is as large as 1 degree), but he does not think it is heading towards a disaster; he has concluded that the increase in carbon dioxide is good for food production, and has helped mitigate global hunger. Yet he would be included in the 97%.

The problem is not with the survey, which asked a very general question. The problem is that many writers (and scientists!) look at that number and mischaracterize it. The 97% number is typically interpreted to mean that 97% accept the conclusions presented in An Inconvenient Truth by former Vice President Al Gore. That's certainly not true; even many scientists who are deeply concerned by the small global warming (such as me) reject over 70% of the claims made by Mr. Gore in that movie (as did a judge in the UK; see the following link: Gore climate film's nine 'errors').

The pollsters aren't to blame. Well, some of them are; they too can do a good poll and then misrepresent what it means. The real problem is that many people who fear global warming (include me) feel that it is necessary to exaggerate the meaning of the polls in order to get action from the public (don't include me).

There is another way to misrepresent the results of the polls. Yes, 97% of those polled believe that there is human caused climate change. How did they reach that decision? Was it based on a careful reading of the IPCC report?
Was it based on their knowledge of the potential systematic uncertainties inherent in the data? Or was it based on their fear that opponents of action are anti-science, so we scientists have to get together and support each other? There is a real danger in people with Ph.D.s joining a consensus that they haven't vetted professionally.

I like to ask scientists who "believe" in global warming what they think of the data. Do they believe hurricanes are increasing? Almost never do I get the answer "Yes, I looked at that, and they are." Of course they don't say that, because if they did I would show them the actual data! Do they say, "I've looked at the temperature record, and I agree that the variability is going up"? No. Sometimes they will say, "There was a paper by Jim Hansen that showed the variability was increasing." To which I reply, "I've written to Jim Hansen about that paper, and he agrees with me that it shows no such thing. He even expressed surprise that his paper has been so misinterpreted."

A really good question would be: "Have you studied climate change enough that you would put your scientific credentials on the line that most of what is said in An Inconvenient Truth is based on accurate scientific results?" My guess is that a large majority of climate scientists would answer no to that question, and that the true percentage of scientists who support the statement I made in the opening paragraph of this comment would be under 30%. That is an unscientific guesstimate, based on my experience in asking many scientists about the claims of Al Gore.
Gone: Kahneman on priming

Reconstruction of a Train Wreck: How Priming Research Went off the Rails
February 2, 2017 | Kahneman, Priming, R-Index, Statistical Power, Thinking Fast and Slow
Authors: Ulrich Schimmack, Moritz Heene, and Kamini Kesavan

Abstract: We computed the R-Index for studies cited in Chapter 4 of Kahneman's book "Thinking Fast and Slow." This chapter focuses on priming studies, starting with John Bargh's study that led to Kahneman's open email. The results are eye-opening and jaw-dropping. The chapter cites 12 articles, and 11 of the 12 articles have an R-Index below 50. The combined analysis of 31 studies reported in the 12 articles shows 100% significant results with average (median) observed power of 57% and an inflation rate of 43%. The R-Index is 14. This result confirms Kahneman's prediction that priming research is a train wreck and that readers of his book "Thinking Fast and Slow" should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.

Introduction

In 2011, Nobel Laureate Daniel Kahneman published a popular book, "Thinking Fast and Slow", about important findings in social psychology. In the same year, questions about the trustworthiness of social psychology were raised. A Dutch social psychologist had fabricated data. Eventually over 50 of his articles would be retracted. Another social psychologist published results that appeared to demonstrate the ability to foresee random future events (Bem, 2011). Few researchers believed these results, and statistical analysis suggested that the results were not trustworthy (Francis, 2012; Schimmack, 2012). Psychologists started to openly question the credibility of published results. In the beginning of 2012, Doyen and colleagues published a failure to replicate a prominent study by John Bargh that was featured in Daniel Kahneman's book.
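The abstract's three numbers fit together by simple arithmetic. A minimal sketch, assuming the standard R-Index definition from Schimmack's other writing (inflation = success rate minus median observed power; R-Index = median observed power minus inflation), which the excerpt above does not spell out:

```python
# Back-of-the-envelope reconstruction of the abstract's R-Index numbers.
# Assumption: R-Index = median observed power - inflation, where
# inflation = success rate - median observed power (Schimmack's definition;
# not stated explicitly in the excerpt itself).

success_rate = 1.00           # 100% of the 31 reported studies were significant
median_observed_power = 0.57  # median observed power quoted in the abstract

inflation = success_rate - median_observed_power  # 0.43
r_index = median_observed_power - inflation       # 0.14

print(f"inflation rate: {inflation:.0%}")  # prints "inflation rate: 43%"
print(f"R-Index: {r_index:.0%}")           # prints "R-Index: 14%"
```

Under this definition the three quoted figures (57% power, 43% inflation, R-Index of 14) are mutually consistent.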
A few months later, Daniel Kahneman distanced himself from Bargh's research in an open email addressed to John Bargh (Young, 2012): "As all of you know, of course, questions have been raised about the robustness of priming results…. your field is now the poster child for doubts about the integrity of psychological research… people have now attached a question mark to the field, and it is your responsibility to remove it… all I have personally at stake is that I recently wrote a book that emphasizes priming research as a new approach to the study of associative memory…Count me as a general believer… My reason for writing this letter is that I see a train wreck looming."

Five years later, Kahneman's concerns have been largely confirmed. Major studies in social priming research have failed to replicate, and the replicability of results in social psychology is estimated to be only 25% (OSC, 2015). Looking back, it is difficult to understand the uncritical acceptance of social priming as a fact. In "Thinking Fast and Slow" Kahneman wrote "disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true." Yet Kahneman could have seen the train wreck coming. In 1971, he co-authored an article about scientists' "exaggerated confidence in the validity of conclusions based on small samples" (Tversky & Kahneman, 1971, p. 105). Yet many of the studies described in Kahneman's book had small samples. For example, Bargh's priming study used only 30 undergraduate students to demonstrate the effect.

From Daniel Kahneman:

I accept the basic conclusions of this blog. To be clear, I do so (1) without expressing an opinion about the statistical techniques it employed and (2) without stating an opinion about the validity and replicability of the individual studies I cited. What the blog gets absolutely right is that I placed too much faith in underpowered studies.
As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message. My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published. I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own. What impressed me was the unanimity and coherence of the results reported by many laboratories. I concluded that priming effects are easy for skilled experimenters to induce, and that they are robust. However, I now understand that my reasoning was flawed and that I should have known better. Unanimity of underpowered studies provides compelling evidence for the existence of a severe file-drawer problem (and/or p-hacking). 
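The suspicious unanimity Kahneman describes can be quantified with a quick back-of-the-envelope check. A minimal sketch, using the figures reported in the blog's analysis (31 studies, median observed power 57%, all significant) and simplifying by treating that median as each study's true power:

```python
# Quick check of the "unanimity is suspicious" argument. If each of the 31
# studies independently had a 57% chance of reaching significance, the
# probability that every one of them comes out significant is vanishingly
# small -- so an all-significant published record points to a file drawer
# and/or p-hacking. (Treating the median observed power as each study's
# true power is a simplification, not a claim from the text.)

n_studies = 31
power = 0.57

p_all_significant = power ** n_studies
print(f"P(all {n_studies} significant) = {p_all_significant:.1e}")  # ~2.7e-08
```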
The argument is inescapable: Studies that are underpowered for the detection of plausible effects must occasionally return non-significant results even when the research hypothesis is true – the absence of these results is evidence that something is amiss in the published record. Furthermore, the existence of a substantial file-drawer effect undermines the two main tools that psychologists use to accumulate evidence for broad hypotheses: meta-analysis and conceptual replication. Clearly, the experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it. This was simply an error: I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through. When questions were later raised about the robustness of priming results I hoped that the authors of this research would rally to bolster their case by stronger evidence, but this did not happen.

I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested. I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert's phrase. I would be happy to see each of them replicated in a large sample. The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.

Dr. R (February 14, 2017):

Dear Daniel Kahneman, Thank you for your response to my blog. Science relies on trust, and we all knew that non-significant results were not published, but we had no idea how weak the published results were. Nobody expected a train wreck of this magnitude. Hindsight (like my bias analysis of old studies) is 20/20. The real challenge is how the field and individuals respond to the evidence of a major crisis. I hope more senior psychologists will follow your example and work towards improving our science. Although we have fewer answers today than we thought we had five years ago, we still have many important questions that deserve a scientific answer.

Jeff Bowers:

Dear Daniel Kahneman, there is another reason to be skeptical of many of the social priming studies. You wrote: "I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions." However, there is an important constraint on subliminal priming that needs to be taken into account: such effects are very short-lived, on the order of seconds. So any claim that a masked prime affects behavior for an extended period of time seems at odds with these more basic findings. Perhaps social priming is more powerful than basic cognitive findings, but it does raise questions. Here is a link to an old paper showing that masked *repetition* priming is short-lived. Presumably semantic effects will be even more transient.

Hal Pashler (February 15, 2017):

Good point, Jeff. One might ask if this is something about repetition priming, but associative semantic priming is also fleeting.
In our JEP:G paper failing to replicate money priming we noted “For example, Becker, Moscovitch, Behrmann, and Joordens (1997) found that lexical decision priming effects disappeared if the prime and target were separated by more than 15 seconds, and similar findings were reported by Meyer, Schvaneveldt, and Ruddy (1972). In brief, classic priming effects are small and transient even if the prime and measure are strongly associated (e.g., NURSE-DOCTOR), whereas money priming effects are [purportedly] large and relatively long-lasting even when the prime and measure are seemingly unrelated (e.g., a sentence related to money and the desire to be alone).”

Tuesday, February 21, 2017

Here's How You Buy Your Way Onto The New York Times Bestsellers List
Jeff Bercovici, Forbes Staff. I cover technology with an emphasis on social and digital media. Opinions expressed by Forbes Contributors are their own.

An endorsement from Oprah Winfrey. A film deal from Steven Spielberg. A debut at the top of The New York Times bestsellers list. These are the things every author craves most, and while the first two require the favor of a benevolent God, the third can be had by anyone with the ability to write a check -- a pretty big one.

ResultSource, a San Diego-based marketing consultancy, specializes in getting books onto bestseller lists, according to The Wall Street Journal. For clients willing to pay enough, it will even guarantee a No. 1 spot. It does this by taking bulk sales and breaking them up into more organic-looking individual purchases, defeating safeguards that are supposed to make it impossible to "buy" bestseller status. And it's not cheap.

Soren Kaplan, a business consultant and speaker, hired ResultSource to promote his book "Leapfrogging." Responding to the WSJ article on his website, Kaplan breaks out the economics of making the list: "With a $27.95 list price, I was told that the cost of each book would total about $23.50 after various retail discounts and including $3.99 for tax, handling and shipping. To ensure a spot on The Wall Street Journal's bestseller list, I needed to obtain commitments from my clients for a minimum of 3,000 books at about $23.50, a total of about $70,500. I would need to multiply these numbers by a factor of about three to hit The New York Times list." So it would've cost more than $211,000, and that's before ResultSource's fee, which is typically more than $20,000.
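Kaplan's arithmetic is easy to verify. A quick sketch reproducing the figures quoted above (the 3x multiplier for the Times list is his rough estimate, not a published formula):

```python
# Reproducing Kaplan's back-of-the-envelope numbers from the article:
# ~$23.50 per copy after discounts plus tax/handling, a 3,000-copy minimum
# for the WSJ list, and (per Kaplan's estimate) roughly triple that volume
# for the NYT list.

cost_per_copy = 23.50
wsj_minimum_copies = 3000
nyt_multiplier = 3

wsj_cost = wsj_minimum_copies * cost_per_copy  # 70500.0
nyt_cost = wsj_cost * nyt_multiplier           # 211500.0, before ResultSource's fee

print(f"WSJ list: ${wsj_cost:,.0f}")  # prints "WSJ list: $70,500"
print(f"NYT list: ${nyt_cost:,.0f}")  # prints "NYT list: $211,500"
```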
Kaplan settled for making the Journal's list, reaching the pre-sale figure of 3,000 by securing commitments from corporate clients, who agreed to buy copies as part of his speaking fees, and by buying copies for himself to resell at public appearances. Kaplan expresses significant reservations about taking part in what is essentially a laundering operation aimed at deceiving the book-buying public into believing a title is more in demand than it is. "It's no wonder few people in the industry want to talk about bestseller campaigns," he writes. "Put bluntly, they allow people with enough money, contacts, and know-how to buy their way onto bestseller lists."

Yet ResultSource's methods aren't exactly secret. The company's website features an endorsement from Zappos CEO Tony Hsieh and a breakdown of the campaign it mounted behind his book "Delivering Happiness," which included a Groupon offering of 1,600 copies. Via a spokeswoman, Hsieh confirmed that he hired the firm and detailed the services it provided. (You can read Hsieh's full statement at the bottom of this post.) Still, Amazon disapproves strongly enough of ResultSource's methods that it told the WSJ it will no longer do business with the company.

What about the publishers of the various bestseller lists -- particularly the all-important New York Times list? The Times's methodology (which you can find at the bottom of this page) samples sales from a diverse range of retail outlets, a measure specifically intended to weed out books whose sales surge is a product of artificial demand. Books that benefited from bulk sales are supposed to have a dagger icon next to them to denote that fact. Yet when Hsieh's book debuted on the list in 2009, it had no such symbol. I called and emailed the Times with several questions, including whether it was aware before today of ResultSource's activities.
Here's the reply I got from a spokeswoman: "The New York Times comprehensively tracks and tabulates the weekly unit sales of all titles reported by book retailers as their general interest bestsellers. We will not comment beyond our methodology on the other questions." ResultSource CEO Kevin Small did not reply to a voicemail.

Here's Tony Hsieh's full message: ResultSource booked us for various speaking events in many of our cities during our 2010 book tour, where we went to 23 cities over 3.5 months on the Delivering Happiness bus. For part one of our trip, see: At many of those events, people paid to come watch me speak and receive an autographed copy of my book. ResultSource managed the speaking, book ordering, and distribution of the books for us during the tour. We're excited that the book has continued to do well over the years since the launch, and are also excited that the paperback version of the book will be coming out next month! Since the book launch, "Delivering Happiness" has spun off into a company, and now has its own apparel line as part of its mission to help spread the Delivering Happiness message:

Kenneth Rapoza (4 years ago): The bestseller list is nothing but a scam. It's like the author comments on book jackets: "Jeff's thriller makes Stephen King look like the Mickey Mouse Club circa 1950" — by some famous author who is your BFF and read a whole two chapters in your book, OR is repped by the same agent/publisher. I've interviewed "best sellers" and they and their agents told me that they had no idea how they got there, because total sales were 20k. Admit it, we all thought that a best seller sold millions of copies, or at least a few hundred thousand. 20k? You and I have written blog posts that have had more readers than that!

djvanderhoeven (4 years ago): At least Amazon's process is much more transparently flawed (and I do mean that as a good thing).
Don't know if you're very familiar with the web comics community, but one of the more popular ones, Dinosaur Comics, inspired an anthology of short stories written by readers and compiled by various web comic artists. It was called "Machine of Death". Anyway, the authors used their considerable web clout to tell all of their readers to make sure to buy the book on Amazon on a certain day. Guess what? A few thousand buys quickly knocked it up to a #1 best seller, and as a bonus, pissed off Glenn Beck (whose book debuted below it on the Amazon list) at the same time. For what it's worth, I'm one of those readers/orderers, and it really was a very good book.

gbooker (4 years ago): Yup, the same trick can be used to make a top-selling record album.

Marianne Canter (4 years ago): All bestseller lists can be bought. Whether I'm in a bookstore or buying online, I avoid the Top Ten racks and search for selections recommended by friends or co-workers.

Katherine Sears (4 years ago): And yet, it is the first thing people ask about my business (Booktrope Publishing): "how many bestsellers have you had?" My answer: "that depends". Do you mean via Amazon, Barnes and Noble/Nook, The New York Times? The latter, in its policies, states it reserves the right to ignore books it deems unworthy (and, many acknowledge, prefers companies that give it advertising dollars). Amazon makes no pretense of telling anyone how its lists are calculated at all -- maybe it is fair, who knows? Barnes and Noble just says nothing at all. But regardless of it all, readers do not care how a book made "the list". They just want a trusted recommendation, and they don't want to make an incremental effort to get it.
Well, until there is an impartial source for the casual reader, those of us in the business will continue to do our best to work within the current flawed system, to bring great books to market and to the public's attention.

Elizabeth Vennekens-Kelly (4 years ago): An interesting article but a sad commentary on the industry. As an author whose book was published by a boutique publisher, I know my chances are slim to none when it comes to making a bestseller list. If you want to help the 'little guy/gal' and/or have an interest in cultural differences, check out my book Subtle Differences, Big Faux Pas, available from Amazon.

Peter de Jager (4 years ago): Ah yes, ethics -- a case of "out of sight, out of mind". Welcome to the new world of marketing. I'll pass.

ignacio sanabria (4 years ago): Meanwhile, we the authors have to do the selling of our books by ourselves. Perhaps a book tour would be most effective.

Teresa de Grosbois (4 years ago): It's sad that there are outfits like this that advocate cheating the system. It really doesn't serve the author or the industry in the long run. A foundation built on mud won't stand. As someone who works with authors, I'd recommend running fast from anyone who would tell you to buy your way onto a list!

Monday, December 5, 2016

Insight The 'right' to be spared from guilt By George Will Published Dec. 5, 2016 The word "inappropriate" is increasingly used inappropriately. It is useful to describe departures from good manners and other social norms, such as wearing white after Labor Day and using the salad fork with the entree. But the adjective has become a splatter of verbal fudge, a weasel word falsely suggesting measured seriousness. Its misty imprecision does not disguise but advertises the user's moral obtuseness. A French court has demonstrated how "inappropriate" can be an all-purpose device of intellectual evasion and moral cowardice. The court said it is inappropriate to do something that might disturb people who killed their unborn babies for reasons that were, shall we say, inappropriate. Prenatal genetic testing enables pregnant women to be apprised of a variety of problems with their unborn babies, including Down syndrome. It is a congenital condition resulting from a chromosomal defect that causes varying degrees of mental disability and some physical abnormalities, such as low muscle tone, small stature, flatness of the back of the head and an upward slant to the eyes. Within living memory, Down syndrome people were called Mongoloids. Now they are included in the category called "special needs" people. What they most need is nothing special. It is for people to understand their aptitudes, and to therefore quit killing them in utero. Down syndrome, although not common, is among the most common congenital anomalies at 49.7 per 100,000 births. In approximately 90 percent of instances when prenatal genetic testing reveals Down syndrome, the baby is aborted. Cleft lips or palates, which occur in 72.6 per 100,000 births, also can be diagnosed in utero and sometimes are the reason a baby is aborted. 
In 2014, in conjunction with World Down Syndrome Day (March 21), the Global Down Syndrome Foundation prepared a two-minute video titled "Dear Future Mom" to assuage the anxieties of pregnant women who have learned that they are carrying a Down syndrome baby. More than 7 million people have seen the video online in which one such woman says, "I'm scared: What kind of life will my child have?" Down syndrome children from many nations tell the woman that her child will hug, speak, go to school, tell you he loves you and "can be happy, just like I am - and you'll be happy, too." The French state is not happy about this. The court has ruled that the video is - wait for it - "inappropriate" for French television. The court upheld a ruling in which the French Broadcasting Council had banned the video as a commercial. The court said the video's depiction of happy Down syndrome children was "likely to disturb the conscience of women who had lawfully made different personal life choices." So, what happens on campuses does not stay on campuses. There, in many nations, sensitivity bureaucracies have been enforcing the relatively new entitlement to be shielded from whatever might disturb, even inappropriate jokes. And now this rapidly metastasizing right has come to this: A video that accurately communicates a truthful proposition - that Down syndrome people can be happy and give happiness - should be suppressed because some people might become ambivalent, or morally queasy, about having chosen to extinguish such lives because . . . This is why the video giving facts about Down syndrome people is so subversive of the flaccid consensus among those who say aborting a baby is of no more moral significance than removing a tumor from a stomach. Pictures persuade. Today's improved prenatal sonograms make graphic the fact that the moving fingers and beating heart are not mere "fetal material." They are a baby. 
Toymaker Fisher-Price, children's apparel manufacturer OshKosh, McDonald's and Target have featured Down syndrome children in ads that the French court would probably ban from television. The court has said, in effect, that the lives of Down syndrome people - and by inescapable implication, the lives of many other disabled people - matter less than the serenity of people who have acted on one or more of three vicious principles: That the lives of the disabled are not worth living. Or that the lives of the disabled are of negligible value next to the desire of parents to have a child who has no special, meaning inconvenient, needs. Or that government should suppress the voices of Down syndrome children in order to guarantee other people's right not to be disturbed by reminders that they have made lethal choices on the basis of one or both of the first two inappropriate principles.

Monday, November 14, 2016

Prince or Pauper? Researchers Find Functional Pseudogene in Fruit Fly
Evolution News & Views, November 14, 2016

Suppose we introduced you to a friend and said he works as a pseudoscientist. You would be immediately suspicious of his white lab coat and apparent command of scientific language in subsequent conversation. After all, he just pretends to be a scientist. He's fake. He's false. He is bogus, sham, phony, mock, ersatz, quasi-, spurious, deceptive, misleading, assumed, contrived, affected, insincere, and all the other negative synonyms we associate with the prefix pseudo.

But then suppose we corrected the description and said that, actually, he is a "pseudo-pseudoscientist." The double negative suddenly opens the possibility that he really is a scientist. He's faking his fakery, contriving his contrivance, mocking insincerity for some reason. Maybe he's a psychologist studying the effects of perceived pretentiousness, using you as his lab rat. Maybe he's a real MD playing a doctor on a fictional TV show, leading us to believe he is "just an actor." Think of the guards in Mark Twain's The Prince and the Pauper who quickly escort the shabbily dressed prince off the palace grounds without noticing the royal seal in his pocket.

Have scientists too quickly dismissed pseudogenes as broken genes, worthless transcripts of DNA without function? Could at least some of them be "pseudo-pseudogenes"? A surprising paper in Nature actually uses that term: "Olfactory receptor pseudo-pseudogenes." Researchers in Switzerland found a case in a species of fruit fly that defies the pseudogene paradigm. Pseudogenes are often suspected of being broken genes when a premature termination codon (PTC) is found in the DNA sequence. Obviously, such a gene could not be translated into a functional protein, right? Translation would stop before the messenger RNA is complete. Often, that is the case. What good is that?
These scientists found something interesting about an olfactory receptor gene in Drosophila sechellia, "an insect endemic to the Seychelles that feeds almost exclusively on the ripe fruit of Morinda citrifolia." They looked at its Ir75a locus, a gene that encodes an olfactory receptor for acetic acid in its more famous cousin D. melanogaster. Finding a PTC in this species' Ir75a gene, they initially thought it was a broken gene -- a pseudogene.

The abstract begins with the usual evolutionary rhetoric about pseudogenes: "Pseudogenes are generally considered to be non-functional DNA sequences that arise through nonsense or frame-shift mutations of protein-coding genes. Although certain pseudogene-derived RNAs have regulatory roles, and some pseudogene fragments are translated, no clear functions for pseudogene-derived proteins are known. Olfactory receptor families contain many pseudogenes, which reflect low selection pressures on loci no longer relevant to the fitness of a species." [Emphasis added.]

That's their setup for the surprise announcement. This pseudogene might just be a "pseudo-pseudogene"! It might be a prince masquerading as a pauper. What started them on their paradigm-breaking find was noticing that this apparent pseudogene is fixed in the population, suggesting it has a function. Taking a closer look, they found that the translation machinery is able to "read through" the premature stop codon, the PTC. How? They're not sure, but they found something else interesting: the read-through operation works efficiently only in neurons, not other types of cells. That opens up a whole new way of looking at pseudogenes: some of them might be tissue-specific regulators.

It is not yet clear how the D. sechellia Ir75a PTC is read through. It cannot be because of insertion of the alternative amino acid selenocysteine (which is incorporated at UGA). Moreover, no suppressor tRNAs are known in D.
melanogaster and ribosomal frame-shifting is also unlikely because there is no change in the reading frame after the PTC. We suggest that read-through is due to PTC recognition by a near-cognate tRNA that allows insertion of an amino acid instead of translation termination. Although the trans-acting factors regulating read-through are unclear, the neuronal specificity of this process is reminiscent of RNA editing and micro-exon splicing, in which key responsible regulatory proteins are neuronally enriched. We therefore speculate that tissue-specific expression differences in tRNA populations underlie neuron-specific read-through. We might be tempted to dismiss this as a rare case of evolutionary tinkering. The gene broke, but natural selection found a way to tinker with it and get it to work. Perhaps. But further experimentation with D. melanogaster suggests that "pseudogenization" has a logical function: it works to tune odor sensitivity. The part of the gene downstream from the PTC apparently affects the type of receptor produced. What's more, this kind of regulation might not be rare. Read-through is detected only in neurons and is independent of the type of termination codon, but depends on the sequence downstream of the PTC. Furthermore, although the intact Drosophila melanogaster Ir75a orthologue detects acetic acid -- a chemical cue important for locating fermenting food found only at trace levels in Morinda fruit -- D. sechellia Ir75a has evolved distinct odour-tuning properties through amino-acid changes in its ligand-binding domain. We identify functional PTC-containing loci within different olfactory receptor repertoires and species, suggesting that such 'pseudo-pseudogenes' could represent a widespread phenomenon. Experiments showed that the Ir75a 'pseudo-pseudogene' actually yields a functional odor receptor, but not for acetic acid as in D. melanogaster. 
Instead, it makes a receptor tuned for similar acidic odorants unique to food sources available on the Seychelles. The tissue-specific read-through capabilities of this gene provide the fly with a way to detect food sources it needs in its environment. Perhaps nothing beyond chance mutation or neutral drift is needed to explain this. On the other hand, the research team may have stumbled onto an important function for pseudogenes. Our efforts to understand the molecular basis of the loss of olfactory sensitivity to acetic acid in D. sechellia led us to discover a notable and, to our knowledge, unprecedented evolutionary trajectory of a presumed pseudogene. Efficient read-through of a PTC in D. sechellia Ir75a permits production of a full-length receptor protein, in which reduction in acetic acid sensitivity and gain of responses to other acids is due to lineage-specific amino acid substitutions in the LBD pocket. The PTC does not noticeably influence the activity of D. sechellia Ir75a, suggesting that it is selectively neutral from an evolutionary standpoint. We propose that it became fixed through genetic drift, given D. sechellia's persistent low effective population size. They can call it an "evolutionary trajectory" if they wish. Another way of looking at this is as a design feature. The premature stop codon, or PTC, may be more elegant than a stop sign. It may be a switch, telling the translation machinery to pay attention to the downstream code if -- and only if -- translation is taking place inside a neuronal cell. In non-neuronal cells, the PTC might indeed say "stop," delivering the transcript to the trash. In neurons, though, environmental cues may trigger pre-existing routines to fine-tune the sensitivity to odorants available in food sources. A design perspective could accelerate discoveries along this line.
We've seen the tendency to dismiss things as evolutionary castoffs when their functions were not understood, only to find higher levels of organization at work. Introns are spliced out of messenger RNAs; they must be junk. Methyl groups interfere with transcription; they must be mistakes. Retrotransposons must be parasites. Pseudogenes must be broken genes. Maybe not. If scientists had expected design, maybe they would have hit upon today's paradigms about epigenetics, alternative splicing and gene regulation sooner. Intelligent design theory doesn't require everything to be designed. It does, however, prevent a "premature stop": the hasty dismissal of things as not designed.
Sensitivity training at Harvard - in case you wondered about the impact of the liberal agenda of overcoming vicious social attitudes.... Harvard’s Rank and File By Maia Silber, On Campus, Nov. 14, 2016 CAMBRIDGE, Mass. — Two men sit in the dining hall, leaning over trays filled with stacks of pancakes and glasses of blue Gatorade. “She’s a solid 10. I’m banging her.” “Hey! I called her.” “We can flip a coin.” When The Harvard Crimson reported that Harvard’s men’s soccer team circulated a sexually explicit “scouting report” evaluating female recruits, my friends and I were appalled, but not surprised. Nor were we surprised when the paper reported that the men’s cross-country team produced a similar document. We’d heard it before — in the dining hall, on the street, in the back of lecture halls — Harvard men rating and degrading Harvard women. After all, before he created Facebook in his Harvard dorm, Mark Zuckerberg made “facemash” — a site where Harvard students could deem their peers hot, or not. It may seem shocking that students at one of America’s most elite universities, in one of its most progressive states, would behave so crudely. But in fact those publicly shared scouting reports show Harvard students engaging in an activity at which we excel: rating and categorizing one another. Like most adolescents, we’re eager to define our identities, and determine our place on campus and in the world. In high school, many of us were known as “the kid who got into Harvard.” Here, we can all claim that title, so we sort ourselves into groups even more exclusive than the roughly 5 percent of applicants our school admits. By the time my family dropped me off in Harvard Square, I had already submitted applications for limited-enrollment freshman seminars and pre-orientation programs for students interested in the arts and social justice.
At convocation, as Harvard’s president delivered a speech about the importance of forming a community, I worried that everyone had already found their friends for the next four years. I soon found that the students who competed for academic honors and leadership positions during the day staged different contests at night. On Friday and Saturday evenings, young women dressed in bandage skirts and heels line up outside the clubhouses on Mt. Auburn Street. Shivering in the cold, they wait for the nod of a bouncer. On Sunday mornings, young men brag about their conquests. Now, as a senior, I stand behind the tables. While my friends and I don’t evaluate the appearance of female “compers” or candidates for leadership positions, we toss out superficial judgments about our fellow students all too easily: “He really dropped the ball on that project.” “She never smiles.” “She just doesn’t seem committed.” We gossip under the guise of meritocracy. Harvard’s competitiveness does not cause men to degrade women. Men — even, apparently, presidents — need no excuse to do that. Yet when we regularly evaluate one another’s fitness to join our organizations, attend our parties and become our friends, we give misogyny a vocabulary. We give it a place on our campus, and in our culture. It’s not just Harvard, either. We are the generation of the Buzzfeed listicle, the Yelp rating, the Tinder swipe and the Facebook like. Surely, the Paleolithic man ranked women on the walls of his cave, but the 21st-century man makes his lists for all the world to see. Each entry in the soccer team’s 2012 scouting report included, in addition to a nickname and a numerical value, a paragraphs-long assessment and a photograph culled from social media. The cross-country team designed spreadsheets, some of which allowed individual men to add comments about the women’s physical appearance. This “locker room talk” was not idle chatter, but a project that required time, effort and a certain kind of skill. 
We’ve honed that skill for years. Maia Silber is a senior at Harvard University.

Sunday, October 9, 2016

Dark Matter – An Update October 9, 2016 In Genesis and Genes, I devoted some space to the notion of Dark Matter. I recently read an article in Nature about developments in this area, and I’d like to update my readers about this fascinating subject. What follows is an excerpt from Genesis and Genes (for the purpose of this post, I have omitted the endnotes that appear in the book). I will then comment on the article in Nature. &&& Nobody – including astronomers and cosmologists – knows what the universe is made of. Visible matter – the kind of stuff that people and planets are made of – is outweighed by a factor of 6 or 7 by invisible, cold dark matter, which we cannot detect except through its gravitational pull. Once dark energy is added to the ledger, something like 95% of the universe is made up of stuff we cannot observe directly. Here is how one distinguished astronomer and author, James Kaler, puts it: Our Galaxy, its stars revolving around the center under the influence of their combined gravity, is spinning too fast for what we see. Galaxies in clusters orbit around the clusters’ centers under the influence of their mutual gravities, but again, they move faster than expected. There must be something out there with enough of a gravitational hold to do the job, to speed things up, but it is completely unseen. Dark matter… We have no idea what constitutes it. Rather, there are many ideas, but none that can be proven. A popular history of astronomy weighs in with this: Over 90 per cent of our Universe is invisible – filled with particles of mysterious dark matter. And astronomers have no idea what it is. Theoretical physicists working on the kinds of particles produced in the Big Bang say that dark matter cannot be anything ordinary – it has to be something very exotic. I don’t wish to labour the point, but I must.
The public is subjected to absolute statements about our knowledge of the universe and its history so frequently that the average person is simply inured to the fact that there remain basic questions about our cosmic abode. To wit, we do not know what it is made of. Consider this. The most ambitious project in astronomy in the early 21st century is the SKA, or Square Kilometre Array, a network of radio telescopes that is gargantuan in every respect: complexity, size and cost. An article in TIME magazine about the instrument begins by asking the project manager what it is that astronomers wish to discover with this machine: For someone whose job title could read Man Most Likely to Blow Your Mind, Bernie Fanaroff looks pretty conventional… Consider the fact, says Fanaroff, that we have no idea what 96% of the universe is made of. Cosmologists have known for some time that only 4% of the universe is stuff like dust, gas and basic elements. Dark matter, says Fanaroff, accounts for 23% to 30%; dark energy makes up the rest. (Dark, Fanaroff explains, is the scientific term for “nobody knows what it is.”) That’s not an exaggeration – nobody knows anything significant about what makes up 96% of the universe. And this is acknowledged even by those who pretend to be able to answer ultimate questions in naturalistic terms. Lawrence Krauss is a world-famous physicist and an ardent atheist. His latest book, A Universe from Nothing: Why There Is Something Rather than Nothing (Free Press, 2012) was reviewed in the January 2012 issue of Nature, the world’s most respected science journal. Nature appointed Caleb Scharf, an astrobiologist at Columbia University, to aggrandise Krauss’s ideas about the universe popping out of absolutely nothing, but even he could not hide the gigantic lacuna in Krauss’s thesis: He notes that a number of vital empirical discoveries are, ominously, missing from our cosmic model. Dark matter is one. 
Despite decades of astrophysical evidence for its presence, and plausible options for its origins, physicists still cannot say much about it. We don’t know what this major mass component of the Universe is, which is a bit of a predicament. We even have difficulty accounting for every speck of normal matter in our local Universe. It is crucial to appreciate that dark matter is not something that was initially discovered in a laboratory, and whose existence was then used to explain some phenomenon. It is also not an entity whose existence was implied by some cosmological theory, and then applied to the problem of energetic stars. Dark matter is entirely hypothetical. Its existence was postulated to explain how the stars in spiral galaxies can orbit at such breakneck speeds without being flung off into the void. In other words, when astronomers tallied up all the mass in the universe, they came face to face with a phenomenon which they could not explain using known physical laws: those laws would indicate that stars in spiral galaxies should indeed be flying off in all directions. Since they aren’t, there must be something out there to prevent them from doing so. What that something is remains anybody’s guess, as Professor Kaler pointed out above. Many astronomers believe that there is matter out there; matter which for whatever reason, we cannot see. This is why they refer to this hypothetical entity as dark matter. They appear to have considerable fun in speculating on the nature of this hypothetical matter: is it made up of MACHOs (Massive Compact Halo Objects)? Or is it WIMPs (Weakly Interacting Massive Particles)? But since the whole exercise is built on speculation as to what could possibly be acting as a brake on those wayward stars, other scientists do not believe that dark matter even exists. And there is nothing to contradict their view. All you have to do is propose a plausible mechanism to restrain energetic stars from flying off into the cosmic sunset. 
[END OF QUOTATION FROM GENESIS AND GENES.] &&& A recent article in Nature, written by Jeff Hecht and cleverly entitled Dark Matter: What’s the Matter? provides a welcome update in this regard.[1] Hecht begins by introducing the subject: Most of the Universe is missing. The motion of the stars and galaxies allows astronomers to weigh it, and when they do, they see a major discrepancy in cosmological accounting. For every gram of ordinary matter that emits and absorbs light, the Universe contains around five grams of matter that responds to gravity, but is invisible to light. Physicists call this stuff dark matter, and as the search to identify it is now in its fourth decade, things are starting to get a little desperate. A little later, Hecht discusses a new attempt to crack the problem, one that has both supporters and detractors within the scientific community. Hecht is not optimistic about the latest approach: It looks unlikely that primordial black holes are the mysterious dark matter. And as time passes without a confirmed detection, even the most heavily backed theories are beginning to look less likely. A series of experiments have systematically searched for, and failed to find, the theoretical candidates for dark matter — one by one, the possibilities are being reduced. A raft of experiments designed to finally detect, or refute, the remaining candidates are now underway, each with vastly different approaches to the problem. As more options are crossed off the list, physicists may have to explore new ideas and reconsider alternative theories… — or accept that nature may have hidden dark matter just out of our reach. When Genesis and Genes was written, MACHOs – Massive Compact Halo Objects – were still considered candidates for Dark Matter. No longer: Decades of research have narrowed down the possibilities. Early favourites included not only black holes, but also other massive compact halo objects (MACHOs) made of ordinary matter.
A series of studies, however, gradually ruled out most of the possibilities… But in the view of theoretical physicist John Ellis of King’s College London, “MACHOs are dead.” The other candidate for Dark Matter I mentioned in Genesis and Genes was WIMPs – Weakly Interacting Massive Particles. WIMPs still hold some promise for resolving the Dark Matter conundrum: Although MACHOs have fallen by the wayside, another candidate has hung around. A decade ago, physicists were largely convinced that dark matter was made up of weakly interacting massive particles (WIMPs)… WIMPs remain the leading candidate for dark matter. “Supersymmetry is beautiful mathematically,” says physicist Oliver Buchmueller of Imperial College London. “With just one weakly interacting particle, we can explain all the dark matter we see in the Universe.” Indeed, so well does the lightest of these hypothetical particles fit the bill for dark matter that it has been called “the WIMP miracle”, says physicist Leslie Rosenberg of the University of Washington in Seattle. But only in theory: But supersymmetrical particles have proved maddeningly elusive. Physicists at CERN, Europe’s particle-physics laboratory, are searching for WIMPs with the Large Hadron Collider (LHC) by smashing protons or atomic nuclei together to recreate the conditions of the early Universe… The longer the puzzle goes unsolved, the more twitchy the scientific community will become. “People are a little nervous,” says Rosenberg. Hecht goes on to discuss the difficult – and rather exotic – ways in which scientists use particle colliders to try to detect recalcitrant particles: Researchers won’t see dark matter directly. Instead, they look for signs that energy and momentum in collisions have gone missing when they should have been conserved. Ellis compares searching for evidence of dark matter to watching billiard balls roll away after the cue ball hits them on the break shot.
If the balls on one side of the group were invisible, and only the balls rolling away on the opposite side could be seen, the path and nature of the unseen balls can still be deduced, he says. Physicists are using the paths of the particles they can see to identify the paths of the dark matter that they can’t. So far, nothing has come up. Dark Matter is a fascinating scientific problem. For informed consumers of science, a number of issues are important in this context: 1. We don’t know what 95% of the universe is made of! That’s astonishing. Members of the public should be aware that when peremptory remarks about the universe are made by scientists, or in magazine articles, or in documentaries, they hide enormous assumptions about how much we really know. As I explain in Genesis and Genes, Dark Matter (and Dark Energy) may one day turn out to be made of exotic particles; then again, it is quite possible that the scientific picture of our universe is seriously wrong, a possibility freely acknowledged by astronomers such as James Kaler and physicists like Mordechai Milgrom. Don’t be duped by those who insist that matter and energy form the fundamental substrate of our universe. This view originates in an ideology – scientism – and not in evidence from Nature itself. The only reasonable response to knowing how little we know about the universe is humility. 2. It is worth bearing in mind the similar situation that pertained in biology before the Junk DNA paradigm collapsed (see my previous post, Francis Collins Does Teshuva). In that context, many biologists dismissed about 95% of the human genome as junk, because they did not know what it did. This turned out to be a spectacular failure, delaying by several decades the onset of the age of epigenetics. In my view, physicists and astronomers are generally more open to the possibility of paradigm shifts than are biologists.
They are also more likely to admit, in public, that major lacunae remain in our knowledge of the physical world. 3. All the methods that have been devised to detect Dark Matter rely on complicated statistical analyses to infer the presence of Dark Matter particles. This is not a simple matter of observation, and it lends itself to different interpretations. Here, too, the history of science would indicate that healthy scepticism be maintained when certain results are proclaimed. REFERENCES: [1] Jeff Hecht, Dark Matter: What’s the Matter?, Nature. Retrieved 7th October 2016.